The quest for digital sovereignty

We can't read through all the legal notices we are asked to agree to, but we have to find a way to control our data.

by Lucia Conti

29 January 2021

Denis Jaromil Roio is a digital social innovation expert. His creations are recommended by the Free Software Foundation and redistributed worldwide. Since 2000, Jaromil has been CTO and co-founder of the Dyne.org Think & Do Tank, a software house with expertise in social and technical innovation that gathers the contributions of a growing number of developers who value social responsibility above profit.

Jaromil received the Vilém Flusser Award at Transmediale (Berlin, 2009) while leading the R&D department of the Netherlands Media Art Institute (Montevideo/TBA) – a position he occupied for six years. He is included in the “Purpose Economy” list of top 100 social entrepreneurs in the EU (2014) and the “40 under 40” European young leaders’ program (2012). In 2018 he was awarded a Ph.D. by the University of Plymouth with a dissertation titled “Algorithmic Sovereignty.”

What is Algorithmic Sovereignty?

To explain what algorithmic sovereignty is, I will take each of the two words in turn. Algorithms are the automated rules that evaluate variable situations for us; they often help us make decisions, and often they make decisions about us. They can decide how much credit we can get from a bank or how much our insurance should cost, all based on our behavioral track record and the data we produce. Algorithms can also decide what content we are going to see in our newsfeed. They are the invisible brain that helps us make decisions, or helps others make decisions that affect us. An algorithm is a way to process information into solutions very quickly.

Sovereignty is a condition in which the citizens of a society can decide on its rules through a democratic process and are therefore governed by a representative entity. So, algorithmic sovereignty describes a condition in which the participants in an algorithm, meaning those who use it, can decide how the algorithm will work.

In a nutshell, algorithmic sovereignty means that the participants in an algorithm can decide on the algorithm itself.

The lack of control over personal data inevitably generates an imbalance of power and concrete risks for all users. What kind of data are we talking about, and what are the risks?

An algorithm can process many kinds of personal data. It can be the heart rate we feed into fitness gadgets, the list of transactions on our bank account, our movements, the time we spend on our mobile phones, the people we contact most often, the profiles we look at most often on social networks, or the people appearing in photos with us. By collating and analyzing this information, one can find out a lot about us and about other people.

Nowadays, social network algorithms can predict how our relationships will go, whether we are interested in meeting new partners, or whether we are about to change our habits. All this data is useful, and advertisers, marketers, and businesses in general avail themselves of it regularly. For instance, the travel industry relies heavily on predictions about where people would like to go when deciding whether to open new routes or enhance the capacity of its vehicles. Then we have the credit industry: it can perform risk assessments, learn about our health through data from fitness apps, and learn about our commitment to our family through the personal information we share.

Personal data opens up access to very sensitive and intimate aspects of our lives.

What should users do to protect their data, and what role should legislators and institutions play to ensure it stays protected?

First of all, users — whom I like to see as participants, because we, as developers, are as much in the game as the people using our algorithms — should be aware of how their data circulates, how it is used and analyzed, and what that analysis is based on. Unfortunately, it is not easy nowadays to read through all the legal notices we are asked to agree to, every time they are updated, just to use a service as essential as a search engine or a map navigator. So we often accept these conditions without even reading them, and this is dangerous.

State legislators have made it mandatory for us to accept these conditions, yet nobody is doing much work to ensure that they are readily comprehensible and easy to process. The legal frameworks in place are diverse. Some operate on a national level, but the most important one is the GDPR, the General Data Protection Regulation, drafted and passed by the European Union. The GDPR is an important step forward in ensuring the legal protection of people’s rights concerning their data and how it is used. It also laid the foundation for more effective data sovereignty because it requires clear information on where the data is stored. This is essential because the location of the data storage determines the legal framework that applies.

Could you expand on this?

What I mean is that if the data is stored in the US, the US legal framework applies, even to European citizens who live in Europe, if they use US-based applications.

This is the main gap in legislation: what we feed into the digital dimension can travel at marginal cost and at light speed and be stored anywhere in the world. Whether it is Europe, Asia, the US, Africa, or Australia, the place where the data resides determines the legislation that governs it, and we should always pay attention to this aspect.

When, for instance, we let a Chinese company process our fitness data, we should be aware of what laws China applies to data processing and whether there is the same respect for privacy that we deem necessary for our democracy in Europe. It is a complex situation, and we should worry first and foremost about how to keep track of this complexity.

We live a significant part of our lives on social media platforms. Nevertheless, we have no decision-making power, and tech giants decide all the rules. Do you think that such an imbalance will produce a reaction anytime soon?

We are facing an imbalance here, and I believe that people understand it more and more. It has been a slow process, but scandals such as Cambridge Analytica have been critical in helping people and governments understand how wrong this imbalance is. Social media companies now provide us with news, insights, and opinions far more than television ever did.

These companies sell advertising space without performing due diligence. As a result, a platform like Facebook could sell ad space to foreign powers during the American elections, and the same occurred in elections in Kenya, Myanmar, and many other countries.

The full list of elections tainted by Cambridge Analytica is so long that it is hard to remember.

They sold ad space to foreign political powers who had no intention of preserving peace and prosperity in the countries they were targeting. On the contrary, they caused social unrest and, in some cases, war in those countries.

Would you explain how this occurred, concretely?

A concrete example of this dynamic of selling people’s attention to foreign powers through social media platforms is Russia’s interest in influencing the outcome of the American elections four years ago. Several fake news stories about Hillary Clinton were spread on these platforms, and some people were exposed to the false claim that Hillary Clinton had cut funding for cancer research. The people who received this content had cancer cases in their families and were targeted precisely because that emotional response could influence their vote. In short, a foreign power that wants to change the outcome of an election can do so by discrediting a candidate it doesn’t want to succeed, through social media campaigns based on fake news and carefully selected target groups.

This has been researched at length by journalists like Carole Cadwalladr and documented by whistle-blowers who decided to speak the truth after working for those companies. It is something we have to look into if we want to update our notion of deontology in journalism, which we have lost track of in the name of ad sales and platform monetization.

Many companies manage our attention and our newsfeeds yet are not subject to any sort of journalistic ethics, only pure marketing. Meanwhile, journalists are not being paid enough for their work and their investigations.