Life inside the gilded cage

How mass surveillance and the manipulation of data are blurring the lines between sci-fi dystopias and reality.

by MAIZE

Big data | 02 April 2019

In Jaume Balagueró’s 2002 horror film Darkness, the last scene sees a girl and her little brother getting into a car and fleeing a house of satanic creatures after much blood has been shed. It seems that a soothing happy ending has arrived to relieve us after a couple of hours of chills, but the very last frames show the car entering a tunnel. The viewer knows what will happen when the darkness embraces them. When it comes to technology, we have cause to think that we may share something with those two unlucky characters: we are blindly and confidently driving towards our doom, unaware of the dangers that lie in wait.

We were told that technology would set us free. Technology-charged geopolitical events such as the 2009 Iranian Green Wave or, two years later, the Arab Spring made us believe that this was certainly the case. We failed to pay attention to the increasingly powerful satellites multiplying in the sky, to the cameras and sensors appearing around every corner and gathering our data, or even to the local stores that began to spy on our facial expressions. Instead, we willingly offered more by transforming our smartphones into the control room of our very existence.

Paradoxically, we live in a time when data is the new oil and we are net producers, generating wells’ worth as we walk and breathe, yet we are not getting richer. So the question is: who is doing what with this data, and why?

Precogs are Coming 

The answer is a no-brainer to even those who are unfamiliar with George Orwell’s 1984: surveillance and repression by police and law enforcement agencies. A combination of eye-watering amounts of data, AI and facial recognition software is leading us into a world sinisterly close to that depicted in the prescient sci-fi movie Minority Report. Even more disturbingly, all of this is happening without us being informed.

To make us more aware, several NGOs have started campaigning. The British charity Privacy International (PI) is one of them and has been monitoring the use of several technologies and practices by police forces. In particular, the charity has examined the growing use of body cams, facial recognition, hacking, IMSI catchers (IMSI stands for International Mobile Subscriber Identity), mobile phone extraction, predictive policing and social media intelligence.

PI’s Policy Officer, Antonella Napolitano, spoke to us just as her group was set to launch its Neighbourhood Watched campaign. “We believe these technologies should not be bought or used without proper public consultation and the approval of locally elected representatives. But quite often, we learn about them the very moment we start seeing the consequences of their use”, Napolitano stated.

Among the consequences she refers to is not a drop in crime rates but a rise in social disparities, as she explains: “The increased use of surveillance technology by local police results in an inclination to take more law enforcement action against communities of color or targeted groups such as low-income wage earners. This has created an environment in which the members of these communities are treated like prospective criminals. Just take predictive policing, for instance: these programs are used to estimate when and where crimes will be committed, but the algorithms are fed with historical policing data, which happen to be incomplete and biased; risk assessment on the probability of committing a crime is also based on what you earn or where you live. This leads to a ‘feedback loop’, the consequence of which is that minority communities are constantly patrolled and over-represented in crime statistics”.
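The feedback loop Napolitano describes can be illustrated with a toy simulation (a minimal sketch with made-up numbers, not any real predictive-policing system): two districts share an identical underlying crime rate, but one starts with more recorded incidents simply because it has historically been patrolled more. Since patrols follow the recorded data, and crimes are only recorded where patrols go, the initial bias sustains and reinforces itself.

```python
import random

random.seed(0)

# Hypothetical illustration of the feedback loop described above.
# Both districts have the SAME underlying crime rate; district A merely
# starts with more recorded incidents because it has historically been
# patrolled more.
TRUE_CRIME_RATE = 0.1           # identical in both districts
recorded = {"A": 60, "B": 40}   # biased historical policing data
TOTAL_PATROLS = 100

for year in range(1, 6):
    # "Predictive" allocation: send most patrols where past data says crime is.
    hotspot = max(recorded, key=recorded.get)
    patrols = {d: (70 if d == hotspot else 30) for d in recorded}

    for district, n_patrols in patrols.items():
        # Crimes are only recorded where officers are present to record them,
        # so more patrols mean more recorded incidents, regardless of the
        # (identical) underlying rate.
        recorded[district] += sum(
            random.random() < TRUE_CRIME_RATE for _ in range(n_patrols)
        )

    share = recorded["A"] / sum(recorded.values())
    print(f"Year {year}: district A holds {share:.0%} of recorded crime")
```

In this sketch the historical data, not the underlying reality, drives the allocation, so district A stays the designated hotspot year after year and its share of recorded crime drifts towards the share of patrols it receives, even though both districts are equally “criminal”.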

Surprisingly enough, these experiments are not confined to the more technologically advanced regions of the world. “It’s interesting to notice that, to test their products, tech companies use real-life environments in countries like India and Pakistan, where they know they won’t be bothered about standards to be respected or guarantees to be given to protect people’s rights”, Napolitano says.

What makes these practices so appealing is that they take up less time and resources than a proper investigation would require, thus projecting an idea of efficiency from which agencies benefit. The problem is that, unlike old investigative tools such as wiretapping, which only violates privacy, the new ones allow for data manipulation, which is an entirely different thing.

“Where evidence is obtained this way, it may interfere with your right to a fair trial. The Italian authorities have been employing hacking powers, without explicit statutory authorization or clearly defined safeguards against abuse, for years. Only recently has hacking been regulated so that it can be used only for a very limited set of serious crimes”, Napolitano states.

Unfortunately, these new technologies tend to constantly outpace the law, so their use in investigations falls into a grey area. It is not illegal, but it is not quite legal either, and so it goes on until a judge comes along and says that the party’s over.

Know Yourself (It’s what they want) 

Police and law enforcement agencies are not the only actors who know how to use our data: companies crave it even more. In fact, data collection has become the dominant business model of our time. The most obvious reason is that data is necessary for developing tailor-made advertisements and services, but this is not the only purpose it serves, as Marek Tuszynski of Tactical Tech, an organization which helps people and institutions to use technology in the safest possible way, explained.

“This is just the first way to bring in money, but there is a whole lot more, because data is persistent and can be used for other purposes. It has become a commodity, and can be bought by different actors and then used for different purposes”. Last year, Tactical Tech’s group of tech experts conducted a study on dating apps and sites: a major means of gathering data. “We discovered that with just $150 we could buy a very detailed data set of a million individuals, including photographs, information about gender preferences and so forth. More shockingly, we found that many of these companies then sell their data on the market”.

The data gathered this way is not just what users share to find a soul mate, but also other sets of data contained within their devices. Analyzed and worked through by AI algorithms, it becomes a treasure trove of information, the key to deciphering a person’s personality and decoding how they think. Once this process is understood, it can be used the other way round: to induce feelings, shape perceptions and alter needs. Here data goes from being an object of manipulation to being a means of manipulation.

This hidden manipulative power has generated a very insidious market around elections. In fact, the Cambridge Analytica scandal could be just the tip of the iceberg. “Actors taking part in electoral campaigns can influence voters either to vote for something, to vote against something or not to vote at all. Suppressing voters is as important as convincing them to act, and far easier. Different private companies and consultants have mushroomed, promoting themselves as experts in using personal data, in the profiling of voters and as being able to influence them. They can do this by placing information of varying quality, misinformation or fake information through advertising. We’ve analyzed data-driven social campaigns during elections held in around 15 countries, and we’ve seen about 350 different private entities collecting or selling personal data that would have helped parties involved in democratic processes to influence voters”, Tuszynski went on.

Some figures do support this view. According to The Electoral Commission, the UK’s independent body overseeing elections and regulating finance, between 2011 and 2017 parties’ spending on digital advertising rose from 0.3% to a whopping 42.8% of total expenditure. This has enormous implications: should these techniques be honed to the point of creating a surgically precise means of miscommunication, it would deprive the very idea of democracy of any significance by making the word ‘accountability’ meaningless.

The Age of Surveillance 

This is not quite the Orwellian world envisioned, however, because of one missing detail: this process is not being led by the public sector but by the private one, meaning that it will not, as in 1984, be public institutions that profit from it the most. This alarm is already being raised by Amy Webb, futurist, NYU professor and author of The Big Nine: How the Tech Titans & Their Thinking Machines Could Warp Humanity. In an interview recently given to the MIT Technology Review, she expressed her concern: “Rather than create a grand strategy for AI or our long-term futures, the federal government has stripped funding from science and tech research. So the money must come from the private sector. But investors also expect some kind of return. That’s a problem… Instead, we now have countless examples of bad decisions that somebody in the G MAFIA (Google, Amazon, IBM, Facebook, Apple and Microsoft) made, probably because they were working fast. We’re starting to see the negative effects of the tension between doing research that’s in the best interest of humanity and making investors happy”.

The split between private and public interest is nothing new, but in the tech world it became particularly meaningful after the dot-com bubble burst in 2000. According to Harvard Business School professor Shoshana Zuboff, it all began with Google: “It was a fledgling firm, and its investors were threatening to bail—in spite of its superior search product. That’s when Google turned to previously discarded and ignored data logs and repurposed them as a ‘behavioral surplus.’ Instead of being used for product improvement, this behavioral data was directed toward an entirely new goal: predicting user behavior”.

Zuboff, who authored the seminal 1988 book In the Age of the Smart Machine: The Future of Work and Power, has recently published the 700-page-long The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power, in which she describes the coming of a new age of capitalism “invented in the context of targeted advertising”. If the powers behind this “surveillance economy” are left unchecked and unrestrained, it could lead to a world of citizens turned into consumers, trapped in a vicious cycle of induced needs leading to induced, never-ending consumption. Something which is already happening, according to Jonathan Crary, author of 24/7: Late Capitalism and the Ends of Sleep, in which he describes how we are becoming 24/7 consumers.

For a long time, we thought that the only pitfalls of exponential technology centred on privacy and employment. It seems there is much more to it than that: the future we are sailing toward resembles Aldous Huxley’s Brave New World dystopia mixed with the Orwellian all-knowing state. It would seem that technology is building us a gilded cage; yet a cage it still remains.