The digitalism doctrine

A beneficial social structure or a way to create greater divisiveness?

by Epi Ludvik


In the post-industrial Information Age, the volume of available data puts it beyond human capability to fully analyze and harness the knowledge it represents. Two routes can resolve the dilemma. The first is to involve more people than before in business, social, and governmental issues by opening the search for solutions to wider groups of problem-solvers, for example through prize challenges that incentivize involvement. The second is to employ artificial intelligence technology to assess data faster and in higher volumes than is humanly possible.

The use of digital technology and AI has already improved and enriched our lives in truly transformative ways. Automated image assessment identifies more cancer tumors than teams of humans. Robots can perform surgery with greater accuracy, accelerating patient recovery and freeing up hospital beds. Robots can also perform tasks in environments lethal for humans, such as space, the deep sea, and extremes of heat and cold. They can also take on the most boring, repetitive work.

Online crowdfunding enables startup entrepreneurs to secure financial support when early-stage lack of experience or an empty order book would deter traditional lenders or investors. The process creates goods, services, and job opportunities that would not have otherwise existed. Vast numbers of gig economy workers are able to source freelance work, alleviate personal financial hardship, gain valuable experience, make contacts, and perhaps build previously unthinkable careers. A digital generation of new financial service providers enables better control of cash flow and credit, a faster and more accurate flow of payments, cross-border if required, and at a lower cost to both payers and payees.

The benefits of digital technology and the developments it enables are clear: they help us unlock our potential to do what we humans do best, innovate. However, other uses of the technology, and their impacts on crowds – which means “us” – are causing me, and many other crowd sector commentators, deep concern.

Faults in the system

The lack of personal data security is subject to growing scrutiny and criticism. This operates at two levels: the first is security breaches by hackers, and the second is the ‘legitimate’ use of data by the organizations that harvest it.

Security breaches

Smart Cities
Organizations are the usual targets of ransomware threats, and media coverage of these incidents breeds mistrust in the minds of many individuals, which slows the wider progress of data collection and data sharing. Smart cities, for example, use technology to monitor traffic flows and better manage the use of roads and public transport, reducing congestion and air pollution. But installing digital technology is not the end of the job. More than 40 US municipalities were victims of cyberattacks in 2019. Baltimore was a notable casualty of a hack that shut down most of the city’s servers and some government applications. The city declined to pay a Bitcoin ransom worth about $80,000, and the hack ended up costing it an estimated $18m in direct costs and revenue shortfalls.
The Center for Long-Term Cybersecurity at the University of California, Berkeley found that the greatest smart city security risks were posed by emergency alerts, street video surveillance, and smart traffic signals. City authorities need robust security plans, with an appropriate budget, to protect the integrity of their systems and the welfare and confidence of their citizens.

Medical Records
Digitized medical records would reduce administrative time for health service providers and could improve treatment. Mass, anonymized data collection would lead to improved norms for the identification, prevention, and treatment of a multitude of conditions and illnesses. Yet citizen resistance is strong and commonplace. Many people believe the health service providers who would use the data, and the government officials responsible for creating the system, aren’t up to the job of keeping it secure. They fear that personal details could be obtained by commercial users as diverse as insurance and pharmaceutical companies, dating agencies, and social media platforms. This makes me very concerned too, though not about whether digitalism is good or bad but about the capability of government agencies and health service providers to safeguard the data the technology relies on.
There are endless examples of why the public perceives such risks. In 2013, the UK Government closed a nine-year project to create a centralized IT system for the country’s National Health Service that had already cost £2.7bn and would have gone on to cost £11bn in total. In 2017, ransomware attacks on regional NHS IT systems halted operations, froze access to patient data, crippled ambulance services, and cost an estimated £92m in disruption and subsequent system upgrades. The public does not easily forget such incidents. However, I still believe markets always have a short memory.

Data usage by corporate owners
By ‘legitimate’ use of data I chiefly mean the leading US technology giants – Facebook, Amazon, Apple, Netflix, and Google (the FAANGs) – along with Microsoft. They were pioneers, and billions of people have formed their internet habits around using them to connect, socialize, shop, and search for information or entertainment. We all agreed to whatever terms and conditions were flashed up before us to use the services free of charge. We thought tools such as “people who bought what you just did also like these items…” were clever and helpful.
Then, as Shoshana Zuboff detailed in her 2019 book “The Age of Surveillance Capitalism”, Google pioneered a way to use what it termed “digital exhaust” – the leftover behavioral data from our searches – to help advertisers get us to click through to their online messages. After 2001, advertisers were told that where their advertising appeared would be scheduled by a “magic, black box of tricks.” Advertisers must have found it worked. From 2001, when Google introduced the service, to 2004, when it went public through an IPO, its net advertising revenue grew by a staggering 3,590%. So, is digitalism good or bad? It’s certainly very good for Google and its shareholders.
Off the back of guiding advertisers, predicting future “click behavior” was universally adopted and has become a template for predicting all behavior. Our private and personal records of what we buy, watch, listen to, and read, what we say on social media and in emails, plus where we go, are consolidated into identity profiles and sometimes offered to a marketplace. This is all part of the massive growth in the amount of information in the world – and it’s about us, the true contributors in the grand scheme of things!
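To make the mechanics concrete, here is a minimal, purely illustrative sketch in Python of how consolidated behavioral records might feed a click-prediction model. The feature names, figures, and choice of model are my own assumptions for illustration; they do not describe Google’s actual systems.

```python
# Illustrative sketch only: a toy click-through predictor trained on
# hypothetical behavioral "exhaust". Feature names, data, and the model
# choice are invented for illustration, not taken from any real platform.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row is one hypothetical profile consolidated from behavioral records:
# [pages viewed last week, past purchases, minutes of video watched, prior ad clicks]
profiles = np.array([
    [12, 0,  30, 0],
    [45, 3, 120, 2],
    [ 5, 0,  10, 0],
    [60, 5, 200, 4],
    [20, 1,  60, 1],
    [80, 7, 240, 6],
])
clicked_ad = np.array([0, 1, 0, 1, 0, 1])  # did the user click a targeted ad?

model = LogisticRegression().fit(profiles, clicked_ad)

# Scoring a new user: the predicted probability is the kind of signal an ad
# platform could use to decide which ad to serve and how much to bid.
new_user = np.array([[35, 2, 90, 1]])
print(f"Predicted click probability: {model.predict_proba(new_user)[0, 1]:.2f}")
```

The point of the sketch is simply that once enough behavioral exhaust has been collected and consolidated, predicting what we will click on, and by extension what we will do, becomes a routine modelling exercise.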

We have no idea who has data about us or what they do with it, other than try to find the right stimuli to make us do what “they” want, whether that is making a purchase, holding an opinion, receiving news feeds with a particular perspective, or even casting our vote a certain way. In liberal societies people earnestly defend their freedom of will, but does it really still exist if what any of us believes is shaped not necessarily by fake news, but often by news delivered with a slanted tone of voice and context? This amounts to a massive programming of our minds and a manipulation of information for the benefit of a few. Is this what we opted in to?
The FAANG corporations, and many others that would like to be like them, are already so rich and powerful that they are beyond national government control. They avoid paying taxes; they trade nation states off against each other in tenders for where they will build their offices, employ thousands of staff, and boost local economies; and they may be quizzed by US Senate committees whose members admit they have no clue how these businesses operate. To put this in a simple context, take Apple as an example: almost every Apple device is manufactured in China, the company parks its money in Ireland, and it pays little to no tax in the US. Yet the new US administration willingly pays for carbon emissions around the world with taxpayers’ money without tracing the root cause of the issue. Now, I’m not saying Apple or China are the cause; these are just examples of what we are seeing here, and there are many more. In the meantime the FAANG founders and shareholders accumulate unbelievable wealth: how much has it corrupted their original agendas and led them to set themselves apart?

Government use of digitalism to control rather than create

China has the most carefully watched population in the world, and it runs a social credit system: a set of databases and initiatives that monitor and assess the behavior and trustworthiness of individuals, companies, and government entities. Each is given a social credit score, with rewards for those who have a high rating and punishments for those with low scores. At a personal level, known acquaintances are encouraged to cold-shoulder people who commit even such minor misdemeanors as jaywalking.
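As a thought experiment, the scoring mechanism can be reduced to a few lines of code. What follows is a minimal sketch assuming invented events, weights, and thresholds; it does not describe the actual Chinese system, but it shows how little machinery is needed to turn recorded behavior into rewards and punishments.

```python
# Purely illustrative sketch of a rule-based behavior score.
# All events, weights, and thresholds are invented assumptions;
# they do not describe any real social credit system.

BASE_SCORE = 1000

EVENT_WEIGHTS = {
    "paid_bills_on_time":  +10,
    "volunteer_work":      +25,
    "jaywalking":          -20,
    "missed_loan_payment": -50,
}

def score_citizen(events):
    """Add the weight of every recorded event to the base score."""
    return BASE_SCORE + sum(EVENT_WEIGHTS.get(e, 0) for e in events)

def outcome(score):
    """Map a score to a reward or punishment tier (thresholds invented)."""
    if score >= 1050:
        return "reward: fast-track services, easier credit"
    if score >= 950:
        return "neutral: no change"
    return "punishment: travel and credit restrictions"

events = ["jaywalking", "jaywalking", "missed_loan_payment"]
s = score_citizen(events)
print(s, "->", outcome(s))  # 910 -> punishment tier
```

Even a toy version like this makes the asymmetry obvious: the person being scored never sees the weights, and a handful of minor entries is enough to tip them into the punishment tier.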
In a recent escalation of this oppressive use of data, China’s Huawei corporation has tested AI software that can single out individuals who are not even suspected of doing anything. It has been tested on the persecuted Uighur Muslims, and it rings alarm bells about indiscriminate testing on the wider populace in future. Similarly oppressive developments are under way in Myanmar, Thailand, and other countries.

What does the future hold?

So what is the conclusion to all this? Digital technology empowers every citizen who has a device with access to the internet or phone networks; they can tap into more information and knowledge than was ever imagined.
But how will ‘digitalism’ develop as a framework to structure communities, commerce, and governance? Will the technology be used to advance welfare, wealth, and wellbeing?
Or how much will it be used to fuel divisiveness between holders of different opinions, to accentuate the chasm of wealth between the richest and the poorest, or to suppress what liberal countries hold to be basic human rights? What can crowds do against data manipulation by their governments? Can governments bring the FAANGs in line with generally accepted expectations of financial and ethical behavior by businesses? In short, what is needed for digitalism to remain democratic? I challenge each and every one of you to really think about, and act on, what is right. We are at a crossroads. Don’t let your basic human rights be taken away before your eyes.