I don’t think it’s possible to talk about “Anthropocene art”: it would imply defining its main features and aesthetics, something which is not doable at the moment. Art is a reflection in a drop of water, a land of unlimited shades of expression, and as such it can generate any sort of thing – there is just one, single certainty: that it is a mirror of the historical period in which it is produced. All original forms of creativity are imbued with the spirit of their own time, and, paradoxically enough, the very term “Anthropocene,” coined by chemist Paul Crutzen, is so far the most representative work of art of this century. The human–nature relationship has decentered whatever classical and modern remnants of anthropocentrism were left.
When it comes to dealing with topics such as biosphere corruption, some artists explore marginal spaces, which are hard to investigate without the right expressive tools. As occurs in chemical reactions, contemporary art “oxidizes” – it takes a snapshot of the relation between beings and the space where such relations take place, that is, the blue planet we live on.
The threat of a natural disaster affects our decision of what to depict. What I’m trying to explore in my path is inference: over a single image, I favor a set of images; over performance, the materialization of an open, non-conclusive research process; over work, poetics; over isolated information, generative matrices.
The study of nature teaches us several things. Observing the “vital” processes (from the Big Bang to the Paleolithic, up until present times) is the framework that should guide us in our most crucial decisions. Post-alphabetic culture – which, unlike oral and alphabetic cultures, represents reality with pictures instead of characters – has broken the traditional harmony between beings and nature, therefore causing a rift between an action and its direct consequence: why do we call any Rough Collie “Lassie”? Why is the redundancy of digital images depicting humans not triggering a reflection on our population growth rate as a pivotal element for the wellness of our planet? Two thousand years into the study of nature, layering scientific paradigms of common use with the eyes of poets could help update the models and cultural devices with which we look at the life-forms of the biosphere.
An interesting topic in a historical period facing the threat of a natural disaster is memory retention. For years, I’ve been practicing alienation from the things I own: memory “pressure” is a limit, and such a practice helps me focus on creativity and thought. Clearly, I am not indifferent to scientific progress, but I don’t want to be guided solely by technological constructs. I find it interesting to collect data and then force its interpretation with an artistic approach.
Design is a crucial restoration tool in this era of transition: it partakes in the foundation of our civilizations and their artifacts, and at the same time, it shapes human behavior. Like a child experimenting with its abilities, industrial design has produced many aesthetically conventional products with a strong consumer appeal, but this is changing, and a whole new generation of designers attentive to the bio-social function of objects is arising. It’s bio-mimesis: the conscious study of nature’s biological and biomechanical processes as a source of inspiration to improve human and technological activities. The theme here is to design sustainable devices or systems of objects that can generate virtuous behaviors.
Some of the people who have heard about utopia also know the frontispiece of the early-sixteenth-century edition of Thomas More’s Utopia, the book in which the term was first coined. It is a graphic illustration of a place cut off from the rest of the world, at a faraway, foreign distance, and it gives us a sense of utopia as a projection, something that cannot be realized within a large society. This visualization links utopia to the common usage of the word: something that is unattainable and unrealizable. However, as Gregory Claeys, professor of the history of political thought at Royal Holloway, University of London, and chair of the Utopian Studies Society, says, utopia originally referred to a kind of ideal society.
“The concept of utopia goes back to Sparta and Plato, and we can actually trace four different stages of evolution over the years,” says Claeys. “The first stage is the notion of an ideal society that originated with Sparta, and was later taken up by Plato in a slightly different form, and was characterized by two crucial concepts: the equality of property among citizens and the contempt for luxury. In parallel, from the beginning of the Christian era, there are two fairly different conceptions: the idea of Eden, an original state described in Genesis, and the notion of Heaven, the perfect state of society that exists after death. Two different instances that face opposite directions: the former into the past, the latter directly into the future.”
The second stage occurs around the beginning of the sixteenth century, with the first peasant uprising led by Thomas Müntzer in 1525. This stage, as Claeys points out, “is usually described, in the works of German sociologist Karl Mannheim, as the first stage in which a utopian mentality oriented towards a more perfect society in the afterlife comes to be conceived as possible in this life.” Mannheim calls this Orgiastic Chiliasm, and it means bringing down to earth the idea of a much more equal society, usually associated with Heaven as well as Eden. “Rather than waiting for the utopia to occur in a future world,” continues Claeys, “people begin to implement such a vision in this world.”
The third stage, then, is usually associated with the eighteenth century. “It is a shift from seeing utopia as something that happened once upon a time, for example in the golden age, to seeing it as a re-orientation of utopia towards the future,” explains Claeys. The German historian Reinhart Koselleck calls this the temporalization of utopia: an abstract ideal becomes temporal and real. “He also describes this as the morphing of utopia into the philosophy of history,” adds Claeys. In this way, we see history as doing one of two things: either potentially creating the ideal society — if the right conjuncture of human beings and revolutionary spirits comes about — or automatically producing an ideal society, through a series of necessary progressive stages. As Claeys highlights, this stage is secularized in two revolutions which take place in the eighteenth century: the American Revolution against Britain, and the French Revolution.
The fourth stage, instead, is a kind of variation on the third one: utopia shifts from being something that once occurred in the past and we might try to recreate, to something that will only happen in the future and might be subject to human effort and agency — or even occur automatically. “This concept,” Claeys adds, “is closely linked in the nineteenth and twentieth centuries to the single most dominant concept of the whole of the modern period: the notion of progress. In this stage, the future is bright and every generation sees an increase in the standards of living, and utopia resides at the end of the curve of progress, which will go on indefinitely into the future.”
All these stages of the evolution of the concept of utopia share a common peculiarity: they are all rooted in our western cultural point of view. “The question of how universal across the entire world utopia is,” explains Claeys, “produces quite a bit of difficulty for scholars working in the field. If you define it relatively narrowly then it can be made to be essentially European; if you assume that it is a much looser concept then it appears to have universal relevance.” In fact, there is one group that identifies the concept precisely with Thomas More’s coining of the term, considering the ideal society from a humanist and republican viewpoint. On this view, it is mainly a European and Christian concept, and while it has certain parallels elsewhere, it’s not universal.
A second group of scholars, instead, argues against this position and says that there are clearly generic ideas of the ideal society in virtually all cultures: “We can find them in Islam, in various earlier pre-Islamic African cultures, in Daoism, in Confucianism in China, in certain South-Asian traditions as well,” explains Claeys. In this case, we clearly see that the notion of the existence of an ideal society oriented around considerably greater equality does have a more universal following. From a literary point of view, this division is even clearer: the vast majority of utopian books are entirely unknown in the West, and for the most part, the western tradition that we attribute to More in 1516 is not diluted fundamentally by any other foreign or external sources. Western utopian literature is oriented towards its own history.
After the publication of More’s Utopia, there were waves of utopian writing, all of which were responses to domestic social crises of various kinds. We can consider two major phases before the middle- and late-twentieth centuries: the first is the controversy created by Edward Bellamy’s book Looking Backward, which was published in 1888. Explains Claeys: “This is the first major reaction in the USA to the industrial revolution, and is preceded in Europe by a reaction to the industrial revolution by the early utopian socialist writers, such as Robert Owen, Henri de Saint-Simon, and Charles Fourier. A third group focused on the Bolshevik Revolution of 1917.”
Despite this vast and growing collection of utopian writings, visualizing the concept of utopia is more complicated. After the frontispiece of More’s Utopia, illustrations of utopia from that time onwards are not as frequent as we might imagine. The reason is simple: it is a difficult topic to portray. Highlights Claeys: “There are a couple of related topics that often stand in for the representation of utopia, for example, the Greek idea of a golden age has been represented many times during the sixteenth and seventeenth centuries.”
The peasant utopia is a variation on the Greek ideal of the golden age, and it’s masterfully depicted in The Land of Cockaigne, a 1567 oil painting by Pieter Bruegel the Elder. Cockaigne was a mythical ‘Land of Plenty,’ and in the painting every kind of delicacy populates the scene: an egg wanders about on its own legs, a knife stuck into its shell; a roast chicken sits on a plate; a pig runs with a knife slicing its back. “There are many images of society where themes of abundance — most oriented towards eating and drinking, and celebrations — are central,” explains Claeys. This kind of celebration is integrated into the well-known tradition of Carnival, an updated version of the Roman festival of Saturnalia. In Saturnalia, a kind of ideal society became reality: a law reversal allowed slaves and masters to switch their obligations, overturning social norms, and returning to a world of equality.
In the nineteenth and twentieth centuries, instead, a different set of illustrations was produced by early socialists. “They produced novels and graphic images detailing plans of what their ideal community ought to look like,” explains Claeys, “with square buildings usually, some sort of industry steaming away in the countryside, and a kind of unity of vision of the best of modern industry and rural quasi-pastoral kind of golden age motif.” These illustrations led the way, in the middle- and late-nineteenth century, to a brand of utopias which were directly linked to science fiction. “From this time onwards we encounter a new vision of what a hyper-technological scientific-oriented future might come to look like,” adds Claeys, “most of these include flying machines, skyscrapers and the like.” This can be traced back to H.G. Wells, the greatest proponent of this vision, describing a modernistic variant of future-oriented utopia.
However, the notion of indefinite progress — particularly conceived, in scientific, technological, and material terms, as an indefinite exploitation of nature — came to an end in the twenty-first century. “It comes crashing down within our own memory in the past 15 to 20 years,” warns Claeys. The climate crisis requires us to face a daring choice for our future. The worst-case scenario predicted by scientists in the ’90s and early ’00s turns out to be the scenario we should most likely anticipate. With rising sea levels all over the world, widespread fires, and migrants fleeing their home countries due to unforeseeable climate disasters, we live in a world soaked in dystopia. Thus, one must actively commit to utopian thinking to find a way out.
Claeys tries to embed the concept of utopia within the idea of utopianism, which includes three main components: ideology, literary fiction starting with More’s Utopia, and what he calls intentional communities — groups of people who would come together to try to live according to ideals that are linked to the concept of utopia. “These three components are linked around a concept which is kind of a form of solidarity or association or friendship,” explains Claeys, “I call it enhanced sociability, meaning a greater, closer form of sociability that is generally a standard in those societies created by the author of utopias, or those who attempted to create one.”
Utopian thinking now has more than an environmental prospect to analyze and deal with. According to Claeys, besides the climate crisis there are two more sets of dystopias that are going to converge in the twenty-first century: dystopias of AI and tech surveillance, as in the use of facial recognition technology in China, and dystopias resulting from the concentration of wealth and power in the hands of billionaires. The danger here is that the underlying sense of nervousness and hopelessness about the future morphs into something like hysteria: people don’t see an alternative and don’t see practical measures put in place.
Utopian thinking comes in to provide a way to support us both psychologically and concretely. “It offers a kind of map for us so that, rather than simply expecting the future to grow out of the present, we have some possibility to say there are alternative courses that we can take: Do we want this future or that one?” says Claeys. “It’s a way of envisioning projections out of the present, some of course completely fantastic and unrealistic, others much more oriented towards looking at realistic trends within the present and seeing how they might pan out in the next 50, 60, or 100 years.” Utopian thinking is a realistic way of analyzing various types of futures, and it becomes crucial, more than ever, in our current times. As the famous speculative fiction author Ursula K. Le Guin wrote in one of her essays, we should consider utopian thinking close to the conceptual term used by the Swampy Cree, a division of the Cree Nation occupying lands in Canada, to describe the thought of a porcupine as he backs into a rock crevice: “Usà puyew usu wapiw! He goes backward, looks forward.” To envision an inhabitable future, we should find our roots.
When will we close the gender gap? The year 2119, almost a century from now, is the average prediction in a World Economic Forum special report, which monitors 107 countries across the globe. Does that also apply to the workplace? Unfortunately, yes. Salaries are one of the areas with the least progress: there, the gap widens even further, to 247 years — we are closer to “year zero” than to gender parity. No matter which angle you look at it from, it doesn’t look good.
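The arithmetic behind such projections is essentially linear extrapolation: take the rate at which a gap-closure score has moved between two report years and ask when it would reach full parity. A minimal, purely illustrative sketch (the scores and years below are invented, not the WEF’s actual figures):

```python
# Illustrative only: how a "years to parity" projection can be computed by
# linear extrapolation of a gap-closure score (1.0 = full parity).
# The scores and years below are invented, not the WEF's actual data.

def years_to_parity(score_then: float, year_then: int,
                    score_now: float, year_now: int) -> float:
    """Extrapolate when the score reaches 1.0 at the observed linear rate."""
    rate = (score_now - score_then) / (year_now - year_then)  # points per year
    if rate <= 0:
        raise ValueError("the gap is not closing at this rate")
    return (1.0 - score_now) / rate

# Example: a score that crept from 0.65 to 0.68 over 14 years
remaining = years_to_parity(0.65, 2006, 0.68, 2020)
print(round(remaining))  # roughly 149 more years at that pace
```

Slow observed progress pushes the projected parity date out by decades, which is how headline figures of one or two centuries arise from seemingly small annual changes.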
Knowing that the gender gap is likely to persist through our lifetime and that of our great-granddaughters makes it look more like an unreachable goal than an achievable one. One might think that those average numbers are so high because of the countries that are doing worse than Europe and North America. That’s somewhat true, but the latter aren’t that promising either. North America (like Europe) has closed its educational and health gender gaps, and has the smallest gap in the workplace, but this last figure (24% of the gap still to be closed) has remained unchanged since 2006.
Our notion of power, which has remained unchanged over the last few centuries, has a lot to do with this quiescence: power is a crucial piece of the equation that composes the concept of gender. And power is also a fundamental ingredient of organizations, which are essentially groups of people whose relationships are profoundly based on the power each individual holds and exercises.
Sally Haslanger, professor of philosophy at Massachusetts Institute of Technology, considers this situation. “The source of the employer’s power lies in the structure of relationships in the workplace and deference to those relationships,” says Haslanger. “In general, relationships are constituted by norms that define successful or apt participation in them; such norms allow for the accumulation and transfer of various forms of capital — economic, social, cultural, symbolic. This conception of power is important because it shifts attention from individuals and personalities to the structure of our relationships. There are incompetent and malevolent people who are in charge of some institutions and there is reason to replace them. But it is also true that individuals are deeply shaped by the social roles they are asked to play, and we should be looking to establish a just configuration of social positions rather than simply aim to get nice or smart people in positions of power.”
It shouldn’t come as a surprise that the tech industry is not only as far from gender equality as any other industry — it is actually doing worse. According to hiring firm Adeva IT, women hold only 25 percent of computing jobs — a figure that has been declining since 1991. This is despite tech organizations investing significant resources in hiring more women.
Alison Tracy Wynn, a research associate with the Stanford VMware Women’s Leadership Innovation Lab, says there are a lot of different factors operating to contribute to that gap, and “that’s why the problem is so difficult to eradicate.”
She adds: “It’s not like you can just fight gender inequality in one space, or on one front and fix it.” Wynn spent a year analyzing the impact of gender equality initiatives in Silicon Valley — which mostly consist of mentorship programs and diversity programs — to understand the root of the problem and think of new solutions.
Bias is indeed one of the main issues that tech companies have to look at. “Humans are exceptionally adept at noticing patterns,” says Haslanger. “But we also have a tendency to assume that the best explanation of a pattern lies in the nature of the kind that exhibits the pattern. For example, if we only see owls at night, we conclude that owls are nocturnal — it is a feature of the kind.” But such inferences are notoriously problematic, especially in the social domain. “Structural conditions in society create patterns that we take to be natural and right. So we expect people to behave in certain ways and often punish them if they don’t.”
“The tech industry in particular,” confirms Wynn, “also has a set of unique stereotypes that tend to keep women out.” “STEM subjects are still considered only for boys, and these cultural assumptions are then confirmed by the reality they have created,” echoes Haslanger. When this stereotype is activated, “women will usually not perform up to their potential. Due to the negative stereotypes of women in technology, stereotype threat is certainly a factor in reducing women’s participation.”
“Moreover, even if one persists in the face of stereotype threat, it can seem that the best strategy is to avoid risks and stick with tasks that don’t reveal one’s vulnerabilities. This is not a winning approach in tech fields.” In a vicious circle, the stereotype is reinforced by education. Says Haslanger: “Small differences at an early age add up. Also, access to technology and adult role models in tech are not equally distributed across income groups. It is no surprise, then, that white boys have the advantage of greater experience and comfort with technology by the time they are teenagers.”
Despite the resources many companies are investing to close the gender gap, it remains wide open because the problem, according to Wynn’s research, is being looked at through the wrong lens: it is seen as an individualistic problem, something organizations have little control over, instead of an organizational one, an area where they do have room for maneuver. Companies try to change people’s mindsets without realizing they should change their policies, too. “That assumption is really damaging, because it lets companies off the hook and leaves executives with a defeatist mindset that there’s nothing they can do to change inequality,” says Wynn.
“In my research, I see a larger individualistic mindset about inequality and about diversity and inclusion, where people think that if they change what’s inside people’s heads, then all the problems will go away,” says Wynn. “So the assumption is if we can train people to think differently and be less biased, or if we can teach women to have the skills to succeed in our current environment, then that’ll fix everything.” With unconscious bias training and a mentorship program, most companies think the job is done, but they’ve actually missed what really needs to be done: scrutinizing company policies and procedures that have an impact on inequality. Wynn says, “as long as those things continue to disadvantage women systematically, it’s not going to matter if you change what’s inside people’s heads, or it’ll only do so much.”
What to do, then? Look at all the touchpoints of the employee lifecycle and human resources. Catalina Schveninger, CPO at FutureLearn, discusses this based on her own experience. “There are lots of processes that need fixing from the core, and HR could play a better role, not just to ‘police’ manifestations of discrimination but also to redesign processes that drive diversity and limit bias,” says Schveninger. “When I was leading recruitment globally at Vodafone, we built a report with the end-to-end recruitment funnel to analyze where we don’t get enough women through the shortlists — to our surprise, the data was pointing to the ‘usual suspects’: line managers who were interviewing more male than female candidates even when presented with a gender-balanced shortlist. It’s the role of HR to raise awareness and have the conversation, and being data-driven helps to start a better conversation around a delicate subject — nobody likes to hear that he or she has unconscious bias.”
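The kind of end-to-end funnel report Schveninger describes can be approximated in a few lines of code: compute, for each gender, the share of candidates who make it through each hiring stage, and look for the stage where the shares diverge. A hypothetical sketch (stage names and sample records are invented for illustration):

```python
# A hypothetical sketch of an end-to-end recruitment funnel report:
# for each gender, the share of candidates reaching each hiring stage.
# Stage names and sample records are invented for illustration.

from collections import Counter

STAGES = ["applied", "shortlisted", "interviewed", "offered"]

def funnel_rates(candidates):
    """candidates: list of (gender, furthest_stage_reached) tuples.
    Returns {gender: {stage: share of that gender reaching the stage}}."""
    reached = {gender: Counter() for gender, _ in candidates}
    for gender, stage in candidates:
        # a candidate who reached a stage also passed every earlier stage
        for s in STAGES[: STAGES.index(stage) + 1]:
            reached[gender][s] += 1
    return {
        g: {s: reached[g][s] / reached[g]["applied"] for s in STAGES}
        for g in reached
    }

sample = [("F", "applied"), ("F", "shortlisted"), ("F", "shortlisted"),
          ("F", "interviewed"), ("M", "applied"), ("M", "interviewed"),
          ("M", "offered"), ("M", "offered")]
rates = funnel_rates(sample)
# Women's share drops at "interviewed" while men's does not: that stage
# is where the funnel report points.
print(rates["F"]["interviewed"], rates["M"]["interviewed"])  # 0.25 0.75
```

On this invented sample, women’s pass-through rate collapses at the interview stage while men’s does not, the kind of “usual suspects” signal such a report can surface.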
Wynn also found that “a lot of tech companies were using inappropriate language, references, and images in their recruiting sessions. They would talk about pornography and prostitution when trying to recruit candidates. They would put up sexy images of women or just make inappropriate references. There are also more innocuous things, which are just as important, like having female role models. A lot of companies would have all men presenting the tech content. The women, if they were there at all, would be handing out t-shirts or food or things like that. Companies should consider where they can find candidates who can add gender balance to their teams. Are they only going to five of the top universities? Are they reaching out to historically black colleges or hosting events for diverse audiences? How are they recruiting first?”
A second, crucial step is management: McKinsey’s 2019 Women in the Workplace report found that the first promotion to management is often the one where women face the most barriers. “That’s an interesting insight companies should look at: How do we give women that first step into management?” Then comes performance: “We find that women are held to a higher standard than men and are evaluated differently. They’re more likely to be evaluated based on personality, or more likely to be given vague feedback. Look at your evaluation procedures for employees and make sure that you’re evaluating them fairly and that you have clear, consistent criteria tied to business impact and actual performance,” continues Wynn. The same goes for project assignment, decision-making, and any other processes that form a company’s policies.
An additional layer of complexity is that the gender gap is not easy to monitor, especially while we go on with our lives and businesses. Wynn says: “In Shelley Correll’s ‘small wins’ model, you create a small win and you basically create this contagion where people get excited about it. Small wins will then help identify the next areas to change.” An argument Haslanger also brings up: “Most of us have very little opportunity to bring about change in our social milieu, and we aren’t motivated to look for it. Change is difficult because it is disruptive, not only for our relationships but also for our self-understanding. But when taken up with others, in a moment of resistance or in a social movement, the challenges can be inspiring and even small change becomes empowering.”
“The good faith is there, people want to change,” says Wynn, “they just don’t know how.” Treating gender parity not as an individual issue but as a collective, structural, organizational one is a tough mindset to build, but it will help us move beyond mentorships, possibly bringing the year human society finally defeats this inequality closer than 2119.
Lorraine Justice is an industrial design professor, design researcher, and consultant from the United States who focuses on the future of products and services, global innovation, and development strategies. Justice also serves as a design strategy advisor to governments, universities, corporations, and nonprofits. We asked her to talk about what the future of design might look like in a post-Covid-19 world.
How has Covid-19 launched us into a new era of human-centered design?
It’s remarkable that a large number of designers and companies from around the world jumped right in and started to design better masks, protective equipment, and sanitizers, along with other ways for us to protect ourselves. They are also trying to design better social distancing experiences. Here in the United States, with restaurants open for drive-up and takeout, we are really harkening back to the days when someone would bring food to the car and hook a tray on your window. This option has kept many restaurants open and earning (some) money. Covid-19 precautions will permeate our new designs. Materials such as copper — which is supposed to be anti-microbial — will probably be used more strategically. New temporary products such as phone sanitizers and fobs for pressing elevator buttons will come and go.
If we approach Covid-19 as a design problem, could we find an innovative way to apply design thinking and human-centered design principles to help solve some of the everyday frustrations and issues that people are facing?
Yes, we can absolutely use design thinking and design processes to solve our everyday frustrations. Design thinking will help us look at the world’s problems and uncover some of the issues that need to come to light. The design process will help us to focus on solving problems and finding opportunities. I often think of what would happen if we got designers together with scientists regularly, and if they started to look at the big picture, where the intersections might be, and what might emerge from those sessions. That process of putting everything on the wall, searching for what connects, looking for patterns, looking at something that may be unusual. Looking at the whole world of a problem. That is when the magic of design happens.
I am sure that next autumn we’ll have a wealth of design students working on pandemic-related projects. And we’re seeing more students interested in sustainability, trying to help with not only human disasters, but ecological disasters. The design profession is moving towards these areas where people can really help rather than just making an aesthetically superior product. And this is one of the great things that’s coming out of the design field, this focus on health and well-being and human value … although we still want beautiful things.
Does human-centered design need new principles to navigate through a post Covid-19 world?
The issue that really started to hit home for me with this pandemic is how interconnected everything is. Prior to this, the design field worked very hard to get to the point of using the phrase human-centered design. It marked a distinct difference from the design that was going on earlier, where we would have the ‘lone designer as hero,’ and a lot of that came from architecture, artists, and so on. And it was about them creating something for the world that they wanted to communicate. The shift was towards human-centered design and sustainable design: looking instead at how we can preserve systems, how we create systems that care for people. Human-centered design means good design for everyone. But now, with ecological disasters and the pandemic, human-centered design as a phrase is too narrow. I don’t think we should consider just humans in our design — although people will argue that ‘humans’ covers everything. It doesn’t. And it puts an emphasis on humans instead of equal rights for animals, nature, etc. I think we need to look at broader systems and bring in more expertise. I don’t have a new word for it. Some people like the phrase life-centered design. But I think that is still too narrow. I just finished a book called The Future of Design, and my research reinforced the idea of how much more complex design is going to become, how the themes may be larger, and how more experts may be brought in more frequently. So at least designers understand now that it’s not all about them and that they need to learn how to work with others and process more complex information — even conflicting information and data. We’re looking at a much more complex design process in the future.
You wrote your book before Covid-19, what is one lesson from it that can apply to this new design concept?
I am looking more closely at how people reason during design thinking and the design process to see where biases can influence outcomes (for better or worse). And I’m looking at Artificial Intelligence to help with that deep dive and see where bias can be removed. But it won’t take away the designer’s passion. I think if anything, it will increase this passion because they can really get more deeply involved in the problem and have even more solutions emerge. There might be a little mini renaissance of new ways to live among the viruses coming our way. We should always question our beliefs and ask: Why did we do things a certain way? Unfortunately, a lot of times it takes an emergency to spur us to redesign our lives, even if, right now, that means constant change in the name of safety.
After nationalism, communism, and liberalism, digitalism is becoming the leading, global system of social organization. Whether we like it or not. The Covid-19 crisis was a catalyst for this rise of digitalism as governments around the world used it to make changes which could outlast the crisis.
Digitalism will become the first truly globally adopted political doctrine, encouraged and accepted by all governments and all companies. Digitalism envisions a world where data is the most important resource in society. It thrives on capitalism, and depending on the role of the government, either enables mass surveillance (whether state or company surveillance), or aims to empower its citizens.
In the past hundred years, we have seen various forms of social organization and political philosophy come and go. First, (extreme) nationalism collapsed after WWII. Then communism fell in the Western world with the fall of the Iron Curtain. In China, however, communism survived and even thrived by welcoming capitalism into the system. In the Western, democratic world, liberalism became the accepted political and moral philosophy.
Although liberalism has thrived in the past decades, it seems that this global story of social organization is reaching its expiry date. As Yuval Noah Harari has written, the rise of big data and artificial intelligence could mark the end of liberalism and liberal democracy. According to Harari, the international, rule-based system is failing, and we need a new post-liberal order. I believe this new order is upon us, driven by emerging technologies that will change our lives drastically in the coming decades.
Data and liberalism
Liberalism is based on free competition and a self-regulating market. Unfortunately, many of these core principles are disappearing. Organizations are becoming so powerful, due to an ever-increasing hunger for data, that governments are failing to break the power of companies which deliberately and consistently breach consumers’ trust, privacy, and freedom. Emerging technologies such as big data analytics and artificial intelligence, combined with constant data harvesting from the Internet of Things and social media, have created a surveillance society managed either by private companies or by the state. For example, during Covid-19, moving around in China and entering your house or workplace required citizens to scan a QR code and provide their name, ID number, and temperature, enabling the government to see their exact movements. This goes far beyond George Orwell’s 1984.
As a result, individuals’ privacy and safety are rapidly diminishing. Although there are exceptions, such as the investigation by the New York Attorney General into Zoom’s privacy problems, companies such as Google and Facebook can continue their practices largely unchecked. The list of data breaches is endless, and businesses are expected to lose up to $5 trillion by 2024 due to cybercrime, directly impacting consumers’ privacy.
To make matters worse, recommendation algorithms are rapidly limiting individual freedom, although many might not perceive it that way. Recommendation engines already run the world. These algorithms base their suggestions on collected data and often only match your existing profile, creating a feedback loop that limits your freedom. They serve not the individual but the company that created them, with the objective of selling more or keeping you around longer. Recommendation engines are toxic, yet they are everywhere. Unfortunately, governments increasingly lean on such algorithms to make decisions.
What will digitalism look like?
What we will see in the coming decades is a division of the world into three streams of digitalism, depending on how governments allow organizations to deal with the data at hand and how citizens respond to it:
State digitalism will result in state surveillance at an unprecedented level. We already can see the first signs of this in China, especially in the Xinjiang province, where an AI-powered panopticon limits Uyghurs (a minority ethnic group) in their movements. Any misstep in this kind of state could be immediately known and have direct consequences.
Neo digitalism will result in company surveillance far beyond what Google and Facebook exhibit today. It will be characterized by an extreme free market, unlimited data harvesting, and raging capitalism. Within such a society, the likes of which we see slowly unfolding in the USA, there is limited accountability online. The state will have less say over people’s digital lives than behemoth companies; indeed, the state will not be capable of controlling corporations at all. Driven by libertarians, neo digitalism will allow companies to decide and do whatever they want, causing a great divide within society. A small elite will gain wealth at unprecedented levels, resulting in extreme inequality.
Modern digitalism combines the advantages of digital tools with strict privacy and security regulations. Citizens will be empowered with control over their data. It is in this scenario that self-sovereign identity and decentralized networks stand the best chance to succeed, allowing citizens to store and control all the data they create while interacting online. Online accountability will become normal, but citizens’ privacy will be secured (such as the anonymous accountability created by Mavin). Modern digitalism is most likely to take hold in Europe (although we are already facing the harvesting of our telecom data to monitor the spread of Covid-19). Fortunately, the EU is already working on ethical AI guidelines, and the GDPR (General Data Protection Regulation, implemented May 2018) offers at least some protection for EU citizens.
What will digitalism create?
Digitalism will disrupt societies.
We will see more and more dark factories. Such factories will be equipped with fully automated systems and, hence, will not require lights, as humans will not be part of the manufacturing process. The factories will be expensive to build, but will see huge financial returns.
Autonomous artificial hackers will engage in machine-to-machine fights, operating at unbelievable speed and agility to steal and protect the data of consumers and organizations. Digitalism will also put the traditional hacker out of a job and erode consumers’ privacy even further.
Artificially created fake news, bad bots, and armies of online trolls will influence the online (political) discourse. The objective of these digital agents will be to sustain the state or company surveillance that is in place. Citizens will find it increasingly difficult to know what and who they can trust online.
Companies and governments will outsource their processes to AI, which will take over blue- and white-collar jobs. This will make organizations much more effective and efficient, but it will also come with significant challenges. AI is created by biased humans and is often trained on biased data, and it can function as a black box. Thanks to digitalism, how we run our society will become opaque, known only to the elite who own the data and the AI.
Free will could disappear. Due to the unconstrained data harvesting, AI will know what you want better than you do. Organizations will thus have an economic incentive to constantly improve their recommendation engines, turning humans into machines who simply follow AI’s suggestions.
These scenarios might scare you, and so they should. A society based on digitalism will produce a tiny elite who control the digital tools while the vast majority are subservient to them. Although many citizens will experience the benefits of these digital tools, they will also feel increasingly irrelevant. How irrelevant they become depends on whether digitalism is controlled.
A digital Gestalt shift
The rise of digitalism is unstoppable. However, as citizens, we still stand a chance to build a society that is there for us and not for corporations or dictatorial leaders. Companies such as IBM and Microsoft backing the Pope’s pledge for ethical AI is a step in the right direction. It will require hard work and involve all stakeholders, but anything is better than becoming enslaved to technology and losing our freedom and free will. Digitization offers tremendous opportunities that can help us create a better world. In the end, just like any technology, digital technologies are neutral. You, as a user and as a citizen, can be in control of your data. If we follow the path of modern digitalism, then we can truly be empowered. Digitalism doesn’t have to be a bad thing; it all depends on how we deal with it. Why should I be able to use your data free of charge? I don’t use the other things that you own for free. And that’s a mentality shift, a Gestalt shift, which we need to embrace.
The world we’ve always dreamt about is at hand. An easier life in a perfect society is possible, and science is the key to reach it. Pared to the bone, this is the main belief of Technological Utopianism, an ideology built on an ironclad faith in technology and its ability to solve any kind of problem, which would enable people to live in a sort of utopia. Since the dawn of civilization, humankind has always indulged in envisioning a perfect future where all present issues are gone. And myths and legends came in handy to nourish that hope.
The Age of Reason gave humanity a new certainty: there were no Gods to turn to in order to make dreams come true. Science became the guiding star capable of leading humankind to a new condition. This faith in scientific progress may be the only thing that techno-utopian thinkers have in common. Some authors have envisioned post-scarcity — the condition of a world where vital resources are not limited — thus making wars unnecessary. Others believe that technological development will enable us to reduce, or even avoid, pain. The most radical believers are convinced that death can be defeated.
Techno Utopianism lacks the precise boundaries that other ideologies, such as Communism, have. It does not have a cornerstone like The Communist Manifesto, so it is unclear when it was founded, by whom, and what authors are part of the movement. Marx himself is considered among the first techno-utopians, for he believed that the rise of machines would play a pivotal role in bringing capitalism down and paving the way to the perfect, communist society. According to Rob Kling, one of the fathers of social informatics, technological utopianism does not refer to a set of technologies, but rather to “analyses in which the use of specific technologies plays a key role in shaping a utopian social vision, in which their use easily makes life enchanting and liberating for nearly everyone.”
In The Shape of Things to Come (1933), proto techno-utopian writer H. G. Wells described a future world order and envisioned many events that actually occurred, such as a new World War, the destruction of Europe through extensive aerial bombing of its major cities, and the development of weapons of mass destruction. Yet he was confident that a new, peaceful, equal, and more evolved society would rise from those ashes, with a benevolent dictatorship enforcing the abolition of nations, the suppression of religions, the adoption of English as the universal language, and the promotion of science as the primary way to achieve progress.
Although the origins of Techno Utopianism may be unclear, no one can deny that it is mainly an American thing (see Technological Utopianism in American Culture, by Howard P. Segal) and that, after remaining an interesting cultural phenomenon throughout the last century, it has come to new life in the last few decades. In the late ’70s, philosopher Bernard Gendron, in Technology and the Human Condition, defined the principles of this ideology: we are presently undergoing a (post-industrial) revolution in technology; technological growth will be sustained (at least); its growth will lead to the end of economic scarcity; and the elimination of economic scarcity will lead to the removal of every major social evil.
Yet, in the Cold War era, the constant fear of a nuclear holocaust was as strong as the utopian drive and counterbalanced any excessive faith in the power of technology. In the ’80s, however, the landscape suddenly changed. As the ice between Moscow and Washington, D.C. started melting, the world was in the middle of the Age of the Computer, and technological utopianism received an unexpected boost. In 1986, U.S. President Ronald Reagan declared that “the Goliath of totalitarianism will be brought down by the David of the microchip.”
The advent of the Internet brought new confidence. As early packet-switching networks evolved into the Internet, a generation of futurists and TED talkers arose, explaining the new system to the laity in a spirit of wide-eyed Techno Utopianism. They compared it to a superhighway, to a marketplace of ideas, to a printing press. The printing-press comparison, in particular, endured. In an open letter released in 2012, Mark Zuckerberg claimed that “We often talk about inventions like the printing press and the television — by simply making communication more efficient, they led to a complete transformation of many important parts of society. […] They encouraged progress. They changed the way society was organized. They brought us closer together.” Just like the printing press made the Renaissance and the scientific revolution possible, the argument goes, social media will lead to a new Golden Age.
If the United States is at the forefront of this revolution, California is its hotbed. Three of the tech companies that have changed the world (Apple, Facebook, and Google) are headquartered there, and Technological Utopianism has become mainstream, so much so that an impressive slew of techno-utopian sub-movements has blossomed, four of which deserve mention. The cyberdelic counterculture was born when cyberculture merged with the psychedelic subculture. Its members challenge any authority and believe humans can transcend not only the limits of the body but also those of space and time, all thanks to technological development.
Transhumanism is another ideology centered around the belief that the human race can evolve beyond its current physical and mental limitations, especially by means of science and technology. For their part, singularitarians believe in The Singularity — a time when a superior intelligence will dominate and life will take on an altered form that we can’t predict or comprehend in our current, limited state. Finally, technogaianists believe that new green and clean technologies will allow us to save the planet and restore the environment.
The Fourth Industrial Revolution further fueled the wildest dreams of the techno-utopian community, part of which now pursues the most ambitious goal mankind has ever had: immortality. Silicon Valley’s elites are funding dedicated research centers and supporting organizations like the Coalition for Radical Life Extension. Scientists and entrepreneurs are working on a range of techniques, from attempting to stop cell aging to injecting young blood into elderly people. That only a small group of tech tycoons has the financial resources to fulfill this dream reminds us all that something went wrong along the way. Has Technological Utopianism been betrayed? Many think so. Argodesign’s chief creative officer Mark Rolston confessed his disillusionment in Fast Company: “This is not the Techno Utopia I thought we would create. We’ve gone from envisioning giant leaps for humanity to fighting hate speech, Russian misinformation, and politicians who use technology to warp public discourse with lies and vitriol. In the early 1990s, the techno utopian concept emerged breathlessly, promising a new world that was freed from the cruft of the tired, old structures that were unfair, top-down, and controlled by a powerful and wealthy elite. Technology would reinvent how we communicated, conducted business, learned, and shared news. It would deliver power to the individual, free from human bias.”
Is it so? The concentration of astonishing wealth in the hands of a few tech-entrepreneurs who are designing the new world raises some questions and confirms that the line between utopia and dystopia is dangerously thin.
From philosophy to ska music, we look at how the concept of utopia inspired the work, life, and movements of four influential figures across history and around the globe.
Peter Waldo (1140 – 1217)
Not much is known about the life of the man who is regarded as the founder of the Waldensian movement. Historians say he was a wealthy clothier and merchant from Lyon, France, and a bit of a scholar. Following a crisis of conscience, he sold all of his property around 1173, preaching apostolic poverty as the way to perfection. He taught his ideas publicly, condemning what he considered papal excesses and some Catholic dogmas, including purgatory and transubstantiation, which he called “the harlot” from the Book of Revelation. Waldo and his disciples, the “Poor of Lyon” — who later became known as Waldensians — evangelized their teachings while traveling as peddlers and practicing voluntary poverty and strict adherence to the Bible. Their main inspiration was the Sermon on the Mount — the most important moral discourse from Jesus Christ. They believed that every baptized Christian could serve as a priest, that the gospel should be spoken in local languages instead of Latin, and that the church should not compromise with anyone in political power. They were declared heretics by the Roman Catholic Church, mostly because in their community laypeople, including women, were allowed to preach, as they still are today. Pope Lucius III excommunicated Waldo and his disciples in 1184. Fearing persecution, Waldo fled from Lyon to northern Italy and the Piedmont valleys. Today, the Waldensian Evangelical Church counts some 45,000 members, primarily in Italy, Switzerland, and South America.
Ernst Bloch (1885 – 1977)
For the famous German writer, philosopher, Marxist, and inspiration for the student movements of 1968, utopia was at the core of philosophical research. During World War I, while living in Switzerland as a refugee, Bloch wrote The Spirit of Utopia, in which he explained his general ways of thinking. War, he said, “was the failure of European culture.” To build a new existence, he argued, it was necessary to develop a fresh utopian concept, following the route of fantasy, struggling for what is not yet there, no matter how poor reality was. Bloch’s notion of utopia is not the same as that of Renaissance authors: it is neither impossible nor abstract. On the contrary, it is something deeply possible, difficult to reach but still within the realm of possibility. Practically speaking, Bloch’s utopia could be compared to a long-term political program. Utopia, he says, has a double meaning. It’s inside every one of us, who could potentially reach the status of homo absconditus — a utopian man who’s never born. And it’s also outside, as a mythical homeland we’ve never been to, and nonetheless we dream of it as the end of our journey. Although Marxism criticized utopias as mere ideologies, and thus reactionary, Bloch argued it would be an error for socialism to reject utopian ideas. If men only fight for their basic economic needs, that won’t be sufficient motivation in the long run. That’s why, according to Bloch, socialism has to encourage the deepest and most ambitious desires of men in the fields that make humanity unique: art, religion, and philosophy.
Milan Šimečka (1930 – 1990)
“Although I have had more than enough opportunity to rage at failed and moribund utopias, now, years later, I have made my peace with them […] realizing that without them our world would be that much worse.” Starting in his mid-20s, Czech philosopher and dissident Milan Šimečka dedicated a significant part of his intellectual life to the concept of utopia, which he analyzed with respect to the ideology and political practice of the Soviet regime. In 1963, he published a systematic work about social utopian theories — Social Utopias and Utopians — followed in 1967 by The Crisis of Utopianism, a utopia-based criticism of Marxism. Šimečka described utopia as a regressive conception of history, comparing it to a form of para-religious exaltation. In twentieth-century socialism, he recognized classical utopian ideals, such as the struggle to create a one and only notion of socialism and communism as the unreachable ideal everyone should look to. He argued that the ultimate purpose of ideology is to justify crimes, by persuading people that evil comes not from men but rather from the powerful and mysterious hand of history. According to him, socialism could only have a future if it embraced the uniqueness of the individual, abandoning what he defined as “simplistic images from the last century.” For these ideas, Šimečka was labeled a dissident and revisionist: in 1968, he was expelled from the Communist Party and forbidden from teaching. In 1989, he was arrested and imprisoned for more than a year, charged with “subversive activities.”
Jerry Dammers (1955 – present)
What’s more utopian than a kid from Coventry, England who is barely aware of Nelson Mandela, and yet writes a hit that becomes the most powerful anti-apartheid anthem? It was 1983 when 28-year-old Dammers — a keyboard player and songwriter for the British ska band The Specials — heard the South African leader’s name for the first time. “I went to a concert at Alexandra Palace to celebrate Mandela’s birthday,” he told The Guardian in 2013. “People like Julian Bahula, a South African musician who came to Britain in exile, were singing about him, which gave me the idea for the lyrics. I picked up lots of leaflets at the concert and started learning about Mandela. At that point, he’d been imprisoned for 21 years.” The Specials were in chaos at the time, and three of its members — Terry Hall, Lynval Golding, and Neville Staple — had left to form a new band. But Dammers carried on the project with a new name — The Special AKA — and a fluctuating line-up. “There were lots of arguments in that period, so I asked Elvis Costello to produce the song because I thought he’d bring everyone together,” he said. “The track felt very important: trying to get it done before the whole thing fell apart was exceedingly stressful.” The chorus of Nelson Mandela was sung by three top session singers. Its words — “Free Nelson Mandela!” — are often used to refer to the song as a whole. The melody was composed years before: “In the early 1980s I made up a tune that was vaguely Latin-African. I didn’t quite know what it was, but it was very simple. The main melody was just three notes — C, D, and E — with brass embroidered around it. I think writing the tune before writing any lyrics was key. If I’d known anything about Nelson Mandela beforehand, I’d probably have come up with some earnest thing on a strummed acoustic guitar.”
Education is a marketplace. Schools, colleges, and universities provide a physical interface between teachers and students, with course content and standards set and maintained by off-site regulators. Education sits in a traditional three-stage life of education, work, and retirement that has hardly changed in a hundred years. It now looks like Covid-19 has finally focused enough attention to haul education into the digital twenty-first century, with an opportunity to benefit from crowdsourcing techniques.
In the past few weeks, millions have experienced how online education reduces the need for physical travel. Universities are future-proofing some of their courses (and their incomes) against the threat of further waves of Covid-19 infections by making them fully available online. So where could or should students live? Will teaching staff all be fully trained in online techniques? Will universities charge the same fees, and will students be willing to pay them if they don’t have a social life to accompany their classes? Will students still want an unbroken three or four years of study?
Among the many hundreds of institutions going online is Cambridge University, which has released some insights. All lectures will be online for the entire 2020–21 academic year. However, tutorials, which are often one-to-one or one-to-two sessions, will remain face-to-face if possible. Student fees will stay the same. Lecturers are expected to arrange their own training in online teaching techniques. Tutors have been asked to think about how tutorials “can be delivered remotely should some students not be able to return to residence halls, or if there is another phase of lockdown preventing students from leaving their colleges.”
At the same time, the global adult workforce is threatened with the highest unemployment rate since the Great Depression, which began in 1929. How many post-pandemic jobs will be available — and to whom — after employers review their processes and restructure them to make greater use of robotics and artificial intelligence?
Reskilling among adult workers, particularly those doing routine or repetitive tasks, was already an issue prior to the lockdown. Covid-19 has accelerated the trend of losing routine work to robots and AI. How many people will need to retrain and develop new skills, and where will they do it? From home and online is increasingly the answer.
There is evidence that learning as a crowd is beneficial. Duolingo is the world’s biggest free language school with over 300 million users. In a nutshell, language students are asked to translate texts to practice what they are studying. Corporations including Facebook and Google pay Duolingo for translation services, and students enjoy free learning courses.
It’s a great example of using a crowdsourcing model to tap into skills at a scale beyond what would ever be possible using internal resources — employees. The platform and its users share the same aim of free learning: people are prepared to work for free, knowing that it means others will enjoy a free language course. Users are also continually motivated by gamification techniques that tap into humans’ competitive nature: the introduction of leaderboards saw student input increase by 20%. The longest-standing students are awarded contracts giving them intellectual property ownership of the data they provide, with the provision that Duolingo can use it permanently. It’s a form of recognition that strengthens the alignment of students with the platform and aims for a win-win.
But could some form of creating value from university students’ output reduce their fees, or provide additional benefits?
Skillshare is another example of crowdsourced reskilling: an online learning community with thousands of classes for creative and curious people on topics including illustration, design, photography, video, freelancing, and more. The platform gives teachers a straightforward opportunity to share expertise and earn money in the gig economy, and enables millions of members to come together and find inspiration to take the next step in a personal creative journey, one that could lead to gig-economy earnings of their own.
Skillshare continues to see creativity act as a catalyst for wider growth, change, and discovery in people’s lives, since inspiring and multiplying human creative exploration is recognized as something difficult to replicate through machine learning and AI.
What is of particular interest is how education and reskilling, and work, might blend together in a post-pandemic ‘new normal.’ Catalina Schveninger of FutureLearn recently shared some thoughts on this with Crowdsourcing Week.
FutureLearn sees a trend toward tertiary education made up of a series of micro-credential courses that can be studied with a variety of course providers, earning recognized degree-level or higher qualifications at each student’s own pace. This would effectively use crowdsourcing to provide a degree from an open marketplace.
With online-only access to course material, it would be possible to study from any location with an internet connection. Perhaps universities — and some of them have extensive property portfolios — would create study hubs in a number of satellite cities. Maybe they could share co-workspace hubs, allowing a freer flow of knowledge and increased opportunities for collaboration between students and startup founders. Periods of study and work could alternate more frequently, avoiding the accumulation of such large personal debts.
Or, work could be anything in the gig-economy, accessed through platforms such as Upwork, Freelancer, or Fiverr, to name just three. And maybe periods of academic study and work could also include periods of acquiring more direct work skills. What used to be reserved for out-of-term time MOOC learning could become more mainstream.
There are a lot of “maybes” and “perhapses” here, and one more is that governments will need a more flexible tax structure as people flip more frequently among education, full-time employment, self-employed gig-economy activities, and temporary retirement or sabbaticals as digital nomads. A unique work and study history for each of us, held on a blockchain, could be the solution, though how quickly will that become a reality?
John Perry Barlow, a lyricist for Californian psychedelic rock band the Grateful Dead, rancher, and internet civil liberties pioneer, declared the independence of cyberspace during the 1996 World Economic Forum. His Cyberspace, free from the “Governments of the Industrial World,” was to be without borders and without the limitations of time and space, with digital technologies forming society’s primary structures.
Barlow’s declaration to world leaders, CEOs, and policymakers during a gathering of The 1 Percent in Davos, Switzerland, was later disseminated via email and on the nascent commercial web as A Declaration of the Independence of Cyberspace. He was already well-known and active with the Electronic Frontier Foundation (EFF), a nonprofit organization he co-founded in 1990 to defend Internet civil liberties. Barlow passed away in early 2018, at a time when the Internet looked very different from his original vision, yet his main point is still relevant today — governments have no real chance of controlling the Internet.
A Declaration of the Independence of Cyberspace is a crucial text to understand “Internet exceptionalism”: a way of conceptualizing the Internet as something that is inherently separate from reality, a distinct space built around rules and principles that can’t be applied elsewhere. To express this new utopian view, Barlow did not choose the metaphor of “Cyberspace” — a term invented by sci-fi author William Gibson — by chance. As scholar Aimée Hope Morrison wrote, Barlow was explicitly drafting a vision “in which revolutionary politics are assumed to be immanent in the machines that structure and enable networked communication.” Likewise, in his book From Counterculture to Cyberculture, Internet historian Fred Turner put Barlow’s Declaration in the context of the nascent computational metaphor of the ‘90s: Overthrowing bureaucracy and alienation and re-connecting with the ideal society of the Free Speech Movement and “counterculture militancy” born on the University of California, Berkeley campus in the 1960s. That view, writes Turner, was “a world in which hierarchy and bureaucracy had been replaced by the collective pursuit of enlightened self-interest.” For these reasons, Barlow’s Declaration holds its place in the history of the Internet as the connector of particular idealistic, if not mythological, stances.
University of Amsterdam new media professor Michael Stevenson finds historical value in Barlow’s declaration. “The Declaration is certainly of historical value, in that it represents something bigger,” Stevenson says. “Barlow’s grandiose tone and the idea of ‘cyberspace’ probably made a lot of tech people giggle because even then it sounded a little silly, but what Barlow and Mitch Kapor did in creating the Electronic Frontier Foundation was a very real attempt to represent a burgeoning tech community and net culture against state and corporate interests.”
When Barlow proclaimed his Declaration on February 8, 1996, it was not a normal day for the Internet, especially in the United States. The Clinton Administration had just signed the Telecommunications Act into law, the most significant change to the country’s telecommunications policy since the 1930s. The Act also included the Communications Decency Act, aimed at regulating pornographic content on the Internet. For Barlow, that sounded like a clear attempt by the government to extend its control over Cyberspace’s free lands, where Washington had “no sovereignty.” “This is a classic dynamic: any new group of influential cultural producers, from artists to filmmakers to journalists and so on, will resist what they see as ‘contaminations’ of their core values,” says Stevenson. “As such, Barlow saw the Telecommunications Act as an egregious overreach by the U.S. government in an emerging culture or social universe that in his mind was best left pure.”
Barlow’s words were built around a recurring technology topos. The birth of each new communication technology brings a recurring discourse of radical change that “sometimes gives the impression that history stutters,” as French sociologist Patrice Flichy wrote. These topoi take the shape of myths. Myths are deeply connected with technology, and with communication technologies in particular: in this context, myths are tales that express what Vincent Mosco calls the “sublime,” which, like natural wonders, promotes “a literal eruption of feeling that briefly overwhelms reason only to be recontained by it.” The web, or Barlow’s Cyberspace, is definitely not the first technology to be surrounded by such a mythological narrative: “the declaration was not unique to the history of the Internet or history in general. Utopianism and myths are stories we tell to galvanize a group of people behind particular forms of thought or action, in this case, standing up for the ‘rights’ of a broad community of digital media producers and consumers,” explains Stevenson, who recently published a paper about the discursive constructions of the web as a disruptive, unprecedented technology. “This always involves some form of fiction that aims to inspire, and Barlow’s notion of independent cyberspace was meant to do just that. Even for the people who didn’t buy the ‘cyberspace’ metaphor, it must have felt good to circulate the Declaration and feel part of some larger social universe that was standing up against powerful state actors. A particular variant of the utopianism in Barlow’s document returned with the digital rights activism of Anonymous and perhaps some of the rhetoric around Dark Web initiatives, and this stuff will return again,” he adds.
Technological myths and “sublimes” are perpetual, even when proven wrong, mostly because the metaphors on which they are constructed recur at different moments of history. In particular, these myths tend to start with the proclamation of the end of history and the birth of a new era. It was the same with “A Declaration of the Independence of Cyberspace,” which, in Barlow’s words, marked the beginning of a new era, invalidating old premises and structures of information and setting new principles. “Cultural renewal often relies on the ‘original principles,’ and there are bound to be more efforts to restore the web to what it was supposedly meant to be. There’s a thought among critics that it’s best if we are rid of such myths, but I think such myths are unavoidable, so it’s our job to engage critically but also with an understanding of their attraction and even their utility,” says Stevenson, 23 years after Barlow’s presentation in Davos.
Did the Declaration actually inspire the future of the Internet? We may ask ourselves if Barlow’s vision and that utopia ever took shape. Looking at what the Internet is today in the most common ideas and opinions, we may find ourselves lost. The unexplored territory that Barlow was declaring independence from has been colonized by a political economy, surveillance, and the “weary giants of flesh and steel.” The governments of most countries on the planet have extended their powers over Cyberspace, mastering it. What was to be a “world that all may enter without privilege or prejudice according to race, economic power, military force, or station of birth” ended up becoming the weaponized playground of an extremely limited number of companies and a de-facto surveillance state. “The web has changed drastically of course, and a naive reading of the Declaration would focus on how Barlow’s sense of a virtual world with its own ‘Social Contract’ was way off,” explains Stevenson, discussing the state of the World Wide Web at the beginning of the 2020s. “The web is central to our everyday media practices and commerce, and the internet is woven into nearly every domain of social, cultural, and political life,” he adds, “and the sense that this is a world that anyone may enter without prejudice sounds ridiculous when thinking of the continuous deluge of stories about public shaming, hate speech, cyber-bullying and cancel culture on social media. Meanwhile, the hackers and web-natives that Barlow spoke for have grown up to be CEOs of some of the most powerful companies in the world — it’s clear that our need for organizations that look out for their ‘rights’ has dwindled, and instead we need declarations of ‘social responsibilities’ on the part of the people who build our search engines, social media, and other platforms.”
The Snowden revelations in 2013 prompted a deep reality check in the public discourse around the Internet and its role in today’s politics. Borders play a part in structuring the Internet, and countries such as Russia or China are investing billions in infrastructures capable of building censorship walls around their segments of the Internet. Public discourse is more a matter of corporate policies than uncensored speech, mostly regulated by unaccountable algorithms optimized for profit over democracy. What author Shoshana Zuboff has defined as “surveillance capitalism” — though we may question the existence of any difference compared to classic capitalism — has become the de-facto structuring force of the Internet itself and potentially of any social activity. Is this what a failed utopia looks like?
Writing about the Web’s 20th anniversary, Evgeny Morozov investigated the rationale behind the Internet becoming a world of venture capital and big money. “Who would take out the trash?” asked Morozov in that article, stressing that the Declaration was originally drafted when the Web was sparsely populated, search engines were rudimentary, and social networking sites a mere idea. Still, Morozov wrote, “it’s not so obvious that John Perry Barlow’s call on governments to exit Cyberspace was a good one. In the absence of strong public institutions with oversight, corporations felt they could do what they wanted. In most cases, they just pretended these problems didn’t exist.” Those issues, though, ended up being crucial and pressing social issues. The vision of Barlow’s generation was laudable, but it was co-opted partly because it failed to foresee how easily that could happen: the very people the vision placed at its center, Barlow’s generation of Internet pioneers and the one that followed (Zuckerberg’s), worked hard to build platforms whose aim was market optimization rather than society optimization.
Still, utopias are crucial in producing “alternative images of society that puts ideological images into question,” wrote digital surveillance and politics of data scholar Lina Dencik, analyzing the current “pervasive atmosphere” in which surveillance as the organizing principle of society seems an irreversible destiny. At a time when Amazon’s Jeff Bezos declares that Silicon Valley’s firms should feel comfortable doing business with the U.S. military, there is a huge need and demand for a new utopia for the Internet, capable of inspiring the future of the Net and, consequently, that of the society we want to live in.
José Bastos is a co-founder and CEO of knok healthcare, a SaaS healthcare / insure-tech startup offering an integrated solution for remote medical consultations through a combination of AI triage, scheduling, video consultations, health records, and integration with hospitals or clinics. As Covid-19 has made virtual interactions the ‘new normal,’ Bastos shares what that means for the future of healthcare.
How is knok aiming to change people’s access to healthcare?
Our app gives people access to a doctor on their smartphones whenever they need one. They can book a video consultation through our web app with their existing medical team at their usual hospital or clinic. We are working on optimizing a doctor’s time by giving them access to data about you before a consultation. All of the admin work that comes before and after the consultation is just wasted time, and we are working on reducing it while improving the quality of data, so that the quality of the insights that the doctor gets from the consultations is higher.
Why does it matter?
Today, insurers and medical practices are compelled to offer remote solutions. They either have to adopt massive, multi-million dollar projects that fundamentally change their existing procedures and workflow, or use Skype, WhatsApp, or Microsoft Teams, without HIPAA and GDPR compliance, patient friendliness, or integration and easy access to health records and data for patients and physicians. Since our main focus is doctor-patient communication, we continuously improve the product, bearing in mind the relevant information and tools the doctor needs to deliver quality care.
Yanwu Xu, principal health architect for Baidu Health, one of China’s largest internet corporations, and one of three companies contracted by the Chinese Government to implement virtual care technologies, said that in China, patients were advised to seek a physician’s help online rather than in person after the Covid-19 pandemic first emerged in Wuhan in December. Following China’s example, the Centers for Medicare and Medicaid Services (CMS) in the US implemented new measures which will allow for more than 80 additional services to be furnished via telehealth. Has knok’s virtual user base increased since the Covid-19 pandemic began?
The volume of consultations almost quadrupled between January, which is always our strongest month, and April. It’s clear to everybody that peak Covid is probably in the past, so now things are calmer. But still, we keep generating leads, people interested in adopting our technology are reaching out to us, and we are deep diving into the UK and Italy as our next markets. So I think what this means in practical terms is people understood that something changed. And having understood that something changed, they now want to adapt their offer, expanding their medical channels from hospital or clinic visits alone to those plus a virtual channel. And I honestly think that the hurdle to digital health is much more related to the management of health companies than to patient adoption. And the thing that surprised some of our clients the most is that between 60 and 70 percent of patients who used video consultations said that they didn’t feel the experience was short on anything when compared to a physical consultation. There are some medical specialties where the doctor is supposed to touch you, like if you go to a doctor who needs to hear your lungs, they will need to put a stethoscope on you. So of course, the experience is not the same or it may not even work. However, let’s say you go to a cardiologist or an endocrinologist as a chronic patient. They will look at your ECG, they will look at some blood tests, and having looked at both, they will diagnose you and prescribe treatment as needed. In practical terms, you do not need to be physically at the doctor’s office, and the patient experience in terms of what you take away from the consultation is the same. You avoid going to the hospital, parking, and waiting for an hour, versus having an online consultation where you push a button and start.
What will the healthcare of the future look like and how is knok helping people to prepare for that?
People are already used to doing video chats with their friends and family, and doctors are used to doing this with their closest patients. What we are saying here is that we have now brought to front and center stage what people were kind of doing behind the curtains, in WhatsApp chats with their doctors. It’s now organized and structured in a safe and secure tool, with GDPR and HIPAA compliance, with scheduling and an electronic health record module, because it was born and built for healthcare.
After you’ve had a cardiologist consultation at home, will you ever want to go back to the doctor’s office if you can access the same exact doctor doing the exact same consultation without even going there? That to me is the million dollar question for healthcare systems.
I think healthcare in the future will be much more decentralized. There are a couple of things that are critical for this to work. The first thing you need is to generalize standards, and the second is to build something called a health passport, which gives you access to all of your health data wherever you go. The moment you do that, you as a patient become the owner of your own healthcare information. And the moment you become the owner of your own healthcare information, you will have your own centralized place where you can accumulate data, which will allow systems and AI to act with a much wider database. This is probably going to take five years; I hope it takes less. But honestly, I think it’s hard because the healthcare system is very slow to change. The way this will work is by empowering you as the final user with access to your own tools and the autonomy to reach out to the best practitioners anywhere and take hold of your own conditions. Using digital tools you can aggregate the information and ensure that you are always on top of whatever you have. This will have a massive impact on chronic patients more than anything else.
If you have a chronic condition today, let’s say diabetes, you prick your finger on a little device, if that device can communicate with an app that you have on your cell phone and that app can speak a standard language with some other apps in the hospital or with your doctor, then your doctor can receive readings of your sugar three times a day and can manage you. And I believe that what this means is that the importance of data will increase by a lot.
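The device-to-doctor flow Bastos describes depends on the “standard language” he mentions. As one hedged illustration, not knok’s actual implementation: HL7 FHIR is a widely used standard for exactly this kind of interoperability, and a glucose reading could be packaged as a FHIR-style “Observation” resource. The patient ID below is a placeholder, and the exact coding a real system uses would depend on its FHIR profile.

```python
# Minimal sketch: wrapping a home finger-prick glucose reading in a
# FHIR-style "Observation" JSON resource, the kind of standardized payload
# a device app could send to a hospital system. Illustrative only.
import json
from datetime import datetime, timezone

def glucose_observation(patient_id: str, mmol_per_l: float) -> dict:
    """Build a FHIR-shaped Observation dict for one blood glucose reading."""
    return {
        "resourceType": "Observation",
        "status": "final",
        "code": {
            "coding": [{
                "system": "http://loinc.org",
                "code": "15074-8",  # LOINC: Glucose [Moles/volume] in Blood
                "display": "Glucose [Moles/volume] in Blood",
            }]
        },
        "subject": {"reference": f"Patient/{patient_id}"},  # placeholder ID
        "effectiveDateTime": datetime.now(timezone.utc).isoformat(),
        "valueQuantity": {
            "value": mmol_per_l,
            "unit": "mmol/L",
            "system": "http://unitsofmeasure.org",
            "code": "mmol/L",
        },
    }

# The app on the phone would POST this JSON to the clinic's endpoint,
# where the doctor's dashboard can read it alongside earlier readings.
reading = glucose_observation("example-123", 5.8)
print(json.dumps(reading, indent=2))
```

Because both ends agree on the resource shape and the units, any compliant hospital system can interpret the reading without a custom integration, which is the point of generalizing standards.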
There are obvious limitations to virtual healthcare, but what are the benefits of moving to virtual appointments when possible, especially as we may face more global pandemics in the future?
What is happening to the world is people are adopting social behaviors in areas that are pleasurable for them and avoiding social behaviors in everything that’s unpleasant for them.
Since patients now have access to video consultations, they will be more likely to look for a doctor whenever they feel something is wrong. Sometimes they seek help only when it’s too late, due to the cumbersome administrative hurdles they might have to pass, the geographical distance, the disruption of daily life (e.g. work), or even the social awkwardness that a doctor’s office visit represents. Going to a hospital is not an enjoyable experience. It’s never an enjoyable experience. So if tools come up that remove this experience from your day-to-day life, you won’t mind. You just draw a line and you say: if it’s a nice social experience, let’s keep it physical; if it’s an unpleasant and unnecessary kind of social experience, let’s move to digital. Covid-19 has actually proven quite, quite interesting for this mindset. In a sense we’ll all be better off if this happens.