pluto

Dissolving gender barriers in tech

Sana Afouaiz is an award-winning gender expert, women’s advocate, and public speaker on global feminism and women’s issues in the MENA region. For the past five years, she has advised the United Nations, the European Commission, corporate institutions, and organizations on gender issues. In 2016, she established Womenpreneur Initiative, an organization with a community of 10,000 across 20 countries, which aims to advance women in the entrepreneurial scene, technology, innovation, and society. In recognition of her achievements, Afouaiz was named an influential woman by the World Bank. On July 17, 2020, she spoke as a role model at UNIDO’s International Online Conference “Women in Industry and Innovation,” organized in cooperation with UN Women and FAO.

You said that the cultural environment you lived in made discrimination perfectly acceptable and visible. Can you give us an example?

I grew up in Morocco, in a family of eight girls, and my sisters and I were always treated as if something was missing, and that something was a man, a brother. This really hurt my sisters and me; we felt like we were not complete, and we felt guilty because of our gender. I let all of this make me stronger and inspire my work.

Do you still experience gender-based discrimination?

I do, every day. For instance, I’m quite direct, and in the culture I grew up in it is men who are supposed to be assertive. Women are expected to be soft and sweet, and therefore they feel limited in how far they can take a discussion. This reminds me a bit of the time when Aristotle and other eminent ancient thinkers would say that women were not capable of contributing to philosophical debate. It happened at a different time, but it’s the same vision many have today.

I also face discrimination because of my age. Sometimes it is difficult for men to take me seriously when I’m looking for funds. I don’t mind people asking about my age when I am presenting a project to be sponsored, but I do mind when that is the first question asked and the second is about my marital status. I don’t think men are usually asked those questions, no matter how young they are.

In Morocco, only 5% of firms have a female top manager and only 10% of entrepreneurs are women. And in the whole MENA region, things are not any better. In your opinion, what holds female entrepreneurs back, at any level? 

Different issues come into play. And it is not the same in each country of the MENA region. In the eyes of Westerners, the MENA region is perceived as one entity, but it is far from homogenous: it encompasses many countries, each with its own history, culture, and political conditions.

If you take countries like Morocco, Lebanon, and Tunisia, you will find that women have achieved certain rights and are visible at some level; there, the lack of women’s participation in the economy is related mostly to the social atmosphere and the general mindset.

As a matter of fact, there are a great number of female STEM graduates across the whole region. Yet very few of them go on to entrepreneurial careers. How should this problem be addressed?

You can’t fix gender issues in the MENA region by merely giving women funds and teaching them how to become entrepreneurs, for instance. It has been done for years now and it just doesn’t work. Female entrepreneurs are still very few, they operate in informal markets and face constant discrimination. Those who are successful make it abroad, not in their country. Further problems are technical issues such as accessing the right human resources or the fact that the bureaucracy in the region is very complex. It’s a tricky process and in the end, female entrepreneurs get tired and give up. 

Another problem with the MENA region is women’s limited access to financing and most of it is microfinance. That’s why female-led startups are not thriving as they should. If we really want to support women, we should financially support their projects. With Womenpreneur we did training with some investors to make them understand the gender bias because sometimes it is internalized. The investors don’t feel like they are intentionally excluding women, but they do because it’s ingrained in their minds. 

What other factors may cause such imbalance?

A big issue is the access to resources and opportunities, which is a struggle for all entrepreneurs—men and women. We have Dubai, where you can find many huge investors and all the entrepreneurs want to go there to be funded and start their companies, but if you look at Arab startups you can see that they don’t last. 

The best example is Careem, the Uber of the Arab world. It was a huge success funded in Dubai and a product of the MENA region. But it was sold to Uber last year. The problem is that we don’t have a structured ecosystem that can help create innovative startups and make them last. 

Then you have the lack of a legal framework to explain and facilitate the creation of startups, companies, and entrepreneurial projects that could help boost the economy.

Can you give us more details about specific countries in the MENA region?

Some countries have a more developed approach than others. Tunisia, for instance, has a very good ecosystem; I find it dynamic, active, and very supportive.

Moreover, a very interesting initiative was started when a group made up of private and public institutions and entrepreneurs created a legal framework project explaining what different entrepreneurs need to succeed with their startups in Tunisia. It was presented to the government, and they accepted it. Today it is called the Startup Act, and it is becoming a space that provides legal information, the right resources, the right contacts, and the right business support, and I think that’s amazing.

But again, even in this space women are invisible, and female startup founders are few. Still, I believe in Tunisia’s effort to change that.

In 2019 Womenpreneur went on tour to map and visit the female entrepreneurial talents of three countries: Morocco, Tunisia, and Jordan. Did you get new ideas or find new starting-points from this journey?

It was an amazing tour; we also did a policy paper study and roadshow events in each country. We traveled to different cities, meeting female tech entrepreneurs who have developed interesting startups and companies, raised huge funds inside their countries, and have half of their offices in other countries, especially in Northern Africa and Sub-Saharan Africa.

We have also conducted surveys with many women entrepreneurs in different countries, and we had meetings with experts from the financial sector, the public sector, and the government, to really get an overview of the baseline for women in tech. Afterwards, we published a recommendations document that also offers interesting propositions to improve the ecosystem for women in the region, and I think it’s one of our tour’s highlights.

Through our tour we connected more than 2,000 people and published engaging content through our media platform, reaching out to more than 500,000 people.  

Many businesses have been disrupted by Covid-19 and you are trying to share your experience through webinars and online events. Can you tell us more about your support for women entrepreneurs from the MENA region during the pandemic?

The first thing we did was set up an online space for women to access information and understand what they could do in the short term to revive their businesses and initiatives. 

Meanwhile, we’ve been running a research study in the MENA region about the impact of Covid-19 on women at different levels: social, economic, and political. The research was run by our experts, and through it we developed different possible projects we are going to implement in the region. One of them is a program dedicated to supporting female entrepreneurs in understanding how to cope with the crisis. The program has been finalized, and we are going to start it by December, targeting women from 10 countries in the Mediterranean.

We have also launched Generation W, an online six-month acceleration program to support women who are impacted by Covid-19, to help them find a job or develop a project. We’ve seen that the pandemic has heavily impacted the job market and it has created a lot of unemployment, 80% of which is affecting women. 

We focused on providing women with high-tech skills, which will make it easier for them to find a new job. The program wraps up at the end of November, with a total of 140 activities developed with more than 20 national and international partners.

How will developing tech skills help those women through the crisis and possibly afterwards?

Coronavirus has heavily hit the economy but we’ve also seen that a lot of jobs are not necessary anymore. The digital revolution is rapidly changing our economic systems. During the outbreak, robots have been used to clean hospitals in Japan, and unfortunately, that has cost many people their jobs. But robots have also been used a lot in the medical field. 

Coronavirus is just one of the many crises we’re going to live through, and women may always be the frontline victims. That’s why we need to help these women acquire tech skills to survive, and this is something we do through advocacy and lobbying, as well as by advising international organizations. Women have been the face of Covid-19; they’ve been the ones saving people, from nurses to doctors to food providers, and it saddens me how different countries have tried to manage the crisis. When you see that there is no budget dedicated to women, even though they represent more than 50% of the population in countries across the globe, it does say a lot.

Pivot power

Failure. Failing to achieve your intent, not accomplishing your desired goal. In essence, having an idea and not being able to implement it. It’s frustrating and hard to accept, to the point that the fear of negative consequences often dampens the innovative drive of many individuals and organizations. Held back by the fear of failure.

Sometimes, however, missing a target can be a great stroke of luck because perhaps you miss the first one, but you score a bigger one right beside it, or because that mistake has repercussions that enable you to achieve something extraordinary. This must be why people sometimes let themselves go and take risks anyway, and organizations say they want to foster a corporate culture that celebrates error and failure. At least in words.

I wonder if Dharma Jeremy had the same opinion. He was born in Canada, raised in a log cabin without running water or electricity, and later became a computer enthusiast with a master’s degree in philosophy. While playing a video game, he got a crush on a girl whom he never got to meet, a circumstance that probably led him to become passionate about the phenomenon of interactions in virtual spaces.

In the early 2000s, in the wake of the dotcom collapse, he decided to found a startup to develop a game that had no real purpose, except that of helping people interact: in fact, he called it Game Neverending. The idea was inexplicably unsuccessful, but before shutting down the startup, he drew on a part of the technology he had developed and created a concept in which people exchanged boxes of photos and then commented on them. After all, this, too, was a social interaction not very different from a game with no real purpose. The idea took off, and within a matter of months it attracted the attention of Yahoo, which acquired it for a few tens of millions of dollars.

After working at Yahoo for a few years, Dharma Jeremy decided that big business was not for him: too much politics, too many delays, and too little growth. He left and decided to launch another massive game without a real purpose—only it was much nicer and better looking than his first one. This time he thought big; he raised a lot of funds from venture capitalists and put together a strong team of game animators, designers, and developers for another shot at what he had failed to do the first time.

But this second attempt didn’t work either. The game didn’t catch on, even if it did build up a small community of diehard gamers. Along the way, though, the team developed an internal chat system that changed the way they worked together, exchanging messages and documents. After all, this is also a form of social interaction that is not very different from a game that has no real purpose.

When they shut down the game project, they realized that they wouldn’t be able to use their communication tool anymore, and the fact that this displeased them made them think that maybe they had something in their hands that could capture a market. They asked their venture capitalists if they could use the remaining funds to turn their prototype into a product. They pivoted their focus to the new product and laid off 80% of their employees, though they did it in style, helping them swiftly find new jobs. After a short while, they launched the product and turned it into one of the fastest-growing corporate software applications at the time. In just a few months, the company reached a billion dollars in market value; it went public after a few years and is now worth $16 billion. Not bad for a failure.

Dharma Jeremy Butterfield changed his name to Stewart at the age of 12. Now he is a charismatic CEO, as well as a pleasant and engaging philosopher when he tells his stories. In his interviews, you can see that he has connected the dots of his experiences and that his successes result from his mistakes and failures, but not everything happened by chance. He says that he has always been guided by his passions and his sense of responsibility towards colleagues, investors, and end-users. And following these guidelines, he kept trying and changing until he found a solution.

By the way, the two failures that Butterfield turned into successes are called Flickr and Slack.

Are you a hacker?

Nowadays hackers seem to be everywhere. They could be behind you in line at the supermarket, sitting next to you on the tram, or they could even be you. Yes, that’s not a mistake: if we stick to the original meaning of the term hack, it means finding an unconventional solution to a complex problem.

We’ve all done at least one ‘hack’ in our lives, but not all of us are hackers as we generally conceive of them in our culture. More to the point, not even hackers agree with each other on what hacking is. Books and movies, however, shape our imagination and provide us with several examples of what a hacker is—or, more precisely, a partial look at what our society thinks about them. One of the most famous depictions comes from a movie that turned 25 years old in 2020: Hackers.

A techno-thriller in which young, rollerblading computer geniuses manage to ruin the enemy’s plans. Since then, Hackers has evolved into a cult favorite and seems to capture the attitude of hackers, which the movie highlights with very direct slogans such as “Hack the Planet” and “There is no right and wrong/There’s only fun and boring.”

This description of hackers certainly clashes with the hacker we’re all afraid of, usually stereotyped as the white, middle-class, overweight male kid in a hoodie, typing at lightning speed while rivers of acid-green code run incessantly down the screen, the only source of light in his dark little room. Usually, this figure is intent on committing computer crimes, destroying systems and infrastructures, and jeopardizing the security of our democracies.

News reports seem to confirm this notion of hackers as criminals: one of the most recent and consequential hacker attacks was that of Guccifer 2.0, a moniker used by Russian agents who hacked the Democratic National Committee’s systems in 2016, publishing data online and thus contributing to the destabilization of the American election. But there have also been hacker attacks with more concrete consequences, as in the case of the Stuxnet malware in 2010. In that case, hackers linked to the U.S. and Israeli governments infiltrated the computer systems of the uranium enrichment facility at Natanz, in Iran, to sabotage the centrifuges and slow down and damage the enrichment program. Because of its capabilities, Stuxnet has been called the world’s first digital weapon.

Hackers have always been sitting between these two extremes: brilliant saviours of the world or dark and heinous criminals of cyberspace. Or at least this is the lazy dichotomy we like to apply to a much more complex world that shows all the different nuances of hacker culture. 

The first references to this world can be found in the late ’50s and early ’60s at the Massachusetts Institute of Technology (MIT), where the term was associated with the Tech Model Railroad Club (TMRC): students used to have fun fiddling and messing around with electrical systems. But hacking doesn’t have a single root; in the same years, phone phreakers were already actively hacking systems, and we can consider them direct ancestors of the underground hacker. They were people interested in the workings of the phone system: they studied its architecture and tried to exploit it to route calls for their own benefit—most of the time, that meant making free phone calls.

The best definition of a hacker is suggested by Gabriella Coleman, an anthropologist and academic and one of the greatest experts on the hacking world: “A hacker is a technologist with a love for computing and a ‘hack’ is a clever technical solution arrived at through non-obvious means.” In those MIT years, students enjoyed demonstrating their technical aptitude and cleverness.

The lack of defined origins is, however, an important signal of what came next: in those years, thanks also to the widespread adoption of electronic technologies and of the digital sphere—the ARPANET project was launched in 1966 and the first computers were connected to the network in 1969—a certain type of world vision, with shared ideals, emerged.

This hacker ethic is captured in Steven Levy’s book Hackers: Heroes of the Computer Revolution. Hackers are convinced that computers can improve our lives, that all information should be free, and that, in the pursuit of knowledge, access to computers—and to anything that might teach you something about the way the world works—should be unlimited and total. They also believe that people can create art and beauty on a computer. At the same time there was a strong push toward meritocracy: “Hackers should be judged by their hacking, not bogus criteria such as degrees, age, race, or position.”

Positioning these hacker ethic principles at the center, however, risks flattening the historical differences between the movements that have emerged since then and the problems they carry with them. As Coleman recalls, this ethic is often invoked in simplified terms, “whitewashing the most fascinating ethical dimensions that flow out of computer hacking.” It also risks sweeping under the carpet some of the culture’s systemic problems, such as episodes of sexism, misogyny, sexual harassment, and rape at security conferences and hacker camps—some involving prominent figures of the hacking scene.

However, the genealogical tree of hacking history, starting from multiple roots, continues to branch out in multiple directions that sometimes flow together, sometimes ignore each other, and sometimes even end up colliding.

Hackers helped invent the field of computer security: researchers who look for vulnerabilities in systems and report them to companies through a ‘responsible disclosure’ process so that the bugs they find can be fixed. These hackers usually call themselves ‘white hats’ or ‘grey hats,’ in opposition to the ‘black hats’ who mainly carry out illegal activities.

But the differentiation doesn’t stop there, and the confusion only increases: some hackers have absorbed a more political and activist approach, and so-called ‘hacktivism’ and ‘anti-security’ were born.

First coined in 1995, the hacktivism label encompasses all those practices of resistance rooted in political ideals. We have hackers located primarily in Latin America, North America, and Europe, who have set up collectives often influenced by the political philosophy of anarchism.

Among the actions carried out by these hacktivists were the acts of ‘electronic civil disobedience’ staged to draw attention to the Zapatistas in the 1990s. Hackers built a tool called FloodNet that would flood a targeted website with traffic: a digital version of the sit-ins and mass protests that usually happen in real life, but in this case aimed at websites and online services.

At the same time, another type of hacker was asserting itself, trying to exploit the vulnerabilities of computer systems for its own ends: adopting the anti-security mantra in order to infiltrate government systems, steal data, and publish it online.

This is the birthplace of one of the most fascinating and active movements in recent history: Anonymous. In the late 2000s and early 2010s this collective of hackers inflicted attacks everywhere: against the Church of Scientology, DDoS attacks on several government sites, data leaks, and attacks against Visa and Mastercard. After a series of arrests sank the collective in 2011, Anonymous emerged once again during the recent Black Lives Matter uprisings in the USA, ready to strike.

This kind of activism exploits technical vulnerabilities in order to exfiltrate documents in the public interest and shame governments, corporations and public figures.

Nowadays, from the wrongfully depicted darkness of their rooms, hackers have even reached political positions: Beto O’Rourke, the Texas Democrat who dropped out of the presidential primary in late 2019, was a member of a famous group of hacktivists, the Cult of the Dead Cow (cDc). And one of the cDc slogans shows how relegating the history of hacking solely to the sphere of technology is itself limiting: “Global domination through media saturation.” They weren’t simply hacking software systems; they were also hacking media narratives.

And counteracting media narratives is still a constant struggle for hackers today. Some have turned into full-time security specialists working for big corporations, others are immersed in the academic world, while others still run loose, stealing personal data and credit card information and selling them online for profit.

Despite this multifaceted hacking world, a large part of the population still considers hackers simply criminals, and it seems that not much has changed since 1986, when an essay by a hacker known as The Mentor, titled The Conscience of a Hacker—universally known as The Hacker Manifesto—gave voice to that frustration:

“We explore… and you call us criminals. We seek knowledge… and you call us criminals. We exist without skin color, without nationality, without religious bias… and you call us criminals. […] Yes, I am a criminal. My crime is that of curiosity. My crime is that of judging people by what they say and think, not what they look like. My crime is that of outsmarting you, something that you will never forgive me for. I am a hacker, and this is my manifesto.”

Designing knowledge

Jeffrey Schnapp’s face fluctuates through pixels over Milan’s overloaded internet connection. From Woodstock, Vermont, where he has retreated as the coronavirus pandemic spreads through the United States, he looks into the frame of the video-call: “What we are living through right now will redefine how we interact with others over the longer term,” he says. “The virtualization of labor, communication, forms of entertainment; the substitution of physical proximity with on-screen intimacy. They are tendencies that have existed for some time now but are now accelerating so much that they may make our recent past unrecognizable. Even after the pandemic subsides, they are destined to become part of a new normal.”

Listening to Schnapp, founder and faculty director of metaLAB at Harvard University and faculty co-director of Harvard’s Berkman Klein Center for Internet and Society, as he discusses the effects of screens on our relationships, the rapport between virtual and real life, and the definition of value in our day-to-day experience, is itself an alienating experience.

Resistant to the idea that technology is ‘external’ to culture, or the uncontested dominion of technicians and developers instilling their characteristics into the software, apps, and social networks we feed on every day, Schnapp has taken an original approach to the development of digital culture, and over the last 20 years has contributed to the flourishing of a new field of study: digital humanities—the human sciences deeply enmeshed in the never-neutral spaces of technology. The Digital Humanities Manifesto came into being in 2008, while Schnapp was commuting between San Francisco and Los Angeles. It started as a provocation (one that would later be collaboratively expanded and reauthored) that sought to examine the limits of the academic disciplines and the ‘orthodoxies’ of university departments competing amongst themselves. It tried to imagine alternative institutional containers like “humanities laboratories,” inspired by the laboratories of the avant-garde but also, in part, by such structures as medieval scriptoria and thought laboratories like the one founded by Benedetto da Norcia at the Montecassino monastery. “Special spaces,” Schnapp explains, “because they were capable of combining research, study, and contemplation, as well as functions that we associate today with artistic study: hackerspaces, chemistry laboratories, and farms, rather than publishing houses. They were places of meeting, learning, and collaborative manufacture.” The medieval reference shouldn’t be surprising: Schnapp was trained as a Romance philologist and is the author of a significant corpus of writings on the Middle Ages, even if his circle of collaborators today tends more towards technologists, designers, artists, and architects.

“In 1999, Stanford invited me to develop a visionary initiative in the arts and humanities,” he says. “The challenge was to build bridges between the arts and humanities and the technological revolution unfolding both on the Stanford campus and in the surrounding Silicon Valley. They were intrigued by the fact that back in 1984 I had directed the first database project funded by the National Endowment for the Humanities: The Dartmouth Dante Project, a database making it possible to explore, verse by verse, seven centuries of commentary on the Divine Comedy, from the words of Jacopo Alighieri (Dante’s son [1321]) to those of Anna Maria Chiavacci Leonardi (1997).”

Daniel Stier’s project, which became a book, Ways of Knowing, began accidentally, when he photographed a research laboratory and “felt like he was visiting an artist’s studio.” He notes that both involve the same level of obsession and specialization and “try to work out where the order in chaos lies.” Stier presents the machines and workspaces of small-scale scientific research facilities out of context, asking: is it art or science? Through his lens, these workspaces become artworks in and of themselves.

The result was the Stanford Humanities Lab (1999-2009), which brought together models of humanistic inquiry and technology innovation through a portfolio of projects that encompassed everything from performance-based pedagogies for the teaching of ancient philosophy to the design of museum interactives to experiments with knowledge production in virtual worlds. In 2009, Schnapp left Stanford and two years later became the founder and faculty director of metaLAB at Harvard, which he guides to this day. 

“metaLAB is an experimental platform committed to the creation of new forms for cultural communication, critical practice, and production of scientific knowledge,” Schnapp explains. “One of our main lines of interest is that of re-imagining libraries, archives, and museums for the 21st century. Also among our fields of exploration are the critical and creative use of artificial intelligence in the cultural field, database design and storytelling, data visualization, forms of pedagogy connecting mind to hand, as well as experimental publishing (which includes both print and digital publishing). Given that metaLAB is part of Harvard’s Berkman Klein Center for Internet and Society, we tend to think of ourselves not just as catalysts for change within the academy but as agents of change beyond the university’s walls. I sometimes refer to our work as ‘knowledge design’ in order to suggest the existence of a greater horizon of ambitions than that usually implied by ‘digital humanities’ or ‘civic humanities.’”

In addition to his influential corpus of print-based cultural historical work in various fields medieval to modern, Schnapp is also engaged in various forms of experimental scholarship. “Instead of concentrating on a single text or a single author or marshalling such tried-and-true humanities methodologies as close reading or observation, I have experimented with an inverted approach and asked myself: ‘How can one analyze an entire digital corpus or collection, even one made up of hundreds of thousands of objects? What can be extracted from the entire archive of an author or group of authors or generation?’” he explains. “Such corpora represent vastly expanded universes, universes in which the data can prove either disappointing or illuminating, the analytical tools either dense or revelatory, depending upon the research questions that one poses, the analytical tools that are employed, and the architecture and quality of the data itself. Data analytical tools are usually at their best when it comes to identifying patterns and constants, but, of course, the real interest of a collection may lie in the anomalies which are far harder to capture. Digital methodologies offer no guarantees in and of themselves; they need to be applied with rigor and a nuanced understanding of both their powers and limitations. They can help make a route clear, or the opposite. This is something that I have learned through practice and by making (and, especially, learning from) mistakes.”
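As a purely illustrative aside (a minimal sketch written for this article, not metaLAB’s actual tooling), the corpus-level reasoning Schnapp describes can be made concrete in a few lines of code: compute simple features for every document in a collection, then flag the statistical outliers, the kind of anomalies that pattern-hunting tools tend to overlook. The folder name, the features, and the threshold below are all assumptions made for the example.

```python
# Toy sketch of corpus-scale analysis: coarse per-document features,
# then flag outliers that deviate strongly from the corpus norm.
from pathlib import Path
from statistics import mean, stdev

def document_features(text: str) -> dict:
    """Very coarse features: document length and vocabulary richness."""
    words = text.lower().split()
    return {
        "n_words": len(words),
        "vocab_richness": len(set(words)) / len(words) if words else 0.0,
    }

def flag_anomalies(corpus_dir: str, z_threshold: float = 3.0) -> list:
    """Return documents whose vocabulary richness is a statistical outlier."""
    docs = {p.name: document_features(p.read_text(errors="ignore"))
            for p in Path(corpus_dir).glob("*.txt")}
    values = [f["vocab_richness"] for f in docs.values()]
    if len(values) < 2:
        return []
    mu, sigma = mean(values), stdev(values)
    return [name for name, f in docs.items()
            if sigma and abs(f["vocab_richness"] - mu) / sigma > z_threshold]

if __name__ == "__main__":
    # Hypothetical corpus directory; any folder of plain-text files would do.
    print(flag_anomalies("corpus/"))
```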

He continues: “As might well be expected, I haven’t always been on target with my predictions as regards viable technologies or digital media for purposes of reimagining humanistic research. At the beginning of the 2000s, for example, at the Stanford Humanities Lab, we invested a great deal in 3D virtual worlds as sites for building extensions of exhibitions, staging events, and documenting research in progress. It was the heyday of Second Life, there was a lot of attention being paid to this sector. We built some brilliant and innovative content in Second Life, even hosting the world’s first live film premiere in a virtual world: a documentary by the artist Lynn Hershman Leeson simultaneously shown at Sundance and on our island within the reconstruction of one of Hershman’s site-specific artworks from the early 1970s (Dante Hotel). This proved a big success. But there would be failures as well. This happens. You have to take risks. The academic world tends to be averse to risk but at the Stanford Humanities Lab or, for that matter, at metaLAB we are anything but risk averse.” Failures provide a basis for future successes.

From this transdisciplinary background, Schnapp is now looking at the cultural and social challenges that the pandemic has accelerated. Above all, the necessity to radically rethink another facet of our everyday experience: the one stuck in the screens that separate us from (and connect us to) one another as we aim to contain the virus. “metaLAB is a collection of projects-in-progress; we always work in more than one area at the same time, often exploring the relationships between the real and the virtual,” Schnapp says. “Our research typically involves the creative intermingling of the physical and digital domains, which is to say that we assume a critical posture with respect to the normativity of ‘screen culture.’ Our aim is to reset the foundations of the way we use screens; to emphasize the multisensory aspects of cultural experience and communication even as we work with digital tools and screen-based devices.”

“If the pandemic has reinforced humanity’s reliance upon networks and screens, it has done so in the wake of a long prior history of virtualizations of the social. First with letters transmitted via postal systems, then telegraphs and telephones, then radios (whether one- or two-way), then with televisions, humanity has gradually equipped itself with infrastructures that allow people to connect without making direct physical contact. With the intensified importance recently assumed by videoconferencing platforms like Zoom, Meet, Teams, and Jitsi, we have reached a new plateau that is sure to have an impact on people and institutions, one that will endure well after the pandemic has run its course.”

“This change doesn’t only concern relationships fractured due to social distancing, but also didactics and teaching, a sector that, as a professor, particularly interests me. Most of the content offered today via online and video conference platforms is still very conventional. We must ask ourselves how to enhance online lessons, how to make them more medium-specific in character. Here too, we are only at the beginning, although there are already interesting trends to observe. I am thinking, for instance, about the renewed role of rhetoric on-line and on-screen.”

“For well over a century, the ‘gold standard’ for measuring the labor of scholars in humanistic fields has been the publication of monographs and specialized essays: print artifacts accompanied by a dense apparatus of notes that acknowledge and refer to previous arguments,” Schnapp explains. “If someone had told a nineteenth century luminary that this would become the defining proof of his or her intellectual value, he or she would have been surprised. In prior eras, the recognition of intellectual and scholarly value came just as much through lecturing, the ability to address a public (expert and not) and drawing this public in, and the brilliance and even wit employed in debating in front of fellow scholars and students. Ferdinand de Saussure, for instance, never wrote up his Course in General Linguistics even if he taught it for multiple generations; the surviving text is little more than the aggregate of his students’ lecture notes. And, even the mere act of capturing learned oratorical performances on YouTube can give rise to unusual success stories like that of Alessandro Barbero. I have seen thousands of people sharing his lessons from 10 years ago at the Festival della Mente in Sarzana. In this way, his lessons live on in the mind of an audience that is highly unlikely to ever pick up one of his specialized publications.”

But the future will not only be housed within the world of screens. “Paradoxically, the bombardment of screen-based experiences that we are experiencing today, will have the effect of making us miss physical presence even more,” Schnapp concludes. “The ubiquity and normativity of screen culture, on the one side, and the counter-pressure of all those movements asking us to switch off our phones to reconnect with the here-and-now, on the other, will enter into an even stronger dialectical struggle. As much as I don’t like the term, I think “glocalization” will be the territory of the future: That is, a strong tendency toward being in contact with what we have around us locally, even hyperlocally, while at the very same time we are absorbed ever deeper into the web of global networks and information, a web that is 100% agnostic with respect to physical location. What may perhaps disappear is what lies in the middle.”

Digital contagion

The Covid-19 pandemic stopped the world, causing thousands of deaths around the globe and calling into question the structures of our societies, economies, and cultures. Following the quarantine requirements, human sociality progressively moved online. As a consequence, digital infrastructures were under severe stress for a few weeks, following a massive increase in data consumption and connectivity demand. More people online for more time also meant more chances for cybercriminals to spread malware and other viruses exploiting code vulnerabilities and human weaknesses. In the face of this full-force technical gale, computer security expert Mikko Hyppönen tweeted out a warning to internet criminals: “Public message to ransomware gangs: Stay the F away from medical organizations. If you target hospital computer systems during the pandemic, we will use all of our resources to hunt you down.”

What follows is a conversation in lockdown, in March 2020, between Philip Di Salvo—an academic and journalist who covers surveillance, hacks, and leaks—and Salvatore Vitale—a visual artist who works on security imagery and the politics of data, who recently infected a computer for art’s sake. 

Philip Di Salvo: In these first days of quarantine in Northern Italy, I was thinking about how frequently I used the term “virus” in a technological context, compared to a medical, biological one. It strikes me now to see the term regain such a crucial meaning in the biological sense.

Salvatore Vitale: I’ve been thinking about it a lot too during the past few days. We are all witnessing the limits of our political and economic systems, and their social impact somehow reflects how we, as humans, aren’t fully aware of the way those systems work. This brings me back to high school, when, during biology class, my teacher was trying to hold the attention of a bunch of young students while explaining how the human body and its immune system work: we all know we have one, we all, more or less, know the parts that constitute it, but many of us aren’t fully aware of its functioning. Can you find some similarities if we compare it to technology and safety online? Technological apparatuses, although cultural objects, go far beyond our understanding of their functioning. I see some similarities with the recent events concerning the pandemic, caused by an unknown enemy.

As I watch cases of Covid-19 increase in my region, I’m constantly thinking about how easily we spread computer viruses or malware, sometimes without even noticing it. We have all received at least one email from a random contact with a dodgy link or attachment. That reminds me of asymptomatic people spreading Covid-19 without any awareness or chance of avoiding it. Have you ever worked on these issues?

I recently read an article about the increase in cyber attacks due to Covid-19. As more and more people are experiencing quarantine, online activity becomes a primary source of information and entertainment, as well as a tool to pursue social and professional interactions. This massive online presence triggers a series of criminal activities against a huge number of potential victims. IT security has a lot to do with human behavior, a topic which is studied in psychology. Back in 2018, talking about psychology and IT security at the festival Transmediale, Stefan Schumacher, president of the Magdeburg Institute for Security Research and editor of the Magdeburg Journal for Security Research, addressed some questions related to hand washing and disinfection: everyone knows how to wash their hands, but many don’t know how to do it properly. That’s an interesting fact, as recently we’ve been bombarded by tutorials on “How to wash your hands,” promoted by governments, celebrities, and influencers in an attempt to educate the population to avoid the spread of Covid-19. There is, indeed, a similarity between hand washing and IT security, as both actions imply a certain level of self-awareness and perception of personal expertise, which inevitably leads to decision-making. And decision-making is affected by experiences and individual behaviors. Therefore, psychology plays a major role in the study of these phenomena. Let’s take as an example the use of passwords. Users don’t perceive a direct threat when they are asked to set up their passwords. The majority of them use weak passwords, often because they don’t consider IT security relevant. This has a lot to do with individual perception and acceptance of risk. When something (or someone) appears to be abstract, or—as in the case of computer malware or a biological virus—difficult to comprehend, risk is not perceived; therefore, security measures for prevention will be weak. The complexity of cybernetic systems leads to various collateral and/or unintended effects on socio- and political-technological levels. However, these modulations, and thereby the relation between the modulator and the modulated, are rarely fully transparent. This leads to action and reaction patterns with delayed or obscured cause-and-effect mechanisms, often resulting in a black box for lay users. This logic reflects the internet as such but, as we have seen, also both the computing of security and the securing of computing. Actions and non-actions of users, super-users, bots, and robots in connection with the networked world require a regime of policing and securitization. Starting from these assumptions and the basic question, “what does malware look like?”, I worked on The Reservoir, an installation used as a trigger to experience the non-linear cause-and-effect relationships that occur while browsing the internet. By interacting with a sensor field in the sound installation, the audience disturbs and modulates an audio track, while a real-time infection of a Macintosh-based virtual machine connected to the internet triggers a visual simulation of human online activities and malware responses. Photography, sound, video, and interactions work together to underline and evoke the construction of a certain kind of awareness concerning safety in cyberspace.

In information security, it is widely accepted that the weakest link in a system is usually human behavior. For instance, you can use the best state-of-the-art encryption technology and still jeopardize your security by doing something banal outside of the internet. Also, most hacking is more social engineering than technological expertise. I thought about it again the other day when I saw a tweet from a white-hat hacker warning that in a lot of pictures of remote-work setups posted on social media it was possible to spot passwords handwritten on Post-its. It is always fascinating to see how much humans tend to think about technology as if it existed in isolation from other human, physical, or even biological factors. But tell me more about the project: what did you find out?

It is worth mentioning that as of yet there is no official research devoted to the visualization of cyberspace as a whole, though the researcher and academic Myriam Dunn Cavelty has attempted to specifically trace the visualization of cyber threats in visual culture through the analysis of movies and TV series. Ultimately, visual culture remains the only site that influences how the digital is read and made readable. Within it we can observe a rapidly growing interest in the understanding and representation of the digital world we live in. A long list of blockbuster movies, for instance, deals with the representation of the intangible, which each time is presented and represented in a more or less physical, more or less ephemeral, futuristic, or post-apocalyptic way. This is especially true in the realm of science fiction. Hyperreality plays a role here. The perception of the digital is often channeled into a series of factors that make its specificity explicit. However, the real is increasingly imbued with digital elements; therefore, it becomes increasingly difficult to make a clear distinction. Hito Steyerl argues that the “internet is dead” because it crossed borders and became too real. The world we live in is shaped by the internet, and the internet shapes the world we live in. It is actually a good exercise to stop for a moment and notice how every single aspect of our life is regulated by images, screens, 3D models, videos, devices. Indeed, this is nothing new, and many words have been shared about and around this topic. But Steyerl takes it to another level. She says: “Data, sounds, and images are now routinely transitioning beyond screens into a different state of matter. They surpass the boundaries of data channels and manifest materially. They incarnate as riots or products, as lens flares, high-rises, or pixelated tanks. Images become unplugged and unhinged and start crowding off-screen space. They invade cities, transforming spaces into sites, and reality into realty.” How can we blame her? The subtle line that separates what is digital from what is physical triggers a whole series of behaviors and reactions, which inevitably lead to situations such as the one you mentioned in your passwords example. However, as I was mentioning earlier, I witnessed a big gap between reality and representation. Our understanding of the digital is mostly based on patterns coming from a speculative process. The digital as such is highly abstract; therefore, it becomes difficult to visualize its functioning. When I had the occasion to collaborate with The Reporting and Analysis Centre for Information Assurance (MELANI), I immediately realized how much this problem was also present in the work of those who produce and ensure IT security. In this sense, metaphors and allegorical representations of subjects are used, which often are far from providing exhaustive resources that grant access to wider audiences. I started, then, to wonder how to get rid of the limitations brought by the use of such a representational medium as photography, embracing different points of view, allowing it to play on an experiential level, but still underlining a visual narrative. Indeed, there are several examples in this sense, especially if we look back at internet art in the ’90s and early ’00s, as a precursor to internet aesthetics such as ASCII art—which is still used in some cases to design the visual look of software such as malware.
In my installation, therefore, I put together those elements, creating a narrative which underlines both the functioning and the aesthetic of malware (a quite visual ransomware called Petya, to be specific), relying on the viewer’s individual experience to design a speculative process filling the gap between understanding and representation.

The malware that has most attracted my attention is Mirai, which made the news in 2016. I’ve been fascinated with it ever since. The name means “future” in Japanese, and the software itself has been at the core of one of the most widespread cyber attacks of recent times. Hackers used it to infect an army of products: cameras, printers, coffee machines, and other items that are connected to the internet for no serious reason. The malware created an enormous botnet of “zombie” devices, which were used to launch various denial-of-service attacks against websites and web infrastructure, such as the DNS provider Dyn. Human users had no idea what was going on with their devices, but they were unwittingly helping to almost shut down the internet. I can’t really think of anything more similar to the Covid-19 pandemic.

Internet of things… the not-so-new frontier for hackers. You make a good point here, as the expansion of internet services is also something to consider during the Covid-19 crisis. Suddenly, we are aware of the fact that the network isn’t unlimited and, as with any kind of infrastructure, it relies on limited resources. As previously said, we can definitely trace a correlation between the spread of a biological virus and the increase in cyber attacks. A major part of the world population is massively using internet services, the infrastructure is under pressure, and user behaviors shift to patterns that facilitate the spread of digital viruses. Since its very beginning, and despite its borderless promises, internet logic has mostly revolved around groups and closed dynamics. Therefore, in the context we’re discussing here, the concept of community plays a role. Community building is, indeed, one of the main goals of any online service, both as a marketing and as a communication strategy. This became even more visible with the rise of Web 2.0 and the new dynamics introduced with the development of participatory content fostering bottom-up engagement strategies and, consequently, community empowerment. Recently, I read about an interesting study—by Laurent Hébert-Dufresne, Samuel V. Scarpino, and Jean-Gabriel Young, published in Nature Physics—aiming to demonstrate how complex contagions (such as political ideas, fake news, and new technologies) are spread via a process of social reinforcement, while, on the contrary, biological contagions are thought to spread as simple contagions (where the infection is not directly related to the social context in which it happens). They also mention another study on the spread of memes within and across communities, demonstrating how “the spread within highly clustered communities is enhanced, while diffusion across communities is hampered.” Hence, contagions benefit from network clustering. This was also said by a Google IT security expert whom I met while working on my project. Talking about user behaviors and policies to avoid the spread of digital attacks, they underlined how the company is mainly working on bottom-up strategies devoted to educating users to recognize threats and foster individual awareness within their communities.
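To make the contrast concrete, here is a toy simulation written purely for illustration (it is not the model used in the Nature Physics study, and every parameter is an arbitrary assumption). It uses the networkx library to compare a simple contagion, where each contact can independently transmit, with a complex one, where a node adopts only after a threshold of neighbors has already adopted. On a tightly clustered graph the complex contagion takes hold within the seed community but struggles to cross into other cliques; on a random graph of the same size it barely spreads at all.

```python
# Toy comparison of simple vs. complex contagion on clustered vs. random graphs.
import random
import networkx as nx

def spread(graph, seeds, steps=30, mode="simple", p=0.05, threshold=2):
    """Run a crude step-by-step contagion and return the final number infected."""
    infected = set(seeds)
    for _ in range(steps):
        new = set()
        for node in graph.nodes:
            if node in infected:
                continue
            exposed = sum(1 for nb in graph.neighbors(node) if nb in infected)
            if mode == "simple":
                # Each infected neighbor gives an independent chance of transmission.
                if any(random.random() < p for _ in range(exposed)):
                    new.add(node)
            else:
                # Complex contagion: reinforcement from several neighbors is required.
                if exposed >= threshold:
                    new.add(node)
        if not new:
            break
        infected |= new
    return len(infected)

if __name__ == "__main__":
    random.seed(1)
    clustered = nx.connected_caveman_graph(20, 6)   # tightly knit communities, 120 nodes
    sparse = nx.erdos_renyi_graph(120, 0.04)        # same size, little clustering
    seeds = [0, 1, 2]
    for name, g in [("clustered", clustered), ("random", sparse)]:
        print(name,
              "simple:", spread(g, seeds, mode="simple"),
              "complex:", spread(g, seeds, mode="complex"))
```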

My university inbox was recently targeted by a phishing attack coming from a compromised account related to an organization that I’ve been in touch with. The text tried to persuade me to download an “important” text file. The file was called “safety measures in regards to Covid-19.”

Closing the circle! I bet you downloaded it. Jokes aside, I am still fascinated by how phishing techniques somehow maintain this old-fashioned nature. Between .txt files, stock photos of self-styled white-collar types impersonating CEOs of big and famous companies and institutions, improbable wins, and requests for information, the question remains the same: “Who’s going to trust it?” According to KnowBe4, one of the world’s largest security awareness training and simulated phishing platforms, 91% of cyberattacks begin with phishing emails. However, in some cases we do witness successful cybersecurity awareness campaigns and, suddenly, many users seem to understand some of the dynamics of popular attacks and start to protect themselves. It is very common, for instance, to see laptops with webcams covered—sometimes in a creative way—by any kind of sticker, Post-it, colorful tape, and so on. This makes me think that, perhaps, when the risk threatens the personal sphere in a more or less visible way, users are more inclined to adopt defense strategies. Of course, there are many kinds of cyber threats, just as, to stick to the parallelism we are discussing, there are many different infectious agents. But among the most effective ones we can definitely mention the Zero-Day: a bug in a system unknown to its developers that is targeted for attacks. It is called Zero-Day because, once the vulnerability is discovered, the developer has had zero days to fix it. In a way—and to play with analogies—it makes me think about the concept of patient zero: the sooner you find them, the faster you can find out how an epidemic spread and develop measures to contain it.
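As a rough, hypothetical sketch of the kind of checks that awareness training encourages users to internalize (not a description of KnowBe4’s platform or of any real anti-phishing product), the snippet below collects a few simple red flags from a message: a reply-to domain that differs from the sender’s, urgency phrases in the text, and risky attachment types. The example message at the bottom is invented, loosely modeled on the episode described above.

```python
# Simple, illustrative phishing red-flag heuristics; thresholds and lists are assumptions.
RISKY_EXTENSIONS = {".exe", ".js", ".vbs", ".scr", ".docm", ".zip"}
URGENT_PHRASES = ("verify your account", "urgent", "payment overdue", "click immediately")

def phishing_indicators(sender, reply_to, subject, body, attachments):
    """Collect basic red flags from an email's headers, text, and attachments."""
    flags = []
    if reply_to and reply_to.split("@")[-1] != sender.split("@")[-1]:
        flags.append("reply-to domain differs from sender domain")
    text = (subject + " " + body).lower()
    flags += [f"urgency cue: '{p}'" for p in URGENT_PHRASES if p in text]
    for name in attachments:
        if any(name.lower().endswith(ext) for ext in RISKY_EXTENSIONS):
            flags.append(f"risky attachment type: {name}")
    return flags

if __name__ == "__main__":
    # Hypothetical message, loosely inspired by the phishing attempt described above.
    print(phishing_indicators(
        sender="contact@known-partner.org",
        reply_to="helpdesk@unrelated-host.net",
        subject="Safety measures in regards to Covid-19",
        body="Please click immediately to download the important file.",
        attachments=["safety_measures.zip"],
    ))
```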

Nothing is real

We live in an era where conspiracy theorists talk about human-made viruses launched into the air and government plots to seize control of our lives. But what if they are onto something? What if the world that we are living in is not a globe? What if we were created by an intelligent being, instead of through a process of biological evolution? What if, instead of embracing technology and its innovations, we feared it like a villain? This moment in time, one unlike any that many of us have ever lived through, is a pertinent one in which to question our human experience, and to wonder what will happen after we return to our post-Covid-19 lives outside of our homes. Will we live in a reality that we can’t even imagine yet? What if we end up questioning everything that we knew before? For some people, alternative thinking is reality.

Mark Sargent / What if the world as we know it does not exist? 

On February 10, 2015, at 3 a.m., Mark Sargent had a revelation: “I don’t think it’s a globe anymore. I think we are in a building. We’re in a box. We’re in the friggin’ Matrix, or a big sound stage, The Truman Show.”

Sargent, a former software engineer and professional videogamer, is a self-proclaimed conspiracy theorist. In 2015, the American became interested in studying these alternative realities and proving them wrong, until he came to one that baffled him. 

“It turned into this big snowball where I’m staring at the globe, turning it over and over, again and again, and going, ok, how would I prove the globe in a court of law? And I could never come up with a satisfactory case for the globe,” says Sargent. “Apparently, I had unearthed something by accident that should not have been happening, and it just kept getting weirder and weirder, and bigger and bigger, and now we have conferences in multiple countries.”

Sargent is the protagonist of Behind the Curve, the 2018 documentary about ‘flat earthers,’ and shares his truth at Flat Earth conferences around the globe. He has produced a series of videos on a YouTube channel with 85,000 subscribers, explaining his ‘Flat Earth clues,’ which are the basis of his theory about the flat ‘stage’ that we are living on—its southern outer limit, he explained, is defined by a barrier in Antarctica. Sargent points to the Antarctic Treaty, signed in 1959, as evidence of this barrier.

As he says: “Antarctica is locked-down, you can’t do anything there, it doesn’t matter how rich your country is.” 

Sargent also claims he’s found proof that outer space does not exist: So why isn’t the sky black? 

“If you had the ability to create a sound stage, a giant studio, let’s say you could make one that was very, very large, you could do just about anything you wanted on the inside of it, you know you could project stars and planets on the ceiling,” says Sargent. He adds that the sky, which is actually a big clock, also decorates the fabricated world that we are living in, and provides us with a (false) sense of inspiration, meaning, and hope.

But the big question is, who built the sound stage?

Unfortunately, Sargent doesn’t know, but he has some ideas: “One option is a giant civilization that is much older and much more powerful than ourselves, or some sort of God—and at this point, I’m not going to start naming names. But whoever it was, it absolutely wasn’t us. We are talking about engineering on a scale that is way beyond us.”

You could say Sargent is referring to a being who is more intelligent than we are. But who is this “some sort of God”? Although he says he believes in God, he doesn’t definitively call out the creator of our reality. 

Randy Guliuzza, a former physician and civil engineer with a master’s degree in public health from Harvard University, and the Institute for Creation Research, where he works as a researcher and representative, do, however, have a definitive answer about who created our world.

On an imaginary archipelago on Europe’s political periphery, isolation, social experimentation, and reconnection with nature infuse micro-societies. Archipelago, a look at the lifestyles of ecocommunities, eco-villages, and spiritual communities, started as Fabrizio Bilello’s research project about self-sustaining groups and progressively extended its focus to those embracing free sexuality, spirituality, and sustainability. Archipelago functions as a mosaic in which each portrayed community contributes to a countercultural vision of society. In Chapter 1, the biotopes, the communities of Tinker’s Bubble and Yorkley Court Community Farm in the United Kingdom and Tamera in Portugal live with the duality of bonding with and interdependence on nature.

Randy Guliuzza / What if humans were created by an intelligent being?

Proponents of Creation Science, or Creationism, say that our planet is so complex that the only way to explain its creation is through an intelligent being who followed principles of engineering. Creationists, unlike Sargent, do identify the creator as God: Jesus Christ.

As Guliuzza explains, according to Creation Science, we were made as fully-formed humans: “The biblical account is that Adam and Eve were created and all human beings were descendants from Adam and Eve. There were two of them and they were fully human. And we would say there’s an equivalent of an Adam and Eve for dogs. There was an equivalent of an Adam and Eve for cats. There was an equivalent of the original common ancestor for all the different types of creatures that we see today.” 

Guliuzza is an apt person to talk about Creation Science, because before he became a Creationist 30 years ago (he’s in his 60s now), he was a Darwinist. As a representative for the Institute, his area of expertise is explaining how organisms adapt, from a design and engineering point of view, such as noting that “all of our biological processes can be explained by engineering principles and the more they are consistent with engineering principles the more evidence there is to conclude that they were in fact engineered.” Everything we see on our planet was designed with a clear intention and purpose, by God, Guliuzza goes on to explain. 

He asks: “How do you get past the fact that organisms seem to operate purposefully? They behave purposefully. When I look at a heart, the heart seems to have a purpose: it’s pumping blood. The vessels seem to have a purpose, and all of this together looks like a circulatory system, and that has a purpose. How do you explain this design without a designer? Our explanation would be: yes, organisms do have an incredible amount of information, and in all of human experience we have only seen information come from an intelligent source.”

In Creation Science, the concept of purpose is not only about how our biological functions were designed; it is also about our reason for living. Guliuzza explains that by understanding Creationism, you can also grasp why we are here on earth.

“We are not adrift on a planet, spinning about with no reason for being here, through some purposeless origin of life in some warm little pond, with no goal in mind where it just happened that you stumble upon human beings. From what we can see, life looks pretty incredibly designed, and the more we dig down scientifically, the more complexity and unity we find. And the broader we look, the more we see how organisms work together in this incredible system where there is a benefit to each and every one of them.”

Guliuzza notes that with purpose could come greater meaning in our lives. Could that purpose be as simple as concentrating on the happiness and joy of spending time with others, without distractions?

Valle de Sensaciones, in the Alpujarra region on the southern slopes of the Sierra Nevada in Andalucia, Spain, is an ecovillage laboratory focused on free love and sexuality.

LaVern Schlabach / What if technology is evil? 

Arthur, Illinois, is an Amish community that has remained relatively unchanged since its founding more than 150 years ago. LaVern Schlabach lives there, less than three miles from the home where he was born 58 years ago. Like the other 900 or so households in the community, Schlabach’s has no television, internet, or cell phones. He’s used the internet a few times, but says “for me to find my way it would take all day.” Schlabach and his community, like other Amish, are Christians, and they generally shun technology to keep a “safety net” around their culture, while at the same time contributing to the greater good when they can, like a community in Sugarcreek, Ohio, which used its sewing and crafting skills to create hundreds of masks and face shields during the Covid-19 pandemic.

As Schlabach says, Amish culture is based around the family circle and keeping it intact, teaching children how to bake, care for animals, and be self-sufficient, and above all making sure they are loved.

“Our salvation is not based on being Amish, it is based on accepting the blood of Jesus Christ and being forgiven for our sins, but we also recognize the safety of our culture,” says Schlabach. “If we don’t take care of that, our grandchildren won’t have that opportunity.”

In the last 20 years, some technology has been introduced in Arthur to support local businesses: more landlines, portable phones for construction workers, and perhaps a landline shared among a few families, Schlabach says. Schlabach himself has access to a landline, which he uses for the custom furniture business he started in 1988 and which now employs 50 people, but that phone is in a separate building. He can also receive and send emails, but the messages are delivered to him from a third party via fax. It’s possible that as technology evolves, a ‘council of elders,’ the community’s decision-makers, might introduce more changes, he says. Maybe even the internet. He adds: “The world around us keeps changing, we are not trying to keep up with that, but we are still trying to communicate so we can do business.”

Schlabach has never known another way and says that to allow technology in would take away from the “family circle” focus on each other: his wife of 40 years, six children, and 33 grandchildren.

“It’s not beneficial to the family circle—it creates individualism instead of togetherness. A cell phone used for business and business only, that is one thing,” says Schlabach. “In the evening, we like to get together to play games and do something as a family. Just sit down and visit. I don’t know how we would have time for that if we had smartphones and iPhones and all that. It’s quiet. There is no interruption from the outside.”

Schlabach says keeping the circle intact is particularly important for Arthur’s young people: “For a young person there is so much bad information available at the fingertip. TVs were big, now, today, a young person can reach out and put something in their pocket that is worse than any TV ever was.”

He’s seen the negative effects of technology while outside of Arthur (he only rides a bicycle or drives a horse and buggy, but will use a shuttle service if he needs to travel beyond the village), for example at a business lunch, where he noticed a group of people at the next table who were more focused on their phones than on each other. He’s also seen what happens outside of the circle through a brother who decided to leave the community.

“I think my brother realizes now raising children outside of the circle is much different than how he was raised. He is faced with things that he was never faced with growing up,” says Schlabach. He adds that protecting the circle is about maintaining a sense of peace and lack of fear. “How many people experience the rough roads and don’t know how to work it out? My grandparents and parents taught by example: They took us to church, we have our daily devotions that teach us how to work through the daily struggles. Do we ever run into situations where we are afraid? Yes, we do. But ok, it’s all in God’s hands. He will see us through.” 

Schlabach has his own reality, just like we all do. Whatever it is, it is based on what we are taught and on what’s around us. The ability to question it and develop new ways to make sense of it is what makes us human. That’s true whether we believe the earth is flat, that it was created by a higher power smarter than we are, or that we are being guided toward simple happiness.

Beyond palaeoanthropology

Not many nuclear physicists are actively engaged in studying human evolution. Claudio Tuniz, a scientist who works at the Multidisciplinary Laboratory of the Abdus Salam International Centre for Theoretical Physics in Trieste (Italy), is one of them. He advocates applying advanced physics methods in palaeoanthropology and archaeology. Over the last six years, he has coordinated the SAPIENS project (an Italian acronym for “sciences for archaeology and anthropology to interpret our deep history”), supported by the Enrico Fermi Centre for Studies and Research (Rome). His research relies on geochronometers (radiocarbon and other natural radionuclides used for dating the past) and imaging techniques based on X-rays and neutrons. Clearly, these are not the classical tools of archaeologists or paleontologists, who are usually more familiar with brushes and chisels. Or, at least, that’s what laypeople think.

Indeed, Tuniz’s passion for exploring human evolution began with those tools. “I graduated in nuclear physics in Trieste,” he says, “and spent the ’70s researching fundamental physics at the Legnaro National Laboratories of the National Institute of Nuclear Physics. There I began to envision a more cross-cutting use of nuclear physics. In particular, I was looking for non-invasive techniques to date ancient objects. Dating by counting radioactive decays, for example, requires many grams of material from an artifact, a practice capable of damaging it. I found the solution in 1981 when I was offered a postdoctoral position at Rutgers University in New Jersey, where I learned to count atoms with a particle accelerator. Only a few milligrams of a substance were enough to yield an accurate date. However, at the time, I was involved with the history of the solar system. While playing around with various particle accelerators, my research soon took an interesting turn. Starting with samples of meteorites from NASA, I ended up dating ancient human remains. The step to palaeoanthropology was a natural consequence of this.”
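As a rough aside (the standard textbook relation, not a description of Tuniz’s own instruments), the age of a sample follows from the fraction of radiocarbon it still contains:

\[
t \;=\; \frac{t_{1/2}}{\ln 2}\,\ln\!\left(\frac{N_0}{N}\right), \qquad t_{1/2} \approx 5{,}730 \ \text{years}
\]

so a sample retaining half of its original carbon-14 is about 5,730 years old, and one retaining a quarter is about 11,460 years old. Counting decays and counting atoms are simply two ways of measuring N, the surviving radiocarbon; the second needs far less material.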

Archaeopteryx Lithograph, 11th specimen, detail of claws and feathers, by Simon Sola Holishcka

He continues: “In the ’90s, when I was leading the radionuclide dating center at the Australian Nuclear Science and Technology Organisation near Sydney, I explored various fields of research. I applied the atom counting technique to archaeological applications, but also to study the paleoclimate in Antarctic ice cores and, in collaboration with the International Atomic Energy Agency, to identify illicit nuclear activities from environmental analysis of rare uranium radioisotopes. Today, people seek my expertise in nuclear chronometers and microscopes, but I am more interested in the big picture of human evolution.”

While employed at the Australian laboratory, Tuniz also worked on radiocarbon dating of Aboriginal cave art in the Kimberley, with paintings dating back nearly 20,000 years. A progressive immersion into the human deep past took place. In 1770, when Captain James Cook first set foot in Australia, he encountered populations that, in his opinion, were living in the “Stone Age.” The idea that indigenous people were still living in the past has been used to support the expansion of the European colonizers, justifying genocide, land occupation, and forced religious conversion. Tuniz worked with the most eminent Australian archaeologists (also in cooperation with Aboriginal elders) and began to develop an interest not only in the “technical” aspect of his work but also in the theoretical side of some of the issues he came across. In particular, concerning human evolution, he is now interested in gaining a better understanding of the circular processes that connect brain, body, and instruments (the extended mind hypothesis) and the emergence of the first stratified societies as a result of the development of symbolic thought and human self-domestication.

It is in this latter aspect, in particular, that Tuniz reached some surprising and troubling conclusions. “We have always been told that the first complex human societies appeared with the agricultural revolution of the Neolithic, roughly 12,000 years ago,” he points out, “but this is not so. Social stratification already existed in the Upper Palaeolithic, tens of thousands of years before. Many factors confirm it, such as the early presence of privileged classes in sites where the dead were buried with ivory jewelry and beads, together with other precious ornaments. For example, the Sungir site, in present-day Russia.” Tuniz emphasizes the importance of economies based on mammoth hunting to make such a case. Indeed, people extracted most of their resources from that activity: excess food, shelter, artifacts, and clothing, among other things. “Those who managed to prosper thanks to these enormous animals acquired great wealth at the time. It was something they had to distribute, regulate, and defend. This led to the division of labor into specific roles and tasks, forming hierarchies, enabling the organization of larger and more complex societies. When the Neolithic came, everything was already set up. At that point, the population increase could take off as a result of the agricultural revolution and the end of nomadism.”

Archaeopteryx Lithograph, 11th specimen, detail of skull remains with upper and lower jaw, by Simon Sola Holishcka

Why is it so essential to determine when the first human society was established? Probably because by studying the past, we can find new clues to interpret the present and possibly the future.

“Modern pre-historic research tends to put ‘palaeo’ in front of everything,” Tuniz jokes. “We are becoming aware that much of the knowledge we need to understand our present-day world also applies to the past. Therefore, we speak of palaeoecology, palaeoeconomy, palaeoneurology. Palaeodentistry, for example, offers us interesting insight stored in the enamel and tartar of our ancestors’ teeth, such as what they ate or whether they experienced traumas during their growth. These are hyper-specialist studies but can provide an all-round view of many aspects of an individual’s life in the deep past.”

Initially, Tuniz applied his knowledge to ‘hard sciences’ like nuclear physics or chemistry. In his latest works, he has shifted to social sciences, which he jokingly calls ‘soft sciences.’ “As larger human societies developed, evolution selected the less aggressive individuals, those who respected authority, and who were willing to play and share. At the same time, the first distinctive signs emerged, enabling individuals to recognize their own group and to distinguish ‘us’ from ‘them’: particular elements of clothing and ornaments, body painting and tattoos were well suited to the purpose. Eventually, by sharing the same appearance, the number of individuals willing to collaborate and be supervised became very large indeed. They just had to invent something to share and identify with. Along this path, thanks to symbolic thinking and imagination, the idea of sharing a homeland has brought millions of people together; and religion has united billions.” This is why looking into the past is relevant to understanding our present.

Another puzzling event, emerging from our deep past, is the shrinking of our brain. After millions of years of growth, this trend has reversed over the last 50,000 years. To explain this phenomenon, Tuniz uses the concept of cyborgs. “This term is a contraction of ‘cybernetic organism,’ and was coined by scientist Manfred Clynes in the 1960s, when the idea of augmenting Homo sapiens for space exploration began to gather momentum. This new composite creature was envisioned as having integrated artificial prostheses as parts of its body.” The process of body extension began when we started to use the first stone tools. Since then, we have been delegating a broad range of tasks and functions to external objects, which have become our automatic prostheses in their own right. Indeed, Tuniz’s latest book, published by Springer in 2020 and co-authored with Patrizia Tiberi Vipraio, is entitled From Apes to Cyborgs: New Perspectives on Human Evolution.

Archaeopteryx Lithograph, 11th specimen, detail of spine and ribs, by Simon Sola Holischka

“We first delegated our ability to hunt (and make war) to stone tools, then metal knives, and then firearms or other lethal weapons. Likewise, we initially entrusted our memories to cave paintings, then writing, and ultimately to the address books of our smartphones. Our sense of direction is now rooted in maps and GPS devices. We can thus delegate complex problem solving to artificial intelligence,” says Tuniz. “Our species is increasingly capable of completing awesome tasks. As individuals, however, we are less and less ‘capable’ because we have delegated most actions and information to something or someone else. Humans have been doing this for at least 2 million years. As Homo sapiens, our destiny as cyborgs was probably already written in the Palaeolithic.”

It is precisely by studying skulls from the deep past that we learned that the brain of Homo sapiens, after having grown and developed its globular shape, has slightly decreased in volume since the first complex societies were founded. Strikingly, the same phenomenon affects dogs and other domesticated animals: as adults, they still feature the physical and behavioral traits of younger individuals. In practice, according to Tuniz, to be social, modern humans have had to tame themselves, and to look and act as if they were forever young. The direction of this great collective game is now increasingly entrusted to digital agents, and this is a cause for concern if intelligent algorithms keep increasing their learning capacity at the current rate.

Blending nuclear physics with palaeoanthropology spans multiple fields of action. “Our X-ray laboratories at ICTP and at the Elettra synchrotron have generated virtual brains and teeth of many ancient hominids,” says Tuniz. “Our interdisciplinary team, which also includes an archaeologist engaged with physics—Federico Bernardini—continues to carry out advanced analyses to study human evolution. We keep hoping that, by studying human remains from the remote past, we will be able to gain a better understanding not only of what our ancestors were like—which is already a great accomplishment—but also what ‘we are like’ in the present, and perhaps how we will be in the future: New hybrids are already appearing, made of physical bodies, digital devices and enhanced organic materials, all connected by an intelligent system of people and machines. This new enhanced humanity would certainly have a common destiny. And whether that will be for the best or the worst, it is still up to us to decide. Maybe.”

Alternative progress

Alfred Russel Wallace

Imagine developing a cornerstone theory of modern science, and yet people use another scientist’s name when talking about it.

ALFRED RUSSEL WALLACE (1823-1913)

That happened to Alfred Russel Wallace, who independently conceived of the theory of evolution through natural selection—what we’re used to calling “Darwinism.” Born in Wales, the eighth of nine children, Alfred started collecting insects at age 22, while working in the countryside as a land surveyor. A strong believer in the transmutation of species—the altering of one species into another—in 1848 he decided to leave Wales for the Amazonian rainforest, where he stayed four years, collecting specimens and selling some of them to museums to fund his trip. In 1854, he left again, travelling for eight years through the Malay Archipelago and collecting more than 126,000 specimens, mostly beetles. There, he wrote his essay On the law which has regulated the introduction of new species, drawing the conclusion that “Every species has come into existence coincident both in space and time with a closely allied species,” from which it evolves. He started corresponding with Darwin, who had been working on the same ideas for decades but had not yet published them. In February 1858, Wallace sent Darwin his essay On the Tendency of Varieties to Depart Indefinitely From the Original Type, in which he theorized natural selection, though without using that term. Darwin found the work so similar to his own writings that both papers were sent to the Linnean Society of London, where they were presented jointly on July 1, 1858. One year later, Darwin published On the Origin of Species, which is considered to be the foundational work of evolutionary biology.

GREGOR JOHANN MENDEL (1822-1884)

Gregor Johann Mendel

The Augustinian friar famous for founding genetics with peas was born (and named only Johann) in Silesia, in the Austrian Empire. During his childhood he worked as a beekeeper, a passion he cultivated his whole life. He struggled to pay for his physics studies, and became a friar because that enabled him to earn a degree without having to pay for it. When he joined the Augustinians, he was given the name Gregor. At St. Thomas’s Abbey in Brno, in today’s Czech Republic, he began his studies on heredity using mice, but his bishop did not like one of his friars studying animal sex, so Mendel switched to plants, starting experiments on edible peas. Between 1856 and 1863 he cultivated and tested some 28,000 plants. Mendel worked with seven characteristics of the pea plant: height, pod shape and color, seed shape and color, and flower position and color. Taking seed color as an example, he showed that when a true-breeding yellow pea and a true-breeding green pea were crossbred, their offspring always produced yellow seeds. However, in the next generation, the green peas reappeared at a ratio of one green to three yellow. To explain this phenomenon, Mendel coined the terms ‘recessive’ and ‘dominant’ for certain traits: the green trait, which seems to have vanished in the first filial generation, is recessive, while the yellow is dominant. He published his work (Experiments on Plant Hybridization) in 1866, demonstrating the actions of invisible factors—now called genes—in predictably determining the traits of an organism. Nevertheless, the paper was largely ignored by the scientific community, cited only three times in the next 35 years.
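To make the arithmetic behind that 3:1 ratio concrete, here is a minimal sketch (in Python, purely illustrative; the allele labels and sample size are our own, not Mendel’s notation): each first-generation hybrid carries one dominant ‘yellow’ allele and one recessive ‘green’ allele, and every offspring draws one allele at random from each parent.

import random

def monohybrid_cross(n=28_000):
    """Simulate crossing two Yg hybrids: 'Y' = dominant yellow, 'g' = recessive green.

    An offspring shows green seeds only if it inherits 'g' from both parents.
    """
    counts = {"yellow": 0, "green": 0}
    for _ in range(n):
        from_mother = random.choice(["Y", "g"])
        from_father = random.choice(["Y", "g"])
        if from_mother == "g" and from_father == "g":
            counts["green"] += 1
        else:
            counts["yellow"] += 1
    return counts

print(monohybrid_cross())  # roughly three yellow-seeded plants for every green one

Only the gg draw, one case in four, shows the recessive color, which is exactly Mendel’s one-green-to-three-yellow ratio.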

THOMAS HUNT MORGAN (1866–1945)

Thomas Hunt Morgan

Mendel’s work on peas was rediscovered around 1900, and with it, the foundation of genetics. At the time, Thomas Hunt Morgan—born in Lexington, Kentucky, in the same year in which Mendel published his main research—was a professor of experimental zoology at Columbia University in New York. During his career, Morgan became increasingly focused on the mechanisms of heredity and evolution; he was, nonetheless, initially skeptical of Mendel’s laws of heredity, as well as the related chromosomal theory of sex determination. With his experimental heredity work, Morgan was seeking to prove Hugo De Vries’ theory that new species were created by mutation. Around 1908, he started working on the fruit fly Drosophila melanogaster, encouraging students to follow him. He mutated Drosophila through physical, chemical, and radiation-based means, beginning cross-breeding experiments to find heritable mutations, with no significant success for two years. Then, in 1910, a mutant white-eyed male appeared, and Morgan bred it with normal red-eyed females. Their progeny were all red-eyed, but a second-generation cross produced white-eyed males: a sex-linked recessive trait that displayed Mendelian inheritance patterns. In a paper published in Science magazine in 1911, Morgan concluded that some traits were sex-linked, that the trait was probably carried on one of the sex chromosomes, and that other genes were probably carried on specific chromosomes as well. Morgan’s Fly Room at Columbia became world-famous, and he found it easy to attract funding and visiting scholars. The Fly Room is also a 2014 independent film based on the story of Calvin B. Bridges, one of the scientists who worked on Morgan’s team.
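The pattern Morgan saw can be sketched in a few lines (again an illustrative Python toy, with allele labels of our own, not Morgan’s protocol): eye color sits on the X chromosome, and males carry only one X, so a single recessive ‘white’ allele shows up in sons but is masked in daughters.

import random

# '+' = dominant red allele, 'w' = recessive white allele, both on the X chromosome.
# Second-generation cross: mother is a red-eyed carrier (X+ Xw), father a red-eyed male (X+ Y).

def f2_offspring():
    x_from_mother = random.choice(["+", "w"])
    if random.random() < 0.5:               # daughter: father contributes his X+,
        return "female", "red"              # so any 'w' allele is always masked
    else:                                   # son: father contributes Y,
        return "male", "white" if x_from_mother == "w" else "red"

flies = [f2_offspring() for _ in range(10_000)]
print(sum(1 for sex, eyes in flies if sex == "male" and eyes == "white"))    # about 2,500
print(sum(1 for sex, eyes in flies if sex == "female" and eyes == "white"))  # 0

White eyes reappear in roughly a quarter of the offspring, all of them male, which is the signature of a recessive trait riding on the X chromosome.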

STEPHEN JAY GOULD (1941–2002)

Stephen Jay Gould

One of the most influential authors of popular science, Stephen Jay Gould—born in Queens, New York, to a Jewish mother and a Marxist, war-veteran father—spent most of his career teaching at Harvard University. He was a palaeontologist, evolutionary biologist, and science historian. In his 1996 book, Full House: The Spread of Excellence from Plato to Darwin, he argued that evolution has no inherent drive towards long-term “progress.” Even though commentary often portrays evolution as a ladder of progress, leading towards more complex and ultimately more “human-like” organisms, Gould argued that evolution’s drive was not towards complexity, but towards diversification. Given that life is constrained to begin from a simple starting point (like bacteria), any diversity resulting from this start, by random movement, will have a skewed distribution, and will therefore be perceived to move in the direction of higher complexity. But life, Gould said, can also easily adapt towards simplification, as is often the case with parasites. Evolutionary biologist Richard Dawkins, reviewing Full House, suggested that he saw evidence of a “tendency for lineages to improve cumulatively their adaptive fit to their particular way of life, by increasing the numbers of features which combine in adaptive complexes.” “By this definition,” Dawkins concluded, “adaptive evolution is not just incidentally progressive, it is deeply, dyed-in-the-wool, indispensably progressive.”
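Gould’s “skewed distribution” argument can be illustrated with a toy model (our own sketch in Python, not Gould’s data): lineages change complexity at random, with no drive in either direction, but cannot fall below a minimal ‘bacterial’ wall, so the spread, and with it the most complex outliers, drifts to the right anyway.

import random

WALL = 1           # minimal viable complexity: the 'bacterial' left wall
LINEAGES = 2_000   # independent lineages, all starting at the wall
STEPS = 1_000      # generations of random, directionless change

def evolve():
    complexity = WALL
    for _ in range(STEPS):
        complexity += random.choice([-1, 1])   # no built-in drive either way
        complexity = max(complexity, WALL)     # but nothing gets simpler than the wall
    return complexity

final = sorted(evolve() for _ in range(LINEAGES))
print("median lineage:", final[LINEAGES // 2])     # most of life stays simple
print("mean complexity:", sum(final) / LINEAGES)   # yet the average drifts upward
print("most complex lineage:", final[-1])          # and a long right tail appears

The bulk of the distribution stays pinned near the wall while the tail stretches rightward, which is how diversification alone can be mistaken for progress.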

Finding Utopias: unlocked conversations beyond a new normality

We were focused on our everyday lives and work, and then something happened. maize.LIVE 2020 was supposed to be the high point of our year: after three editions of experimentation, navigating the tricky world of events, we were ready to bring our maize.LIVE format to its pinnacle and prepare an astonishing experience at our new H-FARM campus.

Unfortunately, Covid-19 shuffled the cards, and we had to start from scratch. Suddenly, the know-how of organizing an offline event became redundant as we were discussing a future in which normality was the new Utopia, if normality ever existed.

Today, we have the opportunity to unlock new conversations and to plan a (new) normality, composed of unachievable but fundamental challenges called Utopias. One of the challenges we face today is how to bring the second row of the choir to the front, giving it a voice and space to share thoughts and emotions without betraying the trust of a public that is looking for conscious advice while facing the instability of a fragile economic scenario. No longer are we willing to sponsor speakers and market dynamics that are incoherent with the fundamental message we are trying to convey.

Manifesto

What is innovation if not chasing after something that doesn’t (yet) exist? We could talk about it as the greatest Utopia of our times: imagining a place that does not yet exist, working to reach it, knowing that it will never be our final destination. Once conquered, we will need another purpose, another fantasy, another motivation.

Utopia

Utopia: an object of an ideal aspiration which is not susceptible to practical accomplishment. It was Thomas More who coined the term, which means both a “place that does not exist” and a “good place to be”. On his imaginary island, More places a vision of society, establishes rules that are different from the ordinary, explains habits and functions, and hypothesizes alternative scenarios to sixteenth-century England. He does what today we would call future envisioning, a concept which we would define as a fundamental attitude for organizations that want to embrace innovation. A way of thinking rather than a destination, a strategy rather than a result.

2020 was a year that threw reality, in all of its imperative urgency, in our faces: we lived day by day, we focused on extremes, and on the management of enormous, unknown, scary situations. The pandemic magnified all the cracks that were already present in our society and forced us to imagine alternatives. There is still room for thought and imagination, but many of our perceived utopias—those of organizations and even individuals—now sound wretched.

Now that the year is about to end, giving us new breathing space, we can return to that precious practice, that attitude we like so much: asking big questions to which we can imagine answers that are not only tangible but also utopian, because we know that the spirit of an innovator, one to whom we can profoundly relate, lies in the capacity to be both concrete and visionary.

A world map that does not include Utopia is not even worthy of a glance, because it leaves out the one place which humanity yearns to conquer. And when it lands there, humanity looks around, sees a better place, and sets sail again. Progress is the making of Utopia.

In the time warp of spring 2020 we did not stop looking for and intercepting signals from the market, society, and people: three recurring words emerged, which we have reflected upon ever since.

Time, Trust and Happiness

We put them at the center of an internal and external debate, asking: What utopias do they represent? Can we control time for our organization and at work? What enables trust and makes it practical as well as ideal? What does being happy really mean?

This edition of maize.LIVE “Finding Utopias: unlocked conversations beyond a new normality” is dedicated to these and other questions: we will address them with the usual desire to connect and exchange experiences, listening to solid points of view and following novel leads for our investigation.

We have created a unique digital experience—forget about the webinars you have attended in the past, and especially during the last few months: we can’t wait to welcome you to our island.

maize.LIVE 2020 online edition is a three-day event, on December 1, 2, and 3 from 4 p.m. to 6 p.m. (UTC + 1).

Fahrenheit 451

Evolution is an intriguing concept to analyze. To study the evolution of a phenomenon, a living being, or a system, one has to connect the dots in reverse order and figure out what happened and why. Why did one path work, enabling the organism to multiply while the other one failed and led to extinction? Which characteristics facilitated adaptation or helped to seize evolutionary opportunities, and which ones limited it?

This kind of reassessment is absolutely fascinating and extremely useful to understand why the human brain has certain characteristics or why plants have evolved and adapted to their environments. But also why certain organizations have survived, and others have vanished.

Quite another thing, and to some extent more complex, is to predict how a subject or a system may evolve, because that would mean being able to predict which characteristics would be suitable for which future contexts. And this, considering a sufficiently long timeframe, is very difficult for obvious reasons: It means being able to forecast the rise of new situations and contexts and to anticipate the type of adaptation and ‘fit’ in relation to that context. This is why trying to understand what will be needed, and trying to force evolution in that direction, is a strategy that cannot by itself ensure survival or success. In many ways, the future is unpredictable, and 2020 will undoubtedly go down in history as the year many came to understand the ‘black swan’ concept and reconsidered their perception of the complexity of predicting future scenarios. That is why it is more important to nurture evolutionary capabilities than to try to guess which features will work best.

It is quite likely that the future will be increasingly unpredictable and will bring about major and sudden changes. Not being able to anticipate what it will look like and what will work, people and organizations will have to rely on the ability to react swiftly and adapt as they evolve in order to survive and succeed. The ability to seize opportunities will depend on being able to recognize and deal with them, and this, in turn, will require flexibility, diversity, and agility. We must look up: people should pursue education that develops these skills rather than concentrating everything on knowledge; and organizations should place more emphasis on how things are done, rather than just on what gets done. Being prepared also pays off when it comes to evolution. And the best investment a person or organization can make is to focus on improving their adaptability and their ability to grasp and ride change.

A bit like everything else. After Gary Player, a famous golfer of yesteryear, sank a shot from a very difficult bunker, a spectator said to him: “Hey Gary, that was a very lucky shot.” He replied, “I guess you’re right, but you know, it’s funny: The more I practice, the luckier I get.”