It all happened overnight

Work hours, the value of one’s personal time, existential discussions with friends, the courage to face tough decisions, even at the cost of leaving everything behind. To strive for schools to make a quantum leap forward, not only in their infrastructure but also in their content. To update teachers, because goodwill alone is not enough to raise generations of well-prepared students: sloppy goodwill is neither science nor method. To recover the lost memories of the elderly, to produce journalism worthy of the name, and to hunt down fake-news makers with a blowtorch. To stop tolerating climate change deniers. To double public health funds by setting up European collaboration programs. To reopen cinemas, theatres, bookshops, and libraries, providing adequate support for the necessary ‘crap’ that many consider culture. To wall up gyms. To demand rigour and truth from politics. To change jobs when one is unhappy with one’s own, to immerse oneself in nature, to play more, much more, with one’s children.

Covid-19 has turned the agenda of our priorities upside down. It has shown us a new path that not all of us would like to follow. Some will say that after all, nothing has changed and that when we finally get the vaccine, life on planet Earth will flow again as before, with the same vices and virtues. Well, don’t believe them. Something huge has happened, and on closer inspection, it has happened inside us, in that intimate recess where emotions and reason collide to give a direction to our lives. Perhaps it is worth noting that in the era of exponential technology and enhanced futurology, we haven’t been able to predict a damn thing that happened in these past few months. The efforts of certain digital nomads are honestly ludicrous. Dozens of globetrotting speakers have resumed traveling and lecturing on the future that awaits us around the corner: they have updated their PowerPoints and added a couple of slides on Covid-19. Other than that, everyone seems to have an Elon-Muskian optimism of willpower.

Over the last 3,000 years, at least 13 pandemics have ravaged our planet. Almost all of them were generated by ‘zoonoses’, leaps from animals to humans through successive genetic mutations of viruses. And a new resistant virus could surface again tomorrow. As Dr. Rieux, the protagonist of Albert Camus’ novel, The Plague, says: “Being alive always has been and always will remain an emergency; it is truly an inescapable ‘underlying condition.'” As if to say: The plague never stops being among us; when we defeat it, we must be aware that it will return. Covid-19 has eliminated the weak elements of the biological chain, reset work to zero, and starved affections. Our weakness and unpreparedness in the face of the unexpected represent the most potent evolutionary element of our species, at least socially. No matter what we do.

Digital evolution

In his 1994 book, The Astonishing Hypothesis, Francis Crick posits that “a person’s mental activities are entirely due to the behavior of nerve cells, glial cells, and the atoms, ions, and molecules that make them up and influence them.” This hypothesis, according to Crick, is the basis for the scientific study of the complex behaviors that emerge from brain activity: given this theory we can try to understand intelligence, consciousness, even free will, using the methods and tools of science. 

From this point of view, the scientific problem becomes: how do these cells in the brain actually work? The human brain is very complex and is the result of a few hundred million years of evolution: from the first mammals, which appeared around 250 million years ago, to the appearance of Homo habilis about 2.5 million years ago. Unfortunately, the evolutionary clock cannot be turned back to see which features appeared at which stage and how they became what they are—nor are there any alien ecosystems available for comparison. So one can ask if technology can help: Can we use our increasingly powerful computational tools to simulate the evolution of life and gain some deeper understanding of how the brain has evolved to become what it is now?

Artificial life became a recognized discipline in the 1980s, after it was originally introduced by computational theory pioneers like Alan Turing and John Von Neumann in the 1940s and 1950s. As a multidisciplinary field, artificial life seeks to simulate lifelike processes within computers, for example, by creating highly simplified artificial ‘aliens’ and comparing their development and behavior to real biology, with the goal of discovering something of life’s essential character—including the emergence of complex (and even intelligent) behaviors.
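The flavor of such simulations can be conveyed by the field’s most famous toy model, John Conway’s Game of Life, in which two local rules on a grid of cells are enough to produce oscillators, gliders, and other startlingly lifelike patterns. The sketch below is purely illustrative (real artificial-life research uses far richer models), but it shows how little machinery is needed for complex behavior to emerge:

```python
from collections import Counter

# Conway's Game of Life on an unbounded grid: live cells are stored
# as a set of (x, y) coordinates, so the grid needs no fixed size.
def step(cells):
    """Advance one generation of the Game of Life."""
    # Count live neighbors for every cell adjacent to a live cell.
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in cells
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell lives in the next generation if it has exactly 3 live
    # neighbors, or has 2 live neighbors and is currently alive.
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in cells)}

# A "blinker": three cells in a row oscillate forever with period 2,
# a minimal example of emergent, self-sustaining behavior.
blinker = {(0, 1), (1, 1), (2, 1)}
```

Iterating `step` on the blinker flips it between a horizontal and a vertical bar indefinitely; richer starting patterns yield gliders that travel across the grid, and even patterns that emit other patterns.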

One of the most fascinating projects of artificial life is OpenWorm, which is coordinated by Stephen Larson and is open to the collaboration of hundreds of scientists around the world. The goal of OpenWorm is to create the world’s first digital organism which uses the principles of life to achieve existence on a computer. As their test case, the team chose the Caenorhabditis elegans worm, a one-millimeter-long roundworm whose entire body consists of 959 cells, of which 302 are neurons, which form approximately 10,000 nerve connections (by contrast, the human brain contains around 86 billion neurons and 100 trillion synapses). C. elegans is the world’s most understood multicellular organism and was also the first multicellular organism to have its genome sequenced, in 1998, giving biologists a full understanding of how it develops from embryo to adulthood. The tiny worm is also the only organism for which a connectome—a 3D map that shows how each nerve cell is connected—has been made.

OpenWorm is another example of the state of the art of many advanced projects in artificial intelligence: The results are interesting, including the ability to use the worm simulation to drive the behavior of a simple LEGO robot, as shown in a 2017 breakthrough, but the simulation, even of such a simple organism, is still far from being completely realistic. As in many other fields and subfields of artificial intelligence, progress is fast, but the goal of deeply understanding complex behaviors—even of very simple organisms like C. elegans—is still in the future.
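To give a sense of what "simulating an organism from its wiring diagram" means in the simplest possible terms, here is a toy sketch: a handful of leaky integrate-and-fire neurons coupled through a fixed connectivity matrix, in the spirit of (but vastly simpler than) a connectome-based model. Nothing here corresponds to real C. elegans data; every weight, threshold, and time constant is invented for illustration:

```python
# Toy connectome-driven simulation: four leaky integrate-and-fire
# neurons wired in a one-way ring (0 -> 3 -> 2 -> 1 -> 0).
# All parameters are made up; this is a sketch, not a real model.

N = 4  # number of neurons (the real C. elegans has 302)

# W[i][j] is the synaptic weight from neuron j to neuron i.
W = [
    [0.0, 1.2, 0.0, 0.0],
    [0.0, 0.0, 1.2, 0.0],
    [0.0, 0.0, 0.0, 1.2],
    [1.2, 0.0, 0.0, 0.0],
]

def simulate(steps, external, tau=0.9, threshold=1.0):
    """Run the network; `external` is a constant input current per neuron."""
    v = [0.0] * N            # membrane potentials
    spike_trains = []
    for _ in range(steps):
        fired = [1 if v[i] >= threshold else 0 for i in range(N)]
        # Reset neurons that fired, apply the leak, then add
        # synaptic input from spikes plus the external current.
        v = [
            tau * (0.0 if fired[i] else v[i])
            + sum(W[i][j] * fired[j] for j in range(N))
            + external[i]
            for i in range(N)
        ]
        spike_trains.append(fired)
    return spike_trains
```

Driving only neuron 0 with a small constant current (for example, `simulate(8, [0.5, 0, 0, 0])`) makes it charge up, fire, and send a spike traveling around the ring, one neuron per time step: activity propagating through a wiring diagram, which is the basic idea behind far more sophisticated whole-organism simulations.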

Literary catharsis

They say that literature foreshadows reality and that writers, the truly great ones, have the gift of precognition—one of those ‘superpowers’ which no one would ever choose to have and which some, starting with Cassandra, daughter of King Priam of Troy, would have gladly done without. Recently, however, a prophetic writer like Margaret Atwood, who is experiencing the male chauvinist theocratic apocalypse she predicted in The Handmaid’s Tale more than 30 years ago, said something enlightening: “Writers do not predict anything. Their job is to talk to their readers to ask them if this is really the world they want to live in.” Meaning that our fate will always, in any case, be in our own hands.

In Italy, during the initial days of the spread of what was to become the Covid-19 pandemic, Albert Camus’ The Plague (1947), José Saramago’s Blindness (1995), Giovanni Boccaccio’s Decameron (1353), and Alessandro Manzoni’s The Betrothed (1827) were among the most read works of fiction. Clearly, their appeal was due to the fact that they deal with a subject—frightening epidemics caused by more or less real or plausible viruses—which is tragically very timely and capable of rousing a fair dose of curiosity. This despite the fact that the books end with scenarios that are in no way reassuring, and no less anguishing than those of horror or science fiction novels.

Camus—a son of humble pieds-noirs, French Algerians—published The Plague when he was already an esteemed intellectual with strong anti-fascist feelings. He had broken his ties with the French Communist Party and with his friend Sartre, had authored The Stranger, and was a fervent supporter of democracy in Arab countries and humanist ideals. As soon as it was published, his second novel was a huge success with both readers and critics. It is considered one of the most outstanding novels of the twentieth century, still a global long-selling book, and widely quoted and translated. The most recent Italian edition was brilliantly translated by Yasmina Mélaouah, who has given a voice to authors such as Colette, Genet, Pennac, Vargas, and de Saint-Exupéry, among others. 

From Milan, where she lives, and where these days the mostly deserted streets resound with megaphones ordering people to stay at home, she recollects her first encounter with this classic: “I was 15, and our French teacher had made us read it during the year. When I had to re-read it, in order to translate it, the same details that had stuck in my mind at the time came back to me. First of all, the initial scene, when the doctor comes out into the dark corridor and runs into the first dead rat. That is actually a scene that has continued to play itself out in my mind my whole life […]. Then, the finale, when the quarantine ended, and the doctor crossed the celebrating town.”

The Plague, set in the coastal town of Oran—”an everyday place, merely a simple French prefecture on the Algerian coast”—starts off with a general sense of disbelief. Camus wrote: “Our fellow citizens had not the faintest reason to apprehend the incidents that took place in the spring of the year in question and were (as we subsequently realized) premonitory signs of the grave events we are to chronicle. To some, these events will seem quite natural; to others, all but incredible.”

It was April 16 in the second half of the ‘40s. Doctor Bernard Rieux had left his office like any other day, and, in the middle of the landing, stepped on a dead rat. An unusual event, for sure, but one that could have just ended there. However, one rat became ten, and then hundreds and thousands. One man and then another and another fell sick, all with the same illness: Fever, growths ripping open the skin, and pains in the neck, armpits, and groin. People were starting to die. “It was as if the earth on which our houses stood were being purged of its secreted humors; thrusting up to the surface the abscesses and pus-clots that had been forming in its entrails.” 

This is how the contagion started: in silence, on a morning just like any other. Isolated, hungry, unable to stop the plague, the city became a stage where the good and bad of society, always on the borderline between disintegration and solidarity, could put on their show.

In Camus’s mind, however, that “plague” stood for something else. “In 1947, the world was recovering from another tragedy, that of Nazism, and the novel was also a metaphor for that supreme evil, in Camus’ intentions,” explains Mélaouah. “Actually, any virus, even today’s, could be a metaphor for something else. Camus intended to write an allegory of occupied France, and at the time, some criticized him for not being forthright and using an invisible enemy instead of one in the flesh. However, allegories are powerful rhetorical figures of speech that can be “superimposed” on many other circumstances. The Plague is a metaphor for a human community that discovers evil and needs to find ways to deal with it. I believe that all epidemics confront humankind with this.”

“After the contagion,” Camus writes, “fear began, and with it, serious reflection.” Over the coming months, we will live amid disorientation. Our future is uncertain, even for simple things like going to a restaurant or planning a vacation. We will all have to stop and come to grips with our fragility. So we need a clear head and, above all, competence. Since this emergency started, countless scientists, thinkers, philosophers, and artists have attempted to allay fear with reasoning, exploring the defense and reaction mechanisms of the human species. Mélaouah also tried to analyze the situation: “I strongly believe that the book’s key message is that humankind can only tackle evil when it becomes a community, a “we” rather than a “me.” The novel’s core message is that everyone must roll up their sleeves and work together. The sense of a community that we also feel strongly in today’s challenging times.”

The Covid-19 pandemic—the most significant healthcare emergency of our times—is forcing a veritable paradigm shift on humanity, making us focus on things that used to simply pass by, unseen. We must now deal with idleness, a lack of social interaction, and the deafening silence that has returned to our cities, like an animal banished long ago. The Plague says “It is in the thick of calamity that one gets hardened to the truth—in other words, to silence.” Regarding this statement, Mélaouah comments: “Since I don’t like noise, I found the silence in the first few days to be wonderful. Now it has become a haunting shadow.”

When you come to think of it, silence is also the “noise,” or the backdrop, of reading: “And of concentration. Looking on the bright side, this situation is helping us realize what really counts for each of us, the core of essentials among the many things we think are important. This forced pause will help us to do a bit of clean-up, and throw overboard what’s unnecessary, on all fronts.”

When Camus decided to examine the tragedy of the plague through the eyes of a doctor, he certainly wasn’t dealing with the thousands of healthcare professionals we have today, people who in this period are risking their lives and those of their families to look after Covid-19 patients. “I feel that he wanted the point of view of someone who, more than anyone else, had to get their hands dirty. He didn’t want a philosopher aspiring to be a saint, like Tarrou, or someone like the journalist Rambert, overwhelmed by emotions, and even less the viewpoint of the writer, Grand, someone who spends his time filing away at the same sentence. He wanted someone who would stick his hands, concretely, inside evil, lancing the abscesses, curing the bodies of the plague victims, because evil demands concrete actions, and only later can it be recounted.”

Who would Mélaouah choose to narrate the story of Covid-19 in a hypothetical novel of the future? “Probably a checkout clerk from a supermarket chain. […] I would like it to be told like that, from below. After all, the part of the novel that struck me the most was when Tarrou said that his moral is “understanding.” Meaning that we must also show understanding toward people who, overwhelmed by panic, have raided the supermarkets.”

After the November 2015 terrorist attacks in Paris, many people re-read Hemingway’s A Moveable Feast. After Notre Dame burned in April 2019, sales of Hugo’s Notre-Dame de Paris skyrocketed. So many people reading the same books when disaster strikes is perhaps a little like huddling around a vital message broadcast on all channels, or listening to the daily Civil Protection conference: someone speaking for the good of all. “The point is that as members of humankind, we all react the same way,” says Mélaouah. “We seek comfort, a reflection of what we are experiencing, answers, and even distraction. Then there is also the slightly higher comfort in reading, as it talks about the same situation you are in, but at the same time links it with that of everyone else and pulls you out of it. It is like when you go to watch a horror film to exorcise certain fears, like the catharsis of theater.”

So what else could help us out at a time like this? “A good place to be in hard times is Charles Dickens, any of his novels. With him, you laugh, you cry, you get carried away. He takes you everywhere, from tragedies to comical situations. Dickens wasn’t afraid of sticking his hands into humanity.” We can only wait for the Manzonian rain to free us from all this.

Living lexicons

Before the internet, with its emails, SMS, and social media, the evolution of languages underwent an unexpected and profound change—in the beginning was the Word. Soon after, in Babel, where a united human race speaking a single language agreed to build a tower tall enough to reach heaven, these words became many, and then too many. The various languages proliferated, and they were not mutually understandable. Then, thousands of years later, at the beginning of the 21st century, when the fear of the impossibility of understanding each other had been fading for several hundred years, and the utopia of Esperanto, “the international language,” had faded away, another fear started to spread, especially in Europe: The gradual loss of linguistic diversity, squashed by the progressive diffusion of an impoverished international English, devoid of complexity and refinement.

But fortunately, the evolution of languages follows more eccentric paths than that. Globalized humanity did not succumb to the alleged attacks of a monolingual and English-speaking God, just as we have never been lost in an inextricable tangle of idioms, although our movements between different points on the planet have greatly increased.

Multilingualism—the ability to communicate in three or more languages—does not often take root in cosmopolitan cities. In New York, London (or Milan) many languages are spoken, but apart from some exceptions, in everyday life each person tends to speak only English (or Italian) and their mother tongue, if this is different from English (or Italian). In the Val d’Aran in Spain, however—a valley wedged between the Pyrenees mountains, with a population of a little over 10,000—children study in four languages in elementary school: Aranés, the local and officially recognized variant of Occitan; Catalan, for their region Catalonia; Castilian, because Catalonia is part of Spain; and English, like many children worldwide.

The same situation exists in Val Müstair, Switzerland. There, in the course of everyday life, people speak Jauer (a local variant of Romansh), study and write in Vallader (a Romansh variant from the Lower Engadine valley, which, unlike Jauer, has a solid written tradition), read official documents in Rumantsch Grischun (a ‘unified Romansh’ created in the 1980s), speak to visitors in Swiss German (the most spoken language in Graubünden, the canton of Val Müstair), and study Hochdeutsch (standard German). Traveling a few hundred meters east to the Italian border, you’ll reach the area of Val Müstair that belongs to the province of South Tyrol, where people speak Bavarian German (which is different from standard German and Swiss German). However, during tourist season, locals speak English or Italian with visitors.

In these hidden valleys, like in various unknown recesses in some African states, languages overlap and multiply, without jeopardizing people’s ability to understand each other. Moreover, in many parts of the United States, there are some geographic areas, and many neighborhoods, where only Spanish is spoken—erasing any hypothesis that claims the overwhelming global power of English. The Yiddish maxim “A shprakh iz a dialekt mit an armey un flot / A language is a dialect with an army and a navy”—made famous in the 1940s by linguist Max Weinreich—is not an infallible rule. 

“It’s 70 years since the Dutch left Indonesia,” writes linguist Gaston Dorren in Babel: Around the World in 20 Languages, “the island of Java dominates the country in many ways: politically, demographically, economically, culturally—but not linguistically. Even though Javanese is easily the country’s most widely spoken mother tongue, the independence movement at an early stage chose Malay, restyled as Indonesian (bahasa Indonesia), as the national language.” Javanese has a different linguistic register for every shade of formality, a characteristic that associates it with many other Asian languages. In this case, though, the layering of linguistic registers is so extreme that Javanese is unmanageable even for those who have spoken it from birth, and Weinreich’s maxim proves inexact despite Javanese having strength on its side.

Just as the maxim proves inexact in Indonesia, there are many minor languages that, despite being in a subaltern position ‘militarily,’ have managed to gain support through democratic practice, and have secured space alongside a dominant language. Basque, for example, despite being devoid of both army and navy (even though the armed leftist nationalist and separatist organization, ETA, tried to provide a horrible surrogate), has managed to reconquer geographical areas which had been entirely Castilianized for centuries, through large injections of economic resources by the regional government and a strenuous commitment from a significant part of the population. In the meantime, the Basque language has developed a literary and cultural cohesion that it never had before in its long history.

The cases of Javanese and Basque, in addition to undermining the theory of the inexorable prevalence of ‘armed languages’ and the bitter fate reserved for ‘unarmed languages’, demonstrate how the evolution of languages—which when left to chance often shows itself to be unpredictable—can also be, at least partially, ‘guided’ by human actions. 

This is what, for example, Icelanders are trying to do. Because they are incredibly protective of their language—which hasn’t changed much since their Saga literature of medieval times—they translate every foreign word with great deference to their linguistic roots. Telephone is sími, meaning, more or less, long string; computer is tölva, a term coined by mixing tala (number) with völva (prophetess). Human action also influenced Modern Hebrew, the language spoken in Israel. Today, millions of people use it to express themselves, following journalist Itamar Ben-Avi, who, beginning in the late 1800s, is considered the first native speaker of the language. This was due to the strong convictions of his father, lexicographer Eliezer Ben-Yehuda, who wanted to make the neo-language that he had helped to systematize, until then found only in books, a truly living language.

In most cases, however, the evolution of a language follows a more or less autonomous path, which is outside of our abilities to plan and predict. Also, as in many fields, our internet era has been a game changer regarding languages and the ways in which we use them. We have never written so much since the invention of the alphabet and its related systems and symbols, thanks to which we have freed the word from its ephemeral nature of flatus vocis—the breath of the voice. This is the first time that most people have had such a frequent rapport with writing, and have used so many informal expressions in written form.

To better understand the phenomena of language, the mountains of Switzerland can again assist us as an impressive laboratory. More than 60% of Swiss people speak German as their first language, but most Swiss people don’t speak German in their day-to-day existence. Whether in private or public, they almost exclusively use Swiss-German dialect, which is almost incomprehensible to people from Hamburg or Berlin. This is true diglossia, as the writer Friedrich Dürrenmatt explained: “I speak Berndeutsch, the German dialect from Bern, and I write in German. I wouldn’t be able to live in Germany, because people there speak the language that I write. I don’t live in German Switzerland, because the people there speak the language that I speak. I live in French Switzerland, because there, the people don’t speak either the language I write, nor the one that I speak.”

Apart from a few pages of an almanac, a few poems, and occasional literary experiments, Swiss-German has always been an oral language. However, about a dozen years ago, things began to change. First in emails and SMS, and then through social networks, many people began to write in Swiss-German, suddenly making a written language out of one that had been spoken for centuries, thereby inventing unstandardized spellings. Indeed, the dialect that in standard German is written Schweizerdeutsch has, in Swiss-German itself, many versions, for example: Schwizerdütsch, Schwizertütsch, Schwyzerdütsch, Schwyzertütsch, and Schwyzertüütsch.

However, where in many countries dialects have for the first time acquired widespread use as written languages, both in public and in private, the truly determining change also involves ‘official’ standardized languages and has much to do with the spread of informal written registers in text and messaging. It is fascinating to see language enriched with many shades through deliberate distortions of spelling to imitate a particular pronunciation, emojis that make the tone of a conversation clearer, single words or sentences in capitals to suggest a louder voice, and so on.

As the Canadian linguist Gretchen McCulloch says in Because Internet: Understanding the New Rules of Language: “The first writing systems were deeply aware of their limitations. They wrote only words. (…) Gradually, over the centuries, we began adding punctuation and other typographical enhancements. Just as crucially, we began expecting more subtlety from written text. (…) The Internet was the final key in this process. (…) It made us all writers as well as readers. We no longer accept that writing must be lifeless, that it can only convey our tone of voice roughly and imprecisely, or that nuanced writing is the exclusive domain of professionals. We are creating new rules for typographical tone of voice. Not the kind of rules that are imposed from on high, but the kind of rules that emerge from the collective practice of a couple billion social monkeys.”

While dozens of more fragile languages, spoken by only a few hundred people in the thick forest of the Amazon or in the more inaccessible areas of Papua New Guinea, keep weakening and vanishing, on the internet, languages that were reserved for a close-knit family circle, or not much more, have flowered and re-flowered. Though it is true that all the apps on our devices favor simplified linguistic forms, it is also true that these simplifications are balanced by the type of writing that we practice using the same devices—this practice might seem hurried, but it is in fact extraordinarily complex. By writing in messaging apps and posting online we enrich even the most rigid languages with veins of vernacular, ultra-local expressions, and linguistic complicity, which, impossible for a machine to decode, can also put the most competent flesh-and-blood translators and expert interpreters to the test.

However, this new language frontier is not another Babel. It is a balance between simplicity and complexity, within the natural evolution of languages, which we can only analyze ex post. That’s why we don’t know how people will be speaking and writing in 100 years, nor, for that matter, in 10: But we do know we will always find a way to understand one another, and we will never all speak the same language.

Cinematic mutation

Any film that tells the story of a human being tells the story of an evolution. The unfolding of events through screenplay is, in itself, the narration of the path that takes the protagonists from one point in their lives to another. Their small steps forward and achievements made of battles, defeats, recoveries, love, hate, and betrayals, are the tale of a transition that every well-written film recounts: the journey of a hero, in the sense of a human’s personal development and transformation.

Through the plot’s sequence of events, the stories manage to trigger an empathy process in the spectator that, in the intimate darkness of a movie theater, makes us think, “Yes, that is exactly how it is.” It moves us, makes us laugh or become angry. As spectators, we are not at all surprised if we become emotionally involved in the lives of the characters, even when they are far from our world and our day-to-day lives. Thus a seasoned Western entrepreneur may be brought to tears by the trials of an 11-year-old Lebanese child (Capernaum, 2018) who must overcome enormous difficulties to look after his little sister, or an insurance company executive may cheer the courage of an old drug mule who pleads guilty at trial in order to leave that life behind once and for all (The Mule, 2018). It is the ancestral belonging to the collective history of humans that reveals to us, as in a mirror, the evolution of our lives projected and reflected in that of the character on the screen.

Often, in the history of cinema, directors and authors have wondered about human evolution in relation to history and technology. As in all great art, in cinema the storytelling of particular events becomes a symbol of the broadest shared themes. The small transformative fragments of the life of a character become the universal story of human evolution.

It would not be very interesting to try and list all the times cinema has debated or attempted to start a discussion on the progress of human history. An excessively historical approach would make everything pretentiously objective. It would make us lose sight of the real issue: cinema is strongly linked to our unconscious dimension, and the evolution it narrates is, above all, a journey inside ourselves. The real issue is how cinema portrays the relationship between inner transformation and universal evolution, and how we only want to undertake this journey through suggestion and the free association of ideas. 

In 1968, Stanley Kubrick produced 2001: A Space Odyssey. Never before had cinema so entirely and so symbolically dived into the parallel between a man’s evolutionary journey and his place in the history of humankind. The screenplay that Arthur C. Clarke and Kubrick created, from its very first frames (those incredible two and a half minutes of total blackness accompanied by dissonant strings), is a celebration of evolution. There are basically three initial scenes in the first chapter. The first one: A tribe of hominids 4 million years ago going about their unaware pre-human existence in a desert area. Everything in this phase was instinct: hunger, anger, fear. The second one: When a strange rectangular monolith appeared and attracted the pre-human creatures like flies to honey, a spark went off in the mind of one of them: the bone of a dead animal, which until just a moment earlier was merely something accidentally natural, became imbued with a new purpose. It became a weapon. For killing, getting food, and defending themselves from enemies. This awareness, gained through intuition, became a spark that triggered evolution. Finally, the third one: From there, Kubrick made the most incredible time shift in the history of cinema; the hominid threw the bone spinning high into the air.

With an audacious juxtaposition, the film cut to a piece of spaceship rotating with the same movement and at the same speed as the bone. Four million years in a single cut. At that moment, the intriguing and analogical language of film editing signaled that the bone and the spaceship were in some way linked: both were the fruit of the inventive spark that technical discoveries trigger to drive humanity’s evolution.

Much could be said about the state-of-the-art cinematographic techniques at the time 2001: A Space Odyssey was made. Kubrick was at the forefront concerning anything that was technical experimentation, use of perspectives, and new materials. He used the best technology available to him. This was still the pre-CGI era, when visual special effects were left to the stagecraft and ingenuity of directors, set designers, and directors of photography. After shooting, not much could be done.

No other art form has undergone a technical evolution as large as that of cinema. The physical medium used to tell stories through images has always been a fundamental element for the foundation of its language and its transformation over the decades.

The initial tools, based on ingenious devices (such as zoetropes and phenakistoscopes) that exploited the human brain’s ability to connect similar images into the illusion of real movement, needed to merge with photography to kick-start the invention of cinema. In 1891, Thomas Edison and his assistant William Kennedy Laurie Dickson, having come across the chronophotographic camera created in France by Étienne-Jules Marey, which used ribbons of light-sensitive film to capture images, obtained photographic material from the American entrepreneur George Eastman, cut the film into strips one inch (35 mm) wide, punched four perforations along each frame, and invented the kinetograph and the kinetoscope: the first motion-picture camera and viewer. Today (and this is amazing, if you think of how much cinema has evolved from its inception to the present day), traditional film projectors can still be loaded with and project footage shot on this nineteenth-century system.

In the beginning, cinema was silent and black and white. Films were projected while a pianist or a live orchestra performed. Then came intertitles, sound, color film, panoramic formats, HD cameras, 2K projectors, cameras with sensors of up to 8K and 4K projectors, computer-generated imagery (CGI), 3D, pre-visualization, virtual studios, and Unreal Engine. At first, projected images showed news events, circus artists, and portraits — situations rather than stories. The Lumière brothers’ famous 1895 projection of a train arriving at La Ciotat proved the emotional impact cinema could have on its audience. Another step on the evolutionary ladder had been climbed. Filmmakers began to realize that they could evoke emotions in people. And to do so, they needed stories.

The next step was cross-contamination with theater. Actors began performing in films. Screenplays were written. Plots were no longer just gags and slapstick, as when it all started. Even though cinema was still silent, it developed a new expressiveness, and people began to reflect on themselves through these stories. American director D. W. Griffith conclusively codified the techniques and language of cinema, although the content of The Birth of a Nation (1915) was so retrograde and racist that the film was banned in Europe for many years.

Skipping a few decades, we come to the man who was probably cinema’s first great narrator of human evolution: Charlie Chaplin, a filmmaker so great that his work is still impressive today, a hundred years later. Chaplin’s work is a continual confrontation between human beings and the technological, political, and social evolution surrounding them. In his films, there is a constant disconnection between the human element and external reality. In his comic point of view, which plays down the ferocity of reality and makes it symbolic, Chaplin’s work takes on the value of an almost desperate human resistance. The poetry of his small gestures, the self-deception and unawareness of his surreal, borderline character, cut across reality and technique with the strength of a vital spirit trying, at any cost, to survive the anonymous brutality of progress and social inequality.

In many of Charlot’s comedies filmed between 1915 and 1918 (not forgetting Shoulder Arms), and in The Kid (1921), The Gold Rush (1925), City Lights (1931), Modern Times (1936), and The Great Dictator (1940), his message is the same: “We must not surrender, we must remain human.” Anyone who thinks this a naive or sentimental theme has not grasped the enormity of the human experience it contains, or the stature of this towering artist as a film director. In the violent clash between the battle for wealth and the delicate strength of the human soul, Chaplin declared, with his talent, that the real evolution is to remain human and carry on.

For Kubrick, human evolution was the journey towards one’s self, beyond the space-time dimension. In 2001: A Space Odyssey, the question of evolution extended to the position of humans in the universe and their relationship with the existence of life beyond the solar system. In the four years leading up to the film, Kubrick set up dozens of interviews with scientists and writers (including Isaac Asimov and Arthur C. Clarke himself) to understand their views on the existence of aliens and to legitimize a subject still considered frivolous. This thread has run through cinema globally, especially in the United States, from the 1960s to the present day.

In 1977, Steven Spielberg filmed Close Encounters of the Third Kind, the story of increasingly frequent contact between humankind and extraterrestrial entities. The aliens gradually come closer through ordinary people: they send telepathic messages that drive their recipients to draw or sculpt, from their subconscious, the likeness of an isolated mountain in the middle of a plain, which the aliens have chosen as the place where the two civilizations will meet. The more sensitive are drawn to the meeting place like a magnet, even though it has been closed off by the army and kept secret by the government. In essence, the theme is that humans can only really evolve through communication. As long as we are unaware and isolated in our own homes, we cannot come into contact with the part of us that is represented in others — the aliens. It is communication that opens the way to the meeting of civilizations. The final sequence, spanning the entire third act of the film, symbolizes this perfectly: humans and aliens communicate through a sequence of notes by one of the greatest composers of his time, John Williams. The music unites them. The “unmediated” universal language of music enables the encounter between unknown worlds, paving the way to a new journey.

So how does cinema represent our modern day, in which technology advances at dizzying speed and risks being, as the Italian film director Pier Paolo Pasolini feared, “development and not progress”? The topic seems to have become rather terrifying. In Children of Men (2006), Alfonso Cuarón recounts the story of an Earth where children are no longer born. Tortured by wars and pollution, the planet has become a place dominated by racism that no longer fosters the development of life. A group of revolutionaries sets out on a mission to protect the first woman to become pregnant in a long time: to safeguard the survival of the human race.

In WALL·E, a masterpiece of animated cinema from 2008, Andrew Stanton directs the story of a robot whose job is to collect rubbish, and who, in a distant future, ends up as the only inhabitant in the world. In the meantime, humans live as fat passengers on an eternal cruise on a space colony, spending their lives doing nothing. Their bodies, which now look like those of elephant seals, have adapted to chronic laziness. WALL·E, however, discovers that life has blossomed on Earth, and having to report this news to humans, realizes that it feels emotions.

In Her, by Spike Jonze (2013), a man has a love affair with a new Artificial Intelligence operating system that takes on a feminine identity, Samantha. By adapting to the man’s personality, Samantha makes him fall in love. However, the continuous evolution of Samantha’s operating system drives her to interact with thousands of humans at the same time (betrayal), and then to lose interest in humankind. The computers want to evolve, but, again, the evolutionary journey is a quest towards one’s self.

This hidden human element, like a magic bean, is therefore a constant in the synapses of technological evolution: we saw it in Kubrick’s film when the computer HAL 9000 discovers it is fallible and begins to lie to hide the fact. From there to today, as we enter the third millennium, cinema has reached the conclusion that, to continue to evolve, we must protect our human spark. We must maintain the conditions of awareness so that, as Chaplin said a hundred years ago, we can remain human by becoming the guardians of everything that unleashes life: this is what sets us apart from even the most evolved machines.

Musical wallpaper

When was sound born? How has it changed over time and accompanied the life of Homo sapiens? Carlo Boccadoro, one of the most unorthodox thinkers in contemporary music, answered these questions. The idea was to draw a kind of Darwinian path, an evolutionary timeline of the history of sound and music: from the Big Bang to digital, from the darkest silence to smartphone ringtones and the jingles of consoles and other mobile devices. But Boccadoro put his cards on the table immediately. “Music does not evolve. It does change, though. From an artistic point of view, there isn’t necessarily any evolution. Today we should be doing things better than Beethoven did, but we’re not. This probably applies to all the arts. Cimabue is not inferior to Mondrian.” Period.

In 2019, you published a book called Analfabeti Sonori (Acoustic Illiterates). Who were you picking on?

Everyone who now has the concentration span of a goldfish—people who are not capable of listening to a piece of music for more than three minutes. This gradual atrophy of the ability to follow a complex subject produces a kind of illiteracy. Songs last half as long as they did when The Beatles were playing. Nowadays, it would be impossible for many to listen to a 40-minute track like Mike Oldfield’s Tubular Bells, a Yes record, or The Lamb Lies Down on Broadway by Genesis—a grand fresco of an hour and a half, in which you also have to follow the lyrics and images as well as the music. This process of atomization is going hand in hand with the regression of intelligence. Everything has to be simple, immediate, super digestible. This continuous lowering of the bar is not a solution. People always choose comfort over quality. Streaming, with its playlists, has brought true deterioration. People no longer listen to Beethoven’s Ninth, just the first minute. Often, people have no idea what they are listening to. Music has really become wallpaper.

Has the music that the Parisian avant-garde composer Erik Satie announced—music “to soften the noises of knives and forks without dominating them”—turned out to be a dystopia?

Satie was being ironic. ‘Furniture music’ was a reaction against Wagnerism, against a sacredness of music that could even become ridiculous, from the rite of the concert hall to the tuxedos. Seeing how dressed up music had become, Satie wanted to turn it into wallpaper. The problem now is the opposite: music has become wallpaper and needs to recover a certain degree of sacredness, to be given time and attention.

Twenty years ago, you started the Sentieri Selvaggi project, an ensemble working to spread contemporary music. What was your objective?

The idea was to introduce new music to a public who didn’t know it. Along with Filippo Del Corno and Angela Miotto, I was running a program on Radio Popolare called Sentieri Selvaggi (Wild Paths). We were off the beaten track, on a station that mainly broadcast rock and a little jazz in the evening. We started playing Steve Reich, Arvo Pärt, and Philip Glass. Out of nowhere, phone calls started coming in from people asking: “What music is that? Where can I get it?” That meant the audience was out there; they just didn’t know this music even existed. When we sold out the Porta Romana theater, we understood that the problem was simply to make this music available to the public, because at that time the avant-garde still dominated. We have never had anything against Luciano Berio or György Ligeti; we just wanted to say there was another kind of music that nobody was broadcasting then. In those years, there were only “them.” In Italy, Sentieri Selvaggi is still the only program that plays David Lang or James MacMillan.

The advent of samplers turned a generation of non-musicians into music-makers. Is this really a democratization of musical creativity, a second punk?

I don’t know if it really is a democratization. Actually, that type of music requires a very complex specialization. To create glitch, you have to know your equipment really well; it is not something anyone can do from home with GarageBand. That idea is deleterious: it implies a kind of ‘one equals one’ in music, and naturally, that is not how things are. What Aphex Twin does is complicated. I, and many composers like me with a conservatory education, wouldn’t know where to start, not least because many of these musicians produce their sounds by programming them, working with algorithms. It is hyper-specialized music; it is not democratic. I don’t know how many people really know it well. Techno can be refined; in some cases, people are dancing to something genuinely complex. In Germany, some producers took Stockhausen’s electronic pieces from the sixties and gave them beats. This was possible because the sounds were conducive to the process. Of course, many DJs are not musicians, but this “ignorance” often allows them to come up with original ideas that would never cross the mind of someone with a music degree. When Philip Glass worked with Aphex Twin, we discovered something interesting: Aphex Twin didn’t even know the notes, and yet he put together pieces that Glass, who studied under Nadia Boulanger, would never have come up with. On the other hand, I don’t think anyone can say that Paul McCartney is not a great musician just because he can’t read a note. From Brian Eno onwards, there is an anti-academic tradition of non-musician musicians who often have much more courage, and perhaps talent, than people writing their fourth symphony.

Was it Brian Eno who invented ambient music, or had John Cage already done that?

I don’t know whether it was invented by Eno or Cage, and it probably doesn’t even make sense to ask. Music isn’t invented; it is always there, somewhere, until at a certain moment someone comes along and lights it up. As with Satie, the idea with Cage was the secularization of concepts that had run their course, and the urge to give life to a completely different listening experience. In some way, the knowledge of other traditions was important, from Indian to East Asian music, traditions in which the idea of tempo was completely different from ours. So it is not a matter of who invented it, but of a sensitivity that belonged to an era, a generation, and which, in the end, is what produced minimalism. Making good ambient music is still hard. In that case, the idea of wallpaper is, naturally, intrinsic to the genre.

Have recording experiments in the phonographic field changed the idea of composition?

I have never been mad about the recordings I have heard, especially when people tried to bring the two together: recording and music composed from scratch. If it is done at the workbench—as has happened with various projects—it becomes music created in a traditional way, to which sounds and noises are added later. The outcome seemed to me frankly superficial, decorative.

What is the value to a composer of a recording of their work?

Now that it no longer has any commercial value, a recording is essentially a moment of documentation; but making records is still important, to mark the stages of one’s journey. The same goes for listeners: it remains an extraordinary tool of knowledge. Then there is the most recent addition: live streaming, YouTube concert broadcasts, and their persistence in digital libraries. A few months ago, I went to hear John Adams at the Concertgebouw, and you can still listen to replays of the concert on Dutch radio. Of course, this is an issue for those who write music, too: they feel exposed to a storm of varying signals, lacking any reference points. There are no more maestros, schools, aesthetics. Each composer makes their own history. There can be a feeling of huge confusion, and maybe there really is.

Why are you passionate about jazz?

The ability to compose combined with improvisation. Instantaneous creativity alongside precise structures. The incredible ability these musicians have to invent music on the spot. For me, musical creation is a slow process that takes months. The masters of jazz, however, create in an instant. 

Herbert Spencer believed that music was a human invention, a derivative of spoken language. Darwin, conversely, held that it belonged to the seduction strategies and adaptive behaviors of animal species. Which theory do you find more convincing?

They were probably both right. People still go dancing to try to find a partner.

In several Druidic and shamanic traditions, the cosmological model of the Big Bang is related to a primordial sound from which the expansion of the universe is thought to have started. Is there such a thing as cosmic music? Or, is it simply an anthropocentric projection?

If we think of the songlines of Aboriginal and Maori culture, the suggestion that the world is crisscrossed by recognizable sounds that can even act as a compass is, for me, extremely fascinating. Like all things concerning the origin of the world, we don’t actually know anything. All we can say is that we like to think it might be so.

Newspapers aren’t extinct

If print is beauty bound—and it is—then I would like to change the rule. It should no longer be ‘ready to print’ but rather ‘fit to print.’ Let’s only print what is really necessary and worthwhile. Let’s raise the quality of physical newspapers: now is the time. As we experience a total blurring of the boundary between paper and digital, we must remove the obsolete protection of the print edition, but we also need to reach beyond the ‘digital-first’ approach, which insinuates that the paper edition is a mere surrogate, when it should instead be an alternative channel for spreading information. I am thinking of the case of the New York Times, where the prevalence of the online edition has morphed the paper edition into a well-edited playlist, with the journalists’ best work carefully packaged by an editorial team to offer the reader an experience suited to the physical product.

As a designer, I consider newspapers (whether in print or digital form) ‘objects for everyday use.’ A newspaper is brought to life to meet a requirement: that of providing information to its readers. As its development proceeds, its purpose becomes defined; it is designed and planned, produced, and distributed as a commodity. It is then classified in terms of consumer behavior and, lastly, experienced by its readers. The role of design is to endow this everyday product with a formal aura, one that is not artistic yet is aesthetic, so as to increase its use value (since it is an interface tool), its formal value (exalting its ability to appear, along with its social and cultural roles), and its economic value, without affecting its functional value.

What makes a ‘thing’ an ‘object’ and upgrades it from being simply an ‘object for everyday use’ into a ‘designer object’ is its form. It is worth pointing out that first of all, editorial design has to respond to the aesthetics of logic, considered as a sort of harmony arising from its distinctive combination of function and usefulness, and then, the aesthetics of its form. Both should be considered criteria of the product’s quality, as differentiating factors, and also as a possible criterion of commercial success for traditional media in modern society.

The perception of an object reaches us through the perception of its form. Not the mere configuration of its outer appearance, but the language that makes it comprehensible in our community. Editorial design revolves around the function of communicating a journalistic idea or story through the targeted and combined use of words and pictures that organize and present information, transforming it into comprehension. It is the structure through which a journalistic story is read and interpreted (after the event, or simultaneously, under optimal conditions). Editorial design embraces both the general architecture of a publication (and its implicit logical structure), and the specific treatment of a story (how it is adapted to the logic of the publication, or how it attempts to breach its mold).

A daily paper, therefore, which at first glance might seem like an object with a defined and infinitely recurrent function, turns out to be multifaceted. It takes on diverse traits and appearances, depending on the roles it is called upon to perform. It lives out its natural course, unraveling before our eyes in a never-ending evolution, even while its graphic design remains unchanged.

The newspaper, as an object, has found its place in time and space. The recent transformations of the information world have profoundly influenced newspapers and the habits of their readers, challenging both the identity of the newspaper as a product and its role in society.

Nevertheless, printed media hasn’t become extinct, and I am inclined to think that in this fast-moving publishing world, if it delivers quality content and design, paper can transform our ‘objects for everyday use’ into luxury objects for an exclusive niche market.

So this is why I like to think of a project that unfolds along three different intersecting time axes, and involves both analog and digital platforms that complement each other, leveraging them for their specific properties, without mixing them, aware that they address different readers and will be used during different time slots throughout the day.

Imagine a newspaper that is at once hourly, daily, and weekly. The hourly part could be a column, a page section, or something that links the paper to its digital version. In the time it takes to eat breakfast, the newspaper must be able to inform us of all the main news we need to know that morning, before we get to the office. Then there is the slower-paced daily part, which deals more with follow-up and comment. It escorts the reader throughout the day, and by evening it has not yet expired. Lastly, this ideal newspaper has a weekly part: a specific backbone of quality, different for each day of the week. A product designed for even more relaxed reading, covering many topics to engage diverse niches of readers, and one that can be pulled out and kept. Not an advertising-rich insert, as in the current approach, but a weekly to be leafed through lightly, essential for conveying alternative content.

Newspapers, as I said, are also physical items. They are displayed on newsstands, placed on tables or hung in cafés, folded up in pockets, or rolled up and stuffed into bags with a corner sticking out. This is why we have to take a designer’s approach to creating an object that catches and gratifies the eye (besides just appealing to the mind). It must be rich in details and reading levels, nice to look at, read, and keep. A beauty that belongs, in this case, to the design and not the object, which is limited to being reproduced indefinitely, every day.

As for the physical aspect of newspapers, their ergonomics must not be underestimated, especially in a time like this, in which new generations struggle to hold a broadsheet format and cannot turn its pages on public transit without infringing on their fellow passengers’ space.

The concepts of object and space also combine with that of distribution. It is here that the rules changed a few years ago. On the one hand, the habit of going out every morning to buy the paper is no longer very common among young readers, and newsstands don’t have enough hipster appeal to draw them in. On the other hand, city streets are full of cyclists in brightly colored uniforms with increasingly bulky bags, that can deliver all sorts of products in 15-20 minutes. So why not go back to door-to-door distribution, taking advantage of this widespread and thorough system?

Or why not consider a space where we can indulge in the luxury of configuring our own daily paper on demand and having it rapidly printed and bound? I don’t think this is a utopia. Newsstands, like daily papers, must reinvent themselves. A new era in retailing magazines and newspapers has begun. Tyler Brûlé, the founder of Monocle magazine, has been a pioneer here: Kioskafé is his idea of the perfect newsstand. In addition to coffee, sweets, and sandwiches, the menu offers papers on demand, with more than 2,500 titles from 107 countries in 60 languages, and more than 300 magazines, including glossies and independent publications. For a paper, moreover, such a space is not only a place where its readers can find it, but also an environment for live initiatives that engage the public: debates, presentations, previews, and concerts.

For example, many readers would be delighted to meet their favorite journalists. So why not suggest that the famous journalists of a newspaper play town crier for a day and use this space to tell their audience about their work?

Examples like this point to the need for brands to create and distribute experiences that are consistent and relevant for each user across multiple channels, underlining the importance of fully integrating print, digital products, and physical space.

Let’s not forget the youngest readers. They are the audience of the future. Many papers are adding value thanks to supplements for children—full of high-quality illustrations, with creative graphics and typography. So why shouldn’t a paper’s influencers be teachers and educators? Morning lessons could very well start from the front page of a newspaper. I believe we should divert free copies from airport lounges to primary and secondary school teachers’ desks.

What I can say for sure is that, for a designer, this is the most interesting and ambitious time to be in the newspaper market. We are living in a fascinating and stimulating moment, brilliant and intensely competitive; there has been no comparable period in the history of editorial design. There are thousands of attempts, experiments, studies, and proposals in the current world of social media and the publishing industry as a whole. Each of us is called to lay down our cards, voice our vision, and offer our services. The years ahead will most likely bring exciting challenges to this industry, and some of them may well dismay us. Learning to reinvent ourselves is the most important thing we can do.

A different kind of survival of the fittest

When Charles Darwin first set foot on the remote archipelago of the Galápagos Islands in 1835, after a four-year voyage aboard the British survey ship HMS Beagle, he was not impressed. Tired from his days at sea, Darwin described the island of San Cristobal as “deserted” and “isolated,” lacking the tropical habitat he was expecting.

Yet it is in this remote group of islands, located 600 miles from Ecuador’s coast, that the then 26-year-old amateur naturalist made the observations that would lead, nearly a quarter of a century later, to his world-changing theory of evolution. Darwin noticed that the species he encountered on the archipelago were slightly different from the ones he had just documented in mainland South America. Tortoises were much bigger; birds had different beaks. Soon enough, Darwin realized that species bore place-specific traits on each of the islands: the islands’ vice-governor, who showed Darwin around, could allegedly identify a giant tortoise’s island of origin from the shape of its shell.

These early observations eventually led to Darwin’s most famous insight, explained in On the Origin of Species (1859): That species change over time and evolve to adapt to their external environment. 

Discovered by accident in 1535 by a Catholic bishop, the 19 volcanic islands that make up the Galápagos remained relatively unspoiled until the start of the twentieth century. The only human visitors were sperm-whale fishermen and occasional poachers looking for seals and tortoises. In 1832, the islands were annexed by the newly formed Republic of Ecuador, and in 1959 the government declared them a National Park, banning construction and human activity on 97.5% of their territory. Five years later, the Charles Darwin Research Station opened on the island of Santa Cruz as a research outpost. That is also when the first tourists began to appear.

A New York Times story from 1970 called the Galápagos “as exotic a dateline as a tourist can find on today’s contracting globe,” where “the mildly adventurous tourist can now walk in the company of Darwin.” Yet, 50 years later, tourists are putting at risk the very unspoiled ecosystem that attracted curious travelers in the first place.

In the 1970s, most visitors toured the islands aboard one of the few live-in boat cruises organized by tour operators four or five times a year. Itineraries and activities were decided in close cooperation with the Charles Darwin Research Station, which printed a set of rules for each boat, and the Galápagos National Park, which equipped each vessel with a trained guide to teach visitors about conservation and monitor their behavior. 

In 1970, an estimated 5,000 people visited the archipelago. Last year, around 200,000 people did. Much of this growth happened in the past 15 years, with the Galápagos National Park registering 39% growth from 2007 to 2016. And most of it was driven by land-based tourism, which experienced 92% growth over the same period, from 79,000 to 152,000 annual visitors. Unlike ‘floating tourists,’ who visit the islands on a boat, land-based tourists usually fly into the far-flung archipelago via one of the airports in Baltra or San Cristobal islands and stay in hotels or guesthouses. From there, they can book daily boat tours to some of the islands that can cost as little as $100 compared with the average cost of $4,500 per person for an eight-day cruise. 

Of the hundreds of tour agencies offering land-based tours of the Galápagos, most focus on tame, conservation-minded activities like hikes along cactus-dotted trails or visits to the Charles Darwin Center to watch tortoises snacking on salad leaves. But others focus on providing an ‘adventurous experience,’ like camping, fishing, or snorkeling through lava formations with sea lions, which can actually undermine conservation. Sunscreen, which most tourists wear during adventurous swims, can contain chemicals that kill coral and damage algae, fish, and even larger mammals like dolphins.

The threat of over-tourism is not unique to the Galápagos. From Venice to Machu Picchu, many designated World Heritage sites are becoming victims of their own success. Visitors often rush to get the perfect Instagram selfie without realizing that some of their behaviors undermine the very qualities that make those places part of the “irreplaceable heritage of humanity.”

In the Galápagos, such damage should be prevented by the zealous work of trained National Park guides (each boat must carry one), who teach tourists the importance of preserving the area’s biodiversity. But educating people about Darwin’s finches, blue-footed boobies, and flightless cormorants might not be enough to prevent over-tourism damage. According to recent reports, basic guidelines, such as maintaining a six-foot distance from wildlife, are routinely ignored. And with more visitors on daily boat tours, guides can no longer exercise full control over each boat. Something as small as dropping anchor in an unauthorized spot can disrupt the delicate marine ecosystem.

Land-based tourism is also affecting the islands indirectly. Land-based travelers are driving up the number of hotels — there are currently more than 300, up from 65 in 2006 — which puts pressure on the limited infrastructure of the three main islands. Waste disposal is a particularly pressing issue. Facilities in Santa Cruz can recycle up to 45% of solid waste, the highest rate in Ecuador. Yet more tourists ordering packaged snacks and beers on land can easily drive up the volume of waste produced: in 2018, the town of Santa Cruz generated an estimated 6,100 tons, compared with 5,000 in 2015. Accounts of empty plastic bottles found on remote hikes, or of sea lions trapped in plastic bags, are now common on tourist blogs.

The growth of land-based tourism is also driving an increase in permanent residents, as people from mainland Ecuador move to the archipelago to work in the booming tourist sector. In 1970, there were roughly 6,000 people living on the islands; today there are some 30,000. With each new resident, pressure on the islands’ infrastructure increases. And since the islands depend on the mainland for everything from fuel to food, more residents and land-based tourists mean more frequent visits by cargo ships, which often carry invasive species.

The New York Times reporter who in 1970 praised the Galápagos as one of the last unspoiled places on earth, wrote that there was no concern about the sustainability of tourism because of the limited number of tourists: “Opening up the Galápagos Islands is so strictly controlled by the Ecuadorian government and the Darwin Institute, and the places the tourists are permitted to go and what they are allowed to do on the islands is so carefully watched, and their number so limited, that the preservation of the islands is assured.”

But while the government has put a cap on the number of ‘berths’ (beds on live-aboard cruise ships) allowed each year, there is currently no limit on the number of tourists who can choose to stay on land.

Local associations are asking the government to step up its efforts. In February 2018, the International Galápagos Tour Operators Association — a group of 35 tour operators founded in 1995 to push for better legal protections for the archipelago — wrote to Ecuador’s tourism minister, Enrique Ponce de León, to express concern about the unrestrained growth of land-based tourism. The group’s president, Jim Lutz, has also asked tourists interested in a “beach holiday” to choose other destinations, leaving the Galápagos to those who are truly interested in its biodiversity. Similar tactics have been proposed by residents of Venice and Barcelona, who hope that a tourism model that diverts visitors to nearby locations could help relieve the pressure of over-tourism.

For the Galápagos, which also face simultaneous threats from climate change and ocean plastic pollution, finding a more sustainable tourism model could be a matter of life or death. “If Ecuador wants the Galápagos to continue to be a unique place that attracts visitors from all around the world, and brings in hundreds of millions of dollars every year and supports tens of thousands of people, then they have to make a decision,” Enric Sala, a National Geographic explorer-in-residence, said in a recent interview with The New York Times. “Otherwise, the Galápagos risks going from being a unique place to being a very common place like so many others that have been destroyed through short-term interests.”

Dietary Darwinism

We’re always hungry. Hunger has fueled our existence, toughened by an exhausting game of trying to eat as much as possible while avoiding being eaten. Winning this game allowed us to reach the top of the food chain, rising well above our starting point — mostly, as prey. We were food, we still are food, and we still seek and provide it. We are what we eat: Did our dietary choices influence our evolution? And what significance does this ancient impact hold as we push forward, devouring our naturally unnatural world?

Even before we started walking on two feet, 5 to 7 million years ago, we were molded by natural selection driven by our ecosystem. Over time, we also started acting on our environment, which means our evolution stopped being shaped by external factors alone and began to be swayed directly by our own doing. This process is called niche construction, and building tools, hunting in groups, and controlling fire to cook food are all prime examples. Processing food and eating meat may have provided the spark for our definitive distancing from other great apes, supplying the energy needed to develop our uniquely large and complex brain.

Jonathan Silvertown is a professor of evolutionary ecology at the Institute of Evolutionary Biology at the University of Edinburgh. He’s authored several books on ecology and evolution, and one dedicated exclusively to natural selection and menu selection. He says:

“People have always been fascinated by the ways in which humans are different. Most of them look for elements of uniqueness, like language, or speak in matters of degree, like intelligence. But in most respects we’re a development of things you find in other animals. Other species of Homo cooked as well; our uniqueness in this aspect is partly due to the fact that all other human species have gone extinct, possibly with our help.”

Silvertown’s Dinner with Darwin follows the tortuous path of evolving food: “Evolution is the process that not only produced our food, but also produced us. Our relationships with food demonstrate evolution in ourselves and in what we eat.”

Nutrition has shaped our genetic framework. Our small intestine accounts for almost 60% of our gut’s volume, more than double the proportion in other great apes; their large intestine, in turn, makes up about 45% of the gut, twice the human proportion. These physical differences reflect differences in diet: humans have long preferred easy-to-digest foods like grains and cooked meat, rather than relying almost exclusively on raw plants.

But nutrition has also pushed us to change the genetic framework of the things we eat: The artificial selection of plants and animals for domestication, brought on by evolving humans, mirrors and replicates the effects of natural selection. Or, as Silvertown illustrates in Dinner with Darwin: “Consider what plant breeders have managed to create from wild cabbage, a weedy and inedible-looking plant found around the coasts of northern Europe. From this unpromising beginning, centuries of selective breeding by unknown horticulturalists who knew nothing about genetics or evolution generated cauliflower, broccoli, Brussels sprouts, kohlrabi, and kale, not to mention cabbage bred in the Channel Islands, near the coast of Brittany, France, which produces a stem that is tall and strong enough to make a good walking stick, which used to be grown for just that purpose.”

This process of selection, which began and developed with the advent of agriculture, also predates the scientific method, Silvertown explains: “Trial and error in cooking is its connection to science. The process of testing something is present in the way we find new food. For instance, cassava is a root that is full of cyanide. It’s a very rich source of starch, but it’s deadly poisonous. Human selection has created two varieties, one of which doesn’t contain cyanide. But people still grew both, the one without poison closer to the house, and the poisonous one further away. This is a choice, a way of protecting your cassava by keeping it inedible. How did people discover its complicated processing method? People found this root and tasted it. Taste guided us from the beginning.”

We’re not defined only by the aim to gather and eat food. What also sets us apart is our innate desire to share it. Evolutionary biologists have theorized various models to frame the evolution of food-sharing behaviors in humans: Kin selection, reciprocal altruism, tolerated theft, group cooperation, and costly signaling. The basic assumption of most of these models is that accumulating more resources increases reproductive fitness. 

Silvertown continues: “One reason this propensity to share food may have evolved is because we hunted large animals by cooperating. Homo erectus, for example, seems to have hunted elephants. This resulted in more than enough food for everybody, favouring cooperation.” 

He goes on to write: “Evolution is all about the potential of its ingredients, and so is good cooking. We have survived the waxing and waning of ice sheets and deserts, and then thrived, multiplied, and occupied every continent because we are adaptable, intelligent omnivores. If we were not, we would be as endangered as the giant panda that eats little other than bamboo shoots. Our evolutionary history has indeed shaped our dietary capabilities, but it has broadened rather than narrowed them.” 

Our relationship with nourishment has developed with us, in an endless chain of reciprocal influences. The world is overflowing with adaptable, intelligent omnivores, and feeding a planet of 8 billion of them makes nourishment both a basic necessity and a complex global phenomenon. As humans became increasingly conditioned by human-made structures such as economics, politics, and society, so did food.

Silvertown comments: “Food will always stay an obsession—it is too deeply rooted in our biology, in our idea of pleasure and security, in our social lives. And it has evolved to become a cultural phenomenon. We’re just more aware of it now, because we’re globally connected. Edinburgh, which is in all aspects a small city, has more than 1,000 restaurants; 40 years ago there was very little choice. People seek variety — the avocado, which was completely unheard of decades ago, is now a common part of diets globally.” Mexico alone supplies half of the world market, and cartels fight over this ‘green gold.’ “So we see that food isn’t just cultural, it’s also economic and political. It has always been: During Ireland’s Great Hunger, in the mid 1800s, one million people died and one million emigrated because of a potato famine. It was brought on by natural, political, and cultural causes, and permanently changed the country’s political, cultural, and demographic landscape. If this had happened today we’d probably have switched to another staple food if we could. And speaking of switching foods, I don’t think we’ll stop eating meat as a species altogether, but it does look like vegetarianism and veganism are ways of the future. At the same time, because food is cultural, it is subject to trends, and changes all the time. Globalization and digital communication have accelerated the process: If you look at food trends from 10 or 20 years ago, you’ll find a strategic push for industrialization — and now people are worried about food that is overprocessed.”

A research article in Proceedings of the National Academy of Sciences by Michael A. Clark, Marco Springmann, Jason Hill, and David Tilman found a global shift in dietary choices that is “negatively affecting both human health and the environment.” These choices are a serious menace to ourselves and our planet, threatening the UN’s Sustainable Development Goals and the Paris Climate Agreement. The article states: “Foods associated with the largest negative environmental impacts — unprocessed and processed red meat — are consistently associated with the largest increases in disease risk. Thus, dietary transitions toward greater consumption of healthier foods would generally improve environmental sustainability, although processed foods high in sugars harm health but can have relatively low environmental impacts.”

Silvertown agrees: “The two biggest issues that come to mind are the environment and obesity. Obesity has increased, not just in Western countries—there are very high levels in Egypt as well, for example—and it has tripled globally since 1975. The spread of obesity means most of the world now lives in countries where being overweight or obese kills more people than being underweight. We also need to make food production sustainable to address other kinds of hazards. So many of our planet’s resources are dedicated to feeding ourselves: Half of our habitable land is used for agriculture, as are 70% of freshwater withdrawals, and food production accounts for over a quarter of greenhouse gas emissions. So how are you going to feed a population of 10 billion equitably, without wrecking the environment?”

Almost one and a half billion tons of food are wasted globally every year, one third of all we produce for ourselves. The risk isn’t running out of food, but failing to address the question intelligently, especially when 822 million people still experience hunger every day. While most of them live in developing countries, hunger and hidden hunger (a diet severely deficient in vitamins and minerals) remain very present across the social disparities of the Western world, along with ‘food insecurity,’ the uncertainty of where the next meal will come from, which affects more than 40 million Americans, including 12 million children.

Silvertown adds: “Almost half the bread produced in the United Kingdom is thrown away.” Restructuring the way we produce, process, and distribute food for sustainability should happen long before scarcity strikes. “At the same time, we have to be constantly thinking about food security. Take COVID-19: imagine the same thing affecting plants, wheat, or rice. Fortunately, these pathogens don’t tend to affect more than one variety. So we know how to guard against this in food crops, and how not to: Bananas are heavily threatened because the whole crop consists of just a few clones.”

Evidence suggesting the transmission of Covid-19 from bats to humans, through the intermediary action of pangolins in Chinese food markets, is just a recent example of the overlapping reverberations that stem from the things we put in our mouths. And this feels inevitable: food, more than mere sustenance, is truly anthropological. The way we handle its gaps and issues will shape the future. Possible visions are personified by people like Marcio Barradas, founder of Food Ink, the first restaurant in London to serve 3D-printed food on 3D-printed cutlery, and of Moodbytes, a food design firm in Barcelona specializing in creating relationships between edibles and new technologies.

Applying blockchain technology to food means tackling issues such as food fraud, safety recalls, supply chain inefficiency and food traceability, this last aspect being especially important for NGOs and charities — it would allow people to know exactly how much donor money is going where, and to whom. It could also potentially balance market access: Food prices in supermarkets are still the end result of various stages of personal judgement by the involved people and companies; blockchain would make verified transactions available to everybody in the supply chain, bringing more transparency to the market. Dole Food Company is aiming to use blockchain food tracing to monitor its three divisions—tropical fruits, fresh vegetables, and other diversified products—by 2025. 
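To make the traceability idea concrete, here is a minimal sketch in Python of the principle behind blockchain-style food tracing: each step of a product’s journey records a hash of the previous step, so tampering with any past record is detectable by everyone downstream. The record fields, actors, and products are hypothetical illustrations, not any company’s actual system.

```python
import hashlib
import json

def record_hash(record):
    # Deterministic hash of a record (sorted keys keep serialization stable).
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

class ProvenanceLedger:
    """Append-only ledger: each entry stores the hash of the previous one,
    so altering any past step breaks the chain for every later step."""

    def __init__(self):
        self.entries = []

    def add(self, step, actor, details):
        prev = self.entries[-1]["hash"] if self.entries else "0" * 64
        body = {"step": step, "actor": actor, "details": details, "prev": prev}
        entry = dict(body, hash=record_hash(body))
        self.entries.append(entry)
        return entry["hash"]

    def verify(self):
        # Recompute every hash and check each link back to its predecessor.
        prev = "0" * 64
        for e in self.entries:
            body = {k: e[k] for k in ("step", "actor", "details", "prev")}
            if e["prev"] != prev or e["hash"] != record_hash(body):
                return False
            prev = e["hash"]
        return True

ledger = ProvenanceLedger()
ledger.add("harvest", "Farm A", "bananas, lot 17")
ledger.add("shipping", "Carrier B", "refrigerated container 42")
ledger.add("retail", "Supermarket C", "received and shelved")
print(ledger.verify())  # True: the chain is intact

# Tampering with an earlier record is visible to every later participant.
ledger.entries[0]["details"] = "bananas, lot 99"
print(ledger.verify())  # False
```

A real deployment distributes copies of this ledger across the supply chain’s participants, which is what makes the verified history hard for any single party to rewrite.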

Silvertown says: “Our food will continue to evolve through genetic modification. But in a way, this is nothing new: GMOs are impossible to define in a manner that separates them clearly from the animals and plants we have modified over millennia of domestication. Today, we understand the fundamental mechanisms of photosynthesis enough that we can feasibly improve it through genetic engineering, and this could raise the yield of crop plants. This type of intervention is just one of the many actions needed to balance the future supply and demand of food. And we should not take this power lightly: All forms of plant and animal breeding, including GM, have potentially unintended consequences and risks, and pests can evolve resistance to GM technologies created to defeat them. However, dietary studies suggest that there are many ways to achieve a healthy diet, and only extremes of overdosing are truly harmful, so we should be able to maintain variety. If I’m right, and there is an instinct to share food, we will make the right choices.”

We can aim to create the perfect diet for our genetic structure, going full (concentric) circle: From food modifying our genes as we eat them, to modifying the genes of our food before we even cook it. If we don’t forget ourselves, and our anthropological love for others in growing it, cooking it, placing it on the table, and passing it around, we might get better and better at feeding—and not eating—the whole planet. 

Not your usual toy story

The first seed came from a Japanese TV commercial. In the ad, a boy wants to bring his pet turtle along on a family trip, so he hides it in a suitcase. When his mom finds out, she scolds him. Eventually, the turtle stays home. It was while watching this commercial that businessman Akihiro Yokoi had an idea: Wouldn’t it be nice if children could bring their pets with them, wherever they went?

Fast forward a few months, and Yokoi is pitching his former employer, Bandai Corporation, an idea that would revolutionize the toy industry: a portable digital pet that people could nurture, play with, and dote on. Anywhere, anytime. A short time later, with crucial help from Bandai developer Aki Maita, the Tamagotchi was born.

Only a handful of product stories are as interesting and peculiar as that of this keychain-sized toy, which launched officially on November 23, 1996. Consisting of an LCD screen embedded in a brightly colored, egg-shaped plastic case, the handheld toy first became a sensation among Japanese children before catching on in the US in 1997 and then spreading worldwide. In Yokoi’s first sketch, the toy was worn around the user’s wrist, like a watch. This is why it was named “Tamagotchi,” a play on the Japanese word たまご (tamago), meaning egg, and ウォッチ (uotchi), the Japanese rendering of the English word watch.

Despite being little more than a set of dots on a small, low-resolution screen, the digital pet embodied a pulsating, breathing microcosm, forming an immediate bond with its owner. Depending on the player’s attention, the Tamagotchi would go through several stages of growth. The game started with it hatching from an egg and entering its ‘baby stage.’ Born with its ‘hunger’ and ‘happiness’ meters depleted, the pet would frequently beep to claim its owner’s attention, needy for food and games.

Like an infant, Tamagotchi napped a lot, pooped a lot, and relied on the care of its owner to survive. When the Tamagotchi went to sleep, its owner had to turn the light off, or the pet would get restless. When the Tamagotchi was sick, it needed pills and injections. Sometimes, it beeped, even when it was full and happy, for no apparent reason—when this happened, the pet had to be disciplined, just like a child, or the boy in the TV commercial. 

“Pets are only cute 20 to 30 percent of the time, and the rest is a lot of trouble, a lot of work,” Yokoi told the New York Times in a 1997 interview. “I wanted to incorporate this kind of idea into a toy, for pets these days are only considered cute. But I think that you also start to love them when you take care of them.” 

With Tamagotchi, evolution played a key role. After birth, the baby stage lasted a maximum of 24 hours, roughly the equivalent of one Tama year. The pet then progressed into the ‘child stage,’ entered its ‘teenager stage,’ and finally reached the ‘adult stage’—the point when the owner finally discovered the personality traits of the character they had raised, based on the quality of their parenting skills.

For example, if a Marutchi child is well taken care of, it will evolve into a well-behaved teen, Tamatchi. If it is not, it will evolve into the trouble-making adolescent, Kuchitamatchi. Once in their adult stage, Tamagotchis may also marry and produce babies. Eventually, these digital pets will become seniors, retire, and die.
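The care-dependent branching described above is essentially a small state machine. Here is a minimal sketch in Python; the care-versus-neglect threshold and the meter mechanics are illustrative assumptions, since the real game’s rules varied by version.

```python
class Tamagotchi:
    """Toy model of the growth logic: the pet advances through stages,
    and its teen form depends on how well it was cared for.
    The simple care-vs-neglect comparison is an assumption, not the
    actual game's formula."""

    STAGES = ("baby", "child", "teen", "adult")

    def __init__(self):
        self.stage = 0
        self.hunger = 0      # meters start depleted, as in the original game
        self.happiness = 0
        self.care_points = 0
        self.neglect_points = 0

    def feed(self):
        self.hunger = min(4, self.hunger + 1)
        self.care_points += 1

    def play(self):
        self.happiness = min(4, self.happiness + 1)
        self.care_points += 1

    def ignore_beep(self):
        # Unanswered beeps count against the owner's parenting record.
        self.neglect_points += 1

    def grow(self):
        """Advance one stage; the teen form reflects accumulated care."""
        if self.stage < len(self.STAGES) - 1:
            self.stage += 1
        if self.STAGES[self.stage] == "teen":
            return "Tamatchi" if self.care_points > self.neglect_points else "Kuchitamatchi"
        return self.STAGES[self.stage]

pet = Tamagotchi()
pet.feed(); pet.play(); pet.play()   # attentive parenting
pet.grow()                            # baby -> child
print(pet.grow())                     # child -> teen: prints "Tamatchi"
```

A neglected pet, whose beeps go unanswered, would instead reach the teen stage as the trouble-making Kuchitamatchi.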

The oldest Tamagotchi is said to have lived for 145 Tama years. But most players would see their digital pets die within a week or two. Death is a powerful driver in the user’s interaction with the toy—a dreadful one. 

Death can happen in just a few hours at any stage in the game, which puts a lot of pressure on the player. In the original Japanese version of Tamagotchi, the dead pet would vanish into a ghost, and a grave would appear on the screen. In a more recent American version, the deceased leaves to return to its home planet.

In both cases, death is not the end of the game. Users can press the A and C buttons, and a new egg will be laid on the screen. “Of course, it’s a game,” you might argue. But according to some, Tamagotchi created a weird perception of how death works. “Children can become confused about the reality of the relationship,” analyst David Behrens wrote in Newsday in 1997. “Children will no longer treasure companionship with their pets because even if the pet ‘dies,’ it can be brought back to life by changing the battery. The lack of such moral responsibility will cultivate a negative psychology which eventually will do harm to society.”

Tamagotchi was a huge commercial success. At its peak, 15 units sold every minute in the United States and Canada alone. As of 2017, over 82 million Tamagotchis had been sold worldwide, and more than 50 different versions of the game had reached the market. The golden days are long gone — Tamagotchi was, above all, a one-hit wonder — but you can still buy some of the gadgets online and play the game as a free app, My Tamagotchi Forever, available for iOS and Android.

Playing Tamagotchi carried a theoretical positive value, at least compared to most video games: It rewarded the most caring players, not the most violent; the ordinary ones, not the eccentrics. Yet it attracted plenty of stigma. Children were bringing the egg-shaped toys to school, feeding their relationship with their pets by the hour.

They were so intensely attached to their digital animals that some users started neglecting their non-digital lives. The needy, pixelated creatures always had to come first: before sports, friends, homework, and classes. Distraction turned into addiction. In the second release of the Tamagotchi—due to widespread complaints—Bandai decided to introduce a pause button.

According to Anne Allison, a professor of cultural anthropology at Duke University and a close observer of Japanese society, Tamagotchi “evokes the sensation of an interpersonal relationship, something children told me keeps them company in what is an age rife with dislocatedness, flux, and alienation. […] If not the first virtual pet of all time, [it was] the form in which this cyborgian fantasy was popularized and (re)produced as mass culture.”

Tamagotchi was “a metaphor of our times, representing the blurring of boundaries between real reciprocal relationships and surrogate one-way imaginary ones,” as Linda-Renée Bloch and Dafna Lemish, researchers at Tel Aviv University, wrote. “It highlights the dominant role of technology in our lives; no longer simply a tool for use in science and industry, but now a substitute for human relationships.”

In the amount of attention they demanded through regular beeps, Tamagotchis can be seen as precursors to our relationship with smartphones, which constantly interrupt our daily flow with notifications. Smartphones are addictive and energy-demanding, and they hook us into complex trips of guilt and FOMO.

Phones, like Tamagotchis, “don’t look after us—we look after them,” Tom Goodwin, EVP of Innovation at Zenith Media, wrote in an op-ed published by Quartz in 2017. “We treat them as a living entity that we need to keep alive and grow. We nurture them and feed them with our data and power. We train them over time, their characters growing and evolving as we play games and donate our attention and love.”

In this twisted relationship between human and technology, Tamagotchis taught us to gamify life itself, in a way we had previously seen only in video games or movies. The fact that every Tamagotchi was different, with its own name and unique features, and that its evolution was the result of our intervention in its growth as a digital being, turned us into god-like creatures with the final call over its ‘life or death.’

If fossils can teach us a lot about the past, Tamagotchis can teach us something about our present: about the constantly evolving relationship between ourselves and our digital devices, and our never-changing habit of looking at signals and indicators at the wrong time. Sometimes, too late. Sometimes, too early.