It all happened overnight

Work hours, the value of one’s personal time, existential discussions with friends, the courage to face tough decisions, even at the cost of leaving everything behind. To strive for schools to make a quantum leap forward, not only in their infrastructure but also in their content. To update teachers, because goodwill alone is not enough to raise generations of well-prepared students; sloppy goodwill is neither science nor method. To recover the lost memories of the elderly, to produce journalism worthy of the name, and to hunt down fake-news makers with a blowtorch. To stop tolerating climate change deniers. To double public health funds by setting up European collaboration programs. To reopen cinemas, theatres, bookshops, and libraries, providing adequate support for the necessary ‘crap’ that many consider culture. To wall up gyms. To demand rigour and truth from politics. To change jobs when one is unhappy with one’s own, to immerse oneself in nature, to play more, much more, with one’s children.

Covid-19 has turned the agenda of our priorities upside down. It has shown us a new path that not all of us would like to follow. Some will say that after all, nothing has changed and that when we finally get the vaccine, life on planet Earth will flow again as before, with the same vices and virtues. Well, don’t believe them. Something huge has happened, and on closer inspection, it has happened inside us, in that intimate recess where emotions and reason collide to give a direction to our lives. Perhaps it is worth noting that in the era of exponential technology and enhanced futurology, we haven’t been able to predict a damn thing that happened in these past few months. The efforts of certain digital nomads are honestly ludicrous. Dozens of globetrotting speakers have resumed traveling and lecturing on the future that awaits us around the corner: they have updated their PowerPoints and added a couple of slides on Covid-19. Other than that, everyone seems to have an Elon-Muskian optimism of willpower.

Over the last 3,000 years, at least 13 pandemics have ravaged our planet. Almost all of them were generated by ‘zoonoses’, leaps from animals to humans through successive genetic mutations of viruses. And a new resistant virus could surface again tomorrow. As Dr. Rieux, the protagonist of Albert Camus’ novel, The Plague, says: “Being alive always has been and always will remain an emergency; it is truly an inescapable ‘underlying condition.’” As if to say: The plague never stops being among us; when we defeat it, we must be aware that it will return. Covid-19 has eliminated the weak elements of the biological chain, reset work to zero, and starved our affections. Our weakness and unpreparedness in the face of the unexpected represent the most potent evolutionary element of our species, at least socially. No matter what we do.

Digital evolution

In his 1994 book, The Astonishing Hypothesis, Francis Crick posits that “a person’s mental activities are entirely due to the behavior of nerve cells, glial cells, and the atoms, ions, and molecules that make them up and influence them.” This hypothesis, according to Crick, is the basis for the scientific study of the complex behaviors that emerge from brain activity: given this theory we can try to understand intelligence, consciousness, even free will, using the methods and tools of science. 

From this point of view, the scientific problem becomes: how do these cells in the brain actually work? The human brain is very complex and is the result of a few hundred million years of evolution: from the first mammals, which appeared around 250 million years ago, to the appearance of Homo habilis about 2.5 million years ago. Unfortunately, the evolutionary clock cannot be turned back to see which features appeared at which stage and how they became what they are—nor are there any alien ecosystems available for comparison. So one can ask if technology can help: Can we use our increasingly powerful computational tools to simulate the evolution of life and gain some deeper understanding of how the brain has evolved to become what it is now?

Artificial life became a recognized discipline in the 1980s, after it was originally introduced by computational theory pioneers like Alan Turing and John Von Neumann in the 1940s and 1950s. As a multidisciplinary field, artificial life seeks to simulate lifelike processes within computers, for example, by creating highly simplified artificial ‘aliens’ and comparing their development and behavior to real biology, with the goal of discovering something of life’s essential character—including the emergence of complex (and even intelligent) behaviors.
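The flavor of the field can be conveyed by its best-known toy model, Conway’s Game of Life, in which life-like behavior emerges from a handful of local rules. The sketch below is purely illustrative—a classic textbook example, not anything specific to the projects discussed here:

```python
# Conway's Game of Life: a minimal artificial-life sketch in which complex,
# persistent behavior emerges from simple local rules.
from collections import Counter

def step(live):
    """Advance one generation; `live` is a set of (x, y) cell coordinates."""
    # Count live neighbors for every cell adjacent to a live cell.
    counts = Counter(
        (x + dx, y + dy)
        for x, y in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is alive next generation on exactly 3 neighbors,
    # or on 2 neighbors if it is already alive.
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

# A 'glider': five cells whose pattern travels diagonally forever.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
gen = glider
for _ in range(4):
    gen = step(gen)
# After 4 generations the glider reappears, shifted one cell down-right.
assert gen == {(x + 1, y + 1) for x, y in glider}
```

Nothing in the rules mentions ‘movement’, yet the glider propagates indefinitely—a tiny example of the emergent behavior artificial life looks for.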

One of the most fascinating artificial life projects is OpenWorm, which is coordinated by Stephen Larson and is open to the collaboration of hundreds of scientists around the world. The goal of OpenWorm is to create the world’s first digital organism that uses the principles of life to achieve existence on a computer. As their test case, the team chose the Caenorhabditis elegans worm, a one-millimeter-long roundworm whose entire body consists of 959 cells, of which 302 are neurons that form approximately 10,000 nerve connections (by contrast, the human brain contains around 86 billion neurons and 100 trillion synapses). C. elegans is the world’s best-understood multicellular organism and was also the first multicellular organism to have its genome sequenced, in 1998, giving biologists a full understanding of how it develops from embryo to adulthood. The tiny worm is also the only organism for which a connectome—a 3D map that shows how each nerve cell is connected—has been made.
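To make the term concrete: a connectome is essentially a directed, weighted graph of neurons and their connections. A minimal sketch follows; the three neurons and weights are invented for illustration and are not taken from the real C. elegans wiring data:

```python
# Toy model of a connectome: a directed graph mapping each neuron to the
# neurons it synapses onto. The neurons and weights here are hypothetical;
# the real C. elegans connectome has 302 neurons and ~10,000 connections.
from collections import defaultdict

class Connectome:
    def __init__(self):
        self.synapses = defaultdict(dict)  # pre-neuron -> {post-neuron: weight}

    def connect(self, pre, post, weight=1.0):
        self.synapses[pre][post] = weight

    def connection_count(self):
        return sum(len(posts) for posts in self.synapses.values())

    def downstream(self, neuron):
        """Set of neurons receiving direct input from `neuron`."""
        return set(self.synapses[neuron])

# A made-up three-neuron reflex arc: sensory -> interneuron -> motor.
wiring = Connectome()
wiring.connect("sensory", "interneuron", 0.8)
wiring.connect("interneuron", "motor", 1.0)
wiring.connect("sensory", "motor", 0.3)
assert wiring.connection_count() == 3
assert wiring.downstream("sensory") == {"interneuron", "motor"}
```

Mapping such a graph is only the first step; simulating the dynamics that run over it is the hard part, which is what OpenWorm attempts at full scale.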

OpenWorm is another example of the state of the art of many advanced projects in artificial intelligence: The results are interesting, including the ability to use the worm simulation to drive the behavior of a simple LEGO robot, as shown in a 2017 breakthrough, but the simulation, even of such a simple organism, is still far from being completely realistic. As in many other fields and subfields of artificial intelligence, progress is fast, but the goal of deeply understanding complex behaviors—even of very simple organisms like C. elegans—is still in the future.

Literary catharsis

They say that literature foreshadows reality and that writers, the truly great ones, have the gift of precognition—one of those ‘superpowers’ which no one would ever choose to have and which some, starting with Cassandra, daughter of King Priam of Troy, would have gladly done without. Recently, however, a prophetic writer like Margaret Atwood, who is experiencing the male chauvinist theocratic apocalypse she predicted in The Handmaid’s Tale more than 30 years ago, said something enlightening: “Writers do not predict anything. Their job is to talk to their readers to ask them if this is really the world they want to live in.” Meaning that we will always, in any case, be in our own hands.

In Italy, during the initial days of the spread of what was to become the Covid-19 pandemic, Albert Camus’ The Plague (1947), José Saramago’s Blindness (1995), Giovanni Boccaccio’s Decameron (1353), and Alessandro Manzoni’s The Betrothed (1827) were among the most read works of fiction. Clearly, their appeal was due to the fact that they deal with a subject—frightening epidemics caused by more or less real or plausible viruses—which is tragically very timely and capable of rousing a fair dose of curiosity. This despite the books ending with scenarios that are in no way reassuring, and no less anguishing than those of horror or science fiction novels.

Camus—a son of humble pieds-noirs, French Algerians—published The Plague when he was already an esteemed intellectual with strong anti-fascist feelings. He had broken his ties with the French Communist Party and with his friend Sartre, had authored The Stranger, and was a fervent supporter of democracy in Arab countries and humanist ideals. As soon as it was published, his second novel was a huge success with both readers and critics. It is considered one of the most outstanding novels of the twentieth century, still a global long-seller, widely quoted and translated. The most recent Italian edition was brilliantly translated by Yasmina Mélaouah, who has given a voice to authors such as Colette, Genet, Pennac, Vargas, and de Saint-Exupéry, among others.

From Milan, where she lives, and where these days the mostly deserted streets resound with megaphones ordering people to stay at home, she recollects her first encounter with this classic: “I was 15, and our French teacher had made us read it during the year. When I had to re-read it, in order to translate it, the same details that had stuck in my mind at the time came back to me. First of all, the initial scene, when the doctor comes out into the dark corridor and runs into the first dead rat. That is actually a scene that has continued to play itself out in my mind my whole life […]. Then, the finale, when the quarantine ended, and the doctor crossed the celebrating town.”

The Plague, set in the coastal town of Oran—”an everyday place, merely a simple French prefecture on the Algerian coast”—starts off with a general sense of disbelief. Camus wrote: “Our fellow citizens had not the faintest reason to apprehend the incidents that took place in the spring of the year in question and were (as we subsequently realized) premonitory signs of the grave events we are to chronicle. To some, these events will seem quite natural; to others, all but incredible.”

It was April 16 in the second half of the ‘40s. Doctor Bernard Rieux had left his office like any other day, and, in the middle of the landing, stepped on a dead rat. An unusual event, for sure, but one that could have just ended there. However, one rat became ten, and then hundreds and thousands. One man and then another and another fell sick, all with the same illness: Fever, growths ripping open the skin, and pains in the neck, armpits, and groin. People were starting to die. “It was as if the earth on which our houses stood were being purged of its secreted humors; thrusting up to the surface the abscesses and pus-clots that had been forming in its entrails.” 

This is how the contagion started; in silence, on a morning just like any other: Isolated, hungry, unable to stop the plague, the city became a stage where the good and bad of society, always on the borderline between disintegration and solidarity, could put on their show. 

In Camus’s mind, however, that “plague” stood for something else. “In 1947, the world was recovering from another tragedy, that of Nazism, and the novel was also a metaphor for that supreme evil, in Camus’ intentions,” explains Mélaouah. “Actually, any virus, even today’s, could be a metaphor for something else. Camus intended to write an allegory of occupied France, and at the time, some criticized him for not being forthright and using an invisible enemy instead of one in the flesh. However, allegories are powerful rhetorical figures of speech that can be ‘superimposed’ on many other circumstances. The Plague is a metaphor for a human community that discovers evil and needs to find ways to deal with it. I believe that all epidemics confront humankind with this.”

“After the contagion,” Camus writes, “fear began, and with it, serious reflection.” Over the coming months, we will live amid disorientation. Our future is uncertain, even for simple things like going to a restaurant or planning a vacation. We will all have to stop and come to grips with our fragility. So we need a clear head and, above all, competence. Since this emergency started, countless scientists, thinkers, philosophers, and artists have attempted to allay fear with reasoning, exploring the defense and reaction mechanisms of the human species. Mélaouah also tried to analyze the situation: “I strongly believe that the book’s key message is that humankind can only tackle evil when it becomes a community, a ‘we’ rather than a ‘me.’ The novel’s core message is that everyone must roll up their sleeves and work together. It is a sense of community that we also feel strongly in today’s challenging times.”

The Covid-19 pandemic—the most significant healthcare emergency of our time—is forcing a veritable paradigm shift on humanity, making us focus on things that used to simply pass unseen. We must now deal with idleness, a lack of social interaction, and the deafening silence that has returned to our cities, like an animal banished long ago. As The Plague puts it: “It is in the thick of calamity that one gets hardened to the truth—in other words, to silence.” Regarding this statement, Mélaouah comments: “Since I don’t like noise, I found the silence in the first few days to be wonderful. Now it has become a haunting shadow.”

When you come to think of it, silence is also the “noise,” or the backdrop, of reading: “And of concentration. Looking to the bright side, this situation is helping us realize what really counts for each of us, the core of essentials among the many things we think are important. This forced pause will help us to do a bit of clean-up, and throw overboard what’s unnecessary, on all fronts.”

When Camus decided to examine the tragedy of the plague through the eyes of a doctor, he certainly wasn’t dealing with the thousands of healthcare professionals we have today, people who in this period are risking their lives and those of their families to look after Covid-19 patients. “I feel that he wanted the point of view of someone who, more than anyone else, had to get their hands dirty. He didn’t want a philosopher aspiring to be a saint, like Tarrou, or someone like the journalist Rambert, overwhelmed by emotions, and even less the viewpoint of the writer, Grand, someone who spends his time filing away at the same sentence. He wanted someone who would stick his hands, concretely, inside evil, lancing the abscesses, curing the bodies of the plague victims, because evil demands concrete actions, and only later can it be reported.”

Who would Mélaouah choose to narrate the story on Covid-19 in a hypothetical novel of the future? “Probably a checkout clerk from a supermarket chain. […] I would like it to be told like that, from below. After all, the part of the novel that struck me the most was when Tarrou said that his moral is ‘understanding.’ Meaning that we must also feel understanding with regard to people who, overwhelmed by panic, have raided the supermarkets.”

After the November 2015 terrorist attacks in Paris, many people re-read Hemingway’s A Moveable Feast. After Notre Dame burned in April 2019, sales of Hugo’s Notre-Dame de Paris skyrocketed. So many people reading the same books when disaster strikes is perhaps a little like huddling around a vital message broadcast on all channels, or listening to the daily Civil Protection conference: someone speaking for the good of all. “The point is that as members of humankind, we all react the same way,” says Mélaouah. “We seek comfort, a reflection of what we are experiencing, answers, and even distraction. Then there is also the slightly higher comfort in reading, as it talks about the same situation you are in, but at the same time links it with that of everyone else and pulls you out of it. It is like when you go to watch a horror film to exorcise certain fears, like the catharsis of theater.”

So what else could help us out at a time like this? “A good place to be in hard times is Charles Dickens, any of his novels. With him, you laugh, you cry, you get carried away. He takes you everywhere, from tragedies to comical situations. Dickens wasn’t afraid of sticking his hands into humanity.” We can only wait for the Manzonian rain to free us from all this.

Musical wallpaper

When was sound born? How has it changed over time and accompanied the life of Homo sapiens? Carlo Boccadoro, one of the unorthodox contemporary music thinkers par excellence, answered these questions. The idea was to draw a kind of Darwinian path, an evolutionary timeline of the history of sound and music, from the Big Bang to digital, from the darkest silence to the ringtones and jingles of smartphones, consoles, and other mobile devices. However, Boccadoro put his cards on the table immediately. “Music does not evolve. It does change, though. From an artistic point of view, there isn’t necessarily any evolution. Today we should be doing things better than Beethoven did, but we’re not. This probably applies to all the arts. Cimabue is not inferior to Mondrian.” Period.

In 2019, you published a book called Analfabeti Sonori (Acoustic Illiterates). Who were you picking on?

Everyone who now has the concentration span of a goldfish—people who are not capable of listening to a piece of music for more than three minutes. This gradual atrophy of the ability to follow a complex subject produces a kind of illiteracy. Songs last half as long as they did when The Beatles were playing. Nowadays, it would be impossible for many to listen to a 40-minute track like Mike Oldfield’s Tubular Bells, a Yes record, or The Lamb Lies Down on Broadway by Genesis—a grand fresco of an hour and a half, in which you also have to follow the lyrics and images as well as the music. This process of atomization is going hand in hand with the regression of intelligence. Everything has to be simple, immediate, super digestible. This continuous lowering of the bar is not a solution. People always choose comfort over quality. Streaming has brought true deterioration with its playlists. People no longer listen to Beethoven’s Ninth, just the first minute. Often, people have no idea what they are listening to. Music has really become wallpaper.

Has the music that Parisian avant-garde composer, Erik Satie, announced—music “to soften the noises of knives and forks without dominating them”—turned out to be a dystopia? 

Satie was being ironic: ‘furniture music’ was a reaction against Wagnerism, against a sacredness of music that could even become ridiculous, from the rite of the concert hall to the tuxedos. Seeing how dressed up music had become, Satie wanted to make it into wallpaper. The problem now is the opposite: music has become wallpaper and needs to restore a certain degree of sacredness, and be given time and attention.

Twenty years ago, you started the Sentieri Selvaggi project, an ensemble working to spread contemporary music. What was your objective?

The idea was to introduce new music to a public who didn’t know it. Along with Filippo Del Corno and Angela Miotto, I was running a program on Radio Popolare called Sentieri Selvaggi (Wild Paths). We were off the beaten track, on a radio station that mainly broadcast rock and a little jazz in the evening. We started playing Steve Reich, Arvo Pärt, and Philip Glass. Out of nowhere, phone calls started coming in from people asking: “What music is that? Where can I get it?” This means the audience was out there, but they didn’t know that this music even existed. When we sold out at the Porta Roma theater, we knew that the problem was simply to make this music available to the public, because at that time the avant-garde still dominated. We have never had anything against Luciano Berio and György Ligeti; we just wanted to say there was another kind of music that at the time nobody was broadcasting. In those years, there were only “them.” In Italy, Sentieri Selvaggi is still the only program that plays David Lang or James MacMillan.

The advent of samplers turned things upside down, producing a generation of non-musicians making music. Is this really a democratization of musical creativity, a second punk?

I don’t know if it really is a democratization. Actually, that type of music requires a very complex specialization. To create glitch, you have to know your equipment really well; it is not something just anyone can do from home with GarageBand. That notion is deleterious: a kind of ‘one equals one’ in music, and naturally, this is not how things are. What Aphex Twin does is complicated. I, and many composers like me with a conservatory education, wouldn’t know where to start. Also because many of these musicians produce their sounds by programming them, working with algorithms. It is hyper-specialized music; it is not democratic. I don’t know how many people really know it well. Techno can be refined. In some cases, people are dancing to something that is complex. In Germany, some took Stockhausen’s electronic pieces from the sixties and gave them beats. This was possible because the sounds were conducive to this process. Of course, many DJs are not musicians, but this “ignorance” often allows them to come up with original ideas that would never cross the mind of someone with a music degree. When Philip Glass worked with Aphex Twin, we discovered something interesting: Aphex Twin didn’t even know the notes, and he put together pieces that Glass, who studied under Nadia Boulanger, would never have come up with. On the other hand, I don’t think anyone can say that Paul McCartney is not a great musician just because he can’t read a note. From Brian Eno onwards, there is an anti-academic tradition of non-musician musicians who often have much more courage and perhaps talent than people who are writing their fourth symphony.

Was it Brian Eno who invented ambient music, or had John Cage already done that?

I don’t know whether it was invented by Eno or Cage, and it probably doesn’t even make sense to ask. Music isn’t invented, it is always there, somewhere, until at a certain moment someone comes along to light it up. As with Satie, the idea with Cage was the secularization of concepts which had run their course, and there was the urge to give life to a completely different listening experience. In some way, the knowledge of other traditions was important, from Indian to Oriental music, traditions where the idea of tempo was completely different from ours. Therefore, it is not a matter of who invented it, but of a sensitivity that belonged to an era, a generation, and is, in conclusion, what produced minimalism. Making good ambient music is still hard. In this case, the idea of wallpaper is, naturally, intrinsic to the genre.

Have recording experiments in the phonographic field innovated the idea of composition?

I have never been mad about the recordings I have heard, especially when people tried to bring the two together: recording and music composed from scratch. If it is done at the workbench—as has happened with various projects—it becomes music created in a traditional way, to which sounds and noises are added later. The outcome seemed to me frankly superficial, decorative.

What is the value to a composer of a recording of their work?

Now that it no longer has any commercial value, it is substantially a moment of documentation, but making records continues to be important, to mark the stages of one’s journey. This also goes for the listeners: it remains an extraordinary tool of knowledge. Then there is the most recent addition: live streaming, YouTube concert broadcasts, their persistence in digital libraries. A few months ago, I went to listen to John Adams at the Concertgebouw, and you can still listen to replays of the concert on Dutch radio. Of course, it is an issue for those who write music, too: They feel exposed to a storm of varying signals, lacking any reference points. There are no more maestros, schools, aesthetics. Each composer makes their own history. There can be a feeling of huge confusion, and maybe there really is.

Why are you passionate about jazz?

The ability to compose combined with improvisation. Instantaneous creativity alongside precise structures. The incredible ability these musicians have to invent music on the spot. For me, musical creation is a slow process that takes months. The masters of jazz, however, create in an instant. 

Herbert Spencer believed that music was a human invention, a derivation of spoken language. Conversely, Darwin held that it belonged to the strategies of seduction among animal species and adaptive behavior. Which theory do you find most convincing?

They were probably both right. People still go dancing to try to find a partner.

In several Druidic and shamanic traditions, the cosmological model of the Big Bang is related to a primordial sound from which the expansion of the universe is thought to have started. Is there such a thing as cosmic music? Or, is it simply an anthropocentric projection?

If we think about the idea of the songlines in Aboriginal and Maori culture, this suggestion that the world is crisscrossed by recognizable sounds that can even act as a compass, is for me, extremely fascinating. Like all things concerning the origin of the world, we don’t know anything. All we can say is that we like to think it might be so.

A different kind of survival of the fittest

When Charles Darwin first set foot on the remote archipelago of the Galápagos Islands in 1835, after a four-year voyage aboard the British ship HMS Beagle, he was not impressed. Tired from his days at sea, Darwin described the island of San Cristobal as “deserted” and “isolated,” lacking the tropical habitat he was expecting.

Yet, it is in this remote group of islands, located 600 miles from Ecuador’s coast, that the then 26-year-old amateur naturalist made the observations that led to his world-changing theory of evolution some 24 years later. Darwin noticed that the species encountered on the archipelago were slightly different from the ones he had just documented in mainland South America. Tortoises were much bigger, birds had different beaks. Soon enough, Darwin realized that species came with place-specific traits on each of the islands: The Spanish governor who showed Darwin around could allegedly identify the island of origin of giant tortoises by looking at the shape of their shells.

These early observations eventually led to Darwin’s most famous insight, explained in On the Origin of Species (1859): That species change over time and evolve to adapt to their external environment. 

Discovered by accident in 1535 by a Catholic bishop, the 19 volcanic islands that make up the Galápagos remained relatively unspoiled until the start of the twentieth century. The only human visitors were sperm-whale fishermen and occasional poachers looking for seals and tortoises. In 1832, the islands were annexed to the newly formed Republic of Ecuador, and in 1959 the government declared them a National Park, banning construction and human activities on 97.5% of their territory. Five years later, the Charles Darwin Research Station was opened on the island of Santa Cruz as a research outpost. That is also when the first tourists started to make their appearance.

A New York Times story from 1970 called the Galápagos “as exotic a dateline as a tourist can find on today’s contracting globe,” where “the mildly adventurous tourist can now walk in the company of Darwin.” Yet, 50 years later, tourists are putting at risk the very unspoiled ecosystem that attracted curious travelers.

In the 1970s, most visitors toured the islands aboard one of the few live-in boat cruises organized by tour operators four or five times a year. Itineraries and activities were decided in close cooperation with the Charles Darwin Research Station, which printed a set of rules for each boat, and the Galápagos National Park, which equipped each vessel with a trained guide to teach visitors about conservation and monitor their behavior. 

In 1970, an estimated 5,000 people visited the archipelago. Last year, around 200,000 people did. Much of this growth happened in the past 15 years, with the Galápagos National Park registering 39% growth from 2007 to 2016. And most of it was driven by land-based tourism, which experienced 92% growth over the same period, from 79,000 to 152,000 annual visitors. Unlike ‘floating tourists,’ who visit the islands on a boat, land-based tourists usually fly into the far-flung archipelago via one of the airports in Baltra or San Cristobal islands and stay in hotels or guesthouses. From there, they can book daily boat tours to some of the islands that can cost as little as $100 compared with the average cost of $4,500 per person for an eight-day cruise. 

Out of the hundreds of tour operator agencies offering land-based tours of the Galápagos, most focus on tame conservation activities like hikes around cactus-dotted trails or visits to the Charles Darwin Center to watch tortoises snacking on salad leaves. But others focus on providing an ‘adventurous experience,’ like camping, fishing or snorkelling through lava formations with sea lions, which can actually undermine conservation. Sunscreen, which most tourists wear during adventurous swims, can contain chemicals that kill coral and damage algae, fish, and even larger mammals like dolphins.

The threat of over-tourism is not unique to the Galápagos. From Venice to Machu Picchu, many designated World Heritage sites are becoming victims of their own success. Visitors often rush to get the perfect Instagram selfie without realizing that some of their behaviors undermine the reasons that make those places part of the “irreplaceable heritage of humanity.”

In the Galápagos, that should be prevented by the zealous work of trained National Park guides (each boat must have one on board) who teach tourists the importance of preserving the area’s biodiversity. But educating people about Darwin’s finches, blue-footed boobies, and flightless cormorants might not be enough to prevent over-tourism damage. According to recent reports, basic guidelines such as maintaining a six-foot distance from wildlife are routinely ignored. And more visitors on daily boat tours means that National Park guides can no longer exercise full control of each boat. It can take as little as dropping anchor in an unauthorized location to disrupt the delicate marine ecosystem.

Land-based tourism is also affecting the islands indirectly. Land-based travelers are driving up the number of hotels — there are currently more than 300 of them, up from 65 in 2006 — which puts pressure on the limited infrastructure of the three main islands. Waste disposal is a particularly pressing issue. Facilities in Santa Cruz can recycle up to 45% of solid waste, the highest rate in Ecuador. Yet, more tourists ordering packaged snacks and beers on land can easily drive up the volume of waste produced. In 2018, the city of Santa Cruz produced an estimated 6,100 tons of waste, compared with 5,000 in 2015. Accounts of empty plastic bottles found on remote hikes, or of sea lions trapped in plastic bags, are now common on tourist blogs.

The growth of land-based tourism is also driving an increase in permanent residents as people from mainland Ecuador move to the archipelago to work in the booming tourist sector. In 1970, there were roughly 6,000 people living on the islands. Today, residents number around 30,000. With each new resident, pressure on the island’s infrastructure increases. And since the islands depend on the mainland for everything from fuel to food, more residents and land-based tourists result in more frequent visits by cargo ships, which often carry invasive species.

The New York Times reporter who in 1970 praised the Galápagos as one of the last unspoiled places on earth, wrote that there was no concern about the sustainability of tourism because of the limited number of tourists: “Opening up the Galápagos Islands is so strictly controlled by the Ecuadorian government and the Darwin Institute, and the places the tourists are permitted to go and what they are allowed to do on the islands is so carefully watched, and their number so limited, that the preservation of the islands is assured.”

But while the government has put a cap on the number of ‘berths’ (beds on live-in cruise ships) allowed each year, there is currently no limit on the number of tourists who can choose to stay on land.

Local associations are asking the government to step up its efforts. In February 2018, the International Galápagos Tour Operators Association — a group of 35 tour operators founded in 1995 to push for better legal protections for the archipelago — wrote to Ecuador’s tourism minister, Enrique Ponce de León, to express concern about the unrestrained growth of land-based tourism. The president of the group, Jim Lutz, has also asked tourists interested in a “beach holiday” to choose different locations, leaving the Galápagos to those who are truly interested in biodiversity. Similar tactics have been proposed by residents of Venice and Barcelona, who hope that a different tourism model, one that diverts visitors to nearby locations, could help reduce the pressure of over-tourism.

For the Galápagos, which also face simultaneous threats from climate change and ocean plastic pollution, finding a more sustainable tourism model could be a matter of life or death. “If Ecuador wants the Galápagos to continue to be a unique place that attracts visitors from all around the world, and brings in hundreds of millions of dollars every year and supports tens of thousands of people, then they have to make a decision,” Enric Sala, a National Geographic explorer-in-residence, said in a recent interview with The New York Times. “Otherwise, the Galápagos risks going from being a unique place to being a very common place like so many others that have been destroyed through short-term interests.”

Not your usual toy story

The first seed came from a Japanese TV commercial. In the ad, a boy wants to bring his pet turtle along on a family trip, so he hides it in a suitcase. When his mom finds out, she scolds the boy. Eventually, the turtle stays home. It was while watching this commercial that businessman Akihiro Yokoi had an idea: “Wouldn’t it be nice if children could bring their pets with them, wherever they went?”

Fast-forward a few months, and Yokoi is pitching his former employer, Bandai Corporation, an idea that would revolutionize the toy industry: a portable digital pet that people could nurture, play with, and dote on, anywhere and anytime. A short time later, with crucial help from Bandai developer Aki Maita, Tamagotchi was born.

Only a handful of product stories are as interesting and peculiar as that of this keychain-sized toy, which launched officially on November 23, 1996. Made of an LCD screen embedded in a brightly colored, egg-shaped plastic case, the handheld toy first became a sensation with Japanese children before growing popular in the US in 1997 and then spreading worldwide. In Yokoi’s first sketch, the toy was worn around the user’s wrist, like a watch. This is why it was named “Tamagotchi,” a blend of the Japanese word たまご (tamago), meaning egg, and ウォッチ (uotchi), the Japanese rendering of the English word watch.

Despite being little more than an irregular set of dots on a small, low-resolution screen, the digital pet embodied a pulsating, breathing microcosm that formed an immediate bond with its owner. Depending on the player’s attention, the Tamagotchi would go through several stages of growth. The game would start with the pet hatching from an egg and entering its ‘baby stage.’ Born with its ‘hunger’ and ‘happiness’ meters depleted, the pet would beep frequently to claim its owner’s attention, needy for food and games.

Like an infant, Tamagotchi napped a lot, pooped a lot, and relied on the care of its owner to survive. When the Tamagotchi went to sleep, its owner had to turn the light off, or the pet would get restless. When the Tamagotchi was sick, it needed pills and injections. Sometimes, it beeped, even when it was full and happy, for no apparent reason—when this happened, the pet had to be disciplined, just like a child, or the boy in the TV commercial. 

“Pets are only cute 20 to 30 percent of the time, and the rest is a lot of trouble, a lot of work,” Yokoi told The New York Times in a 1997 interview. “I wanted to incorporate this kind of idea into a toy, for pets these days are only considered cute. But I think that you also start to love them when you take care of them.”

With Tamagotchi, evolution played a key role. After birth, the baby stage lasted a maximum of 24 hours, roughly the equivalent of one year in Tama years. The pet would then progress into the ‘child stage,’ then enter its ‘teenage stage,’ and finally reach the ‘adult stage’—the point when the owner finally discovered the personality traits of the character they had raised, a reflection of the quality of their parenting.

For example, if a Marutchi child was well taken care of, it would evolve into a well-behaved teen, Tamatchi. If it was not, it would evolve into the trouble-making adolescent Kuchitamatchi. Once in their adult stage, Tamagotchis could also marry and produce babies. Eventually, these digital pets would become seniors, retire, and die.

The oldest Tamagotchi is said to have lived for 145 Tama years. But most players would see their digital pets die within a week or two. Death is a powerful driver in the user’s interaction with the toy—a dreadful one. 

Death could happen in just a few hours at any stage in the game, which put a lot of pressure on the player. In the original Japanese version of Tamagotchi, the dead pet would turn into a ghost, and a grave would appear on the screen. In a more recent American version, the deceased pet leaves to return to its home planet.

In both cases, death is not the end of the game. Users can press the A and C buttons, and a new egg will be laid on the screen. “Of course, it’s a game,” you might argue. But according to some, Tamagotchi created a weird perception of how death works. “Children can become confused about the reality of the relationship,” analyst David Behrens wrote in Newsday in 1997. “Children will no longer treasure companionship with their pets because even if the pet ‘dies,’ it can be brought back to life by changing the battery. The lack of such moral responsibility will cultivate a negative psychology which eventually will do harm to society.”

Tamagotchi was a huge commercial success. At its peak, 15 Tamagotchi units were sold every minute in the United States and Canada alone. As of 2017, over 82 million Tamagotchis had been sold worldwide, and more than 50 different versions of the game had reached the market. The golden days are long gone — Tamagotchi was, above all, a one-hit wonder — but you can still buy the gadgets online or play the game as a free app, My Tamagotchi Forever, available for iOS and Android.

Playing Tamagotchi carried a theoretically positive value, at least compared with most video games: it rewarded the most caring players, not the most violent ones; the ordinary ones, not the eccentric ones. Yet it managed to attract plenty of stigma. Children were bringing the egg-shaped toys to school, tending their relationship with their pets by the hour.

They were so intensely attached to their digital animals that some users started neglecting their non-digital lives. The needy, pixelated creatures always came first: before sports, friends, homework, and classes. Distraction turned into addiction. Due to widespread complaints, Bandai introduced a pause button in the second release of the Tamagotchi.

According to Anne Allison, a professor of cultural anthropology at Duke University and a close observer of Japanese society, Tamagotchi “evokes the sensation of an interpersonal relationship, something children told me keeps them company in what is an age rife with dislocatedness, flux, and alienation. […] If not the first virtual pet of all time, the form in which this cyborgian fantasy was popularized and (re)produced as mass culture.”

Tamagotchi was “a metaphor of our times, representing the blurring of boundaries between real reciprocal relationships and surrogate one-way imaginary ones,” as Linda-Renée Bloch and Dafna Lemish, researchers at Tel Aviv University, wrote. “It highlights the dominant role of technology in our lives; no longer simply a tool for use in science and industry, but now a substitute for human relationships.”

In terms of the attention they demanded through regular beeps, Tamagotchis can be seen as precursors of our relationship with smartphones, which constantly interrupt our daily flow with notifications. Smartphones are addictive and demanding, and they hook us into complex trips of guilt and FOMO.

Phones, like Tamagotchis, “don’t look after us—we look after them,” Tom Goodwin, EVP of Innovation at Zenith Media, wrote in an op-ed published by Quartz in 2017. “We treat them as a living entity that we need to keep alive and grow. We nurture them and feed them with our data and power. We train them over time, their characters growing and evolving as we play games and donate our attention and love.”

In this twisted relationship between humans and technology, Tamagotchis taught us to gamify life itself, in a way we had previously seen only in video games or movies. The fact that every Tamagotchi was different, with its own name and unique features, and that its evolution was the result of our intervention in its growth as a digital being, turned us into god-like creatures with the final call over its life or death.

If fossils can teach us a lot about the past, Tamagotchis can teach us something about our present: about the constantly evolving relationship between ourselves and our digital devices, and our never-changing habit of looking at signals and indicators at the wrong time. Sometimes too late. Sometimes too early.