pluto

Chronicles of a fading-out civilization

Scientists, writers, philosophers: the ranks of those who believe that a mass extinction is happening are swelling. The question to ask, therefore, is not when, but how. We discussed it with Roy Scranton, a philosopher, writer, and Iraq War veteran whose books include We're Doomed. Now What? and Learning to Die in the Anthropocene: Reflections on the End of a Civilization.

In a recent study published by the Ecological Society of America, researchers hint at the possibility that the Chernobyl disaster may have created a paradise: in the absence of human beings, wild animals and nature at its best may have returned. Would the world be better off without human beings?

To answer this question, we should first ask ourselves what “value” means and how to measure it. Massive catastrophes and extinctions have peppered the geological history of our planet; on the one hand, such events have made possible a whole new flourishing of different forms of life, and on the other, they will at some point make Earth no longer capable of sustaining life at all. In this context, what we can be certain of is that, in terms of biodiversity, human beings have been an unmitigated disaster for Earth. So in that respect, yes: things might be better on this planet without humans.

You referred to the apocalypse we are facing as a slow-motion ecological catastrophe. 

What is most frightening about the situation we’re in now is that there isn’t going to be a single, worldwide disaster. The ecological catastrophe will unfold as an ongoing, constant change followed by yet another change, again and again. And there is no reason to expect a new stable system anytime soon: the force that we’ve exerted on the planet’s climate system is so strong that it’s going to keep unfolding for hundreds of years.

How, then, will we react to each new shock and catastrophic event?

What our species has shown throughout history is its incredible adaptability. A recent study suggests that it takes only about two years for people to get used to a new sense of what’s normal in weather. As the world around us gets warmer, we will experience a moment of shock with each new perturbation, discomfort, and disturbance. Then we’ll start getting used to it, over and over again. Unfortunately, that same ability to adapt is going to work against us as the slow catastrophe unfolds: it will consistently undermine any effort to change course.

Meaning that the much-desired change of course in our system is unlikely to happen.

Very unlikely. Our current political and economic systems are very complex and rely on deep cultural investments to preserve the status quo. So there’s very little reason to suspect that we’re going to be able to veer off course and completely rebuild society in the coming years, and no good reason to believe that we’re going to be able to save ourselves.

But we are experiencing a change in culture.

That optimism, that sense that there’s always more to extract and more room to grow, that we can always build new technologies that are going to change the world for the better, and that whatever human problems may arise are eventually solvable, is something embedded in capitalism. More and more people, though, are realizing that this progressive, somewhat utopian belief system that has framed our worldview in the late 20th and early 21st centuries doesn’t work anymore. Acknowledging that there are problems with no solutions and that human existence is fundamentally tragic is a cultural adaptation rather than a cultural change: an adaptive response to the end of the postwar liberal belief in technological and capitalist solutions to human existence.

Will capitalism somehow be replaced by environmentalism? 

I doubt that. Capitalism is a way of deciding where resources go and how they get used. Environmentalism is on a different level: it exists only within the world order that we understand as being shaped by capitalism, it doesn’t come with a strong systemic ideology of how to organize human life, and it is predicated on the capitalistic idea that nature, the environment, is something outside of human life. Ecological thinking is the understanding that humans are embedded in nature, that human civilization and culture are part of the Earth’s natural processes. All of our cars, cities, and oil tankers, and the carbon dioxide that we’ve spewed into the atmosphere: all of that is inextricable from nature.

What form of social organization, then, might arise? 

Some kind of alternative social organization based around renewable energies, one might hope. But it’s more likely that what we’ll see is some sort of regressive feudalism, where capitalism is piece by piece replaced by a more static and oppressive hierarchy of inherited privilege.

We are doomed, but we should not despair. However, the idea that we can save the world through individual consumer choices is clearly not enough. What should we do? What does living ethically mean?

Living ethically in the world that we’ve made means recognizing that each individual existence is not independent, but dependent on other existences. As we make choices, we have to think through the effects they have on all the other beings we relate to. But it also means reconsidering whether our goals and values make sense in a world of increasing precariousness and suffering.

Do individual actions and choices make sense, knowing that what each person does isn’t necessarily going to have any significant systemic effect?

We have to give up the idea that we do good things in order to make the world a better place. That kind of hubris is misguided and actually harmful. We have to make ethical choices that are more concrete and less abstract: not because we think they might solve human problems forever, but to be better people and to help those around us who are suffering.

In such a scenario, is there any room for hope?

Given the situation, it’s not logical, linguistically speaking, to talk about hope: it’s not a feeling that can help us in any way. We tend to confuse “hope” with optimism, or with an insistence that we have to believe that certain things are possible, or that certain situations can turn out OK in ways that don’t seem realistic to me. But there is another kind of belief in human possibility that we can talk about as “hope.” It’s not the comforting hope that the world is going to get better, or that we will experience a cultural liberation where everybody gets to be exactly who they dream of being no matter the circumstances: none of that is sustainable. It’s a belief in the capacity of human beings to make meaningful lives regardless of the circumstances. And that is the kind of hope that I hang on to.

Science is an ivory tower

Dr. Berebichez is a physicist, TV host, and STEM advocate. She co-hosts Discovery’s Outrageous Acts of Science, where she uses her physics background to explain the science behind extraordinary engineering feats. We sat down with her to talk about the future of science education.

Has social media made it easier for the scientific community to engage with a wider audience, making complex STEM topics more accessible?

Yes! Social media has given many scientists the opportunity to become science communicators, which is great. We now have thousands of people telling us what their research is about and how it impacts our daily lives. But social media has created its own challenges. The ubiquity of scientific information has also left the public confused. The vast amount of information on social media can appear contradictory, or at best hard to interpret in terms of the importance of certain results. One study in France claims drinking coffee is good for you, and a month later a different study in Italy claims the opposite. What the public isn’t told is that the devil is in the details. Perhaps the French study was done only with middle-aged women on a certain diet, while the Italian one used older men with a very different diet, and that’s why the results differed. And some communicators, in their eagerness to tell a good story, oversimplify or even omit these details. So there is a gap: we need more people translating the full complexities of science into lay terms. It’s the combination of studies, or “meta-studies,” that provides a complete picture, and we need more people who can present them in a clear and concise way. We already have some extremely talented science communicators doing exactly that, and we should absolutely support them!

What is the recipe to engage an audience while keeping the complexity of science?

The formula is to have someone very passionate about explaining science, especially if we want to grab the attention of a public that is already burdened with a lot of distracting information. At the same time, the communicator needs to deeply understand both the big picture and the technical details of the research. The perfect balance between complexity and engagement can be achieved, but it’s not easy. As the famous physicist Richard Feynman once said, “If you cannot explain your research to your own mother, it means YOU don’t understand it.”

Do you think our fear of the future and of AI is based on the fact that people just don’t really get it, or that it hasn’t been explained well by the STEM community?

I think we are going through an uncertain time, economically and politically, worldwide, and people are afraid because they are seeing a lot of change and the future seems uncertain. I think most people, when they think of AI, are really thinking of what we call Artificial General Intelligence (AGI), which is the type of AI that can carry out any cognitive function the way a human can. This technology is not here yet. Far from it. Today, what we have is called Narrow AI, in applications such as IBM Watson, Siri, Alexa, and others. The main difference between AGI and Narrow AI is autonomy and objective-setting. That is, Siri only works if one asks it a question; it can’t decide to tell you about the weather on its own. This doesn’t mean that AGI will not be here in the future. We have to accept that the world is changing without giving in to fear. Of course, it’s natural to be afraid when we can no longer rely on the systems and the rules that we grew up with — especially for the older generation. But don’t let fear lead you into the future, let curiosity lead you into it!

How is technology changing the way we need to educate our young people?

There is a certain computing literacy that is now required of young people. Just as you need to learn a language in school, you also need to know a little bit about computer science. I help run a company called Metis that teaches people AI, machine learning, and data science, and when we talk to some large Fortune 500 companies, we notice that in the beginning a few of them are searching for direction when it comes to AI — they ask: what is this for, and why do we need it? But companies quickly discover that automating certain tasks doesn’t mean they can get rid of 10 workers and replace them with a program. What it means is that those 10 workers can now free up a few hours a day to perform other tasks that require more complex thinking, such as interpreting interesting data, looking at outliers, and making data-based business decisions. Most of AI is not taking away jobs. On the contrary, it is opening up more opportunities, because there is a massive need for a deeper understanding of what the algorithms are doing and what the results mean.

What should schools do to make sure we are giving people the right skills for the future?

My particular concern is that many schools are teaching a very basic coding curriculum just to satisfy a requirement. A lot of these curricula make kids memorize coding rules without thinking about the problem at hand and the goals of the code. In this way, they forget that coding is a means to an end, a way to solve problems. Therefore, the most important skill we should learn is critical thinking: how to think independently about all kinds of problems, not how to memorize rules. A great school helps its students gain skills that last a lifetime: a love of learning, endless curiosity, and critical thinking.

In praise of sci-fi determinism

In 1983, science fiction writer Isaac Asimov was asked by the Canadian newspaper The Toronto Star to predict what 2019 would look like, as part of a homage to 1984, the namesake year of George Orwell’s acclaimed book. Asimov was asked to look 35 years beyond 1984, the same span that separated Orwell from that year when he wrote his book. Of course, Asimov didn’t guess everything correctly, but reading through most of his predictions, one can’t help but get goosebumps.

Asimov’s brilliant mind predicted technological devices becoming part of our lives (“an essential side product, the mobile computerized object, or robot, is already flooding into industry and will, in the course of the next generation, penetrate the home”); how computerization would disrupt the job market (“the immediate effect of intensifying computerization will be, of course, to change our work habits”) and the consequent urge to update the education system (“a vast change in the nature of education must take place, entire populations must be made “computer-literate” and must be taught to deal with a ‘high-tech’ world”). What’s most fascinating, however, is how he predicted climate change too — at a time when climate warming had only been theorized and the word ‘Anthropocene’ had been timidly pronounced by a few select scientists. In his article, in fact, he stated that by 2019 “the consequences of human irresponsibility in terms of waste and pollution will become more apparent and unbearable with time” and “hoped” that technology would be advanced enough to “help accelerate the process whereby the deterioration of the environment will be reversed.”

Asimov was no conventional mind; he was one of the most brilliant thinkers of the last century. His predictions give us a good sense of how, when it comes to imagining our future, science fiction has always been a strong driver of innovation. Most of the time, this genre has been as pivotal as science in shaping our tomorrows, inventing technologies and shaping the imagination and thought that, sooner or later, we would see unfold as time went on. “Science fiction doesn’t have to deal with real life and its limitations. It’s a space where imagination has no opponents: Sci-fi writers take reality and ‘edit it’ without constraints to prove their point. It’s a genre that helps humans imagine the future.” That’s Francesco Verso talking, an Italian Sci-fi writer with a degree in environmental economics: “Sci-fi’s goal is to educate the population on new technologies and prepare it for future scenarios.” In such a framework, Asimov’s predictions being so close to reality almost seems natural.

When it comes to this genre, what has changed since Orwell’s and Asimov’s times is its reach: in 1949, as in 1983, Sci-fi literature had a very niche audience, while in 2019 it is one of the trendiest genres, one that has broken out of its literary, printed roots and onto the screens of our laptops. According to Sci-fi writer Eliot Peper, such popularity is due to the need of our times to develop a new mythology: “Up until a few decades ago, life had worked approximately the same way for many generations. In this static context, literature was a tool made of stable stories that helped people make sense of the world. Now that technology is changing our world at an unprecedented pace, science fiction is building the mythology of the 21st century: a mythology shaped by technology and climate change.”

Climate change, in fact, is becoming one of the main topics in science fiction, to such an extent that there is now a sub-genre of novels set in natural disaster scenarios: so-called climate fiction, which tries to imagine a world ravaged by climate change in the near future. Most of these imagined futures are, as one could guess, dystopian. Eliot Peper’s trilogy is set in the near future and explores the geopolitics of climate change: “The ubiquitous digital feed shapes our lives and politics. So does climate change. How do we, then, deal with technological disruption and the decline of nation-states in a climate disaster scenario? How do we invent new human institutions for such a changing context, in a present in which traditional institutions seem not to be doing much, while hedge funds are already working to profit from climate change?”

A declining nation overlaid with a climate change scenario is also the backdrop of journalist and novelist Omar El Akkad’s “American War,” which depicts a civil war over the leftovers of fossil fuels erupting in a United States that has been devastated by natural disasters. According to El Akkad, science fiction’s aim is not to predict the future, but to redistribute the present. “William Gibson stated, in what’s now a world-famous quotation, that ‘the future is already here, it’s just not very evenly distributed.’ This is mirrored in something else which is also already here, but not very evenly distributed: climate change.” What science fiction does in general, and climate fiction in particular, then, is redistribute the present: “I write about climate change ravaging the United States in some kind of future, but what I’m really doing is redistributing what’s actually happening right now, in this very present in which the Gulf of Mexico is losing, as a result of climate change, an area the size of a football field.” Once again, it’s not about trying to predict the future, something science is devoted to, but a matter of deciding where to shine the spotlight. “Wanna see dystopia in action? Go to Venezuela,” claims Peper.

So is this anthropogenic ‘present’ (for some, but ‘future’ for others) so utterly scary that we’ve lost all of our imaginative power, which, by the way, we have always considered to be exclusively human? Is our inability to imagine a different future under such premises the reason why dystopian Sci-fi books and movies are so popular right now? “Sci-fi, in all its forms,” says Francesco Verso, “always shies away from reality. Now that reality has become a dystopia, speculative fiction is slowly going back to doing utopias and building new scenarios to overcome the complex issues we will face.” This is something that Solarpunk, a very recent wave in science fiction, does. Whereas in Cyberpunk there is a hero fighting to survive against an evil corporation, Solarpunk implies the salvation of the community. “It’s ‘solar,’” Verso adds, “because the future it imagines is ecological; it features open-source solutions and renewable energies in a society that is finally finding a new way to live together in harmony with the planet.” Demanding constructive stories that question the current state of things, this genre aims at finding feasible solutions to problems rather than indulging in pessimistic dystopias that offer no path forward.

It is not that the future happened this way because Asimov said it would. Of course, speculative fiction has never been, and never will be, enough to shape the future: but any self-respecting transformation is nothing without imagination. Our human ability to imagine something better, something greater and something that views the Anthropocene as a part of our present but not necessarily our future, is key to ensuring that the mistakes of the past are not repeated moving forward. In this sense, Sci-fi is more fact than fiction.

Main image artwork by David Revoy for the preproduction of the fourth open movie of the Blender Foundation ‘Tears of Steel’ (project Mango).

The end of handwriting

Today, much less is written by hand than was the case 20 years ago. As a result of digital progress, we increasingly turn to the digital technologies that surround us 24/7 instead of using pencil and paper. And why should we? We prefer to send texts with the help of numerous apps that can even be used by children.

Each of us has developed increasingly simple handwriting compared to the more classic cursive script. In Austrian schools, the deliberately ornate, artistic script once taught has given way to increasingly simpler handwriting since 1995. In Finland, connected cursive writing has not been taught since 2016.

This simplification, and the use of computer fonts, also leads to a loss of personal expression and identity. Prefabricated fonts and symbols — such as emoticons — convey content but say little about oneself. The death of handwriting sadly seems inevitable, even though writing by hand has been shown to promote the formation of synapses in the brain and makes it easier for us to remember information.

However, fluent handwriting requires a lot of practice, concentration, and patience. 

And it is precisely this patience that we no longer think we are capable of today. The constant perfectionism of the digital world also deprives us of the possibility of failure. What actually happens when we no longer make mistakes from which we can learn?

Our thoughts and knowledge are increasingly stored electronically. But rapid technological development also means that digital data often can no longer be read from media and platforms that are 20 years old. Such data is already lost forever. Paper, on the other hand, is age-resistant and can still be read without tools even after hundreds of years. What if all this digital data is lost in a blackout? Could the death of handwriting catapult future generations into another Middle Ages? These are the things we must think about today.

“First of all, for clarification,” says Martin Tiefenthaler, a typographer, graphic artist, and co-founder of the Typographic Society Austria, “handwriting means that I extract something that I have read in a personal way. Handwriting means to arrange thoughts or circumstances, to make the contents of the world my own. This ranges from a shopping list — which anticipates the walk through the shelves of the shop — to the taking down of a lecture, to the arrangement of arguments on a topic.”

We asked Tiefenthaler whether he thinks that writing by hand on paper means having easy access to a medium for ordering one’s thoughts.

He said: “When writing down my own thoughts, I think analog writing and digital typing may be equally useful for different people. Quick ideas can be noted down on the back of one’s hand, in the margin of a magazine or a book, or on a notepad, just as well as in the note functions of smartphones, laptops, or e-readers. For larger amounts of text, there is also suitable software, such as TextEdit on macOS, or iA Writer. Word is the most common, but definitely not the most usable program for me, because it always tries to impose design where it shouldn’t. Handwriting is a symptom of concentration. If you only learn to swipe and/or type, instead of using handwriting, I see critical consequences.”

The Canadian neuropsychologist Donald Hebb’s statement ‘Neurons that fire together wire together’ indicates that complex physical activities can possibly lead to complex cognitive results. Writing is a complex process that is extremely demanding in terms of motor skills. The word articulation comes from the Latin “articulatio,” the joint. It may well be that a flexible “articulated” body is also able to think more ‘articulately.’

A digitally-available font must be perfectly crafted, optimally prepared (in terms of tracking and kerning), and well programmed.

Tiefenthaler says the much-used Helvetica, for example, is a typeface from 1956 that is no longer as good as younger contemporary typefaces. We’ve gotten used to Helvetica, but if you’re only a little familiar with font design, you know that we don’t have to rely on Helvetica anymore for legibility. Furthermore, in too many cases the font is chosen incorrectly, and in even more cases a possibly correctly chosen font is subsequently handled incorrectly.

Handwriting is never the end product. This is an interesting circumstance, which can perhaps also explain why calligraphy — which claims to be a final product — is capable of attracting so much attention at first glance, but soon loses its effect when viewed too long or too frequently, and consequently generates disinterest, boredom, and even disgruntlement. This is also due to the inevitable lack of content of calligraphed works, which is caused by their brevity, since long handwritten texts are read only very reluctantly. The only exception may be letters.

A hard disk is surely even less durable than papyrus. The fact is that every type of record eventually disappears, and we cannot even estimate what has already been lost, precisely because it has disappeared. About 1,600 years ago, we suffered the greatest loss of books humanity has ever experienced: in a conflict between Christianity and other religions, the two sides burned each other’s libraries. Of some essential ancient works we know only that they existed. We are lucky that the diary of Anne Frank has been preserved, but who knows what else was lost.

But as long as people are still expressing themselves — whether through analog or digital writing — does it really matter that kids aren’t learning to write stories on paper anymore? If both can be lost, then maybe we shouldn’t be so worried about the end of analog writing.

Beyond meat

Human beings are parasitic. We live off of our host (planet) and rely on it for our sustenance and existence. Many plants and animals live parasitic lives, and indeed many have adapted to live in symbiosis with their host. It’s not a bad system, as long as the parasite doesn’t kill off the host – but this is where we humans have gone wrong – we continue to feed off our host planet with a disregard for the fact that we are only destroying ourselves in the process. Put simply, parasites shouldn’t kill off their host; else they too will die. Not long ago, we were indeed good to our planet, but now we are multiplying at an exponential rate, polluting the very air and water that gives us life, and ultimately driving ourselves to the point of global starvation – not simply in the food sense, but also more generally in terms of natural resources.

As we continue to consume our own host we will ultimately be left with two options: one is leaving the host (Earth, that is) – and some people are indeed working on that – and the other is to figure out ways to get the planet back to health, or at least stem the rate of degradation we are inflicting upon it. I think many would agree that the latter is imperative, regardless of the success of the former. So how do we continue to sustain our expanding population? At the most basic level, we cannot rely on past methods of growing food and relying on natural resources – we need to eat, and we need to create new materials to sustain our growth.

Enter synthetic biology. As we begin to accept the fact that we cannot create new precious metals or natural resources from scratch – alchemy doesn’t exist yet after all – we can turn to synthetic biology, which can create certain families of new molecules thanks to some of the in-built mechanisms that Mother Nature has evolved over the millennia. Synthetic biology is a new, emerging discipline of applying engineering principles to program and guide biological cells to produce new molecules. Cells are Mother Nature’s own biological factories that use DNA as a blueprint for making complex molecules such as hormones (e.g., insulin) and other proteins that are essential for life. Being able to alter the DNA blueprint of specific cells would allow us to direct these “cellular factories” to make other molecules that we want. In the ‘80s and ‘90s, people started looking at engineering algae cells to produce biofuels as a potential alternative to fossil fuels. While the science worked, the economics of scaling up production were not yet viable for the mass market. But the concept was powerful – we could leverage the power of biology to make new things.

When talking about the Anthropocene, one of the biggest hits that the planet is taking today thanks to our presence is “Factory Farming.” Agriculture is responsible for 18% of global greenhouse gases, which is more than the entire transportation sector. Factory farming, besides being one of the largest causes of greenhouse gas emissions, is highly inefficient. Take, for example, how we make beef: we feed grass to cows in order to turn the grass into meat. The cows are, in this sense, the factories for turning grass into meat – and it seems like there should be a better way to grow meat than having to grow an entire cow. A synthetic biology approach can offer a much more efficient means of production by focusing on growing just the parts of an animal (e.g. muscle and fat cells) that we need for food production. Why grow the bones at all if we don’t eat them? One approach is to take a sample of cells from an animal (essentially a biopsy) and grow these cells in bioreactors (similar to the fermentation tanks used to brew beer). In these tanks, we can feed the cells the appropriate nutrients that enable them to grow and multiply, so we can grow just what we want to consume. Once the cells have proliferated, they can be harvested, textured, and seasoned to taste (obviously there is an element of culinary artistry needed for this as well).

If this all sounds a bit far off in the future, it’s not — there are already startups working on alternative foods. The Impossible Burger is an entirely plant-based burger that “bleeds.” How? Well, one of the main molecules that makes meat taste like meat is a protein called “heme.” Impossible Foods inserted the DNA that encodes the heme protein into yeast cells and then fermented these cells in much the same way beer is fermented. In the process, the cells produce the heme protein, which is extracted and mixed in with other plant-derived ingredients to make a burger that has a bloody, meaty taste. By genetically engineering yeast cells, we can make complex molecules that we could once obtain only by growing whole animals.

The exciting thing is that we’re not just limited to food. Companies like Bolt Threads are using similar techniques to “brew” silk in fermentation tanks – by placing the DNA that encodes the silk protein into yeast cells, they are able to produce silk for clothing without silkworms. The fact is that we are only getting better at creating molecules, many of which have never existed before. We have, of course, been able to modify DNA for years, but thanks to recently-developed tools like CRISPR, DNA-driven chemistry continues to improve in accuracy and fall in cost. We haven’t even begun to scratch the surface of what new molecules and materials we can create.

So how does this all relate to the Anthropocene? Given that we need to find radically new ways to sustain our population growth, we must invest in and develop new methods of manufacturing food, clothing, and other materials we’ve grown to rely on. Synthetic biology is one of the most promising new technologies to help us in that endeavor. But for this to scale and have a material impact on the planet, products like lab-grown meat and brewed clothing all have to reach price points that are on par with products that have been manufactured in traditional ways. An economic reality is that the cheaper they become, the higher the demand will be.

We’re fortunate to be witnessing the start of this revolution — Burger King just launched the Impossible Whopper on its menu. Economics will determine the critical point of adoption in this industry, and we’re rapidly approaching that inflection point. Agriculture behemoths like Tyson Foods have also started to invest heavily in the lab-grown meat industry. Imagine the adoption rate when a quarter pounder of lab-grown meat costs a quarter of the price of factory-farmed meat. What is synthetic biology going to do for our planet then? Eventually, it will allow us to become less parasitic by massively reducing our dependence on an industry that contributes almost a fifth of all our climate change emissions: we are reversing the Anthropocene one burger at a time…

How to turn your body into a device

As our technology changes and develops, so does our relationship to it and with it. But technology doesn’t only belong in the realm of our minds; as human beings, it also gives us new ways to think about sexuality and pleasure.

During my artist residency for National Power at Fawley Power Station, near Southampton, UK, I found that the only way to fully understand what I was drawing was by interpreting the human body as if it were a technological device. This goes back to my childhood, when I was often in the hospital for asthma and saw how the human body is its own type of technology — a modular machine composed of two separate parts: mechanism and mind.

When I moved to London in the ‘90s, I discovered virtual reality, and for the first time I thought that this is what I’ve been looking for to explore my body. I also came up with the idea to use virtual reality paraphernalia for a project about sexual identity in virtual spaces. I decided to transfer my work around the relationship between artist and model, into VR, so that everyone could access the exploration which I had started. At the time, I would display my VR work in fetish clubs, and I started collaborating with others to create sex suits to link up the virtual space with a haptic body. 

In my PhD research, Computer Fetishism and Sexual Futurology: Exposing the Impact of Arousal on Technologies of Cyberspace, I explored this idea further. As part of that work, I was invited to an event where I met people who were passionate about technology, their sexuality, and their sexual fetishism. One person there had engaged physically and psychologically with the internet in the late ’90s, and considered himself female during those technological experiences. That got me thinking: What if this guy liked being in a situation so much that he would never want to get out of it again? What about being addicted to such virtual experiences?

All of this has led me to realize that the relationship between sexuality and identity is fluid: We can digitally edit our pictures or undergo medical treatments to change our biological gender. And our way of sharing our vision of gender has become fluid, too. Such transformations are the result of technology making us more open-minded about our relationship with gender categorization, which nowadays is no longer perceived as an obligation. In this view, our vision of sexuality is and shall be diverse: It’s a way of expressing who we are. In this day and age we have heaps of materials available to help us build our identity starting from our informed opinions, and gender fluidity is an ever-evolving example of such freedom. We’re left with an important question, and I hope my studies on cyber sexuality will help to provide a context for an answer: How has technology changed society, and how will it change society in the future?

We don’t know what will happen, but what I can do is be ready to investigate technology as it evolves, as it becomes faster and more portable. I can’t wait to see and touch new tech devices. As for VR Porn, the porn industry took the term and used it to promote what is basically 360-degree stereoscopic film. True Virtual Reality, however, is different: One can interact with a digitally-crafted environment where the body is connected to the device and feels actual sensations. 

There is a lot of research around what happens to the human body during virtual experiences, and around devices and programs that give people opportunities to feel as they would in real life. There’s a device called Kissenger that allows users to exchange kisses by connecting their smartphones to it. Another program shifts a person’s hair color according to their sexual arousal; teledildonics are remotely-controlled vibrators which allow others to govern one’s desires; there are panties with pockets that hold a device receiving messages remotely from one’s partner, and laptop-accessible platforms that let people receive sexual stimulation from others across the globe. With so many options, it’s clear that technology is widening our range of possibilities for sexual experiences, but there may be negative implications to all of this: these experiences could make us more vulnerable to sexual predators.

Inside this fluid circle — where humans can use technology to fill holes in their sexual existence — there may be challenging questions, such as “What is love?” It will be interesting to see new answers come up in a future where technology will impact platonic love, affection, friendship, intimacy, and the connection between sex, pleasure, love, and emotional attachment. This leads me to consider: what will we do when we reach the Moon or Mars? What will happen when the distance between two people who love each other is no longer city-to-city or country-to-country, but interplanetary?

We might think of the post-human era as the era of robots — beings that are not real, but are built on a technology that gives them a series of features that appear human — and reflect on the way we engage with these robots. Maybe one day we’ll only accept robots if they are more human-like, or maybe we’ll prefer beings that are more abstract. Or, it’s possible that one day, a group of two or three people will go to Mars with a whole host of robots at their service, robots connected to their loved ones remotely from planet Earth. If we think of VR and telepresence, we realize how the space separating two individuals can be manipulated through technology, and, even more surprisingly, so can their psychological perception of it. That means that our relationships will be considered polyamorous when people are virtually together online, or through any other platform.

Humanity in this virtual realm has found an alternative way of building relationships and intimacy — one that goes beyond the “natural scheme of things,” beyond space and distance, beyond the idea of couples living in two different cities, to couples who could love each other from different planets. In this future, we might have relationships across eight-minute time zones — between Mars and Earth. But would a teledildonic remote control work from one planet to another? Maybe at that point we won’t need tech to stay connected: We’ll end up having sex with various versions of ourselves.

Cover image: Frederik Heyman – Gentle Monster – ArtFutura

The economics of news

Guido van Nispen is a publisher, advisor, and governance executive. He is on the supervisory boards of World Press Photo and Cinekid, and an advisor to the Dutch Council for Culture, chairing committees that advise the Dutch government on the audiovisual sector. With i4j, which aims to help build a society where technological advances are used to improve how we work, he recently published “The People Centered Economy: The New Ecosystem for Work.” He formerly served as CEO of ANP, the largest independent B2B news and information supplier to Dutch media. We sat down with Guido to talk about where journalism is headed in a world focused on innovation.

How have journalism models changed in the past five years? 

We could talk about innovation in journalism, but if we look at general journalism, there are a couple of things that have changed significantly. 

The first one is the economic model of journalism, which requires having a good group of people who are interested in what you want to share with them, because that’s the way it works. You need subscribers to make sure that you can make money. And in general journalism, there is a trend toward becoming more partisan in order to connect with your target audience. You need to be more like them. And I think a good example is in the United States, where mainstream media is becoming partisan. They are either like Fox News, more on the Trump side, or they’re more like CNN or The Washington Post, on the Democratic side. And this has created an enormous boost for journalism; I think the Trump effect created a real economic uplift for it. I’m not sure whether that’s good or bad. It just shows that in times of uncertainty, people need to have their own opinions reconfirmed. That’s one of the movements that we see happening: you get a colored lens over facts. That, of course, can be amplified by technology — whether that’s social media, or AI, or other kinds of technology; if you go to Facebook, the algorithms make choices there, too.

And there are a lot of requests from people saying: how do we go back to more objective, less algorithm-driven journalism? I think that’s very difficult. But there are also new technologies in the production of news, not only for gathering and producing it, but also for how we present video and photography. As the news becomes richer, easier to consume, and more and more driven by algorithms, users have a much richer experience than ever before. So that’s why you will see specialized media getting closer to specific groups, and the next phase will be reaching these groups through virtual reality, which will be even more immersive. Which means that if this continues, then you will, as a user, probably experience news as if you are extremely well served. But it might be very partisan and selective: if you go to YouTube, for example, you might see content that is tailored to your preferences, thinking, and opinions.

We still need to have journalists connected to innovation who understand innovation. A lot of what is called innovation journalism is, say, there’s a new iPhone or, you know, there will be self-driving cars. Most innovation journalism, or journalism about innovation, is basically about new gadgets and new things. And often it’s dystopian: will the robots take our jobs? You know, these kinds of things. And if we want to have good innovation journalism, we need journalism that understands what innovation is and is capable of translating that to an audience. Innovation journalism is not using technology to talk about innovation; it’s about making sure that you explain things like artificial intelligence in a way that the general audience understands, and sees its advantages and disadvantages.

How is a culture of innovation changing the way that news is consumed?

The economics of news, especially online news, means that you need to create news all the time. There’s little time for reflection and coming back to things. However, there is a magazine called Delayed Gratification that, instead, looks three months back. It’s always extremely interesting to read, because you think: wow, did all that happen in the last three months? But most news is so short. And, you know, it’s gone before you know it. And then something else pops up. Many, many people get their news from Facebook or Twitter, and that’s their world. A lot of organizations know that the time you get from a user, and the value that user gives to that time, is very important. So the value in capturing people’s attention and keeping it is very high. And technology is there to help with that: playing the next episode, sharing the next story, or having a more aggressive news feed are all elements that create this reinforcement circle, and I think it’s very difficult to break through unless you create something for the user. For example, Apple shares with you the time that you spent on their equipment and then shows you what you did.

Do you see a trend toward looking at what readers want, leaving this advertisement-driven business model aside? Is there a model that can make journalism profitable again? 

No matter what the model is, we still need to have people who are willing to pay for news, either through money or advertising. So whether it’s subscription-based or advertising-based, I don’t think that’s a big difference, because we still need to serve them articles that we can monetize or subscriptions that we can monetize. For example, there is a Swedish product called Readly. It’s like Spotify for magazines. With Readly, you have thousands of magazines that you can read, and for me this is amazing, because I pay 10 euros a month, which is probably less than I would pay for any one of the more expensive magazines. But how does this work? Well, the publishers get a share of the revenue, like on Spotify. I read maybe 50 of these magazines a month now, and with one account you can have five users. These platforms might help smaller publishers, but I have also canceled a lot of subscriptions since I joined Readly to save money. I’m not sure that innovations like Readly will bring the industry forward if people are cancelling their subscriptions.

Why do you think the idea that content should be free became so intrinsic for us?

We as consumers expect to be able to get anything for free online, and when it comes to magazines, I would never have thought about that in the past. I think nobody expected the kind of scale that Amazon, Apple, Netflix, Facebook, etc. would get to, and that has certainly changed how we consume media. I had a subscription to The Economist and I’m not sure what I paid, but I think it was more than 200 euros a year. When I subscribed to Readly, I cancelled The Economist, not because I don’t think it’s a great magazine, but if you have 10 or 20 other magazines which are also very good, you know, there’s not enough time in the day to read them all. If you tell people that music is 10 euros a month, like on Spotify, or that magazines are 10 euros a month on Readly, it will be very difficult to go back to saying that now you need to pay 20 euros a month for one magazine or album. Netflix has started to increase its prices a little bit, but if it said that now you need to pay a hundred euros a month, it would lose everybody. This is the basic economic story: companies give away content for free or very cheaply, and this is the way they scale, but in the process they devalue the product. To bring the value back, we need a completely different model — some publishers are trying, like Monocle magazine or MIT Technology Review. But I think that in general it’s difficult to return to the good old days of people paying a lot of money for content.

Who is responsible for the climate crisis?

Who is responsible for the climate crisis? For everyone who isn’t a climate denialist, there’s an easy answer to the question: humanity. Who, in their right mind, would challenge the idea that climate change is anthropogenic (made by humans)? Are we not living in the Anthropocene: the Age of Man as geological force? 

Well, yes and no. It turns out that saying “Humans did it!” may obscure as much as it clarifies. A world of political difference lies between saying “Humans did it!” – and saying “Some humans did it!” Radical thinkers and climate justice activists have begun to question a starkly egalitarian distribution of historical responsibility for climate change in a system committed to a sharply unequal distribution of wealth and power. From this standpoint, the phrase anthropogenic climate change is a special brand of blaming the victims of exploitation, violence, and poverty. A more nearly accurate alternative? Ours is an era of capitalogenic climate crisis.

Capitalogenic: “made by capital.” Like its sibling, Capitalocene, it can sound awkward when spoken. That doesn’t have much to do with the word, however – it’s because under bourgeois hegemony we are taught to view with suspicion any language that names the system. But naming the system, the form of oppression, and the logic of exploitation is what emancipatory social movements always do. Justice movements unfold through new ideas and new languages. The power to name an injustice channels thought and strategy, something dramatically underscored by labor, anti-colonial, and feminist movements across the long twentieth century. In this respect, mainstream environmentalism since 1968 – the “environmentalism of the rich” (Peter Dauvergne) – has been a complete disaster. The “ecological footprint” directs our attention to individual, market-oriented consumption. The Anthropocene (and before that, Spaceship Earth) tells us that planetary crisis is more or less a natural consequence of human nature – as if today’s climate crisis is a matter of humans being humans, just as snakes will be snakes and zebras will be zebras. The truth is more nuanced, identifiable, and actionable: we are living in the Capitalocene, the Age of Capital. We know – historically and in the present crisis – who is responsible for the climate crisis. They have names and addresses, starting with the eight richest men in the world, who hold more wealth than the bottom 3.6 billion humans.

What is the Capitalocene? Let me begin by saying what the Capitalocene is not. It is not a substitute for geology. And it is not an argument that says an economic system drives planetary crisis – although economics are crucial. It is a way of understanding capitalism as a connective geographical and patterned historical system. In this view, the Capitalocene is a geopoetics for making sense of capitalism as a world-ecology of power and re/production in the web of life. 

We’ll dig into the Capitalocene in just a moment. First, let’s get clear on the Anthropocene, of which there are two. One is the Geological Anthropocene. This is the concern of geologists and earth system scientists. Their primary concern is golden spikes: key markers in the stratigraphic layer that identify geological eras. In the case of the Anthropocene, these spikes are generally recognized as plastics, chicken bones, and nuclear waste. (Such is the contribution of capitalism to geological history!) Alternatively, and perceptively, the biogeographers Simon Lewis and Mark Maslin argue that 1610 marks the dawn of the Geological Anthropocene. Deemed the “Orbis Spike,” this marker registers how the period between 1492 and 1610 witnessed not only the Columbian Invasion but also its aftermath: the ensuing genocide in the Americas led to forest regrowth and a rapid CO2 drawdown by 1550, contributing to some of the Little Ice Age’s coldest decades (c. 1300-1850). The Geological Anthropocene is therefore a deliberate abstraction of historical relations in order to clarify the biogeographical relations of humans (as a species) and the biosphere. That’s entirely reasonable. The Capitalocene thesis is not an argument about geological history.

It’s an argument about geohistory – something that includes biogeological changes as fundamental to human histories of power and production. Here, the Capitalocene confronts a second Anthropocene: the Popular Anthropocene. This second Anthropocene encompasses a much wider discussion in the humanities and social sciences. It’s a conversation about the historical development, and contemporary realities, of planetary crisis. There’s no neat and tidy separation, and many earth system scientists have been happy to shift from the Geological to the Popular Anthropocene, and then back again!

For the Popular Anthropocene, the problem is Man and Nature – a problem that contains more than a little gender bias, as Kate Raworth makes clear when she quips that we’re living the Manthropocene. This Anthropocene presents a model of planetary crisis that is anything but new. It reincarnates a cosmology of Humanity and Nature that goes back in some ways to 1492 – and in others to Thomas Malthus in the eighteenth century. This is the narrative of Humanity doing terrible things to Nature. And driving those terrible things is, as ever, the spectre of overpopulation – an idea that has consistently justified the violent oppression of women and peoples of color.

You might notice that I’ve capitalized those words Humanity and Nature. That’s because these are not mere words, but abstractions that have been taken as real by empires, modernizing states, and capitalists in order to cheapen human and extra-human natures of every kind. Historically, most human beings have been practically excluded from membership in Humanity. In the history of capitalism, there has been little room in the Anthropos for anyone not white, male, and bourgeois. From 1492, the super-rich and their imperial allies dispossessed peoples of color, Indigenous Peoples, and virtually all women of their Humanity, and assigned them to Nature – the better they could be transformed into profit-making opportunities. The upshot is that the cosmology of Man and Nature in the Popular Anthropocene is not only a faulty analytic, but implicated in practical histories of domination. When the Popular Anthropocene refuses to name capitalogenic climate change, it fails to see that the problem is not Man and Nature, but certain men committed to the profitable domination and destruction of most humans and the rest of nature.

Courtesy of Rebecca Hastings.

The Popular Anthropocene’s insinuation that all humans did it, then, clearly doesn’t hold. The American and western European share of CO2 emissions between 1850 and 2012 is three times greater than China’s. Even this doesn’t go far enough. Such national accounting is akin to individualizing responsibility for the climate crisis. It doesn’t consider the centrality of American and western European capital in global industrialization since 1945. Since the 1990s, for example, China’s emissions have overwhelmingly served European and American export markets, and for decades were underwritten by massive foreign investment. There’s a global system of power and capital that’s always hungry for more Cheap Nature, which since the 1970s has meant sharply widening class inequality. Consider the United States, the world-historical leader in carbonizing the atmosphere. To allocate equal responsibility for global warming to all Americans is a grand erasure. The U.S. was, from the beginning, an apartheid-style republic based on genocide, dispossession, and slavery. Certain Americans are responsible for US emissions: the owners of capital, plantations and slaves (or today’s private prisons), factories and banks.

The Capitalocene argument therefore rejects anthropocentric flattening – “We have met the enemy and he is us” (as in Walt Kelly’s iconic 1970 Earth Day poster) – along with economic reductionism. To be sure, capitalism is a system of endless capital accumulation. But the Capitalocene thesis says that to understand planetary crisis today, we need to look at capitalism as a world-ecology of power, production, and reproduction. In this perspective, the “social” moments of modern class rule, white supremacy, and patriarchy are intimately connected with environmental projects aimed at endless capital accumulation. Essentially, the great innovation of capitalism, from its origins after 1492, was to invent the practice of appropriating Nature. That Nature was not just an idea but a territorial and cultural reality that encaged and policed women, colonized peoples, and extra-human webs of life. Because webs of life resist the standardization, acceleration, and homogenization of capitalist profit-maximization, capitalism has never been narrowly economic; cultural domination and political force have made possible the capitalogenic devastation of human and extra-human natures at every turn. 

Why 1492 and not 1850 or 1945? There’s no question that the Anthropocene’s famous “hockey stick” charts indicate major inflections in carbonization and other movements at those dates, especially the latter. These are representations of consequences, however, not the causes of planetary crisis. The Capitalocene thesis pursues analyses that link such consequences to the longer histories of class rule, racism, and sexism, all of which form, in the modern sense, after 1492.

By the sixteenth century, we see a rupture in how scientists, capitalists, and imperial strategists understood planetary reality. In medieval Europe, humans and the rest of nature were understood in hierarchical terms, like the Great Chain of Being. But there was no strict separation between human relations and the rest of nature. Words such as nature, civilization, savagery, and society only acquired their modern meanings in the English language between 1550 and 1650. This was, not coincidentally, the era of England’s capitalist agricultural revolution, the modern coal mining revolution, and the invasion of Ireland (1541). This cultural shift didn’t happen in isolation in the Anglosphere – there were cognate movements underway in other western European languages at around the same time, as the Atlantic world underwent a capitalist shift. In this radical break, the old ways of knowing reality, previously holistic (but still hierarchical), gave way to the dualism of Civilization and Savagery.

Wherever and whenever European ships disembarked soldiers, priests, and merchants, they immediately encountered “savages.” In the Middle Ages, the word had meant strong and fierce; now it signified the antonym of civilization. Savages inhabited something called wilderness, and it was the task of the civilized conquerors to Christianize and to Improve. Wilderness in these years was often known as “waste” – and in the colonies, it justified laying waste so that such lands and their savage inhabitants might be put to work cheaply. The binary code of Civilization and Savagery constitutes a pivotal operating system for modernity, one premised on dispossessing human beings of their humanity. Such dispossession – which occurred not once but many times over – was the fate meted out to indigenous peoples, to the Irish, to virtually all women, to African slaves, to colonial peoples around the world. It’s this capitalist geoculture that reproduces an extraordinary cheapening of life and work, essential to every great world economic boom but also violent, degrading, and self-exhausting.

The language of Society and Nature is therefore not just the language of the bourgeois-colonial revolution in its widest sense, but also a praxis of alienation, every bit as fundamental to capitalist hegemony as the alienation of modern labor relations. Society and Nature fetishize the essential alienated relations of violence and domination under capitalism. Marx’s account of commodity fetishism, through which workers come to perceive the fruits of their labor as an alien power looming over them, is obviously central. But another form of alienation goes along with commodity fetishism: civilizational fetishism. That alienation isn’t between “humans and nature.” It’s a project of some humans – white, bourgeois, male during the rise of capitalism – to cheapen most humans and our fellow life-forms. If commodity fetishism expresses the fundamental antagonism of capital and the proletariat, civilizational fetishism expresses the world-historical antagonism between capital and the biotariat (Stephen Collis) – the forms of life, living and dead, that provide the unpaid work/energy that makes capitalism possible. Civilizational fetishism teaches us to think of the relation between capitalism and the web of life as a relation between objects, rather than an internalizing and externalizing relation of environment-making. Everything that Marx says about commodity fetishism was prefigured – both logically and historically – by a series of civilizational fetishes, with the line between Civilization and Savagery as its geocultural pivot. The rise of capitalism did not invent wage-work; it invented the modern proletariat within an ever more audacious project of putting natures of every kind to work for free or at low cost: the biotariat. Like commodity fetishism, civilizational fetishism was – and remains – not just an idea but a praxis and a rationality of world domination. Since 1492, this line – between the Civilized and the Savage – has shaped modern life and power, production and reproduction. Reinvented in every era of capitalism, it is now being reasserted in a powerful way – as resurgent authoritarian populists militarize and secure borders against the “infestations” of refugees driven by the late Capitalocene’s trinity of endless war, racialized dispossession, and climate crisis.

1492 marked not only a geocultural shift, but also a biogeographical transition unprecedented in human history. The Columbian Invasion began a geohistorical reunification of Pangea, the supercontinent that drifted apart 175 million years earlier. This modern Pangea would, in the eyes of Europe’s bankers, kings, and nobles, serve as a virtually limitless storehouse of Cheap labor, food, energy, and raw materials. It’s here, in the Atlantic zone of modern Pangea, that capitalism and today’s planetary crisis originated. In the three centuries that followed, capitalism’s triple helix of empire, capital, and science made possible the greatest and most rapid land/labor transformation in human history. Only the genesis of settled agriculture at the dawn of the Holocene, some 12,000 years ago, rivals early capitalism’s ecological revolution. Centuries before Newcomen and Watt’s steam engines, European bankers, planters, industrialists, merchants, and empires transformed planetary labor/life/land relations on a scale and at a speed an order of magnitude greater than anything seen before. From Brazil to the Andes to the Baltic, forests were mowed down, coercive labor systems imposed on Africans, indigenous peoples and Slavs, and indispensable supplies of Cheap food, timber, and silver shipped to the centers of wealth and power. Meanwhile, women in Europe – not to mention in the colonies! – were subjected to a coercive labor regime more ruthless than anything known under feudalism. Women were ejected from Civilization, their lives and labor tightly policed and redefined as “non-work” (Silvia Federici): precisely because “women’s work” belonged to the sphere of Nature. 

[Image courtesy of Rebecca Hastings.]

The story of planetary crisis is typically told through the lens of “the” Industrial Revolution. No one questions that successive industrializations have coincided with major inflection points of resource use and toxification. (But industrialization long predates the nineteenth century!) To attribute the origins of planetary crisis to technological transformations, however, is a powerful reductionism. Britain’s Industrial Revolution, for example, owed everything to Cheap cotton: to the unpaid work of generations of indigenous peoples who co-produced a variety of cotton suitable for machine production (G. hirsutum), to the genocides and dispossessions of the Cherokees and others in the American South, to the cotton gin, which magnified labor productivity fifty-fold, and to the enslaved Africans who worked in the cotton fields. Nor was English industrialization possible without the previous century’s oppressive gender-fertility revolution, which subjected women’s care and reproductive capacities to capital’s demographic imperatives.

These snapshots of capitalism’s history tell us that this peculiar system has always depended on frontiers of Cheap Natures – uncommodified natures whose work can be appropriated for free or at low cost through violence, cultural domination, and markets. Such frontiers have been crucial because capitalism is the most prodigiously wasteful system ever created. This explains capitalism’s extraordinary extroversion. To survive, it has had to enclose the planet simultaneously as a source of Cheap Nature and as a planetary waste dump. Both frontiers, which allow for radical cost-reduction and therefore profit-maximization, are now closing. On the one hand, Cheapness is a relationship subject to exhaustion – workers and peasants revolt and resist, mines are depleted, soil fertility is eroded. On the other hand, capitalism’s enclosure of the planetary atmosphere and other commons for its wastes has crossed a critical threshold. Epochal climate change is the most dramatic expression of this tipping point, where we find global toxification increasingly destabilizing capitalism’s epochal achievements, its Cheap Food regime above all. These two strategies, Cheap Nature and Cheap Waste, are increasingly exhausted, as the geography of life-making and profit-taking enters a morbid phase. The climate crisis is – as Naomi Klein reminds us – changing everything. Capitalism’s world-ecology is undergoing an epochal inversion – or better, implosion – as natures stop being cheap and start mounting ever more effective resistance. Webs of life everywhere are challenging capital’s cost-reduction strategies and becoming a cost-maximizing reality for capital. Climate change (but not only climate change) makes everything more expensive for capital – and more dangerous for the rest of us.

This is the end of Cheap Nature. That’s a huge problem for capitalism, which is built on the praxis of cheapening: cheapening in the sense of price, but also cheapening in the sense of cultural domination. The first is a matter of political economy; the second, of the cultural domination that revolves around imperial hegemony, racism, and sexism. Among the most central problems of planetary justice today is forging a strategy that links justice across and through these two moments. Consider that the most violent and deadly biophysical results of this toxification and economic stagnation are now visited upon the populations most consistently designated as Nature since 1492: women, neo-colonial populations, peoples of color.

This is a dire situation for everyone on planet Earth. But there are grounds for hope. A key lesson I’ve drawn from studying climate history over the past 2,000 years is this: ruling classes have rarely survived climate shifts. The collapse of Roman power in the West coincided with the Dark Ages Cold Period (c. 400-750). The crisis of feudalism occurred in the century or so after the arrival of the Little Ice Age (c. 1300-1850). Early capitalism’s most serious political crises – until the mid-twentieth century – coincided with the most severe decades of the Little Ice Age, in the seventeenth century. Climate determines nothing, but climate changes are woven into the fabric of production, reproduction, governance, culture… in short, everything! To be sure, the climate changes now unfolding will be bigger than anything we’ve seen over the past 12,000 years. “Business as usual” – systems of class rule and production and all the rest – never survives major climate shifts. The end of the Holocene and the dawn of the Geological Anthropocene may therefore be welcomed as a moment of epochal political possibility – the end of the Capitalocene.

To be sure, capitalism continues. But it’s a dead man walking. What needs to happen now is radical change that links decarbonization, democratization, and decommodification. This will have to turn the logic of the Green New Deal inside-out. Such a radical vision will take the GND’s crucial linkage of economic justice, social provision, and environmental sustainability in the direction of de-commodifying housing, transportation, care, and education – and of ensuring food and climate justice by de-linking agriculture from the tyranny of capitalist monocultures.

It’s precisely this radical impulse that lies at the heart of the world-ecology conversation. That conversation is defined by a fundamental openness to rethinking the old intellectual models – not least, but not only, Society and Nature – and to encouraging a new dialogue among scholars, artists, activists, and scientists that explores capitalism as an ecology of power, production, and reproduction in the web of life. It’s a conversation that insists: no politics of labor without nature, no politics of nature without labor; that emphasizes that Climate Justice is Reproductive Justice; and that challenges Climate Apartheid with Climate Abolitionism.

The Capitalocene is therefore not some new word to mock the Anthropocene. It is an invitation to a conversation around how we might dismantle, analytically and practically, the tyranny of Man and Nature. It’s a way of making sense of the planetary inferno, emphasizing that the climate crisis is a geohistorical shift that includes greenhouse gas molecules but can’t be reduced to matters of parts per million. The climate crisis is a geohistorical moment that systemically combines greenhouse gas pollution with the climate class divide, class patriarchy, and climate apartheid. The history of justice in the twenty-first century will turn on how well we can identify these antagonisms and mutual interdependencies, and how adeptly we can build political coalitions that transcend these planetary contradictions.