
The territory is not the map
Revisiting Alfred Korzybski’s warning in the digital age
by Vittoria Traverso
On September 30, 2022, Philip Paxson, a 47-year-old medical device salesman, was driving home following directions provided by Google Maps. As he approached a bridge near Hickory, North Carolina, the app told him to drive over it. Paxson followed the directions, only to drive his Jeep over the unbarricaded edge of a long-collapsed bridge and plunge 20 feet into the creek below, a fall he did not survive. Nearly a year later, Paxson’s family sued Alphabet, Google’s parent company, claiming negligence: despite repeated requests from users, they argued, the company had failed to update its maps after the bridge collapsed nine years earlier.
This heartbreaking incident dramatically illustrates a concept popularized by the Polish-born American philosopher Alfred Korzybski nearly a century ago. In his groundbreaking 1933 book Science and Sanity, Korzybski posited that “the map is not the territory,” warning that representations of reality, like maps or digital models, are not reality itself. The Hickory tragedy is a stark reminder: the map may have said there was a bridge over the creek, but in reality, the bridge had collapsed.
Born in 1879 in Warsaw, Poland, Alfred Korzybski was an engineer fluent in Polish, Russian, French, and English. During World War I he served as an intelligence officer in the Russian Army, suffered leg wounds in battle, and relocated to North America in 1916 to coordinate weapons shipments to Russia. In the decades that followed his military service, he grew frustrated by people’s inability to communicate with one another. He observed that advances in science and mathematics were driving technological innovation while similar progress was lacking in human interaction and communication. In 1938, Korzybski founded the Institute of General Semantics in Chicago, Illinois, to provide training in general semantics, a discipline he had established a few years earlier. Conceived as broader in scope than ordinary semantics and grounded in scientific methods of inquiry, general semantics examines how the symbols and abstractions humans create to make sense of the world can affect our interactions with the world itself. As general semantics practitioner Harry Maynard summarizes, the discipline looks at how we “‘size up’ the world and then symbolically relate to it.” Perhaps the most intuitive application of the questions raised by general semantics can be found in mapmaking. In making a map, cartographers collect a variety of data about a portion of our physical world, which they convey in a visual summary that highlights its most salient features. General semantics asks us to be aware of how this map is created and how its symbols can influence our exploration of the world around us.

MAPS by Scott Reinhard
In particular, Korzybski focused on the role of language as a system of abstractions. According to the Polish polymath, words can act as a bridge between reality and human perception, allowing us to communicate effectively with one another, or they can act as distorting mirrors, tricking people into perceiving a warped image of the world. To further explain his theory, Korzybski created visualizations of the different steps that take place during the process of abstraction, starting from the selection of a bit of reality to be expressed (an apple) to the choice of a symbol to represent it (the word “apple”).
Since Korzybski’s time, the number of abstractions that influence our daily perception of reality has skyrocketed. An estimated 1 billion people in the world use Google Maps, and more than 500 million use China’s Gaode Map, making these the most extensive shared visual representations of the world in human history. But it is not just language or maps; millions of us interact with the digital world daily through interfaces, human-made abstractions created to facilitate our interaction with online products or services. It has been estimated that as of 2020, some 250 billion apps were being downloaded each year by smartphone users, with people spending around 85% of their screen time using apps. Far from being value-neutral, the design of app interfaces reflects specific choices made by engineers, product developers, and business managers at each step of what Korzybski would have called the “abstraction” process. When designers create an interface that depicts the seemingly “magical” delivery of pizza or sushi to your doorstep, they do so by choosing to represent certain aspects of the food delivery process, such as speed, over others, such as the working conditions of the people delivering the food.

MAPS by Scott Reinhard
One of the key qualities Korzybski sought to instill in students of general semantics is what he called “consciousness of abstracting”: awareness of what is omitted when we simplify information. In the case of digital interfaces, only recently have scholars begun to pay attention to what is left out of the slick screens we interact with every day.
In their 2019 book Ghost Work: How to Stop Silicon Valley from Building a New Global Underclass, anthropologist Mary L. Gray and computational social scientist Siddharth Suri invite readers to look at the “hidden” human labor behind the digital world. The authors reveal how the “frictionless” services provided by companies like Amazon, Google, Microsoft, and Uber are only possible thanks to a vast, invisible human workforce scattered across the planet. From watching traumatizing videos, to flagging X-rated content, to tagging data to train algorithms, to proofreading, these “ghost workers” perform vital tasks, often for less than minimum wage and with little to no benefits. “To users, digital assistants, search engines, social media, and streaming services seem like software wizardry,” writes journalist Edd Gent in an article about ghost workers for BBC Worklife, “but their smooth running relies on armies of humans whose contribution often goes unrecognized.” Gent cites the example of a worker based in Chennai, India, tasked with training Amazon’s Alexa by transcribing audio, classifying words, and ranking Alexa’s responses during grueling nine-hour shifts. While the end user may experience a voice assistant that seems to magically “speak to us,” the contribution of the humans working behind the scenes to make that magic happen is erased in the abstraction process. In the case of Amazon Mechanical Turk, a platform created in 2005 to connect gig workers with companies looking to complete tasks, the visual interface itself is designed to erase the humans working behind the scenes, says Professor Lilly Irani, a digital labor expert at the University of California, San Diego.
“Rather than communicate directly with workers, Mechanical Turk users create tasks through a programming interface in much the same way they would write instructions for a computer,” Irani told Gent in an interview. “This is designed to mask the human labour and lets users kid themselves that they’re coders rather than managers.”

MAPS by Scott Reinhard
The term “digital,” as opposed to “physical,” can in itself trick us into believing that the digital world is “ephemeral” and “weightless,” terms that hide the environmental footprint of digital services and products. In his 2023 book The Dark Cloud: The Hidden Costs of the Digital World, French journalist Guillaume Pitron traces the physical journey of the electrical signals that travel thousands of kilometers through fiber optic cables every time we send an email or “like” a video on social media. Pitron estimates that the digital world accounts for about 10% of the world’s electricity consumption and is set to reach 20% within the next ten years. The consumption of rare earth materials to power the devices and cables that make up the physical infrastructure of our digital world is also hidden behind the words and interfaces we use every day, Pitron notes. In his book, he estimates that manufacturers consume about 180 kg of raw materials, including rare earth metals and fuel, to make a standard smartphone. The rapid advance of AI, which requires vast amounts of water to cool data centers, is swelling the environmental footprint of the digital world, making Pitron’s call to see the real-world infrastructure behind seemingly ephemeral services all the more urgent.
The growing interest in the bits of reality left out of our daily narratives and interfaces has given rise to efforts to make them more visible. Lilly Irani and Six Silberman founded the online forum Turkopticon as a space for Amazon Mechanical Turk workers to share their experiences; it has since become a hub for advocacy efforts to win better working conditions for “MTurkers” worldwide. Campaigns to make people aware of the carbon footprint of emails and voice messages are increasingly common. And last year, the US military brought celestial navigation training back into its curriculum to guard against overreliance on GPS. Of course, no word, map, or interface could ever faithfully replicate the multifaceted, ever-shifting nature of the real world. But the principles of general semantics, introduced nearly a hundred years ago by Korzybski, ask us to distinguish between useful maps, or abstractions, that help us see the real world with clarity, and faulty ones that obscure key parts of it. “A map is not the territory it represents,” Korzybski wrote on page 58 of Science and Sanity, “but, if correct, it has a similar structure to the territory, which accounts for its usefulness.”