In H.G. Wells’s masterpiece The Time Machine, by the year A.D. 802,701 humanity has bifurcated into two races: the Eloi, who live within a daytime paradise, and the Morlocks, subterranean creatures who maintain the ancient machinery that supports the Eloi’s life on the surface. The Morlocks, apparently without knowledge of how to construct the machines they tend, provide a carefree existence for the Eloi and, in exchange, eat them.
Wells’s time-traveling protagonist reasons that the upper classes have become livestock for the working-class Morlocks. Living in apparent paradise, the Eloi (in Hebrew, “lesser gods”) have lost all curiosity and initiative. In one pivotal scene, none of the Eloi notice the plight of one of their own drowning, much less attempt to rescue her.
Without attention and intent, we risk living as Eloi, tended by technology we fail to understand. Today we already exist in a complicated, complex web of relationships between humans and technology beyond the comprehension of any individual. Increasingly, AI monitors and arbitrates on our behalf, from health monitoring and resource allocation, to customer service and security. What control will we cede? How much have we ever had?
INTO THE UNKNOWN
Thus far, we have been the agents creating paths, defining processes and algorithms instantiated by technologies. AI introduces new exploratory agents. In our image, these systems will exhibit curiosity and act to change our world for better and worse. As systems become more capable and complex, some will evolve beyond our control. Even with control mechanisms – which we’ll be well-advised to create – AI will discover insights and capabilities not conceived in advance. This is the nature of exploration.
Machine Learning systems have already begun to surprise their coders. Google’s language translation engine, Google Neural Machine Translation (GNMT), provides the best-publicized example. Trained to translate between human languages, the system began generating a new internal representation, which Google dubbed an “interlingua,” helping it translate between pairs of languages it wasn’t explicitly trained to handle. While commentators argue over the extent to which GNMT accomplished something for which it was not programmed, it’s a harbinger.
Humanity has many times invented machines we initially failed to fathom. Something as simple as the barometer at first perplexed scientists. Through experimentation, Evangelista Torricelli, Blaise Pascal and others eventually explained the mechanism’s operation — upending two thousand years of Aristotelian theory. Machines catalyze understanding.
Exploration requires navigating opacity, unfamiliar phenomena, notions for which symbols do not yet exist. The recent acceleration of AI revives early progress throttled by some of AI’s brightest minds. In 1957, pioneering AI researcher Frank Rosenblatt introduced the perceptron, the first operational neural network. Its apparent ability to learn generated significant interest.
Better-known and better-connected researchers had other ideas. In 1969, AI luminaries Marvin Minsky and Seymour Papert published a book, Perceptrons, that excoriated the notion of neural networks. Funding, most of it from US Government sources heavily influenced by experts like Minsky and Papert, evaporated in favor of other AI paths. Neural networks appeared a dead end until a new generation of researchers resurrected Rosenblatt’s work in the 1980s, a catalyst of the current AI ferment. Sadly, Rosenblatt passed away in 1971, never seeing his ideas vindicated. It is a true Kuhnian story of personalities and paradigms shaping scientific progress.
Genius is no guarantee of truth. Fortunately, today’s AI research is not dependent on one dominant funder. Wider, more diverse capital sources support AI research and application. The more experiments and applications worldwide, the more likely we’ll discover fruitful hypotheses, unexpected insights and confounding anomalies.
TOWARD COLLECTIVE CEREBELLA
Karl Popper asserted that “All life is problem solving.” Evolution encodes solutions successful within the environments in which they developed, shifting as conditions change. Many solutions remain embedded in our living systems. Our brainstem and cerebellum, the ancient brain within, maintain breathing, heartbeat and balance without our conscious intervention. We stand upon automated scaffolds, which both enable and constrain.
Technology continues this dynamic, transferring activities from conscious to automatic operation. Shifting attention, brain plasticity, cybernetics and, over longer periods, evolution will transition our cognitive systems to new roles. Already, portions of the brain adapted for map reading and navigation atrophy as many of us obey our Google Maps. From grey matter to the cloud.
Meanwhile, the scale advantage of data for machine learning suggests this dynamic might generate, in a sense, collective cerebella. Data emerge from individuals, though they’re most useful with reference to groupings of individuals. As systems connect and integrate, they’ll play cerebellum-like roles across groups, transforming social systems. They’ll become agents of culture change and mediation, technological mechanisms for collective action and control.
In general, machine learning operates more effectively with larger data sets. This suggests a standardizing effect of connectivity and machine learning across ever-larger populations. On platforms such as Facebook, LinkedIn and WeChat, billions of individuals interact within standardized environments – for the first time in human history.
AI will amplify this standardizing effect across economies and cultures. China’s leadership currently pursues social engineering at unprecedented scale through consumer applications like WeChat and AliPay, widespread deployment of video monitoring and a nationwide Social Credit System to rank each citizen on their – and their network’s – actions. Set to roll out by 2020, the system will allocate benefits and sanctions based on the score.
The liberal democratic mind recoils at the prospect. The European Union recently enacted the General Data Protection Regulation (GDPR) to return some modicum of control to individuals. The mission is noble, but if machine learning advances through data access, which region’s approach might prove more economically effective? What might be the implications for civil society and the lives we aspire to live?
Wider data access, analytics and agency can lead to far better service and security, as well as exploitation and oppression. Realizing the potential of AI depends on the value functions pursued, and on which organizations command the resources to do so. Questions of liberty and equity loom large.
WHAT SHOULD HUMANS DO?
Recently in Harvard Business Review, I posed the question, “When technology can increasingly do anything, what should human beings do, and why?” This query will define much of our journey this century. Answers intimately relate to what we ask AI systems to do – and eventually to what AI systems decide to do.
Over decades, AI and robotics systems will become far more capable than we humans at nearly everything. Even humans-remain-special safety blankets like creativity and empathy will succumb to technology, at least from a pragmatic perspective. Perhaps AI systems will not ‘feel’ empathy as we do, though this distinction hardly matters if they are capable of using empathy to accomplish objectives. Ethical and spiritual questions abound.
The market mechanism, driven by efficiency, ensures we will stop doing many things ourselves. Actuaries hold high-prestige, high-paying jobs. In the near future, AI systems will outperform any one human’s ability to execute traditional actuarial tasks. The mission of actuarial science will remain. How it’s accomplished will transform.
As in past transitions, humans will discover new opportunities, solving problems in new ways and solving new problems. But this time change will happen faster. Electrification of manufacturing, beginning in the late 19th century, took 20 years to diffuse to half of all relevant facilities in the US. AI capabilities diffuse more rapidly. From consumer launch in November 2014, Alexa and other voice-based systems will likely surpass 50% of US households soon after 2020. This is just one consumer-facing component of vast industry and cultural changes underway.
Human beings now exist in a constant, accelerating, shifting search for relevance. Fortunately, AI won’t simply lead to an ‘us-versus-them’ robot apocalypse. These technologies will be integrated with our cognitive, living, social systems. As ever, our greatest challenges will remain us versus us.
ATTENTION AS OUR ESSENTIAL QUESTION
Throughout life, each of us retains a singular choice: attention. While Descartes likely erred in postulating the mind-body dichotomy, his assertion, cogito ergo sum, becomes ever more relevant. Thought provides not only evidence of individual existence, but also the mechanism through which we construct our world.
Conscious attention is each individual’s most limited resource. Automation’s essential contribution is to release our attention for activities of choice rather than necessity. The Agricultural Revolution released much of humanity from subsistence-level food production.
Liberated from survival concerns, desires might lead attention anywhere. As necessities become more widely available, a larger group of humans has the option to seek more intellectually, emotionally, experientially engaging activities, or to wallow in stimulatory surplus.
On what to attend becomes one of our most challenging, essential and ethical questions. One person’s trivial distraction could be another’s transcendent ritual. Our post-modern world deconstructs traditional definitions of value, opening horizons for exploration while fomenting confusion, anxiety and discord. Unlimited options can paralyze.
William James asserted that “attention equals belief.” It equals belief through an iterative process: what we attend to stirs and sways our beliefs, which in turn bias our attention, moving each ever closer to equivalence. Social media echo chambers, and the challenges they pose to our social-political institutions, offer a poignant example.
Hölderlin’s poetic line framing this essay suggests our challenge. In The Question Concerning Technology, Martin Heidegger leveraged Hölderlin’s insight to explore technology’s roles in the human experience. “Technology harbors in itself what we least suspect, the possible arising of the saving power.” Though he cautioned us to keep “always before our eyes the extreme danger.”
Our creations will advance beyond our control in ways we have yet to imagine. Fortunately, our choice is not between Eloi and Morlocks, but between passivity and engagement. Prisons of our inadvertent making, or platforms for ever-greater experience and actualization?