— During this century, VR will become capable of generating comprehensive experiences, experienced as ‘directly’ (i.e., washed through our perceptive and cognitive apparatus) as what we today know as reality.
Later this century, we will enter a “post-virtual” world. While pundits announce Virtual Reality’s (VR) coming of age, debating how far and fast the technology might advance, these are concerns of the next decade or so. Anyone living for a couple more decades will encounter a world where VR and its cousins, Augmented and Mixed Reality, become commonplace.
What happens as the distinctions between ‘real’ and virtual fade?
While today the distinctions between virtual and real are relatively easy to discern, what happens as these distinctions fade? Research trajectories already suggest worlds where modified or synthesized environments are experienced as deeply real. As this occurs, such environments are likely to be considered alternative, complementary versions of experience. Reality from a wider palette.
We can already discern capabilities that will enable this journey. We’ll address two here: invisible interfaces and intelligence-to-intelligence (i2i) communications.
How do you move your arm? The interface exists, and you’re blissfully unaware.
How do you move your arm? Do you remark, “Siri, move my arm,” or punch a keyboard? Of course not. It just happens. The interface exists, and you’re blissfully unaware. The same will occur with respect to our interactions with computing systems and eventually with each other. Through invisible interfaces.
Amazon Echo is a Voice-Activated Human Computer Interface, available since 2015.
Human-computer interactions are rapidly moving beyond screens and keystrokes. Voice-activated Human Computer Interfaces (HCI) that use natural language capabilities in smartphones are becoming common. As impressive as voice interfaces are, technologists have already surpassed these with direct mind-activated systems, or Brain-Machine Interfaces (BMI), such as the direct control of artificial limbs illustrated by the work of John Donoghue’s team at Brown University.
Technology trends extrapolate to vanishing interfaces between brains and technologies.
This trend extrapolates to a disappearing interface between brains and technologies. Currently, BMIs are either not accurate enough to replace traditional methods or too invasive to be offered outside of clinical settings, but thought activation of peripherals is already being pursued by research labs and technology companies alike. From 2013 to 2015, the European Union funded a major survey of BMI research and development that resulted in recommendations for significant applications within the next decade.
Such interfaces could lead us to bypass some of our most established modes of communication. People might still desire keystrokes or voice interfaces in certain circumstances, similar to how some people prefer a hand-written letter to an email. Nonetheless, as these technologies become more capable, they are likely to become mechanisms of choice for an ever-wider range of purposes.
Intelligence-to-intelligence (i2i) communications— metaphorically, communicating eye-to-eye.
If brains can interface directly with machines, then two or more brains could potentially interact directly as well. In 2013, Miguel Nicolelis and his team at Duke University electronically connected the brains of two rats, perhaps the first demonstrated brain-to-brain interface. Transmission and translation across invisible interfaces could enable seamless communication between humans, or between various forms of intelligence. True “intelligence to intelligence” communications, or i2i (metaphorically, “eye-to-eye”).
As our understanding of language advances, the ability to parse content generated in the brain, deriving and conveying semantic meaning, could enable us to overcome the spoken language barrier. A thought could transfer from one brain (human, artificial or cybernetic) to another without the need for verbal communication. Verbal communication will likely remain essential for humans for some time, but we have no way of knowing the choices our descendants will make after they have assimilated i2i capabilities.
“Society progresses by increasing the number of things we can do without thinking.” —Alfred North Whitehead
Sensory and cognitive systems seamlessly supplementing the biological brain could support a range of activities without conscious thought, analogous to the autonomic nervous system controlling our basic living functions. A genius of such systems would be the continuous operation of essential functions in the absence of conscious intervention. As British philosopher and mathematician Alfred North Whitehead suggested, “society progresses by increasing the number of things we can do without thinking.”
From “Virtual” to a Wider Definition of Reality
As comfort with VR advances, the notion of virtuality will change, perhaps fading altogether. What we generate and experience, with increasingly subtle, disappearing interfaces, would thus become recognized as additional aspects of reality — wider and more diverse, yet no less real.
The notion of virtuality will change, perhaps fading altogether.
Consider the experience of dreaming. During sleep the brain generates a rich environment with realistic encounters that fool us into belief. Only upon waking do we realize that these encounters existed only in our dream world. Current VR systems fail to generate such a comprehensive experience: we know we can remove the goggles and, with them, our virtual world. Exiting is similar to waking from a dream, except that we remain fully conscious of the choice to leave a VR experience; in most dream states, that choice does not seem operative. Since we trust our senses more than we might recognize (e.g., we rarely question the impressions our eyes propose), an interface capable of thwarting cognitive disbelief might support experiences interpreted as real, generating a full set of emotions, desires and thoughts.
Virtual Reality could be really immersive! Image courtesy of Somniacs.
The first screening of a filmed event, the arrival of a train into a station, “caused fear, terror, even panic.”
One of the first examples of a filmed scene presented to a live audience, L’Arrivée d’un train en gare de La Ciotat (Auguste and Louis Lumière, 1896), showed the arrival of a train into a station. As the steam locomotive approached, more realistic than any virtual experience yet presented, journalist Hellmuth Karasek commented in Der Spiegel that “it caused fear, terror, even panic.” While historians debate the intensity of the reaction, many who experience recent VR technologies report dissonance, even fear. Walking off a simulated cliff creates a notoriously uncomfortable, even frightening physical and emotional response, even while standing on solid ground. We know we will not plummet to our deaths. Instincts overtake rationality.
The only way we’ll know a well-simulated reality is simulated, is that we’ll know it is so.
The only way we experience what we know as reality is through stimuli collected by our senses and interpreted by our brains. The only way we’ll know a well-simulated reality is simulated is that we’ll know it is so. Absent any other frame of reference or qualifying information, simulated stimuli could generate experiences identical to reality. While the proximate causes might differ, our experience could be the same. If we can simulate what we experience as real, we can also invent new experiences, such as a comprehensive feeling of unaided flying (as described in my post here on January 2, 2017). Virtual experiences will no longer be pale simulacra; the dimensionality of ‘real’ will expand and diversify, subsuming what we today consider virtual. The post-virtual world is coming. Our concept of reality will need to evolve.
This article was first published in the Huffington Post on 21/01/17