Mind over matter: navigating the ethical landscape of neural tech
In the synaptic architecture of the augmented mind, a new technology hands humanity a red pill, revealing how deep the rabbit hole of ethical control truly goes
Credit: Google DeepMind
Morpheus told Neo, “You take the blue pill, the story ends. You wake up in your bed and believe whatever you want to believe. You take the red pill, you stay in Wonderland, and I show you how deep the rabbit hole goes.”
It’s always the same question: what do you want to do? Pretend that nothing is wrong, that everything is nice and simple, avoid raising too many problems, and show glowing optimism? Or try to understand things better, questioning the good side as well as the bad, at the risk of sounding a bit conservative, fearful, and reluctant to innovate and change?
It’s easy to find yourself facing that question these days when it comes to radical innovation. Whether the subject is AI or augmented brains, there are so many conceivable scenarios and so much uncertainty that no one can be sure whether a given course of action is ultimately good or bad.
Take, for example, BCIs (Brain-Computer Interfaces), a term coined in the 1970s by Jacques Vidal, a professor at UCLA, after he showed that electroencephalography could detect brain waves and use them to control external devices by thought alone. Since then, great strides have been made: today, many people use devices, more or less invasive, that translate their brain signals into control of prosthetic limbs, computers, or other peripherals. All of this happened largely under the radar until a few years ago, when Elon Musk founded Neuralink, a company whose mission is to create “a generalized brain interface to restore autonomy to those with unmet medical needs today and unlock human potential tomorrow.”
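Vidal’s core insight is easy to illustrate. The sketch below is a hypothetical toy example in Python (using NumPy and SciPy, not code from any real BCI system): the alpha rhythm in an EEG channel typically strengthens when a user closes their eyes, so simply thresholding alpha-band power yields a crude one-bit “thought switch.” The sampling rate, threshold, and synthetic data are all assumptions made for illustration.

```python
import numpy as np
from scipy.signal import welch

FS = 250              # assumed sampling rate in Hz
ALPHA_BAND = (8, 12)  # alpha rhythm, the band early EEG interfaces relied on

def alpha_power(eeg: np.ndarray, fs: int = FS) -> float:
    """Estimate mean power spectral density in the alpha band of one EEG channel."""
    freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)
    mask = (freqs >= ALPHA_BAND[0]) & (freqs <= ALPHA_BAND[1])
    return psd[mask].mean()

def control_signal(eeg: np.ndarray, threshold: float = 0.5) -> bool:
    """Map brain activity to a binary device command.

    Closing the eyes raises alpha power, so a simple threshold
    turns 'eyes closed' into an on/off switch for a device.
    """
    return alpha_power(eeg) > threshold

# Demo on synthetic data: background noise vs. noise plus a 10 Hz alpha wave.
t = np.arange(0, 4, 1 / FS)
rng = np.random.default_rng(0)
rest = rng.normal(0, 1, t.size)
eyes_closed = rest + 3 * np.sin(2 * np.pi * 10 * t)

print(control_signal(rest))         # False: no strong alpha rhythm
print(control_signal(eyes_closed))  # True: alpha power exceeds threshold
```

Real systems, of course, go far beyond a single threshold, decoding many channels with machine-learning models, but the principle is the same: extract a feature from brain signals and map it to a command.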
Partly because of its founder’s notoriety, partly because of the mission’s ambition, and partly because of its highly innovative approach, BCI has become a mainstream phenomenon that many are eyeing with both hope and concern. Neuralink has developed a coin-sized device (called Link) and a fully robotic surgical system with which, a few months ago, it performed its first human implant, in the skull of Noland Arbaugh, a 29-year-old left paralyzed from the shoulders down by a diving accident. Unlike other solutions, Link is completely wireless, charges daily like a cell phone, and leaves no external bulges. Implanted inside the skull, its dense array of electrodes, spread across more than 1,000 channels, lets the device read neural signals at close range and provide unprecedented control.
Reading this young man’s account is extraordinary: his fears before agreeing to become Patient Zero, his hope of regaining autonomy and dignity, the joy of doing things for himself that he could not have imagined before, the alarm when some of the electrode threads retracted and his newly acquired abilities declined, and the happiness when, after a recalibration, everything worked perfectly again.
Restoring autonomy, dignity, and hope to people in such circumstances: faced with these extraordinary achievements, one might wonder why on earth we would want to stop progress. Indeed, stopping it altogether does not seem to be the right answer. It does, however, seem wise to keep acceleration in check in a field that still holds many unknowns and raises a host of legitimate ethical concerns.
In this case, as in many others, Link *reads* brain signals and uses them to control devices. But like any interface, it could in principle also *write* them, and Neuralink’s ambitions indeed include moving in that direction to help treat neurodegenerative diseases such as Parkinson’s and Alzheimer’s. This opens the door to a whole range of dystopian scenarios, in which one can imagine a totalitarian system taking control of thought by hacking into people’s cognitive processes.
Even the company’s stated ambition to “unlock human potential” raises questions about what kind of society develops when only a few possess enhanced cognitive abilities, and how much that may deepen inequality.
Yuval Noah Harari, in his famous “Homo Deus,” pointed out the risks of a world in which humans, thanks to biotechnology and artificial intelligence, elevate themselves to the status of demigods by enhancing their physical and cognitive capacities, and how this would create a divide between enhanced and non-enhanced humans whose relationship could come to resemble that between humans and other animal species today.
More than that, Harari warned that the greatest risk is sliding into such a situation almost unconsciously, step by step, driven at first by the excellent intention of solving health problems and helping people like Noland, until we find ourselves inside a giant Matrix. After all, the Matrix itself was a virtual world delivered through a BCI capable of writing signals to the brain, making unreal experiences look and feel real.
This dilemma is especially pressing for those working in innovation, because it demands guardrails that reduce the risks of developments like these without smothering the enthusiasm needed to keep investing in progress.
So: red pill or blue pill?