
Crowdsourcing morality

In recent years, AI development has accelerated quickly and is finally making its way into our lives in the form of self-driving cars and smart personal assistants. As AI gets more and more reliable, people are slowly overcoming their suspicions towards it and embracing it as a part of their day-to-day lives. With adoption rates soaring, now is the right time to switch the focus of the debate to Moral Machine Learning: which ethics do we want machines to have? And how do we bestow these values onto them?

As self-driving cars make headlines for both their life-changing potential and their ethical implications, we have to ask ourselves: which moral values does a car need when it finds itself in a dangerous situation? In Arizona on March 18, an autonomous car killed a pedestrian who was crossing the street. This casualty highlights how urgent it is to dig into this topic: does AI have a sense of what is good and what is bad?

To answer such a complex question, we have to get back to the basics of human morality, reflect on our ethical codes and the common agreements that are currently in place. We can’t instil a sense of morality in AI if there is no consensus about what “ethical” is for us in the first place.

AI, born in the mid-’50s, is a very young branch of Informatics — which is why it behaves just like a child, observing and learning anything it comes across without any sort of filter. So far, we’ve just kept on “feeding” it with uncontrolled stimuli; but children won’t grow their own ethics if parents fail to provide them with ethical guidelines. Our duty as the parents in this relationship is to control the machines’ learning paths, eliminating any chance that the data we are feeding this technology will create unethical entities. Unfortunately, we have no real control over the future output, but it’s essential to start applying an ethical framework to AI even at this current (relatively early) stage.

Even though human ethics are fluid and differ over time and across cultures, we do have certain common, basic moral codes that regulate societies. As a preliminary step, we should apply this framework to machines. This sounds obvious, but the truth is it hasn’t been done yet. There are two main reasons for this: first, the disconnect between the way we make technological decisions and create algorithms, and the way we experience social life. Second, the fact that regulatory institutions and tech giants have never really sparked off a debate — the Facebook/Cambridge Analytica case has done nothing but make this disconnect more explicit than ever.

The risk of casualties caused by automation will most likely never cease to exist, but avoiding a one-size-fits-all ethical mindset and gathering human perspectives, tapping into the different views people might have on specific moral questions, is a good start in mitigating the likelihood of an incident taking place. Thanks to crowdsourcing, which means involving internet users and tapping into their knowledge to develop a project, we now have the opportunity to let everyone contribute to this growing ethical issue, hopefully preventing accidents through the creation of a more secure system. Something similar has been theorized by Microsoft’s Dr. Hsiao-Wuen Hon and, in 2016, experimented with by MIT through their project, Moral Machine: a platform that collects human perspectives on moral decisions made by intelligent machines.

These are examples of how crowdsourcing can contribute to creating an inclusive society for machines and humans: the collective moral view of a crowd on various issues creates an ethical system that’s inherently better than one built by an individual. Triple-entry accounting, an innovative system used today in blockchain to ensure the legitimacy of transactions, could find a new application in the AI environment and be the best way to ensure a higher level of transparency when it comes to opening the decision-making process to the crowd.
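To make the crowdsourcing idea concrete, here is a toy sketch of how a platform might aggregate individual judgments into a collective view. The dilemmas and votes are entirely invented, and this is far simpler than anything a platform like Moral Machine actually does:

```python
# Toy aggregation of crowd judgments on moral dilemmas by majority vote.
# Dilemma names and votes are invented for illustration; real platforms
# such as MIT's Moral Machine collect millions of responses.
from collections import Counter

def aggregate_judgments(votes):
    """Return the majority choice for each dilemma and its level of agreement."""
    consensus = {}
    for dilemma, choices in votes.items():
        choice, count = Counter(choices).most_common(1)[0]
        consensus[dilemma] = (choice, count / len(choices))
    return consensus

votes = {
    "swerve_to_save_pedestrian": ["swerve", "swerve", "stay", "swerve"],
    "prioritise_young_or_old":   ["young", "old", "young"],
}
for dilemma, (choice, agreement) in aggregate_judgments(votes).items():
    print(f"{dilemma}: {choice} ({agreement:.0%} agreement)")
```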

Eventually, we’ll also have to consider who should be held accountable for the final decisions made by AI with preprogrammed morals. Many world leaders and industry analysts are calling for the establishment of a decision-making process in this sense — Germany, for example, has actually started to develop a regulatory masterplan for self-driving cars. However, governments are rarely equipped with the full technological know-how needed to do that, and the best solution may be to create, along with entrepreneurs, an international, neutral “governing body” based on open leadership.

As we age year on year, AI does too — only a lot faster than us. This means that we have a maximum of 10 years (maybe less) to set up an AI moral system and a legislative framework. It’s a very small amount of time — and that’s why we should stop focusing on the potentially negative aspects of tech developments and work on the positive ones.

As the acclaimed documentary Who Killed the Electric Car? showed, innovation can indeed be slowed down or even stopped entirely by caging the debate in a single-minded mindset. This may ultimately prevent us from experiencing the incredible opportunities offered by AI, just as happened with electric cars: a great invention whose failure was due to a lack of vision and of a shared decision-making process.

Learning from this big mistake and tapping into people’s knowledge through crowdsourcing is the right path to follow if we want AI to unfold its full potential and become an ethical technology in harmony with our own moral codes and principles.

8 takeaways from the rise of Artificial Intelligence

On February 10, 1996, the first game of a historic chess match between a human champion and a computer took place. Garry Kasparov, the reigning world champion, ultimately kept the machines at bay, beating IBM’s Deep Blue four games to two. A year later, Kasparov lost the rematch to an upgraded Deep Blue. Not only was it his first match loss ever, it was also the first time a computer had defeated a reigning world champion in match play.

Years later, Kasparov said that even though he won against Deep Blue in 1996, he understood that something was changing. If a computer can win against a world champion even once, then soon computers will have the ability to win every time. In this sense, 1996 was a huge milestone: Garry Kasparov was the last human champion to beat the machine, and since that moment computers have always won.

Artificial Intelligence is the technology that redefines the boundaries of human potential and has the capability to transform the ways in which we interact with each other and the environment.

There are people who think that superintelligent humanoids could be humankind’s last invention, whilst others think that AI will unlock an everlasting wave of new opportunities. From enthusiastic tech gurus to sceptical scientists and prudent experts, everyone seems to have their own point of view on this transformative technology. It was to this that our debut event, maize.live – The Rise of Artificial Intelligence, was dedicated.

Throughout two days of in-depth analysis, inspiring debates and thought-provoking speeches, we delved deep into the implications of Artificial Intelligence and the future of humankind.

Exploring the dark and the bright sides of this technology, we learned that our collective future is exclusively in our hands. The way we shape our tomorrow depends on how we face the enormous change that AI is bringing into our lives today. So what were the most powerful insights gained on Artificial Intelligence?

Robert C. Wolcott, Co-Founder & Chairman of The World Innovation Network (TWIN), Clinical Professor of Innovation & Entrepreneurship at Kellogg School of Management, Managing Partner at Clareo, and moderator of our event maize.live – The Rise of Artificial Intelligence

# POSSIBILITY

“There’s no magic, there’s nothing special: it’s basically simple math,” suggested Pascal Weinberger, Head of Rapid Development & AI at Telefonica Alpha Innovation. In short: let’s stay grounded. Even if AI might worry us, it is unlikely that this technology will suddenly revolt and take over the world. Although artificial systems are growing quickly, they are not human; they are based on mathematical structures. Human intelligence is extremely complex and multifaceted, involving many areas such as common sense, reasoning, language, planning and analogy. Today, the only field in which AI can really make a difference is deep learning. Machines can’t do anything on their own, and their decisions and actions are still trapped within the limits of what we deem necessary.
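Weinberger’s “simple math” is easy to demonstrate. Below is a single artificial neuron, the basic unit that deep learning stacks by the millions; the weights and inputs are arbitrary illustrative numbers:

```python
# One artificial neuron: a weighted sum of inputs passed through a simple
# non-linearity. Deep learning composes millions of these; none is magic.
import numpy as np

x = np.array([0.5, -1.2, 3.0])   # inputs
w = np.array([0.8, 0.1, -0.4])   # learned weights
b = 0.2                          # bias

output = max(0.0, float(np.dot(w, x)) + b)   # ReLU(w.x + b)
print(output)                                # 0.0 for these numbers
```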

Rather than seeing it as a threat, we should view AI as a field of limitless possibilities. Artificial systems learn from the data we give them: if we want to create powerful machine learning algorithms, we have to choose correct, positive and bias-free data, keeping in mind that the inputs coming from the surrounding environment influence the way this technology makes decisions. As machines become more and more sophisticated, the real challenge for us is to define how we program and employ these systems. If we always keep humans in the loop, AI will neither surpass nor defeat humankind.

“The best way to predict the future is to create it!” – Abraham Lincoln

# CYBERSECURITY

Today, crime is being led by highly intelligent people. According to Menny Barzilay, CEO and Cyber Security Strategist at FortyTwo, crime itself is getting more innovative and creative every day.

In the world of crime, we can recognize an inherent asymmetry: cyberspace intrinsically favours hackers and hinders the work of security people. As with terrorism, if you are a hacker you only have to succeed once; if you are the security agent, you have to succeed every time. Moreover, security is very costly while hacking is very cheap. Artificial Intelligence can change the rules of the game and disrupt this pattern, contributing to cybersecurity in fields such as anti-spam and phishing, automation, malware detection and many more. But innovation is a two-sided coin, and every new technology creates new problems.
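The “succeed once versus succeed every time” asymmetry can be put into numbers. In this illustrative calculation (the probabilities are invented), even a defence that stops 99% of attacks is eventually breached by a persistent attacker:

```python
# If each attack independently gets through with probability p, an attacker
# making n attempts breaches with probability 1 - (1 - p)**n.
p = 0.01   # illustrative: the defence stops 99% of individual attacks
for n in (10, 100, 1000):
    print(f"{n} attempts -> breach probability {1 - (1 - p) ** n:.3f}")
# 10 -> 0.096, 100 -> 0.634, 1000 -> 1.000 (to three decimal places)
```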

Some years ago, the Defense Advanced Research Projects Agency (DARPA), the agency in charge of developing defense products, systems and technology for the USA, organized a competition, the Cyber Grand Challenge, wherein participants tried to hack very complex defense systems. For the first time in history, the participants weren’t human: machines competed against other machines, with AI replacing human hackers.

In this unsettling scenario, where AI systems acted simultaneously as both the threatening actor and the “good guy”, a question arose: who is more likely to win? Artificial systems would most likely maintain the same asymmetrical pattern as current security systems, and it would, once again, prove harder to guarantee security than to commit crimes.

Today, algorithms can learn to identify pixelated faces and even evaluate someone’s personality and psychological traits. From AI-assisted fake porn to real-time face reenactment, machine-made fabrications are becoming ever harder to tell from the real thing. AI systems can mystify our perception of reality, and we are at risk of losing the ability to understand what is real. Trust is our only weapon against this tendency, and without it we can’t move forward. Trust is something we have the power to create, and it’s imperative to work together to make sure that AI does not divide us.

“AI will create amazing, amazing opportunities for humankind, but also amazing, amazing threats.” – Menny Barzilay

# EXPLAINABILITY

AI today is very powerful but remains, ultimately, a black box: you have an input, something happening in between, and then the output. As with us, something occurs within the AI brain/engine that we are not privy to: we don’t really know what is going on or how decisions are being made.

However, this is going to change, in Europe at least. The GDPR, the new regulation which came into effect last week, stipulates in Article 13 that anyone using AI must be able to explain why and how certain decisions have been taken by the machines. Businesses are finally being called on to establish a chain of accountability.

Starting from this urgency, Professor Mischa Dohler told us how, at King’s College London, a new concept of explainable AI planning (xAIP) has been developed. This concept works on a deterministic decision tree where causality is explainable and decisions made by the machines are re-computable. The most interesting aspect is that these explainable AI systems can coexist with our decision-making process: humans can tell the machine to change a decision it has taken, and the computer can recompute everything that follows from that point. This results in a much more accountable, open and traceable type of AI methodology.
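To give a feel for the idea, here is a minimal sketch with an invented triage example; this is not the King’s College xAIP code, just an illustration of the principle that every decision records the rule that produced it, and a human override triggers a deterministic recomputation of everything downstream:

```python
# Minimal sketch of explainable, re-computable decisions: each step is
# logged with its cause, and a human can override a step and recompute.
def triage(temp_c, heart_rate, override_urgent=None):
    trace = []
    urgent = temp_c > 39.0
    trace.append(f"temp {temp_c}C > 39.0 -> urgent={urgent}")
    if override_urgent is not None:
        urgent = override_urgent
        trace.append(f"human override -> urgent={urgent}")
    if urgent:
        action = "escalate" if heart_rate > 120 else "monitor closely"
        trace.append(f"heart rate {heart_rate} -> action={action}")
    else:
        action = "routine care"
        trace.append(f"not urgent -> action={action}")
    return action, trace

action, trace = triage(38.2, 130)                        # routine care
action, trace = triage(38.2, 130, override_urgent=True)  # recomputed: escalate
print(action)
print(*trace, sep="\n")                                  # full causal chain
```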

“AI will automate jobs, and humanize work.” – Mischa Dohler

# HUMAN

In spite of the hype, Artificial Intelligence is not about technology taking over, but rather about extending our limits. Exponential technology can work as an extension of ourselves and of our capabilities, helping us to solve complex issues. As a good example of this potential, Bart de Witte, Chair of the Faculty of Digital Health at the futur/io Institute, shared with us the powerful revolution occurring in healthcare, one that we can trigger today simply with a tap on a smartphone.

High-performing algorithms, data sensors and AI together are creating huge possibilities for early diagnosis, but it is the combination of human intuition and judgement with the precision and consistency of these machines that will supersede the performance of either alone. The future of healthcare is not humans versus algorithms, but humans and algorithms against disease.

The extraordinary power of these technologies also stems from the potential to be open and accessible to everyone. 70% of the world’s population has no access to healthcare – AI could provide key solutions to this huge problem, saving millions of lives. Healthcare for all is a desirable future and humans play a central role in shaping it.           

While algorithms can automate many aspects of our working lives, the very nature of artificial systems implies that humans will always be central. Scott David, Head of Information Interaction at the World Economic Forum, explained that algorithms should be viewed as cognitive tools capable of augmenting human skills and redesigning organizations.

We should work not only using AI to optimize aspects such as customer experience or supply chain, but also on scaling our own cognitive capabilities and augmenting people and jobs. To make the most out of AI technology, a design process is essential in driving innovation across organizations and defining our relationship with these machines.

AI should become an extension of the individual. While computers are better at making predictions and calculations, people are better at rethinking experiences and redesigning processes, modelling our world in order to convert human thoughts into a human-centered data structure.

“Technology is giving us the capacity to become superhuman.” – Bart de Witte

# CUSTOMER CENTRICITY

In almost every industry, AI is entering corporate DNA – revolutionizing business models and dynamics. With Lars Schwabe, Associate Director of Lufthansa Industry Solutions, we investigated how Lufthansa is focusing its attention on the design of customer-centric experiences in the age of AI. When it comes to improving the customer experience, conversational interfaces are gaining momentum due to their ability to simulate conversations with humans and offer instant and effective digital solutions. A good example is Lufthansa’s bot, Mildred, a tech-savvy assistant that you can contact easily via the Facebook Messenger app.

An increasing number of customers prefer to deal with chatbots in certain situations to obtain immediate information and fast answers to their needs and queries. To engage with customers in a way that traditionally only a person would do, chatbots rely on Natural Language Processing (NLP). Today, Lufthansa is working intensively on these aspects, developing business-relevant customized NLP solutions.
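Lufthansa hasn’t published how Mildred works under the hood, but most NLP chatbots begin with the same step: classifying the intent behind a user’s message. A minimal sketch with scikit-learn, on invented utterances and labels:

```python
# Toy intent classifier: TF-IDF features plus logistic regression.
# Training utterances and intent labels are invented for illustration.
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

utterances = [
    "is flight LH400 on time", "when does my flight depart",
    "how much baggage can I bring", "what is the checked bag limit",
    "I want to change my booking", "rebook me on a later flight",
]
intents = ["flight_status", "flight_status",
           "baggage_policy", "baggage_policy",
           "rebooking", "rebooking"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(utterances, intents)

print(model.predict(["when does my flight leave"])[0])
# likely "flight_status" on this toy data; the bot then fills in the answer
```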

We now see humans and AI merging their strengths to achieve the best result for all involved: while chatbots can handle a considerable amount of repetitive tasks and analyse complex situations, humans can focus on higher value services, therefore providing an overall significant and effective customer experience.

# PREDICTABILITY

Another interesting example of how artificial systems can assist humans in high pressure and complex situations was given by Vittorio Di Tomaso, H-FARM’s Artificial Intelligence Director. By using machine learning to filter information and detect patterns, we can predict natural disasters and improve the way we respond to them.

Extreme weather events are expected to become more frequent and longer-lasting in the future. By extracting information from the large quantities of data available from different sources, including social media and crowdsourcing, we can provide fast and effective information (as well as solutions regarding prevention) in order to react quickly to natural disasters.

This is how the European project I-REACT (Improving Resilience to Emergencies through Advanced Cyber Technologies) is taking advantage of predictive AI: mobile apps, social media analysis tools, drones, wearables to improve positioning, and augmented reality glasses to facilitate reporting and information visualisation by first responders. All of these allow actors such as organizations, policymakers and stakeholders to improve disaster prevention and response.
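As a rough illustration of the pattern-detection step (this is not I-REACT’s actual pipeline, and the counts are invented), a system might watch the hourly volume of disaster-related social media posts and raise an alert when it spikes far above its baseline:

```python
# Flag a possible emergency when hourly mentions jump well above baseline.
from statistics import mean, stdev

hourly_flood_mentions = [4, 6, 5, 3, 7, 5, 4, 6, 48]   # invented counts

baseline = hourly_flood_mentions[:-1]
threshold = mean(baseline) + 3 * stdev(baseline)        # about 8.9 here
latest = hourly_flood_mentions[-1]
if latest > threshold:
    print(f"ALERT: {latest} mentions vs threshold {threshold:.1f}")
    # in a real system: notify first responders, task drones, and so on
```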

“Improving Resilience to emergencies through Advanced Cyber Technologies.”  – Vittorio Di Tomaso

# BIG DATA

Among financial services, insurance is one of the key sectors exploring the possibilities of AI. Traditionally, a customer only thinks of insurance when approaching a purchase, a significant life change or when facing an unfortunate life event. Today, insurance companies are trying to create a different perception of their services, becoming active participants in the customer’s daily life.

As Reza Khorshidi, Chief Scientist of AIG, told us, the insurance sector is trying to increase its proximity to customers and reduce the traditional gap between brokers and clients, taking advantage of the spread of the ecosystem model. By engaging with customers on needs other than insurance (e.g. travel, healthcare, entertainment), gathering large amounts of data and understanding customers’ needs in depth, insurance companies will be able to improve their speed and accuracy in creating more personalized, direct-to-customer offerings.

Data is also central to new fintech ventures, as Roberto Mancone, Chief Operating Officer of we.trade Innovation DAC, pointed out. Banks are already exploiting the power of data and advanced analytics to identify, segment and acquire customers, keep track of the customer journey, improve credit risk decision-making, develop and enhance SME behavioural rating models (using social media data), and detect early warnings of default.

“Software is Eating the World, But AI is Going to Eat Software.” – Jensen Huang, CEO of Nvidia, 2017

# CHALLENGE

A true, meaningful journey through Artificial Intelligence, its wonders and threats, leads to an obligatory question: how will this technology affect us as human beings? Our last speaker guided the audience in reflecting on and identifying the most potent considerations and questions for the future of human culture, behaviour and even the ways in which each one of us thinks as individuals.

With a Ph.D. in neuroscience and past experience as a science journalist and author, Lone Frank pointed out that technological innovation is deeply connected to two great “mental revolutions”, comparable to the way Copernicus and Darwin’s theories transformed our view of the world forever.

On one side there is neuroscience: for centuries we believed that there was something immaterial and solely human that defined our being. Now we know that it is the “state of our brain” that defines us, a unique organ that can be empowered through the use of new technological implants and substances. On the other side there is genetics, and the ongoing popularization of genome sequencing.

But while we are still discovering more and more about our own mechanics, without concepts such as “soul” and “inner personality”, how can we define who we are?

AI is the latest ingredient in this revolutionary recipe. Machine learning and data analytics are unveiling the fact that human behaviour is predictable and that we are almost biological machines, with inner rules and patterns. So, if a machine could learn these behavioural norms, what remains that is human? What is our role in society if not only our jobs but the very way in which we think can be reproduced by an algorithm?

Cogito, ergo sum?

“Our behavior is predictable, the more we use AI the more this will become clear. And in the longer term this fact will shape human beings, as individuals and our society” – Lone Frank, Neuroscience Ph.D., Science journalist at Weekendavisen


Healthcare goes virtual: Healing by feeling

Virtual Reality is playing a leading role in a new version of Exposure Therapy (ET), an approach first employed in the Fifties to treat anxiety disorders by repeatedly “exposing” sufferers to the stimulus they fear. Thanks to VR, this can now be recreated in both a practicable way and a safe environment, teaching sufferers how to relax their muscles and control their breathing. There is no longer a need to book several flights in order to beat aerophobia or to spend an hour in an elevator to conquer claustrophobia. VR goggles, headsets and controllers are enough to live out certain experiences so as to exorcise these fears. The view, the sound, the smell: all appear real but are not.

Even extremely serious traumas can be reproduced. In fact, Virtual Reality Exposure Therapy (VRET) is used to treat Post-Traumatic Stress Disorder (PTSD). Doctors and patients have learnt to trust tools like Bravemind, developed by the Institute for Creative Technologies at the University of Southern California. Virtual Iraq is another platform, developed by Virtually Better, a company associated with USC professor Skip Rizzo. In 2010, he was recognized by the American Psychological Association for his Outstanding Contribution to the Treatment of Trauma. Both of these tools have shown that VRET can drastically reduce the length, frequency and intensity of episodes.

VR is proving more than useful when it comes to physical pain too, thanks to its application as a means of delivering distraction therapy. Figures provided by Firsthand Technology show that VR can reduce time spent thinking about pain by 48%, whilst narcotics reduce it by only 10%. An incredible development at a time when, according to a 2017 UN report, opioid overuse in the U.S. has reached worrying levels (45,580 daily doses per million people).

Firsthand’s CEO Howard Rose started his career at the Human Interface Technology Lab (HITLab) of the University of Washington in Seattle, where a groundbreaking game was conceived: SnowWorld, the first immersive virtual world designed to reduce pain from severe burns. With wide-view goggles, audio headphones and a hand controller, patients journey into a frozen world and enjoy an enchanted landscape where they can throw snowballs at penguins and snowmen. Studies have found that wound-care sessions are 50% less painful while patients are playing SnowWorld.

How can it work so well? According to its creators, “pain perception has a strong psychological component. The same incoming pain signal can be interpreted as painful or not, depending on what the patient is thinking. Pain requires conscious attention (…) being drawn into another world drains a lot of attentional resources, leaving less attention available to process pain signals”. Rose later left the lab and founded Firsthand Technology, which produced another game, Cool!. Quartz wrote that clinician Ted Jones, from the Pain Consultants of East Tennessee clinic, tested it out with 40 participants, with only one person reporting that their pain had not been reduced. The other 39 reported that their pain fell by 60-75% during the VR session and by 30-50% immediately afterward.

VR technology has always been highly valued by the military. Many governments have spent huge resources so that their armies have access to the most advanced technologies in this field. So it’s unsurprising that some of the most important labs in the world were founded by former soldiers. HITLab was created by Thomas A. Furness III, the “Grandfather of VR”, who pioneered the use of this technology in the Sixties while working for the US Air Force. Similarly, Surgical Theater was founded by two Israeli Air Force officers, Moti Avisar and Alon Geri. They began by employing VR for flight simulations but soon moved into healthcare once they discovered the technology’s remarkable applications in this field.

Surgical Theater aims to train surgeons the same way fighter pilots are trained. Through its VR medical visualization platform, together with a wide array of high-precision tools, Surgical Theater helps to accurately simulate and plan complex operations. Osso VR, another company, aims to do the same for orthopedic surgeries. These new instruments allow surgeons to travel inside the patient’s body and visualize arteries, veins and organs in HD 3D – crucial in planning for complications. At the same time, these platforms help patients (and their families too) to better understand what they are going to go through, as well as the risks, and thus improve their engagement and cooperation. It goes without saying that medical students benefit greatly from this technology too.

VR has made the unthinkable possible. In 2016, cancer surgeon Shafi Ahmed performed an operation at the Royal London Hospital using a VR camera, allowing people to follow the surgery through the website of Medical Realities – a company which the year before had launched The Virtual Surgeon. Originally conceived to help one particular category of beneficiaries, doctors, these tools have proved to be a formidable asset to a wide range of recipients, which helps explain why VR use in the health sector is blossoming. It is a powerful and useful ally in the treatment of mental illnesses such as schizophrenia, eating disorders and addictions, and in rehabilitation programs.

Immersive Rehab, for instance, creates interactive physiotherapy programs in Virtual Reality that improve the effectiveness of physical and neuro-rehabilitation: In the virtual world, patients can perform movements they can’t in real life. By tricking the mind, VR makes the unthinkable possible.

VR in healthcare creates lots of business opportunities too. Money keeps flowing, and the VR/AR tech sub-sector is booming. According to the November 2017 Mixed Reality (MR) Headsets Market report by Global Market Insights, the MR headset market is expected to surpass $35 billion by 2024. By the same year, according to another study, the Gesture Recognition Technology market will be worth around $45 billion. Healthcare is predicted to account for the second-largest share of the overall MR market by 2022, and to be worth over $5 billion by 2025.

Public and private health systems have been facing revenue pressures and declining margins for years as increasing demand, infrastructure upgrades, and therapeutic and technology advancements strain already limited financial resources. With increased spending fueled by aging and growing populations, developing market expansion, clinical and technology advances, and rising labor costs – the pressure is on to continue to utilise technology in distributing smart healthcare to patients. R+ is one of the key technologies in realising this sector’s survival, and is one of many examples of how healthcare is adapting to the digital age.

It’s time for AI to get ethical

In 1998, Yahoo! shared insightful data with two students who were working together on a thesis titled “The Anatomy of a Large-Scale Hypertextual Web Search Engine”. At the time, Yahoo! could never have known that these students would later become the founders of Google – which today dominates 78% of the search market.

It stands to reason that if the company did, in fact, know what Larry Page and Sergey Brin would use the data for and were capable of, then they most certainly would have been more cautious about sharing it, and the way we use the internet today might look very different.

Since then, similar “giveaways” have occurred multiple times across different industries; the difference today is that we are more aware of the relevance and importance of data and how it can be utilized as a full-fledged asset. I’m not only talking about Facebook and Cambridge Analytica, but also about something not so widely reported: public organizations which, in order to make up for their structural lack of funds, are providing startups with the valuable information that fuels them.

The digitization of businesses today is fueling a massive acquisition of the public sector by the private one: we are constantly witnessing markets consumed by giant, powerful private tech companies which have been granted access to an unlimited amount of data. This issue becomes truly entangled with ethics when it comes to healthcare, one of the most sensitive businesses in our society: when this happens, who retains data ownership? And how will the use and sharing of data be regulated?

On the one hand, Big and Smart Data, made possible by AI, is empowering Healthcare Diagnostic Services to advance their research at an exponential pace: the higher the quality of the data we collect, the faster and more promising the research will be. This will, ultimately, be reflected in a better quality of service for the final user — in our case, the patient — and benefit the healthcare system as a whole.

However, if, as the latest trends seem to confirm, Healthcare Diagnostic Services are consolidated into a handful of global-scale private providers, the data is at risk of being unevenly shared and ending up in private hands. This phenomenon has more potential to amplify inequalities than to reduce them: as Smart Data is essential to Smart Healthcare, what could the consequences be for the public systems of countries with less access to data?

Healthcare delivery is reaching an inflection point. Five disruptive trends, several of which have already transformed other major industries, are triggering changes that will profoundly affect healthcare for years to come. When and how these disrupters will strike will vary across the healthcare ecosystem. Forward-looking funds are reviewing their investment strategies now to determine how to best capitalize on the opportunities and mitigate the risks that disruption will bring.

Another major underlying topic is the thorny problem of data ownership: this is the most sensitive data one could think of. Who owns it? Whilst patients may think that they’re in control of their data, this is nothing but a myth — in fact, this market has none of the self-regulating forces we see in other industries. I hear a lot of discussion on how to solve this issue, with some pointing towards innovative technology — especially Artificial Intelligence and Blockchain — as the possible problem solver. I’m not so enthusiastic: there’s still a long road ahead before we can actually implement these technologies in the markets.

According to a recent study, people tend to place a lot of trust in university hospitals. This is why I advocate transforming these research powerhouses into the experimental platforms of our future Healthcare system. By involving academies in the issue, we could finally fire up a deep, constructive debate about the use (and misuse) of data. Every other industry is already having this discussion: it’s time we started building our own antibodies and thought more about AI Ethics, both in terms of research and in quality of the service we bring to our communities.

We might end up trusting AI to perform surgery on us — but can we trust the humans behind the Smart Data this technology collects?

What is UBI?

By 2030, as many as 800 million jobs could be lost worldwide to automation. In times like these, it is easy to understand why tech industry leaders in Silicon Valley widely endorse the concept of Universal Basic Income as a way to stem the predicted problem of job loss over the next two decades.

So what is UBI? UBI is an unconditional income paid directly by the state to all of its citizens at regular intervals, regardless of their income or employment status. The idea first appeared in Thomas More’s Utopia (1516), wherein the Portuguese traveller Raphael Nonsenso narrates a conversation he says he had with John Morton, the Archbishop of Canterbury, about the futility of punishing thieves when the thief had no other choice but to steal.

From here, UBI has reared its head multiple times over the centuries, becoming more refined with each iteration as time marched on. Beginning with More’s friend and fellow humanist Johannes Ludovicus Vives (1492-1540), who was the first to work out a detailed scheme and develop a comprehensive theological and pragmatic argument for it, UBI has also emerged in the ideas of Thomas Paine, Charles Fourier, John Stuart Mill and Bertrand Russell.

There are a few fundamental questions UBI raises. First, the term itself: Universal Basic Income. Who defines what is universal? Is it regional/national? All citizens/legal citizens? Who defines what is basic? Is it a living wage or more (or less)?

There is also the question of whether or not UBI will cause inflation. The answer: unlikely. UBI would be financed by taxes. It would only cause inflation if people decided to do nothing beyond collecting this sum, which would result in fewer goods and services chasing the same money. Which begs a further question: are we inherently lazy? Is mankind made to mooch?

The answer lies in the sleepy central Canadian province of Manitoba, where the experimental guaranteed annual income project, Mincome, was held in the 1970s. Mincome acted as a type of negative income tax, aiding households whose incomes dipped below a certain amount. No official report was issued on the results of the experiment, but years later University of Manitoba economist Evelyn Forget conducted a quasi-experimental analysis which found that only two groups, new mothers and adolescent males, worked less after receiving Mincome, and not necessarily out of laziness.

Mothers with newborns stopped working because they wanted to stay at home longer with their children, whilst the adolescent males worked less because they weren’t under as much pressure to support their families – resulting in more of them graduating. In addition, those who continued to work were given more opportunities to choose what type of work they did. Forget discovered that in the period that Mincome was administered, hospital visits dropped 8.5 percent, with fewer incidents of work-related injuries and fewer emergency room visits from accidents and injuries. Additionally, the period saw a reduction in rates of psychiatric hospitalization, as well as in the number of mental illness-related consultations with health professionals. It would seem, then, that we can trust ourselves to be proactive under this new model, and that it has the capability to enhance the lives of those who experience it.

More recently, you may have heard of the failure of a UBI trial in Finland, which began in early 2017. Never mind the fact that preliminary results aren’t expected until 2019: this pilot was never really about UBI – in fact, it was always about “promoting employment”. Doomed from the start, the trial targeted 2,000 randomly selected unemployed Finns (one fifth of the number originally proposed) to receive €560 a month (about $675) for only two years. In short, it was too limited in both scale and duration. Instead of giving free money to everyone, the experiment in effect handed out a form of unconditional unemployment benefits, with not an iota of universality in sight.

Although the Finnish experiment was shut down by an austerity-driven conservative government, UBI has no inherent political leaning, and the left and the right have radically different hopes for this model. In this sense, it is clear that the UBI debate cannot be reduced to the employment debate alone, and its wide-ranging support from both the left and the right is a testament to the fact that this is a multifaceted issue – one from which complex and mixed extrapolations can be drawn.

On the right, you find a more libertarian, neoliberal model for UBI. This ideology believes in UBI as a means of providing agency to individuals, but that it should only be implemented in line with the abolishment of more traditional public services such as healthcare, education and housing. Instead, individuals should receive UBI so that they can purchase services on the market.

This is a problem for the left, who see the support from the right as a kind of trojan horse for neoliberalism, a hijacking of an idea that in their interpretation differs greatly. In their progressive, egalitarian vision, only a few public services (those made redundant) would be stopped – unemployment benefits and food stamps, for example.

This version of UBI would have to be a high enough amount to ensure a truly free market wherein workers can decide what they do and whether or not they want to engage in paid work at all. If the UBI sum does not warrant this, they fear it will be exploited by employers who will pay less knowing that UBI will make up the difference.

If we were to take a utilitarian standpoint (which proposes that actions are right if, and only if, they promote happiness and maximize pleasure, even if those actions cause suffering and impinge upon the natural rights of a few), we would implement UBI as soon as possible – as it ticks all of the boxes by reducing the amount of suffering in our world. Redistribution of wealth, however, is a dangerous topic, and throughout history has been dogged by corruption, inequality and malpractice – all factors which could damage the concept beyond repair.

Now back to automation. As Joseph Schumpeter argued 75 years ago, innovation is a process of creative destruction. The social and political challenge is to accentuate the creative and mitigate the destructive. As we as a society progress, so too does the risk of job loss due to automation. John Maynard Keynes, writing in his 1930 essay Economic Possibilities for our Grandchildren, foresaw a “nervous breakdown” in society as people experience the adverse consequences of automation, calling it “a fearful problem for the ordinary person, with no special talents”. So what will people do with their spare time? What will a world look like without work? These are questions we have never had to ask before and, once we are deprived of our traditional purpose, will have to answer.

There is an old Maori proverb, “Ma te mahi ka tino ora” – “Work brings health”. To be without work is to be without purpose, and studies have shown that people who are unemployed for more than 12 weeks are between four and ten times more likely to suffer from depression and anxiety; unemployment is also linked to higher rates of suicide and heart failure. Furthermore, British economist Guy Standing found that lack of occupational identity and economic insecurity is a direct antecedent to extremism. In this light, widespread automation spells disaster for the future of humanity. If handled properly, UBI could reduce today’s ever-growing populism and mitigate the negative effects of unemployment. If handled badly, it could lead to a tsunami of insecurity that will flood entire populations with a sense of worthlessness – exacerbating the very thing that it set out to cure. We must ensure that UBI is able to fill the existential hole we humans experience when we lack a tangible sense of meaning.

With automation, you can continue to increase Capital (K) without increasing Labour (L), and the implications of this are huge. Why in this world would L (which is in such oversupply) be rewarded with anything but a pittance? The resulting scenario, Derek Kerton writes, will be a form of neo-feudalism, akin to when serfs worked the land for the landowner and received just enough compensation to survive. In this instance, it will be even worse than before, as the serfs are not even needed. UBI has the potential to mitigate this dystopia or reinforce it; we must continue to analyse this model to ensure that we get it right.
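The K-versus-L worry can be made concrete with a standard textbook production function; this is an illustration of the argument, not a model the author cites:

```latex
% Cobb-Douglas production: output Y from capital K and labour L
\[ Y = A\,K^{\alpha}\,L^{1-\alpha}, \qquad 0 < \alpha < 1 \]
% In a competitive market, the wage tracks the marginal product of labour:
\[ w = \frac{\partial Y}{\partial L} = (1-\alpha)\,A\left(\frac{K}{L}\right)^{\alpha} \]
% Normally, accumulating K raises w. But once automated capital substitutes
% directly for labour, w is capped by the cost of the machine that can do
% the same job, however large K grows: Kerton's neo-feudal scenario.
```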

However, philosophers, futurists and economists have spoken about a jobs crisis in the face of automation for decades, perhaps even centuries, and it has never been realised. Is this panic misplaced? The economist Doug Henwood seems to think so, and notes that you would expect to see a rapid increase in productivity as a precursor to this dilemma, something which is currently not happening at all. In fact, the opposite is true: productivity has stagnated or declined the world over, suggesting that we are safe – for now.

Even without the threat of automation, we are today wasting vast amounts of human potential. Not only are massive levels of poverty preventing populations from excelling, but we also have a lot of workers in jobs that are not needed. One study in the UK, for example, found that 33% of workers believed their jobs had no reason to exist. With UBI, we would begin to see wages reflect the social value of the jobs that people do. Workers in more important societal roles, such as teaching or garbage collecting, which today command low wages, will always be able to fall back on UBI, granting them extensive leverage in negotiating higher salaries. The not-so-important jobs, however, will be in for a shock.

Ultimately, we need more tests of UBI in order to see whether or not this model can work for humanity. But we should remember that we need to debate it on its own merits and principles, not as a proxy topic for automation, as it is so much more. With or without automation, UBI could be a viable model for a better world, one in which the way we relate to work and ourselves is radically redefined – an age of imagination realised.

A tale of three cities

By taking a broad and holistic perspective, it becomes evident that mobility is a multi-layered issue. Not only does mobility include fulfilment means (cars, trains, bicycles), but also the broader public infrastructural elements (roads, bridges, tunnels) as well as a third dimension, us – the users who want safe, reliable, affordable and efficient mobility solutions.

If you look at it from the automakers’ perspective today, many are going for the obvious answers: improving their existing means of transportation at low cost and adapting them for future mobility requirements such as self-driving, vehicle electrification and enhanced connectivity.

These are undoubtedly important to the future of mobility. But we also see, for example, new business models emerging such as Mobility as a Service (MaaS), which is making huge inroads thanks to asset-light new players/mobility providers such as Lyft and Uber in the US or Didi in China. With increasing urbanization and the rise of the megacities (characterised by their lack of space and increased noise and air pollution), we also need to look at these bigger, more pervasive changes.

In order to address the subject matter of innovation in mobility, we should therefore look at it from different perspectives. By looking at mobility through a city infrastructural lens, for instance, and dividing different cities into three distinct types, it becomes clearer how we will achieve access to and provision of good transportation.

The first city type worthy of a mention is Efficient Collectivism. These are the cities which are good at scaling and are used to what is known as the visible hand: they are highly regulated, scaled and standardized city systems which have been streamlined towards collective efficiency. Guangzhou is a good example. In these cities, the city itself features heavily in the provision of transportation and the setting of the rules of the mobility game.

At the other end of the spectrum you have cities characterised by what might be called Overburdened Sprawl. These are diverse, informal and unorganised cities that rely heavily on DIY and local solutions. Mobility solutions are about enabling individual development for improved quality of life. Dar-es-Salaam and Mumbai are examples of this type.

Lastly, we see cities defined as Effective Individualist, which are mainly found in the Western world. These are cities which are good at innovating and are under the grasp of an invisible hand: a liberal and highly diversified economy where individual effectiveness is maximised. The Bay Area, and even large cities such as London, Berlin and Milan are good examples of this type.

There are probably numerous other city types you could come up with, but three seems like a reasonable number to draw clear distinctions. The point is that we need a more nuanced approach to mobility for each context and city type and the solutions can be vastly different from one to the next. In a collectivism scenario, the government will take a lead role in structuring the adoption of new means of mobility, even possibly one day outright banning ownership if efficiency calls for it.

Then, closer to home in the West, we see a bit of a mix: some restrictions on individual mobility, but also an emphasis within cities on outsourcing – delegating the provision of mobility to certain players who play by certain rules. The result is a healthy interaction between mass transportation (which is often already in public hands) and more localised and individual transportation to get from A to B.

This is where shared mobility and autonomous vehicles will play a very big role. With electric vehicles (powered by renewable energies) you will have a significantly reduced local carbon footprint. This, together with autonomous vehicles, with their lower accident rates, greater efficiency and better utilisation rates, is what will fill part of this new space.

Shared electric mobility, then, will play an important role going forward. Automakers will inevitably expect a fair slice of this transforming market, with new players competing for a share of wallet whilst revenue per mile falls over time with better asset utilisation. So automakers will need to look for new revenue pools in related product and service offerings.

At the same time, public authorities will aim to make their cities and regions attractive in the context of increasing urbanisation rates and pollution. Software tools allow governments to better manage their city infrastructure by helping them to define, simulate and implement the overall social and political objectives for how the infrastructure should be utilized. This is in part the thinking behind the investment of Porsche SE – a holding company which owns the majority of the voting stock in Volkswagen Group (which in turn owns inter alia the Porsche brand) – in PTV Group. PTV Group provides software to simulate, plan and optimise networks across all modes of transport, from pedestrians and bicycles to cars, trucks and even fleets, as well as public transport. This market-leading software helps lay out the mobility framework of the future – setting, monitoring and managing the rules for mobility providers in a city.

It is an ever-changing and complex system. Cities have certain priorities in certain areas at certain times: pedestrian zones in some areas versus good traffic flow in other parts of the city, for example, or logistics services prioritised at night whilst commuter traffic takes precedence in the mornings and evenings. As individual mobility providers optimise for themselves, this may come at the expense of a city’s objectives, for instance through the negative externalities of congestion, noise or exhaust emissions. So cities will increasingly use software to define and manage dynamic policy schedules.
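What might such a dynamic policy schedule look like in code? A hypothetical sketch, in which the zones, time windows and priorities are all invented:

```python
# A toy dynamic policy schedule: which use a city prioritises, where and when.
POLICY_SCHEDULE = {
    "old_town":  [("07:00", "19:00", "pedestrians"), ("19:00", "07:00", "deliveries")],
    "ring_road": [("06:00", "10:00", "commuter_flow"), ("22:00", "06:00", "logistics")],
}

def priority_for(zone, hhmm):
    """Return the active priority for a zone at a given HH:MM time."""
    for start, end, priority in POLICY_SCHEDULE.get(zone, []):
        if start <= end:                     # window within one day
            if start <= hhmm < end:
                return priority
        elif hhmm >= start or hhmm < end:    # window wraps past midnight
            return priority
    return "default"

print(priority_for("old_town", "08:30"))    # pedestrians
print(priority_for("ring_road", "23:15"))   # logistics
```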

In short, mobility is more than just getting from A-B. It is a multi-layered and truly compelling interaction between multiple parties and different levels – it is time to treat it from a holistic perspective across different stakeholders.

Can culture be designed?

In the ‘70s, Southwest Airlines, the American low-cost airline, was founded with a bold mission: to democratize the airways at a time when only a third of Americans had ever been on a plane.

Co-founder Herb Kelleher understood the importance of unifying people and made the decision to design a purpose-driven culture where people came first. With this human-driven approach, he prioritized people over profit.

This was not just a catchphrase: Southwest Airlines’ recruiting campaign was centered on the uniqueness of individuals, their skills and their aspirations. This encouraged employees to show their personalities at work, resulting in more natural and friendly interactions with customers. His vision for a new type of company became a recipe for a thriving business, with happy and engaged employees, high employee retention and a loyal customer base who loved flying with them. It also led to 44 consecutive years of profitability.

Successful individuals build successful companies. Engaged employees are naturally involved in and enthusiastic about their work. However, only 15 percent of employees worldwide are engaged, according to Gallup. Low engagement brings no benefits and wastes potential; high engagement drives both productivity and profit. And high engagement starts with a thriving culture.

The culture of a company is its core; it may seem like something integral and unshakeable, but it can be designed. Businesses today need to be culture-driven, not only to thrive but even to survive in their markets long-term. This is not new, but the digital revolution is pushing the Future of Work forward faster than ever before. Your high achievers, your most driven and engaged people, are more sought-after than ever, and they have more choice than ever. To keep and attract them, most companies need a cultural transformation.

After years of experience as an award-winning entrepreneur and keynote speaker, I founded 30minMBA with the vision of re-imagining workplace engagement through cultural design and actionable learning — which ultimately empowers people to reach their full potential. To us, engagement is made up of four factors: people, processes, technology, and environment. Contrary to what one might think, culture is not static: it can be shaped and moulded over time. Culture can be designed, but it takes effort, knowledge and the wisdom of experience to design it well.

Needless to say, it can be easier to design culture from scratch as startups are doing. This doesn’t imply that corporations or companies that have been on the market for years can’t rethink their culture: it will just take more effort, time and commitment. We believe that one of the best places to start is cultural design and actionable learning. We empower companies through cultural transformation; preparing them for the Future of Work. In addition to cultural consulting, the center of our delivery is an innovative mobile learning experience dedicated to supporting positive behavioral change. Your people develop their business skills ‘on the go’ and apply new knowledge to their work instantly. Mastery of skills and new concepts is part of what drives engagement and can lead to new opportunities.

Once you have a well-designed culture, recruiting for cultural fit is essential, because an employee can be a follower in one company and a high potential in another, depending on how engaged they are with the company’s values and mission. So developing a process for hiring the right people, people who share your values and goals, is at the heart of finding the right fit.

Working with culture is not the easiest thing to do: it’s about effort, human behavior, and soft skills. In the last few years, an increasing number of leaders have recognized that culture drives engagement, which can be a competitive edge for any company.

Acknowledging that people and culture are what matters most is the first step to re-imagining workplace engagement and fueling a healthy and sustainable environment that empowers what the Future of Work will be centered on: people.

Blockchain’s infinite applications

As you probably know, the blockchain is what underlies Bitcoin and any other crypto-currency. To me, more so than the digital currencies themselves, the true innovation that will disrupt industries most is the actual blockchain. Its infinite applications, from content creation to climate change, can reshape the way we will work and live in the near future.

First, I’ll take a step back and define what “the blockchain” is. The original Bitcoin blockchain is an open-source ledger invented in 2008 by Satoshi Nakamoto, whose identity is still unknown, as the underlying database for the Bitcoin cryptocurrency. It comprises a list of records (read: transactions) that are linked to one another in an ordered ledger system. Its main, disruptive feature is that, just like in a real chain, it’s virtually impossible to modify any transaction (be it financial or of any other nature). It is a database that can be read and written to, but not edited. To sum up, its innovative feature is the fact that it’s non-amendable and is managed by a community of peers. Any data on a blockchain is immutable, verifiable and traceable, and these three characteristics change everything.
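The “non-amendable chain” idea fits in a few lines of code. This is a bare-bones sketch of the hash-linking at the heart of any blockchain; real systems add consensus, signatures and much more:

```python
# Each block stores the hash of its predecessor, so editing any past record
# invalidates every later link, which is what makes the ledger tamper-evident.
import hashlib, json

def make_block(record, prev_hash):
    block = {"record": record, "prev_hash": prev_hash}
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()).hexdigest()
    return block

chain = [make_block("genesis", "0" * 64)]
for record in ["Alice pays Bob 5", "Bob pays Carol 2"]:
    chain.append(make_block(record, chain[-1]["hash"]))

chain[1]["record"] = "Alice pays Bob 5000"   # try to rewrite history...
recomputed = hashlib.sha256(json.dumps(
    {"record": chain[1]["record"], "prev_hash": chain[1]["prev_hash"]},
    sort_keys=True).encode()).hexdigest()
print(recomputed == chain[1]["hash"])        # False: the tampering shows
```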

It’s a truly innovative technology that will bring positive change to industries in ways we haven’t even begun to imagine. One of the first industries the blockchain will have a positive impact on is content creation. The internet today is governed by a few big corporations: we search for information on Google, buy products from Amazon, purchase with Paypal, connect through Facebook, find entertainment on Netflix and so on. These companies are extremely powerful and benefit both from the open nature of content available across the internet and from the content their users generate for free. This leaves content creators powerless in their work and barely able to make a living from it.

Here is where the blockchain could make a difference. If we created a blockchain specifically for copywriters, for instance, and set up a “licensing” protocol, the creators would retain full ownership. A good example would be if I wrote an article: with the blockchain, I could choose who had access to it, for how long, where, and for what amount of money. This would also cut out the “middleman”, as there would be a direct link between the author and the consumer, allowing for even further control. This isn’t just publishing — the same would go for the music industry and any other piece of content. The blockchain would also curb plagiarism: if all content made from a certain point onwards were on a blockchain, one could check the transactions, and it would be harder to infringe copyright. This would affect overall quality too: the community could reward high-quality content and downgrade that which is bad.
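A hypothetical flavour of what such licensing records might look like as plain ledger entries; the fields, works and prices are invented for illustration:

```python
# Toy licence registry: who may read a work, where, until when, at what price.
from datetime import date

licences = [
    {"work": "article-42", "licensee": "reader-7",
     "region": "EU", "expires": date(2019, 1, 1), "price_eur": 2.50},
]

def may_read(work, licensee, region, on_date):
    """Check whether a valid licence covers this access."""
    return any(
        l["work"] == work and l["licensee"] == licensee
        and l["region"] == region and on_date <= l["expires"]
        for l in licences
    )

print(may_read("article-42", "reader-7", "EU", date(2018, 6, 1)))  # True
print(may_read("article-42", "reader-7", "US", date(2018, 6, 1)))  # False
```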

Another area that would greatly benefit from a blockchain-based system is climate change. Currently, if a citizen has solar panels on the rooftop of their house and the panels make more energy than they need, they are forced to sell it to a government-controlled company. With the blockchain, they could become part of a new type of community, whose members share the energy with one another and their neighbours and gain credits for that. All of this without going through a third party, whether it be a company or the central government.

This technology can also be used for social good and financial innovation. Today, many citizens of developing countries find that they are not in control of either their identity or their possessions, because current ledger systems are either poorly managed or nonexistent. Over a billion people do not have an official, government-issued ID, and many find their property registered incorrectly, meaning that they often can’t prove ownership of their possessions. This makes it harder for them to climb out of poverty.

A blockchain-based system is more portable, transparent, private and secure, and would give such citizens control over their identities and proof of ownership of their possessions, including their data. The knock-on effects are clear: it would become easier for them to open a bank account, get a loan or start a business. Furthermore, since everything on the blockchain has a clear source with no middleman, a consumer of a given product could in theory demand fairer prices from producers directly.

These are just a few examples of the blockchain’s applications; many more possibilities are still being, and will be, explored. In fact, I have just co-authored a book with Philippa Ryan on how the blockchain can be used for social good. The book, Blockchain: Transforming Your Business and Our World, will be available in English and Chinese later this year. The blockchain will disrupt every industry we know, sometimes beyond recognition. It may take a while, perhaps ten years or so, to see its real impact on our daily lives, but it will be worth the wait. We are only at the beginning of this journey: there is no set path, and it is in our hands to make the most of it. Now is the time for experimentation. These are fantastic times to be alive and part of this revolution.

Artificial Intelligence, collective cerebella, attention and our future

In H.G. Wells’s masterpiece The Time Machine, by the year A.D. 802,701 humanity has bifurcated into two races: the Eloi, who live in a daylight paradise, and the Morlocks, subterranean creatures who maintain the ancient machinery that supports the Eloi’s life on the surface. The Morlocks, apparently without knowledge of how to construct the machines they tend, provide a carefree existence for the Eloi and, in exchange, eat them.

Wells’s time-traveling protagonist reasons that the upper classes have become livestock for the working-class Morlocks. Living in apparent paradise, the Eloi (in Hebrew, “lesser gods”) have lost all curiosity and initiative. In one pivotal scene, none of the Eloi notice the plight of one of their own as she drowns, much less attempt to rescue her.

Without attention and intent, we risk living as Eloi, tended by technology we fail to understand. Today we already exist in a complex web of relationships between humans and technology that is beyond the comprehension of any individual. Increasingly, AI monitors and arbitrates on our behalf, from health monitoring and resource allocation to customer service and security. What control will we cede? How much have we ever had?

INTO THE UNKNOWN

Thus far, we have been the agents creating paths, defining processes and algorithms instantiated by technologies. AI introduces new exploratory agents. In our image, these systems will exhibit curiosity and act to change our world for better and worse. As systems become more capable and complex, some will evolve beyond our control. Even with control mechanisms – which we’ll be well-advised to create – AI will discover insights and capabilities not conceived in advance. This is the nature of exploration.

Machine Learning systems have already begun to surprise their coders. Google’s language translation engine, Google Neural Machine Translation (GNMT), provides the best-publicized example. Programmed to learn translation between human languages, the system began generating a new internal representation, dubbed “interlingua” by Google, that helped it translate between pairs of languages it wasn’t explicitly trained to handle. While commentators argue over the extent to which GNMT accomplished something it wasn’t programmed to do, it is a harbinger.

Humanity has many times invented machines we initially failed to fathom. Something as simple as the barometer at first perplexed scientists. Through experimentation, Evangelista Torricelli, Blaise Pascal and others eventually explained the mechanism’s operation, upending two thousand years of Aristotelian theory. Machines catalyze understanding.

Exploration requires navigating opacity, unfamiliar phenomena and notions for which symbols do not yet exist. The recent acceleration of AI is a revival of early progress that was throttled by some of AI’s brightest minds. In 1957, pioneering AI researcher Frank Rosenblatt introduced the perceptron, the first operational neural network. Its apparent ability to learn generated significant interest.
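For readers who haven’t met it, the perceptron’s learning rule is strikingly simple. Below is a minimal sketch in modern, vectorized Python, nothing like Rosenblatt’s original hardware implementation: whenever the model misclassifies a training point, it nudges the weights toward that point.

```python
import numpy as np

def train_perceptron(X, y, epochs=20, lr=0.1):
    """Rosenblatt-style learning rule.

    X: (n_samples, n_features) inputs; y: labels in {-1, +1}.
    """
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (xi @ w + b) <= 0:  # misclassified (or on the boundary)
                w += lr * yi * xi       # nudge weights toward the point
                b += lr * yi
    return w, b

# A linearly separable toy problem: logical OR.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([-1, 1, 1, 1])
w, b = train_perceptron(X, y)
predictions = np.sign(X @ w + b)  # matches y after training
```

On any linearly separable problem, this loop is guaranteed to converge, which is precisely the property that generated so much early excitement; the restriction to linear separability is what Minsky and Papert would later seize upon.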

Far better-known and better-connected researchers had other ideas. In 1969, AI luminaries Marvin Minsky and Seymour Papert published a book, Perceptrons, that excoriated the notion of neural networks. Funding, most of it from US Government sources heavily influenced by experts like Minsky and Papert, evaporated in favor of other AI paths. Neural networks appeared to be a dead end until a new generation of researchers resurrected Rosenblatt’s work in the 1980s, a catalyst of the current AI ferment. Sadly, Rosenblatt died in 1971, never seeing his ideas vindicated. A true Kuhnian story of personalities and paradigms dominating scientific progress.

Genius is no guarantee of truth. Fortunately, today’s AI research does not depend on one dominant funder. Wider, more diverse capital sources support AI research and application. The more experiments and applications worldwide, the more likely we are to discover fruitful hypotheses, unexpected insights and confounding results.

TOWARD COLLECTIVE CEREBELLA

Karl Popper asserted that “All life is problem solving.” Evolution encodes solutions that succeeded in the environments in which they developed, shifting as conditions change. Many of those solutions remain embedded in our living systems. Our cerebellum and brainstem, the ancient “reptilian” brain within, keep balance, breathing and heartbeat running without our conscious intervention. We stand upon automated scaffolds, which both enable and constrain.

Technology continues this dynamic, transferring activities from conscious to automatic operation. Shifting attention, brain plasticity, cybernetics and, over longer periods, evolution will transition our cognitive systems to new roles. Already, the portions of the brain adapted for map reading and navigation atrophy as many of us obey our Google Maps. From grey matter to the cloud.

Meanwhile, the scale advantage of data for machine learning suggests this dynamic might generate, in a sense, collective cerebella. Data emerge from individuals, though they are most useful in aggregate, across groups. As systems connect and integrate, they’ll play cerebellum-like roles across those groups, transforming social systems. They’ll become agents of cultural change and mediation, technological mechanisms for collective action and control.

In general, machine learning operates more effectively with larger data sets. This suggests a standardizing effect of connectivity and machine learning across ever-larger populations. On platforms such as Facebook, LinkedIn and WeChat, billions of individuals interact within standardized environments, for the first time in human history.

AI will amplify this standardizing effect across economies and cultures. China’s leadership currently pursues social engineering at unprecedented scale through consumer applications like WeChat and AliPay, widespread deployment of video monitoring and a nationwide Social Credit System to rank each citizen on their – and their network’s – actions. Set to roll out by 2020, the system will allocate benefits and sanctions based on the score.

The liberal democratic mind recoils at the prospect. The European Union recently enacted the General Data Protection Regulation (GDPR) to return some modicum of control to individuals. The mission is noble, but if machine learning advances through data access, which region’s approach will prove more economically effective? And what might the implications be for civil society and the lives we aspire to live?

Wider data access, analytics and agency can lead to far better service and security, as well as exploitation and oppression. Realizing the potential of AI depends on the value functions pursued, and on which organizations command the resources to do so. Questions of liberty and equity loom large.

WHAT SHOULD HUMANS DO?

Recently in Harvard Business Review, I posed the question: “When technology can increasingly do anything, what should human beings do, and why?” This query will define much of our journey this century. The answers relate intimately to what we ask AI systems to do, and eventually to what AI systems decide to do.

Over decades, AI and robotics systems will become far more capable than we humans at nearly everything. Even humans-remain-special safety blankets like creativity and empathy will succumb to technology, at least from a pragmatic perspective. Perhaps AI systems will not ‘feel’ empathy as we do, though this distinction hardly matters if they are capable of using empathy to accomplish objectives. Ethical and spiritual questions abound.

The market mechanism, driven by efficiency, ensures that we will stop doing many things ourselves. Actuaries hold high-prestige, high-paying jobs, yet in the near future AI systems will outperform any one human at traditional actuarial tasks. The mission of actuarial science will remain; how it is accomplished will transform.

As in past transitions, humans will discover new opportunities, solving old problems in new ways and solving new problems. But this time the change will happen faster. Electrification of manufacturing in the late 19th century took 20 years to diffuse to half of all relevant facilities in the US. AI capabilities diffuse more rapidly: from their consumer launch in November 2014, Alexa and other voice-based systems will likely surpass 50% of US households soon after 2020. And this is just one consumer-facing component of the vast industrial and cultural changes underway.

Human beings now exist in a constant, accelerating, shifting search for relevance. Fortunately, AI won’t simply lead to an ‘us-versus-them’ robot apocalypse. These technologies will be integrated with our cognitive, living, social systems. As ever, our greatest challenges will remain us versus us.

ATTENTION AS OUR ESSENTIAL QUESTION

Throughout life, each of us retains a singular choice: attention. While Descartes likely erred in postulating the mind-body dichotomy, his assertion, cogito ergo sum, becomes ever more relevant. Thought provides not only evidence of individual existence but also the mechanism through which we construct our world.

Conscious attention is each individual’s most limited resource. Automation’s essential contribution is to release our attention for activities of choice rather than necessity. The Agricultural Revolution released much of humanity from subsistence-level food production.

Liberated from survival concerns, our attention might be led anywhere by desire. As necessities become more widely available, a larger share of humanity has the option to seek more intellectually, emotionally and experientially engaging activities, or to wallow in stimulatory surplus.

What to attend to becomes one of our most challenging, essential and ethical questions. One person’s trivial distraction could be another’s transcendent ritual. Our post-modern world deconstructs traditional definitions of value, opening horizons for exploration while fomenting confusion, anxiety and discord. Unlimited options can paralyze.

William James asserted that “attention equals belief.” It equals belief through an iterative process: what we attend to stirs and sways our beliefs, which in turn bias our attention, moving each ever closer to equivalence. Social media echo chambers, and the challenges they pose to our social and political institutions, offer a poignant example.

Hölderlin’s poetic line framing this essay suggests our challenge. In The Question Concerning Technology, Martin Heidegger leveraged Hölderlin’s insight to explore technology’s roles in the human experience: “Technology harbors in itself what we least suspect, the possible arising of the saving power.” Though he cautioned us to keep “always before our eyes the extreme danger.”

Our creations will advance beyond our control in ways we have yet to imagine. Fortunately, our choice is not between Eloi and Morlocks, but between passivity and engagement. Prisons of our own inadvertent making, or platforms for ever-greater experience and actualization?