Crowdsourcing morality

Now that AI is making its way into our lives in the form of self-driving cars and smart personal assistants, it's time to talk about ethics.

by Epi Ludvik

AI 31 May 2018

In recent years, AI development has accelerated rapidly, and it is finally making its way into our lives in the form of self-driving cars and smart personal assistants. As AI gets more reliable, people are slowly overcoming their suspicions towards it and embracing it as part of their day-to-day lives. With adoption rates soaring, now is the right time to shift the focus of the debate to Moral Machine Learning: which ethics do we want machines to have? And how do we bestow these values onto them?

As self-driving cars make headlines for both their life-changing potential and their ethical implications, we have to ask ourselves: which moral values does a car need when it finds itself in a dangerous situation? On March 18, in Arizona, an autonomous car struck and killed a pedestrian who was crossing the street. This casualty highlights how urgent it is to dig into this topic: does AI have a sense of what is good and what is bad?

To answer such a complex question, we have to get back to the basics of human morality, reflect on our ethical codes and the common agreements that are currently in place. We can’t instil a sense of morality in AI if there is no consensus about what “ethical” is for us in the first place.

AI, born in the late ‘50s, is a very young branch of Informatics, which is why it behaves just like a child, observing and learning from anything it comes across without any sort of filter. So far, we’ve simply kept “feeding” it uncontrolled stimuli; but children won’t develop their own ethics if their parents fail to provide ethical guidelines. Our duty as the parents in this relationship is to steer the machines’ learning paths, eliminating any chance that the data we feed this technology creates unethical entities. Unfortunately, we have no real control over the future output, but it’s essential to start applying an ethical framework to AI even at this current (relatively early) stage.

Even though human ethics are fluid and differ over time and across cultures, we do have certain common, basic moral codes that regulate societies. As a preliminary step, we should apply this framework to machines. This may sound obvious, but the truth is it hasn’t been done yet. There are two main reasons for this: first, the disconnect between the way we make technological decisions and design algorithms and the way we experience social life; second, the fact that regulatory institutions and tech giants have never really sparked a debate. The Facebook/Cambridge Analytica case has done nothing but make this disconnect more explicit than ever.

The risk of casualties caused by automation will most likely never disappear, but avoiding a one-size-fits-all ethical mindset and gathering a human perspective, tapping into the different views people hold on specific moral questions, is a good start in reducing the likelihood of incidents. Thanks to crowdsourcing, which means involving internet users and tapping into their knowledge to develop a project, we now have the opportunity to let everyone contribute to this growing ethical issue, hopefully preventing accidents through the creation of a more secure system. Something similar has been theorized by Microsoft’s Dr. Hsiao-Wuen Hon and, in 2016, put into practice by MIT through its Moral Machine project, a platform that collects human perspectives on moral decisions made by intelligent machines.
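To make the idea more concrete, here is a minimal, purely illustrative sketch (in Python) of what aggregating crowdsourced judgments on moral dilemma scenarios might look like. The scenario names, options, and data structures are assumptions made for illustration; they are not taken from MIT’s Moral Machine or any real dataset.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass(frozen=True)
class Judgment:
    scenario_id: str   # hypothetical scenario label, e.g. "swerve_vs_stay"
    choice: str        # the option the respondent preferred
    respondent_id: str

def aggregate(judgments: list[Judgment]) -> dict[str, dict[str, float]]:
    """Return, for each scenario, the share of respondents preferring each option."""
    per_scenario: dict[str, Counter] = {}
    for j in judgments:
        per_scenario.setdefault(j.scenario_id, Counter())[j.choice] += 1
    return {
        sid: {choice: count / sum(votes.values()) for choice, count in votes.items()}
        for sid, votes in per_scenario.items()
    }

if __name__ == "__main__":
    # Illustrative votes only; real platforms would collect far richer data.
    sample = [
        Judgment("swerve_vs_stay", "swerve", "r1"),
        Judgment("swerve_vs_stay", "stay", "r2"),
        Judgment("swerve_vs_stay", "swerve", "r3"),
    ]
    print(aggregate(sample))  # prints the vote share per option for each scenario
```

Even this toy aggregation shows the core mechanic: many individual judgments are reduced to a collective preference that a designer, or a machine, could then consult.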

These are examples of how crowdsourcing can help create an inclusive society for machines and humans: the collective moral view of a crowd on a range of issues produces an ethical system that is inherently better than one built by an individual. Triple-entry accounting, an innovative system used today in blockchain to ensure the legitimacy of transactions, could find a new application in the AI environment and may be the best way to ensure a higher level of transparency when the decision-making process is opened up to the crowd.
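As a loose illustration of the transparency goal the article associates with triple-entry accounting, the sketch below shows a tamper-evident, append-only log of decisions, where each record includes a hash of the previous one. This is not an implementation of triple-entry accounting itself, and the entry fields are assumptions made for the example.

```python
import hashlib
import json
import time

class DecisionLedger:
    """Hypothetical append-only log of AI moral decisions; any edit breaks the hash chain."""

    def __init__(self) -> None:
        self.entries: list[dict] = []

    def record(self, scenario_id: str, decision: str, rationale: str) -> dict:
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        body = {
            "scenario_id": scenario_id,
            "decision": decision,
            "rationale": rationale,
            "timestamp": time.time(),
            "prev_hash": prev_hash,
        }
        body["hash"] = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        self.entries.append(body)
        return body

    def verify(self) -> bool:
        """Check that no recorded decision has been altered or removed."""
        prev_hash = "0" * 64
        for entry in self.entries:
            body = {k: v for k, v in entry.items() if k != "hash"}
            if body["prev_hash"] != prev_hash:
                return False
            if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != entry["hash"]:
                return False
            prev_hash = entry["hash"]
        return True
```

The design choice here is simply that anyone auditing the log can recompute the chain and detect tampering, which is the kind of openness the crowd would need in order to trust decisions made on its behalf.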

Eventually, we’ll also have to consider who should be held accountable for the final decisions made by AI with preprogrammed morals. Many world leaders and industry analysts are calling for a formal decision-making process here; Germany, for example, has already started to develop a regulatory masterplan for self-driving cars. However, governments are rarely equipped with the full technological know-how needed for this, and the best solution may be to create, together with entrepreneurs, an international, neutral “governing body” based on open leadership.

As we age year on year, AI does too — only a lot faster than us. This means that we have a maximum of 10 years (maybe less) to set up an AI moral system and a legislative framework. It’s a very small amount of time — and that’s why we should stop focusing on the potentially negative aspects of tech developments and work on the positive ones.

As the acclaimed documentary Who Killed the Electric Car? showed, innovation can indeed be slowed down or even stopped entirely by confining the debate to a single, narrow mindset. This may ultimately prevent us from experiencing the incredible opportunities offered by AI, just as happened with electric cars: a great invention whose failure was due to a lack of vision and of a shared decision-making process.

Learning from this big mistake and tapping into people’s knowledge through crowdsourcing is the right path to follow if we want AI to unfold its full potential and become an ethical technology in harmony with our own moral codes and principles.