How AI presents a unique opportunity for humanity

Artificial Intelligence has now entered the public debate. Everyone, everywhere, is more aware of AI and the changes it could bring than ever before.

by Limor Lahiani

AI · 18 December 2017

As with any advanced technology, there is a risk that Artificial Intelligence will be abused, and many worries about what it will mean for humanity are well grounded. However, much of what gets covered in the press is not only negative but speculative: apocalyptic visions of an intelligent AI taking over. I choose to focus on a more positive aspect of AI: its effect on humanity, and the positive process I believe we humans are going through as we develop it.

Last year, Microsoft introduced Tay, a Twitter chatbot that went from an AI ‘modelled to speak like a teen girl’ to an anti-feminist racist within 24 hours. The backlash was quick, and Tay was cited as an example of why we should be worried about AI, because AI can behave in ways humans may not predict. Yet Tay only evolved the way it did once it interacted with offensive Twitter users: it was trained to learn from people’s interactions with it, without the ability to distinguish right from wrong.

When I heard about Tay coming out as a racist, I didn’t see a failure; I saw an opportunity: an opportunity for humanity to reassess the information we put online and how we interact with each other on a human level. By identifying what made Tay so poisonous so quickly, we can track the flaws in our own behavior, iterate, and improve. The dangers or bad behavior we associate with AI are really just a mirror of the flaws we have as humans, the ones we should try to fix. AI helps us identify these flaws. Facial-detection technologies often fail to detect the faces of people of color because of biases in the algorithms’ training data. AI has only increased our awareness of these biases, prompting efforts to improve by using less biased and more diverse training data sets.
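To make that concrete, here is a minimal sketch of how such a bias might be surfaced: measure a model’s accuracy separately for each demographic group. The detector outputs, labels, and group names below are hypothetical placeholders, not a real facial-detection system.

```python
# A minimal, hypothetical sketch: auditing a classifier for per-group bias.
from collections import defaultdict

def detection_rate_by_group(predictions, labels, groups):
    """Fraction of correct detections, broken out by group."""
    hits = defaultdict(int)
    totals = defaultdict(int)
    for pred, label, group in zip(predictions, labels, groups):
        totals[group] += 1
        if pred == label:
            hits[group] += 1
    return {g: hits[g] / totals[g] for g in totals}

# Hypothetical evaluation results; every example is a face (label 1).
preds  = [1, 1, 0, 1, 0, 1, 1, 0]
labels = [1, 1, 1, 1, 1, 1, 1, 1]
groups = ["A", "A", "B", "A", "B", "A", "B", "B"]

print(detection_rate_by_group(preds, labels, groups))
# {'A': 1.0, 'B': 0.25} -> a gap like this is the signal to audit
# and diversify the training set.
```

A gap between groups does not tell us why the model fails, but it turns a vague suspicion of bias into a number we can track as we retrain on more diverse data.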

One of the main challenges facing AI development is that in many ways it operates as a ‘black box’. “There is no obvious way to design such a system so that it could always explain why it did what it did,” says MIT professor Tommi Jaakkola, and “it is a problem that is already relevant, and it’s going to be much more relevant in the future.” The challenge of ‘reverse engineering’ is not that we don’t understand how these neural networks work; rather, in many cases it is hard to understand why the algorithm produced a specific result. It isn’t like classic programming, where we can trace the sequence of instructions that led to a specific output, given a certain input.
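The contrast is easy to see in code. Below is a minimal sketch, not a real system: a hand-written rule whose every decision can be traced line by line, next to a small neural network (scikit-learn’s MLPClassifier) whose decision is encoded in learned weights. The loan-approval rule and the toy data are hypothetical.

```python
# A hypothetical sketch contrasting traceable rules with learned weights.
from sklearn.neural_network import MLPClassifier

def approve_loan(income, debt):
    # Classic programming: the output can be traced to an explicit rule.
    if debt > income * 0.5:
        return False  # rejected: debt exceeds half of income
    return True       # approved: debt is within the threshold

# The same kind of decision, learned from examples instead of rules.
X = [[40, 30], [80, 10], [30, 25], [90, 60], [50, 5], [20, 18]]
y = [0, 1, 0, 0, 1, 0]  # 1 = approved, 0 = rejected (hypothetical labels)

model = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
model.fit(X, y)

print(model.predict([[45, 20]]))  # a prediction, but no rule to point at
print(model.coefs_[0].shape)      # the weights exist, yet don't read as a reason
```

Both programs map inputs to decisions, but only the first can answer ‘why?’ by pointing at a line of code; the second can only point at an array of numbers.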

This is why it is difficult to satisfy calls for regulation. It is much harder to reverse engineer exactly what is happening inside a Deep Learning algorithm. It is hard to look at a neural network that was trained to solve a problem and come up with an intuition as to why it works. The bottom line is that these algorithms are programmed by examples (training), and they can be designed to keep evolving as they are introduced to new data. Once an AI is out there in the ‘real world’, analyzing new data, we cannot predict its outcomes with 100% certainty. Regulating or controlling the algorithm is therefore far from trivial.
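As a rough illustration of ‘keeps evolving as it is introduced to new data’, here is a sketch of incremental (online) learning using scikit-learn’s partial_fit interface. The data stream is synthetic and hypothetical; the point is only that the model’s weights, and hence its behavior, continue to change after deployment.

```python
# A hypothetical sketch of a model that keeps learning after deployment.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
model = SGDClassifier(random_state=0)
classes = np.array([0, 1])

# Initial training: behavior is shaped by whatever examples we show it.
X0 = rng.normal(size=(100, 3))
y0 = (X0[:, 0] > 0).astype(int)
model.partial_fit(X0, y0, classes=classes)

# After deployment, each new batch of data nudges the weights again, so
# the model's future behavior depends on data nobody has seen yet.
for _ in range(5):
    X_new = rng.normal(size=(10, 3))
    y_new = (X_new[:, 0] > 0).astype(int)  # in reality, labels come from the world
    model.partial_fit(X_new, y_new)

print(model.coef_)  # the 'program' itself has changed since it shipped
```

A regulator could certify the model as it was trained yesterday, yet the thing running today is, in a literal sense, a different program.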

It will be hard, perhaps impossible, to predict how an AI more intelligent than humans would behave. The truth is that if we ever reach that point, we can only think or simulate within the boundaries of our own intelligence. In a way, an artificial general intelligence (AGI) designed to keep learning is a lot like our own children. We try our best to teach them to distinguish right from wrong and give them the tools to learn and cope with what life brings. But at the end of the day, they will become independent and take their own course.

Even if we ensure that AIs such as Tay are trained with the right data, data that reflects the desired behavior, we still cannot anticipate what will happen once they are exposed to new experiences. Hence the challenge on the one hand, and the opportunity on the other. We need to find a way to teach AI morals: what is right and what is wrong? This will be a huge hurdle to surmount.

The challenge is that since the dawn of the internet, our main source of training data has been biased; it is a reflection of our own biases. Even if the AGI we create is more intelligent than we are, it will still have been trained on biased data. And since we are only aware of our conscious biases, how will we uncover our unconscious ones? Can AI reflect these hidden forces? The opportunity this creates, however, is the chance for us to uncover these biases and build a more diverse, more inclusive representation of the world.

Ultimately, we are only beginning to understand the potential of AGI. As with all technological research, we should approach the topic with a certain level of caution, but we should not jeopardize progress with dystopian fear. The current debate is dominated by that fear, and it overshadows the opportunity to look at the potential for a positive transformation as we develop AI and as AI becomes more present in the public conversation. As we develop AI, we are exploring our own biases, morals, values, and ethics, trying to sculpt a representation of the world that reflects them, on which we will then train AGI.

Fortunately, if you are reading this, you are a testament to how this conversation has begun to open up. We are already well on our way to questioning our own ethics and addressing conscious and unconscious biases, thereby laying a positive foundation for the future of AI and of this planet. I, for one, continue to be excited about the future that awaits us and remain hopeful about what lies ahead.