Which direction will AI take?

Like all technology, AI is neither good nor bad. But like all technology, it is not neutral either.

by Vittorio Di Tomaso

AI · 15 April 2019

In 2018, AI was deployed at astonishing speed and scale across many use cases, from facial recognition to language-based human/computer interfaces. The same year brought the first large-scale failures: the Facebook-Cambridge Analytica scandal, and the controversy around tech giants selling image-analysis technology to enhance drone strikes and facial recognition technology for surveillance, despite studies showing high error rates for dark-skinned minorities in these systems. 2018 also saw a Tesla on Autopilot crash, killing the driver, and a self-driving Uber crash, killing a pedestrian.

These high-profile failures are a reminder that when AI is shoddily built and wielded in haste, the consequences affect many human lives. They were the first wake-up call for technologists, policymakers, and the public to take active responsibility for creating, applying, and regulating artificial intelligence ethically.

The first positive consequence has been that AI bias, once a little-known concept, is now a well-recognized term and top-of-mind for the research community, which has begun developing new algorithms for detecting and mitigating it. This is no easy feat, but it is a problem that must be addressed if we want to trust autonomous systems with life-and-death decisions in fields like radiology, credit scoring, and crime prevention.
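To make "detecting bias" concrete, here is a minimal sketch of one of the simplest checks researchers use, the demographic parity difference: the gap in positive-decision rates between two groups. Everything in it (the toy data, the 0.1 tolerance) is illustrative and not drawn from any specific fairness toolkit.

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Gap in positive-prediction rates between the two groups
    encoded in `group` (0 or 1). A value near 0 means the model
    selects both groups at similar rates, on this one metric."""
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return abs(rate_a - rate_b)

# Toy example: binary loan decisions for ten applicants, five per group.
y_pred = np.array([1, 1, 0, 1, 0, 0, 0, 1, 0, 0])
group  = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])

gap = demographic_parity_difference(y_pred, group)
print(f"Demographic parity difference: {gap:.2f}")
# A gap above some tolerance (0.1 here, an arbitrary illustrative
# threshold) would flag the model for closer auditing.
```

Real bias audits go further, comparing error rates per group rather than just selection rates, but even this simple check shows why detection is only half the problem: choosing which metric to equalize is itself a value judgment.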

The second positive consequence has been deeper engagement with AI by social activists, lawyers, and academics. AI is a highly technical topic, but technologists cannot be left alone to plot its future. We need a collective, multi-faceted effort to educate the public and policymakers and to raise the quality of the regulatory debate.

This attention to AI has led to the third positive consequence: companies have begun hiring ethical AI officers and establishing codes and processes for evaluating AI projects, while countries like Canada and France and supranational bodies like the European Commission have set agendas for the global AI ethics discussion.

More attention to AI ethics and to the consequences of deploying autonomous systems is very good news, both for the field of AI and for society as a whole: the sooner we realize we are facing tough problems, the sooner we can start solving them. And we are not thinking of the existential threat that, according to prominent figures like Elon Musk and Bill Gates, a general artificial intelligence (or super-intelligence) may pose in the future, but of the very concrete, potentially life-threatening risks that today's narrow AI systems already pose.