Exploring the Unexplored

Digital Ethics To Shape Post AI Societies

By Mariarosaria Taddeo January 10th, 2019


— Digital technologies have created a wave of transformation that has been over half a century in the making. This incredible shift has brought about new ethical problems and exacerbated existing ones. If we are to continue to capitalize on the wonders of technology and not be consumed by it, we must have a serious conversation about the crucial role of digital ethics in our future.

Digital innovation brings excellent opportunities to improve individual and social wellbeing. Unfortunately, these opportunities come coupled with significant ethical challenges. The extensive use of ever-growing volumes of data (big data), the growing reliance on algorithms to analyze that data, shape choices, and make decisions (including machine learning, AI, and robotics), as well as the gradual reduction of human involvement in, or even oversight of, many automatic processes, pose pressing issues of fairness, responsibility, and respect for human rights, among others.


These ethical challenges can be addressed successfully in ways that foster the development and application of digital technologies while ensuring respect for human rights and values. This will help us shape open, pluralistic, and tolerant information societies – a great opportunity of which we can and must take advantage. Digital ethics plays a key role to this end. It is a new and ever-expanding field of research whose goal is to identify and mitigate the ethical risks that digital innovation may bring about, while also ensuring that we harness the potential for good that digital technologies have. Ethical analyses are especially crucial when considering artificial intelligence (AI).


AI is a transformative technology. Like other transformative technologies, e.g. electric power or mechanical engines, AI is permeating the fabric of our societies, reshaping social dynamics, disrupting old practices, and prompting profound transformations. This makes AI a new foundational technology in need of its own specific ethical framework.


AI-led transformations pose fundamental ethical questions concerning our environment, societies, and human flourishing. From industrial plants and roads to smart cities, AI prompts a re-design of our environment to accommodate the routines that make AI work. It is crucial to understand what values should shape this design, the benefits that will follow from it, and the risks involved in transforming the world into a progressively AI-friendly environment. AI will free us from tedious tasks and enable us to do more of the things we want to do, such as dining with friends. It will thus provide new opportunities for people to flourish in terms of their own characteristics, interests, potential skills, and life-projects. However, AI's predictive power and relentless nudging may influence our choices and decisions, thus undermining self-determination and human dignity. Identifying the right values to shape the design and governance of AI in order to avoid this risk is paramount.


This becomes evident when we consider AI's social impact. A recent report estimated that AI could eliminate 400 to 800 million jobs. The figures may be speculative, but even if AI does not cause catastrophic levels of unemployment, it will transform workforces into hybrid forces made up of human and artificial agents. Estimates show that in two years' time, over 1.7 million new industrial robots will be installed in factories worldwide. Hospitals in the EU, China, and Israel already deliver diagnoses to patients leveraging hybrid teams of human experts and AI systems. These figures require a reconsideration of which values should underpin the sharing of the benefits and costs of these changes.


In order to address these transformations successfully, ethical analyses must provide an overall framework in which the ethical opportunities and challenges posed by AI are addressed coherently. The underpinning question here is ‘What kind of post-AI society do we wish to develop?’ To answer this, the ethics of AI must be developed as part of our overall digital ethics, defined as “the branch of ethics that studies and evaluates moral problems related to data (including generation, recording, curation, processing, dissemination, sharing, and use), algorithms (including AI, artificial agents, machine learning, and robots), and corresponding practices (including responsible innovation, programming, hacking, and professional codes), in order to formulate and support morally good solutions (e.g. right conducts or right values)”, as Luciano Floridi and I defined it in our paper What is Data Ethics?


While the ethics of data, of algorithms, and of practices are distinct lines of research, they are obviously intertwined. Together they define a conceptual space within which we must place the ethics of AI. Within this space, ethical analyses need to provide principles and recommendations that support the successful design and use of AI in society and ensure that it upholds human and societal values.


This is a complex task, but one that we must meet. Failure would impose huge costs on current and future generations. We learned this historical lesson the hard way, when we did not try to control the impact of the industrial revolution on the rights and lives of the labour force, and when we failed to address the environmental consequences of massive industrialization and mechanization quickly enough. It then took decades to establish suitable norms; indeed, on the environmental side, we are still struggling to enforce regulations that ensure sustainability. It is worth trying harder with AI-led innovation, to avoid having to learn the same lesson again. Successful attempts, such as controlling the development and use of civil nuclear technology through regulation, should encourage us in this direction.


This is why it is important to support initiatives like the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems, the Partnership on Artificial Intelligence to Benefit People and Society, and AI4People. Ethicists must provide these and similar initiatives with shared values, methodologies, and overarching goals, preventing the risk of a siloed approach and redundant work. This is one of the main goals of research groups like the Digital Ethics Lab at the Oxford Internet Institute, University of Oxford.


In its early days, the ethics of AI offered theoretical analyses focused mostly on autonomy, responsibility, and freedom. The pervasive dissemination of AI now requires that these theoretical analyses be translated into viable guidelines, leveraging the transformative power of AI as an opportunity to promote human dignity and to contribute to the development of open, pluralistic, tolerant, and fair societies. The key word here is "translational". A translational ethics of AI moves from the whiteboard of academia to the desk of policy-making, using theoretical analyses to shape regulatory and governance approaches to AI. Two steps are essential to move in this direction: ethics must assess the trade-offs between competing values, and it must provide a methodology to foresee and prevent unwanted uses of AI.


AI-led transformations may expose or exacerbate the incompatibility between fundamental values and rights, which may lead to difficult policy decisions. Think of the use of AI for the identification of cyber threats and the risks of mass surveillance that this may pose. At the same time, ethical analyses can identify socially preferable solutions by assessing trade-offs and shaping policy decisions. As we argued in Science earlier this year, the more the pace of AI-based innovation accelerates and the technology matures, the more important it becomes to anticipate its ethical impact. A translational ethics of AI needs to develop foresight methodologies to envisage forthcoming ethical risks, challenges, and opportunities; to shape the development of good AI; and to prevent unwanted consequences.


Ethicists and scholars alone will not be able to address the challenges posed by AI. AI-led transformations require a joint effort by designers and developers, regulators and policy-makers, academia and civil society to identify and pursue the best strategies for unlocking the potential of AI. Together, we can prepare our societies to deal with these transformations and seize the historical opportunity of using AI as a force for good.
