
Digital ethics to shape post-AI societies

Digital technologies have created a wave of transformation that has been over half a century in the making. This incredible shift has brought about new ethical problems and exacerbated existing ones. If we are to continue to capitalize on the wonders of technology and not be consumed by it, we must have a serious conversation about the crucial role of digital ethics in our future.

Digital innovation brings excellent opportunities to improve individual and social wellbeing. Unfortunately, these opportunities come coupled with significant ethical challenges. The extensive use of ever more data (big data), the growing reliance on algorithms to analyze that data, shape choices and make decisions (including machine learning, AI and robotics), and the gradual reduction of human involvement in, or even oversight of, many automatic processes pose pressing issues of fairness, responsibility and respect for human rights, among others.

These ethical challenges can be addressed successfully in ways that foster the development and application of digital technologies while ensuring respect for human rights and values. This will help us to shape open, pluralistic and tolerant information societies – a great opportunity of which we can and must take advantage. Digital ethics plays a key role to this end. It is a new and ever-expanding field of research whose goal is to identify and mitigate the ethical risks that digital innovation may bring about, but also to ensure that we harness the potential for good that digital technologies have. Ethical analyses are crucial when considering artificial intelligence (AI).

AI is a transformative technology. Like other transformative technologies, e.g. electric power or mechanical engines, AI is permeating the fabric of our societies, reshaping social dynamics, disrupting old practices, and prompting profound transformations. This makes AI a new foundational technology in need of its own specific ethical framework.

AI-led transformations pose fundamental ethical questions concerning our environment, societies, and human flourishing. From industrial plants and roads to smart cities, AI prompts a re-design of our environment to accommodate the routines that make AI work. It is crucial to understand what values should shape this design, the benefits that will follow from it, and the risks involved in transforming the world into a progressively AI-friendly environment. AI will free us from tedious tasks and enable us to do more of the things we want to do, such as dining with friends. It will thus provide new opportunities for people to flourish in terms of their own characteristics, interests, potential skills, and life projects. However, AI’s predictive power and relentless nudging may influence our choices and decisions, undermining self-determination and human dignity. Identifying the right values to shape the design and governance of AI in order to avoid this risk is paramount.

This becomes evident when we consider AI’s social impact. A recent report estimated that AI could eliminate 400 to 800 million jobs. The figures seem speculative but, even if it does not cause catastrophic levels of unemployment, AI will transform workforces into hybrid forces made up of human and artificial agents. Estimates show that within two years, over 1.7 million new industrial robots will be installed in factories worldwide. Hospitals in the EU, China, and Israel already deliver diagnoses to patients by leveraging hybrid teams of human experts and AI systems. These figures require a reconsideration of which values should underpin the sharing of the benefits and costs of these changes.

In order to address these transformations successfully, ethical analyses must provide an overall framework in which the ethical opportunities and challenges posed by AI are addressed coherently. The underpinning question here is ‘What kind of post-AI society do we wish to develop?’ To answer this, the ethics of AI must be developed as part of our overall digital ethics, defined as “the branch of ethics that studies and evaluates moral problems related to data (including generation, recording, curation, processing, dissemination, sharing, and use), algorithms (including AI, artificial agents, machine learning, and robots), and corresponding practices (including responsible innovation, programming, hacking, and professional codes), in order to formulate and support morally good solutions (e.g. right conducts or right values)”, as Luciano Floridi and I defined it in our paper What is Data Ethics?

While the ethics of data, algorithms, and practices are distinct lines of research, they are obviously intertwined. Together they define a conceptual space within which we must place the ethics of AI. Within this space, ethical analyses need to provide principles and recommendations to support the successful design and use of AI in society and to ensure that it supports human and societal values.

This is a complex task, but one that we must take on. Failure would impose huge costs on current and future generations. We learned this historical lesson the hard way, when we did not try to control the impact of the industrial revolution on the rights and lives of labour forces, and when we failed to focus quickly enough on the environmental consequences of massive industrialization and mechanization. It then took decades to establish suitable norms. Indeed, on the environmental side, we are still struggling to enforce regulations to ensure sustainability. It may be worth trying harder with AI-led innovation, to avoid having to learn the same lesson again. Successful attempts, such as the regulation of the development and use of civil nuclear technology, should encourage us in this direction.

This is why it is important to support initiatives like the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems, the Partnership on Artificial Intelligence to Benefit People and Society, and AI4People. Ethicists must provide these and similar initiatives with shared values, methodologies, and overarching goals, and help prevent the risk of a siloed approach and redundant work. This is one of the main goals of research groups like the Digital Ethics Lab of the Oxford Internet Institute, University of Oxford.

If, in its early days, the ethics of AI offered theoretical analyses focused mostly on autonomy, responsibility, and freedom, the pervasive dissemination of AI now requires that these analyses be translated into viable guidelines that leverage the transformative power of AI as an opportunity to promote human dignity and to contribute to the development of open, pluralistic, tolerant, and fair societies. The key word here is “translational”. A translational ethics of AI moves from the whiteboard of academia to the desk of policy making, using theoretical analyses to shape regulatory and governance approaches to AI. Two steps are essential to move in this direction: ethics must assess the trade-offs between competing values, and it must provide a methodology to prevent unwanted uses of AI.

AI-led transformations may expose or exacerbate the incompatibility between fundamental values and rights, and this may lead to difficult policy decisions. Think of the use of AI to identify cyber threats and the risk of mass surveillance that this may pose. At the same time, ethical analyses can identify socially preferable solutions by assessing trade-offs and shaping policy decisions. As we argued in Science earlier this year, the more the pace of AI-based innovation accelerates and the technology matures, the more important it becomes to anticipate its ethical impact. A translational ethics of AI needs to develop foresight methodologies to envisage forthcoming ethical risks, challenges, and opportunities, to shape the development of good AI, and to prevent unwanted consequences.

Ethicists and scholars alone will not be able to address the challenges posed by AI. AI-led transformations require a joint effort of designers and developers, regulators and policy-makers, academia and civil society to identify and pursue the best strategies to unlock the potential of AI. Together, we can prepare our societies to deal with these transformations and seize the historical opportunity of using AI as a force for good.

All that glitters is not gold

To begin, it is worth looking at why the technology behind R+ is gaining traction today. This is down to two crucial factors: hardware and software – both of which are much cheaper today than they were some years ago. On the hardware side, development used to be far more complicated: sensors and tracking systems were large and expensive, and R+ relied on cameras, which only added to the costs. Today, we have commercial visors and headsets, and it is possible to spend less than a thousand dollars on the gear needed to experience these new worlds. Ten years ago, the costs were tenfold.

Graphics Processing Units (GPUs), too, have seen a sharp decline in cost, thanks to the widespread applications of the technology and the scale of production needed to meet demand. From decoding the genome to mining cryptocurrencies, there are now more GPUs around than ever before, and prices have fallen accordingly.

Then there is the software element, which gives creators (production studios and independent developers) the ability to produce content. Over the last years the gaming industry has pushed hard to develop rendering engines, which are now much more widely and readily available. Unreal Engine and Unity 3D are two of the best known; they have helped democratize this software through an open-source vision – charging users only once their product becomes profitable, with the fee based on a percentage of total income. Open source is, in essence, the distribution of software and its original code for free, allowing it to be modified and further distributed by anyone who would like to experiment with it. The second reason for this scalability is that the quality of rendering engines has increased sharply while access to them has become much easier. Anyone keen on exploring the world of R+ can now easily add a rendering engine to their workflow.

More generally, R+ is trending now because we are used to interacting with many different things in our daily lives. Our phones are interactive; our computers, televisions and almost every aspect of our homes are too. This has caused a shift in understanding, expectations, and culture. This new context has especially aided the adoption of AR, as we always expect more interaction and more information – AR layers just that directly onto our reality, and smartphones are perfect windows into this world. VR, too, offers an extremely high level of interaction, because you can design and develop any interaction in an entirely new world. It’s not simply the gaming aspect; it’s the exploration and navigation aspect within this new environment. The technological, together with the human (the need to discover, touch, interact and reside within your interactivity), is a potent mix that is charging the trend.

Many companies see this technology as a shortcut to innovation. But innovation today doesn’t rely solely on technology; it has more to do with how technology interacts with humanity as a whole. Not every company needs to have R+ interactions implemented into its business model – technology is not the key to a successful business – but all need to make space in their plans for the people they are interacting with.

That is not to say there aren’t common use cases for which many businesses could use R+ – and still retain the human focus. Internally, in Human Resources for example, it is usually only hard skills that can be tested when hiring. With R+ we can give a voice to those who are more attuned to their soft skills, by placing candidates in simulated situations that require those skills. Such skill sets are often not obvious during today’s hiring process, and the candidates with the strongest hard skills are often not the best fit for your team. R+ can make it easier to create the right culture for a company, and give some candidates (who may not seem like the ideal ones) a better chance of proving themselves – testing their abilities as a human being rather than as just another cog in the machine.

One of the most interesting things about this period is the emphasis on why we want to interact rather than what we want to interact with. We are now passing through the eye of the needle: the technology has already proven itself engineering-wise; now it needs to prove its ability to create solutions that are focused on people. Most of our R+ work currently involves supporting and training users for work in hazardous environments. Through simulation, it is possible to replicate high-risk scenarios such as climbing to great heights or operating heavy machinery, in order to prepare individuals for when they have to deal with these situations in the real world. Thanks to such simulations, users can carry the lessons from their interactions over into actual reality.

Such experiences are designed around people from the beginning, because it is paramount that they understand what they have to do in such high-risk situations. Augmented graphics can be added inside the simulation too, heightening understanding of the environment and explaining what is needed in real time – something that could never be done in reality during the event itself. Outside of the simulations, R+ technologies can be blended in ways that take the precise information offered by digital media and map it onto the real world – think of AR glasses worn during the event, showing more accurate information to users, who in turn will learn, retain, and use that information better than ever before. This is just one example of how R+ can serve a positive purpose in relation to humanity, in the sense that it is saving lives. In the above example, the concern is not the technology, nor the visual polish of the interaction, but the consequences that result.

As content creators begin to better understand this technology and its complex technical capabilities, design processes will undergo a renaissance. Due to its intuitive, hands-on nature, R+ allows designers to witness their products in 3D. Such a change in perception will undoubtedly help with product prototyping, allowing designers to visualize their products and gain a better understanding of where they will fit in the real world. From sneakers to buildings, the interfaces of R+ are so much more natural than what has come before that they open up an entirely new paradigm of product design.

This is also true at the other end of the product cycle. Customers will be able to visualize what they are purchasing even before they have entered the store. R+ will offer an entirely different experience from traditional brick-and-mortar venues, allowing products to be tested in multiple scenarios, environments, and contexts. It will be difficult to find an area of life in which we won’t see the disruption caused by these technologies.

Just as the internet connected us over immense geographical distances, R+ will take this further. Online, we often act like caricatures of ourselves; with R+ we can once again put the human factor into the online space. By inhabiting the same digital room, we will be better prepared for collaborative working and cooperation – reducing business travel costs at the same time. As ideas such as smart working permeate more and more workplaces, expect R+ to take charge in shaping many of the future workplace’s defining methodologies.

This human-centric approach doesn’t sit solely in the territory of R+; it applies to all technologies. We now live in an era in which we can design the experiences that stem from software, apps, workflows and so on around people, because we now have the tools to do it.

Going forward, although R+ is able to create entire universes for us to explore and engage with, it should also keep us looking closer to what is home, and what is human. In creating cultures that help us all to excel, and in breaking down centuries-old tropes of what it means to be a worker, R+ can bring benefits to businesses that are worlds greater than simply ticking the innovation box. It is important to remember that R+ is just another set of tools; it isn’t the revolution, just a part of it.