How does AI measure uncertainty?

Computer science aims at cracking complicated mechanisms beyond doubt. But does it make sense to always reduce uncertainty?

by Francesca Alloatti

AI · 21 October 2021

In the domain of computer science, almost everything is measurable, even uncertainty. Perplexity is a measurement of how well a probability distribution or probability model predicts a sample. A low perplexity means that the probability distribution is good at predicting the sample. In contrast, a higher perplexity indicates that the model finds the sample harder to predict: models with lower perplexity can make “stronger predictions,” in a sense.
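To make this concrete, here is a minimal sketch (my own illustration, not from the original article) of how perplexity can be computed from the probabilities a model assigned to the outcomes it actually observed:

```python
import math

def perplexity(probs):
    """Perplexity of a model over a sample: the exponential of the
    average negative log-probability it assigned to what it observed.
    Lower values mean the model was less 'surprised' by the sample."""
    avg_log_prob = sum(math.log(p) for p in probs) / len(probs)
    return math.exp(-avg_log_prob)

# A model that assigns high probability to what actually happened:
print(perplexity([0.9, 0.8, 0.95, 0.85]))  # ~1.15 (low: strong predictions)
# A model that spreads its probability thinly:
print(perplexity([0.2, 0.1, 0.25, 0.15]))  # ~6.04 (high: the sample "surprises" it)
```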

Perplexity is not the only measurement computer scientists use to assess the performance of, let’s say, a neural network that classifies objects. Another fundamental metric is accuracy. Informally, accuracy is the fraction of predictions a model got right: the number of correctly predicted data points divided by the total number of data points.
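In code, accuracy really is that simple; the example below is my own sketch, with made-up labels:

```python
def accuracy(predictions, labels):
    """Fraction of predictions that match the true labels."""
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)

# Three out of four classifications are right:
print(accuracy(["cat", "dog", "cat", "dog"],
               ["cat", "dog", "dog", "dog"]))  # 0.75
```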

Even without formal training in statistics, it is clear that perplexity and accuracy are different but complementary concepts. The central point is that accuracy does not say how certain the model thinks it is about… anything. And certainty matters. An AI model may get the correct results 80% of the time, but if it also claims to be completely sure about every single answer, and scientists act on that certainty, humans may be taking involuntary risks. If the system instead informs the scientists about how confident it is in each value, they can take that into account as well.
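A toy sketch of that gap, with entirely made-up numbers: a model that is right 80% of the time but reports near-total confidence on every answer is overconfident, and the difference between claimed confidence and actual accuracy is exactly the risk the scientists inherit.

```python
# Hypothetical outputs: five predictions, the confidence the model
# reported for each, and whether each turned out to be correct.
confidences = [0.99, 0.98, 0.99, 0.97, 0.99]
correct     = [True, True, True, True, False]

acc      = sum(correct) / len(correct)              # 0.80
avg_conf = sum(confidences) / len(confidences)      # ~0.98
print(f"overconfidence gap: {avg_conf - acc:.2f}")  # ~0.18
```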

From prophecies to predictions

Computer science’s obsession with certainty has undoubtedly been there from the beginning: after all, what is the point in building huge computing machines if not to crack complicated mechanisms precisely and beyond all doubt? The rise of probabilistic reasoning has improved our ability to make accurate predictions and, therefore, also changed our perception of the future. The future is no longer something mysterious and unpredictable, but rather the result of visible patterns that unwind from our present onward.

In his book Homo Deus, Yuval Noah Harari points out that the modern world no longer believes that events occur because of divine forces at play, but rather because of human choices. If pretty much anything can be explained through a concatenation of choices and motives dictated by humankind’s will, then, theoretically, all these facts could be computed together to predict what would happen next. Right? Well, although on paper it could be possible, there is a physical limit to the number of variables a machine can combine to depict all the possible scenarios.

That means that beyond a certain threshold, it doesn’t really matter how good the forecasting method is since there is a material limit to the computational capability of the machine that is running the algorithm. Let’s suppose that the prediction model is quite effective and is already taking into account the most critical variables, those that are most likely to affect the results. For example, a certain city is located in a seismic area, so it is likely to be hit by an earthquake sooner or later.

Even then, there is still a mass of micro-variables, small events, and decisions that could change the course of the future but are just too numerous and too small to be considered at present; it is the butterfly effect. If a system were to consider every single variable — even if weighted specifically according to the impact it could cause — it would come up with such a huge number of possible scenarios that we could not possibly take them all into account. Unless, perhaps, we could use a quantum computer, a system capable of taking into account every scenario in the world that could ever occur, and treating them all as possible.
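To get a feel for how quickly that limit arrives (my own back-of-the-envelope numbers, not the author’s): even a modest set of yes/no variables produces more scenarios than any machine could enumerate.

```python
# With n independent yes/no variables, a forecaster faces 2**n scenarios.
n = 300
scenarios = 2 ** n
print(f"{scenarios:.2e}")  # ~2.04e+90 scenarios, far more than the
                           # ~1e80 atoms in the observable universe
```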

Once you have predicted all the possible outcomes, what is going to drive your choice? 

Once a computer has laid in front of humankind everything that could possibly happen, we could realistically make informed choices. But would that lead to making “exact” choices? (Where “exact” means entirely correct, choices that would avoid fatal problems and mistakes that affect the world’s population.) Probably not: personal interests and other factors, such as morality, would then come into play, leading to apparently meaningless decisions.


Morality can be defined as the domain of customs, that is, of practical living: it involves a conscious choice between actions that are equally possible, but to which different or opposite values are attributed (good and evil, right and wrong). Even if a system were able to give us several different predictions made with low perplexity and high accuracy, the difference between the occurrence of one scenario or another would still come down to human choice. A choice based on what we believe to be good, or fair, or just the best option at that moment. In that sense, technology will always leave us the benefit of the doubt between two different options.

On the other hand, for technology itself, the concept of “doubt” is just a matter of confidence. Indeed, AI systems usually produce outputs according to a metric of confidence: they give a certain answer because it is the most probable one for them, according to how their algorithms work, algorithms that humans have built into them. This is the mechanism that, for instance, autonomous cars use to make decisions. The famous “trolley problem” has been applied to autonomous cars to study what a machine would do when confronted with that classic philosophical dilemma.
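A minimal sketch of that mechanism (my own illustration; real systems are far more elaborate): a classifier turns raw scores into probabilities and simply reports the largest one as its “confidence.”

```python
import math

def softmax(scores):
    """Turn a model's raw scores into a probability distribution."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical raw scores for three possible answers:
probs = softmax([2.0, 0.5, 0.1])   # ~[0.73, 0.16, 0.11]
answer = probs.index(max(probs))   # the machine picks answer 0...
confidence = probs[answer]         # ...and is ~73% "confident" in it
```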

This situation has sparked numerous discussions around the ethics of machines; but the truth is, machines are machines, and their (potential) ethics is just another algorithm that humans can embed into their operation. In the specific case of autonomous cars, the trolley problem can be likened to risk management. The solution to this dilemma, at least considering imminent technologies and their foreseeable descendants (that is, those that we would expect without imagining radical transformations of what is currently available), is that the car should simply continue along its trajectory rather than swerve.

“Any steering the car does will be sharp relative to its speed, and thus at significant risk of loss of vehicle control,” says Rebecca Davnall, from the University of Liverpool; “in any environment sufficiently crowded that all paths available to the car result in collisions, a loss of vehicle control is much more dangerous than a controlled stop.” In the words of Andrew Chatham, an engineer on Google’s “X project”: “It takes some of the intellectual intrigue out of the problem, but the answer is almost always ‘slam on the brakes.’” What does this tell us about the machine’s capacity for doubt? That, to be honest, there is none: if the most sensible thing to do according to the laws of physics is to keep going straight to minimize risk, then the machine will do that, no matter whether there is an old lady, a baby, or five people in its path. Leave the dilemmas to the humans.


Can technologies reduce our uncertainties?

Nonetheless, technologies do reduce uncertainties in a way. They can compute and take into account many more variables, far more precisely, than the human mind can. They can output the most probable scenario and also tell you how confident they are in their prediction. That said, it still always depends on the variables at play: when a completely unprecedented event occurs, such as the COVID-19 pandemic, no prediction system could have foreseen its deep impact and consequences. Mathematical models had to be adjusted to the unforeseen events happening all around us. Even if AI cannot help us with specific predictions of the future, it can still help us manage the delicate phase in which we do not know how our world will look in a few months.

Once we accept that AI’s predictive ability is only valid when we, as humans, can specify the variables it needs to compute, we can also stop having unrealistic expectations about its applicability. We can stop thinking that AI is the answer to all of our doubts and dilemmas about what lies ahead, not only as individuals but also as a human species, and finally accept that the role of technology (for now) is supportive.

The role of technology is to enhance our ability to understand the complex situation surrounding us and help us see glimpses of what will happen in the future. According to Martin Riedel, a pandemic expert who cooperated with the Daimler group, the owners of the Mercedes-Benz brand: “Our brains aren’t made for calculating exponential functions. Imagine this: you’ve got a container in which you place a pathogen that doubles its numbers once a minute so that the container is full after one hour. This means that after 59 minutes the container was only half full. And after 55 minutes it was only one thirty-second full. It’s very difficult for us to understand why the curve suddenly moves upward so steeply.”
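Riedel’s container is easy to check with a few lines of code (my own sketch of his arithmetic):

```python
# A pathogen population doubles every minute and fills the container
# in 60 minutes. How full is it at minute m? Exactly 2**(m - 60).
def fraction_full(minute, total_minutes=60):
    return 2 ** (minute - total_minutes)

print(fraction_full(59))  # 0.5     -> half full one minute before the end
print(fraction_full(55))  # 0.03125 -> one thirty-second full at minute 55
print(fraction_full(60))  # 1.0     -> full
```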

What technology can do is make these concepts easier for us to visualize and understand; it can help us make decisions about our future, recommending the most sensible path according to the data it has been fed. However, we should keep in mind an important concept: while the machine’s answer will probably be just “slam on the brakes,” we as humans still have to deal with the moral consequences of that action. We can leverage AI technologies to our advantage, but only through our creative potential can we travel across the (fertile) swamps of doubt, uncertainty, and risk to get to a new normal and make new discoveries.