When Artificial Intelligence embraced uncertainty

The skeptical enthusiast

by Vittorio Di Tomaso

AI 10 December 2021

Imagine you were an AI researcher in 1975: the world population had reached 4 billion for the first time, American civilians had been airlifted from the roof of the U.S. Embassy in Saigon, marking the definitive end of American involvement in the Vietnam War, the movie Jaws had been released, and Iron Maiden had formed in England.

But as an AI researcher, the things that mattered most were the Turing Award given to Herbert Simon and Allen Newell, and the publication of Buchanan and Shortliffe’s paper describing MYCIN, an early expert system that used artificial intelligence to identify the bacteria causing severe infections and to recommend antibiotics, with the dosage adjusted for the patient’s body weight.

Simon and Newell were both at Dartmouth’s summer workshop in 1956, when the field of AI was born. At the workshop, they presented Logic Theorist, a program able to prove mathematical theorems that even produced a novel proof of a lemma. Unfortunately, Logic Theorist could not prove any new theorem; that was achieved only many years later, by EQP, in 1996.

As the Turing Award citation says, Simon and Newell were the principal instigators of the idea that human cognition can be described in terms of a symbol system, and they developed detailed theories of human problem-solving.

Simon and Newell were among the first to use AI both as a tool for investigating the mind and as a tool for solving interesting real-world problems. What they tried to understand is crucial for the mechanization of intelligent behavior: how can we program a machine to solve problems? Logic Theorist succeeded in a specific kind of problem-solving. However, most real-life problems have very little in common with proving mathematical theorems.

In 1959, Simon and Newell (with J.C. Shaw, a colleague of Newell at RAND Corporation) presented General Problem Solver (GPS). This program used means-ends analysis to solve non-mathematical puzzles like the Tower of Hanoi and the “missionaries-and-cannibals” problem (three missionaries and three cannibals on one side of a river; a boat big enough for two people; how can everyone cross the river without the cannibals ever outnumbering the missionaries?).

GPS was the first AI program that separated the knowledge of problems (rules represented as input data) from the strategy of solving problems (a generic solver engine). While Simon’s and Newell’s goal was primarily to use the program as a tool to investigate the cognitive ability of problem-solving displayed by humans, GPS’ (relative) success triggered the idea that AI techniques could be used to solve problems that had a very direct and significant impact on the real world.
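To make that separation concrete, here is a minimal sketch in Python. The names and the search strategy are mine, not GPS’s: GPS used means-ends analysis, while this toy solver uses plain breadth-first search. What it does share with GPS is the architecture described above: the solver knows nothing about missionaries or cannibals, and the problem reaches it as data.

```python
from collections import deque

# State: (missionaries_on_left, cannibals_on_left, boat_on_left_bank).
# Hypothetical, minimal illustration: the problem (initial state, goal test,
# legal moves) is handed as data to a generic solver.

def solve(initial, is_goal, successors):
    """Generic solver: knows nothing about the specific puzzle."""
    frontier = deque([(initial, [])])
    seen = {initial}
    while frontier:
        state, path = frontier.popleft()
        if is_goal(state):
            return path + [state]
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, path + [state]))
    return None

# Problem-specific knowledge, expressed as data and small predicates.
MOVES = [(1, 0), (2, 0), (0, 1), (0, 2), (1, 1)]  # people carried by the boat

def legal(m, c):
    # Missionaries are never outnumbered on either bank (or are absent there).
    return (0 <= m <= 3 and 0 <= c <= 3 and
            (m == 0 or m >= c) and (3 - m == 0 or 3 - m >= 3 - c))

def successors(state):
    m, c, boat = state
    sign = -1 if boat else 1          # the boat leaves the bank it is on
    for dm, dc in MOVES:
        nm, nc = m + sign * dm, c + sign * dc
        if legal(nm, nc):
            yield (nm, nc, not boat)

solution = solve((3, 3, True), lambda s: s == (0, 0, False), successors)
for step in solution:
    print(step)
```

Running the sketch prints the sequence of bank configurations from (3, 3, True) to (0, 0, False); swapping in a different puzzle means supplying different moves and a different legality test, while the solver stays untouched.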

Problems such as medical diagnosis, creditworthiness judgments, and airport slot allocation: problems for which a (sub)set of domain knowledge could be formalized into a set of rules, so that a theorem prover augmented with heuristics could circumvent the combinatorial explosion and obtain solutions.

When Buchanan and Shortliffe described MYCIN, they sparked a new industry. In just a few years, the expert systems market boomed to billions of dollars, with major investments from Fortune 500 corporations eager to reap the benefit of the new and promising “knowledge economy.” It was the second summer of AI, epitomized by Lisp Machines, logic programming, and knowledge engineering.


As we all know, the upward slope of this hype cycle ended in the late 80s, ushering in the second AI winter: companies failed, AI labs shut down, and researchers were afraid of even using the expression “artificial intelligence.”

There are many reasons for the perceived failure of expert systems and knowledge-based approaches, but one stands out among the others: the lack of a deep understanding of a crucial feature of human reasoning, the ability to deal with uncertainty.

Reasoning about any realistic domain always requires simplifications: the very act of preparing knowledge to support reasoning implies that we leave many facts unknown, unexpressed, or roughly summarized. For example, if we choose to encode knowledge and behavior in rules such as “Birds fly” or “Smoke suggests fire,” the rules will have exceptions that we cannot afford to enumerate, such as penguins, chickens, or broken chimneys…

Even the conditions under which the rules apply are usually ambiguously defined. Reasoning with exceptions is like navigating a minefield: most steps are fine, but some can be catastrophic. The problems raised by exceptions and by non-monotonic phenomena, such as defeasible inferences in which we draw tentative conclusions that may later be retracted in the light of further evidence, were well known.
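The non-monotonic flavor of such reasoning fits in a few lines of code. The sketch below is a toy, not a real non-monotonic logic, and the predicates are invented for illustration; the point is simply that adding a fact can remove a conclusion, something classical logic never does.

```python
# Toy illustration of a defeasible inference: a default conclusion
# ("birds fly") that is withdrawn when new evidence arrives.
# The predicate names are invented for the example.

def flies(facts):
    """Default rule: a bird flies unless a known exception applies."""
    exceptions = {"penguin", "chicken", "broken_wing"}
    return "bird" in facts and not (facts & exceptions)

facts = {"bird"}
print(flies(facts))    # True: with only "bird" known, we tentatively conclude flight

facts.add("penguin")   # new evidence arrives
print(flies(facts))    # False: the earlier conclusion is retracted
```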

In the ‘70s and ‘80s, many AI researchers, including those responsible for the design of first-generation expert systems, tried to adapt classical logic to deal with uncertainty, for example by attaching to each proposition a numerical measure of uncertainty and then combining these measures according to uniform syntactic principles, the way truth values are combined in logic.

This approach, called certainty factors, was introduced in MYCIN and adopted in most subsequent commercial expert systems. However, it is not effective: as was later shown formally, it often yields unpredictable and counterintuitive results.
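For reference, the parallel combination rule usually quoted for MYCIN-style certainty factors can be written down in a few lines. The sketch below gives the rule as it is commonly presented in the literature, stripped of the thresholds and clinical machinery of the real system; the example values are invented.

```python
def combine_cf(cf1, cf2):
    """Combine two certainty factors for the same hypothesis (EMYCIN-style).

    Each factor lies in [-1, 1]: positive values argue for the hypothesis,
    negative values argue against it.
    """
    if cf1 >= 0 and cf2 >= 0:
        return cf1 + cf2 * (1 - cf1)
    if cf1 < 0 and cf2 < 0:
        return cf1 + cf2 * (1 + cf1)
    return (cf1 + cf2) / (1 - min(abs(cf1), abs(cf2)))

# Two weak pieces of supporting evidence reinforce each other...
print(combine_cf(0.4, 0.4))    # 0.64
# ...and mixed evidence is averaged after a fashion.
print(combine_cf(0.8, -0.3))   # ≈ 0.71
```

The numbers combine according to a fixed syntactic recipe, regardless of whether the pieces of evidence are independent or how plausible the hypothesis was to begin with; that mismatch with probability theory is at the root of much of the counterintuitive behavior mentioned above.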

The savior of AI from the conundrums of uncertainty was an émigré from Israel, Judea Pearl, an engineer turned physicist, turned computer scientist, who was among the few researchers in the field who kept working on probability and causal inference in the ‘70s.

It may seem strange today, but in that period, probability theory was considered a controversial topic in AI and in computer science in general. When Pearl’s book Probabilistic Reasoning in Intelligent Systems was published in 1988, it took the field by storm. In just a few years, Pearl became one of the most published and cited scholars in computer science.

Pearl encouraged the field to study and understand uncertainty because he believed that a sound probabilistic analysis of a problem would give intuitively correct results, even in cases in which rule-based systems behaved incorrectly.

His contribution, for which he received the Turing Award in 2011, was an application of Bayesian probability to reasoning and causal inference. His work had a transformative and lasting effect on the way practitioners of AI think about truth conditions (and a profound impact outside the field, in psychology, medicine, and the social sciences).
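A single application of Bayes’ rule already shows what the probabilistic view buys. The numbers below are invented for illustration: a rule that attaches a fixed certainty to “a positive test suggests the disease” ignores how rare the disease is, while the Bayesian analysis weighs the prior and the evidence together.

```python
# Illustrative only: all probabilities here are made up.
p_disease = 0.01              # prior: 1% of patients have the disease
p_pos_given_disease = 0.90    # sensitivity of the test
p_pos_given_healthy = 0.09    # false-positive rate

# Total probability of a positive test, sick or not.
p_pos = (p_disease * p_pos_given_disease
         + (1 - p_disease) * p_pos_given_healthy)

# Bayes' rule: posterior probability of the disease given a positive test.
p_disease_given_pos = p_disease * p_pos_given_disease / p_pos
print(f"P(disease | positive test) = {p_disease_given_pos:.2f}")   # ≈ 0.09
```

Despite a fairly accurate test, the posterior is only about nine percent, because the disease is rare; a rule firing with a fixed strength of 0.9 would wildly overstate the case. Pearl’s Bayesian networks generalize exactly this kind of calculation to large webs of interdependent variables.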

Pearl showed that, while it may be tempting to think of reasoning as a well-defined deterministic Cartesian machine, in which you can neatly step from one true proposition to the next, in reality exceptions and non-monotonic phenomena turn up everywhere, and the only solution is to embrace uncertainty and use it to our advantage to solve ever more complex problems.