How to fail and be successful

by Vittorio Di Tomaso

AI · 13 May 2021

The failure of Artificial Intelligence was televised by the BBC in June 1973, when the broadcaster’s science series Controversy featured a debate between Sir James Lighthill, the accuser, and three AI experts as defendants.

In 1972, Sir James Lighthill, a mathematician turned physicist, a fellow of the Royal Society, and Lucasian Professor at Cambridge University, was asked by the U.K. Science Research Council to write a report on the state of the art in Artificial Intelligence. The U.K. government, like the U.S. government, was growing suspicious of AI: in the 20 years since the field’s inception, and after millions of dollars invested by public institutions (in particular the U.S. military), the promised autonomous robots, machine translation, and intelligent computers like HAL 9000 were nowhere to be seen.

The program played out like an episode of a court drama. Lighthill stood behind a lectern on a platform elevated above the other debaters. Seated in the pit below were Donald Michie and Richard Gregory (both from the Department of Machine Intelligence at Edinburgh) and John McCarthy (from Stanford University, one of the organizers of the 1956 Dartmouth College summer school that launched the AI research program). In the first half of the proceedings, Lighthill stood on his platform and stated his case. In the debate that followed, the scientists tried to rebut his accusations. Lighthill accused the field of both bad science and a failure to achieve meaningful results. The science was bad, in Lighthill’s view, because AI researchers had approached very complex problems with oversimplified methods, failing to address the combinatorial explosion that arises when toy solutions meet real-world obstacles. He labeled the results irrelevant because, after 20 years, the field’s discoveries had not achieved the promised impact in any domain. The three scientists’ defense appeared weak; only McCarthy maintained that the research program was sound and that machine intelligence was achievable in a manageable time frame with the conceptual and technological tools available at the time.

We do not know whether the public watching the debate on their TVs were convinced by either side, but we do know that Lighthill’s report convinced most of the field’s funders at the time: universities and other mission-oriented public bodies (particularly in the defense sector). After 1973, most AI funding dried up. In the U.K., AI research disappeared from most universities (exceptions were Edinburgh and Essex); in the U.S., the Defense Advanced Research Projects Agency (DARPA) cut funding for many large projects, such as the ‘Speech Understanding Research’ initiative at Carnegie Mellon, a $3 million a year grant (more than $15 million a year in inflation-adjusted 2020 dollars).

Notably, many of the AI algorithms devised in the ‘60s and early ‘70s that Lighthill accused of being unable to handle the combinatorial explosion of real-world problems are still at the core of broadly used real-world applications. Two examples stand out. One is the pathfinding algorithm A*, first published in 1968 by researchers at the Stanford Research Institute and still used in ubiquitous mapping and navigation software like Google Maps. The other is the family of speech recognition methods developed in the ‘70s at Carnegie Mellon University, built on Hidden Markov Models, which are still embedded in contemporary voice assistants like Alexa and Siri.
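To recall what A* actually does: it explores a graph by always expanding the node with the smallest “cost so far plus estimated cost to go,” which is precisely the kind of pruning that tames the combinatorial explosion Lighthill worried about. Here is a minimal sketch in Python; the toy road graph and the zero heuristic are illustrative, not taken from any production system:

```python
import heapq

def a_star(graph, start, goal, heuristic):
    """Minimal A* search. `graph` maps a node to a list of
    (neighbor, edge_cost) pairs; `heuristic(n)` must never
    overestimate the true remaining cost from n to `goal`."""
    # Priority queue of (estimated total cost, node).
    frontier = [(heuristic(start), start)]
    best_cost = {start: 0}      # cheapest known cost from start
    came_from = {start: None}   # predecessors, for path reconstruction

    while frontier:
        _, current = heapq.heappop(frontier)
        if current == goal:
            # Walk back through predecessors to rebuild the path.
            path = []
            while current is not None:
                path.append(current)
                current = came_from[current]
            return list(reversed(path))
        for neighbor, cost in graph[current]:
            new_cost = best_cost[current] + cost
            if neighbor not in best_cost or new_cost < best_cost[neighbor]:
                best_cost[neighbor] = new_cost
                came_from[neighbor] = current
                # Priority = cost so far + heuristic estimate to goal.
                heapq.heappush(frontier,
                               (new_cost + heuristic(neighbor), neighbor))
    return None  # goal unreachable

# Toy example: four towns with road distances; a zero heuristic
# makes A* behave like Dijkstra's algorithm, which is still correct.
roads = {"A": [("B", 1), ("C", 4)], "B": [("C", 1), ("D", 5)],
         "C": [("D", 1)], "D": []}
print(a_star(roads, "A", "D", heuristic=lambda n: 0))  # ['A', 'B', 'C', 'D']
```

With a heuristic that never overestimates the remaining distance (straight-line distance on a map, for instance), A* finds the optimal path while visiting far fewer nodes than a blind search would.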

In the decade that followed, AI suffered its first winter, which ended only in the early ‘80s thanks to new hype around logic programming, Fifth Generation computing, and expert systems. That summer proved short-lived, however, and was followed by a second winter that lasted until around 2010.

Now, thanks to neural networks and deep learning, we are living through the third summer of AI, and in the last few years we have become used to reading statements like “AI is a transformative technology, on par with electricity.” AI has become both a necessary ingredient for any software that wants to be considered relevant and a topic of thoughtful discussion on the future of human intelligence.

Practitioners are delighted: Their professional lives have become more interesting, more relevant, and sometimes more lucrative. But they feel a chill inside, because they remember (or have been told about) that night in 1973 when AI failed so spectacularly on live television, and they know that with great power comes great responsibility (and the risk of great failure).