Robots are machines that perceive the world around them, perform some computation, make decisions, and act on their surroundings. Autonomous mobile robots (AMRs) can also move freely through their environment on their own, behaving as intelligent agents. However, AMRs are notoriously tricky to build. In addition to the challenges they pose on the electromechanical front, they present cognitive hurdles: they need to sense their environment, understand it to a certain extent, and act on a plan to achieve their goal. Furthermore, AMRs must operate in a stochastic, partially observable world populated by other agents (including humans), each acting according to its own plan.
During the initial decades of AI history, the dominant paradigm was the information processing model: the operation of a cognitive function was described in terms of symbolic information processing. This paradigm led to the classic three-tier architecture for robots. First, sensors acquire raw data from the environment. Then, somehow, this data is integrated into a symbolic representation of the world and compared to a representation of the robot's goal. In the last step, the representation is used to compute an action plan, which a range of actuators transforms into actions. This approach is rational and straightforward, but it runs into a problem: the real world is too rich, dynamic, and complex to be represented effectively. As a result, robots built on this logic were stuck with the same dilemmas plaguing the rest of the AI community: they could not cope with the combinatorial explosion and uncertainty of the real world.
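The three-tier pipeline can be sketched as a simple loop over three stages. This is only an illustration of the idea, not any real robotics API: the sensor threshold, the world model, and the action names are all hypothetical stand-ins.

```python
# Minimal sketch of the classic three-tier (sense -> plan -> act) loop.
# All names and values here are illustrative, not a real robotics API.

def sense(range_reading_m):
    """Integrate a raw range reading into a symbolic world model."""
    return {"obstacle_ahead": range_reading_m < 0.5}

def plan(world_model, goal):
    """Compare the symbolic world model with the goal and pick an action."""
    if world_model["obstacle_ahead"]:
        return "turn_left"
    return "forward" if goal == "explore" else "stop"

def act(action):
    """Dispatch the chosen action to the actuators (here: just report it)."""
    return f"actuators: {action}"

# One iteration of the loop: a wall 0.3 m ahead triggers avoidance.
world = sense(0.3)
print(act(plan(world, "explore")))  # -> actuators: turn_left
```

The weakness the text describes lives in the middle stage: `sense` must compress a rich, changing world into a few symbols before `plan` can do anything at all.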
While most researchers fled to simpler, more abstract sandbox worlds that they could easily represent, a group of young researchers – influenced by biologists like Maturana and Varela, and by the seminal work of cybernetics pioneers Norbert Wiener and Grey Walter – began to approach the matter differently: What if we do away with representation altogether? Couldn't we solve the problem with simpler agents that directly couple perception and action – a method more akin to quick, automatic reflexes than to slow, ponderous, conscious reasoning?
Inspired by evolution, the movement's idea was that the essence of autonomous life is the ability to navigate a dynamic environment, sensing one's surroundings well enough to stay alive and reproduce. By solving this problem, they aimed to solve cognition. Led by MIT professor Rodney Brooks, these researchers spawned a "nouvelle AI" that was as revolutionary as the "nouvelle vague" of cinema: a new paradigm that shifted the focus from high-level cognitive architectures to low-level behavior in the real world. Their keywords were situatedness, the ability to interact with the real world within human-like timeframes; embodiment, the need to have a physical body that acts in a real physical environment; and emergence, the belief that intelligence emerges from the interaction of many simpler components, from the bottom up.
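Brooks realized this bottom-up emergence as his subsumption architecture: simple behaviors stacked in layers, with higher-priority layers able to suppress (subsume) lower ones. A toy sketch of the idea, with invented behaviors and sensor values:

```python
# Toy sketch of a subsumption-style controller: behaviors are checked
# in priority order, and the first one that fires suppresses (subsumes)
# the layers below it. Behavior names and thresholds are illustrative.

def avoid(sensors):
    """High priority: reflexively steer away from a nearby obstacle."""
    if sensors["range"] < 0.3:
        return "turn_right"
    return None  # defer to lower layers

def wander(sensors):
    """Lowest priority: keep moving when nothing more urgent fires."""
    return "forward"

LAYERS = [avoid, wander]  # highest priority first

def control(sensors):
    """Return the action of the highest-priority behavior that fires."""
    for behavior in LAYERS:
        action = behavior(sensors)
        if action is not None:
            return action

print(control({"range": 0.1}))  # obstacle close -> turn_right
print(control({"range": 5.0}))  # clear path     -> forward
```

Note that no layer builds a world model: each behavior maps sensing directly to action, and the robot's overall "intelligence" emerges from their interaction.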
Armed with this new paradigm, the researchers started to build robots. Their robots were small, nimble machines, such as the insect-like six-legged Genghis, that could actually move autonomously over rough terrain. And they were successful!
Brooks co-founded iRobot in the early 1990s; a decade later the company released the Roomba, a simple AMR that could clean real homes, and today the robotic-vacuum market is worth several billion dollars. Mark Tilden was an eccentric, cowboy-style, British-born roboticist who went from working for NASA and the military to becoming one of the most commercially successful roboticists on Earth, selling millions of his creatures, like the Robosapien, to kids around the world. All his robots were based on some version of the nouvelle AI paradigm, as were those of the neurobiologist and violinist Valentino Braitenberg – a South Tyrolean of noble origin who for many years was a director at the Max Planck Institute for Biological Cybernetics and the author of Vehicles: Experiments in Synthetic Psychology, a fascinating book on robot design. Without much complex cognition, they worked.
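Braitenberg's most famous thought experiment, Vehicle 2, shows how little machinery such designs need: two light sensors wired to two motors, with no computation in between. A minimal sketch (the sensor readings and wiring convention are illustrative):

```python
# Sketch of Braitenberg's Vehicle 2: two light sensors wired straight
# or crosswise to two motors. With crossed wiring, the brighter side
# drives the opposite wheel faster, turning the vehicle toward the
# light. Sensor values are arbitrary illustrative intensities.

def vehicle_step(left_light, right_light, crossed=True):
    """Map sensor intensities directly to motor speeds - no planning."""
    if crossed:
        left_motor, right_motor = right_light, left_light
    else:
        left_motor, right_motor = left_light, right_light
    return left_motor, right_motor

# Light is stronger on the right: crossed wiring speeds up the left
# wheel, so the vehicle turns right, toward the source ("aggression").
left, right = vehicle_step(left_light=0.2, right_light=0.8)
print("turns toward light" if left > right else "turns away from light")
```

An observer watching such a vehicle would describe it as "aggressive" or "fearful" – Braitenberg's point being that apparently psychological behavior can emerge from trivially simple perception-action couplings.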
But the real jewels of behavior-based robotics are far from planet Earth. If the human race ever leaves low orbit and the Earth-Moon system, AMRs will be necessary: we cannot hope to explore deep space without highly autonomous robots. And indeed, NASA turned to Brooks and Tilden in the '90s when designing the first Mars rovers. The first rover, Sojourner, traveled only about 100 meters in 1997, but its successors – Spirit, Opportunity, and Curiosity – explored tens of kilometers of the Martian surface. Perseverance, which landed on Mars in February 2021, even carried Ingenuity, the first aerial drone to fly autonomously in the skies of another planet.
Nouvelle AI was a turning point for artificial intelligence. It reminded us that – in the words of Hans Moravec – simple things, such as moving without bumping into obstacles, are harder than complex things, like playing chess, and that when you succeed at simple things, great things become possible, like putting flying robots on Mars.