AI and self-driving cars

The guarantee of safety in autonomous driving points to some challenges in the evolution of AI technology. What is data's role?

by Vittorio Di Tomaso

AI · 16 December 2019

When creating AI, time is not always on our side. The goal of AI is to create systems that exhibit behaviors comparable to intelligent human behavior. Some of us have read this sentence (or something similar) so many times in the last few years that we might almost take it for granted. We also know that this aspiration is difficult and that, even though progress in some areas of AI application has been fast in the last few years, we are still far from our goal.

It took millions of years for the human brain to evolve (from the first tiny mammals of 250 million years ago to Homo sapiens), and as far as we know, the human brain is the only natural object that exhibits general intelligence. Nevertheless, we have the idea that the “evolution” of the artificial mind will be much faster than the evolution of the biological brain. This impression is based on the fact that neural networks are computer programs, and training them involves computational processes running inside machines that are, in raw computing power, far more powerful than humans. Increasing a machine’s power, with the right algorithms, usually yields better performance in less time. We are not only conquering intelligence, we are also conquering time: we do not need the painstakingly long process of trial and error that evolution had to endure.

Time, however, is still a problem. Consider autonomous driving, one of the Holy Grails of contemporary AI. Automated vehicles are robotic systems built around three primary stages of functionality: Sense, Plan, and Act. Sense is the ability to accurately perceive the environment around the vehicle. Plan, or driving rules, is the ability to decide which strategic (e.g. change lanes) and tactical (e.g. overtake the red car) actions to take. Act is the execution of the decision, translated into mathematical trajectories and velocities and sent to the actuators within the vehicle. The sensing and planning stages are the most likely sources of mistakes that might lead to accidents.
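To make the three stages concrete, here is a minimal, purely illustrative sketch of a Sense-Plan-Act control loop in Python. Every class and function name here is hypothetical and the bodies are stubs; a real autonomous driving stack fuses many sensors and runs far more sophisticated planning.

```python
from dataclasses import dataclass
from typing import List, Tuple

# Hypothetical, simplified data types -- a real stack fuses lidar, radar,
# cameras and maps into far richer representations of the world.

@dataclass
class Perception:
    obstacles: List[Tuple[float, float, float]]  # (x, y, speed) of nearby agents
    lane_offset: float                           # metres from the lane centre

@dataclass
class Plan:
    maneuver: str        # e.g. "keep_lane", "change_lane_left", "overtake"
    target_speed: float  # metres per second

def sense(raw_sensor_frame) -> Perception:
    """Sense: turn raw sensor data into a model of the environment."""
    ...  # object detection, tracking, localisation

def plan(world: Perception) -> Plan:
    """Plan: choose strategic and tactical actions (change lanes, overtake...)."""
    ...  # behaviour planning and trajectory selection

def act(decision: Plan) -> None:
    """Act: translate the plan into trajectories/velocities for the actuators."""
    ...  # steering, throttle, brake commands

def driving_loop(sensor_stream):
    # The three stages run continuously, many times per second.
    for frame in sensor_stream:
        world = sense(frame)
        decision = plan(world)
        act(decision)
```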

We can assume that society will not tolerate road fatalities caused by machines, so the guarantee of safety is crucial to the social acceptance of autonomous vehicles. So how do we guarantee safety? The typical response from autonomous vehicle practitioners is to fall back on a statistical, data-driven approach, in which safety validation improves as mileage data is collected.

In recent papers, Amnon Shashua and his colleagues at Mobileye discuss the problematic nature of a data-driven approach to safety. When humans are driving, the probability of a fatality caused by an accident is known to be about one per million hours of driving. We may assume that, for society to accept machines replacing human drivers, the fatality rate should be greatly reduced, for example by three orders of magnitude, to a probability of one in a billion per hour (in aviation standards, this is the same probability that a wing will spontaneously detach from the aircraft in mid-air). Given these assumptions, can we guarantee safety using a data-driven approach?

According to Shashua and his colleagues, the answer is no. The amount of data required to guarantee a probability of one fatality per billion hours of driving is roughly on the order of 30 billion miles. Moreover, a multi-agent system interacts with its environment and thus cannot be validated offline, so any change to the planning and control software would require a new data collection of the same magnitude: this is clearly unwieldy.
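A quick back-of-envelope calculation shows where a figure of that magnitude comes from. The target rate is the one quoted above; the average driving speed (about 30 mph) is my own assumption, used only to convert hours of driving into miles.

```python
# Order-of-magnitude check of the data requirement for safety validation.
# The target rate comes from the article; the average speed is an assumed
# figure used only to convert hours of driving into miles.

target_fatality_rate = 1e-9              # fatalities per hour of driving
hours_needed = 1 / target_fatality_rate  # ~1e9 hours just to observe one event
avg_speed_mph = 30                       # assumed mixed urban/highway average

miles_needed = hours_needed * avg_speed_mph
print(f"{hours_needed:.0e} hours -> {miles_needed:.1e} miles")
# Output: 1e+09 hours -> 3.0e+10 miles, i.e. roughly 30 billion miles
```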

We simply do not have the time to collect and validate the data, let alone to train the algorithms, run the simulations, and obtain results. The guarantee of safety in autonomous driving shows that the long process of learning through trial and error is still with us: “The progress of AI is certainly much faster than we expected.” These were the words of U.C. Berkeley computer science professor Stuart Russell, co-author of Artificial Intelligence: A Modern Approach, in a 2017 TED talk. But time is not always on our side.

Image: Renault Float – concept by Yunchen Cai, Central Saint Martins for Renault.