The headline is a question, not a position, but it stems from the recent “recall” of Tesla’s Autopilot system. It is not a recall in the sense that I grew up with, where you have to take your car to a dealer to get the fix. Rather, it is an over-the-air software update. The update will not fix many of the issues with Autopilot; it will merely attempt to warn the driver when they are not paying enough attention. Therein lies the issue.
All of these systems are helper systems. They all require that a human being, while not in control, pay enough attention to take back control in an emergency and at short notice. People are terrible at such tasks. It has been proven over and over again that people simply don’t have the right kind of attention span to do nothing for extended periods of time and then make critical decisions with little to no warning. The fundamental premise appears to be flawed.
Perhaps this is merely a Tesla problem. Tesla’s system is different from most others in ways that most experts agree make it less safe: it lacks LIDAR, high-quality maps, and the infrared cameras other systems use to check whether the driver is paying attention. And of course, I have already written about how Tesla’s marketing lies to people about what the system can and cannot do, leading to life-threatening misunderstandings. Tesla is also largely the only company that implies its system can drive for you. But self-driving cars, such as those operated by Cruise, had significant problems when they were let loose in San Francisco, including dragging a victim for some distance under the car. This does not appear to be an isolated problem.
The counter to this is that self-driving cars and driver-assistance programs are safer than human-only driving. Except that may not be true. The safety reports are largely based on self-reported data, and when researchers looked at that data, they found that the claims do not hold up. A recent peer-reviewed paper found that once you control for road types, conditions, and driver age, partially automated vehicles have 11% more accidents than human drivers would be expected to have under the same conditions.
There is a concept in programming called the minimum viable product. It simply means the minimum functionality you can ship and still expect people to get use from it. The expectation is that you will improve the product over time, adding more features and benefits as people use it. Self- and assisted-driving systems seem to work under the same concept: companies put out systems that cannot actually replace humans in all situations, with the expectation that what they learn will improve the product. Unfortunately, as a result, we end up in a less safe environment than the one we had before.
Self- and assisted-driving may be one of those areas where partial success is the same as complete failure. Given the record, and given that human beings are terrible at the “don’t pay attention until it’s an emergency” task that appears to be endemic to these systems, it seems reasonable to argue that such systems cannot be allowed on the road until they are much more feature-complete. Any system that allows drivers to take their hands off the wheel, any system that can drive for a person for any length of time, appears to be too dangerous to allow on the road.