Anyone who follows EVs and space has run into the theme that the Biden administration has gone to war with Elon Musk. For months I've been arguing against this view in many forums. But the appointment of Missy Cummings to the NHTSA safety advisory board is a hard one to justify.
Professor Cummings has the qualifications; that really isn't the issue. The issue, as I see it, is that she is wedded to one specific approach and set of technologies. The day her appointment started to trickle into the broader news, she, understandably, deleted her entire Twitter account. What was interesting is that she had already been deleting specific threads relating to Tesla, including tweets supporting TSLAQ, the anti-Musk community. The National Highway Traffic Safety Administration (NHTSA) then came out with a preliminary report on a famous fiery supposed-Autopilot crash clearing Tesla {the occupants were in the front seats and the pedal was to the floor}, in what appears to be faux balance in its timing. Buttigieg's rather snippy response, that Elon could call him, didn't help.
It is very important for NHTSA to get a handle on autonomy. Tesla, as a worldwide leader and, more importantly, the company with the most complete driving dataset in history, is of course going to come under enhanced scrutiny because of that. It is equally important to have skeptics on an advisory panel; it should actually be a requirement. However, to have someone who has been openly hostile not only to the approach academically but to the specific company and its leader, while sitting on the board of, and holding a million dollars in stock in, a company supplying the technology for a competing approach, seems to put the conflict of interest beyond mere appearance. And that doesn't even count the sources that fund her academic research.
This appointment has a number of threads that can be confusing and that cross a lot of triggers for all involved. It isn't just a Musk vs. anti-Musk confrontation; it has much deeper roots in AI and its place in society, development approaches, where government regulation fits in with developing technologies, and a host of other issues.
This Slate article and this YouTube video lay out the positions fairly well: on one side, Cummings is an expert and Autopilot is dangerous; on the other, Cummings is conflicted and Autopilot is safer than the competition, so what about them? There is validity on both sides of the argument, and an advisory board should certainly have skeptics. What has Tesla supporters {as opposed to the rabid fans} up in arms is not the skepticism, but the longer-term history that Cummings has with Tesla and with Elon Musk in particular. Up until her appointment became public, the general consensus was that NHTSA would be fair and Tesla would have little problem with its determinations. However, on top of statements from the head of the NTSB, the Administration, Congress, and the media, the appointment of an actively hostile actor has raised doubts that any fair hearing can be had.
Much of the confusion and controversy stems from the sheer pace of technology going into today's automobiles. ADAS {Advanced Driver-Assistance Systems} are now a standard feature heavily marketed by most of the industry, and more features are added every day, not just parking assist and cruise control. Tesla was unique in its ability to add to and update these features over the air across its fleet; now more and more manufacturers are striving to duplicate that. This has put the regulatory agencies in a tough position, as they now have to monitor systems continuously. This is an entirely different framework and has led to some definitional conflict: just what is a defect, and what is an enhancement? Tesla recently added functionality to its cruise and lane-keeping behavior when encountering flashing lights in low-light conditions. NHTSA contends that this is a defect correction, not an enhancement, and so a recall; it would be on stronger ground if the behavior were an industry standard instead of a new feature. These types of headline-grabbing conflicts will become more common as competitive pressures mount across the industry. Tesla is rolling out stationary and low-speed pedal-confusion safeguards: is that a recall or a defect fix? Should it be an industry requirement? Who is responsible if it fails, even if it succeeds thousands of other times? That last question is more nuanced than you think, as we are just now coming into an era where your driving history is as available as your browsing history. A feature can now be tracked to show not only accidents but accidents avoided. The ability to measure the effectiveness of a feature directly will become an important tool.
But what about all those crashes, you may think. That indeed seems a valid point. But how does it look in comparison to other companies? Here we have a huge technological difference between them. Because its cars are so connected, Tesla, unlike almost every other company, knows when there is an accident and the state of the car when it occurs {if it is in cell range}. And because it knows, it is required to report whether the ADAS system was involved. Where that comes into play is highlighted by the current crash investigation covering emergency vehicles. All of those accidents had already been investigated, and because of alcohol and other factors the drivers were quickly found at fault {a drunk pilot is responsible even if the autopilot doesn't save him}. What we have here is major potential data and reporting bias. The problem is that, with the exception of Tesla, we have very little access to the data. That is beginning to change as other manufacturers race to catch up. Very soon we may start to get better data across the industry that is directly comparable, and regulations will have to accommodate that with an array of reporting criteria. That is a very good thing.
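To make the reporting-bias point concrete, here is a toy sketch with entirely invented numbers (the helper function and every figure are assumptions for illustration, not real fleet data): raw crash counts mean little without knowing both exposure (miles driven) and what fraction of crashes ever get reported.

```python
# Hypothetical illustration: raw crash counts are meaningless without
# exposure (miles driven) and reporting rate. All numbers are invented.

def crashes_per_million_miles(reported_crashes: int, report_rate: float,
                              fleet_miles: float) -> float:
    """Estimate the true crash rate, correcting for under-reporting."""
    estimated_crashes = reported_crashes / report_rate
    return estimated_crashes / (fleet_miles / 1e6)

# A connected fleet surfaces nearly every crash; a conventional fleet
# may surface only those that reach an insurer or a police report.
connected = crashes_per_million_miles(reported_crashes=900, report_rate=0.95,
                                      fleet_miles=3_000e6)
conventional = crashes_per_million_miles(reported_crashes=400, report_rate=0.40,
                                         fleet_miles=3_000e6)
print(f"connected:    {connected:.2f} crashes per million miles")
print(f"conventional: {conventional:.2f} crashes per million miles")
```

With these made-up inputs the conventional fleet "looks" twice as safe on raw counts (400 vs 900) yet comes out slightly worse once under-reporting is corrected for, which is exactly the bias the paragraph describes.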
Now is exactly the time for government to get on top of this. The switch to EVs is being accompanied by the rapid development of smart cars and a push for ever more autonomy. Our regulatory agencies are unprepared for this shift. A strong advisory committee could be a great aid, but it will need to be broad and expert on several different levels. And it will need to be trusted and seen to be fair. ADAS systems have been shown to be very effective in reducing accidents, but they have not eliminated them. It is obvious we are better off with them than without them, but the level of reliability needs to be monitored and standards set. This is even more important because, as the systems continue to get better, people will naturally come to depend on them. They will also lead to cross-brand confusion: GM's and Ford's versions will be expected to behave in much the same way, and that may or may not be true.
I've been talking about ADAS up to now because it is the basis of all the coming developments. It is clearly a system that requires driver attention; it is designed to assist the driver and enhance their ability to get safely from point to point. The next level of assistance is the enhanced cruise features being marketed under various names. Tesla has come under criticism for calling theirs Autopilot. All of these systems require the driver to actively engage the feature, and all of them require driver attention to be maintained. These systems periodically remind the driver that attention is needed and expect a response; most will disengage, and perhaps steer to the side of the road, if that response does not come.
This is the first area of conflict. The name Autopilot has gotten some attention that seems to me ridiculous. {I'm pretty sure a company with a pedigree tied to aerospace thought it was a pretty good definitional name.} Since Tesla doesn't advertise, the contention is that this is false advertising, yet all of Tesla's marketing materials specifically warn that the system is in no way autonomous and that drivers are required to pay attention. At the same time, other manufacturers are marketing their systems as hands-free. There is a very subtle shift going on in the actual marketing campaigns for ADAS. Up until now they have been marketed as safety systems that engage due to better observation [a kid following a bouncing ball] or momentary distraction [looking at the baby in the back seat]. Now, increasingly, you see commercials where the driver engages the system and immediately takes their hands off the wheel and looks away. I find this a much more dangerous kind of marketing given the known state of the art.
This gets us to the next area: driver monitoring. After that fiery Tesla crash, Consumer Reports famously produced a video of its testers rigging a bypass to the monitoring system. What is less well known is that they later produced a report bypassing every other system they tested. The NTSB has recommended vision-based driver monitoring systems, and some manufacturers do employ them. Those, however, have also been bypassed, most of them with simple pictures or dark sunglasses. Already it is common for people to put tape over their cabin-facing cameras. Requiring them will be a huge battle. We definitely need to have that debate, and safety may indeed trump privacy, but at what level? The capability will be there; should the camera monitor even when the system is not engaged, to catch falling asleep, or even drunk or erratic driving? Where does responsibility shift when safety features are bypassed? These questions become even more complex because right now we are in a transition. Lane centering seems like a good feature, but in the real world it also has to be overridden; you have to avoid things in the road or other vehicles drifting into your lane. So you have to evaluate at a system level, not a feature level, and consider cases over time. If a system shows ten accidents avoided for every one it contributes to, is that good? How does that compare with pure human control?
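That closing question can be sketched in a few lines. This is a toy model with invented figures (the function and all the numbers are assumptions, not real safety data): a system that causes one accident for every ten it avoids is only "good" relative to the human baseline over the same miles.

```python
# Toy sketch of system-level evaluation (all figures hypothetical):
# a feature that avoids 10 accidents for every 1 it contributes to
# only matters relative to the human baseline over the same miles.

def net_accident_rate(avoided: int, caused: int, baseline_accidents: int,
                      miles: float) -> float:
    """Accidents per million miles with the system engaged."""
    net = baseline_accidents - avoided + caused
    return net / (miles / 1e6)

human_rate = 500 / (1_000e6 / 1e6)            # 0.50 per million miles
system_rate = net_accident_rate(avoided=100, caused=10,
                                baseline_accidents=500, miles=1_000e6)
print(system_rate)  # 0.41 per million miles: better than the human
                    # baseline, despite the 10 accidents it caused
```

The point of the sketch is that the 10:1 ratio alone answers nothing; only the net rate against the human baseline does.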
Where the systems should be allowed to be used is another conflict that is bubbling up. Every system available to the general public has specific limitations built into it. This is where Tesla will be an outlier. Currently most systems can be used only in certain places, with the permitted areas expanding as autonomy increases. As these systems evolve just a bit further is where the major conflict will come. They are all striving for autonomy, but Tesla has a totally different approach, one that is more type-based than map-based.
Which brings us back to Missy Cummings.
The vast majority of the industry uses lidar and high-definition maps as the basis of its autonomy solution. Cummings is a strong proponent of this approach, and the company on whose board she served is based on that technology. Tesla is basing its approach on vision only. Right now the two approaches are probably equally valid. When talking about maps, we are not talking about route planning but about very highly detailed maps of each bit of roadway, including precise distances to curbs and center lines and a host of other features. The road is traveled and mapped by a driver, and other vehicles can use that map as the start of their choice matrix. The car knows a lot about what should be there and can start from that point; this allows a lot of processing to occur outside the vehicle, at the expense of centralized data. This is not as difficult as it sounds, because there is a certain sameness to broad types of travel environment, and it mainly comes into play in complex, mostly urban, settings. Tesla instead creates its vector space from its image processing continuously: what's different from one frame to the next? Instead of central data it uses preprocessors for the data reduction. Both systems, of course, have to handle the more transitory, real-time issues of other cars and pedestrians. Where they differ is in the sensors that evaluate that. Tesla has decided that sensor reduction is the more logical path, avoiding the confusion of sensor fusion. Lidar systems require other sensors, if only to tell whether a light is green or red. How the various sensors are prioritized, and their readings combined, is very important. With vision only, Tesla avoids that at the expense of having to infer more. It also makes the system cheaper, and Tesla has been attacked for that reason.
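To show what "how the various sensors are prioritized and readings combined" means in practice, here is a minimal, hypothetical sketch of one standard fusion rule, inverse-variance weighting (the function and the numbers are illustrative assumptions, not any vendor's actual pipeline). A vision-only system skips this arbitration step entirely, but must then infer quantities like depth from images alone.

```python
# Toy contrast between the two approaches (all numbers invented).
# Fusion must arbitrate between sensors; vision-only avoids that
# arbitration but must infer depth from images alone.

def fuse(readings: list[tuple[float, float]]) -> float:
    """Inverse-variance weighted fusion of (distance, variance) readings."""
    weights = [1.0 / var for _, var in readings]
    total = sum(weights)
    return sum(d * w for (d, _), w in zip(readings, weights)) / total

# Lidar says 20.0 m with low variance; a camera depth estimate says 23.0 m.
fused = fuse([(20.0, 0.1), (23.0, 1.0)])
print(round(fused, 2))  # 20.27 -- the lidar reading dominates
```

The design question the paragraph raises lives in those variance numbers: choose them wrong, or let two sensors disagree badly, and the fused answer is worse than trusting one sensor, which is the "confusion of sensor fusion" Tesla cites.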
Tesla, however, argues that sharing the environment with the other actors allows for reasoning within the crowd, which extends capabilities and, as importantly, lets the system learn more from the crowd's behavior. It is subtle [but at this level it needs to be]: if, for example, cars are slowing to look at an impressive Christmas display, the Tesla AI would probably slow too, but because others are slowing, that is the expectation, so someone seeing a developing gap may choose to turn into it where they normally wouldn't.
Without getting deep into the weeds on AI, there really isn't any reason that either approach won't work. One may give you very precise data and the other less precise data for one variable or another, but not in ways that matter much in the real world. However, the orientation has already infected the decision process. The emphasis you may hear on "trained drivers" is right out of the pre-mapping approach: you need trained drivers to start. The lidar systems, at least at this point, require trained drivers repeatedly covering the same areas, and the AI is then trained around these "ideal" drivers' responses. Tesla, outside of quality assurance, uses vetted drivers across as many conditions as possible. This is a more generalized, bigger-data approach.
That is the final part of this I want to cover. Both Missy Cummings and Jennifer Homendy are strong proponents of both trained drivers and limited testing areas. This is precisely the part of AI that Elon Musk has been critical of, not just in autonomous driving but as a general criticism of AI development and training bias. This is more than a fight over autonomous driving; it is a battle in the overall AI development wars. General or specific training, key data or general reasoning: there is a huge battle going on, and how deep the stack goes is part of it. While this is just one front in the AI wars, it is important as part of forming an overall response to AI's increasing influence and application in our day-to-day lives. What and how government should regulate is a real concern, and as we see with social media and its AI, it can have a profound effect. With autonomous driving we probably have enough law in place to do something, and that may set a framework for other areas. The big question is where the emphasis is placed: on the effect or on the development. The current rhetoric seems to indicate that the main focus is on the approach, which I see as a major flaw.

As we have seen over the last few years with vaccines, we need some regulation across the spectrum. With autonomous driving we are in stage one and two trials: the basic technology is in place and is being tested in the field. Waymo, for example, is offering driverless rides in Phoenix right now, and the same is happening with other providers worldwide, with tests that include delivery vans. That points to another difference. Autonomy is a huge industry goal, with robotaxi and delivery money at the end of it, so companies are aggressively testing "completed" driverless systems in expanding areas. That is not what Tesla is doing at all. Tesla is pursuing autonomy aggressively, but with the goal of incremental feature deployment along the way.
The city-streets function Tesla is looking to deploy next is the enhanced ADAS system applied to city streets: lane keeping, collision avoidance, and the other features you see on the highways right now.
Advisory boards are important, as we have seen with vaccines. But they should not be seen as anything other than experts examining the actual data. You really wouldn't expect a recent Moderna board member on the FDA advisory board, especially if they had been there during the development of the vaccine. Having Missy Cummings on the NHTSA board should be no different.
I am aware that Elon Musk, and by extension Tesla, provokes a strong response. So I'm going to close with a purely mercenary look at Tesla, and at why not just the technological approach Cummings advocates is in question, but the developmental approach maybe even more so.
Tesla sells insurance and is in the process of rolling it out nationwide. Other automakers are rolling out their own programs, but while they have some of the telematics required for these usage-based systems, none of them have the complete vertical integration and broad connectivity that Tesla does. The shift away from insurance based on demographics and driving records will have a profound effect. A speeding ticket on a deserted stretch of straight road won't change your rate; speeding in rush-hour traffic will. Usage-based monitoring will allow for much better risk management, and safer drivers will no longer subsidize riskier ones as much.
Tesla has gone one step further: it has tied its autonomous development directly to its insurance. It is using the training of its AI to learn the methods and habits that lead to accident reduction. That cannot really be done without a huge set of examples, and it is important that the trainers have close to the same inputs as the vehicle. Your trainers react mostly to visual cues; for lidar HD-map systems, unless the trainers are wearing AR devices, they are operating in a different environment than the one you are training for.
Insurance looks to be a huge profit center for Tesla, with margins approaching 50-plus percent instead of the industry average of around 5 percent. That creates a very strong push for accident reduction and elimination. If you believe the stock analysts looking at this, a Tesla under autonomous control has up to a 20-fold advantage in risk reduction. [Remember, as of now autonomous control still requires driver attention.] Your rate will be determined by your driving score, and Tesla excludes time under autonomous control as not yours. The score adjusts itself, and you can improve it and reduce your rate. Right now it looks like an automatic process, so as you improve, your next month's bill will be lower. If you respond to the prompts while the car is under autonomous control, your score is not impacted. What you see here is a huge, financially incentivized feedback loop toward safety while autonomy is chased.
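The feedback loop can be made concrete with a minimal sketch. To be clear, the scoring function, the linear mapping, and the rates below are all invented for illustration; this is not Tesla's actual Safety Score model, only the shape of the incentive: the premium falls every month the monitored score rises.

```python
# Hypothetical sketch of the feedback loop: the premium falls as the
# monitored driving score improves. The formula and rates are invented,
# not Tesla's actual Safety Score model.

def monthly_premium(base_rate: float, safety_score: float) -> float:
    """Scale a base premium by score: 100 is cheapest, lower is costlier."""
    # Map score 100 -> 0.5x base and score 50 -> 1.5x base, linearly,
    # with a floor so the premium never drops below half the base rate.
    multiplier = 1.5 - (safety_score - 50) / 50
    return base_rate * max(multiplier, 0.5)

for score in (70, 85, 98):
    print(score, round(monthly_premium(120.0, score), 2))
```

Under these assumptions a driver who lifts their score from 70 to 98 roughly halves their bill, which is the financially incentivized loop the paragraph describes: safer habits are rewarded immediately, month over month.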
It is also an observed behavioral-modification endeavor. Access to the FSD beta is governed by your insurance score. Those attempting to reach a high enough score to be included [98] have noted, and Tesla has observed, changes in driving habits. Hard turns, braking, and following distance are all areas that are easy to observe and change as a driver. By excluding autonomous control from the score, Tesla is shifting risk [in insurance terms] from the driver to the company. This moves safety and autonomy from a competitive and marketing advantage to a direct and measurable profit center dependent on accident reduction.
I know I come off as a bit of a fan. In this case I probably am. I also happen to like a lot of the music Eric Clapton made and have admired some of Aaron Rodgers's play. What is a fan?