Just one day after Tesla CEO Elon Musk posted on Twitter about how Teslas were many times less likely to be involved in a car accident because of driver-assist technology, Autopilot is back in the news. And for all the wrong reasons.
Two men in a 2019 Model S crashed into a tree and died in the subsequent four-hour fire in Spring, Texas. Police say that there was no one in the driver’s seat and that the Model S owner had taken his friend for a ride to show him how the car drove itself. The implication is that one of them activated Autopilot and slid into another seat.
Elon Musk has already denied this, saying on Twitter that Tesla logs show Autopilot was not enabled at the moment of the crash. It couldn’t have been, because the car was in a residential area without lane markings and because Tesla has safeguards preventing the driver from moving into another seat while the vehicle is in motion.
That has not prevented a deluge of criticism, including calls to discontinue Autopilot (and its younger but more advanced sibling, Full Self-Driving, or FSD) until the technology has fully matured. Automotive engineer Sandy Munro believes that move would be nothing short of "criminal."
The technology is not perfect, he says on a call with TMZ Live (video is available at the bottom of the page). The technology is not fully matured, either, which is why it’s labeled driver-assist tech. Munro doesn’t say it out loud, but the indication is that if there was Autopilot misuse in this particular crash, it’s still human error and, as such, not the fault of Autopilot.
Munro says he hates the term “human guinea pig” occasionally used in reference to driver-assist tech. But you have to be practical about it: anything that makes driving a bit safer is good, even if not perfect. Cars with driver-assist tech are eight times less likely to get into a crash if the tech is used right—and that’s what the industry and legislators should focus on.
Your research as a private individual is better than professionals @WSJ! Data logs recovered so far show Autopilot was not enabled & this car did not purchase FSD. Moreover, standard Autopilot would require lane lines to turn on, which this street did not have.

— Elon Musk (@elonmusk) April 19, 2021