The data used in the study come from more than 1 billion miles driven by Tesla owners since Autopilot's launch in 2015, roughly 35% of which were driven with Autopilot assistance. Within those miles, 18,928 Autopilot disengagements were annotated, each marking an instance when a driver took over during a challenging driving situation. Overall, the numbers demonstrate a high rate of driver vigilance.
Tesla has provided a unique opportunity to form a baseline for objective, representative analysis of real-world use of Autopilot, as stated in the study:
“Due to its scale of deployment and individual utilization, [Tesla’s] Autopilot serves as perhaps the currently best available opportunity to study and understand human interaction with AI assisted vehicles ‘in the wild’…naturalistic driving research can now begin investigating and identify both promising and concerning trends in drivers’ behavioral patterns in the context of Autopilot.”
As automation has expanded over the last several decades, human behavior research has documented a pattern of overtrust in reliable automated systems. In driving, where property damage, injury, or death are possible consequences, the concern about transitioning to semi-autonomous systems that rely on driver input to function safely is obviously significant. The MIT study's results are therefore promising, initially suggesting that drivers treat driving automation more carefully than users have treated automation in other areas.
“The two main results of this work are that (1) drivers elect to use Autopilot for a significant percent of their driven miles and (2) drivers do not appear to over-trust the system to a degree that results in significant functional vigilance degradation in their supervisory role of system operation,” the MIT scientists concluded.
The study further notes that more research will be needed as more data becomes available and as drivers grow more familiar with Autopilot's features.
Tesla has received a fair amount of criticism and attention whenever an accident involves one of its cars, especially if Autopilot was engaged around the time of the event. The company, however, consistently maintains that the feature is not yet fully autonomous and that drivers must both pay attention and intervene when necessary while Autopilot is operating. The system also issues audio and visual alerts when hands are not detected on the steering wheel; such warnings were found to have been ignored in some prior crashes, feeding the very concerns the MIT study sought to address.
Since Q3 2018, Tesla has released quarterly Vehicle Safety Reports with updated figures for incidents occurring both with Autopilot engaged and with the driver-assist feature deactivated. For Q3 2018, the company reported one accident or crash-like event for every 3.34 million miles driven with Autopilot active, versus one for every 1.92 million miles driven with Autopilot disengaged. In Q4 2018, those numbers dipped slightly, possibly due to winter conditions, to one event per 2.91 million miles with Autopilot engaged and one per 1.58 million miles without.
By comparison, the National Highway Traffic Safety Administration's (NHTSA) most recent data at the time showed a crash event every 436,000 miles, a figure covering all vehicles in the US whether or not they are equipped with driver-assist software. Note, too, that Tesla's numbers include both actual accidents and "near-misses," whereas the NHTSA's figures count only accidents that actually occurred.
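To make the reported figures easier to compare, the miles-per-crash numbers above can be converted into crashes per million miles. This is a minimal illustrative sketch using only the figures quoted in this article; keep in mind the caveat that Tesla's counts include near-misses while the NHTSA average does not, so the comparison is indicative rather than apples-to-apples.

```python
# Miles-per-crash figures as reported in the article
# (Tesla Q3/Q4 2018 Vehicle Safety Reports and the NHTSA average cited at the time).
MILES_PER_CRASH = {
    "Q3 2018, Autopilot engaged": 3.34e6,
    "Q3 2018, Autopilot off": 1.92e6,
    "Q4 2018, Autopilot engaged": 2.91e6,
    "Q4 2018, Autopilot off": 1.58e6,
    "NHTSA average (all US vehicles)": 436_000,
}

nhtsa = MILES_PER_CRASH["NHTSA average (all US vehicles)"]
for label, miles in MILES_PER_CRASH.items():
    rate = 1e6 / miles  # crashes per million miles
    # Ratio of this miles-per-crash figure to the NHTSA baseline.
    print(f"{label}: {rate:.2f} crashes per million miles "
          f"({miles / nhtsa:.1f}x the NHTSA miles-per-crash figure)")
```

Run this way, the Q3 2018 Autopilot-engaged figure works out to roughly 0.3 crash events per million miles, against about 2.3 per million miles for the NHTSA average.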
Along with touting the correlation between Autopilot engagement and lower accident rates, Tesla also maintains its claim to producing the safest cars in the world, based on NHTSA test results.