Will Tesla demonstrate human-level safety in its autonomous vehicles in the year 2026?
I believe the evidence suggests no.
The “march of the 9s” is the progression in the number of 9s in a safety rate.
For autonomous vehicles (AVs):
1 accident in 100 miles means 99 accident-free miles out of 100, or two 9s: 99% safe
1 accident in 1,000 miles means 999 accident-free miles out of 1,000, or three 9s: 99.9% safe
…
1 accident in 1,000,000 (one million) miles means six 9s: 99.9999% safe
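For concreteness, here is that arithmetic as a minimal Python sketch (the helper name is mine, not a standard one): the number of 9s is simply the base-10 logarithm of miles per accident.

```python
import math

def nines_of_safety(miles_per_accident: float) -> float:
    # Failure rate is 1/miles_per_accident; the number of leading 9s in the
    # safety rate is -log10 of that failure rate.
    return -math.log10(1.0 / miles_per_accident)

for miles in (100, 1_000, 1_000_000):
    print(f"1 accident per {miles:,} miles -> {nines_of_safety(miles):.0f} nines")
# 1 accident per 100 miles -> 2 nines
# 1 accident per 1,000 miles -> 3 nines
# 1 accident per 1,000,000 miles -> 6 nines
```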
Each additional 9 is an order of magnitude increase in safety (increase in miles per accident).
Each additional 9 requires more than an order of magnitude of additional miles to estimate whether that level of safety has been achieved (at a minimum, roughly 20x more miles for each 10x increase in miles per accident, to get an adequate sample of events).
As a ballpark number (the exact figure will depend on details like urban vs. rural driving, metro area, weather, vehicle type, etc.), to achieve higher-than-human safety, an AV should reach about six 9s.
So to estimate safety at six 9s, a true autonomous vehicle probably needs at least 20 million miles, accrued with no human supervision in a representative collection of real-world driving scenarios (the trips a real-world driver needs to complete every day, like taxi rides or commutes to work), just to get started measuring.
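Here is a rough sketch of that sample-size logic, assuming accidents arrive as a Poisson process and reading the 20x figure above as a rule of thumb of roughly 20 observed events:

```python
import math

def miles_to_measure(miles_per_accident: float, events_needed: int = 20) -> float:
    # Miles needed to expect `events_needed` accidents at the hypothesized rate.
    return events_needed * miles_per_accident

target = 1_000_000  # six 9s: one accident per million miles
print(f"{miles_to_measure(target):,.0f} miles needed")  # 20,000,000 miles needed

# With ~20 observed events, the relative uncertainty on the estimated rate is
# roughly 2/sqrt(20) at 95% confidence -- coarse, but enough to separate one
# order of magnitude of safety from the next.
print(f"~{2 / math.sqrt(20):.0%} relative uncertainty")  # ~45%
```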
Today Tesla is not accruing miles in a representative collection of real-world driving scenarios with autonomous vehicles (at most, selective small demos, as far as we know).
Instead, they have many human-supervised miles, mostly from consumer-owned vehicles not following any standardized measurement protocol.
Tesla and others believe that an AI model can increase the number of 9s in their AI driver if it is trained on more data at a bigger model size. The bigger model, which runs on computer hardware in the vehicle to make the real-time driving decisions, requires higher-capacity data-processing hardware.
For AI, there is good reason to believe (based on data from AI studies) that for a given model architecture, the error rate can be decreased by training a bigger model on more data with ever-increasing data-processing hardware capacity (this relationship has been called “scaling laws”).
However, a Waymo study specific to AI driving shows that the additional data processing needed to lower the error rate grows faster than the error rate declines.
An order of magnitude decrease in error needs more than an order of magnitude increase in computing power.
Said a third way: the compute required grows faster and faster (accelerates) with each additional improvement in error rate as a model marches through additional 9s of safety.
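A toy power-law model makes the acceleration concrete. Suppose error falls as compute raised to the power -alpha; the alpha values below are illustrative assumptions, not Waymo's measured exponents:

```python
# Illustrative power-law scaling: error ~ k * C^(-alpha) for compute C,
# with exponent alpha < 1. Alpha values are made up for illustration.
def compute_multiplier_per_nine(alpha: float) -> float:
    # To cut error by 10x, compute must grow by 10^(1/alpha).
    return 10 ** (1 / alpha)

for alpha in (1.0, 0.5, 0.33):
    print(f"alpha={alpha}: each extra 9 costs {compute_multiplier_per_nine(alpha):,.0f}x compute")
# alpha=1.0: each extra 9 costs 10x compute
# alpha=0.5: each extra 9 costs 100x compute
# alpha=0.33: each extra 9 costs 1,072x compute
```

The qualitative point survives any particular alpha: as long as alpha is below 1, each additional 9 costs more than 10x the compute of the last one.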
How about Tesla?
How far away are they from demonstrating human-level safety (over at least 10 million miles, let's say, as a bare-minimum number) in an AV across a representative collection of real-world driving scenarios?
They are not demonstrating this kind of autonomy today.
Assume that going from version 13 to version 14 of their latest AI driving model, a step that took about one year, reduced error by an order of magnitude.
If Tesla needs at least one more order of magnitude of improvement in error rate (they most likely need more than that, I think, but no one knows), and if what we know about machine-learning scaling laws applies, then they are at least one year away from demonstrating human-level autonomy in their AVs.
And even once they have the autonomous AI model, they still need time to accrue tens of millions of miles to measure and demonstrate human-level safety.
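Putting the pieces together as a back-of-envelope calculation, where every input is an assumption chosen for illustration rather than a known Tesla figure:

```python
# Back-of-envelope timeline. Every input below is an assumption for
# illustration, not a known Tesla figure.
orders_of_magnitude_needed = 1       # assumed: at least one more 10x error reduction
years_per_order_of_magnitude = 1.0   # assumed: the v13 -> v14 pace continues
measurement_miles = 20_000_000       # from the six-9s sample-size estimate above
fleet_miles_per_year = 50_000_000    # hypothetical unsupervised-fleet mileage rate

training_years = orders_of_magnitude_needed * years_per_order_of_magnitude
measuring_years = measurement_miles / fleet_miles_per_year
print(f"~{training_years + measuring_years:.1f} years, at a minimum")  # ~1.4 years
```

The inputs can be argued with, but even these fairly generous choices put the total comfortably over a year.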
Based on the above, Tesla seems unlikely to demonstrate human-level autonomy this year, 2026.
Required caveat: many things are possible, and the above is built on estimates and approximations, but Tesla demonstrating human-level autonomy in 2026 would surprise me.