Tesla Austin “robotaxi”: 3 accidents in 7k miles

AI-driven (robot) cars on the road affect us all, and they matter macroeconomically: this new technology is being developed by major companies and startups alike and safety-tested on public roads.

For all of you health- and medically-aware people, this is like a clinical trial for an experimental drug except

  • we are all forced to participate when we share the road with AI drivers
  • the experiments are not controlled, replicated, and randomized like a proper clinical trial (although more conscientious companies can take steps toward the rigor of a proper experiment)

Welcome to the experiment!!!

As best I understand what has been reported:

In its initial 11-vehicle taxi fleet in Austin, Texas (I believe 11 distinct plates were observed early on), Tesla has reported 3 accidents to NHTSA for June 22 through July 15.

The taxis are human-supervised while running Tesla’s FSD (“Full Self-Driving”) software, meaning a Tesla employee in the vehicle is legally responsible for the driving decisions and outcomes: they continuously monitor the software and manually intervene to control the vehicle if needed.

(I believe there is a 2-month lag from accident to public disclosure for more minor accidents: Tesla has 1 month to report, and there is another 1-month lag until the data are made public. More serious accidents must be reported within 5 days of the event.)

So as of September 15, we are seeing disclosed minor accidents through July 15, and more serious accidents through about mid-August.
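
To make that lag arithmetic concrete, here is a minimal sketch of how the disclosure windows line up (my own illustration; the 30-day “month” and the assumption that the publication lag also applies to the serious-accident reports are approximations of the rules described above, not official NHTSA definitions):

```python
# Minimal sketch of the disclosure-lag arithmetic described above.
from datetime import date, timedelta

REPORT_LAG_MINOR = timedelta(days=30)    # Tesla has ~1 month to report minor accidents
REPORT_LAG_SERIOUS = timedelta(days=5)   # more serious accidents reported within 5 days
PUBLISH_LAG = timedelta(days=30)         # ~1 more month until the data are made public

def latest_visible(today, serious=False):
    """Most recent accident date that could already be publicly disclosed."""
    report_lag = REPORT_LAG_SERIOUS if serious else REPORT_LAG_MINOR
    return today - (report_lag + PUBLISH_LAG)

today = date(2025, 9, 15)
print(latest_visible(today))                # 2025-07-17 -> minor accidents through ~July 15
print(latest_visible(today, serious=True))  # 2025-08-11 -> serious accidents through ~mid-August
```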

As of July 15, Tesla was accruing about 7k miles per month in total across this small fleet (as mentioned on the Q2 earnings call).

While this is a tiny sample (3 events, 7k miles), it is the best publicly disclosed data we have on the performance of Tesla’s AI driver (Tesla is very opaque about its AI driving performance and even redacts what might otherwise be helpful information in its NHTSA reporting).

So the best estimate we have is that the Tesla FSD accident rate is about 3 accidents (1 with a minor injury) per 7k miles, or about 429 accidents per million miles (3 × 1,000,000 / 7,000 ≈ 3 × 143), with fault not attributed to Tesla versus other parties.

So a human driving 14k miles per year at this accident rate would have about 6 accidents per year, 2 of which would involve a minor injury.
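
A quick worked version of that arithmetic (my own sketch, simply restating the numbers above in code):

```python
# Convert 3 accidents in ~7,000 supervised miles into accidents per million
# miles, and into expected accidents per year for a 14,000-mile/year driver.
accidents = 3
miles = 7_000

rate_per_million = accidents / miles * 1_000_000
print(round(rate_per_million))                # ~429 accidents per million miles

annual_miles = 14_000
print(rate_per_million * annual_miles / 1e6)  # ~6 accidents per year at that rate
```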

For reference, Waymo has a ballpark rate of about 1 accident per million miles (fault not attributed to Waymo versus other parties) within the cities and geofences it operates in, per a May 2025 analysis. For the range of driving scenarios Waymo considered (accounting for the effects of specific routes in specific cities), Waymo’s rate was about 80% better (fewer accidents per mile) than a human driving similar scenarios. This is an attempt to get closer to a controlled experiment by statistically accounting for other variables, such as geographic route.

I believe Waymo’s threshold for counting an accident is more severe than Tesla’s (Waymo may not include minor accidents), so the accident sets may not be directly comparable between the two companies.

Caveats I can think of:

  • Tesla’s taxis are human-supervised, which would make the observed rate underestimate the true Tesla AI driver accident rate (one would expect the human to sometimes intervene and prevent an accident, pushing the observed rate below the unsupervised AI rate).
  • The definition of what counts as an accident may differ between Tesla and Waymo.
  • The driving scenarios (cities, routes) may differ between Tesla and Waymo (e.g., one could be driving inherently more or less safe routes).

Tesla’s 7k miles of data is only the very beginning of the data needed to assess safety. There are now an estimated 16-17 taxis in Austin (a recent ~50% increase over the initial 11) and a similar number in San Francisco, about 3 months into the “robotaxi” rollout. At this tiny scale (assuming these taxis are their only source of safety data; maybe they have another source?), it will take Tesla a very long time to measure its safety. For reference, Waymo has over 100 million miles of true autonomous (unsupervised) driving, with 50+ million miles disclosed in a public analysis.
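
As a back-of-envelope illustration of why this scale makes safety measurement slow (my own sketch; the target event count and the assumed “true” rates are my assumptions, and the ~7k miles/month accrual is the Q2-call figure, which will presumably grow with the fleet):

```python
# Roughly how many miles, and how many months at ~7k miles/month, it would
# take to accumulate a given number of accidents at an assumed true rate.
def miles_and_months_to_observe(target_events, rate_per_million_miles, miles_per_month=7_000):
    expected_miles = target_events / rate_per_million_miles * 1e6
    return expected_miles, expected_miles / miles_per_month

# If the true rate were Waymo-ballpark (~1 per million miles), ~30 events
# would take ~30 million miles, i.e. thousands of months at this accrual.
print(miles_and_months_to_observe(30, 1.0))    # (30,000,000 miles, ~4,286 months)

# Even at the ~429 per million miles estimated above, 30 events still need
# ~70,000 miles, i.e. roughly 10 months at the current accrual.
print(miles_and_months_to_observe(30, 429.0))  # (~69,930 miles, ~10 months)
```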

Tesla article (https://electrek.co/2025/09/17/tesla-hide-3-robotaxi-accidents/)
Waymo data (https://www.tandfonline.com/doi/full/10.1080/15389588.2025.2499887)

Some more on estimating the accident rate is posted here:

8 Likes

That is a really silly kind of projection to make, and you know it is inconsistent with what we know about all other Teslas.

1 Like

Can you please explain?

The above is the best available data on FSD safety I know of.

What else do we know? What is the inconsistency?

3 Likes

Taking the data from 7500 miles and projecting to a million miles is only the first completely nuts thing.

1 Like

We don’t know anything about “all other Teslas.” We have no information on the number of accidents in parallel situations, only highly curated, non-transparent announcements from the company, which give no information on local vs. highway driving, interventions, speed, fog, rain, snow, driver age, or anything else.

You might as well compare crash rates of model airplanes with 747s.

3 Likes

First, it seems you are commenting on the small sample size (7k miles).

I agree the sample size is small. I called it tiny and said many more miles are needed, and I would absolutely emphasize caution given the tiny sample.

I’ll add that what matters most for sample size is the number of events observed (3 accidents), not so much the miles per se, although of course they are intimately related through the rate (accidents per mile). When we observe 10, 20, or 50 events, we will have much more confidence in an estimate of the AI accident rate, regardless of the number of miles needed to observe those events (Waymo’s May analysis covered 68 events over 56.7 million miles).
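
To make “events drive the confidence” concrete, here is a minimal sketch (my own, not from any of the cited analyses) computing exact Poisson (Garwood) confidence intervals for the two rates discussed in this thread; note how wide the interval is for 3 events versus 68:

```python
# Exact Poisson (Garwood) confidence interval for an accident rate,
# expressed per million miles. The uncertainty is driven by the event count.
from scipy.stats import chi2

def poisson_rate_ci(events, miles, conf=0.95):
    alpha = 1 - conf
    lo = 0.0 if events == 0 else chi2.ppf(alpha / 2, 2 * events) / 2
    hi = chi2.ppf(1 - alpha / 2, 2 * (events + 1)) / 2
    per_million = 1e6 / miles
    return events * per_million, lo * per_million, hi * per_million

# Tesla Austin, per this thread: 3 accidents in ~7,000 supervised miles
print(poisson_rate_ci(3, 7_000))        # ~ (429, 88, 1250) per million miles
# Waymo May 2025 analysis: 68 events over 56.7 million miles
print(poisson_rate_ci(68, 56_700_000))  # ~ (1.20, 0.93, 1.52) per million miles
```

Even with the huge uncertainty around only 3 events, the lower end of that interval sits well above the Waymo ballpark, subject to all the comparability caveats already noted.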

The purpose of calculating accidents per million miles is not to “project” or “forecast” the number of accidents after 1 million miles, but to put the rate in the same units as the best reference/benchmark data that we have: Waymo’s.

I’ll further add that most of us would conclude that a new driver who has 3 minor accidents in 7k miles (1 of which resulted in a minor injury and a vehicle tow), while under the supervision and control of a licensed driver, might warrant some caution and further scrutiny.

I don’t consider the above scrutiny on the safety of robotic vehicles on public roads to be nuts.

What is the second completely nuts thing?

1 Like

The issue is larger than just the 3 accidents (how much larger, we have no idea). The real issue is the number of times the safety driver has had to intervene in a situation that would otherwise have been dangerous or led to an accident.

Until they share the safety drivers’ intervention data, we will never really know how ready they are, until they are.

3 Likes

Yes.

The 3-accident count is a lower bound on the accident count one would have observed had the AI driver been unsupervised.

That’s why I wrote as one caveat:

2 Likes

We don’t know what we wish we knew, but we don’t know nothing, and what we know is not consistent with three crashes in 7K miles.

On top of which, what if the accidents are part of a shakedown of something new … which might not even be there any more.

Point being that extrapolating on data like that is foolish (small F).

No! What matters is a meaningful data set. One could do something … which had a flaw! … and have an accident in the first mile … would you then extrapolate that to a million accidents in a million miles … especially if the flaw is long since fixed?

Can you please share what you (we?) know with us on our discussion board, here?

Specifically,
what exactly do we know about Tesla AI driving that is not consistent with the 3 accidents in 7k miles that was just reported to NHTSA?

I’d like to add to our collective knowledge.

2 Likes

Don’t you think that if Teslas were having accidents at the rate of 1 every 2K+ miles, we would know about it?

Even without precise, carefully collected, large-volume data sets, we know that this data point is radically at variance with all other experience. It doesn’t take a precise statistical test to tell us that.

Sure it does, because what you’ve been fed is a false picture: a curated set of data you have no earthly clue about beyond a top-line number, which may or may not be entirely false. Even if it is accurate, it’s likely to be misleading, since it bears no relation to actual “self driving”: there is a human behind the wheel deciding when to disengage, when not to engage in the first place (rain, snow, fog), when not to use it in certain parts of a city, and how to handle any other unusual driving situation.

Again, it might all be in there, but YOU DON’T KNOW and cannot in good faith pretend you do.

Actual “self driving” is different from something labeled “self driving” that was not, even if you think it was somehow “pretty close”.

3 Likes

Are railroad crossings unusual?

There is video from this July in which a Tesla robotaxi test rider said his vehicle failed to see a railroad crossing in Austin.

Joe Tegtmeyer, the robotaxi rider and a Tesla booster, said in a video on X that his vehicle was stopped in an area with a traffic light and a railroad crossing. Then, he said, it began to move at precisely the wrong moment.

“The lights came on for the train to come by, and the arms started coming down, and the robotaxi did not see that,” he said. He said a Tesla employee sitting in the front passenger seat overseeing the robotaxi had to stop the vehicle until the train had passed.

1 Like

That’s not what he’s saying. He’s not saying that Teslas driven by humans are having accidents at that rate. He’s pointing out that Teslas being driven by FSD are having accidents at that rate.

The safety data that Tesla releases all involve Autopilot or FSD used as an ADAS, an aid to a human driver. But they’ve released no data on how well FSD performs when they’re trying to have FSD drive the car alone. And since the relevant safety rate for a robotaxi is how well the car can operate without a human driver, that’s the really important question for assessing Tesla’s progress toward a robotaxi: whether FSD is ready to start driving on its own. We have only a tiny, tiny sample. But the fact that they’ve had accidents at this rate, even with Tesla employees in the car acting as safety drivers, certainly isn’t an optimistic sign.

4 Likes

I give up! Believe this non-data if you want. Decide that 7K miles is a reasonable sample size. I’ll have to think about what I can sell you next!

Oh, I don’t think that at all. I do think it’s interesting that they’ve had so many accidents in such a small sample, and that it is so different from the curated data they have released in the past (even if from a quite different, though perhaps similar, set). This time, however, they are required to report publicly, which has not been the case before. Perhaps that’s why the numbers are at such variance?

It’s either that, or they’ve been wildly unlucky in their “test” phase.
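
To put a rough number on “wildly unlucky,” here is a purely illustrative sketch (my own, assuming a hypothetical true rate of 1 accident per million miles and ignoring all the definitional and comparability caveats discussed above):

```python
# Probability of seeing 3 or more accidents in ~7,000 miles if the true rate
# were a hypothetical 1 accident per million miles (Waymo-ballpark assumption).
from scipy.stats import poisson

miles = 7_000
assumed_rate_per_million = 1.0                              # assumption, not a measured Tesla figure
expected_events = assumed_rate_per_million * miles / 1e6    # 0.007 expected accidents

print(poisson.sf(2, expected_events))  # P(X >= 3) ~ 6e-8 under this assumption
```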

1 Like

No one here decided that.

The initial post and other posts make that clear, but it’s the best info we have.

No one to date has provided better data on FSD.

If you have better or additional data, again, please share.

2 Likes

Extrapolating 7K to 1M miles implies it. Frankly, 7K miles is barely worth discussing.

I addressed this already:

We’ll have to disagree.

For a new technology with real public safety and financial consequences, and sparse data, 3 events in 7k miles are the best data we have.

In fact, I bet if we scoured the internet, we could find a few critical interventions for the Austin robotaxi that would increase our count above 3.

Like the rail crossing event above (FSD is known to mess up rail crossings, I’ve read).

Now my count is 4 per 7k.

If only someone had many millions of miles of data and a budget to pay for some analysis?