Waymo self-driving cars -- progress

“Remember the Alamo.”

And it shows two collisions involving Waymo vehicles during their first month operating in Austin (Oct 2024).

But, LiDAR?

Anyway, a software update was rolled out in Dec 2024 to all Waymo vehicles after a couple dozen collisions with chains, gates, or barriers.

But, LiDAR?

Look, I agree Waymo has a really good safety record. But LiDAR by itself doesn’t prevent collisions, and what we’re seeing is that software is the real differentiator. So both Waymo and Tesla will continue to improve.

1 Like

It’s important to compare apples-to-apples, as much as feasible.

Was Waymo using an onboard safety monitor/driver when these incidents occurred (and, if we are being really strict, as in “clinical trials” strict, because lives depend on it, operating within Tesla’s geofenced area)?

If no, then these Waymo incidents are not comparable to Tesla’s.
If yes, then we need to control for miles driven and look at incidents per mile (see the sketch below).

Why?
Because, in Austin, Tesla uses an onboard safety monitor (who may take action to avoid accidents) and operates within a geofenced area.
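
A minimal sketch of what I mean by controlling for miles driven; every number below is invented for illustration:

```python
# Hypothetical fleets: raw incident counts mislead unless normalized
# by exposure (miles driven).

def incidents_per_million_miles(incidents: int, miles: float) -> float:
    return incidents / miles * 1_000_000

fleet_a = incidents_per_million_miles(incidents=24, miles=4_000_000)
fleet_b = incidents_per_million_miles(incidents=3, miles=250_000)
print(fleet_a, fleet_b)  # 6.0 vs 12.0 -- the smaller raw count is the worse rate
```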

But this just gave me an idea of a measurement we can do.

We can measure the amount of time that elapses until Tesla can abandon onboard safety monitors in its Austin robotaxi fleet.

We can then compare “time from onboard safety monitoring to autonomy in Austin” for Waymo and Tesla.

Now we’re talking.

1 Like

This makes me think the only (main?) reason Tesla pushed into Austin for robotaxi right now is that Iron Man saw Waymos cruising around town last year and got antsy.

Or am I being silly?

Or has this theory already been a discussion point?

I think you are spot on. Robotaxis were presented as a core portion of the business in Master Plan, Part Deux in 2016 and have been announced as coming Real Soon Now™ ever since.

It can’t have been lost on Elon that other companies had robotaxis but Tesla didn’t.

1 Like

No one is expecting the software to be perfect, but it might be worth noting that an after-the-fact analysis of 38 of Waymo’s first accidents showed that 34 were the fault of a human: the cars were rear-ended, people cut across lanes at traffic signals and hit them, and some involved passengers opening doors into traffic lanes as they exited.

That still leaves four in which they were found at fault, and yes, the software was updated to stop the curious tendency to hit chain-link fences and gates.

But any “Robotaxi” or (true) “Full Self Driving” corporation should be prepared for the media to cover such accidents, simply because they are “new” (hence the word: news).

2 Likes

Thank you for bringing actual data to this thread.

Much better than anecdotes about a few selected occurrences, and better than us speculating.

The success/failure of AVs will rise/fall on benchmarked safety, performance, and economic data.

Now we just need some Tesla data. They should have some lying around, since they have collected so much. You would think they could spare just a little.

Dear Tesla:
Can you please make public incident data on your AI driver so that we can evaluate important metrics like the safety of your vehicles against benchmarks in an apples-to-apples comparison?
Thank you,
Fellow drivers who share the road.

2 Likes

Which is my main point: it’s not, as was discussed quite a bit above, whether the car has LiDAR or not, but how good the software is. Software has been and will remain the differentiating factor among autonomy efforts for quite some time.

Note that the Austin Dashboard also shows “Traffic Blocking” and “Failure to respond to police” incidents. My point here is that we should (but, as a general population, won’t) give Tesla the same pass for growing pains that we’ve given Waymo. Which I think Waymo deserves, btw.

1 Like

Other than poking fun at Iron Man, I don’t think anyone here is giving any company preferential treatment with respect to measuring product and business performance.

I also think everyone here very much appreciates the technical challenges that these companies are working on.

Just above, we see way mo’ incident data available for Waymo than for Tesla.

If you have equivalent data for Tesla, then please provide it so we can treat all companies fairly and make apples-to-apples comparisons.

In another thread, there is a comparison of recent, open predictions made by Tesla and Waymo regarding the ramp-up of robotaxis.

If you have predictions from these or other companies, then please provide them so we can evaluate predictions across multiple companies.

Other open questions for Tesla:

What exactly is Tesla doing with LiDAR?

Let’s gather the data and make the fair comparisons.

It doesn’t really support that point, though.

The theory of AV that Waymo (and nearly all other companies in the space) is pursuing is that having multiple hardware sensors enables one to get “more driving” with “less advanced software.” I put those terms in quotes because they’re not really definable. But the general sense is that if you have LIDAR, you don’t have to do as much complicated processing of the vision inputs, because the LIDAR will give you certain types of information that are very hard to tease out of the vision input alone (especially in poor-visibility situations).

So it’s not just how “good” the software is. Under this approach, if the hardware has more sensors, that also improves driving. Tesla doesn’t think this approach is either correct or scalable, but it’s the approach nearly every other company is following.
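
To make that concrete, here’s a toy sketch, with made-up numbers, of why LIDAR takes some work off the software’s plate: a LIDAR return already carries range, while a camera pixel does not.

```python
import numpy as np

# A LIDAR return is already a 3-D point, so "how far away is that object?"
# is a direct measurement, not an inference. (Numbers invented.)
lidar_points = np.array([
    [12.3, -0.8, 0.4],   # x, y, z in meters, relative to the sensor
    [35.1,  2.2, 0.9],
])
print(np.linalg.norm(lidar_points, axis=1).round(2))  # [12.33 35.18]

# Vision-only: depth is absent from the raw data and must be inferred by
# a trained model, e.g. depth = depth_net(image) -- the "more advanced
# software" side of the tradeoff.
```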

That doesn’t mean that LIDAR automatically makes a car able to self-drive, of course - it might be a necessary but not sufficient condition. It may be that no technology exists today, or will for many years, that can allow cars to hit Level 5, whether LIDAR-based or vision-only (which is my personal opinion).

The Waymo theory is that it will be easier to get the cost of a LIDAR-equipped vehicle down through economies of scale than it will be to improve software to the point where it can process vision-only input well enough to not need overly expensive “outside the car” support. We have no way of knowing who is correct at this point. One piece of data, though, is that a few Chinese models (like the new Xiaomi YU7) are starting to include LIDAR as standard in their ADAS suites…which is a pretty strong signal that those things might start to get cheaper fast.

3 Likes

That could very well be true. It could also be true that the software effort to correctly perform sensor fusion between the two (or more) sensor types is the more difficult task. This is not a bunch of traditional code with a lot of if-then-else logic, where one can (very logically) see that one sensor type overrides the other when needed. It is a massive model with layers and layers of weights and biases trained on a large dataset.
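
A minimal sketch of what that kind of learned fusion looks like, in PyTorch with invented dimensions (this is a toy, not anyone’s actual architecture):

```python
import torch
import torch.nn as nn

class ToyFusionNet(nn.Module):
    """Toy late-fusion model: no hand-written if-then-else overrides,
    just learned weights deciding how the modalities blend."""
    def __init__(self, cam_dim: int = 256, lidar_dim: int = 128, out_dim: int = 10):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(cam_dim + lidar_dim, 512),
            nn.ReLU(),
            nn.Linear(512, out_dim),  # e.g. object or trajectory scores
        )

    def forward(self, cam_feat: torch.Tensor, lidar_feat: torch.Tensor) -> torch.Tensor:
        # "Fusion" is concatenation followed by learned layers; which sensor
        # dominates in which situation is implicit in the trained weights.
        return self.head(torch.cat([cam_feat, lidar_feat], dim=-1))

net = ToyFusionNet()
out = net(torch.randn(1, 256), torch.randn(1, 128))  # one fused prediction
```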

Mike

Maybe - but is that actually the case? The experience of Waymo and all the other car companies that are using LIDAR seems to argue against that. Waymo has achieved a modest degree of Level 4 performance using a LIDAR/radar/vision combo, and no one seems to think that their software capabilities are far beyond Tesla’s - which is hard to explain if using multiple sensory inputs made the software especially challenging. And as I noted above, there are quite a few Chinese car firms selling cars that include LIDAR as part of their ADAS systems - including some modestly priced ones. So it doesn’t seem like all that heavy a burden to mix it in.

Right.

Same observation from far upthread.

Maybe you pay for it one way or the other?

High-res vision data (in space and time) plus crazy-detailed, advanced feature engineering and AI modeling.

or

Multi-sensor data that must be synchronized and then modeled together in AI (sketched below).

No free lunch?
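
One entirely hypothetical example of the synchronization cost on the multi-sensor side: pairing two sensor streams by timestamp before fusing them.

```python
import numpy as np

# Hypothetical rates: camera at ~30 Hz, LIDAR sweeps at ~10 Hz. Before
# modeling the streams together, pair each LIDAR sweep with the camera
# frame nearest in time.
cam_ts = np.arange(0.0, 1.0, 1 / 30)     # camera timestamps, seconds
lidar_ts = np.arange(0.0, 1.0, 1 / 10)   # lidar timestamps, seconds

idx = np.clip(np.searchsorted(cam_ts, lidar_ts), 1, len(cam_ts) - 1)
left_closer = (lidar_ts - cam_ts[idx - 1]) < (cam_ts[idx] - lidar_ts)
nearest = np.where(left_closer, idx - 1, idx)  # nearest camera frame per sweep

for lt, ct in zip(lidar_ts, cam_ts[nearest]):
    print(f"lidar sweep at {lt:.2f}s -> camera frame at {ct:.2f}s")
```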

If I had to bet today, I would go with the multi-sensor approach, reasoning that it brings more different kinds of data together (it expands the feature dimensions in informative ways that are difficult or impossible with a single sensor type) and hence carries more discriminatory information than a single sensor.

And, after you have arrived at some solution to the AI driving problem, you can then optimize: simplify the data and the model by removing data that are less important (or unimportant) and focusing on the aspects that matter more. This could result in simpler data processing and/or the removal of sensors.

Tesla is still playing with LiDAR, so they haven’t removed it from their AI modeling yet, as best I can tell. They are correlating it with their camera data. But maybe they have advanced in this optimization step.
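
If “correlating” here means something like using LiDAR range as a reference to score camera depth estimates, a purely illustrative version might look like the sketch below. I’m only guessing at the shape of the exercise, and every number is made up.

```python
import numpy as np

# LiDAR range as reference truth, camera depth as the model's prediction.
lidar_depth = np.array([10.2, 25.7, 40.1])    # meters, measured
camera_depth = np.array([10.8, 24.9, 43.0])   # meters, estimated from images

abs_rel = np.abs(camera_depth - lidar_depth) / lidar_depth
print(abs_rel.round(3))         # per-point relative error: [0.059 0.031 0.072]
print(abs_rel.mean().round(3))  # one headline metric for the vision stack: 0.054
```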

As my comments here and upthread show, there are arguments both ways.

Again, do you have something specific to say here about the relative analytic complexity of the different players?

If not, I don’t think we know which systems have more or less analytic complexity or sophistication, however you might define it.