And it shows two collisions from Waymo vehicles during the first month it operated in Austin (Oct 2024).
But, LiDAR?
Anyway, a software update was pushed in Dec 2024 for all Waymo vehicles after a couple dozen collisions with chains, gates, or barriers.
But, LiDAR?
Look, I agree Waymo has a really good safety record. But, LiDAR doesn’t prevent collisions and what we’re seeing is that software is the real differentiator. So, both Waymo and Tesla will continue to improve.
It’s important to compare apples-to-apples, as much as feasible.
Was Waymo using an onboard safety monitor/driver when these incidents occurred (and if we are being really strict, as in “clinical trials strict”, because lives depend on it, in Tesla’s geofenced area)?
If no, then these Waymo incidents are not comparable to Tesla.
If yes, then we need to control for miles driven, and look at incident per so many miles.
Why?
Because, in Austin, Tesla uses an onboard safety monitor (who may take action to avoid accidents) and operates within a geofenced area.
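To make the normalization concrete, here's a toy sketch of the incidents-per-mile comparison. All of the numbers below are invented purely for illustration (not real fleet data); the point is just that raw incident counts mislead when fleets drive very different mileages:

```python
# Hypothetical fleets -- incident counts and mileages are made up.
waymo = {"incidents": 2, "miles": 1_000_000}
tesla = {"incidents": 1, "miles": 60_000}

def rate_per_million_miles(fleet):
    """Incidents normalized to per 1,000,000 miles driven."""
    return fleet["incidents"] / fleet["miles"] * 1_000_000

print(rate_per_million_miles(waymo))  # 2.0
print(rate_per_million_miles(tesla))  # ~16.7
```

With these made-up numbers, the fleet with fewer raw incidents actually has the worse rate, which is exactly why exposure has to be controlled for.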
But this just gave me an idea of a measurement we can do.
We can measure the amount of time that elapses until Tesla can abandon onboard safety monitors in its Austin robotaxi fleet.
We can then compare “time from onboard safety to autonomy in Austin” for Waymo and Tesla.
This makes me think the only (main?) reason Tesla pushed into Austin for robotaxi right now is because Iron Man saw Waymos cruising around town last year and got antsy.
Or am I being silly?
Or has this theory already been a discussion point?
I think you are spot on. Robotaxis were presented as a core portion of the business in Master Plan Part Deux in 2016 and announced as coming Real Soon Now™ since then.
It can’t have been lost on Elon that other companies had robotaxis but Tesla didn’t.
No one is expecting the software to be perfect, but it might be worth noting that an after-the-fact analysis of 38 of Waymo’s first accidents showed that 34 were the fault of a human. The cars were rear ended, people cut across lanes at traffic signals and hit them, and some of them involved passengers opening doors into traffic lanes as they exited.
That still leaves four in which they were found at fault, and yes, the software was updated to stop the curious tendency to hit chain link fences and gates.
But any “Robotaxi” or (true) “Full Self Driving” company should be prepared for the media to cover such accidents, simply because they are “new” (hence the word: news).
Thank you for bringing actual data to this thread.
Much better than anecdotes of a few selected occurrences and better than us speculating.
The success/failure of AVs will rise/fall on benchmarked safety, performance and economic data.
Now we just need some Tesla data. We should have some lying around, because they have collected so much data. You would think they could spare just a little.
Dear Tesla:
Can you please make public incident data on your AI driver so that we can evaluate important metrics like the safety of your vehicles against benchmarks in an apples-to-apples comparison?
Thank you,
Fellow drivers who share the road.
Which is my main point - that it’s not - as was discussed above quite a bit - whether the car has LiDAR or not, but how good the software is. Software has been and will remain the differentiating factor between autonomy efforts for quite some time.
Note that the Austin Dashboard also shows “Traffic Blocking” and “Failure to respond to police” incidents as well. My point here is that we should (but as a general population won’t) give Tesla the same pass for growing pains that we’ve given Waymo. Which I think Waymo deserves, btw.
Other than poking fun at Iron Man, I don’t think anyone here is giving any company preferential treatment with respect to measuring product and business performance.
I also think everyone here very much appreciates the technical challenges that these companies are working on.
Just above we see way mo incident data available and provided for Waymo than for Tesla.
If you have equivalent data for Tesla, then please provide it - so we can treat all companies fairly - and make the apples to apples comparisons.
In another thread there is a comparison of recent, open predictions made by Tesla and Waymo regarding the ramp-up of robotaxis.
If you have predictions from these or other companies, then please provide them so we can evaluate predictions across multiple companies.
Other open questions:
For Tesla,
What exactly is Tesla doing with lidar?
Let’s gather the data and make the fair comparisons.
The theory of AV that Waymo (and nearly all other companies in the space) is pursuing is that having multiple hardware sensors enables one to get “more driving” with “less advanced software.” I put those terms in quotes because they’re not really definable. But the general sense is that if you have LIDAR, you don’t have to do such complicated things with the vision inputs, because the LIDAR will give you certain types of information that are very hard to tease out of just the vision input (especially in poor visibility situations).
So it’s not just how “good” the software is. Under this approach, if the hardware has more sensors, that also improves driving. Tesla doesn’t think this approach is either correct or scalable, but it’s the approach nearly every other company is following.
That doesn’t mean that LIDAR automatically makes a car able to self-drive, of course - it might be a necessary but not sufficient condition. It may be that no technology exists today or for many years that can allow cars to hit Level 5, whether LIDAR or vision-only (which is my personal opinion).
The Waymo theory is that it will be easier to get the cost of a LIDAR-equipped vehicle down through economies of scale than it will be to improve software to the point where it can process vision-only well enough to not need overly expensive “outside the car” support. We have no way of knowing who is correct at this point. One piece of data, though, is that a few Chinese models (like the new Xiaomi YU7) are starting to include LIDAR standard as part of their ADAS suites…which is a pretty strong signal that those things might start to get cheaper fast.
That could very well be true. It could also be true that the software effort to correctly perform the sensor fusion between the two (or more) sensor types is the more difficult task. This is not a bunch of traditional code with a lot of if-then-else lines where one can (very logically) see that one sensor type is overriding the other when needed. It is a massive model with layers and layers of weights and biases trained on a large dataset.
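A toy sketch of what that means, with random numbers standing in for trained weights: in a fused network every output mixes every input, so there is no single line of code where one sensor “overrides” the other:

```python
import random

random.seed(0)

# Toy feature vectors from each sensor pipeline (stand-ins for real embeddings).
camera_features = [random.gauss(0, 1) for _ in range(8)]
lidar_features = [random.gauss(0, 1) for _ in range(4)]

# One fused layer whose weights were "trained" (here: random) over BOTH
# sensors at once. Every output depends on every input.
fused = camera_features + lidar_features          # 12 fused inputs
W = [[random.gauss(0, 1) for _ in range(12)] for _ in range(3)]
b = [random.gauss(0, 1) for _ in range(3)]

logits = [sum(w * x for w, x in zip(row, fused)) + bias
          for row, bias in zip(W, b)]
action_scores = [max(0.0, z) for z in logits]     # ReLU; which sensor "won" is not inspectable
```

The point of the sketch is negative: there is nowhere in `action_scores` you can point to and say “this is where LiDAR overrode the cameras,” which is exactly the debugging difficulty the fusion skeptics raise.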
Maybe - but is that actually the case? The experience of Waymo and all the other car companies that are using LIDAR seems to argue against that. Waymo’s achieved a modest degree of Level 4 performance using a LIDAR/radar/vision combo, and no one seems to think that their software capabilities are super beyond Tesla’s - which is hard to explain, if using multiple sensory inputs made the software especially challenging. And as I noted above, there are quite a few Chinese car firms that are selling cars that include LIDAR as part of their ADAS systems - including some modestly priced ones. So it doesn’t seem like it’s all that much of a heavy burden to mix it in.
High res vision data (space and time) and crazy detailed and advanced feature and AI modeling.
or
Multi-sensor data that must be synchronized together and then modeled together in AI.
No free lunch?
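For what it’s worth, the synchronization step in the multi-sensor option is ordinary engineering. A minimal sketch (with invented frame rates and a made-up phase offset) of pairing each lidar sweep with the nearest camera frame by timestamp before fusion:

```python
import bisect

# Toy timestamps in seconds: camera at 30 Hz, lidar at 10 Hz with a small
# phase offset. All rates are invented for illustration.
camera_ts = [i / 30 for i in range(90)]           # ~3 s of frames
lidar_ts = [i / 10 + 0.004 for i in range(30)]    # ~3 s of sweeps

def nearest(sorted_ts, t):
    """Index of the timestamp in sorted_ts closest to t."""
    i = bisect.bisect_left(sorted_ts, t)
    candidates = [j for j in (i - 1, i) if 0 <= j < len(sorted_ts)]
    return min(candidates, key=lambda j: abs(sorted_ts[j] - t))

# Pair each lidar sweep with its closest camera frame before fusing them.
pairs = [(t, camera_ts[nearest(camera_ts, t)]) for t in lidar_ts]
max_skew = max(abs(a - b) for a, b in pairs)      # worst-case residual misalignment
```

The hard part isn’t this pairing - it’s what the model does with two views of the world captured a few milliseconds apart while everything is moving.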
If I had to bet today, I would go with the multi sensor approach with the reasoning that it brings more different kinds of data together (expands feature dimensions in informative ways that is more difficult or not possible with a single sensor type). And hence has more discriminatory information than single sensor.
And, after you have arrived at some solution to the AI driving problem, you can then optimize by trying to simplify the data and model by removing data that are less/not important and focusing in on the aspects that are more important. This could result in simplifying data processing and/or removing sensors.
Tesla is still playing with Lidar, so they haven’t removed it yet from their AI modeling, as best i can tell. They are correlating it to their camera data. But maybe they have advanced in this optimizing step.
As my comments here and upthread show, there are arguments both ways.
Again, do you have something specific to say here about the relative analytic complexity of the different players?
If not, I don’t think we know which systems have more/less analytic complexity/sophistication or however you might define it.
It does. AVs with LiDAR and AVs without LiDAR sometimes hit things. The fix has ALWAYS been to upgrade the software. We didn’t see Waymo say “Oh, to fix that we have to physically recall all our vehicles and change the hardware.”
Two Points:
Half a decade ago, Karpathy of Tesla showed how to calculate accurate 3D worlds from camera inputs, and showed how AI was able to understand fog and smoke and “see through” them.
Once you solve the algorithm/NN weights, all you need is enough on-board compute to run them in real time. Compute is cheaper than sensors, and while sensors are getting cheaper, compute is getting cheaper faster.
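The geometry behind depth-from-cameras is classical stereo triangulation. A toy sketch with invented camera parameters (the neural-network approaches Karpathy demonstrated learn far richer structure than this, but it shows the depth information is there in principle):

```python
# Classic stereo triangulation: depth = focal_length * baseline / disparity.
# All camera parameters below are invented for illustration.
focal_px = 1000.0    # focal length, in pixels
baseline_m = 0.3     # distance between the two cameras, in meters

def depth_from_disparity(disparity_px):
    """Depth in meters of a feature matched across both images."""
    return focal_px * baseline_m / disparity_px

print(depth_from_disparity(30.0))  # 10.0 m (larger disparity = closer object)
```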
We have no proof of that. Cruise had BOTH LiDAR and cameras and yet it hit/ran over a pedestrian that its cameras saw, but LiDAR didn’t. Why? Because of the software.
Having more sensors and especially more types of sensors creates complexity that needs to be solved - you guessed it - in software. What do you do when those sensors in your software disagree? Mobileye has a whole white paper on this, which I think is actually somewhat silly, but the point is that just adding more sensors doesn’t automatically help. It actually makes it more complicated and maybe even less safe.
According to the co-CEO recently, the whole point of LiDAR was to get going more quickly. When asked about needing to keep LiDAR, at first she said that LiDAR helped them on their path, implying that whether it was actually needed wasn’t the point. And when pressed, she admitted to a future world where AVs wouldn’t need LiDAR, but “not yet.”
Not actually selling yet. Just announced/promised.
There are also Chinese companies announcing that future autonomous vehicles will not have LiDAR, following Tesla’s lead.
Nope. I agree we don’t know, and just pointing out how much we don’t know.
Sure, if the problem is in the software. If the problem were in the hardware, they’d have to fix the hardware.
A really crude analogy would be if Albabyco tried to develop self-driving with only two cameras, mounted three inches apart inside the vehicle, using the traditional system of mirrors that humans use in order to drive from a single point of vision. Humans can do that, so theoretically a sufficiently advanced AI could do it, too. So it’s a software problem to run an AV with only two “eye” cameras located inside the cabin.
But obviously, that’s a terrible approach. Having many more cameras provides vastly more - and better - sensory input, which lightens the load on the software.
So the fact that Tesla AV-operating cars with 8 or so cameras might get into an accident doesn’t show that multiple cameras aren’t helpful or necessary, any more than an accident from a LIDAR-equipped vehicle says anything conclusive about the utility of LIDAR.
Technically, she was asked if she could see a world where AV’s wouldn’t need LiDAR, and she said “not yet.” That’s not admitting to a future world where LiDAR isn’t necessary - it’s saying that at present she can’t even see a future world where it isn’t necessary.
The entire Xiaomi YU7 lineup has LiDAR as part of their ADAS system:
…and since that model is their entry in the mass-market SUV category, it’s expected to sell at pretty high volume (their SU7 model in the sedan category is already selling more in China than the Model 3). Those cars will all have LiDAR included as part of their ADAS, as I said. That kind of mass-production/mass adoption of LiDAR systems in ADAS should have some effects on the cost structure of LIDAR components used in AV systems going forward.
Of course, the flip side of this is that with multiple types of input, one introduces the potential for conflicts between those types and the need to resolve those conflicts.
You can interpret it that way if you want. To me, from her previous answer and LiDAR being necessary for Waymo’s path to autonomy (avoiding whether it was really needed for autonomy), and her facial/body language, I stick by my interpretation.
Chinese colleagues at CnEVPost have reported on XPeng’s latest regulatory document, outlining how the brand intends to do away with LiDAR sensors in its upcoming cars. Indeed, a prototype of a new Xpeng model (presented in October as the P7) without LiDAR, but equipped with Eagle Eye technology with new cameras and sensors, was spotted last July.
According to the aforementioned documents filed with China’s Ministry of Industry and Information Technology (MIIT), the G6 and G9 SUVs could also adopt Xpeng’s Eagle Eye technology.
No question LiDAR is getting cheaper. However, it’s still a cost, and it’s still fugly. And for personally owned cars, it’s another hardware thing to maintain and fix when broken, unlike software.
Yeah - they’ve got a dozen ultrasonic sensors and three radars on those cars:
So pretty far from “vision only,” and they also seem to be comfortable with having several different types of sensory inputs that the AI has to parse. Just as LiDAR has come down in cost (relative to performance), so has automotive radar - so it wouldn’t be entirely shocking to see the end result of AV have all four sensor modes: vision, radar, ultrasonic, and LiDAR. But certainly there will be some automakers that try to just do the first three, since LiDAR is the most expensive of the bunch. Tesla’s the only one going with just the one sensor type…
I don’t understand why such a big deal is made of this. If either of the two systems says “red flag”, slow down! Maybe stop.
It’s not so hard to get conflicting systems to play nice; airliners do it thousands of times a day (e.g. alert the pilot). Valve systems at refineries take pressure and other readings at multiple points in the process and decide whether the difference is enough to alert human monitors or not (e.g. brake pedal). Automation systems sense whether product is flowing too slowly or too fast for other parts of the line and adjust accordingly (e.g. brake pedal), and call for help when outside parameters.
And, of course, your brain does it continuously and if it senses danger - even if wrong, takes some action (e.g. brake pedal.)
There are lots of data analytics systems which are trained to ignore certain faults but which, if they sense a cascading fault of the same kind, take appropriate action to stop the process (e.g. brake pedal).
When did we decide “less data is good, because more data might be confusing?”
As illustrated by your examples, if all you are interested in is noticing a red flag, then the problem is simple, just notice any red flag. But, if the two systems are giving you entirely different views of reality, then it is a whole different class of problem.
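To put the two problem classes side by side, here is a toy sketch with invented readings:

```python
# Two toy "sensor" reports describing the same scene (all numbers invented).
camera = {"obstacle": True, "distance_m": 40.0}
lidar = {"obstacle": True, "distance_m": 12.0}

# Easy class: any red flag -> slow down. Disagreement on details is irrelevant.
def any_red_flag(*reports):
    return any(r["obstacle"] for r in reports)

# Hard class: the sensors report different realities. Some arbitration rule
# is required; taking the conservative (nearest) estimate is one choice, but
# it trades phantom braking against missed hazards.
def fused_distance(*reports):
    return min(r["distance_m"] for r in reports)

print(any_red_flag(camera, lidar))    # True
print(fused_distance(camera, lidar))  # 12.0 -- but which sensor do you trust?
```

The first function is the refinery-alarm case; the second is the “different views of reality” case, where no simple OR of the inputs settles the question.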