Risk of Tesla camera-only self-driving

As best I can tell, Tesla is the only major AV player with a pure camera-only sensor array for informing AI driving decisions. (maybe there are others?)

Arguments can be made about the possible relative benefits of camera-only versus multi-sensor configurations (hardware, data and software complexity, utility and cost).

And I don’t have a strong view about which sensor configuration is best - I really wouldn’t claim to know.

And it may well be that camera-only and multi-sensor are both viable solutions to AI driving.

I’m more interested in what the available data says (or doesn’t say) about how various sensor configurations perform with respect to safety, time to market, unit economics, R&D costs, etc.

But, to me, it could be telling that Tesla may be a bit alone in its sensor configuration, with other companies using cameras plus other sensors like lidar, radar, ultrasonic.

Another interesting tidbit is that Tesla apparently uses lidar data together with camera data in much more limited testing (vs the large camera-only data from its retail fleet).

We could also debate the degree to which map data are another sensory input for driving (vs navigation).

But my main question is: did Tesla arrive at the camera-only decision because the data suggested it as the best path versus other sensor configurations, or was there a push from leadership, much less informed by data, that resulted in the camera-only approach?

If the answer is the latter, that could be a serious risk for Tesla’s AV plans.

Probably most people on this board have experienced executive decisions that turned out to be poorly informed and then saw the consequences of those decisions.

Disclaimer: I’m open to definitions of things like “major AV player”, “driving”, “navigation”, etc. I’m also open to being wrong or not fully informed on any of this.

To answer your main question, “I don’t know,” but in all Musk enterprises, “a push from leadership” is not a bug but a feature.

As to “If the answer is the latter”: Elon has made many mistakes, such as over-robotizing the assembly line. He bit the bullet and reinvented the assembly line. If camera-only is crap, I’m confident Elon would change course just as he did with the robots.

The Captain

2 Likes

One of the risks, as yet barely recognized, is the liability (as I mentioned upthread). There will inevitably be accidents. When it comes time to determine fault, I can already hear the plaintiff’s lawyer saying, “And how much did you save by making the Pinto’s gas tank susceptible to rupture?”

Oh, sorry, time warp. I meant to say “How much did you save by not including better sensing imagery when you foisted this on the public?”

3 Likes

They want to fail as quickly as possible, and then adjust.

But is that what is happening here?

Waymo is ahead of Tesla if the measure is time to market. Waymo is plodding along, but it does continue to expand geographically, whereas Tesla is just getting started with commercial robotaxi in Austin this weekend, and by all indications it’s a very modest start.

I recognized it and expressed my gratitude:

Time will tell, but from what I’ve seen of the latest versions of FSD, vision-only is working well. One problem is that most people are still thinking of AI in terms of obsolete algorithmic AI, which was not very good. RADAR and LIDAR complicated the algorithms even more, which is why Tesla dropped RADAR. Neural-network AI trained on huge datasets in huge purpose-built data centers is a radical improvement over the algorithmic AI technology.

The law will have to catch up to the technology.

The Captain

2 Likes

Here’s an opinion from someone with technical expertise and close experience with Tesla, also noted in another thread:

1 Like

I cannot remember the source, or whether it was Musk or an engineer, but the statement I recall is that Tesla’s tests with lidar increased the confusion of the AI. In other words, lidar led to worse outcomes.

The answer when I questioned Grok:

Evidence of Noise from LiDAR: Posts on X indicate that when Tesla tested LiDAR alongside cameras, the differing signals caused interference and noise, making it harder for the AI to determine which sensor was more reliable. Tesla engineers reportedly found that a camera-only setup yielded better results due to reduced complexity. Andrej Karpathy noted in 2021 that radar (a similar case) was removed from Tesla’s stack because it contributed noise, suggesting LiDAR could pose similar issues.
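
As a toy illustration of that “noise” point (my own sketch, NOT Tesla’s actual pipeline, and the numbers are made up): in a simple inverse-variance fusion of two distance estimates, a second sensor that is miscalibrated but trusted can pull the fused estimate further from the truth than one sensor alone — i.e., more sensors can mean a worse answer if you weight them wrong.

```python
# Toy illustration (my own, not Tesla's stack): inverse-variance fusion of two
# scalar range estimates. A biased second sensor that is over-trusted (small
# assumed variance) makes the fused estimate WORSE than the camera alone.

def fuse(est_a, var_a, est_b, var_b):
    """Standard inverse-variance weighted fusion of two scalar estimates."""
    w_a = 1.0 / var_a
    w_b = 1.0 / var_b
    return (w_a * est_a + w_b * est_b) / (w_a + w_b)

true_range = 50.0    # meters to obstacle (ground truth, hypothetical)
camera_est = 50.5    # camera estimate: small error
second_est = 47.0    # hypothetical miscalibrated second sensor
camera_var = 1.0
second_var = 0.25    # over-trusted: assumed variance smaller than it deserves

fused = fuse(camera_est, camera_var, second_est, second_var)

camera_err = abs(camera_est - true_range)  # 0.5 m, camera alone
fused_err = abs(fused - true_range)        # 2.3 m, fusion made it worse
print(round(fused, 1), camera_err, round(fused_err, 1))
```

This doesn’t prove camera-only is right, of course — Waymo et al. clearly weight their sensors well enough to operate — it just shows the mechanism by which a conflicting sensor can add noise instead of information.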

Cheers
Qazulight

3 Likes

But other companies are multi-sensor, including lidar in many cases.

Does this mean the other companies, including Waymo, haven’t been stopped by this issue (confusing multi-sensor data) but Tesla has?

1 Like

There are so many variables that I cannot know.

I can speculate. My speculation is that Tesla has thrown way, way more data and training into its AI model than anyone else, and that no one else would have this problem because their AI is not as intelligent.

By the way, I sold my position in Tesla right before the inauguration, and I have not repurchased it. I am in very conservative investments and will likely stay that way for the rest of my life (sub 20 years).

Cheers
Qazulight

3 Likes

Posted translation:

Robin Li (Baidu CEO): Apollo Robotaxi’s Only Chance Lies in Rapidly Shifting to a Pure Vision Approach

At Baidu’s recent quarterly executive meeting, Robin Li delivered an internal speech focused on “reflective thinking,” where he reviewed the company’s business wins and losses. He also challenged many of Baidu’s previous viewpoints — including its stance on Robotaxi.

One key takeaway was that Robotaxi must pivot its technical strategy and move decisively toward a pure vision approach.

In the past, like other players such as Waymo, Robotaxi had been a firm proponent of lidar + vision, relying on a multi-sensor fusion system.

If true, it would be an interesting development. Baidu’s current Apollo Go 5th generation robotaxis, which operate in Wuhan and other cities, use LiDAR from Hesai.

2 Likes

Waymo’s sensor hardware for handling different weather conditions:

All the sensors on the sixth-generation driver have much more powerful heaters. They have a kind of washer fluid. They have wipers. Basically, every sensor is like its own mini car window or front windshield, so we can melt off snow and ice. We can clear that with the wiper. We can spray it to remove road grime and salt buildup and things like that.

1 Like

Hyundai’s Self-Driving software chief quits, apparently over Lidar:
https://www.thelec.net/news/articleView.html?idxno=5509

Song said it was not easy to aim to make an AI device rather than a car and implant software DNA rather than focus on hardware.

He had countless conflicts with a legacy business and faced unseen walls while trying to transition to software-defined vehicles, Song lamented in his memo.

Song has attempted to shift Hyundai Motor’s lidar-focused self-driving system to camera during his tenure.

During Hyundai Motor’s annual developers conferences, he touted camera-based autonomous driving systems. This year the auto giant unveiled Atria AI, which offers Level 2 autonomous driving capabilities.

This shift from lidar to camera was met with some pushback as Hyundai Motor has invested in lidar for a long time. Some of its lidar projects by research groups were halted by Song. Inside Hyundai Motor, Song’s moves were considered following Tesla.

2 Likes

Yup, you risk losing your job! Who was that fellow at VW who lost his job for following Tesla? Dies? Diess? :slightly_smiling_face:

The Captain

Just a point of information, may or may not be true/significant:

We hosted a discussion with the designer of the FSD Community Tracker. The tracker’s core metric (miles to critical disengagement) exhibited a >20x improvement after FSD v14.1.x was released in October (from 441 miles in the previous version to 9200+ miles).

1 Like

Cool! That means you’ll only have an accident twice a year, on average.

2 Likes

Ignoring the trend just doesn’t seem wise to me.

1 Like

I’m not ignoring the trend, just noting that human drivers have an accident, on average, about once every 500,000 miles. There’s a pretty great distance between 9,000 and 500,000, and absent some ‘miracle breakthru’ it would seem that the progress will get smaller and smaller as edge cases are solved (reliably), so it’s still a long way away.
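
Back-of-the-envelope, taking the thread’s figures at face value (and noting the two numbers measure different things — critical disengagements vs. accidents — so this is rough at best):

```python
# Rough gap check using figures cited in this thread (critical disengagements
# and human accidents are NOT the same metric, so treat this as illustrative).

human_miles_per_accident = 500_000   # rough human average cited above
fsd_miles_per_critical = 9_200       # tracker figure for FSD v14.1.x

# Factor by which the interval still has to improve to match the human rate.
gap_factor = human_miles_per_accident / fsd_miles_per_critical
print(round(gap_factor, 1))  # roughly 54x more improvement needed
```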

Although if you mandated FSD for teenage boys, who have many more accidents than average, you might be on to something :wink:

1 Like

That’s only true if you define a “critical disengagement” as one that, had the driver not intervened, would have caused an accident. That is NOT the case for this tool: the tool leaves defining “critical disengagement” to the user, and the users are random drivers out there. Not only do random drivers often panic and disengage when they don’t understand something, but they also often overestimate the danger, underestimate the distance, overestimate the speed, etc. And plenty of drivers disengage for trivial things* that appear to be risky but aren’t when you have 7 eyes looking in all directions at once.

* This includes me! This week I drove a Cybertruck (loaner), and yesterday, while it was backing into a Supercharger spot, I panicked and slammed on the brakes, thinking it was about to hit the Supercharger post. I got out of the truck, went to plug in the charging cable … and it was too far away. Got back into the truck and backed up another foot. I didn’t report it as a critical disengagement, but in a regular parking lot, where you didn’t have to get back in and move it some more, others might have reported it as such.

1 Like

That’s ignoring the trend. It was “a pretty great distance” from 441 miles to 9200+ miles, and that just happened quite suddenly. Biggest sequential increase in 4 years of the tracker. If anything, we’re seeing larger increases in the reported miles between interventions, not smaller as you assume.

I’m not saying Elon’s hype about texting while FSDing or beating Waymo is real, just that the trend is actually looking good for FSD.

EDIT: In full fairness, the Piper analyst makes some bad conclusions:

  • Austin Robotaxi data implies 40,000 miles between crashes (7 NHTSA incidents in ~280k miles).
  • The data implies that at 13,000 miles driven a year on average for most people, an FSD-equipped car can go approximately three years without crashing.
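
For what it’s worth, the analyst’s arithmetic in those bullets checks out on its own terms — the problem is the inputs, not the math:

```python
# Reproducing the Piper analyst's arithmetic from the bullets above.
# The calculation is fine; the input (7 NHTSA incidents) is what's suspect,
# since it excludes safety-monitor interventions.

austin_miles = 280_000
nhtsa_incidents = 7
miles_per_crash = austin_miles / nhtsa_incidents
print(miles_per_crash)  # 40000.0 miles between reported incidents

avg_miles_per_year = 13_000
years_between_crashes = miles_per_crash / avg_miles_per_year
print(round(years_between_crashes, 1))  # about 3.1 years
```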

That’s completely ignoring interventions from the Safety Monitors in Austin. We don’t have any data on how many interventions the Austin RTs have had — we saw several just in the early days, and Tesla doesn’t report that number.

So, things aren’t as rosy as Musk or Potter are saying, but they are rosier than before and the trend is good.

3 Likes