Risk of Tesla camera-only self-driving

As best I can tell, Tesla is the only major AV player with a pure camera-only sensor array for informing AI driving decisions. (maybe there are others?)

Arguments can be made about the possible relative benefits of camera-only versus multi-sensor configurations (hardware, data and software complexity, utility and cost).

And I don’t have a strong view about which sensor configuration is best - I really wouldn’t claim to know.

And it may well be that camera-only and multi-sensor are both viable solutions to AI driving.

I’m more interested in what the available data says (or doesn’t say) about how various sensor configurations perform with respect to safety, time to market, unit economics, R&D costs, etc.

But, to me, it could be telling that Tesla may be a bit alone in its sensor configuration, with other companies using cameras plus other sensors like lidar, radar, ultrasonic.

Another interesting tidbit is that Tesla apparently uses lidar data together with camera data in much more limited testing (versus the large camera-only dataset from its retail fleet).

We could also debate the degree to which map data are another sensory input for driving (vs navigation).

But my main question is this: did Tesla arrive at the camera-only decision because the data suggested it was the best path among sensor configurations, or did a push from leadership, much less informed by data, result in the camera-only approach?

If the answer is the latter, that could be a serious risk for Tesla’s AV plans.

Probably most people on this board have experienced executive decisions that turned out to be poorly informed and then saw the consequences of those decisions.

Disclaimer: I’m open to definitions of things like “major AV player”, “driving”, “navigation”, etc. I’m also open to being wrong or not fully informed on any of this.

To answer your main question: "I don't know." But in all Musk enterprises, "a push from leadership" is not a bug but a feature.

As to "If the answer is the latter": Elon has made many mistakes, such as over-robotizing the assembly line. He bit the bullet and reinvented the assembly line. If camera-only is crap, I'm confident Elon would change course just as he did with the robots.

The Captain


One of the risks, as yet barely recognized, is liability (which I mentioned upthread). There will inevitably be accidents. When it comes time to determine fault, I can already hear the plaintiff's lawyer saying, "And how much did you save by making the Pinto's gas tank susceptible to rupture?"

Oh, sorry, time warp. I meant to say “How much did you save by not including better sensing imagery when you foisted this on the public?”


They want to fail as quickly as possible, and then adjust.

But is that what is happening here?

Waymo is ahead of Tesla if the measure is time to market. Waymo is plodding along, but it does continue to expand geographically, whereas Tesla is just getting started with its commercial robotaxi in Austin this weekend, and by all indications it's a very modest start.

I recognized it and expressed my gratitude:

Time will tell, but from what I've seen of the latest versions of FSD, vision-only is working well. One problem is that most people are still thinking of AI in terms of obsolete algorithmic AI, which was not very good. RADAR and LIDAR complicated the algorithms even more, which is why Tesla dropped RADAR. Neural network AI, trained on huge datasets in huge purpose-built data centers, is a radical improvement over the algorithmic AI technology.

The law will have to catch up to the technology.

The Captain


Here’s an opinion from someone with technical expertise and close experience with Tesla, also noted in another thread:


I cannot remember the source, or whether it was Musk or an engineer, but the statement I recall is that Tesla's tests with lidar increased the AI's confusion. In other words, lidar led to worse outcomes.

The answer when I questioned Grok:

Evidence of Noise from LiDAR: Posts on X indicate that when Tesla tested LiDAR alongside cameras, the differing signals caused interference and noise, making it harder for the AI to determine which sensor was more reliable. Tesla engineers reportedly found that a camera-only setup yielded better results due to reduced complexity. Andrej Karpathy noted in 2021 that radar (a similar case) was removed from Tesla’s stack because it contributed noise, suggesting LiDAR could pose similar issues.
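For what it's worth, the "which sensor is more reliable" problem Grok describes can be illustrated with a toy example. This is just a sketch of textbook inverse-variance sensor fusion, not anything Tesla actually does; the numbers are made up for illustration.

```python
# Toy sketch (NOT Tesla's method): fusing two noisy distance estimates
# by inverse-variance weighting. When the sensors disagree beyond their
# stated noise, the fused value is dragged toward whichever sensor
# claims the lower variance -- even if that sensor is the wrong one.

def fuse(est_a, var_a, est_b, var_b):
    """Combine two noisy estimates of the same quantity,
    weighting each by the inverse of its variance."""
    w_a = 1.0 / var_a
    w_b = 1.0 / var_b
    fused = (w_a * est_a + w_b * est_b) / (w_a + w_b)
    fused_var = 1.0 / (w_a + w_b)
    return fused, fused_var

# Agreeing sensors: fusion tightens the estimate.
# (Hypothetical camera and lidar range readings in meters.)
d, v = fuse(50.0, 4.0, 50.4, 1.0)
print(round(d, 2), round(v, 2))   # → 50.32 0.8

# Disagreeing sensors (say, a spurious lidar return): the fused
# estimate is pulled well off the camera's reading.
d2, _ = fuse(50.0, 4.0, 42.0, 1.0)
print(round(d2, 2))               # → 43.6
```

The point of the toy: fusion only helps if you can model each sensor's reliability correctly, and when modalities conflict, deciding which one to trust becomes its own hard problem, which is consistent with the "noise" complaint attributed to Tesla's engineers.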

Cheers
Qazulight


But other companies are multi-sensor, including lidar in many cases.

Does this mean the other companies, including Waymo, haven't been stopped by this issue (confusing multi-sensor data) but Tesla has?

There are so many variables that I cannot know.

I can speculate. My speculation is that Tesla has thrown way, way more data and training into its AI model than anyone else, and that no one else would have the problem because their AI is not as intelligent.

By the way, I sold my position in Tesla right before the inauguration, and I have not repurchased it. I am in very conservative investments and will likely stay that way for the rest of my life (sub-20 years).

Cheers
Qazulight
