Risk of Tesla camera-only self-driving

Tesla addressing sun glare for cameras

Tesla’s latest patent application addresses one of autonomous driving’s most persistent technical challenges—sun glare. The filing, designated US 2025/0334856-A1 and titled “Cone-Textured Glare Shield for Enhanced Camera Vision,” outlines a hardware solution that combines microscopic surface engineering with motorized positioning systems. Development arrives as camera-based autonomous systems continue struggling with light saturation events that compromise sensor reliability.


1 Like

Sounds good, but it would have to be a retrofit for an awful lot of cars already out there.

My four year old Y regularly gets warnings about a door pillar camera being blinded. With the sun not too high in the sky, the camera facing it has this problem when I drive past leafless trees, as their shadows pass over it in quick succession, alternating with full sun.

2 Likes

Check out Mark Rober’s test of camera-only self-driving. 121M views. Speaks for itself. Going without LIDAR saved Musk and consumers many dollars, but… https://www.youtube.com/shorts/U1MigIJXJx8

In context, he was using Tesla’s Autopilot, which is fundamentally an automatic emergency braking system. In the last couple of years, most or maybe all manufacturers have started offering AEB, and Tesla’s AEB is rated highly compared with other brands. There are personal preferences and variations in performance, but overall it is pretty good.

Overall, AEBs are shown to reduce accidents fairly dramatically, so Autopilot is a valuable safety feature and that’s how it should be viewed. Sure, it could be improved with Lidar, but it is still a safety upgrade over no AEB. The video was a bit unfair in that regard.

That said, I think Tesla deserves to take it on the chin for this. It isn’t called AEB; it is called Autopilot, which implies it can operate on its own. That is 100% false. It is a design requirement that a human be ready to intervene at all times. The name is completely misleading, and if Tesla is willing to mislead you about safety features, what else are they misleading you about?

2 Likes

To be fair to Tesla, that’s not what “autopilot” is. Though that’s a common misconception. Even Archer misunderstood the concept (and anything with both Patrick Warburton and H. Jon Benjamin is a delight)…

Archer - Heart 1 | FOX Home Entertainment

2 Likes

The state of their progress in vehicle autonomy.

That’s what all/most of the discussion is about.

1 Like

This “test-video” was debunked a very long time ago. Why bring it back from the graveyard?

The Captain

3 Likes

Debunked? Hardly. Debatable, absolutely, but the full version of that single, flawed test showed LIDAR-equipped cars detecting and stopping in front of that wall. Having dealt with Tesla’s egregious non-braking for (a) vehicles with flashing lights at night and (b) partially blocked lanes, and with (c) phantom braking at shadows during the day, it makes sense to point out that significant improvements still could be made. That’s why.

Short answer: The claim that Mark Rober’s early-2025 video showed a fake “wall” fooling Tesla’s cameras while LIDAR detected the barrier has been widely debated, and researchers and observers have pointed out limitations in that specific test. The video’s conclusions are not universally accepted as definitive evidence that Tesla’s camera-based system is always inferior to LIDAR.

Context and key points

What the video claimed: Mark Rober’s broader test contrasted a LIDAR-equipped setup with a Tesla-style camera system, suggesting the camera could be fooled by an image of a road while LIDAR correctly identified the barrier. This kind of “fake wall” scenario is used to illustrate potential weaknesses in vision-only systems under adversarial-looking conditions. However, the realism and generalizability of such a targeted scenario are contentious.

Community responses: Many observers, including some engineers and researchers, argued that a single, crafted test (like a fake wall) does not represent overall system performance across the diverse, real-world driving envelope. Critics emphasize that both sensor suites have strengths and weaknesses in different conditions (e.g., weather, lighting, occlusions) and that modern systems often use sensor fusion to mitigate individual failures.

Broader evidence: There is ongoing debate in public discussions and independent analyses about camera-only versus multi-sensor approaches. Some sources highlight that lidar/radar fusion can improve robustness in certain scenarios, while others point to advances in computer vision and learning-based perception that aim to close gaps with camera-only designs. The topic remains unsettled in the community, with positions varying based on interpretation of tests and conditions.

What to know if you’re evaluating the test

Test design matters: A single test with a stylized barrier may not capture the full range of real-world obstacles, sensor noise, and environmental variability. Valid conclusions require multiple diverse scenarios and independent replication.

Real-world performance vs. isolated tests: Real-world autonomy relies on continuous perception, prediction, and decision-making under imperfect conditions. Critics argue that a narrow demonstration does not prove a global claim about one approach being categorically worse or better than another.

Ongoing research: The literature and industry analyses show a spectrum of opinions. Some researchers argue that cameras, when paired with advanced learning and data from diverse environments, can achieve strong depth and object detection, while others contend LIDAR adds safe redundancy, especially in adverse conditions.

FC

Community responses: Many observers, including some engineers and researchers, argued that a single, crafted test (like a fake wall) does not represent overall system performance across the diverse, real-world driving envelope. Critics emphasize that both sensor suites have strengths and weaknesses in different conditions (e.g., weather, lighting, occlusions) and that modern systems often use sensor fusion to mitigate individual failures.

It was not a single crafted test, however. The full video included several tests. The Tesla passed some and failed others, including failing to stop in fog and rain.

I think a fair criticism of the video is that despite the intentionally deceptive name, Tesla’s Autopilot is fundamentally just an AEB system. As an AEB system it works okay. It could work better though, as the video demonstrates. But it works as well as anybody else’s.

There is some relevance here because the Tesla bull case is that someday Tesla will have the software figured out, flip a switch, and there will be millions of potential robotaxis on the streets, followed by millions more coming off the assembly lines.

But the video demonstrates that vision-only is insufficient for general L4 autonomy. That’s not necessarily bad, but it is a big step down from investors’ expectations.

Not a wall - a painting. On foam.

And not “LIDAR-equipped cars,” but “a LiDAR-equipped car” (singular).

And at that, a car that’s not even a true production model available for purchase, but one specially modified by Luminar, a company that is now bankrupt.

And, he tested Autopilot, not FSD.

And, this was months ago, and newer versions of FSD are out now.

I’ve driven the latest FSD, and I now own a lidar-equipped car, and hands-down FSD is the better system.

Anyone making investing decisions on that video is going to be disappointed in their portfolio’s performance.

1 Like

Demos, single-user experiences, and other small-sample experiences measure feasibility and capability, but not reliability.

Measures of reliability over several 10s of millions of miles are needed to validate autonomous vehicles.
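
Back-of-the-envelope on why the mileage matters, using the standard “rule of three” for zero-event data: with zero incidents observed over N miles, the 95% upper confidence bound on the incident rate is roughly 3/N. A minimal sketch, with target rates that are placeholders for illustration, not actual fleet statistics:

```python
# Rule of three: zero events in N miles -> 95% upper bound on the rate ~= 3 / N.
# Equivalently, to demonstrate a rate below R per mile, you need about 3 / R clean miles.
def miles_needed(target_rate_per_mile: float) -> float:
    """Incident-free miles needed to bound the rate below the target (~95% confidence)."""
    return 3.0 / target_rate_per_mile

# Illustrative targets only; rarer event classes push the requirement into the tens of millions.
targets = {
    "1 incident per 100,000 miles": 1 / 100_000,
    "1 incident per 1,000,000 miles": 1 / 1_000_000,
    "1 incident per 10,000,000 miles": 1 / 10_000_000,
}

for label, rate in targets.items():
    print(f"{label}: ~{miles_needed(rate):,.0f} incident-free miles")
```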

Thus far, in the US, only Waymo has provided evidence of reliability over many 10s of millions of autonomous miles, and it accrues millions more each month.

Tesla’s evidence thus far is near 0 miles for true autonomy. In 2025, despite all of the talk and hype, Tesla demonstrated another year of roughly zero autonomous miles.

Tesla hasn’t demonstrated reliable autonomy even for its corporate fleet in geofenced regions. It is farther still from delivering an autonomous vehicle for broad consumer purchase (ownership) for which Tesla assumes liability, except maybe under substantial restrictions (a strong maybe; I am not expecting much true autonomy for Tesla consumer vehicles anytime soon).

Tesla’s FSD capabilities are advancing, but they are not yet providing evidence of accruing meaningful autonomous miles on public roads, which is a massive step up in difficulty from supervised driving.

In the meantime, Waymo is expanding its cities and capabilities and measuring autonomous reliability at scale: adding northern cities with more inclement weather, going international, and moving onto highway driving and airport service. This is important because now we have data to benchmark a product against; we have one good measure of capability that is scaling.

2 Likes

Seems to me that the key point here … which keeps being mentioned, but then the discussion continues as if it hadn’t been … is that a test of Autopilot is not a test of FSD. Yes, FSD comments get interspersed, but the whole bit about Autopilot needs to be put in a context box.

2 Likes

Yeah, thanks for saying that. Again and again.

But the title of this thread, and the post to which you’re supposedly responding, are about camera vs. LiDAR, not touting real-world miles. Your real-world-miles point is a thing, for sure, but it’s not particularly relevant to the conversation at hand, and there are other threads where it would be better discussed.

What IS more relevant is this recent event:

It shows that hardware isn’t the issue for autonomy; rather, software is.

1 Like

Not sure how to validate this technology without real-world miles.

Anecdotes won’t validate much.

Waymo:
Multi-sensor (with cameras, lidar, radar), modular AI
10s of millions of miles of reliability data

Tesla:
Camera-based sensor suite, claimed E2E AI
0 miles of reliability data (rounded to nearest million)

It can be both, right? Let’s say the vision system fails due to overheating, short circuits, or moisture ingress, all of which are known to have happened. Can the car achieve a minimal risk condition in that state? If the car is a Tesla, the answer is no, it can’t. Which means that regardless of how good the software is or becomes, current Teslas can’t be L4 by definition, due to hardware limitations. And you can go down the list. Besides vision, there are a number of other hardware-related single points of failure that would prevent the car from achieving an MRC.

This isn’t much of a problem today because FSD is L2. It is assumed a human will take control of the vehicle in case of component failure, so camera-only is fine for the consumer market. But if your investing case depends on Tesla releasing a software update and instantly making millions of cars robotaxi-capable, then it is a problem. I don’t see any scenario where regulators sign off on a robotaxi unless the hardware meets SAE/ISO L4 standards, and current Teslas don’t.
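
To make the single-point-of-failure argument concrete, here is a minimal sketch. The subsystem list and the backup flags are assumptions for illustration only, not either company’s actual architecture; the point is just that any critical function with no independent backup blocks a guaranteed fallback:

```python
# Hypothetical inventory: subsystem -> (critical function it serves, independent backup?)
# The entries below are assumptions for illustration, not a real parts list.
CRITICAL_FUNCTIONS = {"perception", "compute", "power", "actuation"}

vehicle = {
    "camera_suite":  ("perception", False),  # vision-only, shared power/compute path
    "ads_board":     ("compute",    False),  # dual SoCs, but on one board
    "main_12v_rail": ("power",      False),  # single low-voltage supply
    "brake_system":  ("actuation",  True),   # assume a redundant actuation channel
}

def fallback_guaranteed_after(failed: str) -> bool:
    """Can the car still perform every critical function after this subsystem fails?"""
    function, has_backup = vehicle[failed]
    return has_backup or function not in CRITICAL_FUNCTIONS

for subsystem in vehicle:
    print(f"{subsystem} fails -> MRC still achievable: {fallback_guaranteed_after(subsystem)}")
```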

3 Likes

Err, no matter what the car is, it can’t drive safely without the vision system.

1 Like

Correct.

Incorrect. First, SAE’s definition of Level 4 doesn’t require redundancy, and it only mentions the word once, in an example case that’s not definitive. Second, Tesla employs redundancy throughout its system: multiple cameras with overlapping fields of view, dual CPUs, etc.

You need to read the actual SAE J3016 specification.

It is in the nature of predictions about a great many things that even the soundest predictions with respect to substance are vulnerable to errors with respect to time. Consequently, it is often easy and dramatic to make evaluations of the form “you only did X when you said you would do Y by now” (where X is sometimes zero) while ignoring that there has been very real progress toward the real goal. Such evaluations look dramatic in the moment, but they are meaningless in the overall scope of the prediction coming true in the end.

Seems like, given Musk’s history with the timeliness of his predictions and yet his record of accomplishment, we should by now have a multiplier to apply to the time scale of his predictions before we start proclaiming failure.

This is correct. Note: I did not use the word redundancy. SAE defines what L4 is, but it is almost entirely silent about how L4 is achieved. It does not prescribe requirements for sensors, architecture, etc.

However, J3016 does discuss in a fair bit of detail how an L4 vehicle must behave in the event of system failure. The clear and unmistakable rule is that the vehicle must be able to achieve a minimal risk condition without human intervention. There is no ambiguity on that point.

I’ve got a better idea. Instead of searching for key words, let’s read it together and let the reader decide if current Tesla hardware meets the L4 requirements.

First, a key definition from SAE (emphasis added): DDT Fallback. The response by the driving automation system (or human driver, depending on level) to either a system failure or a situation where the system can no longer perform the Dynamic Driving Task, including achieving a Minimal Risk Condition.

In short, if a system fails, there needs to be a Plan B, which they call the DDT Fallback. SAE helpfully gives us an example of what they mean by that (again, emphasis added):

A Level 4 ADS-dedicated vehicle (ADS-DV) that performs the entire DDT within a geo-fenced city center experiences a DDT performance-relevant system failure. In response, the ADS-DV performs the DDT fallback by turning on the hazard flashers, maneuvering the vehicle to the road shoulder and parking it, before automatically summoning emergency assistance (see Figure 7). (Note that in this example, the ADS-DV automatically achieves a minimal risk condition.)

Let’s use your example of the dual ADS processors. Both processors run on the same board. Let’s say the board itself fails while driving, which is known to have happened on Teslas.

So, dear readers: in the event of an ADS processor board failure, can the Tesla perform a DDT fallback by turning on the hazard flashers, maneuvering the vehicle to the road shoulder, and parking it? If not, the hardware isn’t L4-capable by definition.
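
For anyone who prefers pseudocode, here is the J3016 example above reduced to a sketch. The VehicleInterface methods are made-up placeholders, not any real API; the point is that the sequence has to run with no human in the loop, so whatever failure triggers it can’t also be what disables it:

```python
# Sketch of the quoted J3016 fallback example; all method names are hypothetical.
class VehicleInterface:
    def turn_on_hazard_flashers(self): print("hazards on")
    def maneuver_to_shoulder(self):    print("maneuvering to road shoulder")
    def park(self):                    print("parked")
    def summon_assistance(self):       print("emergency assistance summoned")

def ddt_fallback(vehicle: VehicleInterface) -> None:
    """Achieve a minimal risk condition after a DDT performance-relevant failure,
    with no human intervention. If the failed component is the only perception or
    compute path needed to run these steps, the fallback cannot execute."""
    vehicle.turn_on_hazard_flashers()
    vehicle.maneuver_to_shoulder()
    vehicle.park()
    vehicle.summon_assistance()

ddt_fallback(VehicleInterface())
```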

1 Like

The requirement isn’t “drive safely.” The requirement is to achieve a minimal risk condition without human intervention, which in most cases means safely pulling over and stopping.

Waymo has not published schematics, but based on publicly available information, Waymo uses multiple lidar units (overlapping coverage), multiple camera sets, radar, independent processing pipelines, and separate power domains.

So any one system can fail and the vehicle can still achieve an MRC. This isn’t hypothetical. Many or most state AV regulations incorporate the SAE standards nearly verbatim. As far as I know, Waymo has been able to obtain AV operating permits in every jurisdiction where it has applied, which means regulators agree Waymo can achieve an MRC in the event of a vision-system failure.

Tesla uses fewer cameras than Waymo, no radar, no lidar, and a single power supply. And of course Tesla has convinced zero regulators to issue AV operating permits, despite robotaxis being a major focus of the company since at least 2016.

To go back to my previous post, if a Tesla’s vision system fails (which is known to have happened) can it safely pull over and stop without human intervention?

If the answer to that question is no, then the investor case of a simple software update enabling millions of robotaxis goes away.

3 Likes