Risk of Tesla camera-only self-driving

No, my point, again, is analogous predictions on replacing human abilities. From the Cotton Gin to robotics and now AI, humans are making non-human things that replace humans. Driving is but one more task to be replaced.

Ten years ago, what would most people think about having computers that easily pass the Turing Test?

The primary real-world discussion in 2015 centered on the aftermath of the 2014 event at the Royal Society in London, where a chatbot called Eugene Goostman (which used the persona of a 13-year-old Ukrainian boy) convinced 33% of the judges it was human, a result its organizers claimed constituted passing the test.

In 2015, the AI community widely critiqued this claim, arguing that the bot succeeded through “cheap tricks,” obfuscation, and grammatical errors consistent with a non-native teenage persona, rather than genuine intelligence or understanding. Critics emphasized that the low bar and the specific persona made it easier to deceive judges with non-answers or simple diversions.

Back in 1999, it took someone like Ray Kurzweil to predict AI robustly passing the Turing Test by 2029. Many doubted him. Turns out he was too pessimistic.

XPeng out of China is also cameras only.

Actually, I follow the company and its technology closely, so I do honestly believe I have better insights than most.

Which references an article claiming Tesla's AEB system needs multiple sensor activations. Which obviously isn't true today.

2 Likes

Let’s not conflate ideas.

These two things can be true:

  1. multi-sensor data is higher dimensional and more information rich than single sensor data
  2. multi-sensor is not required for autonomy

Item 1 is true.
Item 2 may or may not be true.
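Item 1 can be illustrated with a toy sketch. The feature sizes below are made-up numbers for illustration, not anything from a real perception stack:

```python
import random

# Hypothetical per-frame feature sizes, for illustration only.
camera = [random.random() for _ in range(1024)]    # vision embedding
radar = [random.random() for _ in range(64)]       # range/velocity returns
ultrasonic = [random.random() for _ in range(12)]  # near-field distances

# Fusing the three streams yields a strictly higher-dimensional observation
# than vision alone -- that's Item 1. It says nothing about whether the
# extra dimensions are *required* for autonomy, which is Item 2.
fused = camera + radar + ultrasonic
assert len(fused) > len(camera)
```

More dimensions means more information available, but whether a vision-only subset is sufficient is exactly the open question in Item 2.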

1 Like

That would be so cool if only it were true.

* **Radar and Ultrasonic Sensors:** Unlike Tesla's full vision-only approach, which removed radar entirely from new production vehicles, XPeng maintains radar and ultrasonic sensors across its product line. This approach utilizes radar for its ability to cut through adverse weather conditions like heavy fog or rain and provide precise object locations, which contributes to effective automated parking features.

So: XPeng: Vision, Radar, Ultrasonic
Tesla: Vision.

Like I said, one guy has decided he don’t need no stinkin’ extra input. OK. Maybe he turns out right. So far nobody else is lining up in that particular parade.

2 Likes

Yeah, Elon’s programming FSD all by himself.

Indeed, I was just showing how an article you referenced is wrong.

Whether it’s right on other aspects, I don’t know. Maybe.

As for “no one knowledgeable,” I guess the Cruise AV folks weren’t knowledgeable enough, as an accident caused by their vehicle trusting the LiDAR over the camera led to the company being shuttered.

1 Like

Yeah, so after all of what you wrote, this remains true, because it is true:

Define autonomy.

Define soon.

Oh, I’m sorry, I didn’t realize that re-quoting oneself proves the point. Got it. Too bad the folks at Cruise didn’t get the message.

And as we see, at least one other company, who calls their system “Pure Vision,” doesn’t think LiDAR is necessary.

What happened was that the Waymo vehicles were programmed, on encountering dead traffic lights, to “occasionally” phone into Waymo’s human-staffed remote center and ask for advice. Since there were so many dead traffic lights and so many Waymo vehicles, the remote center was overwhelmed, and the cars just sat there waiting to hear back.

None of Waymo’s redundancy or variety of sensors helped prevent this failure. Of course, only the cameras could see the traffic lights anyway - neither LiDAR nor radar nor USS can tell what color a traffic light is.

Waymo’s response is telling: rather than fix the general issue, they’re fixing only this specific problem for the future:

While our Driver already handles dark traffic signals as four-way stops, we are now rolling out fleet-wide updates that give our vehicles even more context about regional outages, allowing them to navigate these intersections more decisively.

The larger problem is what the vehicles do when they don’t get real-time support from the remote center. Current behavior is, apparently, to just sit there until the remote operator suggests a path. Fixing behavior on dead traffic lights is but one example of a larger problem, and who knows what the “context about regional outages” is and how that works?
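The failure mode described above can be sketched as a tiny decision function. This is my own hypothetical reconstruction, not Waymo's actual code; the function and state names are assumptions:

```python
from enum import Enum, auto

class Action(Enum):
    WAIT_FOR_REMOTE = auto()
    TREAT_AS_FOUR_WAY_STOP = auto()

# Hypothetical logic illustrating the reported behavior: on a remote-assist
# timeout, the vehicle keeps waiting instead of falling back to the onboard
# four-way-stop rule it already has for dark signals.
def dark_signal_behavior(remote_reply_received: bool, timeout_elapsed: bool) -> Action:
    if remote_reply_received:
        return Action.TREAT_AS_FOUR_WAY_STOP  # proceed per remote guidance
    if not timeout_elapsed:
        return Action.WAIT_FOR_REMOTE  # brief waiting is reasonable
    # Reported failure mode: with the remote center overwhelmed, the car
    # stays here indefinitely rather than degrading gracefully on its own.
    return Action.WAIT_FOR_REMOTE
```

A more robust design would return `TREAT_AS_FOUR_WAY_STOP` on timeout, which appears to be the direction of Waymo's fleet-wide update, but only for this specific scenario.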

Well, I said “driving autonomy” and “relatively soon,” so I find it telling that you mis-quoted me. And, no, I’m not going to define those for you. I’ll just prove my point by re-quoting myself:

1 Like

Interesting to note that at CES yesterday, Mercedes and Nvidia unveiled their L2++ autonomous driving solution, shipping in the CLA this quarter, and coming to Lucid by end of year.

It does not use LiDAR nor HD maps, although I suspect the future L3/L4 versions will.

1 Like

This is absolutely not true! The primary reason the project was ended is because GM didn’t want to fund the billions necessary to keep it going. And nobody else wanted to buy it (and fund it). The accident may have been used as an excuse to stop funding it, but it really was a money issue, not a technology issue.

Yes. But since you were talking about the SAE/ISO standard, what does the standard say about that? Does the standard consider it “a reasonable thing to do” (for a L4 system) in this case?

I believe you have it exactly backwards. The excuse GM gave, that robotaxis wouldn’t be profitable enough, flies in the face of Google and Uber and others investing heavily in that space.

The SAE spec doesn’t define what the DDT Fallback has to be, just that one has to be defined and implemented. Remember, the spec itself disclaims that it’s not a legal document, but a technical one.
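That point can be encoded in a short sketch. The class and field names are mine, not J3016's; this is just an illustration of "a fallback must be defined, but its content is the designer's choice":

```python
from dataclasses import dataclass

# Hypothetical encoding of the point above: SAE J3016 requires that a
# L3+ ADS feature have a defined DDT fallback achieving a minimal risk
# condition, but it does not prescribe what that maneuver must be.
@dataclass
class ADSFeature:
    level: int
    ddt_fallback: str  # e.g. "stop in lane", "pull to shoulder" -- designer's choice

def has_required_fallback(feature: ADSFeature) -> bool:
    # The spec's requirement is existence, not content: any defined
    # fallback satisfies it; which maneuver is "reasonable" is left open.
    return feature.level < 3 or bool(feature.ddt_fallback)
```

Under this reading, "stop in lane" and "pull to the shoulder" both pass the spec's existence requirement, which is why arguing about which one the standard *mandates* goes nowhere.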

Here’s Nvidia CEO Huang on Tesla’s FSD and comparison to Nvidia’s CES announcements:

Tesla’s FSD stack is completely world-class. They’ve been working on it for quite some time. It’s world-class not only in the number of miles it’s accumulated, but in the way it’s designed—the way they do training, data collection, curation, synthetic data generation, and all of their simulation technologies.

Of course, the latest generation is end-to-end Full Self-Driving—meaning it’s one large model trained end to end. And so… Elon’s AD system is, in every way, 100% state-of-the-art. I’m really quite impressed by the technology. I have it, and I drive it in our house, and it works incredibly well.

He goes on to explain that Nvidia is building a set of building blocks for other companies to pick and choose and integrate with.

1 Like

I’m reasonably certain nobody mentioned anything about investing based on that video. There’s no one left on this site making such highly flawed decisions.

FC

1 Like

Architect says only 10 billion miles needed to handle the long tail.

Almost there.

“Roughly 10B miles of training data is needed to achieve safe unsupervised self-driving. Reality has a super long tail of complexity.”

They’re at 7 billion. Tesla started collecting driving video in 2016, so roughly 9 years worth. (At the time the great prognosticator said it would take 6 billion, so new goalposts, again.)

But there are more cars each year, and more collection so it’s not a straight line thing. How long to get those last 3 billion miles? And how likely is it that the goalposts don’t move again?
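A back-of-the-envelope projection makes the point. Every number here other than the 7B/10B figures is an assumption for illustration; Tesla doesn't publish an exact collection rate:

```python
# Toy projection of cumulative fleet miles. The current-year rate and
# growth factor are assumptions, not published Tesla figures.
total_miles = 7.0e9    # reported miles collected so far
annual_rate = 1.5e9    # assumed miles collected per year right now
growth = 1.20          # assumed 20% yearly growth as the fleet expands
target = 10.0e9        # the stated goalpost

years = 0
while total_miles < target:
    total_miles += annual_rate
    annual_rate *= growth
    years += 1

print(years)  # years to reach 10B under these assumptions
```

Under these made-up assumptions the last 3 billion miles arrive in about two years; slower fleet growth stretches that out, and of course none of it helps if the goalposts move again.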

In other news…Mercedes enters the game:

Even those who don’t fully trust the Tesla system, which for now remains supervised (meaning you need to be ready to always take control) and has fewer redundant safety measures than legacy automakers deem necessary, are impressed by FSD’s capabilities.

So, we were super excited when Mercedes-Benz offered us a chance to ride along in a new 2026 CLA electric sedan with the automaker’s latest MB.Drive Assist Pro ADAS setup that has similar capabilities to FSD but with an even more impressive suite of sensors than what the Tesla system uses.

Meanwhile, back at the valuation table:

Tesla stock remains as controversial as ever at the start of a new year.

On Saturday, former Fidelity portfolio manager George Noble posted a sum-of-the-parts, or SOTP, valuation for Tesla on X, which suggested the electric-vehicle maker was worth $80 a share.

Using a “sum of the parts” methodology, he finds:

Tesla’s car business is worth $60 billion, or $18 a share, Noble wrote. Tesla’s battery storage business is worth about $20 a share. Those are Tesla’s main businesses. Then there are the AI-related businesses. If Waymo, [Alphabet’s](https://www.barrons.com/quote/stock/US/GOOGL?mod=ANLink) robo-taxi business, is worth [$100 billion](https://www.barrons.com/articles/bounce/bounce-waymo-tesla-robo-taxi-f091b108?mod=ANLink), based on private-market valuations, then Tesla’s is worth $30 a share, he estimates. If robot maker Boston Dynamics is worth $39 billion, then Tesla’s Optimus humanoid [robot program](https://www.barrons.com/articles/bounce/bounce-elon-musk-tesla-30-trillion-robots-5de8ff44?mod=ANLink) is worth about $12 a share.
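The per-share pieces quoted above do add up to the headline number:

```python
# Sum-of-the-parts per-share values as quoted from Noble's post.
parts = {
    "car business": 18,
    "battery storage": 20,
    "robotaxi (Waymo comp)": 30,
    "Optimus (Boston Dynamics comp)": 12,
}

total = sum(parts.values())
print(total)  # 80 -- matches the $80-a-share figure
```

So the $80 figure is just the arithmetic of those four comps; the debate is over the comps themselves, not the addition.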

Finally, and this is my favorite: Tesla is promoting its Cybercab, except oops, somebody else owns the trademark:

Move fast and break things is the typical Silicon Valley mantra. Unfortunately for Tesla, so is talking big before taking action in a timely manner (ahem, Full Self-Driving and Robotaxis). That second piece brought forth a slip-up which may have cost Tesla big: the Cybercab name.

A delay in filing paperwork led the United States Patent and Trademark Office to suspend Tesla’s application to register the Cybercab trademark ahead of the vehicle’s launch. The reason? Someone else beat them to the punch.

https://insideevs.com/news/783634/tesla-cybercab-trademark-suspended-patent/

I’m sure they’ll just buy it, but hey, fun money!

2 Likes

July '26

Unknown. That’s the issue with solving previously unsolved problems - you can’t accurately predict the timeline. Elon’s biggest problem is that he continues to make time predictions while the competition has learned to evade the question when asked.

First, this is more Nvidia than Mercedes, but Mercedes has been in the “game” for a while. They have one of the few regulatory-approved Level 3 vehicles for certain parts of Nevada and California, including lights outside the car to tell the cops that the driver can be looking at their cell phone.

1 Like

Waymo is solving and expanding autonomy with only a very small fraction of 10 billion real-world miles, suggesting the superior capability of Waymo’s

  • team
  • hardware
  • software
    (or some combination of above factors)

It also suggests the importance of simulation in accelerating development.

1 Like

Well, we still see enough Waymos doing bad things to intelligently guess that Waymos haven’t seen as many edge cases as needed.

Sure, simulation helps, but not with edge cases. The real world comes up with crazy behaviors, reactions, and scenarios that no group of 100 people could fully anticipate. Farzad talks about “the guy in a chicken suit crossing the road,” but there are other things, like a tow truck towing a tow truck towing another truck. Bicycles with wheels spinning, yet they’re mounted to the back of a car or RV.

I don’t disagree that Waymo has a pretty good system. It’s still slow to scale and it does still make mistakes, luckily none taking a human life (although it did kill a cat in SF recently). And it went onto light rail tracks with a train coming.

1 Like

I’m just the reporter. My “announcements” as you put it mean exactly zero. But regulations do matter.

Question: How many jurisdictions agree with you that Tesla vehicles can autonomously perform the maneuver you described?

Answer: Zero.

That’s what matters. The SAE/ISO standard is written in plain language and easily understandable. A reasonable person can read it and conclude that current Tesla vehicles cannot be L4 except in very limited circumstances.

Most states (maybe all?) have adopted the SAE/ISO standard language for their L4/robotaxi regulations.

Fill in the blanks. Don’t shoot me, I’m just the messenger.

2 Likes

The standard says assume a “minimal risk condition.” Stopping in the lane of travel on a city street seems to reasonably meet that condition.

In your opinion, what should the vehicles have done in order to meet the SAE/ISO standard?