Tesla’s approach is already working. Hundreds of thousands of people use FSD daily. It is already very good and getting better every day.
The problem here is an excessive fascination with levels instead of recognizing that self-driving is a continuum of error rates. When the errors get rare enough that people feel comfortable climbing into a car with no driver, we call it Level 4, but one error too many and it is suddenly Level 2. So what are you going to shout when that one error is gone … a miraculous, unexpected transformation?
So, was the level 4 claim premature? Or is that error level acceptable for level 4?
And yet, there is nothing discontinuous about the error rate in the way that ice is different from water … there is just the company willing to call it level 4 and the passengers willing to ride without a driver … as the Cruise case illustrates.
This point is being hammered on, but it may simply reflect that the current testing is at a very early stage, and therefore it might be prudent to have a non-driver present. Would your position change completely if they started testing without that passenger? It shouldn’t.
Yes, it very much would.
By all available information, Tesla’s vehicles are very far from being able to operate without in-person monitoring. Admittedly, the available information is scarce. But crowdsourced data indicates that FSD goes less than 1,000 miles between critical interventions. We see the same with the Austin beta: there were several videos of issues requiring the monitor to jump in and interrupt what the car was doing, and the company subsequently indicated those errors occurred over only about 7,000 miles.
Neither figure is anywhere close to the many hundreds of thousands of miles between self-caused accidents that human drivers experience.
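To make the gap concrete, here is a minimal back-of-envelope sketch using the rough figures cited above. The 500,000-mile human baseline is an assumed midpoint of "many hundreds of thousands of miles," and treating a critical intervention as a proxy for a prevented error is itself a simplification, so these are illustrative orders of magnitude, not measured results.

```python
# Illustrative back-of-envelope math; all inputs are rough estimates
# from the discussion above, not verified data.

fsd_miles_per_intervention = 1_000        # crowdsourced estimate: "< 1K miles"
austin_miles_per_intervention = 7_000     # Austin beta, per the company's figure
human_miles_per_at_fault_crash = 500_000  # assumed midpoint of "many hundreds of thousands"

# Improvement factor needed to match the human baseline,
# treating each critical intervention as one prevented error.
fsd_gap = human_miles_per_at_fault_crash / fsd_miles_per_intervention
austin_gap = human_miles_per_at_fault_crash / austin_miles_per_intervention

print(f"Wide-release FSD: ~{fsd_gap:.0f}x improvement needed")    # ~500x
print(f"Austin beta:      ~{austin_gap:.0f}x improvement needed")  # ~71x
```

Even under these charitable assumptions, the Austin fleet would need roughly a seventy-fold improvement to match the assumed human baseline, which is the scale of gap the post is pointing at.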
Finally, there’s the fact that Tesla has chosen not to pursue further AV permitting in CA, stopping at the point where they have to disclose performance data. The implication is that such data would be bad for them to disclose.
If they can pull the monitors and safely operate the cars without humans doing realtime monitoring, we would know the cars had reached a level of safety vastly greater than any information today indicates. The gap between Level 2 and Level 4 isn’t one of degree, but of kind. The difference in error rates is incremental, but the difference in operating characteristics is a fundamental change.
Clearly, autonomous capability (the amount of human supervision required) is, or at least should be for careful implementations (which is perhaps where Cruise provides the cautionary example), highly correlated with error rate: less supervision implies a lower error rate.
However, I think most people, experts and laypersons alike, would agree that the difference in human supervision between L2 and L4 is a step change in the AI’s ability to monitor and supervise the driving process (discontinuous, in your phrasing).
The difference is almost certainly a step change in terms of level of human labor needed to run a taxi service and thus the economics of a service.
My point is that, yes, it may be a step change in functionality, but it is based on a continuous spectrum of how well the car manages its own navigation. Treating things like the monitoring passenger as the discontinuity distorts the reality of what is changing.
IMO, the term FSD has become a dog whistle.
IMO, the terms L2 and L4 have become dog whistles.
Each side hears the terms, builds its own silo fortress complete with straw knights, and proceeds to straw-joust for its POV.
I’m gonna pretend someone upthread asked, “why did Tesla choose L2, and why is Tesla sticking with L2?”
I asked ChatGPT what the fanbois say:
{Tesla fans and supporters—sometimes nicknamed “Tesla fanboys”—often explain Tesla’s decision to stick with Level 2 Full Self-Driving (FSD) over Level 4 autonomy with a few recurring arguments. These reflect a mix of pragmatism, regulatory strategy, and Musk-style optimism. Here’s a breakdown of the most common reasons they give:
1. “Regulatory Headwinds Are Too High for Level 4”
Explanation: Level 4 requires approval from regulators (like NHTSA) for driverless operation, while Level 2 does not.
Fan rationale: Tesla can roll out advanced driver-assist features globally without waiting for regulatory clearance by remaining Level 2.
Quote logic: “Why go L4 and deal with lawyers when you can deploy L2 and collect data now?”
2. “Staying at Level 2 Lets Tesla Collect Real-World Training Data”
Explanation: With millions of vehicles on the road using FSD Beta, Tesla gathers massive real-world datasets to train its neural networks.
Fan rationale: You can’t get that kind of scale or variety in closed geo-fenced Level 4 zones like Waymo.
Quote logic: “It’s like shadow mode—but in the real world. Level 2 = infinite data.”
3. “Level 4 Is Too Constrained for Tesla’s Vision”
Explanation: Companies like Waymo or Cruise need pre-mapped, geofenced cities. Tesla wants an “anywhere, anytime” autonomous system.
Fan rationale: L4 is a dead end if it only works in Phoenix or San Francisco.
Quote logic: “Tesla is solving the hard problem—generalized autonomy—not the easy one like Cruise.”
4. “Level 2 Legally Offloads Liability to the Driver”
Explanation: Level 2 means the human is responsible. Tesla avoids legal liability for crashes.
Fan rationale: Staying at L2 while making it feel L4-like gives Tesla the best of both worlds: performance + zero legal risk.
Quote logic: “Smart business. Why take the legal bullet if you don’t have to?”
5. “Level 4 Is a Crutch That Avoids Solving Real AI”
Explanation: Level 4 systems rely on HD maps, redundant sensors, and constant remote oversight.
Fan rationale: Tesla fans argue that’s not “real AI.” They claim Tesla’s end-to-end neural networks will eventually outperform L4 systems that rely on hand-coded logic.
Quote logic: “Level 4 is a patchwork; Tesla is building AGI-on-wheels.”
6. “Tesla Will ‘Flip the Switch’ to Level 4 Later”
Explanation: Tesla is just holding back FSD until it reaches a reliability threshold.
Fan rationale: Once performance is good enough, Tesla can flip from Level 2 to Level 4 via OTA (over-the-air) update.
Quote logic: “They’re just waiting for regulators and 10x safety. The software is already there.”
Counterpoints from Critics (for context)
“They call it FSD, but it’s L2 with marketing.”
“Tesla fans excuse delays with moving goalposts.”
“Calling supervised driving ‘self-driving’ is misleading.”
End ChatGPT info.
Me: this one REALLY resonates with me, especially given all the lawsuits Tesla is managing.
{ 4. “Level 2 Legally Offloads Liability to the Driver”}
The ChatGPT summary is pretty much what I remember from the discussion/YT a few years ago.
The critics’ counterpoints are also what I think I remember from that contentious time (and see today)
I would add that I see the critics also moving goal posts.
I asked ChatGPT how Elon described the choice to stick with “L2” as the label…
ChatGPT says Elon has never explicitly said.
ChatGPT says Elon is playing his cards close to his chest.
I, ralph, think this last point is true. Elon reveals what he wants to reveal, and, IMO, is not above feints and misdirections.
FWIW
ralph
I’ll agree with you here, except that I don’t really think “misdirections and feints” is descriptive enough.
I would go with “he lies and exaggerates” for his own (companies’) benefit, without regard to whether something is achievable within a reasonable time frame, or even true. That some investors have been handsomely rewarded does not negate the statement, in my view.
This in no way diminishes my admiration for his ability to push new technologies forward where others have failed; I felt the same about Steve Jobs, but was willing to admit to his many flaws and mistakes (product failures) along the way. (The difference, to me at least, is that I can’t think of a time when Jobs made such extravagant promises or mislabeled his products with attributes they did not have.)
[For the record, I have made more money with Musk than I ever did with Jobs: first by arbitraging the difference in price of Twitter vs. his bid, and now by shorting Tesla.]
It isn’t a valid question, rhetorical or not. Tesla said specifically they are not sticking with L2 and intend to roll out L4 to half the country by the end of the year (although they didn’t call it L4).
I also disagree Elon is playing his cards close to his chest. He made a very clear statement about where he expects Tesla’s L4 capability to be this year. Not much ambiguity at all.
I answered that question a long time ago. “Because it was not advantageous for Tesla to go to L4.”
It’s good for investors to understand the business they are invested in. It also applies when rooting for the competition.
The Captain
Which was at least plausible when Tesla was planning to “switch on” Level 5 autonomy for private owners through FSD finally being able to drive itself, anywhere.
But you can’t have a robotaxi service with Level 2. You have to have Level 4. Otherwise it’s just Uber.
So now that Tesla has an open beta of their robotaxi service, how is it still plausible that it’s “not advantageous for Tesla to go to L4”?
Reality is more important than labels. There is a reason why most lidar- and sensor-bloated companies have shut down or are in the process of shutting down. The tech does not work and is not scalable. Cruise, Argo, Uber, and Lyft all tried it.
Tesla Robotaxi is already beginning to expand to NY, NV, and FL. Yes, it is with a person in the car, but the car is self-driving. It is a prudent move.
Next steps will be determined by reality on the ground, not a theoretical argument of labels.
I’m holding out the possibility that this is a promise which will never come true - for anyone. (I also think it may come true, but there seem to be significant impediments.)
There are many technologies that were pursued that were never quite right, or that seemed “the next step” but just did not happen.
Supersonic air transport, for instance, was heralded as the future of aviation, but it didn’t happen. In spite of billions spent by Boeing, Russia, and Europe, only one service ever launched, and it was never profitable.
Theranos, videodiscs, the Segway, Google Glass, Clippy, hydrogen blimps, the Dymaxion Car, Microsoft Bob, and lots of others held big promise and came to naught. Some actually got to the state of an industry, only to quickly collapse due to economics, safety, consumer rejection, competition, or other issues.
It’s possible - I’m not saying likely, but possible - that this is a reach just a little too far for today’s technology, or any in the near future. The fact that it’s being (unprofitably) done doesn’t negate the argument: hydrogen blimps and the Concorde were a thing, right up until they weren’t.
I consider the possibility that Tesla’s “camera/software” only approach will never quite be good enough, and the Lidar approach too expensive or cumbersome, leaving us back in the days with human drivers a la Uber and Lyft.
I put this probably at maybe 20-25%, but I think it’s real.
Hundreds of billions of dollars are going into building out AGI. Very intelligent people like Jensen, Zuck, Elon, and Sam are betting on this. All evidence points to models getting better daily. The growth is exponential. The results are stunning.
Autonomy is coming. Drones are coming. Robots are coming.
Similar amounts went into Google Glass, the Concorde, and other examples upthread. That doesn’t guarantee success - but maybe it will.
I don’t care how smart people are, they still have failures. Edison, Ford, Tesla, Jobs all had failures.
Drones already exist. Robots already exist. Autonomy might. It’s close, but then so was the Space Shuttle, right up until it wasn’t.
No. They were all lab experiments.
AI / AGI is real. Feel free to live in ignorance.
They are absolutely different kinds of driving. Safely coming to a complete stop is an essential function of driving and Tesla L2 CANNOT do that. Period. Full stop.
Huh? This is an absurd statement.