Risk of Tesla camera-only self-driving

The specification is an important thing. But the real world is also quite important. For the last few months I’ve been using Tesla’s latest version of FSD (14.2.X), and it is absolutely terrific. As far as I can recall, I haven’t had to intervene out of fear of a mishap for at least a year; the last time was probably mid-2024. I do, of course, sometimes intervene to choose a better lane, to change the route, or to “nudge” the car forward when I deem it safe and FSD hasn’t deemed it safe yet. And I often take over just before it chooses a parking spot, only because I have my own preferences regarding parking. But it parks very nicely in my driveway, at Superchargers, and pretty much anywhere else at this point.

I am willing to try other manufacturers’ driving systems, but so far I have seen none available here in the USA. I’ve heard that some Chinese cars have such systems, but none are sold in the USA yet. Does anyone know of cars other than Teslas in which you can enter a destination, press the “SELF DRIVE” button, and the car will take you there on its own? That is, cars available in the USA that I could test drive?

Are you ready today to let it take responsibility to drive you, your family and friends to your regular destinations, doing 100% of the driving all the time, every time, without any human supervision?

That is the autonomy question.

What do you think?

No car is completely fault tolerant to every possible hardware failure. Two can play your game.

Let’s say a front suspension A-arm in a Waymo I-Pace breaks due to some impact. Such failures have occurred in vehicles. Some I-Pace failures requiring towing are this and this and this. In such an event, the car cannot move. Does this mean Waymo I-Paces can never be L4 by definition?

2 Likes

The appropriate question isn’t whether a vehicle has hardware redundancy for every potential failure.

The question is whether a given vehicle, Waymo or Tesla or any other, has sufficient hardware redundancy to achieve sufficient autonomous driving capability and safety.

For Waymo, so far, maybe yes. For Tesla, there is no data yet; we are waiting.

1 Like

Trucks are not perfect either, yet they have redundant safety features like air brakes using two separate lines (in addition to the engine’s own braking), and self-driving truck manufacturers are building in redundant power, redundant sensors, and redundant computing to ensure safe operation in the event of a failure of one component.

It goes without saying that airliners have multiple systems: 3 computers, 3 separate power sources, and 3 hydraulic systems, any of which can take over in the event of the loss of another.
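As a loose illustration (my own sketch, not real avionics code), the classic way such triple systems mask a single fault is a 2-out-of-3 vote:

```python
from collections import Counter

# Triple-redundancy sketch: three independent units compute the same value,
# and a majority vote masks any single faulty unit. Purely illustrative.
def vote(readings):
    """Return the 2-of-3 majority value, or None if no quorum exists."""
    value, count = Counter(readings).most_common(1)[0]
    return value if count >= 2 else None

print(vote([525, 525, 913]))  # -> 525: the one faulty unit is outvoted
print(vote([525, 913, 101]))  # -> None: no quorum, escalate to a fallback
```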

I’d repeat myself if I noted that trains also have multiple systems for signaling, braking, and communication as well, but you probably already knew that.

The XPENG system has multiple hardware, software, and sensor systems (from what I have read), and Tesla has redundant chips and power but not redundant sensors, so it is (perhaps) deficient in that way.

Game over. I win. :wink:

1 Like

I’m getting more and more comfortable with it. Since version 14.2.X, I’ve allowed it to do 95% of the driving, and much of the other 5% was one of my kids, who used the car a bit but drove it manually instead of on FSD. There are still plenty of things I don’t like about it. For example:

  • I don’t like that when it backs out of my driveway (or other places), or when it makes a K-turn, it turns the wheels before the car is in motion. I believe that turning the wheels while stationary contributes to premature tire wear.
  • I don’t like that it uses the brakes much more often than I do when driving manually. When I drive the car on my own, I hardly use the brake pedal, perhaps once a week or less.
  • I usually don’t like the parking spots that it chooses in parking lots. However, the day before yesterday, it finally learned that when I shop at a particular supermarket, I like to back into the parking spots with chargers, if available. I was shocked when it drove straight to the chargers and pulled in!
  • I don’t always like the speeds it chooses on the highway (and I’ve tried all the modes available). In general, I prefer “standard” mode, but I would like it to drive 69 mi/hr in a 65 mi/hr zone; instead it drives 71 - 72 mi/hr.
  • I sometimes want to choose a specific maximum speed on certain roads. For example, there is a 30 mi/hr road nearby that I travel on quite frequently. I want the car to go no faster than 35 mi/hr on that road (it’s in a small city with very attentive traffic enforcement), but it’ll often get up to 37 if the traffic is flowing at that speed. (A sketch of the kind of per-road cap I have in mind follows this list.)
  • Sometimes I don’t like the lane it chooses, especially where there are multiple turning lanes to choose from. In certain places, the leftmost turning lane (for a left turn) is used by many drivers for U-turns. At those intersections, I prefer the second-from-left turning lane (unless, of course, I am making a U-turn myself).
  • Sometimes it chooses odd routing. And sometimes, as it progresses, it ignores that routing and chooses something better on the fly. But it doesn’t tell me that it’s going to change the routing, so it’s a guessing game until the last minute whether I will grab the wheel and force it to take what I consider the more rational route.
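To make those speed wishes concrete, here’s a hypothetical sketch of a per-road cap. Every name and number in it is mine; none of it reflects Tesla’s actual software:

```python
# Hypothetical per-road speed cap, layered on the posted limit. This is a
# wish-list sketch, not anything from Tesla's FSD internals.
DEFAULT_OFFSET_MPH = 4               # e.g. 69 mi/hr in a 65 mi/hr zone

ROAD_CAPS_MPH = {                    # roads where I want a hard ceiling
    "Main St (30 mi/hr zone)": 35,   # hypothetical road name
}

def target_speed(road_name, posted_limit_mph):
    """Posted limit plus my preferred offset, clamped by any per-road cap."""
    desired = posted_limit_mph + DEFAULT_OFFSET_MPH
    return min(desired, ROAD_CAPS_MPH.get(road_name, float("inf")))

print(target_speed("I-95 (65 mi/hr zone)", 65))      # -> 69, not 71-72
print(target_speed("Main St (30 mi/hr zone)", 30))   # -> 34, under the 35 cap
```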

But as far as safety goes? Overall, at this point, it is a safer driver than I am. I consider myself to be a very safe driver, but as I age, I simply can’t see as well and I don’t have as quick a reaction time as I did, and I can’t look 7 or 8 ways at once. So as to the question - “do I trust it to drive my family around?” The answer is a resounding yes. Do I trust it to drive alone (without a qualified driver in the car) yet? Probably not, but I’m getting close.

But I will give an example of something I would trust at this point. My wife is working today, and one of my kids is working today, but instead of each of them taking a car and leaving me with no car for errands/gym/etc., we gave the kid a ride to work. My wife is working till late tonight, but the kid finishes at 5 or 6 today. What will probably happen is that the kid will text and I will go pick them up from work (it’s only a couple of miles away). But if my car were capable, I would trust it to, later this afternoon, drive by itself to the kid’s workplace, park in the lot there, and wait for their workday to finish so they can easily get home. Just this simple functionality alone would be huge for households with multiple drivers who need the vehicle at different times of the day!

I think true autonomy is coming. Maybe not as quickly as many of us would like, but it’s coming. I also think there are a few interim steps between now and true autonomy that could help streamline transportation and make it a heck of a lot safer than it is today.

1 Like

Apologies. I didn’t want to include too many definitions in my post, as that gets cumbersome.

However, I’m afraid you are wildly misinterpreting the standard. There is nothing about being “completely fault tolerant” as you put it. The standard is the ability to achieve a minimal risk condition in the event of a failure of part of the ADS.

Is the suspension part of the ADS? No, it isn’t. Is the ADS computer part of the ADS? Yes, it is. SAE includes an example:

A Level 4 ADS experiences a DDT performance-relevant system failure in one of its computing modules. The ADS transitions to DDT fallback by engaging a redundant computing module(s) to achieve a minimal risk condition.
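Read literally, that SAE example reduces to a simple failover rule. Here’s a toy sketch of it, with module names of my own invention (this is not SAE’s or any vendor’s code):

```python
# Toy version of the SAE example above: a DDT performance-relevant failure
# in the primary computing module engages the redundant module, which then
# drives to a minimal risk condition (MRC). Names are illustrative only.
class ComputeModule:
    def __init__(self, name, healthy=True):
        self.name = name
        self.healthy = healthy

def select_active(primary, backup):
    """Return the module that should be performing the DDT right now."""
    if primary.healthy:
        return primary
    if backup.healthy:
        print(f"{primary.name} failed; {backup.name} engaged, seeking MRC")
        return backup
    raise RuntimeError("no healthy compute module; MRC unreachable")

primary = ComputeModule("module_a", healthy=False)   # inject a compute fault
backup = ComputeModule("module_b")
active = select_active(primary, backup)              # -> module_b takes over
```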

2 Likes

There is a twist here that is perverting the conversation. Yes, having the car drive safely includes doing something reasonable when something fails … but, hey, isn’t the bigger issue driving safely without failure most of the time?

Focusing on the Minimal Risk Condition is rather futile for FSD, because FSD’s way of addressing the unsolvable has been to turn things over to the human. Yes, this has to change to get true L4, but what are you evaluating right now?

One might note that all perception could fail and the car would still have a view of the world from a second before that failure, and that might be enough to pull safely to the side. And, if it was only a partial failure …

I’m evaluating the investor case of current Teslas becoming robotaxis:

The SAE standard is crystal clear that in the event of a failure of a portion of the ADS, the car must be able to achieve an MRC. Most states’ AV standards mimic the SAE language.

FSD works fine as a driver assist system, but Tesla can’t get to L4 with current hardware.

1 Like

Given SAE’s pattern of specifying what must be accomplished, not how, and of defining problems based on what actually happens, not what someone dreamed up, I think we can safely say that what L4 really means in practice is yet to be known. Tesla has some measure of redundancy, so I think the questions will be “what actually fails?”, “how frequently?”, and “how seriously?” We don’t know that yet, especially for Tesla.

1 Like

OK, I’ll grant that point. But you yourself don’t uniformly make that distinction:

Which, as you had earlier pointed out, is incorrect - SAE doesn’t say how the vehicle must perform, but rather only how the ADS must perform.

The SAE spec actually acknowledges the overlap in its definitions of separate “actors”:

This document also refers to three primary actors in driving: the (human) user, the driving automation system, and other vehicle systems and components. These other vehicle systems and components (or the vehicle in general terms) do not include the driving automation system in this model, even though as a practical matter a driving automation system may actually share hardware and software components with other vehicle systems, such as a processing module(s) or operating code.

So, yeah, it’s pretty easy to conflate the components being specified. I’d say the spec isn’t quite mature enough yet. Heck, sometimes it refers to a “driving automation system” and sometimes to an “automated driving system.” A technical point, to be sure, but a potential confusion that a mature spec wouldn’t have.

Moving on…

Incorrect.

Which part of Tesla’s vision system has overheated, developed a short circuit, or had moisture ingress? Or, why do you believe Tesla’s redundancy is somehow deficient compared to others? Do you believe that Waymo, for instance, can achieve a DDT Fallback if its vision system is completely taken out?

Mobileye has asserted that and more, FWIW.

The underlying tenet of much of this discussion is the mistaken belief that Tesla doesn’t have nearly enough redundancy. Maybe that’s because Tesla doesn’t write numerous blogs about it, so people assume it’s not there; but the redundancy has actually been there for years. For instance:

Tesla Hardware 3 (Full Self-Driving Computer) Detailed - AutoPilot Review.

One important tenet of the FSD Computer, and of the design of Tesla cars in general, is that they have fully redundant systems in case one of the systems fails. The FSD Computer actually has two computers running and is able to immediately shift over to the other if one fails. That’s in addition to redundant power supplies, steering controls throughout the car, etc.
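From that description, and assuming a heartbeat-style health check (my guess at a mechanism, not anything Tesla has documented), the shift-over might look roughly like this:

```python
import time

# Heartbeat-failover sketch of the dual-computer description above.
# The timeout, names, and mechanism are my assumptions, not Tesla's design.
HEARTBEAT_TIMEOUT_S = 0.1       # assumed deadline for a sign of life

class Watchdog:
    def __init__(self):
        self.last_beat = time.monotonic()
    def beat(self):             # called by the active computer each cycle
        self.last_beat = time.monotonic()
    def expired(self):
        return time.monotonic() - self.last_beat > HEARTBEAT_TIMEOUT_S

active, standby = "computer_a", "computer_b"
dog = Watchdog()
time.sleep(0.2)                 # simulate the active computer going silent
if dog.expired():
    active, standby = standby, active
    print(f"failover: {active} is now driving")   # immediate shift-over
```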

As TSLA hit new ATHs this year, those of us invested in TSLA actually won, whereas you, with your TSLA short, have lost. :winking_face_with_tongue:

Taking this in phases:

  1. Does anyone here believe in LiDAR so strongly that they invested in the now-bankrupt Luminar :sob:, or Mobileye (down 48% this year alone :cry:)?

  2. Is anyone here actually shorting or even just avoiding TSLA because of its supposed lack of hardware redundancy? That’s silly; do you really think Tesla isn’t aware of the J3016 requirements and/or hasn’t been talking with some regulators about what they’d require?

Moreover, does anyone here know the underlying physical architecture of the Cybercab in such detail that they know, or even suspect, that regulators will fail to certify the vehicles because of this supposed lack of redundancy?

Do you really think Tesla isn’t capable of modifying its vehicles to have enough redundancy? That’s the easy part of all this.

And, back to the thread title, does anyone still think Tesla’s camera-only system won’t be good enough to get approvals to run autonomously in at least one jurisdiction this year? And if you believe that, why?

1 Like

I think it can, for two reasons. One is the redundant systems described above. The other is that until recently Waymo did not operate on freeways, likely because achieving an MRC on the freeway requires safely moving out of the lane of travel. So Waymo has evidently been able to convince regulators in California that its vehicles have that capability in case of a system fault.

This is strictly speculation on my part, but in Texas Tesla uses a safety operator in the driver’s seat (as opposed to the passenger’s seat) when traveling on the freeway, probably for the same reason. Simply stopping the vehicle isn’t safe enough; you have to move it out of the way.
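To make that distinction concrete, here is a sketch of MRC selection by road type; the categories and maneuvers are my own simplification, not SAE or regulator language:

```python
# MRC selection by road type (my simplification, not SAE text). The point:
# on a freeway, stopping in the travel lane is not a minimal risk condition,
# so the fallback must be able to leave the lane entirely.
def choose_mrc(road_type, shoulder_available):
    if road_type == "freeway":
        if shoulder_available:
            return "pull onto the shoulder, stop, hazards on"
        return "continue at reduced speed to the next exit or refuge area"
    return "stop in lane, hazards on"   # often tolerable on low-speed streets

print(choose_mrc("freeway", shoulder_available=True))
print(choose_mrc("city street", shoulder_available=False))
```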

I believe the Cybercab is almost certainly designed to meet the SAE standards. It would be a major misstep if that weren’t the case.

Edit: I meant to respond to this part as well:

Tesla’s own service documents say this happens. Also, 200,000 vehicles were recalled due to computer problems causing failed rear cameras. There’s plenty of discussion on Reddit regarding camera failures as well.

To individual cameras. Not to the entire vision system. With overlapping fields of view, the Tesla vehicle with a single failing camera will, like Waymo, be able to perform a DDT Fallback.
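Here’s a small sketch of that overlapping-coverage argument; the camera names and angles below are invented for illustration and are not Tesla’s actual geometry:

```python
# Overlapping-FOV coverage check. Camera names and angles are invented;
# this is not Tesla's actual camera layout.
CAMERAS = {                  # (start_deg, end_deg), clockwise from dead ahead
    "front_main":   (-60, 60),
    "front_backup": (-45, 45),    # deliberately overlaps front_main
    "left":         (30, 180),
    "right":        (-180, -30),
    "rear":         (150, 210),   # 210 wraps around to -150
}

def fully_covered(cameras):
    """True if every 1-degree heading is seen by at least one camera."""
    return all(
        any(lo <= deg <= hi or lo <= deg + 360 <= hi
            for lo, hi in cameras.values())
        for deg in range(-180, 180)
    )

working = dict(CAMERAS)
del working["front_backup"]       # simulate a single camera failure
print(fully_covered(working))     # -> True: the overlap still covers 360 deg
```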

Answers to the questions:

  1. No, no strong opinion on lidar, or any particular sensor, only interested in evidence of what works best.
  2. No, not shorting or avoiding based on lack of hardware. I have shorted in the past based on declining revenue, declining EV sales, declining net income, an astronomical P/E, and failure to deliver anything close to claims. I acknowledge defeat here, as the stock went up in the 2nd half of last year regardless of business performance, but we all know stocks can move a lot in different directions regardless of operational and business fundamentals. No opinion on Tesla and J3016.
  3. No opinion on what regulators will approve. I’m more interested in what actually works autonomously.
  4. No.
  5. Again, no strong opinion on what regulators approve, only on what works and will be actually deployed and meaningfully scales. I made predictions for 2026 here:

I’d be very surprised if Tesla sells any meaningful quantity and mileage of autonomous vehicles for retail ownership for which Tesla assumes liability for the driving in 2026.

I’d be surprised if Tesla can begin any meaningful autonomous scaling of the taxi business in the first 6 months of 2026, or meet a modest goal of 3 cities in 3 states by EOY 2026 (and hence demonstrate scale and geographic and regulatory generalization by end of year).

These are very modest goals (see prediction thread for fine print) compared to claims that have been made over and over and continue to be made.

Please feel free to add your 2026 predictions.

2 Likes