Teslas depreciating like a Yugo

I don’t think that’s accurate. I think you’re overstating FSD’s capabilities. As IGU pointed out, FSD does a lot of amazing things - but it’s still at best an assistive device for an attentive driver. My understanding is that it still has a relatively high rate of driver interventions - instances where the AI decides that it can’t/won’t manage the driving in that specific moment, and reverts control back to the human driver. It would be incredibly dangerous (and of course, completely illegal) to turn the car on, give it a destination, and get in the back seat and let the car take you there. FSD cannot consistently perform automatic driving - it still requires intervention. AIUI, it might not require intervention on every single trip (which can result in some amazing-looking YouTube videos), but it requires intervention often enough that it cannot be safely used as anything other than driver assist.

As for Tesla converting vehicles on the road to “robotaxis,” they lack certain features that I think will be absolutely necessary for a robotaxi to operate. The doors can’t be closed remotely (except maybe the falcon-wing doors on the X?). The interior camera can’t see the entire interior volume (it can’t see much of the rear cabin, the footwells, or the rear cargo space). There can be aftermarket fixes to those things, of course - but it’s probably much more efficient to use a purpose-designed robotaxi, if they ever manage to solve self-driving.

Is current FSD more risky than the average human driver? Probably. How much more so? I know I don’t know and I’m pretty sure you don’t know. No one has a clue unless they are driving a Tesla HW4 with the latest FSD version set at the most conservative driving level.

I wouldn’t be surprised, though, if the latest FSD with the latest hardware, set at the most conservative setting, were safer than the average human driver for the large majority of driving scenarios.

If true, it doesn’t mean it should be allowed for general use without regulations. But it would suggest that the car is capable of self-driving under most situations.

Somewhat. Tesla doesn’t release data like intervention rates. All you have are anecdotal reports from people using FSD (like IGU upthread, and another linked at the bottom of this post) that make it clear that FSD is not ready to be used as anything other than a Level 2 system.

Perhaps - but so what? It’s possible for an AI to be safer than the average human driver for the “large majority of driving scenarios” and still be far too unsafe for anything other than driver assist. As this driver reports, it can be “very very good” when it works, and “horrid” - and dangerous - when it doesn’t. For it to actually be used as automatic driving, it has to be able to safely handle virtually all driving scenarios. There might be some really weird edge cases out there - but FSD has to be improved to eliminate the phantom swerving and the problems with advancing into heavy cross traffic from a stop sign that this article describes before the driver can safely stop being the one driving the car.

It’s not as advanced as you seem to think it is, btresist.


It matters because it means that FSD is capable of self-driving but needs to improve in consistency and edge exceptions. Those incremental improvements seem plausible to me, particularly with neural network/machine learning where such improvements come from experience rather than having to write complex code. This also puts Tesla far ahead of its competitors who are basing their self-driving on a more heuristic coding approach, which appears to only be effective under very limited conditions.

In addition, because FSD is designed to work under all conditions, it has substantial value even in its current form as a driver-assist program. Its use backed up by an attentive driver is probably the safest form of car travel available outside of a police escort.

Is that a legal opinion? In any case, I am not sure why you assume this. I think FSD is capable of going from a to b without intervention, but it doesn’t do so all the time. So the issue isn’t whether it can drive safely on its own but rather whether it can learn to do so more consistently. At what point am I assuming too much?

You and others appear to believe self-driving will never happen, that the general public will never accept it. On the other hand, the number of locations willing to try driverless robotaxis appears to be increasing, as is the number of Tesla owners willing to pay thousands for FSD. Go figure.

No it doesn’t. Consistency is a core requirement of self-driving. For example, a car that successfully stops at 99.9% of red lights is dangerously unsafe to operate as self-driving - because it would run a red light far too often to be safe.
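
To put rough numbers on that - the red-light count, driving days, and fleet size below are all made up, just to show the scale of what a 99.9% success rate actually means:

```python
# Pure illustration with made-up numbers: a 99.9% stop rate still runs
# red lights regularly once you scale it up across days and a fleet.
stop_success_rate = 0.999      # assumed: stops at 99.9% of red lights
red_lights_per_day = 8         # assumed red lights encountered per day
days_per_year = 365
fleet_size = 100_000           # assumed fleet size

runs_per_car_per_year = (1 - stop_success_rate) * red_lights_per_day * days_per_year
print(f"Red lights run per car per year: {runs_per_car_per_year:.1f}")
print(f"Across a {fleet_size:,}-car fleet: {runs_per_car_per_year * fleet_size:,.0f} per year")
```

That works out to roughly three run red lights per car per year - which no one would call safe self-driving.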

Again, Tesla doesn’t release actual figures - but the system disengages and/or engages in erratic behavior (like phantom swerving) in far more cases than “edge exceptions.” It’s not capable of going from a to b with enough regularity that it would be safe to let it drive itself without human supervision.

Whatever this means for FSD’s capacity to improve in the future, FSD is not capable of either automatic or autonomous driving today. If you take a Tesla and try to let it drive you around for a month or two while sitting in the back seat, you’re almost certain to end up in an accident, and perhaps a very bad one at that (not advice - this is illegal whether you get in an accident or not).

Again, if it can’t drive safely on its own consistently, then it can’t drive safely on its own. To use some round made-up numbers, a car that goes to various destinations 99% of the time without an intervention, but disengages and kills a pedestrian the other 1% of the time, is not safe.
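
To show how far that made-up 1% is from any acceptable baseline - the trip length here is an assumption, and the comparison figure is the commonly cited US rate of roughly 1.3 traffic deaths per 100 million vehicle miles:

```python
# The post's round made-up numbers, compared against the commonly cited
# US baseline of roughly 1.3 traffic deaths per 100 million vehicle miles.
fatal_failure_rate_per_trip = 0.01    # the hypothetical "1% of trips" case
avg_trip_miles = 10                   # assumed trip length
human_fatalities_per_mile = 1.3 / 100_000_000

system_fatalities_per_mile = fatal_failure_rate_per_trip / avg_trip_miles
ratio = system_fatalities_per_mile / human_fatalities_per_mile
print(f"Hypothetical system: {system_fatalities_per_mile:.3f} fatalities per mile")
print(f"Human baseline:      {human_fatalities_per_mile:.2e} fatalities per mile")
print(f"Roughly {ratio:,.0f} times worse than the baseline")
```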

I think you’re assuming that FSD has a pretty low disengagement rate. I think that’s wrong. I’m sure it’s much better than it was a year ago - and the article below represents an attempt to generate the sort of data Tesla doesn’t release, on a limited set of observations. But at the time it showed only a dozen or so miles between disengagements and driver interventions (which would basically be a failure every other trip, if not more often). It’s almost certain that the present system is better, and may go quite a ways before running into a situation the AI can’t handle. But from the handful of assessments I’ve seen on the interwebs (and again, as IGU pointed out upthread), FSD is simply not ready to regularly drive itself in a city. It needs a human far too often.
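
A quick back-of-the-envelope on what “a dozen or so miles between disengagements” means per trip - the typical trip length here is my assumption:

```python
# Back-of-the-envelope: roughly a dozen miles between disengagements
# (the older figure from the article linked below) vs. an assumed trip.
miles_per_disengagement = 12    # figure from the tracking effort cited below
avg_trip_miles = 6              # assumed typical trip length

trips_per_disengagement = miles_per_disengagement / avg_trip_miles
print(f"About one disengagement every {trips_per_disengagement:.1f} trips")
```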

Consistency would be a nice attribute for a self-driving vehicle, but it is not part of the definition of self-driving. Well it is for you I guess, but yours is an arbitrary opinion.

I think we all agree that a drunk driver is unsafe with inconsistent driving behavior. Perhaps even more so than FSD. Yet I think we would all also agree that a drunk driver is self-driving. Safety and consistency are not part of the definition of self-driving.

You are missing my point. The fact that FSD can sometimes get from a to b without intervention means that it can do the task; it just has to learn consistency, which with neural networks should come with repetition and experience. It is more likely to reach a high level of performance than, say, Waymo, which limits its self-driving to a small, regulated environment.

Analogy warning! One can play tennis very conservatively, just tapping the ball back. Fewer unforced errors but the ceiling for improvement is low. That’s the Waymo equivalent. Compare that with a player who hits the occasional top spin shot. Longer learning curve, but much higher ceiling of play. That’s Tesla.

It is all about the details. My wife hasn’t had a traffic violation in 10 years. Yet I still find myself subconsciously pressing imaginary brakes when she drives. Those would be unnecessary disengagements based on different styles of driving that are not related to safety.

I agree, but that is a different issue from whether FSD is capable of self-driving. I am not ready to play at Wimbledon, but I am still capable of playing tennis.


I think they are, but I think we’re also just using the terms differently. A drunk driver isn’t in any condition to drive. He’s legally prohibited from driving, because he cannot operate the vehicle safely enough for him to be permitted on the roads. “Driving,” IMHO, implies the ability to operate the vehicle sufficiently not just to get from a to b, but to do so in a manner that more or less follows the rules of the road and doesn’t present an enormous risk to either passengers or people outside the car. A vehicle that can physically move from a to b but doesn’t know what stop signs or traffic lights are (and therefore runs every one of them) isn’t capable of self-driving.

But I see your point. If the vehicle can get from a to b, even if it did so unsafely, it’s still doing the thing. But I don’t think a car that went from a to b without at least minimally avoiding accidents and injuries would be considered “self-driving” by too many people, any more than too many people would consider a really drunk person as being capable of driving themselves home.

Even then, though - is FSD even at that low standard yet? We obviously don’t have any real data, because Tesla doesn’t release information on things like disengagement and intervention rates. But the average American drives about 37 miles a day. Can FSD do 37 miles of trips without an intervention? If FSD can’t do a day’s worth of driving without requiring a human - even ignoring safety - would you regard it as self-driving?
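
Here’s a rough sketch of how a day’s driving plays out at various disengagement rates, treating interventions as random per mile (that’s an assumption, and the rates themselves are made up):

```python
import math

# Chance of getting through an average day's driving (~37 miles, per the
# figure above) with zero interventions, modeling interventions as a
# Poisson process; the per-mile rates are assumptions.
daily_miles = 37
for miles_per_intervention in (12, 50, 200, 1000):
    p_clean_day = math.exp(-daily_miles / miles_per_intervention)
    print(f"{miles_per_intervention:>5} miles/intervention -> "
          f"{p_clean_day:.0%} chance of an intervention-free day")
```

Even at 200 miles between interventions, roughly one day in six still needs the human - fine for driver assist, not fine for an empty driver’s seat.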

I want to explain once again that this is not true. At all. Adding a DIY rear camera is a single subsystem that isn’t connected to anything other than a display via a wire. And it has other issues as well - aiming, movement, various failure modes, etc. But the biggest issue is that it isn’t connected to the “brain” that processes all the images coming in every few milliseconds to determine what actions the autonomous driving engine (“engine” as in software) needs to take. Adding a front bumper camera is a VERY COMPLEX task, mechanically complex (for example, it needs to be mounted properly to point in the right direction), structurally complex (it can’t move around too much when the bumper is tapped by another vehicle in a parking lot), architecturally complex (it needs a proper wiring harness), and most of all it needs to be connected to the main MCU that processes all video coming in (as described above). I am pretty sure that the Tesla MCU3 (and definitely MCU4) has additional video inputs. But I am not sure if the processor has the bandwidth to process those additional inputs yet (or ever in MCU3). MCU4 can very likely process more inputs, and I wouldn’t be surprised to see new models come out with this additional camera properly connected and processed.

Even for Tesla to add it to a previously sold car would be a huge undertaking, and would be extremely expensive. And may not even be possible due to the structural limitations of current bumpers. At best, if they attempted it (and I bet they have attempted it already), it would be a substandard solution to the problem, and Tesla, in general, despises substandard solutions.

I’ve used recent versions of FSD (the beta versions with all the bells and whistles), and it can NEVER EVER do this. Try it yourself - get in the car, enter a destination, and tap the stalk down twice to enter FSD mode. What does the car do? NOTHING. It will not start moving. Only the driver can do that, and then later engage FSD. That means that the car always requires a driver. In my case, I need to pull out of my driveway (whether facing forwards or backwards), then drive up my street, and only then can I engage FSD. Two thirds of the time it’ll take over at that point, but the other third of the time, it requests that I drive a few more tens of yards (two turns to exit my housing development) before it’ll engage FSD.

Similarly, when it arrives at the destination, FSD will disengage RIGHT IN THE MIDDLE OF THE STREET in front of the destination address! Again, the driver has to take over, drive into the parking area, and park. Even when FSD can handle the complete drive on its own - and I’ve experienced many such cases (trust me, it’s like magic, and it thrills me every time; everyone should experience it*) - FSD is not the “F” in its name: at a minimum it requires a driver at the start location and at the end location. So when so many people (like Elon Musk, for example) say that the current fleet of a few million Teslas can easily be converted to a fleet of robotaxis, I have very strong doubts that that is probable or even possible.

Current Model 3 and Y production - the VAST majority of all Teslas - doesn’t include a radar module. The folks in this article probably looked at an X or an S model.

* Even TACC is kind of like magic. I drove 300+ miles yesterday mostly with TACC, and it makes driving so much more pleasurable and relaxing. But FSD is truly magical. I took a family member on a ride a while back and they were blown away. They were scared at first, but a few miles in they had already gotten used to it. By the end of the trip, they were just amazed. It was about a 17-18 mile trip on normal streets with stoplights and stop signs, with left turns and right turns, a merge, and one unprotected left turn. It did quite a lot of “good driving” and one or two instances of “not so good driving” (too much hesitation and creeping up in one case, and a too-quick right turn in another). But I only had to take over once (in addition to the start and end of the trip) in those 17-18 miles, and that was only because of my own impatience; the car would have figured it out eventually. It’s just surreal to sit in the car and see it stopping, “looking”, and then turning, all on its own!
