Tesla, Full Self Driving, and the Wall Street Journal

Indeed. If Tesla is right - if the cameras are enough to let the car see as well as it needs to in order to drive in poor-visibility environments - then it will be fine. Human brains are really, really good at “filling in the gaps” in patterns, so we can function in low visibility where the information reaching our brains is very incomplete. If an AI needs a more complete sensory “picture” than cameras in low visibility can provide, it might be a different story.

And yet another incorrect analogy: One is related to environmental conditions, the other to vehicle modifications. They are not akin to each other.

You claimed knowledge of that “Tesla argued up and down [to NHTSA] that driver+AP was safer than just driver.” I asked for citations, and you admit you have none. Please stop making things up.

1 Like

You are missing the point.

As a human driver, if the fog is too thick to drive in safely, I still have control. I can choose to drive to the next exit (maybe at 10 mph) and wait it out at the nearest fast-food joint. If the robocar determines it can’t read the road, will I as the passenger have any means to tell it to keep going until we get to someplace that doesn’t potentially strand me for hours? I don’t know - but that is the issue as I see it.

Edit: on a related point, what does a camera-only car do in sleet and snow? My car has numerous cameras that can often be completely obscured by snow - but I can see just fine out of my window. What happens TODAY if that were a Tesla on FSD, and how does one fix that issue if it is a robotaxi? Will the passengers have to get out and clean off the cameras?

A valid point - I presumed that Tesla would have made the same argument to regulators that they do to the public, but they might have simply not done so. I shouldn’t have said that.

The larger point still stands. If all Tesla had to do was show that the system was safer than a pure human alternative, they could have done so - rather than go through the recall and a software update that (anecdotally) appears to have annoyed some of their customers. Which suggests that simply showing that your system is safer than pure human is not going to carry the day, else they would have done it.

Instead, if you look at the NHTSA report, the agency did what I’ve suggested they might do - they looked at what other peer systems in the marketplace were doing and their results, and determined that Tesla was an outlier in how it designed its system.

1 Like

Do you think it should agree to keep driving when it believes it cannot do so safely?

Yes, this might be irritating, but a lot depends on how good the car is. With current trends, it is easy for me to imagine the car being better than most or all drivers. If so, then when it thinks it can’t go on, I think you should believe it.

Forget sleet or snow. I’ve been using FSD for all drives recently. Today my car drove me to Costco and it was great: it pulled into the parking lot and then I parked it myself. But on the way home, it pulled out of the parking lot, drove about half a mile, and then there was a sudden torrential downpour while the car was stopped at a red light. During that stop, FSD suddenly disengaged and gave me the big red warning to take over the driving. I assume the camera images were unclear enough that the car determined it couldn’t see well enough to drive. I took over for about half a mile, and then the downpour slowed and I reengaged FSD and it drove all the way home very well. Obviously the current version of FSD, or the current cameras, is not ready for full autonomy (this was in a 2024 HW4 Tesla running the latest version of FSD software, v12.5.6.1).

2 Likes

Two things: I may be able to see better than the camera, and I don’t want to be stranded. Period.

I understand not wanting to be stranded. I also understand not wanting to die or be maimed. If the vehicle is equipped with a steering wheel and pedals, I can understand that there might be conditions under which FSD felt it could not proceed safely but I would think I could (rightly or wrongly). But if one is giving over driving to FSD, then I think one has to go along when it decides it cannot drive safely, no matter how irritating that might be.

See MarkR’s comment above where FSD shut off due to rain, though he could continue to drive.

If the cameras are no better in a robotaxi where we would not have that option, then it is a problem.

1 Like

Seeing better than the cameras doesn’t help the car drive safely.

For decades now, taxi and bus drivers have refused to drive when they thought conditions weren’t safe, including when they already had passengers on board and en route. Why should a robot driver be any different?

1 Like

Maybe better than one camera, but definitely not better than 7 (or 8 in some cases) cameras all “looking” at the same time in all directions.

I would not make a definitive statement either way. I can see how all those cameras can “see” better than a human. But I can also see how multiple cameras could be covered in snow, ice, or mud and FSD could decide to pull over and stop…but at the same time I could see clearly enough to proceed, even if only at 10 or 20 mph, to an Interstate exit and the safety of a place to park and have a meal.

Mike

2 Likes

Sure it does!

It is not uncommon for my backup camera to be completely obscured. As the driver, I can easily look over my shoulder as well as use my physical mirrors to see when my camera cannot.

I don’t think the question is really about the extreme conditions, but the middle.

Humans are really good at processing visual information to detect patterns. Computers aren’t necessarily as adept at that. So there can be scenarios where a human can “process” distorted or incomplete visual information better than a given computer system. That’s why text-based CAPTCHAs with distorted letters and numbers work (or worked) - they present visual information distorted enough that computers can’t “read” it but humans still can.

The computers in autonomous systems are really advanced, and optimized to process visual information and recognize patterns. So it’s a question at this point whether there’s a certain level of obscurity or distortion (like in fog or rain) where the incomplete visual information keeps the computer from “seeing” properly, even though a human still could.

Clearly, if there’s minor interference (say, 95% visibility) then both the human and the car can drive safely. If there’s maximal interference (blackout conditions or only 5% visibility) then neither the human nor the car can drive safely. The question is whether there are intermediate conditions (say, 30%? 40%? 45% visibility?) in which humans can still use vision to drive safely, but computers aren’t there yet.

The answer may be “no” - that in any scenario where visibility is good enough for a human to drive, the computer can also drive. If the answer is “yes,” then that may create a mismatch between passenger needs and the car’s capabilities, depending on how often those scenarios occur and how long they last.
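To make that “mismatch band” concrete, here is a toy sketch. It is purely illustrative - the single visibility score and both thresholds are invented numbers, not measurements from any real vehicle or vision stack:

```python
# Toy illustration of the "mismatch band" idea. The visibility score and both
# thresholds are invented numbers, not measurements from any real system.

HUMAN_MIN_VISIBILITY = 0.30    # hypothetical: a human can still cope down to ~30%
MACHINE_MIN_VISIBILITY = 0.45  # hypothetical: the vision stack gives up below ~45%

def who_can_drive(visibility: float) -> str:
    """Classify a visibility level (0.0 = blackout, 1.0 = perfectly clear)."""
    human_ok = visibility >= HUMAN_MIN_VISIBILITY
    machine_ok = visibility >= MACHINE_MIN_VISIBILITY
    if human_ok and machine_ok:
        return "both can drive"
    if human_ok and not machine_ok:
        return "human only"  # the mismatch band the post is asking about
    return "neither"

for v in (0.95, 0.40, 0.05):
    print(f"visibility {v:.2f}: {who_can_drive(v)}")
```

If the machine’s threshold sits above the human’s, the “human only” band is exactly the set of conditions where a robotaxi strands its passenger even though a person could have limped to the next exit.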

3 Likes

But this may be the crux of the overall problem. Because humans are human, the vast majority of them will overestimate their ability to drive safely (I’m pretty sure nobody can argue against this, based on the myriad facts out there showing it is true). Meanwhile, an AV system (Tesla’s or any of the others) will NEVER overestimate its capabilities because it is a machine, not a human. If the machine’s calculations say “not safe enough”, it’s not safe enough and it just won’t do it. It will figure out how to pull over in as safe a manner as possible and wait it out. Furthermore, humans change over time (usually for the worse); night vision is usually one of the first things to deteriorate, yet the human says “I’ve been driving well for 40-50 years, and I have tons of experience doing it”. Meanwhile, the machine logic doesn’t have this deficiency; if anything, it’ll likely get better with age. :rofl:

One of my kids took my Tesla to work earlier this week, to try out the FSD … and it missed the highway exit because it couldn’t get over in time! That’s a big failure: it had to take the next exit, go around, re-enter the highway in the reverse direction, suffer extra traffic, and spend another 7 or 8 minutes getting to work. And waste about an additional 2 kWh (minor, but still part of the cost of this failure).

1 Like

Really? I mean, it’s possible that the machine is programmed, or has learned, to do that kind of assessment. But there’s no reason it has to be the case. The AV driver might blithely keep driving in 45% visibility (or whatever), not aware that it is acting unsafely by operating in those conditions. Unless there’s something in its programming or training to get it to do a “not safe enough” calculation, why would it do one? How does it know that this level of visibility is where it should disengage?

It’s certainly possible that it might, but there’s nothing ineluctable about a self-assessment protocol being installed or emerging on its own from the training.
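For what it’s worth, a “not safe enough” check doesn’t have to emerge from training on its own; it can be an explicit runtime monitor layered on top of the perception stack. A minimal sketch of that idea follows - the names, thresholds, and structure are hypothetical, not how Tesla (or anyone else) actually implements it:

```python
# Hypothetical runtime self-assessment monitor layered on top of perception.
# Names, thresholds, and structure are illustrative only, not any vendor's design.

from dataclasses import dataclass

@dataclass
class PerceptionStatus:
    detection_confidence: float  # mean confidence of lane/object detections, 0.0-1.0
    cameras_obscured: int        # cameras flagged as blocked by rain, snow, or mud
    cameras_total: int

MIN_CONFIDENCE = 0.60          # invented cutoff
MAX_OBSCURED_FRACTION = 0.25   # invented cutoff

def safe_to_continue(status: PerceptionStatus) -> bool:
    """Return False when the monitor should request a takeover or a pull-over."""
    obscured_fraction = status.cameras_obscured / status.cameras_total
    return (status.detection_confidence >= MIN_CONFIDENCE
            and obscured_fraction <= MAX_OBSCURED_FRACTION)

# Example: heavy rain drops detection confidence and blinds three of eight cameras.
status = PerceptionStatus(detection_confidence=0.42, cameras_obscured=3, cameras_total=8)
if not safe_to_continue(status):
    print("Disengage: request driver takeover or execute a pull-over maneuver")
```

The point is only that such a check has to be designed and tuned by someone; nothing about the driving model itself guarantees it exists or that its thresholds are set correctly.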

1 Like

That seems to be missing something. What if the machine miscalculates, concluding that it is safe enough when it is not?

Not when it’s a driverless robotaxi.

1 Like

Hasn’t it been established that current cameras are actually better than human sight?