Tesla, Full Self Driving, and the Wall Street Journal

They haven’t. The SAE isn’t the NHTSA (or any state highway regulatory authority). The SAE defines a Level 5 system as one that can drive as well as a human. That’s not the same as granting regulatory approval to a system that drives as well as a human.

I suspect many (most?) states will be reluctant to allow driverless vehicles unless they are safer than the average driver. Specifically, the AV will have to be safer than the average sober adult driver - and since the driving population includes lots of drunks and teens (both groups causing disproportionately high numbers of accidents), an average sober adult will be better than the average driver.
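To make the arithmetic concrete, here’s a quick sketch with made-up numbers - the mileage split and per-group crash rates below are purely illustrative assumptions, not real statistics:

```python
# Illustrative assumptions only - not actual crash statistics.
# Each group: (share of total miles driven, crashes per million miles)
groups = {
    "sober adults": (0.80, 1.0),
    "teens":        (0.15, 2.5),
    "impaired":     (0.05, 5.0),
}

# The "average driver" rate is the mileage-weighted mean across groups.
average_rate = sum(share * rate for share, rate in groups.values())
print(round(average_rate, 3))  # 1.425
```

On those assumed numbers, an AV that merely matched the “average driver” (1.425) would still be roughly 40% worse than the typical sober adult (1.0).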

And again, that’s just not how these types of product safety regulation regimes typically operate. If you have a driverless system that’s safer than a human but has a design flaw that hurts people, and that flaw has a known fix, they’re going to make you fix it. They’re not going to let you keep selling a product with a defect just because the product saves lives on net. They’ll make you fix the defect.

I don’t know whether NHTSA will determine that Tesla’s “vision-only” system has a design defect. They might not. But they’ve shown no hesitation about making Tesla change other aspects of its ADAS software package through recall orders, so they’re perfectly comfortable labeling an aspect of those systems a “defect” even though the system overall is arguably safer than a human alone.

I guess you didn’t read the CA DMV paper I linked. It specifically incorporates the SAE J3016 paper into its regulations:

SAE International’s Taxonomy and Definitions for Terms Related to Driving Automation Systems for On-Road Motor Vehicles, standard J3016 (APR2021), as may be revised, which is hereby incorporated by reference.

You keep inventing new standards (now “sober”) that regulators will supposedly apply, when there is no evidence anything of the sort will happen.

No, as my example of ABS systems performing worse than non-ABS braking under certain conditions shows. No regulator is out there making companies “fix” their ABS systems to brake better on gravel or light snow (locked wheels plow up a wedge of loose material that shortens stopping distance, and ABS prevents exactly that locking). Why not? Maybe because on net the systems save lives.

And no, as my example of AEB regulations shows, regulators are not very worried about system performance in conditions like fog, where something like radar would probably do better than cameras or LiDAR.

You can continue to speculate that regulators will label a lack of specific hardware in FSD as a “design defect,” but that’s all it is: your speculation. Today it’s a Level 2 assistance system and, like the flawed cruise control systems before it, it requires supervision - so a lack of 100% performance is not in itself disqualifying.


BTW, it’s not just Tesla under investigation:

These investigations happen all the time, and rightfully so. They sometimes end with no action, sometimes a recall.

This one ended up with a Waymo software update:
https://www.washingtonpost.com/business/2024/06/12/waymo-software-recall/

What I find interesting is that mapping is partially to blame:

The company…said it made the decision to fix a “mapping and software issue” through a software update

Waymo itself said:

“in certain situations, our vehicles had an insufficient ability to avoid collisions with on-road narrow, permanent objects within the drivable surface”

I get the software-change aspect, but mapping? Does every permanent object, like a telephone pole, need to be pre-mapped? If a pole is installed one night, will Waymo vehicles hit it the next day until it’s actually mapped? That would be odd, to say the least. Hopefully Waymo is referring to its mapping tech, not the actual maps of objects it retains.


It’s a definition, not an approval. It doesn’t say anything that’s equal to a human driver is approved - it says that the definition of a Level 5 system is something equal to a human driver. Unless there’s something else in the regs that maybe I missed?

Probably because there isn’t a fix. Is there a fix?

If everyone else has an ABS system that brakes effectively on gravel or light snow, and my ABS system doesn’t brake effectively on light snow because of the idiosyncrasies of how I chose to design the system, I don’t think the regulators will let me keep my design. If no one has a system that solves the problem, they obviously won’t consider something defective for lacking something that doesn’t exist. If it’s possible to avoid the problem, and other manufacturers do avoid it, then it’s more likely. And if there’s any easy solution to the problem that you’re choosing not to implement, they’re not going to let you keep the problem just because the overall system is safer than nothing - they’ll make you fix it.

If FSD has difficulty in low-visibility environments, and all the other ADAS systems are able to avoid that difficulty because they have imaging radar, NHTSA isn’t going to stop its review just because FSD+human is safer than human alone. They’re going to look at whether Tesla’s choice not to include imaging radar is a design flaw. They may very well conclude that it’s not - that may even be the most likely outcome. But the fact that FSD+human is safer than just human is not going to be dispositive.

You’re effectively proposing that regulators will demand systems better than SAE Level 5.

Are you saying incorporating LiDAR is an “easy solution”? Because there’s nothing easy or inexpensive about it.

Safer than human is the de facto and stated bar for such systems. Your insistence otherwise is contradicted by precedent and by published regulator discussions.


Of course, there are two very different environments: Tesla, where FSD is doing the driving but a driver is available (for now) to handle any problems, vs. Waymo, where there is no driver in the car.

I don’t think so, and neither am I. It’s rare for a regulatory body to specify technology; they look at performance and evaluate on that basis. They don’t say whether your crumple zones should be aluminum or steel, carbon fiber or shredded wheat - they specify how much deceleration must be achieved, to what effect, in what period of time.

However, I suspect that - even with regulatory approval - if a manufacturer goes ahead with a single system when a multiple system would work better (in some circumstances), you are going to see a cargo ship full of lawsuits every time one of those cars gets into an accident that might - or even might arguably - have been prevented by the alternate system.

Do you equally complain about Porsche calling its top-of-the-line BEV a “Turbo”? There’s no turbocharger, nor is there any goal of installing a turbocharger.

Please. “Turbo” is a body style which has come to be associated with a particular look of high-end Porsches. Other manufacturers have used the term; no one has complained, nor, to my knowledge, has Porsche tried to stop anyone else from doing so. It may have started off as an engine modification, but the term has become pretty generic.

We need to keep in mind that there is a very, very important distinction between vehicles where a driver is present vs. not. With a driver present, the system can say “I can’t drive in these conditions” and let the driver decide what to do. Without a driver, it may still need to make the same kind of decision, though not necessarily under the same conditions - but if that happened very often, there would be a lot of p*ssed off people.

Sure. I mean, isn’t that what everyone expects? That regulators aren’t going to approve autonomous systems that are no better than human drivers - it won’t be until autonomous systems are inarguably much better than human drivers that they’ll allow them to be utilized?

But more specifically - yes, if there is a dangerous failure case that could easily be avoided by modifying the design, they’re not going to let you keep experiencing the dangerous failure case. They’ll make you fix the design, even if that involves going from “safer than human” to “even safer than human.”

No, I’m not. LiDAR is expensive and difficult. But radar is not as expensive or difficult as LiDAR. AFAIK, the other autonomy systems currently utilize all three sensor types (and maybe some ultrasonics). Tesla used to have both radar and ultrasonics. If the only “solution” is LiDAR, then I expect NHTSA wouldn’t rule it a defect not to have it unless there was a big safety delta. But if the solution space involves something cheaper, like radar, then they might do something about it.

It’s not a “de facto and stated bar” for those systems - it’s just the definition of what a Level 5 system is. If a regulation defines a “car” as “a motor vehicle designed primarily for carrying nine passengers or less, excluding motorcycles and motorized bicycles,” and I make a car that has no seatbelts or airbags, it’s still a “car” - but it’s not one that I am allowed to sell, because it doesn’t meet the other safety standards that apply to cars.

If I have an autonomous car that eliminates drunk-driving accidents but drives worse in the fog than a human driver, and it could easily be fixed to drive as well in the fog, then that car would probably still be safer than a human overall - and they’d still make me fix the fog problem.

Sure. If NHTSA identified this as a defect, then one possible solution is to just have FSD not engage in low-visibility conditions. It just turns off.
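As a minimal sketch of what such a gate might look like - every name, signal, and threshold here is a made-up illustration, not anything from Tesla’s actual software:

```python
# Hypothetical "refuse to engage" gate; thresholds and signals are assumptions.
MIN_VISIBILITY_M = 150  # assumed minimum sight distance to allow engagement

def may_engage(estimated_visibility_m: float, heavy_precipitation: bool) -> bool:
    """Return True only when conditions permit engaging the system."""
    if estimated_visibility_m < MIN_VISIBILITY_M:
        return False  # fog, smoke, dust: decline rather than degrade
    if heavy_precipitation:
        return False  # e.g., inferred from wiper state or camera blur
    return True

print(may_engage(300.0, False))  # True: clear conditions
print(may_engage(80.0, False))   # False: too foggy to engage
```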

The problem, though, is that if you can’t solve that problem through software - if the only way to solve it is by having multiple sensor systems - then it’s unlikely that existing Teslas will be able to be used as driverless vehicles. Maybe Level 3, but not Level 4 or 5 - they’ll always have to have a licensed driver in the driver’s seat capable of taking control. Which would be inconsistent with their business strategy for FSD and robotaxi.

Yeah, the full quote from the CA regulations that I linked says:

The manufacturer certifies that the autonomous vehicles are capable of operating without the presence of a driver inside the vehicle and that the autonomous technology meets the description of a level 4 or level 5 automated driving system under SAE International’s Taxonomy and Definitions for Terms Related to Driving Automation Systems for On-Road Motor Vehicles, standard J3016 (APR2021), as may be revised, which is hereby incorporated by reference.

So, it’s right there that the cars have to meet the SAE’s definition. Not exceed it. Not be better than “sober” drivers, as you insisted. Etc. etc.

It’s not like Waymo vehicles with LiDAR aren’t hitting telephone poles, or Cruise vehicles with LiDAR aren’t dragging pedestrians thirty feet after impact, after all. So yeah, good luck proving that a “multiple system” is inherently better. There will be performance metrics that are measurable, and those will be the criteria used. But I’ve already given a recent example where NHTSA declined to add environmental criteria that would give preference to some non-camera systems, and even explained that it didn’t want to do so. Would they change their minds in the future? Possibly, but it seems unlikely.

Turbo was initially introduced meaning the Porsche cars literally had a turbocharger. It’s morphed over the decades. Give FSD the same time allowance.

That’s not the way autonomy and ODDs work. You don’t need a driver to have a vehicle that only operates within its ODD. The vehicle simply has to refuse to go, or pull over, or remain on the side of the road in a legal parking place. It’s in the SAE document in numerous examples.
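As a minimal sketch of that idea - all names here are hypothetical, but the reactions mirror the ones the SAE (and the CA DMV regulation quoted further down this thread) describe:

```python
# Hypothetical ODD-exit handling; names are invented, but the reactions
# mirror SAE J3016's "minimal risk condition" options.
from enum import Enum, auto

class Fallback(Enum):
    REFUSE_TRIP = auto()            # don't start a trip outside the ODD
    HANDOVER_TO_DRIVER = auto()     # notify and transition control (driver present)
    CONTINUE_TO_SAFE_STOP = auto()  # keep going until a legal stopping place
    PULL_OVER = auto()              # move a safe distance from the travel lanes

def choose_fallback(trip_started: bool, driver_present: bool,
                    safe_stop_reachable: bool) -> Fallback:
    """Pick a reaction when the vehicle exits (or is about to exit) its ODD."""
    if not trip_started:
        return Fallback.REFUSE_TRIP
    if driver_present:
        return Fallback.HANDOVER_TO_DRIVER
    if safe_stop_reachable:
        return Fallback.CONTINUE_TO_SAFE_STOP
    return Fallback.PULL_OVER

print(choose_fallback(trip_started=True, driver_present=False,
                      safe_stop_reachable=True))  # Fallback.CONTINUE_TO_SAFE_STOP
```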


Right - that’s the minimum standard. But if NHTSA says that an AV has to meet a certain safety standard in low visibility situations that a specific Level 5 car can’t meet, then that car’s not going to be allowed.

That regulation - like many regulations - is a “necessary but not sufficient” standard. The autonomous driver has to meet the standards of Level 4 and Level 5 systems - but if the car has some other safety or design flaw that makes it unsafe, the use of that definition isn’t going to get you out of having to fix the design flaw. Trivially, you couldn’t have an AV that lacked seatbelts, even though it would still meet the Level 4 or Level 5 standards - meeting that bar doesn’t exempt you from having to meet any other safety standard, or avoid design flaws.

I’m not talking about technology with this point - I’m talking about the business model.

A Level 4 car doesn’t need to be able to handle every scenario immediately - as you point out, it just needs to be able to stop driving and pull over to the side of the road. As a safety standard, that’s enough.

But practically, that can’t be the end of the story. There’s a passenger in the car. That passenger is trying to get somewhere. The car has avoided a collision, but it still needs to get a move on.

The Waymo solution (as most robotaxis will I think) is to have the car “phone a friend.” The car has encountered an unusual thing it can’t handle (like a policeman directing traffic in a way that doesn’t make sense to the AV), so a remote operator tells the car what to do. The car has to stop, but it can then proceed on its way.
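A rough sketch of the shape of that flow - Waymo’s actual remote-assistance interface isn’t public, so every name below is hypothetical; the key point is that the car makes itself safe on its own first, and the remote human only supplies guidance, not live steering:

```python
# Hypothetical "phone a friend" escalation flow; all names are invented.
import time

def stop_in_place_safely() -> None:
    print("Vehicle stopped safely; hazards on.")

def request_remote_operator(scene: str) -> str:
    # In reality an async request to a human operator that may take a while;
    # simulated here with a canned response.
    time.sleep(0.1)
    return f"Proceed slowly; treat the scene as a detour: {scene}"

def handle_unresolvable_scene(scene: str) -> None:
    stop_in_place_safely()                 # safety never depends on the operator
    hint = request_remote_operator(scene)  # human supplies a goal, not steering
    print(f"Resuming with guidance: {hint}")

handle_unresolvable_scene("officer waving traffic through a red light")
```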

But you can’t solve a visibility problem that way. If the car can’t “see” because it has become too foggy, it’s not a viable solution for the car to then be paralyzed until the fog has lifted - which could be a long time indeed. The passenger is safe…but stranded. Not sure how you fix that.


We’ve already seen the SAE define the bar as the conditions in which a “typically skilled human driver can reasonably operate a conventional vehicle.” We’ve seen the California regulators adopt the SAE guidelines verbatim by reference. What you’re now proposing is that different regulators will require more than the SAE has defined.

There’s simply no precedent for that, nor indication any of them are even thinking that.

This is already true, as seat belts are required under FMVSS, which the California regulators also incorporate by direct reference and verbiage. The SAE is silent on seat belt usage. What you’re proposing is that regulators will demand more than FMVSS requires or SAE defines within the scope of those documents.

That last scoping part is important. The SAE doesn’t redefine FMVSS, nor even mention it. Those are orthogonal requirements.

The California regulators have already addressed that, and yes, they do specifically allow (with examples) what you said wasn’t viable:

The manufacturer shall describe how the vehicle is designed to react when it is outside of its operational design domain or encounters the commonly-occurring or restricted conditions disclosed on the application. Such reactions can include measures such as notifying and transitioning control to the driver, transitioning to a minimal risk condition, moving the vehicle a safe distance from the travel lanes, or activating systems that will allow the vehicle to continue operation until it has reached a location where it can come to a complete stop.

The “minimal risk condition” is an SAE term, btw.

It’s clear to me that you personally think the SAE requirement - that systems equate to what a typically skilled human driver can reasonably do with a conventional vehicle - isn’t sufficient, but that’s what’s defined, that’s what regulators have incorporated, and that’s what actually makes sense.


Well, if that’s the bar then it’s definitely going to be the “sober” driver standard. A “typically skilled human driver” will drive better than an impaired driver. I think we’re saying the same thing. The accident rate for the “typically skilled human driver” is going to be lower - perhaps much lower - than the average accident rate across all drivers, because accidents will disproportionately occur among atypically skilled drivers. Namely, impaired drivers and teens.

All of which is a little beside the point on the visibility issue. Again, if Tesla’s vision-only FSD has trouble seeing in low-visibility environments in a dangerous way (and that may not even be true! It might be fine!), and there’s a manageable fix to that problem, NHTSA is going to make them fix the problem regardless of whether the broken system is also safer than a human. They’ll still make them fix it, the same way they made them fix the “attentive driver” problem with Autopilot, even though Tesla argued up and down that driver+AP was safer than driver alone.

Again - this isn’t an issue of regulatory viability. It’s commercial viability. The regulator may be satisfied with a system that involves the car stopping on the side of the road until the fog lifts, even for hours and hours. But commercially that’s not a satisfactory solution. If the car encounters something that requires the autonomy to safely disengage, there has to be a real-time way to get it going again quickly. You can’t do that if the reason it shut down is because it can’t see well enough in low visibility.

The auto regulators might let you run a system where passengers are stranded but safe, but the market won’t. The folks who regulate Taxis and Liveries may also have something to say about that as well.

No, if the autonomous vehicle is safer than a typically skilled human in the same low-visibility situation, there’s no assurance NHTSA is going to make them meet a higher bar.

Not just another non-pertinent comparison - I doubt you have citations for what Tesla itself actually argued with NHTSA on this. I suspect you’re just repeating what people not actually involved are saying on social media.

It’s a fair point that, commercially, companies will not want to strand passengers for long periods of time, but no one has said that’s the only remedy or that that’s what Tesla will do. There are also situations today in which people are stranded in manually driven taxis for hours, such as in blizzards with accidents blocking roads. So it’s not like this is new.

Yeah, like they had something to say about Uber taking over the taxi world, lol.


You are interpreting this as “no one, with any technology, could drive in these conditions.” I was thinking of “I am not able to drive in these conditions.” The latter might be quite diverse and multi-dimensional, and could be the source of key market differentiation, but it is a very different standard. It is not a defect to recognize that conditions exceed capabilities.

We looked for a used EV and checked out the VW ID4 (great turning radius, but many recalls), the IONIQ 5, and the Ford Mustang Mach-E (fun to drive). Ultimately, we chose a Tesla Model 3, which became available at our budget when we needed it. Now we see how extraordinarily advanced its technology and software are compared to the other EVs we tested. The range, Supercharging network, entertainment options, and FSD capabilities stand out. We’re using the free trial of FSD regularly in the city, where it performs super well. However, on challenging urban hillside roads with obstacles (like fallen-over trash cans on trash day) and low visibility around corner turns, it can hesitate where a human driver might react faster. I anticipate significant improvements in video processing chips and AI in the coming years. It’s just a matter of time until the robocars are among us.

There’s no assurance the NHTSA is going to make them do anything at all. And I don’t see where a “typically skilled driver” standard would even be applied (assuming arguendo that was what the NHTSA did), if this is a visibility issue. It’s more akin to window tinting/opacity - the car has to be able to “see” well enough in order to drive safely. It’s not necessarily an AI “skill” issue.

If FSD can “see” well enough in low visibility conditions, it won’t be an issue. NHTSA saw something that led it to investigate, so we’ll see what they determine.

I’m not quoting anyone at all. If Tesla could have ended the NHTSA inquiry by pointing out that AP+human is safer than human alone, then they would have. Instead, they had to go through the recall and update the software in a way that was (predictably) a little annoying to their customers (who don’t like the nags). So it’s unreasonable to assume that NHTSA could have been satisfied just by meeting the “well it’s better than no system” standard.

Yeah - I still think it’s a problem for their business model. It’s one thing to say that your car won’t leave the city limits, to use one example - that differentiates your service, but it’s a limitation that you can just work around. But if your passengers are getting stranded when the fog rolls in or the rain gets heavy, that’s probably not manageable. Your car needs to be able to operate safely in nearly all weather conditions - and if the car can’t “see” in bad weather well enough to function, it’s going to be hard to make that work.

Obviously, this is a matter of degree. A human driver is willing to drive in most fog and would expect the car to do likewise. But there are also fogs that would stop most human drivers … it might not be a bad thing if the AV did too.

Crashes happen all the time. Here’s a truck running into stopped traffic on a highway, just yesterday:

Earlier this year, people driving too fast (or at all) in fog with 10 feet of visibility created a bad pile-up:

Having systems that know their limits could all by itself save lives.
