Tesla, Full Self Driving, and the Wall Street Journal

The WSJ doesn’t think much of Tesla’s Full Self Driving approach:

https://www.wsj.com/business/autos/elon-musk-robotaxi-end-to-end-ai-plan-1827e2bd

There’s an 11-minute video in the piece which is worth watching. Otherwise: the story explains the difference between how Tesla is training its AI versus all the other guys, and it is significant. There’s also mention of how Musk eschews additional sensing equipment (LiDAR, etc.) because of cost, and whether that choice is likely to be successful, along with a bunch of other interesting points.

I can’t find a “gift link,” but I offer a direct link to the Journal as well as the Apple News entry, so hopefully you can see it.

1 Like

The Captain does not think much of the WSJ.

The Captain

3 Likes

This is a regurgitation of an older piece that was discussed on the paid side a few months ago. And the author hasn’t kept up with the technology from most of the players.

For instance, the latest news, just released a few days ago, is that Waymo is itself going down the end-to-end AI path:

The article also does the usual dinging of Tesla for not using LiDAR, which Waymo and Mobileye use today. But just last week on its earnings call, Mobileye’s CEO said that he sees a future where autonomous vehicles do not have LiDAR.

So, the real truth is that the competition is more and more adopting what Tesla has been doing.

I think HD Mapping is going to be next. Right now Mobileye and Waymo make heavy use of detailed maps. Mobileye goes to great lengths and expense trying to keep its maps up to date, as roads are constantly changing.

Here’s one example of where that mapping fails Waymo:

(start at 10:30).

The road is narrow, and a pickup truck is partially blocking it. As discussed in the video, the Waymo vehicle could get around the pickup, but it would have to drive a little bit onto the driveway on the other side of the road. However, Waymo’s map defines the drivable boundary to exclude that driveway, so the car gets stuck for several minutes until the car behind it backs away and the Waymo can do the same.

Here’s another Waymo vehicle ride:

In this one the vehicle fails and has to be manually driven by a Waymo employee who is dropped off at the car to drive it. What’s interesting is that the driver takes it out of the Waymo pre-mapped area, and the on-screen display loses much of its data. The roadway literally disappears, and we’re left seeing only some of the other vehicles and some of the buildings. Whereas Tesla shows drivable paths and roadways everywhere, without that pre-mapping.

Where maps do come into play is in helping the system anticipate what’s coming - the way regular commuters are smoother than Sunday drivers. Tesla does have map information, not just the typical navigation data but also things like traffic signs and lights. There are videos showing a Tesla indicating on the screen that it is slowing for a stop sign around a bend or hidden by a tree - one it couldn’t actually see yet.

A big downside of detailed HD maps is the effort needed to create and maintain them. For all its goodness, Waymo hasn’t shown an ability to scale easily. It’s still in just a handful of cities covering hundreds of square miles total, while FSD can drive on over 3 million square miles of the US, plus other countries.

The WSJ video is a mix of a couple of old situations combined with poor or illegal driving by other vehicles that no human driver could have anticipated. Meanwhile, despite LiDAR, a Waymo vehicle drove straight into a pole, and a Cruise vehicle dragged a pedestrian a significant distance. None of these systems are perfect yet.

1 Like

The simple explanation given to me at an Autonomous Vehicle conference a few years ago was: when LIDAR and Vision agree, they use that agreement to decide what to do. When LIDAR and Vision do not agree, they go with Vision. So what’s the point of having LIDAR? I don’t know if that’s correct or not, but it appears to be sound reasoning.
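Written out as code, that rule collapses to “always go with Vision,” which is exactly why the “what’s the point?” question follows. Here’s a minimal sketch of the rule as described (the names are hypothetical, not any vendor’s actual logic):

```python
# Minimal sketch of the arbitration rule described above.
# Names are hypothetical - this is not any vendor's actual code.

def arbitrate(vision_sees_obstacle: bool, lidar_sees_obstacle: bool) -> bool:
    """Decide whether the planner should treat an obstacle as present."""
    if vision_sees_obstacle == lidar_sees_obstacle:
        # Sensors agree: act on the shared answer.
        return vision_sees_obstacle
    # Sensors disagree: per the rule above, trust Vision.
    return vision_sees_obstacle
```

Both branches return the Vision answer, so under this rule LIDAR never changes the decision.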

I’ve been using the latest version of FSD (V12.5.6.1) for all my trips recently, and it is spectacular. It is much smoother, much more human-like, and generally agreeable. I allow it to do all the driving; all I need to do is start it up to get out of my driveway, and it brings me where I need to go. Once it even got there and parked the car for me, but usually I get there and have to park myself. A few times I’ve had to tap gently on the accelerator to make it go a little faster, or to make it take a right turn on red when it hesitated too long. But the recent improvements are very noticeable, especially things like 4-way stop signs, unprotected left turns, merging from entry and exit lanes, etc. That said, it is still not ready for true full self driving; I suspect that the start and end cases will be the most difficult to handle. For example, when it pulls out of my driveway, how would it know if a kid left a toy directly in front of the car? Or how will it choose the correct lane when entering a gated housing development? Or how will it know which spots it is permitted to park in?

1 Like

FSD from a sailor’s point of view.

Sailors rely on charts, what landlubbers call maps. Some of those charts are based on very old data collected by navies of various nations at times of war. We are asked to send updates when we find that things are no longer as charted. Notices to Mariners abound.

https://msi.admiralty.co.uk/NoticesToMariners/Weekly

https://msi.nga.mil/NTM

https://hydrobharat.gov.in/notices-to-mariners/

In the middle of the ocean one can see ships only a few miles distant but, more importantly, only a few minutes distant. On a sailboat, their crews often don’t even see you, even if you have the right of way. Landmarks are often indicated by blinking lights which flash in code. A fellow sailor on his way to Curaçao was steering off course (I don’t know why) and in the middle of the night he mistook Little Bonaire for Little Curaçao. The boat was smashed on the rocks. He had lost what Mentour Pilot calls “situational awareness.” With mixed signals, your biases can be hazardous.

What the above illustrates is that charts (maps) are good for planning but not for execution. Execution cannot rely on day-old data; the terrain is what it is NOW. An old sailor’s adage says, “Don’t go to sea with two clocks.” If they disagree, they add confusion, reducing situational awareness. A year or two ago Elon Musk was asked about ditching LIDAR; his reply: “LIDAR just makes things more difficult when they don’t agree with other inputs.”

One night, leaving port on our way to Trinidad, I was at the helm while the boat’s owner (acting captain) was below looking at the radar screen. He told me to correct course as I was not heading for the middle of the channel. I did. When he came on deck he saw we were heading into a large group of anchored oil tankers. He told me to resume my old course. Radar (LIDAR) vs. eyes (cameras).

Don’t expect perfection; all we can achieve is a low enough level of risk. Cars kill more people than airplanes, yet those deadly machines are allowed on the road because their level of risk is found to be acceptable.

The Captain

1 Like

It might be for redundancy. If Vision is having trouble sensing the immediate environment, LIDAR can provide a second set of “eyes.”

I don’t think that’s correct. I wasn’t able to find a transcript of their call, but the summary below reports that the CEO was pointing to improvements in imaging radar - not camera-based vision - as what might make LIDAR unnecessary.

Honestly, this is probably the biggest regulatory threat to Tesla’s FSD business model. NHTSA has already opened an investigation into whether FSD functions properly in low-visibility situations. If they adopt either a regulation or a design-defect ruling that ends up requiring redundancy in self-driving sensing (i.e., that any autonomous driver has to have at least two sensor systems), it’s hard to see how Tesla manages to comply with their existing vehicles.

I wonder if that’s also a factor in Musk’s personal decision to get so involved in federal politics. It might be a remote threat, but requiring radar or LIDAR in autonomous systems would have really bad consequences for Tesla.

https://www.reddit.com/r/MVIS/comments/1ggq613/mobileyes_q3_2024_earnings_call_summary/

I think this was the exact point! I don’t recall the examples given at the time, but let’s say Vision can’t see well and provides a blurry, rainy image of what might be a truck driving slowly in front of you, while at the same time LIDAR shows nothing in front of you. The image could be an artifact of the camera behind the windshield or anything else. Do you go with the LIDAR or not? The answer is always “not” - you go with the Vision.

It would have really bad consequences overall for everyone. And it’ll be due to the typical government corruption that we often see. There was a V2V system in progress a number of years ago, and it would have been “good enough” to save tens of thousands of lives over the years as it penetrated into the auto fleet on the road. But various interests (mostly interests that wanted access to the valuable DSRC spectrum) worked to kill it via government action (or inaction as the case was). Same here. The Tesla system, even if imperfect, would still save thousands of lives over the years because it is “good enough” to do so, and they already have millions of cars on the world’s roads.

1 Like

I’d be curious to see the actual statement. There’s no logical reason they would include LIDAR if it was never used for anything. It’s expensive. It has to be doing something.

I think this misunderstands how safety regulations work. Suppose I’m manufacturing super-cheapo airbags - but they’re defective. About 20% of the time that they should activate, they don’t, because I’m using super-cheap parts or equipment. The regulators will stop me from selling my clearly defective airbags. And I can’t go to them and point out that a car with my defective airbags is still safer than a car with no airbags. Even though that’s true - they still work 80% of the time!!

If the system doesn’t work properly with just vision, they’re not going to let Tesla use it - regardless of whether they sold a bunch of cars with the system that doesn’t work properly. And even regardless of whether the system works better than no system at all. They’ll compare it to the system that works properly, and tell Tesla that they either have to correct the system or not use it.

1 Like

Here’s an archived version of the transcript:

I didn’t say what made Mobileye’s CEO say LiDAR won’t be necessary. Regardless, this is the latest step in a huge turnaround for a company that used to say LiDAR was essential - it even had its own in-house LiDAR development project, which it recently terminated.

With Mobileye, we have to remember that as a Tier 1 or Tier 2, it has to sell to OEMs or Tier 1s. That means it has to find value add and differentiation where it can. IMO, Mobileye throws around a lot of FUD, mathematical and otherwise, in their presentations.

Regulators, at least at the federal level (including the EU via UNECE), have been careful not to mandate specific technologies. They do this because they realize that superior technologies come along all the time, and they don’t want to limit the adoption of something better. My example is the 2018 adoption of the “rear view image” requirement, which doesn’t specify even cameras and screens - you could potentially meet it with mirrors if you wanted.

But for redundancy, as you indicate, there is precedent, such as with master brake cylinders. The question there is how the redundancy would be specified. Would having multiple cameras and multiple computers be enough? It seems unlikely that redundancy would be specified as requiring different sensor types - it would instead be a performance-based requirement that the sensors can detect objects the size and shape of cars, pedestrians, and bicyclists at certain distances under certain conditions - again without specifying specific technologies.
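To make the distinction concrete, here’s a hypothetical sketch of what a performance-based (technology-neutral) requirement might look like if expressed as data - every target, range, and condition below is invented for illustration, not taken from any actual FMVSS text:

```python
# Hypothetical sketch of a performance-based requirement expressed as data.
# All targets, ranges, and conditions are invented for illustration.
from dataclasses import dataclass

@dataclass
class DetectionRequirement:
    target: str         # object class, e.g. "pedestrian"
    min_range_m: float  # must reliably detect at or beyond this distance
    conditions: str     # e.g. "daylight, clear" or "night, unlit road"

REQUIREMENTS = [
    DetectionRequirement("passenger car", 150.0, "daylight, clear"),
    DetectionRequirement("pedestrian", 60.0, "night, unlit road"),
    DetectionRequirement("bicyclist", 80.0, "daylight, rain"),
]

# Note that nothing above says "camera", "radar", or "LiDAR" - any sensor
# suite that meets the numbers would comply.
```

That’s the pattern the FMVSS examples in this thread follow: specify what must be detected and under what conditions, not which hardware does the detecting.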

I see almost no risk that LiDAR or radar will be specifically codified into regulations, but there could be performance requirements that do effectively mean those are needed. That seems unlikely to me, as cameras today see better than humans, and that would be the gold standard for comparison.

1 Like

The goal of AV is to drive a car 1M or 1B times safer than a human driver.

Humans use vision for 99% of all “sensing”.

I argue that hearing is used 3% … sirens, talking, shouting, gunshots… And we turn our head, and verify with vision.

Next is touch, used 1.9%… The rumble strips on roadways next to the lines marking the lanes. Not only do we get a visual from the painted lines, we also get a rub-board jarring FEELING that alerts us to use vision to verify. Bumping a curb tells how close we are. We “feel” road debris hit the car… and approximately where on the car we should look for damage. Etc.

Smell? 0.1%. I’ve smelled noxious cars and id’d the offender visually, and responded. I’ve smelled BBQ, looked for the sign… And eaten BBQ.

We, biological organisms, “sense” the environment, verify with vision, and take appropriate action - based on several “alerting” sensors.
Evolution decided that “we” need multiple sensory inputs.

Vision, by itself, might not be enough to get to the desired/required level of “safer than human”?

Sandy Munro suggested FLIR… a couple years ago. But, I’ve not heard more about that from his team.

:face_in_clouds:
ralph

Robots also need balance and proprioception.
AVs might also benefit?

1 Like

I’m not sure it’s a huge turnaround. They still use LiDAR, and they said they’re going to continue using LiDAR. The CEO said he thought it might be possible that imaging radar might improve enough to go to a radar+vision system, but not that it was likely. That’s not them moving away from LiDAR, and certainly not them moving towards what Tesla is doing (which is vision-only, not even radar).

Perhaps. But it might not be possible to achieve those standards with just vision. If vision always has problems in certain conditions (heavy fog or dust), but vision+imaging radar can handle them easily, then the standard might be set at a level that vision alone will never be able to meet. That might be a low-likelihood outcome - but since that’s the general thrust of the NHTSA investigation now (how FSD handles low-visibility conditions), it’s non-zero. If FSD doesn’t handle poor-visibility conditions as well as other extant systems, it might not matter that it’s better than a human, because it can still be considered a defective design relative to alternatives.

I don’t think they would look at that as the standard. To use my airbag example again, any automatic deployment would be much faster than any human could ever activate an airbag. But if it’s not working reliably enough, it’s still going to be labeled as defective. If the camera is seeing better than a human, but not well enough for the system to function as well as a camera+LiDAR+radar system, they still might find it insufficient.

2 Likes

We’re just going to continue to disagree on this. Mobileye used to have 3 LiDARs per vehicle but has dropped to one, has terminated its own LiDAR development project, and its CEO is admitting the future is probably no LiDAR. The trend is clear, even if the company is being careful not to upset existing and future customers with its statements.

First, even the Inflatable Restraint requirement, FMVSS 208, is expressed not in technology terms but in performance terms. That is, the regulations don’t specify trigger mechanisms, inflatable materials, locations, etc., but instead set performance requirements such as activation “at 20 ms ±2 ms from the time that 0.5 g is measured on the dynamic test platform. The test dummy specified in S8.1.8, placed in each front outboard designated seating position as specified in S10, excluding S10.7, S10.8, and S10.9, shall meet the injury criteria of S6.1, S6.2(a), S6.3, S6.4(a), S6.5, and S13.2 of this standard.”

Second, such requirements are included for things, like airbags, which are required in all vehicles. Only when autonomy becomes required (i.e., humans no longer permitted to drive manually) would such better-than-human requirements be imposed, IMO.

But as long as manual, unassisted driving remains permitted, such regulation would be self-defeating. Requiring autonomy to be better than the average human driver - or even the top 10% - is counterproductive.

And even if regulators went to that extreme, it would be done via a performance-related requirement, not a technology-based requirement. I agree that is a potential future risk, but one solution would be to have an ODD that excludes operation in such conditions. Mercedes took this approach with its Level 3 Traffic Jam system, which refuses to engage, or disengages if active, when it’s raining or the road is wet.

I’m not particularly worried about the current FSD investigations. FSD is sold as a Level 2 system, and Level 2 systems have always had issues. For instance, the cruise control system on older cars was a locked-on system that, if you weren’t paying attention, would literally ram into a slower car in front of you. And no adaptive cruise control system even today will stop at a traffic light unless there’s a car already stopped there - yet there have been no recalls of those systems, even though manufacturers let you use them on roads with traffic lights.

That only makes sense if vehicles capable of unassisted human driving are no longer permitted to be sold.

1 Like

No, this isn’t the case at all! The numbers are more like 98.3% versus 98.4% or something like that (and vision-only may even be better overall!). It’s not 80% versus 100%, and it’s not 80% versus 99%.

But there is a lot more to it than that. If penetration of a vision-only system can reach 50% over 10-12 years, while penetration of a vision+lidar system can reach only 30% over the same period (due to cost), then even if the vision-only system has slightly lower performance than the v+l system, it is better to go with the vision-only system, because many more lives will be saved due to the higher penetration.
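To put illustrative numbers on that tradeoff (every figure below is hypothetical, chosen only to show the shape of the argument):

```python
# Back-of-envelope only: every number here is hypothetical, chosen to
# illustrate the penetration-vs-performance tradeoff described above.

baseline_deaths = 40_000  # rough order of annual US road deaths

systems = {
    "vision only":  {"penetration": 0.50, "effectiveness": 0.90},
    "vision+lidar": {"penetration": 0.30, "effectiveness": 0.92},
}

for name, s in systems.items():
    saved = baseline_deaths * s["penetration"] * s["effectiveness"]
    print(f"{name}: ~{saved:,.0f} lives saved per year")

# vision only:  ~18,000 lives saved per year
# vision+lidar: ~11,040 lives saved per year
```

Even with slightly lower per-car effectiveness, the higher-penetration system saves far more lives in this toy model.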

Again, back to my example, the DSRC V2V system. Had they put the mandate into effect 7 years ago, maybe only 15-20% of cars would have it today, but even that 15-20% would be enough to save a few thousand lives each year. Instead, institutional corruption allowed it to be politically killed, the spectrum reallocated for profit motives, and some cockamamie new scheme involving the cell carriers ($$$) created, with a minimum 10-year delay before it could ever happen. It still hasn’t happened. Meanwhile, beginning in 2018, the mandate for backup cameras went into effect; now a good percentage of cars have them, and sure enough, safety while backing up has dramatically improved for that cohort.

Yes - but that’s the point. The system isn’t designed to stop at a traffic light, and the driver won’t expect it to stop at a traffic light.

If a driver can activate FSD in the fog and expect that it will work in the fog, but NHTSA finds that it actually can’t work as designed in the fog because it doesn’t have a redundant sensing system that can see in the fog when the cameras can’t (like imaging radar), there’s a non-trivial chance that they’ll designate that as a defect.

This creates a problem if Tesla wants to operate FSD as a Level 5 system. Level 3 and Level 4 systems are not expected to function everywhere. If one of those systems can’t function in dense fog, Mercedes’ system can simply not turn on in dense fog, or Waymo can just not offer service in dense fog. I’m not sure that fits with Tesla’s FSD model.

I see this idea from Tesla statements too - and it’s wrong. That’s not how safety regulations always work. They don’t always just look at total lives saved on both sides and just go with the one that saves the most.

To use a deliberately ridiculous example, suppose I invent a system that can infallibly detect through internal monitoring whether a person is too impaired to drive - and shut off the car if that person attempts to do so. But (and here’s the ridiculous part), due to a design defect, there’s a 1 in 10 million chance on each trip that it will short out and electrocute whoever’s in the driver’s seat. It would make the system more expensive, though, for me to correct that flaw, so I don’t correct it. I’m not going to be allowed to sell that system. It has a lethal design defect. It’s not going to matter that on the whole, it will probably save lives. Because it has a design defect that kills people. (Honestly, I might even potentially face criminal manslaughter charges in that scenario - even though, on the whole, I would be saving lives).
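To see why “on the whole, it will probably save lives” still holds in that scenario, here’s the expected-value arithmetic with invented inputs (only the 1-in-10-million risk comes from the example itself):

```python
# Expected-value sketch of the hypothetical impaired-driver lockout above.
# All inputs are invented for illustration except the 1-in-10-million
# electrocution risk, which comes from the example itself.

equipped_trips_per_year = 1_000_000_000       # hypothetical fleet usage
electrocution_risk_per_trip = 1 / 10_000_000  # from the example
impaired_deaths_prevented = 1_500             # hypothetical

killed = equipped_trips_per_year * electrocution_risk_per_trip  # 100 per year
net_lives_saved = impaired_deaths_prevented - killed            # +1,400 per year
print(f"killed by defect: ~{killed:,.0f}, net lives saved: ~{net_lives_saved:,.0f}")
```

Net positive - and yet, as the example argues, a regulator would still treat the electrocution risk as a correctable design defect rather than weigh only the net total.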

If an AV driver functions sufficiently poorly in low visibility environments because it lacks a secondary sensing system like radar or lidar, the NHTSA very well might refuse to allow AV drivers that don’t have a secondary sensing system if they regard that as a design defect. They’re not going to limit their analysis to simple totting up of accidents or fatalities and let someone sell a product with a design flaw because it’s cheaper to do it that way.

1 Like

But that same argument can be used against the existing adaptive cruise control vehicles that have been and continue to be sold despite completely failing to stop if the car in front of you was not moving when the system first saw it and hasn’t moved since - such as coming around a bend on the highway into traffic stopped for an accident.

That’s pretty far off. For anybody.

That’s not just ridiculous, it’s not apples to apples.

A system that doesn’t work better in the fog than humans isn’t like a system that accidentally electrocutes the driver. It seems obvious that if a system is better than humans most of the time and no worse than humans the rest of the time, it’s a win.

Regulators need to be careful with what they specify. If they specify low-visibility performance, then they also need to specify stopped-traffic-around-the-bend performance. The software and hardware of radar systems today can’t handle both. And LiDAR itself has reduced sensitivity in fog, so that’s perhaps problematic as well.

1 Like

Indeed. Again, all of this depends on whether the failure to meet a certain level of performance/reliability/safety is a defect or not. A “dumb” cruise control can’t do the same things as an ADAS - but NHTSA doesn’t regard that as a defect in how those cruise controls are designed. They might reach a different conclusion for autonomous systems’ sensor packages, concluding that lack of redundancy is a design defect, not just an inherent limitation of that kind of system. This is at least a possibility, because NHTSA did open a defect investigation into FSD’s vision systems, something they haven’t done (I think?) for the scenario you describe.

Sure - but that doesn’t mean they won’t make you correct a design defect. The standard isn’t simply, “is it better than a human?” They compare the system not just with how humans perform, but how the system would perform if the defect were corrected.

We know this, because NHTSA has ordered recalls of Tesla software in the past. They’ve identified something that made the system a little less safe than it should be. Even though driving with those ADAS systems was arguably safer than just human driving even with the defect, they still ordered the recall. If a system has problem X, NHTSA doesn’t just determine whether the system with the problem is safer than no system. They look at the issues with problem X and what it would take to have a safer system and what dangers are caused by problem X, and then determine whether to order a correction.

You are arguing how you think it should be. Albaby is telling you how it is.

My complaint with Musk is that it’s called “Full Self Driving”, but we all agree that it’s not. That name should have been proscribed right from the start. It’s really good “Assisted Driving”, but that’s not a sexy name and I understand why they went with something else.

As for the “safer than human driving now”, that is unclear. Tesla doesn’t share all of its data with regulators, and it should be obvious to anyone who has worked with statistics that you compare “like” with “like”. Yet most of Tesla’s FSD stats are on highways, but most accidents occur on local roads. Elon is fond of pulling selective statistics to bolster the claim that “it’s already safer” but that is not clear. (It might be, but the way it’s only partially offered doesn’t prove that.)

More to the point, regulators are not going to approve something that doesn’t work as perfectly as it could given the other technological options available. If that means “vision only” doesn’t work in some circumstances, then it’s not going to be approved for those circumstances. (Think: night, fog, rain, etc.) They are going to require a more expensive alternative if that will work better, or a crippling of the system to stop it from functioning in any condition where it will not work at maximum efficiency. (At which point you no longer have a Level 5 anyway.)

Thus far the only Level 5 (no driver needed) systems approved (provisionally) use Lidar. That doesn’t mean they will always have to, just that in today’s technological world it’s the only system which does what it needs to do in all situations. [Yes, I’m sure it wouldn’t work in the middle of a tornado, but I’m trying to be reasonable and real]. (And of course even then it fails sometimes, but rarely catastrophically.)

No, he’s not. He’s speculating that regulators will decide to specify technology when they normally go to great pains to specify performance, not technology.

Do you equally complain about Porsche calling its top of the line BEV a “Turbo”? There’s no turbocharger, nor is there any goal of installing a turbocharger in those vehicles, lol. At least Tesla has been clear that FSD is a work in progress, is currently in Beta, and requires driver attention at all times. Unlike Porsche’s literally false advertising, lol.

Incorrect, if only because no Level 5 system has been approved, and you’re apparently confused as to what “Level 5” actually means.

And when one looks at SAE’s J3016 Autonomy paper, one finds that “LiDAR” is not only not required, but not even mentioned!

If one looks at the existing regulations used to approve the Level 3 and Level 4 systems that HAVE permits to operate (eg, Waymo), one sees that LiDAR is not only NOT required by those regulations, the term “LiDAR” is not even mentioned!

FMVSS is filled with performance-oriented regulations, and that is what is most likely to continue with autonomy. Whether those regulations will include driver replacement performance greater than some cohort of human driver performance is speculative and without precedent.

For example, even at Level 5 Autonomy, the definition (see SAE link above) includes that it must “operate on-road anywhere that a typically skilled human driver can reasonably operate a conventional vehicle.”

Note that performance is specified in human driver capable terms. There is no requirement from any agency (SAE, CA DMV, etc.) that requires better than human capabilities, and there is unlikely to be any such regulation.

Additionally, the SAE has already provided for approval of systems that do not provide full Level 5 performance, yet are still safe, via self-enforcement of what they call the Operational Design Domain, or ODD. This is actually even required of Level 5 systems, with SAE giving examples of white-out blizzard conditions in which no human can drive. The idea is that automation driving systems know their own limitations and will refuse to drive when it’s not safe for them to do so. The example I provided upthread was the Mercedes Level 3 Traffic Jam Assist refusing to drive in the rain (despite it having LiDAR).

No, they are not, at least not necessarily. As a matter of historical fact, when NHTSA creates new regulations, they issue a Notice of Proposed Rulemaking (NPRM) that includes a cost/benefit analysis. This was done, for instance, in the discussion leading to the adoption of rear-view visibility (FMVSS 111). You can read that discussion here.

And even there, again, NHTSA’s final rule does not mandate certain technologies. No discussion of LED vs LCD screens, no discussion of CMOS vs CCD cameras - as a matter of fact, the use of cameras of any kind is not even specified! The goal is to “present a rearview image to the driver” that meets certain conditions as to field of view, size of objects in that image, etc.

One might argue that LiDAR would do a better job than cameras for the purposes of avoiding rear accidents, but there is nothing in FMVSS 111 that requires the more expensive system.

So, while there is a risk that autonomy regulators will in the future specify performance requirements beyond the capability of today’s cameras, nothing from any autonomy-regulating body contains performance requirements in excess of human capabilities - and besides, today’s cameras already exceed human perception abilities.

2 Likes

I’m actually not doing either.

I don’t know what NHTSA will ultimately choose to require for autonomous systems. I am quite confident that they won’t apply the test that Tesla talks about, and which several posters here have mentioned - just comparing the proposed system with the performance of human drivers without the system. They don’t just do that. If a system has a defect, and would be safer without the defect, they can order (and have ordered) the defect to be corrected regardless of whether the system with the defect is safer than no system at all. They compare the system with the defect to the system without the defect - not against humans without the system.

If there’s a defect in my anti-lock braking system that causes it to fail to engage 20% of the time that it should, it doesn’t matter that it’s still safer than having no ABS at all. It’s defective. I have to fix that defect.

It’s entirely possible - perhaps very likely - that they won’t regard the absence of a redundant sensory system as a defect. I only note it as a possibility because NHTSA has opened up a defect case for FSD (and also Autopilot I think) based on performance in low-visibility environments. They might determine that it’s not performing well enough with a vision-only sensing package.

They won’t specify technology, but they will specify performance. Systems with vision+LiDAR+radar have broader sensory capabilities than a vision alone system. Since those other systems exist and have been shown to function, it is possible that NHTSA may decide that a system has to perform at a level that vision-only can’t meet, but the multi-sensory systems can. That would end up being a de facto requirement to add a non-vision sensory package to an AV system, even though it would be expressed as a performance-based assessment.

1 Like

But, they have already done that.

As I wrote above, the SAE has defined Level 5 Autonomy in terms of equalling, not exceeding, human driver performance.

One flaw in the examples you’re providing is that those are for driver assistance features, not driver replacement features. By definition, a driver assistance feature is designed to supplement human driver abilities.

Another flaw is that you’re not providing proper analogies. The comparison of LiDAR to cameras isn’t that one fails 20% of the time without apparent cause, as in your ABS brake-failure example.

The proper ABS analogy is that ABS brakes can actually increase stopping distances under certain conditions like gravel or light snow. Yet, that’s not considered by NHTSA to be a defect and did not prevent the requirement from being adopted.

Going further, let’s look at another new rule that was enacted this year: Automatic Emergency Braking (AEB). The text of FMVSS 127 is here:

The rule requires the car to brake for other cars and pedestrians, including at night in dark conditions. Using headlights for illumination is included in the performance requirements, and having better headlights is given as an example in the NPRM discussion. So not even here is NHTSA requiring specific technologies like LiDAR, nor even writing the requirements such that cameras wouldn’t be good enough.

Matter of fact, NHTSA decided not to go down the visibility effect path:

This final rule adopts the proposed specification that AEB performance tests will be conducted when visibility at the test site is unaffected by fog, smoke, ash, or airborne particulate matter. Reduced visibility in the presence of fog or other particulate matter is difficult to reproduce in a manner that produces repeatable test results.

It’ll be interesting to see whether autonomy regulations adopt something similar or not.

3 Likes