No, he’s not. He’s speculating that regulators will decide to specify technology when they normally go to great pains to specify performance, not technology.
Do you equally complain about Porsche calling its top-of-the-line BEV a “Turbo”? There’s no turbocharger, nor any plan to install one in those vehicles, lol. At least Tesla has been clear that FSD is a work in progress, is currently in Beta, and requires driver attention at all times. Unlike Porsche’s literally false advertising, lol.
Incorrect, if only because no Level 5 system has been approved, and you’re apparently confused as to what “Level 5” actually means.
And when one looks at SAE’s J3016 driving automation standard, one finds that “LiDAR” is not only not required, it’s not even mentioned!
If one looks at the existing regulations used to approve the Level 3 and Level 4 systems that HAVE permits to operate (e.g., Waymo), one sees that LiDAR is not only NOT required by those regulations, the term “LiDAR” is not even mentioned!
FMVSS is filled with performance-oriented regulations, and that is what is most likely to continue with autonomy. Whether those regulations will require driver-replacement performance exceeding that of some cohort of human drivers is speculative and without precedent.
For example, even for Level 5 autonomy, the definition (see SAE link above) specifies that the system must “operate on-road anywhere that a typically skilled human driver can reasonably operate a conventional vehicle.”
Note that performance is specified in terms of what a human driver can do. There is no requirement from any body (SAE, CA DMV, etc.) for better-than-human capability, and there is unlikely to be any such regulation.
Additionally, the SAE has already provided for approval of systems that do not deliver full Level 5 performance yet are still safe, via self-enforcement of what it calls the Operational Design Domain, or ODD. This is actually required even of Level 5 systems, with SAE giving examples such as white-out blizzard conditions in which no human can drive. The idea is that automated driving systems know their own limitations and will refuse to drive when it’s not safe for them to do so. The example I provided upthread was the Mercedes Level 3 Traffic Jam Assist refusing to drive in the rain (despite having LiDAR).
No, they are not, at least not necessarily. As a matter of historical fact, when NHTSA creates new regulations, it issues a Notice of Proposed Rulemaking (NPRM) that includes a cost/benefit analysis. This was done, for instance, in the discussion leading to the adoption of the rear visibility rule (FMVSS 111). You can read that discussion here.
And even there, again, NHTSA’s final rule does not mandate particular technologies. No discussion of LED vs. LCD screens, no discussion of CMOS vs. CCD cameras; as a matter of fact, the use of cameras of any kind isn’t even specified! The goal is to “present a rearview image to the driver” that meets certain conditions as to field of view, size of objects in that image, and so on.
One might argue that LiDAR would do a better job than cameras at preventing backover accidents, but nothing in FMVSS 111 requires the more expensive system.
So, while there is a risk that autonomy regulators will someday specify performance requirements beyond the capability of today’s cameras, no autonomy-regulating body has yet issued performance requirements in excess of human capabilities. And besides, today’s cameras already exceed human perception abilities.