Is Tesla's unsupervised Robotaxi plan failing miserably?

This video is 13 days old, but it links to an article that speculates that TSLA hasn’t applied for a driverless permit in California because Tesla’s internal testing shows that they can’t complete the required test miles to get an L4-autonomous permit in California.

1 Like

I didn’t watch the video, but there is no question on that point. There is no reason Tesla wouldn’t apply for a permit unless they believed they wouldn’t get one.

5 Likes

Actually…

Should be:

TSLA hasn’t applied for a driverless permit in California because Tesla’s internal testing shows that they haven’t yet completed the required test miles to get an L4-autonomous permit in California.

Again, Tesla’s plan is:

  1. Work with Zoox, Waymo, and maybe others to get NHTSA to modify the FMVSS to allow cars without controls for on-board human drivers, and also to remove the related side-effect requirements.

  2. Once that is done, Texas has a simple and easy permit process to enable CyberCab as a legal L4 vehicle. Essentially, Texas just requires that the vehicles obey all existing traffic laws, safety requirements, responsibility/insurance, and compliance with FMVSS.

  3. In the meantime, continue to run a supervised taxi service in California using Model Ys to gather enough data for an eventual permit application.

2 Likes

The FMVSS needs modifications; in my opinion, it all depends on what those modifications are. Some are no-brainers; beyond those, what else might they do?

As I have previously said and feared, the Texas DMV seems to have “punted” rather than implement state-level safety criteria. Having different criteria in different states could easily be a problem, but I would rather Texas did so in this case than defer to the Feds; they would still have the option to change course if meaningful criteria get implemented at the federal level. The Texas law has already been changed at least two times, maybe three. The Texas DMV says they will be monitoring what happens in May when the new rules go into effect, and they are prepared to make changes if nonsense is forthcoming from any operators.

If they had pursued CA testing years ago rather than their questionable “beta” approach, they would have plenty of data already.

1 Like

California doesn’t require specific performance metrics for an AV ride hailing permit. In other words, there is no rule that says something like “the vehicle must have fewer than Y crashes per million miles” or anything like that.

Instead, manufacturers self-certify that their vehicles can safely operate in the ODD. Then approval is based on regulatory judgement.

Texas passed a new law last fall that supersedes this. The final rule is still being drafted by regulators, but it does require that AV ride-hailing vehicles meet the definitions in the SAE standard. I understand the new rule is expected to be promulgated in May.

Presumably, the cybercab is designed to meet those definitions (it would be a major misstep if it weren’t), but the requirements are much greater than simply complying with traffic laws and insurance.

But it does require that a testing phase has been completed. To quote from the CA regs:

A summary of the manufacturer’s autonomous technology testing in the operational design domain in which the subject autonomous vehicles are designed to operate. The summary shall describe all locations where the vehicle has been tested and shall include:

(A) The total number of vehicle test miles driven on public roads, on test tracks, or other private roads in autonomous mode.

(B) A description of the testing methods used to validate the performance of the subject autonomous vehicles.

(C) The number of collisions originating from the operation of the autonomous test vehicles in autonomous mode on public roads that resulted in damage of property to any one person in excess of one thousand dollars ($1,000), or bodily injury or death, and a full description of the cause of each collision and measures taken to remediate the cause of each collision where applicable.

So your narrow technical carve-out is technically correct, but the overall requirements do include a requirement to report a number of “test miles driven on public roads” as well as a “full description of the cause of each collision.”

I suspect outside of the written regs that Tesla has talked with the folks at the CA DMV about what they’re expecting in terms of number of miles they want to see and how few collisions they expect to be reported.

Not true, as my quoting of Article 3.7 and OL 321 shows.

No, that’s what Texas actually passed, as discussed in a different thread here.

That’s not what that shows. The opposite, in fact. The manufacturers self-certify; regulators then make a judgement about whether those data are sufficient to issue a permit. Those paragraphs don’t say “you must use XYZ testing method.” They say “you have to tell us what method you used.”

I quoted from the document, so the facts support my view.

As Smorgasbord1 alluded to, this has been discussed at length in other threads. Texas passed a revision to its law last summer that went into effect on Sept 1 for most, but not all, of its provisions. The specific qualifying criteria, not yet in effect for any individual robotaxi, were drafted and approved by the Texas DMV earlier this year but have a 90-day waiting period before officially taking effect in May. Tesla and all other wannabe robotaxi operators can apply in early May in anticipation of the 90-day period being up in late May. That’s how I remember it, anyway; anyone can easily look up the specifics and read the full text of the law.

To my dismay, which I have already noted, the Texas DMV “punted” on specifying any low-level vehicle criteria, as they chose to defer to the Feds. Will the Feds have lax criteria and allow self-certification? I hope not, unless any new criteria are rigorous about safety. Right now, if the Feds had actually published anything and Texas thought it was very good, this “punt” from the Texas DMV would be okay. But the Feds haven’t yet. In the meantime, Texas has also indicated that if our initial experience here this summer is not good, they may revise the state qualification criteria and make them more specific than simply deferring to the Feds. I’m skeptical, but we will have to wait and see.

Even though I am a relatively large Tesla stock holder, I don’t like the idea of Tesla self-certifying, considering all their questionable actions to date. I trust Waymo much more, and perhaps others, though I am less familiar with them. NURO has been testing in my neighborhood and surrounding area with their delivery vehicles, but they are now also testing their robotaxis in other locations too. I kinda think I would trust them more than Tesla as well, based on their approach and their comments on safety.

What do you mean by “low level vehicle criteria?”

My read of what Texas did is different than yours. Texas took a simple way out, which is that Autonomous Vehicles have to be/do what non-Autonomous Vehicles are/do today. No difference in performance, just restating the obvious:

  1. They have to comply with FMVSS. These are vehicle performance safety standards. Think how quickly they need to come to a stop, or that headlights illuminate enough of the environment, etc.

  2. They have to comply with all existing driving laws. Think speed limits, communicating lane changes far enough in advance, deferring to pedestrians, keeping the proper distance from cyclists, etc.

Now, people may want some assurances that the vehicles will indeed comply with all existing driving laws. Texas hasn’t done that. California, OTOH, has stipulated that a testing regimen must be performed with human safety drivers in the vehicles, with the results (miles driven, where driven, interventions, accidents, etc.) reported to a group/committee that then decides if the vehicles are safe enough to drive without the safety drivers.

NHTSA has defined a 3-step process:

  1. Fix FMVSS where it had assumed a human driver. Zoox’s exemption request identified 8 areas, 3 of which are now open for comment. These will allow vehicles without manual controls to technically meet FMVSS requirements, but, at least at first, leaves it to the states to control operation of those vehicles.
  2. An updated Guidance document on AV safety: “We’re also moving ahead today with developing new guidance for AV developers on AV safety. This will be the first major guidance from NHTSA to AV developers since the AV 2.0 guidance document in 2017.”
    So, a version 3.0 of that document. Presumably, this will feed into states updating their AV operation rules.
  3. A new Minimum performance standards for ADS competency document, which apparently will be part of FMVSS in the future: “I am pleased to announce that NHTSA is working toward establishing minimum performance standards for ADS competency. This is not an easy task—no government in the world has been able to establish truly objective performance requirements that would meet the strict criteria for a FMVSS.”

Step 1 is underway now, which removes technical hurdles for vehicles like Zoox’s toaster and CyberCab to not have driver hardware. Step 2 will help unify what states will require - presumably both Texas and California will adopt that guidance once finalized. And then Step 3 will establish a federal nationwide standard so that AVs in one state can operate in any state.

1 Like

You ask:

Then you say:

Regarding what I meant by low-level criteria: I mean including relevant specific details added to ensure safety in autonomy, esp. for robotaxis. Texas did absolutely nothing like CA, whereby any attention is paid to the myriad dangers that could set an autonomous car apart from a human-driven car. You named a lot, in the 2nd quote, of what I would consider part of such lower-level criteria. I could easily throw in “sensor redundancy” as another that I think should be in there. It’s been a long time since I have read through FMVSS (and I have little interest in reading it again), but doesn’t it expect redundancy in some other systems already? Will it specify sensor redundancy? Lots of things Texas could have expected and required but “punted” on instead. If FMVSS is updated in a thorough way, then I will be fine — just not holding my breath.

Also I like your recap of what NHTSA is supposedly doing – hopefully it is meaningful. They certainly know what not to allow based on Tesla antics over the years.

Here’s the link to NHTSA Administrator Jonathan Morrison’s AV forum keynote:
https://www.nhtsa.gov/speeches-presentations/national-av-safety-forum

in which he says what I think is the eventual goal:

I am pleased to announce that NHTSA is working toward establishing minimum performance standards for ADS competency. This is not an easy task—no government in the world has been able to establish truly objective performance requirements that would meet the strict criteria for a FMVSS.

Notice he uses phrases such as “performance standards” and “performance requirements.” He does not say hardware standards nor hardware requirements. It’s the performance that matters, not the equipment, and not the technology. They’ve gotten burned by this in the past - the previous requirement for side view mirrors is an example.

When they went to make rear-view requirements better, they didn’t say “rear view camera” nor “rear view screen,” they said “rearview image.” They not only don’t say how many pixels must be displayed - they don’t say it’s a pixel-based screen at all. Instead, FMVSS provides these definitions:

Rearview image means a visual image, detected by means of a single source, of the area directly behind a vehicle that is provided in a single location to the vehicle operator and by means of indirect vision.

Rear visibility system means the set of devices or components which together perform the function of producing the rearview image as required under this standard.

Nowhere does it say anything about LED vs LCD or even camera resolutions. Just a set of “devices or components which together perform the function.”

As for redundancy, I haven’t read all of FMVSS (although a previous job did require me to have a thorough working knowledge), and what comes to mind are the requirements around braking performance. Even there, the word “redundancy” is not used, however.

What’s interesting there is that while for hydraulic brakes there is a dual master cylinder requirement (not a phrase used in the spec), it’s written in pieces and examples, starting with a “split service brake system” requirement:

Split service brake system means a brake system consisting of two or more subsystems actuated by a single control, designed so that a single failure in any subsystem (such as a leakage-type failure of a pressure component of a hydraulic subsystem except structural failure of a housing that is common to two or more subsystems, or an electrical failure in an electric subsystem) does not impair the operation of any other subsystem.

Note that FMVSS gives two examples of braking technology (hydraulic and electrical), and how those can meet the “split service” requirement. They don’t dictate hydraulic over electric, and actually don’t dictate one over the other - there could be something magnetic or even mechanical, as long as there are two separate subsystems where a failure in one doesn’t impact the other much (they give stopping-distance limits for speeds/weights, for both subsystems together as well as independently).

So, when it comes to vehicle vision, my belief is that we won’t see specific “sensor redundancy” requirements, but rather vehicle vision performance requirements, both when everything is working (whatever those “everythings” are), as well as some sort of subsystem independence (split subsystem) requirement in case of “partial failure” (a term that is used in FMVSS today).

Note that FMVSS doesn’t say you need an electrical brake in case your hydraulic brakes fail (or vice-versa), just that the braking system has subsystems that are independent enough such that one works if the other fails. For vision, that could be accomplished through multiple camera overlap as well as dual computer processing, which is how I’d expect that to eventually be defined, along with what failure mitigation/fallback maneuver performance is required.
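The split-subsystem idea described above can be sketched in code. This is a purely hypothetical illustration, modeled loosely on the FMVSS split-service-brake concept; all names, fields, and thresholds here are my own assumptions, not anything taken from FMVSS or any NHTSA proposal.

```python
# Hypothetical sketch of a "split subsystem" vision requirement: two
# independent subsystems, where a failure in one still leaves enough
# capability for a fallback (minimal-risk) maneuver. Names and thresholds
# are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class VisionSubsystem:
    name: str
    healthy: bool          # self-test passed (cameras + compute alive)
    coverage_score: float  # fraction of the required field of view it covers

# One subsystem alone only needs enough coverage to pull over safely.
DEGRADED_PERFORMANCE_FLOOR = 0.6

def system_status(a: VisionSubsystem, b: VisionSubsystem) -> str:
    """Classify overall capability the way a split-subsystem rule might."""
    working = [s for s in (a, b) if s.healthy]
    if len(working) == 2:
        return "full_operation"
    if len(working) == 1 and working[0].coverage_score >= DEGRADED_PERFORMANCE_FLOOR:
        return "fallback_maneuver"  # degraded, but can still exit traffic safely
    return "out_of_service"

# A rear-stack failure leaves the front stack able to perform a fallback:
assert system_status(
    VisionSubsystem("front_stack", True, 0.98),
    VisionSubsystem("rear_stack", False, 0.0),
) == "fallback_maneuver"
```

The key design point, as with the brake rule, is that the regulation would specify the outcome (a safe fallback after any single failure), not the hardware used to achieve it.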

2 Likes

Nothing inherently wrong with that. But what gets sacrificed when a human driver is replaced? The human brain and the human body’s sensor redundancy. How is such human performance re-created by a robotaxi?

How frequently does the human brain “protect” us from and/or alert us to dangerous situations? Or make judgment calls by being able to generalize? In particular, how does it do so in ways no end-to-end AI brain is remotely capable of yet? How many times does our sense of hearing alert us to oncoming emergency vehicles before our eyes do? Or our sense of feeling/touch alert us to dangerous road conditions more so than our eyes do?

While even I can understand the importance of performance standards, a car minus a human driver brings on a lot of issues likely never contemplated in creating FMVSS for vehicles with human drivers. And sensor redundancy really stands out. I think this is especially true if Tesla thinks multiple cameras create sensor redundancy. Their performance kinda shows already that it doesn’t.

In 1997, Garry Kasparov lost to Deep Blue, and subsequently accused IBM of cheating by having humans intervene. A quarter of a century later and now it’s humans that are being accused of cheating by using computers.

We’ve already seen numerous examples of FSD detecting dangerous situations before human drivers, and I expect that eventually AVs will be safer than almost all human drivers.

Do you have examples of a Tesla camera failure to share? The FSD failures I’ve seen can all be best explained by the software not doing its job, and none from a specific failure of a single camera.

If you want to talk about additional sensors that aren’t cameras, fine, we can have that discussion (again), but those would be “additional” sensors, not “redundant” sensors.

That said, it may be possible that NHTSA will attempt to define a performance standard that no array of cameras can achieve. I suspect the comments to such a proposal would include that no human driver would be able to achieve that performance standard either. And the SAE has already defined the top-most level of autonomy, Level 5, as equivalent to what “a typically skilled human driver can safely drive.” So, raising the bar above that seems certain not to be an eventual NHTSA requirement, in my view.

That’s a dodge of the question. This isn’t chess.

Another dodge imo. The issue is the cases where humans have to take control because FSD fails. And of course, where’s the evidence that a Robotaxi will not have the same or similar issues?

I made points regarding sensor redundancy in the same way the media and other observers do: one type of sensor, in Tesla’s case cameras, has yet to prove it can do the job. Adding other sensors will in many cases overlap with functions you are assuming cameras can do alone. Plus they can also do things, or do some things better, than cameras. Along with the other resources, this is still a form of redundancy IMO. If you wish to claim those are “software problems,” you might notice I also objected to Tesla’s software approach as apparently being inadequate, based on what we have seen.

1 Like

It’s not a dodge, it’s a past example of AI application that I’m using for prediction here.

No, that wasn’t the context to which I was replying - a context that you actually provided, but now apparently want to ignore, or at least dodge away from, and which didn’t even mention FSD at all. I quoted just one sentence from your full paragraph, which was:

My response is within that context.

It’s so unlike you to side with “the media” on things, so why now for an incorrect definition of “redundancy?” Webster’s defines redundancy for engineering as “the inclusion of extra components which are not strictly necessary to functioning, in case of failure in other components.”

More relevant is what NHTSA might require. As I pointed out by citing FMVSS 111, if NHTSA does want redundancy for vehicle vision, they will likely adopt a “split subsystem” requirement similar to that defined for braking systems, in which a subsystem will have to perform to some requirement level if the other subsystem fails. And since cameras are essential for performing even a basic failover maneuver safely (e.g., the need to see traffic light state), it would seem that cameras would be part of both subsystems.

That won’t be the criteria NHTSA adopts. As I pointed out earlier, even the SAE’s top Level of Autonomy requires performance only as good as a “typically skilled human driver.” Assuming NHTSA is successful in its AV performance requirement efforts (cited above by me), what we’ll end up with is some measure of an ODD for that typically skilled human driver, and then AV performance within that ODD.

You are attributing failures of a system to particular hardware implementation examples. As we are both outsiders to Tesla’s engineering data, we don’t know the causes of the failures we have seen reported. The system involves various hardware components as well as software. You might insist that the reported failures are caused by Tesla’s cameras not being good enough, but you have nothing to back that up. IMO, it’s more likely that the software analyzing those images and executing driving policy is at fault.

While I agree Tesla’s software isn’t fully there yet, it’s getting better all the time. Which brings us back to my earlier chess example. Maybe you think Tesla won’t ever get there, just as many people used to think AI wouldn’t ever beat the best grandmasters.

If I thought the audience here was savvy enough to result in a good discussion, I’d attempt to start a comparison of what Nvidia is doing to what Tesla is doing. While Tesla haters had previously attached themselves to Mobileye as their savior from relying on Elon, smart prejudiced Tesla haters should have switched over to Nvidia months ago.

Alas, when NHTSA’s performance-based rules and concepts are not properly understood, I’m not optimistic that an engineering-based comparison of Tesla to Nvidia is possible here.

To summarize:
LiDAR will not be required for redundancy.
LiDAR will not be required for AV performance.
LiDAR may be used as part of AV system/subsystem to help it achieve performance at least as good as a typically skilled human driver in the same circumstances.
A typically skilled human driver is expected to know his/her limits and slow down when conditions require, even to the point of not driving at all. AVs will be expected to do the same.

1 Like

Not really - I will point out what I believe is correct, or most likely to be correct regardless of source.

I don’t understand why the concept of redundancy is so hard. This has been debated numerous times. There are multiple forms of redundancy. You cite only one and apparently are trying to suggest “that’s all.” If you narrow the concept to engineering redundancy, there are still multiple variations. Some say LiDAR provides complementary functionality, which is true, as it can provide additional data beyond cameras. But it also provides substantial overlapping data with cameras.

The reality is that LiDAR provides BOTH complementary functionality and redundancy.

1 Like

Overlapping cameras provide redundancy in the overlap, since they provide the same information for the same space. It’s hard to apply that to LiDAR, since the information is so drastically different. There are examples in airplanes where two different technologies are used to measure the same value and are therefore intentionally redundant, but LiDAR and cameras have dramatically different qualities.
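The overlap point can be illustrated with a toy sketch: a point is redundantly covered only where at least two cameras see it. All of the field-of-view numbers below are made up for illustration; nothing here reflects any real vehicle's camera layout.

```python
# Toy illustration of overlapping-camera redundancy: redundancy exists only
# in the region that more than one camera covers. Angles are hypothetical.

def coverage_count(point: float, cameras: list[tuple[float, float]]) -> int:
    """Number of cameras whose horizontal field of view [start, end] covers a point (degrees)."""
    return sum(start <= point <= end for start, end in cameras)

# Two forward cameras whose angular fields of view overlap from 30 to 60 degrees.
cams = [(-10.0, 60.0), (30.0, 100.0)]

assert coverage_count(45.0, cams) == 2   # inside the overlap: redundant
assert coverage_count(10.0, cams) == 1   # only one camera sees it: no redundancy
assert coverage_count(45.0, cams[1:]) == 1  # first camera fails; overlap region still covered
```

The design consequence is that camera-only redundancy is partial by construction: points outside the overlapped region lose coverage entirely when their single camera fails.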

No, I thought I was clear on discussing just what NHTSA might require:

Also, as I said, it’s likely NHTSA will include aspects of what the SAE has already defined (problematic and outdated as J3016 is today), such as:

When you take those probable NHTSA performance requirements together, we get my prior summary. Again:

What time is it when my response to one of the more thoughtful posters here consists of me regurgitating my prior responses?

2 Likes