California Suspends Cruise Autonomous Taxi License

So the California DMV has suspended Cruise’s license to operate fully autonomous taxis. They can go back to running with a safety driver in the vehicle, but they’ll have to demonstrate some changes before they can take the drivers back out again.

On the surface, this is marginally good news for Cruise’s competitors. But if you look at what was apparently the precipitating event, it’s bad news overall for the current approach to AV:

The agency’s suspension order to Cruise focuses on an accident early this month involving a pedestrian hit by another vehicle who then “fell into the path” of one of the company’s robotaxis, which braked hard but ran over the woman, the order said. The car pulled over, dragging the victim 20 feet at 7 miles per hour and possibly injuring her further before stopping, the order said.

“Cruise’s vehicles may lack the ability to respond in a safe and appropriate manner during incidents involving a pedestrian,” the order said.

I mean, that is a real edge case there. But how on earth would an AI driver that’s trained (not programmed) to learn how to drive be “taught” how to respond to that? Humans know to stop the car if they’re dragging a pedestrian - not because we’ve observed that happening in enough situations to know that stopping rather than not stopping is the required action, but because we can think and reason.

It doesn’t appear that the DMV is looking to Cruise to demonstrate that on the whole their taxis are safer than human drivers - so that even if a pedestrian gets dragged on occasion, the taxis still save lives. Rather, it looks like they want Cruise to show that this problem has been fixed. That’s much harder to do.

6 Likes

It’s bad news for all of the companies trying to produce an autonomous taxi the way Cruise is doing it. But for Tesla it is great news. Why? Because Tesla doesn’t have an autonomous vehicle yet, and they can continue to get more and more data to finally overcome the hurdles. Cruise and Waymo will now need to hire drivers to be in the vehicles to get the data they need. Tesla’s drivers gladly give Tesla the data and even pay Tesla to do it.

Andy

3 Likes

I was once a big believer in self-driving cars, but for about 5 years now I’ve been slowly losing that faith. It’s nearly all gone for me now. How much longer do we have to keep waiting until someone finally cracks this? Highway driving that is nearly all automated? Yes, possible. Domain-specific self-driving, like agriculture, mining, warehouses? Absolutely. City streets at Level 5? I don’t believe it will happen anymore. And robotaxis? Not gonna happen. I no longer believe it.

5 Likes

Oh, I think it’s terrible news for Tesla.

If you look at the precipitating incident that led the DMV to suspend Cruise’s license, it’s the sort of thing that isn’t going to show up in Tesla’s “more and more” data. How often will a Tesla vehicle encounter a situation where a pedestrian is struck by another car, lodged underneath the Tesla, and then dragged (or not dragged) by the Tesla’s human driver? That’s the type of pure edge case that is unlikely to ever be “taught” by training an AI on lots of data.

The general “theory of the case” for getting authorization for Tesla’s FSD (and most other approaches to autonomy) is that regulators will look at vehicle safety on the whole. Autonomous cars will never be intoxicated, distracted, or teenagers (three conditions that lead to a very large proportion of accidents). So even if they might act “wrong” in certain edge cases, on the whole you will save lives by switching from human drivers to AVs.

It’s bad news for Tesla that in the country they view as the most permissive for implementing AI, the regulator didn’t act that way. They reacted to an obvious edge case by determining that the AI isn’t safe for the roads. Since the car could not respond appropriately in that situation, they deemed it unsafe. They did not attempt a holistic review, or examine whether the increased safety in most other situations might outweigh the danger when fluke circumstances arise.

That’s horrible for Tesla. If they’re ever given permission to turn on FSD on a few hundred thousand cars at once (and I don’t think they ever will get that permission), their cars will encounter edge cases much more quickly than Cruise’s or Waymo’s, just from the sheer numbers. And if the DMV is this quick to pull a license based on an edge case, that’s just not going to work out.

2 Likes

Of course you do. But that is why you are so often wrong. :joy:

So your thesis is that one company’s way of implementing autonomous driving is bad for all companies that are implementing autonomous driving. That doesn’t sound logical at all. I suspect that each company will be looked at individually.

Andy

Am I? Most of my skepticism towards Tesla has been that the very large profit margins they posted a few years ago were an artifact of the absence of competitors’ inventory at the time (along with no advertising costs and their sourcing in China), rather than of any enormous cost savings from Tesla’s manufacturing processes. Whether you think the collapse of their margins in the intervening years, as more competitors came on line with inventory, validates that thesis yet or not, it certainly doesn’t prove it wrong.

My skepticism about Tesla’s FSD is based on the idea that Level 5 autonomy just isn’t feasible with our overall level of AI technology, and that Tesla “brute forcing it” by building the largest computer with the most data isn’t going to fix that. And that their utter disdain for regulatory review will come back and bite them on the keister. It’s still too soon to tell whether those assessments are right or wrong.

No, because the issue isn’t specific to the company. It’s specific to the regulator. If a regulator adopts an approach to AI driving that’s bad for AI driving, it’s bad for all the companies under that regulator.

Musk has always articulated a very naive understanding of how regulators will look at FSD (whether he actually believes it is an open question). He intimates that as long as the technology advances to a point where the average accident rate in an FSD Tesla is materially lower than the average accident rate for human drivers overall, they’ll be allowed to turn on FSD. He highlighted that in the most recent conference call as the reason why FSD is oriented towards the U.S. market first, stating that you can do these sorts of innovative things freely here as long as you assume the risk.

The CA DMV’s action on Cruise is inconsistent with that theory of the regulator. This is a fluke case, and the DMV has to know it’s a fluke case, one that would not materially affect the safety of Cruise’s vehicles on the whole. But they pulled Cruise’s license anyway. That is a very bad sign for Tesla, because it shows that the regulator isn’t acting the way that Tesla’s model of FSD needs them to act.

5 Likes

It seems pretty simple. Add an overriding rule to the software stating - “if any contact is ever made with a pedestrian, STOP as soon as possible”. Anyone remember Asimov’s Three Laws of Robotics? Something along those lines.
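Something like this, maybe - a minimal sketch in Python, purely illustrative, with made-up sensor and planner objects (not anyone’s actual stack):

```python
from dataclasses import dataclass

@dataclass
class Command:
    throttle: float  # 0.0 to 1.0
    brake: float     # 0.0 to 1.0
    steer: float     # radians, 0 = straight ahead

FULL_STOP = Command(throttle=0.0, brake=1.0, steer=0.0)

def control_step(sensors, planner) -> Command:
    """One tick of a simplified driving loop with a pedestrian-contact override."""
    if sensors.pedestrian_contact_detected():
        # The proposed overriding rule: any detected contact with a pedestrian
        # immediately brakes the vehicle to a stop, bypassing the planner.
        return FULL_STOP
    # Otherwise, defer to the normal planner (lane keeping, routing, etc.).
    return planner.next_command(sensors)
```

The catch, as the rest of the thread gets into, is the if-condition itself: detecting and interpreting “contact with a pedestrian” is exactly the part the car struggles with, and stopping on the spot isn’t always the right follow-up.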

1 Like

Not a single Tesla on the road today will ever have Level 5 autonomy. There are many reasons, but one of the simplest is that it can’t see if there is anything directly in front of the car - for example, a kid’s bike left in the driveway right in front of the front bumper.

Future Tesla models will have an additional camera on the front bumper that will add this ability. And those models might (just “might”, because it may also require another generation or two of processing power, etc., to reach true self-driving) eventually have the ability to truly self-drive.

Innovation takes a very long time. AI started in 1956 at Dartmouth College. It took some 50 years to get to workable neural networks.

When I went to college in 1959, a graduate student was talking about Shoe Box, his PhD thesis on voice recognition. It took several decades to become a working reality.

Full self-driving is close. There are two or more paradigms; let’s see which one wins. Apparently Cruise’s suspension was due to incomplete communications with the authorities. With lives at stake, that is not acceptable.

The Captain

The car had that programming - it braked even before it hit the pedestrian. The issue was that afterwards, it pulled off to the side of the road. Presumably because that’s what the training data showed that humans did when they got into an accident that didn’t disable the car - move out of the moving traffic and wait for the police. The issue isn’t that the car should stop - it’s what the car does afterwards. Wait in traffic? Wait on the side of the road? Wait until the police come? Wait until a company rep comes? The car can’t assess how badly a pedestrian has been hurt, so it’s hard to “program” the right response - and it probably doesn’t have training data sufficient to let it learn this on its own.
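To make that concrete, here’s a hedged sketch (invented names, not Cruise’s or anyone’s real code) of why the “right response” is so hard to write down: the branch you want depends on inputs - is the person clear of the car, how badly are they hurt - that the vehicle has no reliable way to measure.

```python
from enum import Enum, auto
from typing import Optional

class PostContactAction(Enum):
    STOP_IN_LANE = auto()
    PULL_TO_SHOULDER = auto()

def post_contact_policy(pedestrian_clear: Optional[bool],
                        injury_severity: Optional[float]) -> PostContactAction:
    """Decide what to do after contact with a pedestrian.

    Both inputs are routinely None in practice: the car has no sensor that can
    confirm the person is clear of the vehicle, let alone how badly they are
    hurt. Every branch below is a guess.
    """
    if pedestrian_clear is None or injury_severity is None:
        # Unknown state. Training data from human drivers mostly shows
        # "pull out of traffic and wait," which is exactly the wrong move
        # if the person is still under the car.
        return PostContactAction.STOP_IN_LANE
    if not pedestrian_clear:
        return PostContactAction.STOP_IN_LANE
    return PostContactAction.PULL_TO_SHOULDER
```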

1 Like

I do not think you have the expertise to say that. You are surmising this but have no real-world knowledge about what Tesla is doing. I also think using words like “utter disdain” is projecting your own feelings into the discussion without any real proof. If Tesla really had any disdain for government regulatory review, they would already be declaring that they are at Level 5 and having their cars ride around without any drivers.

Well then, your thesis is completely incorrect. Waymo is still operating in San Francisco. The only one affected at this time is Cruise.

Andy

1 Like

True! I only have a layman’s understanding of what Tesla is doing, based on reporting on their efforts. As do most people that aren’t either in the field or working for Tesla. But based on what we know about how Tesla is developing its FSD, we can reasonably infer that under edge cases that are neither present in the training data nor specifically anticipated by programmers, the car under FSD can behave in ways that are not consistent with how a human would behave in that scenario. Do you disagree?

Well, I’ll admit that I’m projecting Musk’s oft-stated and repeatedly demonstrated attitude towards financial regulators onto the company as a whole. Perhaps that’s not fair. But their public statements about how they’re approaching regulatory review of FSD demonstrate virtually no appreciation for what regulators will want to see in order to approve self-driving, and Musk has had issues with reacting to public safety regulators in the past.

Because Waymo hasn’t had an edge-case incident injure someone yet. It will eventually happen, and it will happen more quickly the more cars Waymo (or any other operator) has on the road. The DMV has demonstrated how they will react to that. Again, not a great development for Tesla.

1 Like

UWindsor study finds drivers distracted while driving semi-autonomous cars | CBC News

Study shows drivers are more likely to be distracted when driving semi-autonomous cars.

Well of course, that’s the whole purpose of autonomous vehicles.

The level of distractedness is perhaps an issue, but Darwin will be an asset here…

Like sailors, any prudent driver will be keeping a watchful eye on conditions. But apparently some people just expect technology to be perfect, so they relax.

1 Like

Urban public streets are no place for autonomous vehicles.

Interstate highways are sensible for them and can be readily optimized for much higher speeds and safety. Those higher-tech interstates could have offramps into urban areas via limited-access, high-speed, high-density transit lanes leading into intermodal transport hubs, where humans and freight can shift from autonomous vehicles to subways, trains, and other urban transit, or to streets designed for human traffic, including lightweight and smaller vehicles but starring bikes, scooters, and shoes on pavements.

However, human safety has NEVER been a significant issue in our history, and so I expect we simply idiotically continue with our profit seeking bloody carnage.

david fb

1 Like

Getting born is a death sentence. It’s only a question of time. Excess safety is a PITA.

The Captain

2 Likes

I think the object is to do better than humans, but right now I would agree that their behavior is not adequate - that is why they are training them. Where Tesla currently is, I do not think anyone really knows, except the people working on it at Tesla. Every day they are processing huge amounts of data.

I think they are shooting for the whole enchilada, so we are not going to see a big improvement until it is here. That is why Tesla appears to be behind other car manufacturers, but (and this is just me guessing) I think they are much further ahead.

Waymo has many more hours driven than Cruise, and from what I have heard from people who have ridden in both Waymo and Cruise vehicles, Waymo is a much better product with fewer problems. That should show that the more hours driven, the better the product - and Tesla has many more hours than either of them.

Tesla has both large amounts of compute and driving data. Waymo and Cruise have 100 to 1000 times less driving data than Tesla. Tesla probably has 10x more compute and will scale to 200x more compute. Waymo and Cruise are likely data-constrained on self-driving.

Andy

2 Likes

That’s the Ark investment thesis, sure. I just don’t think having larger amounts of compute and driving data will help Tesla develop a Level 5 AV. The technology isn’t far enough along yet to solve this problem.

An analogy: Today, we have Dall-E (and programs like it). You can enter any natural-language description you like, and Dall-E can give you a picture of it. “Blue pig dressed like a cowboy riding a lawnmower.” “Portrait of Barack Obama in the style of the Dutch Masters.” “Photorealistic image of a damaged lampshade.” And so on. Sometimes it’s a bit weird, but for the most part it does a very good job.

So imagine you had a time machine, and travelled back to 2003. You could go to whatever software company you want, give them a billion dollars or two, and tell them to develop a program with the capabilities of Dall-E. They’re in 2003, of course - so they only have the hardware and computer-theory knowledge available at the time. But they can build as big a computer using that tech as they want, and build as big a dataset as they can with a billion dollars.

They won’t succeed. The problem is eventually solvable (after all, in 2023 Dall-E exists). But it can’t be solved with 2003-era technology, software skills, and an internet where people haven’t (yet) voluntarily posted billions of labelled images to scrape. It doesn’t matter that they have the biggest compute power, or the biggest dataset, or the smartest software engineers available in 2003. Because none of those things is enough at 2003 levels.

I think that’s where we are with autonomous driving. The general level of technology isn’t advanced enough yet. Tesla can build the biggest computer possible with 2023-era tech, and accumulate the biggest dataset they can capture with 2023-era tools - but it’s not adequate to build a Level 5 autonomous driver that can meet the standards that regulators will require.

4 Likes

Which might be a good thing! :slight_smile:

2 Likes

I do not agree with this. I especially don’t agree with the idea that such an edge case can be solved with some simple overriding rule.
So what does this newly programmed car do when it has 4 people in it and a pedestrian contacts the car…while you are slowly moving across a railroad track and a train is coming? Yes, another extreme edge case. But one where a human would easily be able to decide, “correctly,” to keep the car moving.
So then how do you program these things? With a cascading hierarchy of overriding rules? That makes no sense either.
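A toy illustration of the problem (invented rules, nothing to do with any real AV stack): two “overriding” rules in a fixed priority order, and an edge case where both fire at once.

```python
from dataclasses import dataclass

@dataclass
class Situation:
    pedestrian_contact: bool
    on_railroad_crossing: bool
    train_approaching: bool

def decide(s: Situation) -> str:
    # Two "overriding" rules, checked in a fixed priority order.
    if s.pedestrian_contact:                            # Rule 1: contact -> stop now
        return "STOP"
    if s.on_railroad_crossing and s.train_approaching:  # Rule 2: clear the tracks
        return "KEEP MOVING"
    return "NORMAL DRIVING"

# The edge case above: both rules fire at once.
print(decide(Situation(pedestrian_contact=True,
                       on_railroad_crossing=True,
                       train_approaching=True)))
# Prints "STOP" - the car parks itself on the tracks. Swap the rule order and
# it drags the pedestrian instead. Neither fixed hierarchy is "correct".
```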

Mike

1 Like

A cascading hierarchy of overriding rules was the AI of 35 years ago. It was called “expert systems.”

What Is an Expert System? | Definition from TechTarget.

The thinking back then was that AI was a flop.
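For anyone who missed that era: an expert system was basically a pile of hand-written IF-THEN rules plus an inference loop that keeps firing them until nothing new can be concluded. A toy sketch with made-up rules:

```python
# Toy forward-chaining rule engine in the expert-system style:
# keep applying IF-THEN rules until no rule adds a new fact.
rules = [
    ({"pedestrian_contact"},           "emergency"),
    ({"emergency"},                    "hazard_lights_on"),
    ({"emergency", "vehicle_movable"}, "pull_to_shoulder"),
]

facts = {"pedestrian_contact", "vehicle_movable"}

changed = True
while changed:
    changed = False
    for conditions, conclusion in rules:
        if conditions <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(sorted(facts))
# The brittleness is the point: the rules cheerfully conclude "pull_to_shoulder"
# because nobody wrote a rule for "pedestrian may still be under the car".
```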

The Captain

2 Likes