California Suspends Cruise Autonomous Taxi License

I get that, but that is exactly what they are trying to solve. They started with nothing and are building it up. Where this helps Tesla is that they have the most data, so they have the best chance of solving the problem. That was Cruise’s problem: they didn’t have the data, so their cars were causing huge problems. This is where Musk is being smart. He isn’t saying he has a driverless car; in fact, he says it’s only Level 2. With that strategy he can keep collecting data while people drive the car. Eventually he will solve the problem. When? I can’t predict, but I am hoping within the next two years.

Andy

I’m trying to imagine me, as the driver.
A person gets thrown in front of my car. I brake hard but am still moving, and I hit the person.
I’m still moving.

I DO NOT KNOW THE PERSON HAS BECOME ENTANGLED WITH MY CAR.

I’m still moving - 7 mph.
While I’m processing this, I move 20 feet.

My RAV4 is 15 feet long. I.e., the Cruise vehicle moved a bit more than ONE car length.

How long does it take to go 20 feet at 7 mph?
My car would likely move 20 feet while I’m processing the situation!
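
A quick back-of-the-envelope sketch of that timing (the 1.5-second reaction time used below is a common rule-of-thumb assumption, not a figure from the incident):

```python
# Back-of-envelope: at 7 mph, how long does it take to cover 20 feet,
# and how far does the car travel during a typical driver reaction time?

MPH_TO_FPS = 5280 / 3600            # feet per second in 1 mph (~1.467)

speed_fps = 7 * MPH_TO_FPS          # ~10.27 ft/sec at 7 mph
time_for_20_ft = 20 / speed_fps     # ~1.95 seconds

reaction_time_s = 1.5               # assumed typical reaction time
reaction_distance = speed_fps * reaction_time_s   # ~15.4 ft

print(f"Time to cover 20 ft at 7 mph: {time_for_20_ft:.2f} s")
print(f"Distance covered during a {reaction_time_s} s reaction: {reaction_distance:.1f} ft")
```

That is roughly one car length gone before a typical driver even begins to react.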

Remember - I do not know the person is being dragged. I think I might well just go ahead and pull over.
IDK what I would actually do in a dynamic, real world situation. I don’t want to find out, either.

Say 100 drivers face this situation. I’m not gonna ASSUME that all of them would stop the way this is being portrayed.

Definitely an edge case!
:fearful:
ralph

Edit to add this quote from the link in the OP:
*Forghani says the driver of the other vehicle fled the scene.*

So, we have some evidence that humans might not “stop”?

4 Likes

I think you are missing the point. NO ONE will ever have a huge data set for edge cases. Tesla isn’t going to have hundreds of recorded instances where a driver has run over a pedestrian who was initially struck by another vehicle. It doesn’t matter how massive a data set Tesla creates if they simply can’t assemble a massive data set for every edge case (and of course what makes them edge cases is that they are exceptionally rare).

Take the recent Alaska Airlines incident where an off-duty pilot riding in the cockpit jump seat attempted to take over the plane. That is an edge case. Has that ever happened before? How would AI build enough of a data set to know what to do in such a situation? The human pilots were able to react quickly and resolve it, but it would take dozens, if not hundreds, of repeat occurrences of that edge case to train an AI.

If the regulators are using edge cases to effectively ground a fleet of AI vehicles, then that is a bad sign for ALL parties.

1 Like

Doesn’t matter. Even if on average humans would have behaved the same or even worse, if AI does the same thing every time (as it would be designed to do), then that is apparently problematic.

1 Like

That’s only true if data is the bottleneck to solving the problem. But it might not be.

Again, looking back in time is instructive. If you’ve got a team in the 1980s trying to build a fully-functioning android, we know their project is doomed to fail. Not because they’re not smart enough, or because they lack adequate resources, but because we know it’s literally impossible to build a fully-functioning android using ’80s-era tech. It’s just not advanced enough.

Tesla is trying to solve Level 5 autonomy using state-of-the-art 2020s tech. But it’s entirely possible that our state-of-the-art tech just isn’t advanced enough to solve that problem. Even the best computers with the largest datasets might not be enough. It might take a generational advance in computing tech. It might take a completely different approach to constructing artificial minds than anything we are doing today. Heck, it might even be some kind of crazy hardware solution - like, say, floating swarms of nanobots surrounding the car that let it sense its surroundings in ways that are easier to unpack than visuals.

Other companies are trying to solve an easier problem: Level 4 autonomy. No guarantee that Level 4 is solvable with 2020s-era tech either, of course. But one possible outcome of this “race” is that the other companies are on the right track - Level 4 is solvable and Level 5 is not. That’s the worst-case scenario for Tesla on that front, even worse than if no one can do much better than a really good Level 2/Level 3 system.

1 Like

You are not keeping up with Tesla. Because edge cases are so scarce in the real world, Tesla is creating lifelike simulations of edge cases to train its neural networks. Nvidia has some amazing software used to create realistic games and simulators.

RTX. It’s On.

The ultimate in ray tracing and AI.

NVIDIA RTX is the most advanced platform for ray tracing and AI technologies that are revolutionizing the ways we play and create. Over 250 top games and applications use RTX to deliver realistic graphics with incredibly fast performance or cutting-edge new AI features like NVIDIA DLSS and NVIDIA Broadcast. RTX is the new standard.

From 2019!

Tesla updates its Autopilot auto lane change animation

https://electrek.co/2019/08/24/tesla-autopilot-animation-software/

Edge Cases in Autonomous Vehicle Production

“Because [the autonomous vehicle] is a product in the hands of customers, you are forced to go through the long tail. You cannot do just 95% and call it a day. The long tail brings all kinds of interesting challenges,” says Andrej Karpathy, the director of artificial intelligence and Autopilot Vision at Tesla, at the 2020 CVPR Keynote.

Here, “long tail” refers to edge cases in autonomous vehicles (AVs). Edge cases are possible scenarios that have a low probability of occurrence. These rare occurrences are easily missed and thus are often missing from datasets. While humans are naturally proficient at dealing with edge cases, the same cannot be said of AI. Thus, they have to be dealt with carefully.

https://datagen.tech/blog/how-synthetic-data-addresses-edge-cases-in-production/
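
For a flavor of how synthetic edge-case generation works, here is a minimal sketch: sample randomized scenario parameters and hand them to a simulator/renderer to produce labeled training frames. The scenario fields and ranges are illustrative assumptions, not Tesla’s or Nvidia’s actual pipeline:

```python
import random

# Hypothetical schema for a synthetic edge-case scenario. Instead of
# waiting for rare events on real roads, sample parameter combinations
# and render them in a simulator to create labeled training data.
def sample_edge_case_scenario():
    return {
        "scenario": "pedestrian_deflected_into_lane",   # the Cruise-style edge case
        "ego_speed_mph": random.uniform(5, 45),         # AV's speed at impact
        "lateral_offset_ft": random.uniform(-6, 6),     # where the pedestrian lands
        "time_of_day": random.choice(["day", "dusk", "night"]),
        "weather": random.choice(["clear", "rain", "fog"]),
        "occluded": random.random() < 0.3,              # pedestrian partially hidden?
    }

# A batch of 10,000 scenario descriptions to feed the renderer,
# covering far more variations than real-world data collection could.
training_scenarios = [sample_edge_case_scenario() for _ in range(10_000)]
print(training_scenarios[0])
```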

The Captain

3 Likes

One can never make self-driving absolutely safe. But it seems plausible to me that one can make autonomous driving significantly safer than human driving.

Roughly 35K fatalities every year from auto accidents in the US. Who knows how many injuries. If Tesla’s FSD cuts that in half, doesn’t that justify its use?

2 Likes

No, Hawkwin, I am not missing that point at all. Even people have problems with edge cases. You see it on the highways and streets. Could you be holding autonomous vehicles to a higher standard than you hold people?

Andy

1 Like

Thus demonstrating that FSD is probably meaningless in evaluating the near-term prospects for Tesla stock or financial performance. Someday, at some time, some company will announce it (and be wrong), and some other company will announce it (and be right), and then it will be extremely meaningful, but that day is likely very far off.

Edge cases are, by definition, rare - or more to the point, entirely unanticipated. Those are the ones that nobody, no how, no way is going to come up with, and which will throw sand in the gears of FSD, at least until society (and regulators) accept the thesis of “averages are better, even if there are short-term disasters meanwhile.” I expect that logic will escape, well, almost everyone, at least for a while. Headline writers, regulators, and a majority of consumers aren’t going to take kindly to “this might kill you, but overall, your odds are pretty good.”

Of course the success of the nicotine industry might contradict that idea :wink:

1 Like

Numbers-wise, of course it justifies its use. But that’s not how humans work. Out of the 35k annual fatalities, how many make the national news? Something close to none. Of the annual fatalities related to autonomous-driving experiments, how many make the national news? All of them, and repeatedly, and shrilly, and pervasively (regular media, social media, water-cooler conversations, general zeitgeist, etc.). Even if AVs were to reduce that number from 35k to 350, down 99%, it would still not be acceptable to the vast majority of people. It takes at least a generation, sometimes a few generations, for something like this to become widely accepted as “the standard.”

1 Like

There’s a problem, though, in defining what that means.

Serious car accidents aren’t evenly distributed across the driving population. A large share of traffic fatalities involve an intoxicated driver. More than 20% involve a teenager. There is probably significant overlap between those, but still.

So there’s a big space where an AI driver can be safer than “human driving” overall (because it will never be drunk or a teenager) but still less safe than the typical human driver (who is neither drunk nor a teenager). So fewer people would die in drunk- or teenager-caused accidents - but more people would die in other types of accidents. On the whole, fewer people would die - but some of the people killed by the AI driver would be people who wouldn’t have been killed otherwise.

I think it’s unlikely that regulators will allow that - even though on net AI could result in fewer deaths. The deaths caused by AI will be very visible, while the avoided deaths will not be.
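
A toy calculation makes the bind concrete. All of the rates and mileage shares below are invented for illustration, not real crash statistics:

```python
# Toy illustration: an AI that beats the *average* human driver can
# still be worse than the *typical* (sober, adult) driver.
# Hypothetical fatal-crash rates per 100M miles, by driver group:
human_rates = {"impaired": 10.0, "teen": 4.0, "typical": 1.0}
miles_share = {"impaired": 0.05, "teen": 0.10, "typical": 0.85}

avg_human_rate = sum(human_rates[g] * miles_share[g] for g in human_rates)
ai_rate = 1.5   # hypothetical: better than average, worse than typical

print(f"Average human rate: {avg_human_rate:.2f} per 100M miles")          # 1.75
print(f"AI rate:            {ai_rate:.2f} per 100M miles")                 # 1.50
print(f"Typical human rate: {human_rates['typical']:.2f} per 100M miles")  # 1.00
# The AI lowers total deaths (1.50 < 1.75) while still killing more
# people than a typical sober adult would (1.50 > 1.00).
```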

1 Like

It will ultimately be about cost/benefit. With each new iteration of FSD (or equivalent), the edge-case mistakes will decline. As Western populations grow older and more infirm, the benefits of self-driving will increase.

In the end I think people will choose economics and convenience when combined with a demonstrated positive cost/benefit.

We will probably first see self-driving take hold in places like Shanghai, Hong Kong, Tokyo, and Singapore: high density, heavy traffic, and difficult parking. In the US it will be retirement communities. These will serve to get people used to the concept.

“People” might. Regulatory agencies won’t. If AIs are causing people to die in accidents that wouldn’t have happened if a human was driving, the fact that they prevent drunk-driving deaths (and thus lower the overall rate of death) probably won’t carry the day.

And again, there’s no guarantee that manufacturers can demonstrate a positive cost/benefit. Again, serious crashes aren’t evenly distributed across drivers - they’re concentrated in incidents involving the intoxicated, teenagers, very hazardous weather conditions, and mechanical/maintenance failures. AI won’t be rolled out evenly among drivers, either - and if it’s not rolled out proportionately into the dangerous populations/situations, the short-term effect might be to make things more dangerous, not less.

1 Like

Since the conversation has turned to ‘safety vs risk’, the benefits to society of a ‘risky tech’, and absolute (100%) ‘safety’: years ago, a professor explained that US ‘safety regulators’ use 1 death per 1M people involved in an activity as a ‘regulatorily acceptable’ risk level. IIRC, the discussion was about taking a particular drug. So, if 1M people took the drug, then 1 death out of that 1M population of drug users is acceptable.

Acceptable Risk | Encyclopedia.com

Although regulatory authorities are reluctant to define a precise level of acceptable risk, lifetime risks in the order of one in a million have been discussed in regulatory applications of the acceptable risk concept. This level of risk is considered to be de minimis, an abbreviation of the legal concept de minimis non curat lex (the law does not concern itself with trifles). Attempts have also been made to establish benchmarks, such as the risk of being hit by lightning, to help interpret such small risks. Higher levels of risk might be tolerated in the presence of offsetting health or economic benefits, when the risk is voluntary rather than involuntary, or when the population at risk is small.
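
A rough sanity check on how far driving sits from that threshold (the 230M licensed-driver count is an assumed round number, the 35K fatalities figure is the one cited upthread, and note the quote frames de minimis as lifetime risk while this compares against annual deaths):

```python
# How do actual US road deaths compare to a 1-in-1M de minimis level?
licensed_drivers = 230_000_000     # assumed approximate US figure
de_minimis_rate = 1 / 1_000_000    # 1 death per million participants

acceptable_deaths = licensed_drivers * de_minimis_rate   # 230
actual_annual_deaths = 35_000                            # cited upthread

print(f"De minimis 'acceptable' deaths: {acceptable_deaths:.0f}")
print(f"Actual annual road deaths:      {actual_annual_deaths:,}")
print(f"Ratio: {actual_annual_deaths / acceptable_deaths:.0f}x over the threshold")
```

Driving as currently practiced is orders of magnitude above de minimis, which is presumably where the ‘offsetting economic benefits’ and ‘voluntary risk’ provisions come in.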

Defining an acceptable level of risk for a ‘product/activity’ is basic governance?
Society gets to benefit from a useful ‘product’ rather than that product being thrown away due to an unachievable goal such as 100% safety?

Will society be denied the benefits of FSD due to FSD not being 100% safe in all situations, even extremely rare, black swan, edge cases?

In part, that may depend on who pays the shiniest campaign donations?

:thinking:
ralph thinks this ‘edge case’ is another ‘hit piece’ to try to slow development of FSD and EVs (cause EVs are tied to FSD in the public mind).

4 Likes

No.

But U.S. regulators probably won’t approve FSD unless Tesla can demonstrate that a typical sober adult driving a nearly-new car in good weather is safer turning on FSD than driving themselves. The fact that FSD is safer than a drunk, or a teenager, or someone driving in a snowstorm or a poorly-maintained junker with bad brakes won’t be relevant to that assessment. So comparing FSD’s accident rates to overall accident rates won’t be relevant either, since those overall rates are heavily tilted toward the riskier drivers/situations.

Me? Absolutely not. The regulators? Very likely.

Hawkwin
Being a big fan of AI doesn’t keep me from thinking regulatory approval will be hard to get.

1 Like

Whatever conclusion you may come to, it’s always good to know reality.

Early aviation killed lots of people, which did not kill the industry. Maybe current Americans have thinner skins – the end of progress.

The Captain

1 Like

60 mph is 88 ft/sec.
So 7 mph is about 10.3 ft/sec, so it takes close to 2 seconds to go 20 ft.

Mike

1 Like

Nobody thinks it will “kill the industry”. It would probably slow it down a good bit.

I remember the days of ubiquitous “life insurance” vending machines at Newark Airport and LaGuardia, even into the 1950s. Not really a great look if you’re the airline, but they made money for many years.

1 Like

Crucially, that one death is happening to the person choosing to take the drug. I very often have no choice about sharing public ways with automobiles.

d fb

2 Likes