Musk craves Buffett's blessing even as Ajit Jain scoffs at his claims

Reacting to Buffett’s carefully worded hypothetical on the impact of ‘automated driving’ on insurance companies, Musk promptly pleaded that Buffett “should take a position in Tesla.”

Here’s what Buffett said: “If accidents get reduced by 50%, it’s going to be good for society and it’s going to be bad for insurance companies’ volume. But good for society is what we’re looking for.”

More interesting is what Ajit Jain, the brain behind Buffett’s insurance empire, said:
“If you multiply the number of accidents times the cost of each accident, I’m not sure that total number has come down as much as Tesla would like us to believe,” Jain said. He added that Tesla insurance so far “hasn’t been much of a success.”

“Time will tell but I think automation just shifts a lot of the expense from the operator to the equipment provider,” Jain said.

So one of the world’s smartest insurance gurus thinks
(a) Musk’s claims aren’t credible
(b) The real impact of Musk’s accident-reduction hype around FSD will be on Tesla, not on the owner!


A safer car does not necessarily reduce accidents; it’s safer for the passengers in the car.

Unlike merely safer cars, FSD should cut down on accidents, and owners pay for FSD.

Maybe “one of the world’s smartest insurance gurus” didn’t properly think it through.

The Captain


A car in China flies through the air, flips over seven times, and all the passengers survive with minor injuries. A Tesla Model Y!

China shocked after Tesla Model Y flies through the air, rolling over 7 times

The Captain


I’m pretty sure what happens is the liability for the “fewer” accidents moves from the driver to the provider and seller of the FSD technology that’s supposed to be reducing the accidents. Plus, Musk will have to provide a legal defense for all the bogus claims he’ll attract. The only drivers who will have liability are those who disabled the app.

I know that small FAA-certified airplanes like a new four-seat Cessna 172 come with a product liability policy that’s attached to the aircraft serial number. It accounts for about 25% of the $400,000 sales price.

Maybe you can charge enough for FSD to cover all these costs? Maybe you can’t.



With Elon Musk’s proven abilities in cash management, I’m sure he’ll come up with the solution that lesser mortals cannot even imagine.

The Captain

Maybe someday, when all cars have FSD and can talk to each other in real time and avoid accidents, some of this will be true. Until you eliminate the humans driving, FSD is going to be in lots of accidents no matter what.

I know, I know, Tesla won’t have to pay for those. Har har, tell me another.


How big is “lots”? Twice as many as humans? 10% as many? Five times as many?

I wonder why I bother responding.

The Captain


Fewer on limited-access highways, more everywhere else, because driving is more complicated there.

You can’t help yourself when responding to the unassailable logic of my posts. Anyone who thinks there aren’t going to be lawsuits is on crack. And if it isn’t the driver driving, the lawsuits are going to be against the entity controlling the vehicle: the writer/owner of the software.


It’s possible. But FSD will cause fewer of those accidents, simply by virtue of how it works (it actively, at all times, tries to avoid occupying the same space as anything else: a vehicle, a human, etc.) and its faster reaction time (far faster than a human’s). Not only that, but FSD will have all the video stored to present to the insurance company to show that it wasn’t at fault.

It’s funny, I literally just 2 minutes ago watched an FSD video of a narrowly avoided accident. Here’s a link to it.

Obviously owners pay for FSD capability. Jain thinks the FSD vendor (Tesla et al) will pay for the consequences of an accident, as has already happened in Tesla’s case, even though the settlements aren’t public.

Unless of course you think Musk also has a magic wand to make any FSD liability claims go POOF?

Much more than a magic wand. Currently, accidents are dealt with under existing legislation; Autonomous Mobility will require updated legislation. My guess is that Elon Musk will update the economics of the business to cover any new expenses. Tesla insurance is a step in that direction, forward looking just like the SuperCharger network was and is; without it, the EV revolution might never have taken off.

Consider Elon’s invitation to Warren Buffett to invest in Tesla. Buffett’s greatest skill is using other people’s money (OPM) to fund his investments. One such source of funds is the insurance float which might be cut drastically by Autonomous Mobility. A financial marriage might be in the making.

Rather than doing discounted cash flows that backward looking analysts love, I’m open to developments fueled by the great phase change in economics driven by neural network based AI. Consider the current upheaval at Tesla, some think it’s Tesla unravelling. I believe it’s Elon Musk seeing the writing on the wall. EVs, SuperChargers, and storage are no longer the main course, they are becoming the vehicles to monetize AI. FSD, RoboTaxis, humanoid robots, and Virtual Power Plants (VPP) are examples.

The Captain


I mean - maybe.

AIUI, if you tried to run a car for a full year on FSD today, it would end up getting in a crash at some point. It’s just not ready for 100%, door-to-door on every trip with no interventions. At some point it will end up in a scenario where the system doesn’t know what to do, and it turns over control to the driver. It can’t handle 100% of scenarios.

So we don’t know yet whether FSD will cause fewer of those accidents. It is possible that a future AV system - maybe even a future version of FSD - will be safer than an adult human driving a well-maintained car. But we don’t know whether FSD will be that - much less whether it will be that when Tesla claims that FSD is ready for Level 5 use.

There are at least two very different things here. One is Level 5: the car doing 100% of the driving, all of the time. The other is the car doing the driving whenever it can and the driver doing the rest. If FSD drives more safely than the driver on average, the latter will still be safer than having the driver drive all the time.


Maybe? Again, we don’t know. We probably have no way of knowing.

There are at least two ways that could not be true. The first is that the FSD just isn’t better at driving than the driver would be. It could get into just as many (or more) crashes than the human driver would. For different reasons, to be sure - not because it was sleepy or inattentive or had slow reaction times, but perhaps because its programming led it to make the wrong movement at the wrong time. We don’t know.

The second way is that the handoff has its own dangers - the FSD might be safer than a human when it drives, but it’s bad at making sure that the human is in a position to take over when an intervention is required. This is the crux of the NHTSA’s current enforcement actions against Tesla (they’re about AP, but the same issue can be present with FSD or any Level 2 system). Again, we don’t know, and probably have no way of knowing.

You mean the way AI, already released to the world because there are no regulations saying not to, is prone to making stuff up, giving wrong answers, or refusing to open the pod bay doors, Hal? I guess I’m glad for the auto regulations, since I didn’t sign up to be a test-drive dummy for somebody else’s gigantic stock option play.

And there’s this one, my favorite of the last week:

An attempt by a Catholic advocacy group to spread the word of God using an AI model has backfired, and chat bot – Father Justin – has been pulled down and reworked.

The group’s Catholic Answers website contains answers to commonly asked questions from those confused by the good book. Father Justin was supposed to aid this, by answering any other queries worshipers may have, but as commonly happens the interactive Q&A bot really didn’t work that well.

As seen in this Twitter [thread], one questioner received Father Justin’s blessing to marry her brother, saying it was “a joyous occasion,” and also offered absolution after a confession – a huge no-no from a theological perspective for a non-priest.

In an interview, the group’s COO Jon Sorensen [said] they had … tested it over six months. However, this wasn’t enough to stop the AI cleric telling one questioner that baptizing a child with Gatorade was perfectly all right.


I’m saying that TODAY’S version of FSD with the human driver taking over when necessary is safer than driving without FSD. It is so clearly obvious to me after using various versions of it for a few months. True FSD, with both F and S operating at all times doesn’t exist, so I don’t know how it will perform without a human behind the wheel. It’ll likely still be [a lot] safer than a human driving, but if it becomes confused, it may pull over and stop periodically. And that’ll be a big problem that will have to be solved somehow (Waymo/Cruise solves it by having a remote human take over for a driving problem that the car can’t solve).


Again - we don’t know. You have no way of knowing if that’s true. Your experience is anecdotal (and probably based on you safely and appropriately using FSD). You don’t know how often FSD makes a weird decision that results in a crash - or (more importantly) runs into a situation where it hands off to the human driver where the human driver doesn’t have enough time to react without a crash. It hasn’t happened to you (I assume), but that doesn’t let you know whether FSD is safer than a human, because most humans don’t get into car crashes over a period of a few months.

I’m not even sure Tesla knows. It knows that FSD+humans get into fewer crashes per mile than the driving population at large - but because the FSD+humans aren’t representative of the driving population at large (they’re disproportionately in newer and more expensive cars, they’re disproportionately in California, they’re disproportionately activating FSD in safer driving conditions, etc.), we don’t know if that outcome is the result of the different data set or because FSD is a better driver or not.
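The exposure-mix problem described above can be made concrete with entirely made-up numbers: a system can look safer in aggregate while being worse in every driving condition, purely because its miles skew toward easier driving (a textbook Simpson’s paradox). All rates and mileages below are hypothetical, chosen only to illustrate the effect:

```python
# Hypothetical crashes per 100M miles, broken out by driving condition.
# FSD is WORSE than the human in both conditions in this toy example.
rates = {
    "highway": {"fsd": 1.2, "human": 1.0},
    "city":    {"fsd": 6.0, "human": 5.0},
}

# Hypothetical exposure (units of 100M miles): FSD miles skew toward
# highways, human miles skew toward city driving.
miles = {
    "fsd":   {"highway": 9, "city": 1},
    "human": {"highway": 3, "city": 7},
}

def aggregate_rate(who):
    """Overall crashes per 100M miles, pooling both conditions."""
    crashes = sum(rates[cond][who] * miles[who][cond] for cond in rates)
    return crashes / sum(miles[who].values())

print(aggregate_rate("fsd"), aggregate_rate("human"))  # 1.68 vs 3.8
```

Despite a higher crash rate in both strata, the pooled FSD rate (1.68) comes out well below the pooled human rate (3.8), which is exactly why a raw FSD-vs-fleet comparison proves nothing either way.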

You have no way of knowing whether pushing that button to activate FSD makes you safer or less safe than choosing to drive yourself.


The above is accurate.

FSD 12.4 is about to be released, and Tesla is working on versions 12.5 and 12.6. None of the above issues are a mystery to Tesla, and you can bet they are working on them. What we can clearly see is how much faster the iterations of v12 are coming than previous versions, because programmers writing heuristic code have been replaced by neural network training on one of the world’s largest, fastest AI computers, second only to Meta’s.

In recent videos they have been talking about introducing reverse, which has so far been lacking. Reverse is crucial to get out of “pulling over and stopping periodically.”

I have said it before, the shift from heuristics to neural networks, once they have the required computing power, is a phase change advance. Time is shrinking fast.

Most convincing arguments! :clown_face:

The Captain


Well, perhaps you can argue that we don’t have a sufficiently rigorous metric yet, but the accident rate with FSD engaged is dramatically low, which seems like a strong indication. And while there are issues about keeping the human in a properly supervisory state, I am not aware of even anecdotes about a problem with the actual handoff. If either were the case, we would be very likely to know it.


What one would eventually need to compare are stats like the following:

1.33 deaths per 100 million miles driven.

I was not able to find accident rates per mile driven, but stats like these, for human drivers and for FSD alike, would likely be the best way to compare.

I doubt any comparable data for FSD is public.

Tesla Autopilot Involved in 736 Crashes since 2019

The self-driving technology was also implicated in 17 deaths

Of course, there is no way to know if that is in any way significant, and of course it reflects older versions of the software.
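For what it’s worth, the normalization both figures rely on is just events divided by exposure. Here is a minimal sketch; the 1.33 figure is from the post above, while the 3 billion Autopilot miles is a pure placeholder assumption, since no public denominator is cited in this thread:

```python
def rate_per_100m_miles(events, miles):
    """Events (deaths or crashes) per 100 million miles driven."""
    return events / miles * 100_000_000

human_fatality_rate = 1.33  # deaths per 100M miles, from the post above

# HYPOTHETICAL exposure: 17 deaths over an assumed 3 billion Autopilot miles
assumed_rate = rate_per_100m_miles(17, 3_000_000_000)
print(f"{assumed_rate:.2f} deaths per 100M miles")  # prints "0.57 deaths per 100M miles"
```

Until someone publishes the actual denominator (miles driven with the system engaged, broken out by conditions), the comparison can’t really be made; the function only shows what it would take.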