I have to note the desperation in pointing to “driving around a parking lot” as some sort of indication that autonomous driving is anywhere near ready for prime time, at least in a commercially relevant time frame. Heck, Cal-Tech students did that sort of thing in 2003, hardly meaningful for an investor looking to cash in on that ability during the next two decades.
FYI, that is written as Caltech.
DB2
To be fair, that was a one-off demo device. In this case, what Tesla demonstrates is that ALL their generic vehicles can do it, and there are a few million of those on the roads already. Perhaps that is the major differentiator between Tesla and the other AV programs: Tesla wants all their generic vehicles to have some level of that capability built in, while the others (Caltech, Waymo, Cruise, etc.) are all one-off designs with all sorts of sensors and devices bolted onto an existing vehicle.

Of course, that doesn’t mean that true self-driving will be solved anytime soon. There are just so many scenarios I see every day that I’m not sure how it will solve them without being far more human. Take reading instructional signs and understanding them - for example, “visitors/deliveries left/barcode right.” Will it ever be able to understand that? It means that if you have a barcode sticker on your window allowing you access, you use the right lane, and the access bar raises automatically after the sensor reads your barcode. If you do not have an access barcode, you use the left lane and show your ID at the entry; you tell the guard which house you are visiting, and they check whether you are on the list, or they call the house to ascertain that you have permission to enter. And there are plenty of other examples.
The Turing test has come and gone. We have moved from ML to baby AGI.
Most people don’t understand the difference. AGI is maturing fast.
The race is on.
No, it hasn’t. Nobody has claimed to have won the Turing test. Not yet. And for good reason. I would not even say we are at “baby AGI” yet.
From perplexity.ai about this:
Based on the search results provided, the Turing test has not been definitively solved or passed by artificial intelligence systems. Here are the key points:
- There is no official standardized version of the Turing test, making claims of “passing” it somewhat meaningless.
- While some AI systems like ChatGPT can fool people in short conversations, they have not demonstrated the ability to consistently fool skilled interlocutors in extended interactions, which was part of Turing’s original conception.
- Many AI researchers consider the Turing test to be an insufficient measure of artificial intelligence. It has been criticized as a poor benchmark since at least the 1960s.
- Recent advancements in AI have led some to argue that the “popular conception” of the Turing test may have been surpassed, but this is not the same as passing Turing’s actual proposed test.
- There are calls for developing new, more comprehensive ways to assess AI capabilities beyond the Turing test.
- Some experts argue that the ease with which humans can be fooled makes the Turing test problematic as a measure of machine intelligence.
- Claims that the Turing test has been officially passed are considered misleading by many in the field.
In summary, while AI has made significant advancements, the consensus from these sources is that the Turing test, as originally conceived, has not been definitively solved or passed. The discussion has shifted more towards finding better ways to assess AI capabilities.
I asked Perplexity to describe, in surfer lingo, whether the Turing test has been solved:
Yo dude, the Turing test situation is totally gnarly right now! Some brainiacs are claiming AI has shredded the Turing test like a sick wave, but it’s not that cut and dried, bro [2].
While some AI chatbots have been pulling off some pretty rad moves and fooling judges for a hot minute, no machine has totally nailed it 100% [2]. Check it out - there’s this chatbot called ChatGPT that’s been ripping it up lately. Some Stanford eggheads even said it passes a “rigorous Turing test” [3]. But here’s the deal - the OG Turing test is all about fooling ANY skilled human, not just some random kook off the street [4].
So while these AI dudes are definitely catching some sick air, they’re not quite hanging ten on the perfect Turing wave yet. The tech is still a bit of a hodad when it comes to really understanding complex stuff [4].
Bottom line, bro - the Turing test has been getting pounded, but it hasn’t totally wiped out. We’re still waiting for that perfect AI to come along and shoot the tube all the way through. Until then, keep shredding those neural networks, dude!
LLM AI did progress very fast early on, but it may not be maturing fast any more. Having progressed very quickly at first, LLMs have pretty much plateaued now, because they’ve already absorbed nearly all the data readily available to train them. It might not be possible to get them much better than they are today:
To train next generation AIs, engineers are turning to “synthetic data,” which is data generated by other AIs. That approach didn’t work to create better self-driving technology for vehicles, and there is plenty of evidence it will be no better for large language models, says Gary Marcus, a cognitive scientist who sold an AI startup to Uber in 2016.
AIs like ChatGPT rapidly got better in their early days, but what we’ve seen in the past 14-and-a-half months are only incremental gains, says Marcus. “The truth is, the core capabilities of these systems have either reached a plateau, or at least have slowed down in their improvement,” he adds.
Further evidence of the slowdown in improvement of AIs can be found in research showing that the gaps between the performance of various AI models are closing. All of the best proprietary AI models are converging on about the same scores on tests of their abilities, and even free, open-source models, like those from Meta and Mistral, are catching up.
IOW, there’s only one internet’s worth of human-created data to train these models on, and only a fraction of that data is useful. There aren’t another ten internets’ worth of human-created data out there. So most of these models are approaching the limit of how good they can get with that one internet’s worth of data, and it’s prohibitively expensive to get more.
Early days yet - but if that ends up being the case for language models, there are some serious implications for efforts to use AI in real-world applications (like robotics), where companies have to build their own massive data sets to train on.
Interesting that everyone is using AI to get an answer to whether the Turing test has been met or not.
Nobody claims that LLMs are able to pass the Turing test.
Hmm, I will listen to the experts in the field.
It is interesting that his own LLM disagrees with him.
Who should we believe, the human or the machine? Oh, the irony.
So, how many fatalities occurred during testing? Accidents?
My response to these edge cases is pretty much that robotaxis (a subset of self-driving cars) can be successful without doing most of what you describe in these examples.
This is NOT to minimize the difficulty of making a robotaxi work well enough to be technically successful (never mind economically successful).
A business with different visitor and delivery entrances can easily set up a robotaxi drop-off location. Uber and Lyft have already done this…my business campus has had 5 or 6 ride-share pickup/drop-off locations for ~5 years. They have signage that is GPS-linked in the Uber/Lyft apps, which you must pick from when you are nearby…just like at an airport. A minimal sketch of how that could work follows below.
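For what it’s worth, the “pick a designated zone when you’re nearby” behavior is straightforward to implement. Below is a minimal Python sketch of how an app might offer only the geofenced pickup zones within range of a rider; the zone names, coordinates, and radius are all invented for illustration, and this is not a claim about Uber’s or Lyft’s actual implementation.

```python
from math import radians, sin, cos, asin, sqrt

# Hypothetical campus pickup zones (names and coordinates invented).
PICKUP_ZONES = {
    "Visitor Lobby":  (37.4221, -122.0841),
    "Loading Dock B": (37.4235, -122.0822),
}

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in meters."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6_371_000 * asin(sqrt(a))  # mean Earth radius ~6,371 km

def zones_near(rider_lat, rider_lon, radius_m=300):
    """Zones the app would offer once the rider is within radius_m."""
    return [name for name, (lat, lon) in PICKUP_ZONES.items()
            if haversine_m(rider_lat, rider_lon, lat, lon) <= radius_m]

print(zones_near(37.4228, -122.0830))  # both invented zones are in range here
```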
Mike
Edit: Original post retained at the bottom.
It appears that there is no good data on fatalities that occurred during FSD use, because Tesla has requested that the information be redacted?
It may be behind a paywall, but you can listen to the story.
Also found some of the quoted text here (also paywalled, but I was quicker than the paywall prompt!):
Tesla Asked NHTSA to Redact Info About Autopilot, FSD in Crashes: Report - Business Insider.
“Tesla requested redaction of fields of the crash report based on a claim that those fields contained confidential business information,” an NHTSA spokesperson told Insider in a statement. “The Vehicle Safety Act explicitly restricts NHTSA’s ability to release what the companies label as confidential information. Once any company claims confidentiality, NHTSA is legally obligated to treat it as confidential unless/until NHTSA goes through a legal process to deny the claim.”
(original post follows):
Wiki says 44.
List of Tesla Autopilot crashes - Wikipedia.
As of June 2024, there have been forty-four verified fatalities involving Autopilot[2] and hundreds of nonfatal incidents.
Note, those deaths happened when someone was behind the wheel (or at least was supposed to be) and had an opportunity to react. If no one had been behind the wheel, that number would of course be much larger. Therein lies the challenge.
One of the more recent deaths:
Tesla car that killed Seattle motorcyclist was in ‘Full Self-Driving’ mode, police say
https://www.reuters.com/business/autos-transportation/tesla-was-full-self-driving-mode-when-it-hit-killed-seattle-motorcylist-police-2024-07-31/
First, that is Autopilot, not FSD.
Second, that is life to date, starting in 1916!
So, for starters, we need the right product, an indication of miles travelled, and a restriction to a reasonably current release for it to be a meaningful indicator.
Indeed, if we had total miles since 2015, 44 might be impressively low for Autopilot.
You are correct, I will attempt to find the correct data and edit.
1916? Perhaps you meant 2016?
1916 through 2015 there were zero fatalities due to Autopilot!
Yes, the first accident was 2016.
There are 40k deaths due to motor vehicle crashes annually in the US. Crashes are the 2nd leading cause of death among those under 25.
And your point is?
The relevant number is miles per fatality with the particular technology in use, compared with the same figure for driving with and without that technology.
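To put numbers on that, here is a minimal Python sketch of the comparison. The US baseline (roughly 40k deaths over roughly 3.2 trillion vehicle miles per year) is approximately right per public NHTSA figures; the Autopilot mileage input is a made-up placeholder, since Tesla does not publish audited miles-per-fatality data.

```python
# Fatality-rate comparison in deaths per 100 million vehicle miles.
# US baseline figures are approximate public numbers; the Autopilot
# mileage below is a PLACEHOLDER for illustration, not real data.

def deaths_per_100m_miles(fatalities: float, miles: float) -> float:
    """Fatalities per 100 million vehicle miles traveled."""
    return fatalities / (miles / 100_000_000)

# US fleet baseline: ~40,000 deaths over ~3.2 trillion miles per year.
us_rate = deaths_per_100m_miles(fatalities=40_000, miles=3.2e12)

# Hypothetical Autopilot figures: 44 verified fatalities over an
# assumed (not published) 5 billion miles driven with the feature on.
autopilot_rate = deaths_per_100m_miles(fatalities=44, miles=5e9)

print(f"US fleet:  {us_rate:.2f} deaths per 100M miles")
print(f"Autopilot: {autopilot_rate:.2f} deaths per 100M miles (illustrative)")
```

Even with real inputs the comparison would be biased: Autopilot miles skew toward highway driving, which tends to have a lower fatality rate per mile than driving overall.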