According to the timeline published by the nation’s auto safety regulator, NHTSA told Tesla on January 25 that it had four concerns about FSD Beta’s driving behavior and asked Tesla to issue a recall. After a couple of weeks of discussions, Tesla apparently did not concur with the agency but decided to issue a recall anyway, perhaps reading the writing on the wall.
This does not mark the end of Tesla’s driver-assist woes. NHTSA is continuing to investigate the less-capable but more ubiquitous Autopilot feature after 41 crashes since 2016, resulting in at least 19 deaths.
Perhaps it’s time for Musk to hand off Twitter to a CEO and return to Tesla.
Back to what I have said to my nephew for years: the hardware for FSD cars is not there yet. You can tweak the software all you like, but the speed of the hardware, as we currently design and build it, won’t do the job.
He told me two years ago it was ready for prime time. He is an MIT computer scientist.
BTW, the 4090 GPUs from NVDA are incredible with DLSS 3.0. The problem is that DLSS 3.0 is AI making up for what the 4090s cannot do.
Just another software update. So far as I can tell, as a regular FSD Beta user, it’s likely to be just like the NHTSA-mandated “must come to a full stop at a stop sign even though nobody’s around” change. All it does is confuse and annoy the people behind you, because people don’t drive that way.
Maybe a few of these tweaks will improve things, but it’s doubtful. NHTSA should really stick to trying to make driving safer rather than trying to find fault with an experimental system that isn’t causing problems, but is rather making driving safer.
I’m pretty sure that rigorously observing and probing the various driver-assist systems that are being shipped to customers (by Tesla and others) is very much an important part of “trying to make driving safer.” These systems constitute a major potential change in how vehicles are operated in the U.S. - arguably/eventually the biggest potential change in history, if these systems eventually move us to Level 4 or Level 5 autonomy.
It is entirely within NHTSA’s wheelhouse to make sure that what Tesla (and others) are doing is actually “making driving safer,” and not just relying on what those manufacturers say the effects are.
Building on Albaby1’s point, “FSD” software is much, much better than it was a couple of years ago - but that software is among the most complex that has ever been written - as Zoolander said, it’s “really, really, really” hard.
The NHTSA just said “big, risky bugs - not safe enough yet.” (As hundreds of beta testers have pointed out in several online forums, it’s like driving with a kid who just got their learner’s permit in complicated situations.) It’s just fine 99% of the time, but that 1% is no bueno. They have a lot of work to do.
All this is somewhat true. But NHTSA isn’t actually helping in any way. All they’ve done up to now is force Tesla to make the system worse for everybody. That’s my complaint.
It isn’t really “like driving with a kid who just got their learner’s permit in complicated situations”. Been there, done that. With a kid, you pretty much understand what’s going through their mind and the pattern to their various failures. FSD, while fairly predictable, makes ridiculous errors with no clear motivation.
Most of FSD’s errors nowadays come down to poor lane selection. And it’s not at all clear how much work there is left to do.
FSD, while fairly predictable, makes ridiculous errors with no clear motivation.
Ok, we’re doing what I call “violently agreeing” on this.
One measure of how much work is left is what hasn’t been fixed yet. After 3 1/2 years of ownership:
- marginal to no improvement in Autosteer/Autopilot behavior (“Driver Assist Cruise Control”) - yes, this is not FSD;
- meaningful improvements in FSD stop sign, speed limit sign, and traffic light recognition;
- better, but still not nearly good enough, decision making for lane changes, left turns, phantom braking, undivided-road obstacle avoidance, etc.
The bottom line is, this software is extremely hard, and expensive, to develop to the level of autonomy required to be trustworthy (and non-fatal) as “self-driving”.
If it STILL exhibits mostly the same “ridiculous” or “bizarre” behavior (or whatever term is chosen for “not what a safety-focused and aware human would do”) after ALL the money and person-hours spent coding, testing, and revising based on bug reports and automated reporting these last 4 years, there’s still a lot of work to do.
The way I think of it, this is not the root of the problem. The software has to make up for the lacking hardware. That is why the software is so hard, and so expensive, to make.
Going back to the Nvidia 4090: an amazing GPU, but it’s DLSS 3.0’s AI that gives the fantastic, seamless effect. In a video game you can fill in the blanks, so to speak. But in real life, either you have the hardware or you don’t. Video games are slow-moving in comparison. You cannot fill in reality with AI. Ouch!! Look, you can to a degree, but not substantially when the point is to know reality.
I have a young friend in Dublin who was busking on Grafton Street while I was there. She has been doing it for a few years now. I spoke with her and followed her on IG. She then left our conversation on autopilot with the AI. I do not want to have anything to do with her IG profile. You cannot substitute AI for reality. Why talk to someone who won’t talk to you?