I’m an engineer … so I know exactly what you are saying.
Thing is, he didn’t need to say it in order to sell cars.
You’ve all probably seen this.
Reading the small print, Tesla will win its cases involving FSD vehicles.
Basically, if you were to lend your car out to drive for Uber overnight, you’d be out of luck when something goes wrong. Otherwise, who knows…eventually FSD will be much better than you actually driving.
Recognizing that Wikipedia is hardly the end-all/be-all of scientific research, their entry on AGI does note that there’s a minority opinion in the field that says that AGI will never be achieved. More to the point, it also cites folks in the field who believe that, while AGI might one day be possible, we’re a century or more away from it. Which is not “never” - but if that’s the timeframe, then it’s kind of a moot point for near- or intermediate-term autonomous driving.
I mean - maybe? I kind of doubt it.
I think FSD might one day be less likely to get into an accident than the average driver. Because the “average driver” includes drunk people, and teenagers, and people who drive in poorly maintained vehicles, and people who text while driving. And most serious accidents are caused by people who do those things - and I suspect that a very large proportion of non-serious accidents are as well.
I don’t do any of those things. That doesn’t mean I’m a great driver. It just means I don’t fall into any of the categories that are most likely to cause accidents. There is a huge amount of space between being safer than the average driver and being safer than a sober, attentive adult driving a reasonably maintained vehicle. It will take FSD (or any other system) a lot longer to be safer than the latter than to be safer than the former.
I guess if you misunderstand it you might have an excuse for thinking it’s a dumb statement. The statement meant, and still does mean, that the software aspect of the machine will gain in value faster than the hardware itself loses value. I’m quite sure that Musk still believes that, and nobody knows whether it will become true at some point.
To be sure, it is not true right now, as the software has not gotten significantly better for the past few years, and it is unknown whether it will ever get sufficiently good.
But Musk’s statement itself is neither ignorant nor dumb. At worst it is overly optimistic. The fact that you obviously don’t understand it is your problem.
Snow is distracting to an AI? I can see that snow may degrade sensory acuity, much as it does with humans, but I’m missing your point.
There are always a few skeptics in every field.
The overwhelming consensus is that AGI is definitely happening, probably soon. Legg (in the interview below) says his best guess is a 50/50 chance by 2028.
The significant disagreements tend to be about safety, alignment and such.
Here’s a pretty good interview with Shane Legg, the Chief AGI Scientist at DeepMind. It gets a bit jargony at times, but I’m sure you can handle it.
(0:00:00) - Measuring AGI
(0:11:41) - Do we need new architectures?
(0:16:26) - Is search needed for creativity?
(0:19:19) - Superhuman alignment
(0:29:58) - Impact of DeepMind on safety vs capabilities
(0:34:03) - Timelines
(0:41:24) - Multimodality
The overwhelming consensus in the field has always been that it’s definitely happening, and probably soon. At least since 1950. And they’ve been consistently wrong. As noted in the Wikipedia article, for the last 70 years scientists in the field have pretty consistently felt that AGI was coming in the next 15-25 years. Even though it never came.
It’s like the running gag that “fusion is always 30 years away,” a wry comment on the fact that researchers in atomic energy have consistently believed (since the 1950s as well) that a breakthrough in fusion was always around the corner…but just far enough in the future for it not to seem foolishly Pollyannaish. I think it’s likely that even if it takes 100 years or more to get to AGI, many researchers in the field will remain optimistic that it will be achieved “probably soon” through every one of those hundred years…
There are not just skeptics in every field - there are optimists in every field as well.
I know you’re a Tesla fanboi, but man, this is a new level of Kool-Aid drinking, even for you.
Software is valueless without the hardware to run it on. If the hardware is worn out or doesn’t work correctly, the software is still valueless for that machine.
Looking at this particular issue, what software - exactly - are we talking about here? Certainly not the basic software that operates the car. Yes, that is updated regularly, with various features being added or upgraded. But those clearly don’t make the car more valuable. Just look at the prices for used Teslas. They are far less than the original sales price. Clearly, the market is not seeing added value for those updates.
So I would hazard a guess that Elon was talking about FSD.
While I have never looked into the details of it, if you paid the significant chunk of money to get FSD on your Tesla, can you transfer that to a new Tesla? Or - like in the personal computer world - do you need to buy (really license) the software again when you purchase a new Tesla?
If, as I suspect, the FSD license is for that one particular Tesla and is not transferable, then even your stretch of an explanation doesn’t hold true.
But Peter, that’s lazy. Do some research on your own before bashing poor IGU. OK. Let’s go!
Google, can I transfer FSD from one Tesla to another?
Hey, look. You can. Well, you could. For a bit less than 3 months earlier this year.
Tesla will only allow the transfer for owners who trade in their old EVs and order a new one during the third quarter, so until the end of September.
Heck, even Tesla itself puts a low value on FSD when trading in for a new Tesla. From that same article:
The issue is that Tesla seems to undervalue the feature when accepting trade-ins for used EVs, with multiple users saying that the estimates they got revealed a sub-$10,000 value for FSD Beta,
Even Tesla doesn’t buy Elon’s earlier statement about it being an appreciating asset.
So spin it as much as you like, it was an idiotic statement then, it’s an idiotic statement today, and will continue to be an idiotic statement into the foreseeable future. As has been pointed out many times, the current hardware is quite likely inadequate to deliver on the promise of FSD. So even if FSD does become a reality some time in the future, it probably won’t work on any Tesla on the road today - let alone Teslas produced back when Elon babbled about “appreciating assets” several years ago.
What Musk believes is irrelevant. His beliefs need to be backed with verifiable facts to be true. And your second sentence there tells the real story. No one knows whether it will ever become true.
Here’s my belief. The heat death of the universe will happen before any Tesla delivered before 2024 runs actual Level 4- or 5-capable self-driving software.
@albaby1 You are missing it. When society makes the jump, all cars will talk to each other and there will be just about no accidents. I get that kids run after balls, etc., but that will be minimized. It is not just drunks speeding in neighborhoods with kids.
It’s not just drunks speeding…but that’s a lot of it. Most serious accidents are caused by a small slice of driving: intoxicated drivers, teenagers, distracted drivers, poorly maintained vehicles, bad weather, and violations of driving laws (mostly speeding). A group of sober, adult, attentive drivers obeying traffic regulations in adequately maintained vehicles is pretty darn safe. And honestly, that’s the median driver. Not the average driver, but most people who drive are sober adults who aren’t engaged in risky behavior. It is going to be much, much harder for any driving system to be safer than that median driver than to be safer than the average driver.
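The average-vs-median distinction can be made concrete with a toy model (all numbers here are invented purely for illustration, not real crash statistics): if a small slice of drivers carries most of the risk, the mean crash rate sits well above the median, so beating the “average driver” is a much lower bar than beating the typical sober, attentive one.

```python
# Toy model (invented numbers): per-driver annual crash probability.
# 90% of drivers are low-risk; 10% (impaired, distracted, etc.) are high-risk.
low_risk = [0.01] * 90   # sober, attentive drivers
high_risk = [0.20] * 10  # the risky slice
drivers = sorted(low_risk + high_risk)

mean = sum(drivers) / len(drivers)
median = drivers[len(drivers) // 2]

print(f"mean crash rate:   {mean:.3f}")    # 0.029 - dragged up by the risky 10%
print(f"median crash rate: {median:.3f}")  # 0.010 - the typical driver
```

In this sketch an automated driver that crashes 2% of the time per year would already be “safer than average” while still being twice as dangerous as the median driver.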
And since even AV enthusiasts would agree that “perfect” isn’t a realistic goal for AV drivers any time soon, it’s entirely possible that we might be a very long way away from when it’s safer for a sober, attentive driver who doesn’t engage in risky behavior to turn on their computer driver, rather than drive themselves.
Cars are pretty unlikely to talk to each other much more than they do now (through signalling and horns). It’s hard to see the use case for that kind of technology.
BMB Prediction: We will NEVER have a Minority Report style of AI cars. The vast majority of Americans (and their politicians) will never turn over complete control of their vehicles to some third party.
Heck, the movie itself gives good reason why it will never happen.
They won’t actually be driving at some point. Meaning they won’t be at the wheel when going anywhere.
Invest in the bar business.
I get it. I will never give control of my car over to some third party.
But my grandchildren? I’m not so sure. It seems as if they were born holding an iPad.
And I think that regardless of what’s happening around you, you’ll continue to blandly insist it isn’t happening… because you can’t see it. Maybe your wheelhouse has a remarkably obstructed view?
I suspect that right now, already, there isn’t a single thing in your field that an AI couldn’t do better than you, if anybody with reasonable resources were to bother training it to do so. AGI is harder only because it has to do everything, not just your thing.
Nah. I simply take with a grain of salt any claims that major discoveries in difficult fields are “right around the corner.” I don’t insist that progress isn’t happening - I’m well aware of developments in large-language model AI. But leaping from that to the idea that we’re only a few years away from an AGI?
Not even close. I’m fortunate that my particular field (zoning and land use) involves very, very little written work-product. I work for a BigLaw firm, and the firm uses AI extensively - our corporate and litigation attorneys have successfully automated wide swaths of their work product. But our niche practice remains stubbornly “hand-crafted.” We have a very large technology group, and we’ve met with them several times to talk about what the state of the art is for legal services AI, and it’s just not capable yet of doing much in our field.
Well, there’s another thing you “know” that isn’t true. I guess you must be very used to being wrong by now.
True enough, but irrelevant to this situation.
No need to guess. He was talking about FSD.
You’re being redundant here. You’ve never looked into the details of anything having to do with Tesla. Why should you? You’re a tourist and critic, not an owner.
The answer is that no, in general, FSD is not transferable. For most of 2023Q3 there was an offer from Tesla that the FSD software license could be swapped to an FSD license on a new Tesla delivered to the same owner by the end of the quarter. I took advantage of that for my 2017 Model 3, getting FSD on a new 2023 Model Y for no charge. My expectation is that such offers will come around again.
It’s a potentially appreciating asset. So long as FSD doesn’t work much better than it does now, its value mostly comes from betting it will work well in the future.
This is entirely unknown. For Teslas produced way back when, a public commitment was made that, when FSD was eventually fully functional, anything about the then-current hardware that didn’t support FSD would be upgraded for free.
Meanwhile, the current FSD work is concentrating on so-called Hardware 3, even though most cars are being delivered with the next generation Hardware 4. Tesla believes that fully functional FSD will work on HW3. Of course, as you say, what matters is what actually happens, not what anybody believes. But, on the other hand, as investors we know that the value of something reflects to some extent what people believe it is worth rather than any objective breakdown of its value.
So yeah, nobody knows what will eventually be true. Your views, however, are fed mostly by ignorance of anything other than what the media tells you. Mine are fed by personal experience over many years, and a close observation of many of the people involved. I may certainly be wrong in my predictions, but they stand a much better chance of being right than yours.
I own a Tesla with HW3. Where in my contract does it state this fact? How exactly does it define “fully functional”?
This is literally impossible. If I am at work and my Tesla is at home backed into my driveway, and come 5:45pm, I want it to come pick me up, how can it possibly pull out of the driveway? It CAN’T SEE if the kids left a toy on the driveway right in front of it! And the kids got home from school at 3 and played in front of the house and in the driveway for an hour and a half.
And then there’s the trivial case of charging. If the car is in my driveway, and the charge level has gone down to 4%, and I want it to come pick me up at work or at the airport, what does it do? None of the current versions supports self-charging.
I think it would be more correct to say that Tesla believes that FSD will eventually work with HW3. With “FSD” being defined as Tesla defines it, the ability for a driver to enter a destination, turn on “navigate on autopilot”, and the car will drive to that location. It is NOT correct to say that Tesla believes that HW3 can literally do FULL self driving as in robotaxi, because it literally can’t due to missing important segments of vision.
There’s no way for you to know, as you don’t get to see anything remotely close to the cutting edge.
We have a very large technology group, and we’ve met with them several times to talk about what the state of the art is for legal services AI, and it’s just not capable yet of doing much in our field.
The state of the art of legal services AI products, I imagine. The state of the art in legal services AI is far beyond that, but why would anybody bother paying to productize it? Only if they can profit from it far more than from some other similar thing. So long as it remains the case that in another couple of years it will cost (probably) ten times less, people will work on only the most urgent and critical tasks.
Things humans can already do are far down the list. Things high on the list? Weapons of war, disinformation identification, drug discovery, cures for disease, climate change mitigation, market manipulation, etc. And of course, things that AI grad students are curious about or amused by (hint: that ain’t zoning and land use).
You will be protected from an AI takeover only so long as what you do is both boring and insufficiently profitable for an AI take-over. Of course, AGI will subsume everything (presuming it gets cheap), as it will need little specialized training to be better than anybody at anything non-physical.
The physical stuff will have to wait for much better robots. That will also happen soon, but I don’t know enough about it to have any guess as to when.