There are endless discussions about this. It depends on how they described FSD when you bought it. To what extent that includes what was on the web site and in the voluminous FSD-related blatherings of Elon Musk in addition to what was in the contract clearly depends on local laws.
A guy in England recently got Tesla to refund what he paid for FSD a few years back because of what the web site promised. Of course FSD was removed from his vehicle as a result.
Well, they might need to upgrade the camera suite some. I should have said the HW3 FSD computer.
I think Tesla will, at some point, provide some chargers that the cars can use by themselves. Possibly inductive pads. But that remains to be seen. Up to now, FSD doesn’t work well enough to require that upgrade.
But if you don’t leave the car in drivable condition (or it becomes undrivable), I doubt there’s any obligation for FSD to take the physical actions a human might.
Just curious, because although I like Tesla the EV maker, I think Tesla the AGI company is a long way off. It should be perfectly obvious to anyone driving the FSD beta (as I’ve been doing for just over 2 years now) that FSD is doing anything but general intelligence, despite what anyone claims.
Then I suspect there’s no way for you to know, either - since there’s no reason why you would have seen anything remotely close to the cutting edge of what AI can do in the legal field. AFAIK, you’re not in the legal field, and (I assume, but tell me if I’m wrong) have never done any research or investigation into the application of AI to the fields of zoning, land use, and other legal services involving contested regulatory approvals. There are things in the legal realm that AI can do (and does do) very well - my particular niche practice shares virtually none of the characteristics of those areas.
Or as long as it is intellectual and creative enough that AI can’t do it. I think you have it backwards - AI is coming for the boring jobs, not the non-boring ones. AI can easily write a letter of transmittal or a request for production of documents; it can’t construct a legal argument regarding an infrequently cited code provision, or sit down with a municipal official and convince them why their interpretation of the relevant ordinances is incorrect.
At one point, Musk was talking about designing FSD to work on HW3 first and then revising it to work on HW4 - not the other way around. But that was back in August, so who knows.
Yes, and there has been evidence that this is happening and no evidence otherwise. There are two reasons this makes sense. First, Musk’s usual overconfidence regarding FSD means he thinks this will be a big leap forward and happen soon. Soon means that there aren’t many HW4 vehicles on the road compared to HW3, so most of the new data is pouring in from HW3 vehicles. Second, since the HW4 cameras are higher resolution, and the HW4 computer is much faster, anything that works on HW3 will work on HW4. So they won’t have to revise anything. That’s not the case in the other direction.
Tesla is currently growing its in-house compute capability rapidly. So, while they are (probably) prioritizing training the neural nets for HW3 and Optimus right now, in a few months they’ll have enough cycles to also train the HW4 neural nets at full speed and they’ll then be able to make progress on doing whatever they plan to do to make HW4 FSD better than HW3 FSD.
Incidentally, FSD V12 gets rid of almost all the C++ code in favor of end-to-end neural nets for path-finding. This takes up less memory and runs much faster on HW3 vehicles than FSD V11. So if they can get V12 to work acceptably, which seems quite possible, it should work fine on HW3, all the way up to robotaxi level.
I don’t know anything specific. But what I do know is if the research guys are anywhere close to AGI, then that means that pretty much any human intellectual task can, if targeted with sufficient resources, be done by AI. And with decent training, can be done better. So the specifics of your field don’t matter in the least, only that it’s an intellectual task that can be done by people.
AGI means there is no such task. Close to AGI means there are few such tasks. I doubt very much you just happen to be engaged in one of the few. But, as I said, you are likely engaged in something boring and unprofitable. What that means is, to be clear, not interesting to AI researchers, and not likely to be worth a lot of money to replace you with a machine (compared to other opportunities).
Remember, once they have AGI it will cost little or nothing to replace you, so there has to be a good reason to hurry it along before that point.
It’s funny that you think so. If people can construct a “legal argument regarding an infrequently cited code provision” then why would you think an AGI couldn’t? If it couldn’t, it wouldn’t be AGI. And I have no doubt it could do it now if there were a reason to target that task.
And as for “sit down with a municipal official and convince them why their interpretation of the relevant ordinances is incorrect”, an AGI would simply transmit the information to the municipal official’s AGI aide, which would do the convincing part (if merited). As for the graft and corruption part, which seems traditionally to be a big part of the business of land in Florida, I’m not at all sure as to how ethics will be built into AGI. It is a difficult and quite inconvenient question. The video I linked has some speculation about that.
You really don’t understand where this is going, do you? It’s truly terrifying.
Admittedly, the timeline is dependent on many things, especially cost.
The case was settled. And, as the guy wouldn’t accept being silent about it, I assume Tesla was pretty eager to settle.
Nobody claims that Tesla’s FSD has anything to do with AGI. It’s a set of neural nets tasked to solve specific problems.
But I claim that once AGI is solved, then FSD will be automatically solved for everybody who can feed the AGI sufficient information about the surroundings of the vehicle and have the AGI control the vehicle. More or less.
This demonstrates beyond all imagining that you have no idea how the world works. There are a few people who make a sale with logic. But overwhelmingly you convince and bring someone to your point of view with emotion: by reading the other person, shaping the argument on the fly depending on the responses, and closing the sale at the perfect moment. You don’t do that with a concluding paragraph. You certainly don’t convince a municipal official by sending his assistant some kind of “convincing document” and letting him do the arguing for you.
You can laugh all you want, but you’d be surprised how much of the world’s decision making revolves around personal relationships.
The infrequently cited code provision argument is necessary but not sufficient. In the real world, the best argument doesn’t necessarily win. In fact, I’d say the best argument isn’t even the best predictor of who wins. Being able to sit down and shoot the breeze is how deals get made.
I always thought I would be great technically and advance my career that way. But in my field, being great technically is good, but inviting people out to lunch is better. A personal relationship trumps the best technical argument. I’ll go to the mat on this one.
We’re not talking about how the world works, we’re talking about how the world is going to work. The graft and corruption and “convincing” will have to find different paths through the system.
And yet, when the municipal official becomes superfluous because the AI assistant is the one who understands the rules, what happens then? When the municipal official is in the position where not following advice (of his expert AI) requires justification?
The interesting question for process and procedure is not actually what happens when AGI equals human intelligence, it’s what does the world look like shortly thereafter when AGI vastly exceeds human intelligence. It might be very shortly thereafter, with little time to adjust.
I’m well aware. I just think that AGI will change everything. And people won’t have much control over the process.
Add automatic braking and lane departure warnings (two things that already exist) and you could put a big dent in distracted driving as well. That might deal with another 10% to 20% of accidents.
I mean - really? Is that how you think these things work - that the way you persuade someone is by sending “information” to their aide and that they’ll go and argue with their boss on your behalf? Not a chance. If you want to persuade a municipal official, you have to go and talk to them and make your argument. Or hire a good lawyer to do it. Not send a white paper to their junior staffer and expect them to be your advocate for free.
As for the rest, obviously I don’t think there’s any reason to believe that “the research guys” are close to AGI - but I also think you’re confusing AGI with a much more advanced goal of an artificial superintelligence. An AGI is an AI that’s achieved something comparable to the general mental operations of a human. Not a brilliant human, not even a necessarily smart human. Just…a human. A program could be an AGI even if it were not especially good at doing some human tasks - and there’s good reason to think that’s the most likely outcome.
The first AGIs may be exceptionally good at some tasks (pattern recognition, coding, playing chess and go) while being decidedly mediocre or worse at other things (creating a persuasive argument, writing poetry, comforting a distressed person). Jobs that require above-average skills or intelligence or creativity - that an average human can’t do, but certain humans can - are not necessarily replaceable even by an AGI.
There are many things I find quite questionable about how soon AGI will arrive and what it might be able to do. But one thing that you are missing is that even if AGI were “solved” on a large supercomputer cluster, that does not mean it can automatically be applied inside an autonomous car, given that it must run, well, autonomously. This means it must run stand-alone in the car: safety and other considerations rule out relying on real-time communications, there isn’t sufficient bandwidth for a dense population of cars, etc.
(We’ve recently seen Cruise disclose that they needed remote help every few miles. Humans helping the cars isn’t AGI, but it’s certainly a practical intermediate stage.)
So, with today’s hardware, the computer in the car falls short of what AGI would require by many orders of magnitude.
It also depends on what and how you want to define AGI.
So I asked the MS ChatGPT to define it:
Artificial General Intelligence (AGI) is a hypothetical type of intelligent agent. If realized, an AGI could learn to accomplish any intellectual task that human beings or animals can perform. Alternatively, AGI has been defined as an autonomous system that surpasses human capabilities in the majority of economically valuable tasks. It’s a branch of theoretical artificial intelligence (AI) research working to develop AI with a human level of cognitive function, including the ability to self-teach.
So the car would not really do any task, and if trained on a large supercomputer it would not be self-teaching. The small subset of AGI needed for driving would/could be autonomous, as required for real-time safety.
I then asked ChatGPT how long until we get AGI:
The development of Artificial General Intelligence (AGI) is a complex task that requires interdisciplinary collaboration… According to Demis Hassabis, the CEO of Google’s DeepMind, AGI might be developed “within a decade”
If we could get to AGI in a decade, it would be on a large supercomputer (maybe 1,000 or 10,000 nodes of 1,000-4,000 watt CPUs/GPUs/Dojos). Shrinking this to a 100-200 watt car computer will take quite a while, and may not even be a practical thing to do.
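To put rough numbers on that gap, here is a small sketch using only the ballpark figures from the post above (the node counts and wattages are the post’s guesses, not real hardware specs):

```python
# Rough power-budget comparison: a hypothetical AGI training cluster
# vs. a car computer, using the post's ballpark figures.
nodes_low, nodes_high = 1_000, 10_000
watts_per_node_low, watts_per_node_high = 1_000, 4_000

cluster_low = nodes_low * watts_per_node_low      # low-end cluster, in watts
cluster_high = nodes_high * watts_per_node_high   # high-end cluster, in watts

car_budget = 150  # watts, mid-range of the 100-200 W figure

print(f"Cluster power: {cluster_low / 1e6:.0f}-{cluster_high / 1e6:.0f} MW")
print(f"Gap vs. car computer: {cluster_low // car_budget:,}x "
      f"to {cluster_high // car_budget:,}x")
```

Even at the low end, the power gap is several thousand-fold, which is what “many orders of magnitude” cashes out to under these assumptions.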
Tesla’s share in the all-electric segment in the United States decreased to 57.4 percent from over 65 percent a year ago. That’s a noticeable change.
BEV registrations in January-September 2023:
Tesla (57.4% BEVs): 489,454 (up 41%)
Non-Tesla (42.6% BEVs): 363,450 (up 98%)
Total: 852,904 (up 61%) and 7.4% market share (vs. 5.2% a year ago)
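The share figures above follow directly from the registration counts. A quick sketch checking the arithmetic (the inputs are the Jan-Sep 2023 numbers quoted in this post, not an independent source):

```python
# Verify the BEV registration shares from the quoted Jan-Sep 2023 figures.
tesla = 489_454
non_tesla = 363_450
total = tesla + non_tesla

tesla_share = tesla / total * 100
non_tesla_share = non_tesla / total * 100

print(f"Total BEV registrations: {total:,}")      # 852,904
print(f"Tesla share:     {tesla_share:.1f}%")     # 57.4%
print(f"Non-Tesla share: {non_tesla_share:.1f}%") # 42.6%
```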
One of the most interesting findings (when comparing the latest numbers with the ones from the previous report) is that Tesla’s EV sales growth almost stalled in September. The number of new Tesla registrations in September amounted to about 51,500 electric cars, just about six percent higher than a year ago. Meanwhile, non-Tesla all-electric car registrations more than doubled year-over-year in September to over 46,500.
I said nothing about it working that way now. But it’s an example of how I think things will work when we have useful AGI. Although I think you are confused about what information is – anything you can communicate in a sit-down meeting is also information.
I suspect that once AGI is achieved, affordable AGI will follow quickly, and super intelligence will be close behind. There’s no reason to think that human-level AGI is a stable point for machines. Humans span quite a range in intellectual capability. Machines will too. And it will likely turn out that various varieties of super intelligence will appear quickly, especially for any jobs deemed important. As I said earlier, the dull, unimportant jobs will exist for humans for a while longer.
We can hope that by that time we’ve figured out how to deal with the alignment and ethics issues.
It’s all speculation of course, but some speculation is more informed.
No, it’s not something I’m missing, it’s something you don’t understand. The current approach to AI, on which a clear path to AGI is seen, involves large clusters of powerful machines doing the training, but the result is fairly small stand-alone AIs. There is further specialization training that can be done quickly and cheaply too.
As I’m using the term, an AGI can do any intellectual thing a human can do, including self-teaching. A car doing self-driving obviously doesn’t need an AGI. But if an AGI exists that can go in a car, then self-driving is solved. Also obviously.
That’s funny. You know ChatGPT is not current in its knowledge, right? And asking an AI about AGI progress is just a demonstration of where we’re going. Even now, people are starting to trust an AI to know more about most subjects than they know themselves.
Yeah, I should have included weather prediction too. It’s kind of tied to climate change mitigation as climate change is making it much more urgent, but they would have done weather prediction anyway, as it’s one of those traditionally unsolvable problems, on the old list with chess and go, only more valuable.
A conversation is different than written communication. It’s a different medium of communicating information. More informal, more interactive, and more “forgiving” in terms of letting people express arguments and points of view than even an exchange of emails. I think on some level you recognize that, which is why your first thought was that the machine would communicate with an aide who would then try to persuade the decision-maker, not try to persuade the decision-maker directly.
Again, no reason to think that they’ll appear quickly. People once thought that because early computers had amazingly powerful capabilities to do some things better than humans (like mathematical operations), they would be able to do other things better than humans relatively quickly too. Like computer vision - which in the mid-1960s was thought to be so close to “right around the corner” that an MIT professor assigned solving it as a summer research project. Three decades later a computer was beating human grandmasters in chess, but vision was still very much an unsolved problem.
There’s no special reason to think that artificial minds won’t share those same characteristics - that they will be exceptionally good at some things, and that they might be mediocre at other things - for quite a long time. AIs that can do amazing things with language - like writing better college essays than a typical human - may end up being little more than stochastic parrots, and terrible at certain other types of reasoning.
Nor is there any reason to think that it’s the dull jobs that will last the longest - because the dull jobs are going to be the ones that are easiest to automate. As they have been throughout human history. Complex and complicated jobs that require difficult reasoning and creativity have been the ones that hold out the longest.