One negative outlook on autonomous driving

Good reading. Thanks!

However, there is a comment in there that I find particularly relevant when trying to equate the achievements of AlphaZero with the challenges of FSD.

AlphaZero’s ability to master three different complex games – and potentially any perfect information game – is an important step towards overcoming this problem.

That phrase, perfect information game, is a reference to games with all the rules clearly defined. Nothing can happen on a chess board that isn’t within the rules. Driving is not like that, not even a little bit like that.

(I own a Model Y, for which I paid an extra $10k for FSD. I recently got the FSD beta but I have not used it very much yet.)

2 Likes

Absolutely! Driving is not the same kind of problem, so it will require something different to solve it. But it’s very uncertain how different. For example, how well can AlphaZero do on an almost-perfect-information game? Can it do almost as well? Or does it just fail?

So maybe the same techniques can be adjusted to handle problems that are similar?

And attacking the problem from the other side, maybe it’s possible to abstract away the imperfections of the driving information somewhat. What happens if we take the driving task and slow everything down 1000 times? It seems it would become much simpler in some ways. Maybe simple enough to be solved? Then the problem becomes speeding things up, which is a very different sort of thing.

So there’s much to be said for using tools that have been shown to work and adjusting the problem to fit the tools. Surprising things can happen.

Anyway, right now nobody knows why machine learning techniques seem to require massive amounts of data, while humans can pick up new things in just a few tries. Since we know it’s possible, because we do it, I suspect the answer to that question will come soon. From a software perspective it may look pretty much the same, but it will change everything. Overnight.

-IGU-

1 Like

Why do you think it will come soon?

I can understand the confidence that it’s possible - as you point out, we’re able to do it. But as you’ve frequently pointed out on this thread, no one can really know what the development path for AI will be, going forward. Maybe it’s soon - and maybe it’s 20 or 30 years down the road?

After all, learning to do something after just a few tries was also a “possible” future development for machine learning in 2002 (for example) - because obviously humans were able to do it then, too. But developing an AI that could do it wasn’t going to happen “soon” from 2002. Machine learning wasn’t close to being able to do that yet.

I think I know why. :sunglasses:

Humans have experienced massive amounts of data before they are faced with the new things we can pick up in just a few tries. We are not starting from nothing, but have our cumulative life experience already integrated.

2 Likes

That is a good description. For example, humans know long before they learn to drive that a plastic grocery bag lying on the street isn’t going to harm them or the car, but a same-size rock is to be avoided. You don’t need to drive over 100 different bags and 100 rocks of different types before you figure out how to handle seeing one of them.
Put a big rock in the bag and, if there is any wind, you’ll likely be a bit suspicious because the bag is fluttering but not moving, so you’d want to avoid it.
You’d need another ~99 tries with AI.
AI researchers, of course, know this, and that is why the cars are generally over cautious but still making mistakes.

Mike

1 Like

Just to clear up any potential confusion: A game with perfect information means that there’s no hidden information. In chess, everything is visible to both opponents. Contrast this with poker for example, where you have hidden information, i.e. players do not know the cards of their opponents.

In other words: all the rules are clearly defined (i.e. it’s a game), and there is no hidden information.
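The distinction can be made concrete in code. Here is a minimal sketch (the `ChessState` and `PokerState` classes are invented for illustration, not from any real game library): in a perfect-information game every player’s observation is the whole state, while in poker each player’s view hides the opponents’ cards.

```python
# Sketch of perfect vs. imperfect information.
# Class names and fields are illustrative only.
from dataclasses import dataclass


@dataclass
class ChessState:
    board: str  # full board description, e.g. a FEN string

    def observation(self, player: int) -> str:
        # Perfect information: every player sees the entire state.
        return self.board


@dataclass
class PokerState:
    hands: dict        # player -> hole cards
    community: list    # public cards on the table

    def observation(self, player: int) -> tuple:
        # Imperfect information: a player sees only their own hand
        # plus the public community cards.
        return (tuple(self.hands[player]), tuple(self.community))


chess = ChessState(board="start position")
assert chess.observation(0) == chess.observation(1)  # identical views

poker = PokerState(hands={0: ["Ah", "Kh"], 1: ["2c", "7d"]}, community=["Qs"])
assert poker.observation(0) != poker.observation(1)  # hidden information
```

Nothing deep here, just the point of the post: in chess the state and the observation are the same object; in poker they are not, and that gap is what “imperfect information” means.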

3 Likes

Sorry, no. Babies can do it too. There’s something about the architecture of brains.

-IGU-

1 Like

Because trying to solve a problem for which you already have an existence proof of a solution is much, much easier than trying to solve a problem for which there may be no solution. It’s partly a human psychological thing.

But it’s also a very practical thing. We know that we’ve got neural nets that do things a lot like the human brain. They’re just not quite there yet in a few ways: energy usage, training efficiency, size. So we can try variations and observe how they differ in implementation and in effectiveness. Then iterate. Get closer. The field is so new that lots of easy things haven’t even been tried yet. And as our computers get faster, things that we couldn’t try yesterday are easy today.

Evolution has been doing trial and error tests for millions of years. We’re getting better at looking at the details of the architecture of our brain. Can’t be all that long before somebody notices something interesting about which axons connect to which structures and says “nobody’s tried that yet, have they?”.

-IGU-

Which is true…but as I said upthread, it was no less true twenty years ago. And we know now that an AI capable of driving a car was nowhere “soon” back then.

Obviously, we’re leagues ahead of where we were in 2002 in terms of both computing power and knowledge of how to construct enormously complicated machine intelligences. But we genuinely don’t know whether we’re close to what we would need in terms of both computing power and skill/knowledge to generate an AI that can do these things sufficient to operate as a true Level 5 AV driver.

It sure seems like we’re close, of course - we’ve got AIs that can do a lot of driving skills. But it might still be a long way away before we can get that last bit. Deep Blue beat Kasparov in 1997, but it was nearly two decades later before an AI beat Lee Sedol in Go. Even though we had an existence proof of a solution to Go-playing in 1997, and proof of an AI solution to chess-playing in 1997, neither the tech nor our knowledge base allowed us to do the same for Go back then.

Again, I think we just don’t know whether an AV AI capable of performing Level 5 driving will be developed “soon” or “a longish time from now.” And I don’t think the fact that we have really great Level 2 AI and rapidly developing Level 4 AI in limited areas means we’re necessarily close to Level 5, based on a super-limited layman’s understanding of where the tech is.

1 Like

I haven’t thought this through yet, but we can perhaps still discuss it. I don’t think Go is a good analogy to autonomous driving. That’s because Go has everything clearly delineated. For example, how would a Go AI react if the table shook and that caused a piece to shift from its spot to an adjoining spot with another piece? Or if the shaking caused a piece to fall off the board?

1 Like

I agree, Go is not at all analogous to driving.

But that’s not really why I was bringing it up (or, if I may speak for him, IGU either). IGU was noting Go as an example of a problem in AI that seemed like it might take a fairly long time to solve back in 2012-2013, but actually got solved pretty quickly from that point. It’s a good illustration that sometimes a solution to an AI problem that looks really insurmountable might end up being just around the corner.

And I was bringing it up in comparison to chess - another strictly rules-based perfect information game. We had an AI beat one of the best human chess players as early as 1997, but it took another 19 years to accomplish a similar feat in Go. And my understanding (which might be wrong) is that the AI that was able to win top-level Go ended up being created/invented/structured pretty differently than DeepBlue was as well. I’m using it as an illustration that sometimes when it looks like you’ve gotten most of the way towards solving a problem, it might still be a very long way away. It ended up being only about 8 years between computers winning titles in checkers and then to chess, but more than double that - and a pretty different foundation - to go from chess to Go.

We have pretty decent Level 4 AI systems, that can handle highway driving and certain types of suburban driving to a tolerable safety level. We might therefore think that the step from there to Level 5 autonomy might seem close - just figuring out how to do the same sorts of things, but in an environment where there’s a lot more pedestrians and driveways and pigeons/plastic bags and bicyclists and double-parked vehicles and the like. But it might end up being the case that those extra complications present a large enough universe of edge cases that we’re in a chess-to-Go situation - that you can’t get from point A to point B with just a faster/stronger/quicker version of the same type of solution.

2 Likes

I know human drivers that operate at Level 4.

Specific circumstances? No driving at night, or in cities, or on highways, or in the snow…

Geofenced? No driving anywhere new.

I have family members - mature family members - who choose to abide by such restrictions.

(To say nothing about human drivers we have all observed, some of whom would struggle to be rated at Level 3.)

1 Like

The existence proof was true then. All the other things weren’t.

I think we’re pretty close to solving the problem of training neural nets from just a few examples rather than millions. I suspect it just requires a few tweaks to current solutions, but we don’t know what we’re doing so it’s hard to guess how long it will take. That will make every problem, including all the problems of autonomous driving, look different. Many easy problems will become trivial, hard problems will become easy, very hard problems will become solvable, and impossible problems will become very difficult.

Whether this means autonomous driving will then be a solved problem, with just some implementation details to work on, won’t be known until they get there. I’m sure that at some point we’ll start arguing about how autonomy can’t work because it won’t do the right thing at a sobriety checkpoint or when challenged by a border guard. New impossible problems.

I drive a car that mostly drives itself. It’s an interesting experience, but it is nowhere near autonomous yet. It keeps getting better. Slowly.

Have you ever driven such a car? Do you drive one regularly? Or are you just discussing abstractions for which you have no intuition?

-IGU-

I have, in fact, driven such a car. I do not own a Tesla, so not regularly. But I have a close friend who does (and they paid for FSD), and since they’re an enthusiastic fan of the car, they’ve made sure that I’ve driven it on more than a few occasions. Not recently (i.e. not since the beginning of the summer or so), so perhaps not the very most recent version of the software - but close. I agree that it’s a very interesting experience, and one that is both enjoyable and (for a non-regular user) a bit disconcerting.

I suspect not - a car that is capable of handling any driving situation by itself will probably be recognized as a working example of an autonomous car, even if there are a few specific scenarios where a human outside of the car will need to be able to give directions because of their legal authority (i.e. not technically a “driving” situation). We’ll just need the AI driver to be as safe (or safer) than a human driver in driving situations.

No way. It will have to be much better, perhaps much much better. Fails will be headline stories, as they are now. A significant body of positive evidence will have to be established and that will take time. There will be lawsuits against people “not driving” and against companies which encourage people not to pay attention (even as their fine print says “Pay Attention.”) Legislation will be offered, debated, perhaps laws will be passed.

It will take years and years to be widely adopted, not least because of legal issues, but also simply because of price: outfitting a car is going to cost thousands more (not even including the possible liabilities). And, of course, the simple turnover of used cars taking more than a decade…

None of this is about “when it’s possible”, but even when it’s possible that doesn’t mean it will be in reach of most. (My view on when it will be possible - technically - reaches well out; then add much further out for it to become practical and accepted, and much, much further for ‘common.’)

It’s already been almost a decade since Musk started talking about it, mislabeling the option “AutoPilot”, and then backtracking repeatedly admitting that it’s way harder than he thought. I think it’s even harder than that.

2 Likes

Probably. But once it’s as good, becoming much better will happen quickly. And insurance companies will be quick to tell us what the situation is by what they charge for their policies.

Of course. The noise will be endless.

No time at all for Tesla, as they collect the evidence as they go. They publish reports on it too (Tesla Vehicle Safety Report | Tesla).

Of course. Truly autonomous vehicles effectively do not exist until the purveyors of such systems indemnify the occupants of the vehicle against any (reasonable) liability. There’s nothing like that on the road now.

Of course. And much already has. Sufficient so that autonomous vehicles can drive in much of the US at least. Well, that’s what I’ve been told. I have not verified.

The price of outfitting a Tesla to be fully autonomous is $0. All cars are sold with all the equipment. It’s just the software that’s not up to the task yet, and the price of the software is completely arbitrary. And people claiming the current hardware will never work would do well to go look at what happened with AlphaZero: once DeepMind figured out their new approach it needed a smaller and slower computer for a massively better result.

As to legal issues, that will be interesting. Elon Musk has been quite clear that he believes that once the use of autonomous software leads to significantly fewer deaths and injuries on the highways, that it becomes a moral imperative to deploy that software. He has said that Tesla will deal with whatever lawsuits result. So who knows how that will go, but it seems likely that at least one company will be leading the way.

Tesla has also said that they are currently working on a vehicle that will cost about 1/2 of what a Model 3 costs to build, so they will presumably sell it for much less than a Model 3. I wouldn’t expect that until 2025, but it really depends mostly on how the market for vehicles and autonomy plays out.

Once new cars are much more capable than old cars, people won’t be able to get rid of their old cars fast enough. And, of course, many households that today find it necessary to have two cars will be able to get by with one.

Of course one aspect of new cars being more capable is the growing number of cities that will tell you that you can’t bring your old car on to their streets, either at all or (like London: Drivers of cars that pollute pay a fee in London's expanded ultra low emission zone : NPR) not unless you pay a hefty premium every single time.

You’ve piloted a plane on autopilot and a Tesla vehicle and found their “autopilot” capabilities very different? (Hint: it’s not mislabeled, it’s misinterpreted by people who’ve never used an autopilot system).

It’s been eight years since the first Teslas were delivered with Autopilot. I got one built the first week they built them in September, 2014. I had no idea I’d be getting it as they just put it in without announcing it – I got lucky that my order was filled a couple weeks later than they originally said it would be. In fact when I took delivery of the car and asked “What do these markings on this stalk mean?” I was told “We can’t tell you, but you’ll know soon.” But they certainly weren’t saying it would be fully autonomous back then, just do some cool stuff on limited access highways.

The actual problem is that he hasn’t backtracked (much). He keeps repeating that for sure it will work next year. So yeah, Musk has been very wrong about how difficult the problem has turned out to be.

A tepid defense is that estimating the time it will take to solve a software problem has always been pretty much impossible. As a software engineer, it was something I avoided to the best of my ability. One vaguely humorous rubric I remember was: whatever time you think it will take, triple that and then use the next units up. So if you think it will take two months, then say it will actually take six years, and a one day quick hack will really take three weeks. One addition to the rubric was: even taking this into account, it will take longer. And then, for extra accuracy, add in the well known rule that adding people to a late software project will make it later.
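That rubric is mechanical enough to write down. A tongue-in-cheek sketch (the exact unit ladder is my assumption about what “next units up” means):

```python
# The estimation rubric: triple the number, then bump to the
# next larger unit. So 2 months -> 6 years, 1 day -> 3 weeks.
UNITS = ["hours", "days", "weeks", "months", "years"]


def real_estimate(amount, unit):
    """Triple the amount and move one step up the unit ladder."""
    i = UNITS.index(unit)
    next_unit = UNITS[min(i + 1, len(UNITS) - 1)]
    return (amount * 3, next_unit)


print(real_estimate(2, "months"))  # (6, 'years')
print(real_estimate(1, "days"))    # (3, 'weeks')
```

And per the addendum to the rubric: even this will be an underestimate.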

The real criticism I have of Musk in this regard is that he should have known better, and he should certainly know better by now. He should stop saying anything public about autonomy, as in STFU, or just leave it at “we’ll get there when we get there”. Because Tesla will get there eventually.

-IGU-

2 Likes

There’s no way for us to know this. It is, in fact, not a fact. YOUR Tesla likely requires an MCU2 upgrade if you ever want full autonomy. And there is no concrete reason to believe that more recent Teslas won’t also require hardware upgrades for true full self driving.

In my experience, there are often substantial lags in certain cases which indicate to me a lack of sufficient processing power even today without true autonomy.

You’re right that there’s no way to know this for sure until it happens. And it’s possible that it will never happen. But my concrete reason for believing this is that Elon Musk said early on that people who purchased FSD would get the full thing when it was delivered, and if any hardware upgrades were required they would be done free of charge.

They already upgraded my FSD computer from 2.5 to 3.0 for free (and lots of other people’s), so it’s not as though it’s simply blind faith.

And I already paid to get the MCU2 “infotainment” upgrade. There were enough advantages to it that I decided it was worth paying for it now rather than waiting for something to eventually break. I’ve had the upgrade for almost a year now, and I think it was worth getting, partly because Tesla keeps providing new features (not FSD related) that weren’t even thought of when I got my car almost five years ago, and some of them aren’t available with the older non-upgraded hardware.

Yup. We won’t know for sure until we know for sure.

-IGU-

Unless, as you yourself note, it requires new hardware sensors, differentiated from what already exists. And/or new software, unable to run on the current platform. And/or a whole new suite of hardware, OS, sensors, etc. It seems unlikely that the company will be able to retrofit everything they’ve ever sold. Not perfectly parallel, perhaps, but close enough to how Apple won’t retrofit your 1999 iPod to work with iTunes 14.8 because they can’t.

I was involved in the airline industry for several years. I know what “autopilot” means - and does. And yes, I have a friend with a Tesla and have driven it, and no, it does not do what “autopilot” in an airplane does. It has some of the characteristics, but not all. It’s mislabeled, as even the government has decided:

If they had called it “Driver Assist”, I’m sure they would have no trouble, since that’s what it is. It’s not close to “autopilot” in any sense.

1 Like

No, any hardware upgrades needed for FSD (for somebody who bought FSD) should be $0 regardless. That’s what they’ve said and what I expect they’ll do. But, of course, we won’t know for sure until it happens.

Remember, one aspect of Tesla growing deliveries by >50% every year is that most of the cars on the road are newer. My 2017 vehicles with FSD are from a year when Tesla delivered ~103K vehicles, and in 2021 they delivered ~936K. That’s a CAGR of over 73%. So the upgrade burden to Tesla becomes relatively small over time. Not even close to “everything they’ve ever sold”.
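For anyone who wants to check the arithmetic, the compound annual growth rate over the four year-to-year steps from 2017 to 2021 works out as stated:

```python
# Checking the growth claim above: ~103K deliveries in 2017
# vs ~936K in 2021, i.e. four annual compounding steps.
def cagr(start, end, years):
    """Compound annual growth rate, as a fraction."""
    return (end / start) ** (1 / years) - 1


rate = cagr(103_000, 936_000, 2021 - 2017)
print(f"CAGR: {rate:.1%}")  # CAGR: 73.6%
```

73.6% per year, consistent with both the “over 73%” figure and the “>50% every year” claim.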

And, of course, they would only have to upgrade those vehicles for which FSD was purchased, best guess being 10-20% in the US and 1-5% elsewhere. So, relatively small numbers.

Sure. “Involved.” Were you a pilot? Or a liability lawyer? Pretty much any other kind of “involved” doesn’t mean much. But if you do know something, please describe who’s responsible when things go wrong, and under what circumstances. I’m ignorant, having never been a pilot or a lawyer of any kind.

And you drove a friend’s Tesla once (or maybe more)? That certainly makes you more informed than most who post here and pretend to know something. As to your experience, when driving did you use Tesla’s “Autopilot”? What vehicle? What version of the firmware? Was it Autopilot, Extended Autopilot, or Full Self Driving? Capabilities vary.

And in what sense does it not do what an airplane’s autopilot does? In even its basic configuration, Tesla’s autopilot drives nicely in long distance limited access highway situations without much interference needed. Sure, complex and emergency situations require the human to take over, exactly as with airplanes. So please be specific if you’re going to claim it doesn’t measure up.

No, the government hasn’t decided that. The NHTSA is conducting one of its endless attempts to find some problem with Tesla’s systems. They have not concluded anything and have moved from the “should we investigate this?” phase to the “let’s collect some evidence” phase. And it’s not even a “doesn’t measure up” investigation.

As with all of these situations, if you look you’ll find that vehicles that are not Teslas on Autopilot cause far more crashes and mayhem than Teslas on Autopilot. Whether this is also true when adjusted for the relatively small number of cars on the road that are Teslas on Autopilot remains to be seen. My guess (and it’s just a guess at this point) is that they’ll find that Teslas on Autopilot are safer than both Teslas not on Autopilot and cars that aren’t Teslas. We’ll learn more eventually. Maybe. These investigations have a way of just quietly disappearing when the NHTSA gets tired of them.

One problem with this sort of exercise is that if they ever come up with anything it will be long after there is (almost) nothing on the road that looks like what they’ve investigated. Tesla updates its software regularly, often with changes to Autopilot. In the case of my Model S that I got almost five years ago, I’ve accepted 90 software updates.

The NHTSA has forced Tesla to change some things, all for the worse. One was something about custom horn sounds that I didn’t pay much attention to because I don’t have the hardware for that.

The most obnoxious change, one that makes FSD less useful, is that they forced Tesla to not allow “almost” stops at stop signs. For a while, the FSD beta software was driving like a human, doing rolling (2mph) stops at stop signs where there was no traffic of any sort on the cross street; but the NHTSA said that this was illegal and so Tesla couldn’t do it, despite it being the way most humans do it, and it being perfectly safe. What this has meant is that the car now comes to a complete stop even when it’s pointless, annoying any following drivers, so it’s polite to override the system and then reengage it if anybody is behind you. Not a win for anybody.

-IGU-