BusinessWeek has a pretty pessimistic article on the viability and future of AV tech. Distilled into one paragraph:
In the view of Levandowski and many of the brightest minds in AI, the underlying technology isn’t just a few years’ worth of refinements away from a resolution. Autonomous driving, they say, needs a fundamental breakthrough that allows computers to quickly use humanlike intuition rather than learning solely by rote. That is to say, Google engineers might spend the rest of their lives puttering around San Francisco and Phoenix without showing that their technology is safer than driving the old-fashioned way.
Yeah, unfortunately it’s written by somebody who seems to know nothing about autonomous driving. And the “fundamental breakthrough” mentioned is just BS. If you parse it correctly, you’ll see that it’s just obfuscating expert-speak for “we can solve the problem if we solve the problem.” But you don’t see them positioning themselves with the framework necessary to take that fundamental-breakthrough solution and turn it into a product, or working to make that breakthrough themselves. So it’s just an excuse.
The money quote from George Hotz, of comma.ai, about how it’s all a scam, is taken completely out of context. He sure doesn’t think comma.ai is a scam. He doesn’t think Tesla is a scam. What he thinks is reprehensible are all the highly funded companies spending billions on something he thinks can never be economically viable. See the interview here (George Hotz: Fully Self-Driving Cars Are a 'Scam' and Silicon Valley 'Needs To Die').
Mainly, the piece seems to say that the people who haven’t even gotten close to solving the problem are bitter and disillusioned. Amen to that. The people who are continuing to make progress (like Tesla and Mobileye) are doing just fine.
It will be interesting to see how the IPO of Intel’s Mobileye division goes.
I liked the PBS program better. It showed two different approaches to the problem, and revealed just how far we are from actually making it work without human intervention. If you support PBS, you can access it easily. Otherwise you may have to hunt. Look Who’s Driving.
Don’t get me wrong; it’s amazing how well it works. But there are some HUGE problems to solve before it works as well as a human operator. The program explains why.
That product is fascinating - but Comma 2 appears to be operating entirely in the Level 2 space. Indeed, part of its functionality - and the reason it rated so highly compared to OEM models - is keeping the driver engaged and making sure the driver only uses it as a driver-assistance tool, rather than as a step towards self-driving. Watching the interview, it certainly seemed to me that Hotz didn’t think that Level 4 or Level 5 autonomy is attainable any time soon.
I don’t think “fundamental breakthrough” is obfuscating expert-speak for “we can solve it if we solve the problem.” I think it’s a way of saying that the current approach cannot work no matter how much we perfect it - akin to noting that no matter how perfect your zeppelin, you cannot reach the moon with that technology. Or to use a more pertinent example, it’s to note that while you can design a brute-force lookahead program that can win at chess, you can’t do that in the game of go - because go is fundamentally a problem that can’t be solved by pumping more speed and computational power into the machine. So in this context, it’s a claim that no matter how much data you expose your rote-learning neural net AI to, no matter how much data it has processed, no matter how much it has memorized, the approach currently taken cannot “solve” driving - no matter how much you ‘perfect’ that approach.
I don’t know whether that’s true or not, but the slow pace of making the jump from Level 2 to a safe Level 4 certainly suggests the problem is much harder than originally forecast. I suspect, based on his public statements, that Musk genuinely thought his system would be capable of operating robo-taxis by now.
Oh, yeah, Musk has been wrong on autonomy over and over. He should really just admit he hasn’t a clue and then shut up about it. The problem, so far as I can tell, is that software has no “first principles”, no physics. So his primary superpower just doesn’t work. Why he doesn’t recognize that I don’t know. Probably for the same reason he doesn’t recognize his inabilities in other areas – common human failing.
You’re not parsing it correctly. The breakthrough he describes is essentially solving the problem. If we could just figure out how to climb the mountain, we could get to the top. Well, duh. But he dresses it up a bit in engineering speak.
And the truth is that they’re all at various dead ends and can’t start over. Tesla has started over three times now, and may have to start over again. That takes real courage, and is what it takes to succeed. The biggest killer for these guys is that, as Musk has been saying from the beginning, Lidar doesn’t help. It gives you some quick wins in the beginning, but leads nowhere.
You can see that Tesla is making serious progress in that not only did they eschew Lidar, but they’ve ditched radar and now ultrasonic sensors as well. As they have improved vision and object recognition they’ve made more and more possible. Vision has now gotten good enough, I think, to solve all the autonomy problems. So they can concentrate on the rest of it.
Well, yes - but that doesn’t make it obfuscating or engineering speak. He’s just contrasting a fundamental breakthrough with the idea of refining or improving the ideas we’re currently trying. Again, think of the zeppelin. You can’t get to the moon using a zeppelin. Ever. No matter what incremental improvements in zeppelin technology - even achieving a Perfect Zeppelin - you can’t reach the moon using that tech. You need a fundamental breakthrough in propulsion in order to get to the moon.
Of course that fundamental breakthrough - rocketry - is essentially “solving the problem.” But the point he’s trying to make is that the problem can’t be solved by just going further on the road we’re on - and thus may not be solvable until much farther in the future than our partial progress suggests.
It might be tough for Tesla to “start over.” Could they even do that? They’ve already pre-sold the product to a lot of people. They’ve kind of committed to the idea of a Level 5 autonomy product - but perhaps only in their marketing and public-facing press? I wonder if there’s a copy of their Purchase and Sale Agreement (or whatever their equivalent is) on the web - that would contain the specific promises they’re making to a customer who buys the FSD package.
I think the problem is that you have no engineering intuition. I don’t think you’ve ever engineered anything, so it’s very hard to convey to you what’s obvious to an engineer. I’ve tried analogies, but they don’t really do the job.
So, sorry, it’s not like your zeppelin analogy. It’s just BS. That’s not how an engineer actually trying to solve a problem talks. I can’t explain it better. Maybe somebody else can.
As I said, they’ve started over several times already. The first time is when they ditched Mobileye and wrote their own vision software. But, starting over doesn’t mean they take away what they have from everybody and say “wait while we redo this”. What it means is more like there are six separate pieces and they redo one at a time, establishing new interfaces, and gradually migrating over to using them.
Or it might involve setting up a new team to do the new thing while the old thing bumbles along for a while. But then you might run into all sorts of social problems. Who wants to be on the new team doing the cool new thing? Who wants to be on the old team fixing bugs in code destined for the scrapheap?
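The “redo one piece at a time, establishing new interfaces, and gradually migrating” approach described above has a standard software shape. Here’s a minimal, purely hypothetical sketch - the module names, the feature flag, and Python itself are my choices for illustration, not anything Tesla actually uses:

```python
# Hypothetical sketch of gradual migration behind a stable interface.
# All names here (PerceptionModule, LegacyPerception, etc.) are invented.

from typing import Protocol

class PerceptionModule(Protocol):
    """The stable interface both old and new implementations satisfy."""
    def detect_objects(self, frame: bytes) -> list[str]: ...

class LegacyPerception:
    def detect_objects(self, frame: bytes) -> list[str]:
        return ["car"]  # stand-in for the old pipeline's output

class RewrittenPerception:
    def detect_objects(self, frame: bytes) -> list[str]:
        return ["car", "pedestrian"]  # stand-in for the rewritten pipeline

def make_perception(use_new: bool) -> PerceptionModule:
    # A flag (or a staged rollout) picks the implementation; callers
    # never change, so the rest of the system keeps working throughout.
    return RewrittenPerception() if use_new else LegacyPerception()

pipeline = make_perception(use_new=False)
print(pipeline.detect_objects(b""))  # old behavior, same interface
pipeline = make_perception(use_new=True)
print(pipeline.detect_objects(b""))  # new behavior, same interface
```

The point of the sketch is the interface boundary: nobody has to “wait while we redo this,” because each piece can be swapped behind its interface while the other five pieces bumble along unchanged.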
It takes courage, because it usually involves slowing progress on new features, and spending more money than predicted. And admitting you were wrong. So people usually avoid it.
And the more time you let go by pursuing your inadequate solution (i.e., Lidar), the harder it becomes to explain to management. So you don’t. You start talking about breakthroughs - which, if management understood anything, they’d see makes no sense: if you had that breakthrough, you wouldn’t have needed your inadequate solution in the first place.
But is it possibly how an engineer might communicate that a problem can’t be solved with technology we have today? You’re right - I’ve never engineered anything, and don’t know how engineers talk. Which is perhaps why I read that comment differently than you. To me, it scans as saying that nothing we currently know how to do will work to solve this problem, and we have to wait until someone comes up with some completely revolutionary new technology before it can be done.
For example, you’d need a real fundamental breakthrough in a lot of areas before you could mass produce a functional laptop for less than $50. It can’t be done within anything we know how to do today (which is why the One Laptop Per Child effort to make a $100 laptop failed). If it is possible to do it at all, you’d need a massive change in something that reduced the costs involved drastically.
Well, what I meant was whether they could “start over” and take a completely different approach to AI. If it turns out that the approach they (and others) are taking is simply not capable of ‘solving’ Level 5 autonomy, no matter how well it’s executed, would they be in a position to tell their customers that autonomy will have to wait until someone figures out a different way to teach machines how to drive?
BTW, I did a little legal (not engineering!) research, and found out that as a contractual matter the answer is yes. When they sell you FSD, they’re only selling you the services they currently know how to provide (a suite of Level 2 driver-assistance tools) and the promise to update your car as they develop new tools. Even if they never deliver autonomy, they’re probably okay from a legal perspective.
You are correct. After viewing and understanding the program, I am now convinced that without a major breakthrough, the dream of fully autonomous driving will not happen within my lifetime. I won’t say “never”. But not in the next 20-30 years. It’s a much more complex task than most people realize. We do enormous amounts of processing in our heads, very quickly, and utilizing various shortcuts. We aren’t even aware we’re doing it. The present approaches to machine learning and/or big data are unable to replicate that. Which is why the machine didn’t recognize the guy with the pizza box as a pedestrian, for example (it was in the program).
On a side note, TMF just “promoted” me?? Weird. I’m now “trusted”.
I think ultimately, all vehicles on the road will communicate localized position via a network, improving the accuracy of autonomous navigation… Pedestrians too? (albeit the privacy issue comes into play)…
Ooooh! The Thetans! Do level 7s get their own private boards away from us plebes?
By the way, a majority of Americans describe themselves as “conservative”. I don’t know why Xians (the majority in this country) and conservatives (also the majority in this country) seem to think they’re picked-on. Not that either group is monolithic. And self-description sometimes is different from policy positions (e.g. a majority of Americans supported some form of universal healthcare even as conservative leadership did not).
Back to autonomous driving: the networked-vehicles idea is neat, and some form of it may be implemented. But it still doesn’t solve the larger problem. Also, autonomous driving is almost exclusively tested in sunny places (I see the vehicles frequently in Phoenix). They have problems in rain and snow, especially snow, when the cameras can’t see the lane lines. Human operator required.
I think ultimately, all vehicles on the road will communicate localized position via a network, improving the accuracy of autonomous navigation… Pedestrians too? (albeit the privacy issue comes into play)…
I don’t yet believe it can’t be solved incrementally, although I have not read the original cited article. I have engineered solutions to a lot of reasonably complex problems, and my experience is that you are lost in the woods of despair for a long time, and then after a while you go over the top, things get easier, and you know you will get there.
I think the path forward is through “driver assist.” I think we just keep enhancing driver assist which means the driver has to do less and less stuff to keep alive, as automation keeps capturing failure case after failure case and solving them and keeping that result as it further improves.
I am under the impression that Tesla has eschewed using location-specific algorithms, and I suspect this leaves a lot of money on the table. As I drive, on the routes I drive every day I CLEARLY use location-specific knowledge. I know about the big turn up ahead, the light where you can go so far in the middle but have to get in the left lane early to turn, the hills where a driveway pops into view suddenly, etc., etc. Driving a route over and over is like memorizing a piece of music. You play it BETTER, and you still have the ability to “drive” it, play it in different styles, with different emotions, etc.
But I wouldn’t imagine location-specific algorithms are the last missing link that solves everything.
I also question getting rid of sensors, especially if they are not very expensive. Of course we know humans do it without ultrasound or radar or lidar, but so what? Birds get propulsion from moving their wings, our aviation industry is based on jet engines and I don’t think there are any big wins available rethinking that. Nothing in nature uses wheels, I don’t think we have a lot to gain by rethinking that either.
I like eclectic approaches. “Just do it” - don’t have a philosophy of purity or cleanliness or imitation of biology. I don’t have anything to support this preference.
Unlikely. I mean, will we require bicyclists and b1cyclists to have a location transmitter? How about electric scooters? And pedestrians? Even little kids chasing a ball that rolls into the street? If these things don’t have location transmitters, then the cars must be able to “see” them and react properly. Having a few people and cars with transmitters doesn’t really help, IMO.
Welcome b1cyclist! Always nice to hear new voices.
Anyhoo, early on there were lots of thoughts about autonomous cars communicating information about themselves, either directly to other cars on the road or to a network in the roadway infrastructure (and vice versa). But I don’t think anyone’s seriously pursuing that any more. Most of the major names in the field are pursuing vehicles that run entirely on their own and only use “ordinary” types of connections to existing networks (WiFi for OTA communications, GPS tracking, and the like).
No. That sounds a lot more like describing the details of the breakthrough and how it would help. This is what “I was just wrong and I need something to blame” sounds like.
True. Tesla has always been quite clear about that. You get what we have and we’ll try to do better and deliver it to you in the uncertain future. On the other hand, the “what we have” description has changed over time, as well as the description of what they plan to deliver eventually.
Also, at least for the earlier hardware, Elon said that they would upgrade the hardware if it turned out to be necessary when they delivered full autonomy. Of course, they haven’t gotten there yet, so…
And, to be clear, Tesla has delivered lots of improvements over the years. Some of what they mentioned and some just out of the blue. Possibly the best thing about owning a Tesla is that your car keeps getting better. Traditionally, cars are as good as they’ll ever be on the day you get them (except for hardware upgrades you might choose to buy).
One example that just kind of fell out of the FSD work is an optional chime on the relevant traffic light turning green, even when you don’t have the car helping you drive. This means you don’t have to watch the light while you’re sitting there. Nice!
Another example is all sorts of safety stuff, where the car, even one without FSD, takes evasive action when something is about to run into it. Many accidents avoided. Since all the cars have all the hardware, even people who never bought any autopilot features get the safety stuff. And this is superhuman-type stuff.
This is an example of what I said about the problem with your lack of engineering intuition. No, how different the software approach might be is irrelevant. It’s software. The gradual transition methods I described would work just fine. Tesla could develop a central server farm consisting of brains in vats (is that different enough?) to guide the cars, and they could still transition to it gradually.
Since Musk has clearly stated that Tesla will solve the pothole problem soon, there must be some sort of memory coming. And even something so crude initially as “this spot has some sort of issue” allows the car to know it ought to slow down a bit, which is how humans generally deal with iffy situations.
Note that Tesla already has location memory for places where it should adjust the suspension height (for those vehicles with an adjustable height suspension). So it’s already something they do at some level.
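Even the crude “this spot has some sort of issue” memory described above is a tiny amount of code. A hypothetical sketch - the grid size, coordinates, and speed numbers are all invented for illustration, and nothing here is Tesla’s actual mechanism:

```python
# Hypothetical location memory: quantize GPS fixes into coarse grid cells,
# remember flagged cells, and slow down when a flagged cell is seen again.

GRID = 0.001  # roughly 100 m of latitude per cell; coarse on purpose

def cell(lat: float, lon: float) -> tuple[int, int]:
    """Quantize coordinates so nearby fixes land in the same cell."""
    return (round(lat / GRID), round(lon / GRID))

hazards: set[tuple[int, int]] = set()

def report_issue(lat: float, lon: float) -> None:
    """Record that something iffy happened at this spot."""
    hazards.add(cell(lat, lon))

def target_speed(lat: float, lon: float, cruise: float = 30.0) -> float:
    # "This spot has some sort of issue" -> just ease off, like a human.
    return cruise * 0.6 if cell(lat, lon) in hazards else cruise

report_issue(37.77490, -122.41940)         # pothole reported here
print(target_speed(37.77491, -122.41941))  # nearby fix: reduced speed
print(target_speed(37.80000, -122.40000))  # elsewhere: full cruise speed
```

The suspension-height feature suggests the plumbing for this kind of keyed-by-location memory already exists; the sketch just shows how little “memory” is needed before the car can start behaving like a driver who knows the route.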
I’ve seen estimates of all-in cost anywhere from $200-800 per vehicle (I suspect closer to $200, but I know nothing about it really). Plus the space taken up on the line.
And, similar to getting rid of radar, there are major software savings. Sensor fusion is complicated and expensive. And, if different sensors’ perceptions of the world overlap, and they disagree, you need to decide which one to trust more. If they never disagree, then obviously they’re irrelevant. Getting rid of the neural nets associated with ultrasonic sensors is a win. Avoiding their training and testing is a win. Avoiding conflict and confusion is probably a win. Losing heterogeneity of perception is a loss.
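The overlap-and-disagree problem above can be made concrete with a toy example. This is not Tesla’s code - just the textbook inverse-variance way of fusing two estimates of the same quantity, with all the numbers invented:

```python
# Toy sensor fusion: two sensors estimate the same distance, and the fused
# estimate weights each by its assumed reliability (inverse variance).

def fuse(est_a: float, var_a: float, est_b: float, var_b: float) -> float:
    """Inverse-variance weighted average of two estimates."""
    w_a, w_b = 1.0 / var_a, 1.0 / var_b
    return (w_a * est_a + w_b * est_b) / (w_a + w_b)

def conflict(est_a: float, est_b: float, threshold: float = 5.0) -> bool:
    """If the sensors disagree badly, one is wrong -- but which one?"""
    return abs(est_a - est_b) > threshold

# Camera says 40 m (noisier), radar says 42 m (tighter): fusion is easy,
# and the result leans toward the more trusted sensor.
print(fuse(40.0, 4.0, 42.0, 1.0))  # 41.6
# Camera says 40 m, radar says 80 m: averaging is the wrong tool entirely.
print(conflict(40.0, 80.0))        # True -- now you must pick a winner
```

The weighted average is the easy part; the `conflict` branch is where the expensive engineering lives, which is the argument for dropping a sensor whose disagreements cost more than its information is worth.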
My understanding is that the trade-offs have finally gone negative as Tesla Vision has gotten better and better. The “proof” that removing radar improved things is that Tesla safety scores on European official testing have improved on the Teslas that don’t use radar. At least it’s proof enough for me that they’re on the right track.
Software is the bottleneck. The reason we don’t have autonomous cars today is because we do not have a software program that can drive by itself.
We can’t write that program directly. We know that. It’s too complicated for a bunch of humans to sit down and actually write software code that’s complex enough to be able to drive.
We think we can indirectly write that program using other tools, most notably machine learning. We could never write a software program that could drive, but we can write a program that can “learn” to drive by engaging in a gazillion hours of practice and study.
But what if that doesn’t work? What if it can’t work? What if driving ends up being a task that can’t be resolved by machine learning? If that ends up being the case, then we won’t have autonomous driving until there is a “fundamental breakthrough” in creating software - a new way of creating software that is qualitatively different from how we’ve created software in the past. Just because it’s software, rather than hardware, doesn’t make it any less of an engineering problem, or any less real an issue to autonomy than getting the cameras right.
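The “write a program that learns to drive from a gazillion hours of practice” idea above is behavior cloning: fit a policy to human demonstrations. A toy one-parameter version - the demonstration data and the single-gain linear policy are invented for illustration; real systems fit deep networks over camera input, not one number:

```python
# Toy behavior cloning: learn a steering policy from human demonstrations.

# Demonstrations: (lane offset in meters, human steering command).
demos = [(-1.0, 0.5), (-0.5, 0.25), (0.0, 0.0), (0.5, -0.25), (1.0, -0.5)]

def fit_gain(data: list[tuple[float, float]]) -> float:
    """Closed-form least-squares fit of steer = gain * offset."""
    num = sum(x * y for x, y in data)
    den = sum(x * x for x, y in data)
    return num / den

gain = fit_gain(demos)  # -0.5 for this data

def policy(offset: float) -> float:
    """The 'learned driver': imitates whatever the demonstrations showed."""
    return gain * offset

print(policy(0.8))  # steers back toward center, like the humans did
```

The worry in the paragraph above is exactly the gap this sketch makes visible: the policy can only interpolate what the demonstrations cover, and no amount of extra `demos` guarantees it handles the guy carrying a pizza box.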
“Brain in vat” would be a very different way of solving that problem - but that runs into the issue that Hotz (quite correctly, though humorously) raised. We already have autonomous driving. It’s called an Uber. For a self-driving car to be a real product, it has to be non-trivially cheaper than the Uber alternative. A chip and software could be much cheaper than an Uber driver; it’s not at all clear that a “brain in a vat” system would be.
I’m sorry, albaby1, I don’t even know where to start. Your confusion runs so deep that unless you actually learn something about engineering, I don’t see how to even begin to give you a hint.
Well, maybe if you get rid of all those uses of “can’t”. There is no “can’t” in this area, because there is nothing like a physics of software. There’s only “don’t know how yet”. And, in any case, it’s fairly obvious that if we put a humanoid robot behind the wheel, and we achieve artificial general intelligence (AGI), then we’ll have non-human autonomy. So it’s also obvious that an AGI car would be autonomous. And pretty much all the experts in the field think AGI is at most 50 years off, and some think less than ten. So all your “what if we can’t?” stuff is not at all what anybody in the field is thinking. Which is not to say that it has to be wrong, just that it’s not how experts think.
And, of course, even if we achieve only narrow artificial intelligence, it’s pretty clear that most of the work of both lawyers and engineers will be going away. A bit of work for the best of the best might be left for humans.
I’ll also argue that, since pretty much any idiot can drive and few can be lawyers, a driving AI won’t come long after a lawyer AI - but maybe not. Either way, both are coming.