Developing driverless cars has been AI’s greatest test. Today we can say it has failed miserably, despite the expenditure of tens of billions of dollars in attempts to produce a viable commercial vehicle. Moreover, the recent withdrawal from the market of a leading provider of robotaxis in the US, coupled with the introduction of strict legislation in the UK, suggests that the developers’ hopes of monetising the concept are even more remote than before. The very future of the idea hangs in the balance.
It’s now clear the autonomous vehicle revolution was overhyped.
A couple of caveats for those going apoplectic over the headline: I mean self-driving isn’t going to be a thing A) in our lifetimes and B) at any kind of omnipresent scale. So in terms of the daily lived experience of most people reading this, truly autonomous vehicles just aren’t going to happen. The evidence pointing to this has been mounting for years now, if not decades, but it has now tipped the balance to where it’s hard to ignore for a reasoned observer — even one like myself who has previously been very optimistic about self-driving prospects.
Myself, I’ve lost all hope that self-driving consumer cars will ever happen. And I’ve stopped thinking I’d find any benefit in them, with the exception of long road trips, which we take less than once a year (and for which I could always fly instead).
There are lots of places this tech does make sense and is already in use: self-driving agriculture, mining operations, warehouses, etc. But Joe and Jane Commuter on public roads, driving to work and the grocery store? Nope. We need to stop wasting money on this pipe dream.
Well, then. Shall we blame EVs for the software problem a couple of years ago on the 737 MAX airplanes that went out of control and crashed? That is just as related because they both used software. By that logic, the little batch file I wrote a couple of years ago might be to blame, too. After all, that is also software.
Someone asked about the police pulling over a self-driving car.
I suggested they should be programmed/taught to pull over in that setting.
You noted that a car without self-driving capabilities went out of control.
I still fail to see the connection. So I’ll ask you the same questions.
Are you daft? Or do you just like interrupting the adults when they’re having a conversation you don’t understand?
You’d need both, but let’s say El Guapo has escaped and he’s making a run for the border. Much cleaner and easier to simply kill him than to disable his car and try to capture him. We kill people remotely all the time anyway. There are buildings in Nevada filled with people operating drones 24/7 doing nothing but looking for people to kill.
Inside the US you’d probably need a court order first but compliant legislators could be induced to write legislation to the effect that in emergency situations AI can destroy a vehicle if the AI determines that would minimize risk of human injury. Then define border jumpers as a human injury risk that must be eliminated. And there is always a backdoor to these things. You could tell the AI your political opponents are injury risks and have them blown up. Most governments would mandate a device like this.
Certain jurisdictions could ban devices like this, but manufacturers won’t build special vehicles for small niche markets. They will install the self-destruct system but disable it at the factory. And the AI will figure out a backdoor that allows it to be enabled.
Not at all. Divitias asked how the police would pull over an autonomous vehicle. Then Neuromancer asked why they would need to, as such vehicles are programmed to obey traffic laws. There were three or four examples given where the police might need to, including the example of a rogue Tesla. The Tesla clearly wasn’t designed to go rogue. An autonomous vehicle also wouldn’t be designed to go rogue – but it could happen. Which raises the original question: how would the police handle that situation (or one of the other ones mentioned)?
All seems like a reasonable thread to me. Why be insulting?
A limited EMP would work. The only problem is limiting the area of effect.
Design in a “dead-man switch” equivalent that disables the battery connection: not an electronic switch, but a physical switch triggered to open if < something > happens. No power = the car stops and cannot move until the battery connection is re-enabled.
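The switch proposed above is explicitly physical, not software, but the interlock logic itself is simple enough to sketch. Here is a minimal toy model of that logic in Python; the `BatteryInterlock` name is mine, and the trigger condition is left abstract to match the "< something >" in the post:

```python
class BatteryInterlock:
    """Toy model of a physical dead-man switch in series with the battery.

    Assumption: the trip and reset actions stand in for mechanical events,
    not software commands -- the point of the design is that software
    cannot override an open switch.
    """

    def __init__(self):
        self.closed = True  # switch closed = battery connected

    def trip(self):
        # The physical trigger condition opens the switch; power is cut.
        self.closed = False

    def reset(self):
        # Re-enabling requires a deliberate physical action at the switch.
        self.closed = True

    @property
    def has_power(self):
        return self.closed


interlock = BatteryInterlock()
assert interlock.has_power       # switch closed: car drives normally
interlock.trip()                 # trigger condition occurs
assert not interlock.has_power   # no power: car stops
interlock.reset()                # physical re-enable required to move again
assert interlock.has_power
```

The design choice worth noting is the one the post makes: because the switch is mechanical and normally closed, a tripped state fails safe (car stopped) and cannot be cleared remotely.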