No, they’ve gotten them to where they are now, which is nowhere close to actual robotaxis.
You have no idea what regulators will want with regard to true autonomy. We’re nowhere near yet.
Tesla is on a path to vehicular autonomy. Not sure about everybody else, but Waymo and Cruise are nowhere near useful; they’re making progress along a path to nowhere.
High definition maps are obviously a dead-end because the world changes constantly. In the end an autonomous vehicle must perceive the world around it, understand the relevant things about it, and react appropriately. In general, maps can help, but they can’t be the final word on anything because they’re always out of date.
So, you can make significant progress by faking it with HD maps and simply failing when the world has changed, but to actually solve the problem you have to perceive and understand the world. But once you solve that, you can throw away the HD map because you don’t need it any more. So, in reality, HD maps are a crutch and a distraction from solving the real problem. Great if your aim is faking it for PR purposes, but useless if you want autonomous vehicles.
Note: low definition maps are useful for routing and setting general expectations, but not useful for detailed path-finding.
Well, make up your own example that shows real autonomy. I think it needs to cover me sleeping in the back seat, and people who can’t drive getting from here to there. And not sort of maybe part of the way, selecting from a very limited area.
How about what’s going on in San Francisco right now? Today, you can take a driverless robotaxi between most points within the city - Cruise’s coverage area includes nearly the entire city limits. You can sleep in the backseat if you like.
Certainly it’s true that Cruise and Waymo (and others) are pursuing a different strategy than Tesla, both technologically and economically. They’re aiming for a Level 4 system - a car that is fully autonomous within limits. That’s all the autonomy that a robotaxi service needs. You’re not selling the car, you’re selling trips. If the trips are only for sale within metro areas, that’s more than enough coverage to support a TaaS business (if TaaS actually makes any economic sense). Or for a subscription autonomy service (more on which below). A service doesn’t have to be available everywhere to be viable.
Tesla OTOH is aiming for total Level 5 autonomy - a car that can drive itself virtually anywhere, without limitation or restriction. That way they can sell the car, not just trips. Unlike selling a service - which doesn’t have to be available everywhere - consumer expectations about a product like a car are that it will work anywhere they take it.
The issue for Tesla is that Level 4 autonomy appears closer to being solvable with current technology than Level 5. All autonomy is moving more slowly than folks had originally forecast (or hopecasted). But there are actual real world deployments of Level 4 autonomous vehicles today. Level 5 still doesn’t exist. We still don’t know whether Level 4 TaaS systems make economic sense - but at least we know they’re technologically possible at present. We don’t yet have that for Level 5.
Now, TaaS might not make sense economically. There are some real issues with the business and operational model of robotaxi services. But another alternative to Tesla’s vision is privately-owned Level 4 vehicles - where the vehicles have full autonomous capabilities, but are geofenced to cities and the routes between them. The car is capable of being driven anywhere (it still has a steering wheel and mirrors and what have you), but will take over and self-drive whenever it’s inside the geofence, if you’ve paid to have that feature activated. That’s where Mercedes (the only company that has Level 3 tech) thinks the tech will be in the intermediate term.
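As a toy illustration of the geofenced-subscription idea, here is a minimal sketch. The fence coordinates are just a crude rectangle I made up (not anyone’s actual coverage area), and a production system would use proper geodetic tooling rather than this flat point-in-polygon test:

```python
def point_in_geofence(lat, lon, fence):
    """Ray-casting point-in-polygon test.
    fence is a list of (lat, lon) vertices; returns True if (lat, lon)
    falls inside. Purely illustrative - real systems buffer edges and
    account for the Earth's curvature."""
    inside = False
    n = len(fence)
    for i in range(n):
        lat1, lon1 = fence[i]
        lat2, lon2 = fence[(i + 1) % n]
        # Does this edge straddle the point's latitude?
        if (lat1 > lat) != (lat2 > lat):
            # Longitude at which the edge crosses that latitude
            cross = lon1 + (lat - lat1) * (lon2 - lon1) / (lat2 - lat1)
            if lon < cross:
                inside = not inside
    return inside

# Crude made-up rectangle roughly around San Francisco
sf_fence = [(37.70, -122.52), (37.70, -122.35),
            (37.83, -122.35), (37.83, -122.52)]

def autonomy_available(lat, lon, feature_paid):
    # Self-driving engages only inside the geofence, and only if the
    # subscription is active; otherwise the human drives.
    return feature_paid and point_in_geofence(lat, lon, sf_fence)
```

Outside the fence (or with the subscription inactive) the function returns False and the human drives; inside, with the feature paid for, the car takes over.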
Now, with us in agreement on the above, consider two scenarios in which a sequence of driving decisions is made and sensor data are also recorded. The time intervals can be as small as we like within the constraints of Autopilot hardware and software.
Scenario 1: “Shadow Mode”
Time1 Human makes decision(s) that control the vehicle, Autopilot makes decision(s) in parallel that are recorded, sensor data are recorded
Time2 Human makes decision(s) that control the vehicle, Autopilot makes decision(s) in parallel that are recorded, sensor data are recorded
Time3 Human makes decision(s) that control the vehicle, Autopilot makes decision(s) in parallel that are recorded, sensor data are recorded
…etc
Scenario 2: “Autopilot Mode”
Time1 Human makes decision(s) that control the vehicle only to avoid incident, Autopilot makes decision(s) that control the vehicle unless human intervenes, sensor data are recorded
Time2 Human makes decision(s) that control the vehicle only to avoid incident, Autopilot makes decision(s) that control the vehicle unless human intervenes, sensor data are recorded
Time3 Human makes decision(s) that control the vehicle only to avoid incident, Autopilot makes decision(s) that control the vehicle unless human intervenes, sensor data are recorded
…etc
(incident would be an accident or violation of traffic law or some other issue requiring immediate human action)
In Scenario 1, all data recorded are 100% dependent on the human decisions.
In Scenario 2, all data recorded are 100% dependent on the Autopilot decisions, except in trips in which an incident must be avoided.
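For concreteness, here is one hypothetical shape such per-interval log records might take. The `LogRecord` type and its field names are my invention for discussion purposes, not Tesla’s actual telemetry format:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class LogRecord:
    t: float                     # timestamp of this decision interval
    sensor_frame: bytes          # recorded sensor data for the interval
    human_action: Optional[str]  # the human's control decision, if any
    autopilot_action: str        # Autopilot's decision (executed or shadow-only)
    controlling: str             # who actually controlled the vehicle

def shadow_record(t, frame, human, autopilot):
    # Scenario 1: the human always controls; Autopilot decides in parallel.
    return LogRecord(t, frame, human, autopilot, controlling="human")

def autopilot_record(t, frame, autopilot, intervention=None):
    # Scenario 2: Autopilot controls unless the human intervenes
    # to avoid an incident.
    controlling = "human" if intervention is not None else "autopilot"
    return LogRecord(t, frame, intervention, autopilot, controlling)
```

The distinction shows up in the `controlling` field: every Scenario 1 record is human-controlled, while Scenario 2 records are Autopilot-controlled except at interventions.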
To conduct machine learning, a not unreasonable algorithm for discussion purposes is:
1. Collect data under Scenario 1 or 2 using the current version of Autopilot software.
2. Train Autopilot software using data from Scenario 1 or 2.
3. Update Autopilot software in the vehicle with the learnings from Step 2.
4. Iterate Steps 1 to 3 until the desired level of autonomy is achieved.
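Under those assumptions, the collect/train/update/iterate loop could be sketched like this (every function name here is a placeholder I made up for illustration, not any real Autopilot interface):

```python
def iterate_until_autonomous(model, collect, train, deploy, evaluate, target):
    """Sketch of the Collect -> Train -> Update -> Iterate loop.

    collect(model): gather fleet data under Scenario 1 or 2 with the
    current software; train(model, data): return an updated model;
    deploy(model): push the update over-the-air to the fleet;
    evaluate(model): score the level of autonomy achieved so far.
    """
    while evaluate(model) < target:
        data = collect(model)       # Step 1: collect data with current software
        model = train(model, data)  # Step 2: train on the collected data
        deploy(model)               # Step 3: update the fleet with the learnings
    return model                    # the loop itself is Step 4
```

With stub functions standing in for the real thing, the loop simply repeats until `evaluate` reaches the target level.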
My main point is: Scenario 1 is not sufficient to achieve a sufficiently high level of autonomy.
Or, stated in other words (which I was trying to communicate in my first post): there is a level of autonomy too high to be achieved using only data collected from Scenario 1, because all data in Scenario 1 result from human decisions, whereas the very definition of autonomy is that the current state of the vehicle results from a history of decisions made solely by the software.
If you believe any statement above is not true (or at worst not a very reasonable premise for this kind of nontechnical forum), then kindly explain why. I am not claiming perfect knowledge here, but the above makes sense to me.
I work for Nvidia, and we are obviously putting a lot of energy and effort into solving the autonomous car problem. And I see great applications in many areas. But I’ve yet to be convinced I want Level 5 in my personal car. I just don’t see driving as being a task so difficult that I want to offload that to a machine. Driving myself to the grocery store is just not something I think I need a machine to do for me.
It’s been said only 10-15% of Tesla owners pay for the soon-to-be-released software. I wonder how many would have paid for it if it was available immediately rather than in the future. In other words, what is the demand, really, for a self driving car for most people?
It’s probably no great insight to say that it depends what it costs and how much it improves safety.
If it’s not too expensive and dramatically improves safety over human driving, then I think adoption will be really strong - and you could see a pretty quick push to self-driving being mandatory for new cars (either by regulation or litigation). And to the contrary, if it’s very costly and there’s not a massive safety benefit, you’d see that happening more slowly.
Plus, I think it depends on whether MaaS works economically or not. If it does, we might see autonomy adoption move much more quickly.
A busy mom with two small kids would LOVE a self-driving car to safely take them wherever they need to go (within reason). So would pretty much ALL the small-package delivery companies. Plus grocery stores, retailers, and so on. Low cost delivery fees level the playing field for most companies because there is no real savings if you DIY.
Good description of your reasoning. I think this sentence marks where you go wrong. And it sounds like you go wrong because you’ve likely never worked directly on this sort of thing.
Firstly, there’s no simple way to describe what data is needed to achieve autonomy, never mind any particular level of it. Nor does it matter where the data comes from. Some people are trying to do it entirely by simulation. Tesla uses both real world data and simulation for training according to presentations at conferences and at Tesla’s Autonomy Day.
Secondly, it really matters not at all who was deciding what the path was to get to the current state. The state is what it is, the path was what it was, and a decision as to how to proceed needs to be made.
For the record, I haven’t worked on self-driving software either, so take my words for what they’re worth. I’ve read about machine learning and neural nets quite a bit (since as long ago as the '80s), and I’ve got a career in software behind me, but I have no deep understanding at this point. The progress in the last decade has been truly stunning.
Level 4 and never getting beyond that. They might make a business out of it, but their approach means that it will probably be too expensive and limited to be viable.
And it’s also the case that Tesla’s approach, if it eventually works, will put those companies out of business immediately. Tesla’s approach is structurally far cheaper. And since nobody knows when Tesla might suddenly improve to Level 5, how much can other companies risk on growing such a business? Tesla has a couple of million vehicles out there that have good-enough hardware and can get updated software overnight with no cost or effort. Risky!
Now whether Tesla can make any sort of business out of a robotaxi service is a different question. I’ve seen detailed analyses that conclude it would be incredibly profitable, simply printing money. And I’ve seen others that conclude it would start out somewhat profitable and quickly devolve to break-even at best. Me, I have no real opinion other than that it would be a nice option to have.
There are many people who can’t or won’t or shouldn’t drive, and would love to own or rent a vehicle in which to get around. Start with the old and infirm, expand to children, and the audience is huge. So it’s clearly a desirable product at the right price.
I would be very surprised, given Hertz’s forays into the Tesla rental business, if they haven’t discussed how to expand the offering once the vehicles achieve autonomy. It could happen quite fast.
You’ve alluded to this a few times, but I genuinely don’t understand the argument. Tesla is aiming for Level 5 autonomy that is (almost) entirely vision-based, and that’s how humans drive. But there’s no obvious reason why that has to be where autonomous driving ends up for the next ten or twenty years.
If Level 4 with radar and ultrasonic sensors and LiDAR works for nearly everywhere within a metropolitan area, then that might be good enough for widespread adoption across the country. All those people who can’t or won’t or shouldn’t drive won’t care (for the most part) that they can’t take a robotaxi off-roading. If they can get to all of their doctor’s appointments and soccer team practices (which will all be inside any reasonable geofence), why would they care that the vehicle is doing that with maps and LiDAR instead of nearly-pure vision?
Are you saying zero data from Scenario 2 are required to achieve a sufficiently high level of autonomy?
And maybe more importantly, sufficiently high autonomy could be achieved on as fast a timeline even with zero data from Scenario 2?
(By sufficiently high I mean, we could keep raising the degree of autonomy we seek until data from Scenario 2 are required to achieve that degree of autonomy.)
Because (and I have described this before), high definition mapping is required for these systems to work at all.
This is very expensive to do in a changing world. And that’s in addition to the underlying technology being expensive.
A business can’t get away with “Only the first of our cars that encounters the open manhole will break an axle and get stuck in the opening! After that the others will avoid it.”
But unexpected changes in the world happen all the time. Sometimes a kid will crash a bicycle and fall down in the middle of the street. It isn’t just the weird edge cases like a kid at Halloween dressed in a STOP sign costume crossing the street. Any system that required HD maps will never get cheap.
True - but that’s why they don’t rely solely on HD maps. They have real-time sensors on them as well: visual, radar, ultrasonic, LiDAR, and others. They can “see” objects and obstacles in the road, even if they’re not on the HD maps. That’s how they’ve been able to actually operate for several years now. They’re not crashing into anything that wasn’t already on the HD map.
It works. Not perfectly, of course. And until Cruise and Waymo start deploying some purpose-built vehicles, they suffer some of the same problems that would bedevil the as-yet-theoretical Tesla Network - since the car can’t close its own door, a passenger who fails to close the door behind them paralyzes the car. But there are about a million people in San Francisco who now have access to fully self-driving vehicles that can take them almost anywhere in the city. The elderly, the children, the ones who can’t or shouldn’t drive - they have access to robotaxis today. And they’re not busting their axles in any open manholes…
Certainly true. I might only have a choice of 30 pumps instead of the 60 that are within two miles of my house. Oh, the horror.
Great. No radar. Useless. OK, now: new radar. Genius!
Transcript from regulatory meeting: “Don’t worry about it Don. Some guy on the internet says it’s OK, and he actually drives a Tesla.”
Sure. Because of those regulator things. Maybe the cars so equipped can run “manually” outside the geofence, and quickly assemble the information needed to drive themselves elsewhere? I recall the first internet maps were pretty crude, but they quickly got up to speed. Heck, Google drove up and down every street in the country (supposedly) with a few cars. What would a fleet of (users’) cars provide? Just record, uplink, then downlink to a particular vehicle once it moves from zone to zone. I can update my GPS in my garage, and with computing power doubling or tripling every time I turn around, maybe your car’s “geofence” will be, say, “California”? It’s a different approach. Way too soon to say it can’t work. Remember the cool way MySpace took over the world? No? Me neither.
Back to the regulatory meeting:
“No, really, I’ve used it. Therefore what I say goes. Pay no attention to the defects in the dataset, or how it’s unrepresentative.”
“Well, you’ve convinced us. It’s unanimous then, right guys?”
My understanding (perhaps wrong or out of date) is that they do rely on them to be Level 4. Without them they’re Level 3, and so they have to be monitored by a human. So they fake it: as soon as they become aware of construction or a problem on a street, they solve the problem by removing that street from their potential routes. But that doesn’t solve the problem of being the first to discover the problem. I suppose they handle that by being ultra-cautious at any deviation from the HD maps, but I don’t really know. Maybe that explains why we regularly hear about mysterious jam-ups of these not-really-autonomous cars in San Francisco.
We’re discussing a system that neither of us has ever used, so our opinions aren’t worth much.
It works well enough to experiment with in off-hours at low volume. It doesn’t work well enough that anyone is even imagining it can make money.
Actually, a Model X can close its own doors. Definitely an awesome feature that I wish my Tesla had. But sure, anybody can actively disable a car just by standing in its way. Level 5 requires that an autonomous vehicle can drive where and when a human can drive, not in situations where no human could. I’m not at all sure what’s going to be done to deal with active countermeasures to autonomous vehicles. Perhaps notifying the cops, with video of the problem, will mitigate that concern.
Not really. The cars can sometimes take them to varying places in the city depending on conditions. It might be good enough to experiment with, but it’s nowhere near enough for a profitable paid service. And I don’t think it can be, because of the cost of these systems.
Please don’t act the fool. Old radar, not good enough. No radar, better (as shown by Europe’s testing). New radar, probably much better or why would they bother? Tesla relentlessly cuts costs, so adding a new radar for no good reason would be way out of character.
You are really insisting on playing the fool. There are no meaningful regulatory meetings regarding autonomy at this point. Nobody has anything to regulate at the detailed level. The regulations are all vague handwaving, like autonomous vehicles having to usually obey the law. Of course that means they don’t drive like humans, which is a problem, so it all gets vaguer.
No, they can’t. They require high definition maps so they know where drivable road is. That means the geofence regularly gets smaller on a real-time basis as they are informed that something has changed on a road previously mapped. Maybe there’s construction, maybe a pothole. But surprises like that make the road fairly undrivable for these systems because they rely on the accuracy of the maps.
Increasing the area covered is expensive, much more than the old low-definition Google maps. So the geofenced areas stay small, at least during the testing phase. I don’t know when that will end.
Okay, not just playing the fool, but playing the obnoxious fool who pretends that specific statements can be taken out of context and generalized into dimwittery.
So, no, I believe that having actually used FSD I can say that Tesla’s safety numbers are not misleading, but I don’t believe I know everything about everything they might want to regulate.
But I certainly believe that I know way more than you, who apparently knows nothing about the engineering of vehicular autonomy, has never tried Tesla’s FSD, and has never even experienced anything claiming to be an autonomous vehicle. Does that about cover your lack of credibility?
Please stop pretending to be stupid. It doesn’t suit you.
Sure, in the same way that Teslas rely on their cameras to be Level 2 (and will rely on them to be Level 5, perhaps, one day). The HD maps are part of the toolkit that the cars use to navigate the physical world - along with vision, ultrasonic sensors, radar, and LiDAR.
I think you misunderstand. I wasn’t talking about deliberate efforts to thwart autonomous taxis. Rather, just the sort of routine mistakes that users might make that are trivial to correct in regular taxis, but cause major problems if you try to have ordinary cars serve as robotaxis. That’s why Cruise has developed purpose-built robotaxi vehicles, and probably why Tesla is as well.
Failing to properly close a door behind you as you leave is one such problem. If a passenger inadvertently doesn’t close their door properly as they leave, the car ends up being stuck. It can’t close its own door (except the Model X, as you point out). Cruise is going to use purpose-built vehicles that have dual sliding doors on both sides, so that passengers don’t have to operate them and can exit easily on either side of the vehicle - an advantage over using a car designed for human drivers for a robotaxi:
The geofenced area for San Francisco now extends to almost the entire city limits.
I think you’re overstating the weaknesses of these systems. Even my smartphone, using Waze, can tell me when a road is closed due to construction and direct me to another route. And their onboard sensors can detect potholes or other obstructions in the roads that aren’t on the maps - again, they need to be able to “see” cars and pedestrians and countless other physical objects in their environment. They’re not just driving solely by map, utterly baffled by the presence of even a single deviation.
I am able to read and process. More to the point, I don’t dismiss everything that finds fault with a company, nor swallow whole the hagiography that infests the Tesla fan-boi set. In that way I am able to make distinctions and evaluate critically. Your tactic is to tell everyone who disagrees that they’re ignorant and that you are the fount of received wisdom. I don’t think that plays as well as you think it does.
No, you’re really not in the case of Tesla. You have no “ground truth” so you’re unable to evaluate whether what you’re reading is true or just a pile of BS. So you just rely on reputation.
I’m sure you’re familiar with the Gell-Mann Amnesia effect.
As it turns out, most of what appears in the popular press about Tesla is just nonsense. And sources which have typically proven fairly reliable (to my mind anyway) in the past, such as the NYT, LA Times, Wall St. Journal, etc. are almost always laughably off base to anybody who is familiar with Tesla the company or Tesla’s vehicles.
So, when the sources of information you rely on are feeding you BS, how do you think you are able to “make distinctions and evaluate critically”?
As I’ve said many times before, if you want accurate information about Tesla, look to documents filed with the SEC and court depositions made under oath. Quarterly conference calls can also be trusted. But it’s hard to trust even direct quotes from Tesla executives, because if you read them in the press they are usually taken out of context and sometimes misquoted entirely.
Since you’re not an avid Tesla watcher, I assume you don’t go to primary sources for your information, thus you are almost certainly misinformed. Sad, but true.
And no, I don’t know how to quickly find accurate information on any particular Tesla subject from non-Tesla sources. It’s usually bits of truth buried in an avalanche of nonsense.