Again, we have to highlight where things stand today. Waymo has partnered with Toyota and Hyundai (and probably some others) to equip its Driver on their vehicles. Both of those companies are much larger than Tesla. And Waymo is currently partnered with Magna Steyr, which can build whatever brand you want. So if we look into the future a bit, it isn’t hard to see how Waymo could scale rapidly with its manufacturing partners.
Except the cost per vehicle is still likely to be quite high - higher than the cost of a Model Y, maybe double.
And even without regulatory approval, the better Tesla’s FSD L2 offering gets, the more sales and subscriptions it’ll generate. In the meantime, no one would buy a Toyota loaded with lidars and other sensors that still requires permits, regulations, etc. Yes, the same is true for eventual Tesla robo-driving, but Tesla has a path to get there in the meantime without regulatory hurdles.
Okay, throw out all FSD drives that have a disengagement. How different are non-disengagement FSD drives compared to robotaxi drives? (It’s a real question; I don’t know how different the self-driving stack is for each of the two.)
I don’t think this is the case. Sometimes FSD can’t handle a situation, so it pauses slightly, and then figures out how to handle it. It is VERY rare that it throws up the red hands and makes the driver take over. In fact, in the last few versions of FSD, I’ve only experienced the red hands once in 3+ months. For example, sometimes FSD can’t quite handle pulling out of my driveway; it seems to think there’s an obstruction (maybe the neighbor’s trash cans that aren’t really in the way). If I am in a hurry, I intervene and push the accelerator, and the car inches forward and then drives. But a few times, when not in a hurry, I did nothing at all, and after a while, maybe 15 seconds, the car inched forward a bit, and then figured it out. Maybe it inches forward so it can get a slightly different “view” of the surroundings to help it more accurately determine distances to things it sees? I don’t know.
Yes, the pick up and drop off is a little different, however we’ve used FSD close to that way a few times. For example, two weeks ago I picked up one of my kids from the airport. I had FSD drive me all the way to the terminal, but I stopped it myself as soon as I saw the kid waiting at the curb. Then we did FSD all the way home right from that location. It was just an experiment; usually I disengage FSD when entering the terminal area (because I can be a bit quicker navigating the vagaries of airport terminal traffic), but this time I was very early so I let it do its thing (slowly).
No clue. They apparently are using a different iteration of the software for the robotaxis than that in individual cars. There’s no way to know at this point what the differences are, either in capabilities or the ‘tuning’ of the systems.
That’s not that rare. I don’t know your driving habits, but presumably that’s a fail rate that’s occurring every few thousand miles, and by someone who is self-selecting whether to have FSD in the first place and when to use it. A car like that needs to have a monitor, which limits scale opportunities. So that’s exactly what an AV developer needs to solve for - how to get the car to do the right thing in those situations. That’s the critical data, at this point in development: what does the car do when it is confused/blinded/unable to proceed?
It’s something that FSD users do very infrequently, but something that a robotaxi does on every trip - allow a passenger to enter the vehicle from something other than fully parked with the engine off, and manage a drop-off under the same circumstances. The airport is actually one of the easier places to do the pick-up/drop-off (PU-DO), because the entire traffic circulation infrastructure is designed with that in mind and nearly all other vehicles are doing the exact same thing (the rest of the airport is a challenging environment for drivers, but the PU-DO area is comparatively accommodating). Figuring out what to do when a passenger is mid-block in an office area with ongoing traffic and no available off-street parking is a different challenge, and one that normal FSD provides very little data on.
Yep.
And I’m still waiting for an answer to this question.
When and how does this data advantage materialize?
Apparently we are not seeing it this week, nor in any of the weeks that came before.
But, again, I’m open to persuasion.
I don’t think the data advantage is the big deal, although it’s something. The really big deal is if Tesla can pull off the robotaxi thing with standard, stock vehicles, of which there are already millions on the street. That’s “instant scale”, unlike every other provider, which would have to build its network brick by brick.
I have my doubts, but weirdly one of the things that convinced me it might be possible is the Chinese using “not the best chips” and still producing a competitive AI. (I also have doubts that the current software/chips in a random Tesla would be up to the challenge, but since I don’t know the full capabilities I can’t really comment.)
There are obvious holes in the Tesla data set (picking up passenger in odd locations) which Waymo doesn’t have. OTOH, Tesla has millions more miles of possible “edge cases” which Waymo doesn’t. So which is easier to fill? Dunno, not my job, man.
This might be fine for an early adopter, who is used to such glitches in early iterations of things, but in a general consumer market this sort of thing would inspire, let’s say, a lack of confidence and could be a serious detriment to rapid acceptance. Think about the many videos from “Tesla’s first day”, and multiply it by a million, including riders’ personal experience.
There’s a lot of work yet to be done.
Waymo cannot generate profit. The tech cannot scale. It is too complex with too many sensors and too manual.
Waymo and Boston Dynamics are cool science lab experiments that have run their course. They should shut down the business. Join the graveyard with the Apple Car, Google Glass, the Metaverse, and Apple’s Vision Pro.
It would be interesting to know more about Tesla’s lidar approach.
How detailed is it?
Like you say, is this required for each locality?
Every street?
The more detailed the lidar data needs to be, the weaker the case for Tesla having a data advantage from its camera-only retail fleet.
But we just don’t know.
If they can successfully correlate camera with lidar data, as explained upthread in several different posts, then they can still retain the manufacturing benefit of camera-only - if, of course, the camera-only vehicles can be successful autonomous AI drivers.
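For what it’s worth, the usual way that correlation is done is well established in computer vision: calibrate the lidar-to-camera transform, then project each lidar point into the camera image so every pixel the camera network labels can be paired with a measured depth. A minimal sketch of that projection, assuming the calibration matrices already exist (the function name and array shapes are mine, not anything Tesla has published):

```python
import numpy as np

def project_lidar_to_image(points_lidar, T_cam_from_lidar, K):
    """points_lidar: (N, 3) xyz points in the lidar frame.
    T_cam_from_lidar: (4, 4) extrinsic transform, lidar frame -> camera frame.
    K: (3, 3) camera intrinsic matrix.
    Returns (M, 2) pixel coordinates and (M,) depths for the points
    that land in front of the camera."""
    n = points_lidar.shape[0]
    homog = np.hstack([points_lidar, np.ones((n, 1))])  # homogeneous coords, (N, 4)
    cam = (T_cam_from_lidar @ homog.T).T[:, :3]         # points in the camera frame
    keep = cam[:, 2] > 0.1                              # drop points behind the camera
    cam = cam[keep]
    pix = (K @ cam.T).T
    uv = pix[:, :2] / pix[:, 2:3]                       # perspective divide to pixels
    return uv, cam[:, 2]
```

Once you have pixel/depth pairs like that, the camera-only network’s distance estimates can be trained against, or checked against, the lidar measurements.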
Here’s another example of pick up and drop off being different. When you hail a ride on the street, the car should use GPS to attempt to get as close to you as possible. I would further add that the app should track your GPS while the car is heading your way, so if you move a bit, it can find you more accurately.
But when you are inside a building, let’s say a hotel, and you hail a ride, the system needs to be a bit smarter and NOT rely only on GPS to determine where you will be when the ride arrives. It shouldn’t head to the nearest street location your GPS indicates; instead it should head to the entrance of the venue, or to its designated rideshare pickup spot (say, at a ballpark, an airport, etc.). In this case, someone hailed the ride from their hotel room, and the car went to the street under the hotel room window instead of to the street where the hotel entrance is located. The system will need to know some more information about the locations customers are hailing rides from/to.
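A rough sketch of the kind of lookup that seems to be missing here - every name, coordinate, and radius below is invented for illustration, not anyone’s real data:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6_371_000 * asin(sqrt(a))

# venue -> designated pickup point (entirely hypothetical data)
DESIGNATED_PICKUPS = {
    "hotel_front_entrance": (30.2675, -97.7410),
    "ballpark_rideshare_zone": (30.2850, -97.7200),
}

def resolve_pickup(rider_lat, rider_lon, snap_radius_m=150):
    """Snap the request to a designated pickup point when the rider's
    GPS fix is within snap_radius_m of one (e.g. hailing from a hotel
    room); otherwise fall back to the raw fix, re-polled as the car
    approaches in case the rider moves."""
    for lat, lon in DESIGNATED_PICKUPS.values():
        if haversine_m(rider_lat, rider_lon, lat, lon) <= snap_radius_m:
            return (lat, lon)
    return (rider_lat, rider_lon)
```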
Apropos of some earlier conversations on this thread (forgive me, I couldn’t go back through 270 posts to find it), it appears that the vehicles Tesla is using for the rollout in Austin have some bespoke juiced up telecommunications hardware to allow for significantly more bandwidth for video. Presumably to enable teleops:
Another example is when you are at an office building. My company is one of a dozen or so in a campus that has about 8 buildings. The landlord installed about 5 designated ride share pickup locations with signage. When you request an Uber/Lyft within some GPS fenced area their apps require you to tap on one of the numbered pickup spots.
Airports work in a similar way.
There must be some master database of these types of pickup/dropoff locations – or Uber & Lyft keep this info private, and others have to learn about them on their own.
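If I had to guess at the mechanics, it’s a simple geofence test: when the rider’s fix falls inside the campus polygon, the app offers only the numbered spots. A toy version, with the polygon and spot coordinates invented:

```python
def point_in_polygon(lat, lon, polygon):
    """Ray-casting test: True if (lat, lon) lies inside the polygon,
    given as a list of (lat, lon) vertices."""
    inside = False
    n = len(polygon)
    for i in range(n):
        la1, lo1 = polygon[i]
        la2, lo2 = polygon[(i + 1) % n]
        # does this edge straddle a ray cast from the point?
        if (lo1 > lon) != (lo2 > lon):
            if lat < (la2 - la1) * (lon - lo1) / (lo2 - lo1) + la1:
                inside = not inside
    return inside

# invented campus boundary; in the real app this would come from the landlord
CAMPUS = [(30.40, -97.72), (30.40, -97.70), (30.38, -97.70), (30.38, -97.72)]
NUMBERED_SPOTS = {1: (30.395, -97.715), 2: (30.390, -97.705)}  # hypothetical

def spots_to_offer(rider_lat, rider_lon):
    """Inside the geofence, the app requires a tap on a numbered spot;
    outside it, fall back to normal GPS pickup."""
    if point_in_polygon(rider_lat, rider_lon, CAMPUS):
        return NUMBERED_SPOTS
    return None
```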
Mike
Out of curiosity I searched for this, and it turns out that Google has an API that includes:
- Find Nearby Places: Returns a ranked list of nearby places based on a given location.
- Find Pickup Points for a Location: Identifies suitable pickup points near a specific geographic coordinate.
- Find Pickup Points for a Place: Suggests pickup points for a known place (e.g., a business or landmark).
So, free to use for any app, it seems.
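For the curious, here’s roughly what calling the first of those looks like. This is the classic Places “Nearby Search” endpoint, which I know exists; I haven’t tried the pickup-point endpoints, but the description suggests they follow the same request pattern. You need your own API key:

```python
import requests  # third-party: pip install requests

def find_nearby_places(lat, lon, radius_m, api_key):
    """Return a ranked list of places near a lat/lon via the Google
    Places Nearby Search endpoint."""
    resp = requests.get(
        "https://maps.googleapis.com/maps/api/place/nearbysearch/json",
        params={
            "location": f"{lat},{lon}",   # comma-separated lat,lng
            "radius": radius_m,           # search radius in meters
            "key": api_key,
        },
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json().get("results", [])
```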
Mike
In the same way that we can see the potential for Tesla to scale its AI driving software across a fleet of its current hardware, given Tesla’s manufacturing efficiency (pending development of a successful L4+ driver), we also have to admit the potential for a competitor such as Waymo to improve its manufacturing efficiency until it is sufficiently scalable and economical to deliver a meaningful quantity of units for some relevant use case, such as robotaxi.
Now we are solidly in the world of possibility.
Given the long period of R&D on data collection and AI training that we have witnessed with both Waymo and Tesla, it seems to me that the data and AI problem (identify a sensor configuration and associated data processing, and train a successful AI driver on that data) is much more difficult than the manufacturing problem (given known vehicle hardware, scale its manufacture).
We know how to manufacture many things.
We don’t know how to AI drive, as of yet.
I would add that we should define just what that means.
It means to “define the sensor configuration and associated data and computational processes (e.g., a machine learning model) that can implement successful driving decisions according to some benchmarks (safety, transport utility, economics).”
In another thread I put down some current predictions from Tesla and Waymo, with outcomes not yet known.
See:
It is a question of coloring the difficulties properly. Yes, we already know a lot about manufacturing. Among the things we know is that it is capital intensive and can easily take a couple of years to get to volume. While the obstacles to full FSD are not known in the same sense, we do know there have been some dramatic improvements in the last year. We also know that Tesla thinks the current hardware will do the job.
Do we know that? Do they really think that or are they just saying that? Musk said HW3 would do the job. Clearly not the case.
And if it can do the job, how come it isn’t doing the job?
Agreed. But we do know how to quite accurately determine if some iteration of HW is adequate or not. You collect all the data and run it (for example) at half speed on the input side but full speed on the processing side. If that gives you adequate response times then “all” you have to do is to build HW that is twice as fast and has twice the bandwidth and the problem is solved.
You do this in 2 or 3 steps, the first being to build chip(s) with that throughput, even if they are low-yielding and costly, then iterate a couple more times with Moore’s Law, and the problem is solved in 2-4 years.
Simple. But only if the software is really correct and it just needs twice the time to process.
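To make the test concrete, here’s a sketch of that half-speed replay; process_frame and the frame rate are placeholders for whatever the real stack and sensors actually use:

```python
import time

FRAME_INTERVAL_S = 1 / 36   # assumed sensor frame rate; a placeholder
SLOWDOWN = 2                # feed the input side at half real-time speed

def adequacy_test(recorded_frames, process_frame, deadline_s):
    """Replay logged frames at 1/SLOWDOWN of real time through the
    full-speed processing stack. If the worst-case decision still
    lands within the scaled deadline, hardware with SLOWDOWN x the
    throughput should meet the real deadline on live data."""
    worst = 0.0
    for frame in recorded_frames:
        start = time.perf_counter()
        process_frame(frame)                 # full-speed processing of one frame
        elapsed = time.perf_counter() - start
        worst = max(worst, elapsed)
        # pace the input at half real-time rate
        time.sleep(max(0.0, FRAME_INTERVAL_S * SLOWDOWN - elapsed))
    return worst <= deadline_s * SLOWDOWN, worst
```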
Mike
So, three days in:
Reuters:
Tesla’s robotaxi peppered with driving mistakes in Texas tests
Summary:
- Traffic problems, driving mistakes feature in Tesla’s robotaxi - passenger videos
- Robotaxis entered wrong lane, dropped passengers off in the middle of roads, among other issues
- Problems show limitations of Tesla’s self-driving software - safety expert
- Driverless vehicle rivals Waymo and Cruise also faced issues in their rollouts
A first public test of robotaxis by Tesla in Austin, Texas led to multiple traffic problems and driving issues, videos from company-selected riders showed over the first few days.
Chief Executive Elon Musk has tied Tesla’s financial future to self-driving technology, and with Tesla sales down, the stakes are high. He said Tesla would roll out the service to other U.S. cities later this year and predicted “millions of Teslas” operating “fully autonomously” by the second half of next year.
The Tesla fans invited to the trial were strongly supportive and posted videos of hours of trouble-free driving, but issues drew questions from federal road safety regulators and auto safety experts.
Issues included Tesla robotaxis entering the wrong lane, dropping passengers off in the middle of multiple-lane roads or at intersections, sudden braking, speeding and driving over a curb.
One of the most unnerving (and most frequent) issues is phantom braking - the taxi stepping on the brakes for no reason. Speculation is that it “sees” shadows, or something it doesn’t understand and can’t tell how far away it is, so out of caution it does “sudden braking.”
The City of Austin has set up a website so “incidents” can be reported. I don’t know how those are vetted, but it will be interesting to watch. In other news, Charlotte has removed Tesla from its approved list of vendors for city vehicles. Don’t know why.