Waymo self-driving cars -- progress

  1. OK.
  2. OK.
  3. OK.

Anything else? Meanwhile, since such data is not forthcoming, not from Waymo and certainly not from Tesla, all we can do is speculate. Do you think we shouldn’t do that?

Given how little we really know about what Waymo is doing in CA, let me be skeptical …

These are the examples I provided upthread, quoted again below.

The first example is a question about how to measure (define) “extra/more information,” driven by a realistic, but non-technical example.

The second example illustrates that having “extra/more information” might require more (not less) analytical processing (more or smarter or “less dumb” brain power).

I don’t know the answers, but I believe these are very legit examples that counter or maybe deepen the discussion on the claims made upthread.

It is ALL about real world data. The debate about vision only or vision + multiple sensors is over.

Tesla has automated data annotation, with video feeds coming in from millions of cars all over the world every day, and is using them for model training.

On the other hand, Waymo is paying drivers to drive around and collect data and HD maps. You cannot generate that data synthetically. The fleet is very small and expensive, even when compared with Uber. This is a dead end.

1 Like

The debate is NOT over for the very simple reason that Tesla has yet to have a working robotaxi service, and yet Waymo does. This, after years and years and millions and millions of miles driven and data collected. I’m puzzled as to how this debate is over.

As for synthetic data, this is exactly what Nvidia does with its Omniverse system for training systems from self-driving to autonomous robots.

Disclaimer: I work for Nvidia, but have zero connection to the Omniverse system or data generation of any kind.

4 Likes

I was scratching my head as well. The debate is over and the company who doesn’t have a robotaxi won???

5 Likes

I think synthetic data is required to augment real data. AI needs lots of samples of each type of situation in order to properly train the models. There are two ways this is done. First, take an existing real image/video and augment it, such as mirroring it, adding noise, simulating adverse weather conditions, or changing the color, brightness, and so on.
Second, completely fabricate new data samples. This is important when you want to simulate scenarios that happen infrequently but are dangerous enough to want to add to the dataset… for example, cars running red lights and nearly (or actually) smashing into your car or a car next to you, so your car can learn to avoid or minimize damage or injuries. These types of simulations could be run in all different lighting, weather, and times of day (think building shadows), and with many different speeds and arrival times of the other cars.
You just aren’t going to happen to get this type of data by collecting lots of dash cam videos.
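The first kind of augmentation described above (mirroring, brightness jitter, sensor noise) can be sketched in a few lines of NumPy. This is a generic illustration, not any particular company’s training pipeline, and the `augment` function and its parameter ranges are made up for the example:

```python
import numpy as np

def augment(image: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Produce one augmented copy of an H x W x 3 uint8 image."""
    out = image.astype(np.float32)
    if rng.random() < 0.5:                   # randomly mirror left/right
        out = out[:, ::-1, :]
    out *= rng.uniform(0.7, 1.3)             # brightness jitter
    out += rng.normal(0.0, 5.0, out.shape)   # simulated sensor noise
    return np.clip(out, 0, 255).astype(np.uint8)

# Tiny demo frame; a real pipeline would apply this to every training image.
rng = np.random.default_rng(0)
frame = rng.integers(0, 256, size=(4, 4, 3), dtype=np.uint8)
aug = augment(frame, rng)
```

Each call produces a slightly different image with the same labels, which is why this kind of augmentation multiplies a dataset so cheaply.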

Mike

2 Likes

In response to:

I wrote:

Since no one answered the above counter examples, I’ll take a quick stab.

Here’s some info to help understand example 2, from above.

The bolded parts indicate that additional sensor data (or “extra information”) might sometimes require more analytical processing (as opposed to “dumber” AI) to maintain a given level of system performance.

Yeong et al., Sensors 2025, 25(3), 856

But Trump is getting ready to flip the switch…

Oh boy, you do suffer severely from commitment bias. Do you even entertain a remote possibility of both being successful?

3 Likes

Waymo is a dead end. It is a failed business that cannot survive without big daddy Google. It will need to pivot to “vision only” model if it wants to survive.

The debate is long over.

It is ironic that Google is the one that published the transformer paper which is enabling vision (and OpenAI etc.). Tesla was also stuck at a local minimum, with spaghetti code, for a long time.

1 Like

Not for nuthin’ but a Boeing 747 black box monitors over 1,000 discrete inputs simultaneously. And as we have seen from the 737 Max debacle, it uses those inputs to make decisions that can override the human pilots’ directions (obviously in this case in error).

Typically an airliner has 3 processors monitoring the range of inputs, and at least 2 of 3 must agree before deciding on a course of action: from things as simple as broadcasting a warning (“Terrain, Terrain! Pull up!”) to taking over various inputs to try to recover from stalls or other perilous situations.

So clearly there are microprocessors capable of handling multitudes of simultaneous inputs and making “independent decisions” based on them.
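The 2-of-3 voting described above can be sketched very simply. This is a toy illustration of the majority-vote idea, not actual avionics code; the function name and tolerance are made up for the example:

```python
from typing import Optional

def vote_2_of_3(a: float, b: float, c: float, tol: float = 0.5) -> Optional[float]:
    """Return a value agreed on by at least 2 of 3 sensor channels.

    Two readings "agree" if they differ by less than tol.
    Returns None when no majority exists (fault condition).
    """
    for x, y in [(a, b), (a, c), (b, c)]:
        if abs(x - y) < tol:
            return (x + y) / 2.0   # average the agreeing pair
    return None                     # all three disagree: flag a fault

# Channel b has drifted; channels a and c still agree, so they outvote it.
reading = vote_2_of_3(100.1, 250.0, 100.3)
```

Real flight-control systems are far more elaborate (dissimilar hardware, cross-channel monitors), but the core principle is exactly this: no single faulty input gets to decide.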

PS: You can get one for yourself for about $225 on eBay. :wink:

1 Like

He still hasn’t addressed this. :smiley:

5 Likes

Tesla has won.

There were hundreds of startups with sensor based autonomy that have folded.

It will be apparent in coming months.

1 Like

Until Tesla is making money with a working robotaxi service they have not won.

What do we know today?

  1. Waymo’s solution is expensive, is difficult to scale, and may not be cheaper than human drivers
  2. Tesla’s system is cheaper but has not proven to be good enough to actually work as a robotaxi. It’s not even universally that much better than SuperCruise or BlueCruise for that matter.

Just because Tesla’s system is cheaper does not mean it will actually work. It is very likely that NEITHER system will make for a viable, profitable robotaxi service. There is the chance that neither will win. And you cannot claim victory until Tesla has a lot of time and a lot of miles actually running a service.

EDIT: Forgot to add “what we know number 3”: Tesla promised us this ability 6 years ago.

4 Likes

I agree that Tesla isn’t yet proven as a robotaxi, but this particular statement isn’t at all true. Tesla “FSD” is not better or worse than BlueCruise or SuperCruise … that’s because it is completely different from those two things. Tesla FSD drives me from my driveway to the supermarket, to the movie theater, to the mall, to my friend’s house, etc. It drives there via surface roads, stopping at stop signs and traffic lights; it turns, it adjusts to speed limits as necessary, and it’ll enter the highway, drive, and exit the highway, all on its own. And sometimes when it arrives at the destination it’ll even choose a parking spot and park the car! All I have to do is enter the car, choose my destination, and then press the blue button to “GO”. BlueCruise and SuperCruise don’t do those things and don’t claim to do those things.

2 Likes

Elon’s only accomplishment here being that he acquired (bought) the company and got himself paid handsomely through stock options early in its technological adoption cycle? Maybe also acting as a very successful salesman for them?

5 Likes

Consider two engineering teams for AVs.

Team 1 has a sensor array that is camera only.
Team 2 has a sensor array that is camera and LIDAR.

Both companies invest in research and design and collect many miles of test data for many years.

After years of data collection and testing, consider 3 distinct outcomes:

  1. “Camera only” is the most economic to manufacture and maintain (function per dollar, not including the investment in research and design) for a commercial AV (some defined functionality)
  2. “LIDAR only” is the most economic to manufacture and maintain for a commercial AV
  3. “Camera and LIDAR” is the most economic to manufacture and maintain for a commercial AV

Team 1 may win* (have best unit economics) under only outcome 1.
Team 2 may win* (have best unit economics) under any of the three outcomes.

Hence, in the admittedly simplified situation above, Team 2 has the more robust strategy with respect to risk of success of different sensor configurations. (Team 2 has the more diversified sensor solution.)

There is the possibility that Team 2 requires higher investment costs to research and design more sensor types (2 vs 1), but at today’s stage of AV development I don’t think we can necessarily assume higher costs for Team 2, because we don’t yet know how all of this tech will evolve into a commercial product, the amount of data and testing involved (including for various sensor configurations), the regulatory regime, and probably many other factors. If a team exhausts funds before developing the defined functionality, then that team fails under all outcomes.

*Note the qualifier “may”: ultimate success will depend on the long-term economic viability of the commercial AV. The above outcomes are necessary conditions for economic success (a superior sensor array and associated unit economics); the point is to illustrate the relative scope for possible success given different sensor arrays and possible technological outcomes.
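The win conditions above can be made explicit with a tiny enumeration. This assumes, as the post implies, that a team can ship any subset of the sensors it actually researched (so Team 2 could drop either sensor, but Team 1 cannot add LIDAR it never built); the names and structure are just for illustration:

```python
# Sensor configurations each team is able to ship, given what it researched.
TEAM_CONFIGS = {
    "Team 1": {frozenset({"camera"})},
    "Team 2": {frozenset({"camera"}),
               frozenset({"lidar"}),
               frozenset({"camera", "lidar"})},
}

# The three outcomes: which configuration turns out most economic.
OUTCOMES = {
    1: frozenset({"camera"}),
    2: frozenset({"lidar"}),
    3: frozenset({"camera", "lidar"}),
}

for team, configs in TEAM_CONFIGS.items():
    wins = [k for k, best in OUTCOMES.items() if best in configs]
    print(f"{team} can win under outcomes: {wins}")
```

Running this reproduces the post’s claim: Team 1 can win only under outcome 1, while Team 2 can win under any of the three.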

2 Likes

Musk said today that the pictures of Robotaxis were actually pictures of just-produced cars on their way to be parked. Tesla also confirmed that all robotaxis will be remotely monitored, so if something happens they can be taken over by a human. But that means you’d better hope they have connectivity at all times. And how many times have you lost signal on your cell phone while driving?

1 Like