On the Q3 conference call, Tesla said something that sounds very reasonable about autonomous driving (I know, newsworthy).
They said they are using a simulation of real-world driving to train the model behind their AI driver.
This means they can simulate both ordinary and highly abnormal driving scenarios, generate small or large variations of each, and train repeatedly on enormous amounts of data, weighting scenarios more or less heavily to steer the model toward the behavior they want.
Assuming the simulations sufficiently represent real-world images and driving scenarios, including enough quantity and variety of edge cases, this will let them train at a rapid pace and be much less dependent on collecting real-world data (which they already have a ton of, though who knows how much of it covers the weird edge scenarios).
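To make that concrete, here is a minimal sketch of what "weighting various scenarios" in a simulation pipeline could look like. Everything in it (the scenario families, the weights, the `render_variation` helper) is hypothetical and purely illustrative; Tesla hasn't described their pipeline at this level of detail.

```python
import random

# Hypothetical sketch of weighted scenario sampling. The scenario families,
# weights, and helpers below are invented for illustration only.

SCENARIO_WEIGHTS = {
    "highway_cruise":      1.0,   # common, already well covered
    "urban_intersection":  2.0,
    "night_glare":         5.0,   # rarer, so upweighted
    "pedestrian_dart_out": 10.0,  # rare edge case, heavily upweighted
}

def sample_family(rng):
    """Pick a scenario family, biased toward underrepresented edge cases."""
    families = list(SCENARIO_WEIGHTS)
    weights = [SCENARIO_WEIGHTS[f] for f in families]
    return rng.choices(families, weights=weights, k=1)[0]

def render_variation(family, rng):
    """Stand-in for the simulator: vary lighting, weather, and traffic."""
    return {
        "family": family,
        "sun_angle_deg": rng.uniform(0, 90),
        "rain_intensity": rng.uniform(0, 1),
        "n_other_agents": rng.randint(0, 20),
    }

rng = random.Random(0)
for step in range(5):
    scenario = render_variation(sample_family(rng), rng)
    # train_step(model, scenario) would go here; training is out of scope.
    print(step, scenario["family"])
```

The point is just that once scenarios are synthetic, rare edge cases can be oversampled as aggressively as you want, which is hard to do when you depend on real-world data collection.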
I wonder if they are just now really embracing and scaling this simulation approach.
If so, why weren't they doing it much sooner, like years ago? (Waymo, for example, has reportedly done a ton of simulation.)
It is hard to say, but it appears that way: they describe using the simulator for reinforcement learning and say that it's not yet in one of their latest FSD versions, 14.1.
Obviously this is still a very difficult machine learning problem, but it gives me more confidence that their methodology can, eventually, solve AI driving.
Big challenges that remain:

- Is their camera-only approach robust enough to the wide variety of real-world perception conditions (night, glare, rain, weird angles)?
- How much data and compute does their end-to-end model need in order to reach sufficient safety levels?
- Can they manage the low interpretability and explainability of an end-to-end model?
- Can they avoid overfitting to specific scenarios (e.g., Austin, or any other scenario large or small) in a way that weakens their model's ability to generalize? One generic way to watch for this is sketched below.
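On that last question, a standard generic practice (not anything Tesla has described) is to hold out entire scenario families or geographies from training and track the gap between in-distribution and held-out performance. A minimal sketch, with invented names and numbers:

```python
# Hypothetical sketch: hold out whole scenario families (or cities) from
# training and watch the generalization gap. All names and numbers are made up.

TRAIN_FAMILIES   = ["sf_urban", "phoenix_suburban", "highway_cruise"]
HELDOUT_FAMILIES = ["austin_urban", "night_glare"]  # never seen in training

def generalization_gap(success_rate):
    """success_rate: callable mapping a scenario family to a score in [0, 1]."""
    train = [success_rate(f) for f in TRAIN_FAMILIES]
    held = [success_rate(f) for f in HELDOUT_FAMILIES]
    return sum(train) / len(train) - sum(held) / len(held)

# Toy usage with made-up scores, just to show the shape of the check:
fake_scores = {"sf_urban": 0.99, "phoenix_suburban": 0.98, "highway_cruise": 0.99,
               "austin_urban": 0.91, "night_glare": 0.88}
print(f"gap: {generalization_gap(fake_scores.get):.3f}")  # a large gap is a red flag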
Here is the relevant quote from the call:

> Our world simulator for reinforcement learning is pretty incredible. Our Tesla reality simulator, when you see it, the video that's generated by the Tesla reality simulator and the actual video looks exactly the same. That allows us to have a very powerful reinforcement learning loop to further improve the Tesla AI. We're going to be increasing the parameter count by an order of magnitude. That's not in 14.1. There are also a number of other improvements to the AI that are quite radical. This car will feel like it is a living creature. That's how good the AI will get with the AI4 computer, before AI5. AI5, like I said, is by some metrics forty times better. But just to say safely, it's a 10x improvement.
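For readers who want the shape of such a "reinforcement learning loop" in code, here is a minimal, self-contained sketch. The world model is a toy one-dimensional lane-keeping simulator standing in for a learned video-level world simulator, and the policy is a single steering gain trained with REINFORCE; it is purely illustrative and has nothing to do with Tesla's actual system.

```python
import numpy as np

# Toy sketch of a reinforcement learning loop against a "world simulator."
# The simulator is a one-dimensional lane-keeping model and the policy is a
# single steering gain; all of it is invented for illustration.

rng = np.random.default_rng(0)

def world_step(offset, steer):
    """Toy world model: steering nudges the car; reward penalizes drift."""
    new_offset = offset + 0.5 * steer + rng.normal(0, 0.05)
    return new_offset, -abs(new_offset)

# Linear-Gaussian policy: steer ~ Normal(theta * offset, sigma^2).
theta, sigma, lr, baseline = 0.0, 0.2, 0.02, 0.0

for episode in range(3000):
    offset = rng.normal(0, 1.0)
    grads, ret = [], 0.0
    for _ in range(20):
        steer = theta * offset + sigma * rng.normal()
        # d/dtheta of log N(steer; theta * offset, sigma^2)
        grads.append((steer - theta * offset) * offset / sigma**2)
        offset, reward = world_step(offset, steer)
        ret += reward
    baseline += 0.05 * (ret - baseline)              # running-average baseline
    theta += lr * (ret - baseline) * np.mean(grads)  # REINFORCE update
    theta = float(np.clip(theta, -3.5, 0.5))         # keep toy dynamics stable

print(f"learned steering gain: {theta:.2f}")  # ends up negative: steer back to center
```

The gap between this toy and a real system is enormous, of course; the only point is the loop structure: roll the policy out in the simulator, score the outcomes, and nudge the policy toward behavior that scored well.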