As best I can tell, Tesla is the only major AV player using a camera-only sensor array to inform its AI driving decisions (though maybe there are others?).
Arguments can be made about the relative benefits of camera-only versus multi-sensor configurations (hardware, data and software complexity, utility, and cost).
And I don’t have a strong view about which sensor configuration is best - I really wouldn’t claim to know.
It may very well be that camera-only and multi-sensor are both viable solutions to AI driving.
I’m more interested in what the available data says (or doesn’t say) about how various sensor configurations perform with respect to safety, time to market, unit economics, R&D costs, etc.
But, to me, it could be telling that Tesla stands somewhat alone in its sensor configuration, with other companies using cameras plus other sensors like lidar, radar, and ultrasonic.
Another interesting tidbit is that Tesla apparently uses lidar data together with camera data only in much more limited testing (vs. the large volume of camera-only data from its retail fleet).
We could also debate the degree to which map data are another sensory input for driving (vs navigation).
But my main question is: did Tesla arrive at the camera-only decision because the data suggested it was the best path versus other sensor configurations, or did a push from leadership, much less informed by data, result in the camera-only approach?
If the answer is the latter, that could be a serious risk for Tesla’s AV plans.
Most people on this board have probably experienced executive decisions that turned out to be poorly informed, and have seen the consequences of those decisions.
Disclaimer: I’m open to definitions of things like “major AV player”, “driving”, “navigation”, etc. I’m also open to being wrong or not fully informed on any of this.