I hadn’t really given much thought to the “behind the scenes” part of autonomous driving. The first part: the sensors, the chips, the cars themselves, yeah, sure.
Here’s a short documentary on how the chips get to know what’s what in the first place. I just assumed they showed the software pictures of a zillion cars and somehow it knew “OK, car.” Apparently not really. Everything must be annotated first: every object, every human, the sky, the trees, the vegetation, literally everything, and only then can the software begin to “understand”.
This short doc takes you through Venezuela, Kenya, the Philippines and elsewhere into the jobs of the annotators, who outline objects in thousands of frames to help the AI along. (There are various interesting side issues, such as how they’ve learned to use VPNs to convince the tech companies they’re somewhere they’re not, in order to get higher pay.)
Also, though not explored in the doc: why white people are better represented in the training data and more likely to be recognized by the AI than other races, along with other cultural differences, which mean that AI training will have to continue as autonomous driving moves from continent to continent.
Anyway: free gift link. The doc is about 20 minutes; the first 2-3 minutes are very slow, but it picks up:
The film is titled “Self Driving Cars Can’t See Without Their Eyes”. Yep.