How do it know?

I hadn’t really given much thought to what’s behind the “behind the scenes” of autonomous driving. The first layer, the sensors, the chips, the cars themselves: yeah, sure.

Here’s a short documentary on how the chips get to know what’s what in the first place. I just assumed you showed the software pictures of a zillion cars and somehow it knew, “OK, car.” Apparently not really. Everything must be annotated first: every object, every human, the sky, the trees, the vegetation, literally everything, and only then can the software begin to “understand.”
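
To make that concrete, here’s a toy sketch of what an annotation for one frame might look like. The schema and labels are invented for illustration; real pipelines use formats such as COCO-style JSON with polygons or per-pixel segmentation masks.

```python
# Hypothetical annotation for a single dashcam frame (illustrative only).
# Annotators outline every region: objects, people, sky, vegetation, all of it.
frame_annotation = {
    "image": "street_scene_0001.jpg",
    "objects": [
        {"label": "car",        "bbox": [412, 230, 590, 340]},   # x1, y1, x2, y2
        {"label": "pedestrian", "bbox": [120, 200, 160, 330]},
        {"label": "sky",        "polygon": [(0, 0), (1280, 0), (1280, 180), (0, 180)]},
        {"label": "vegetation", "polygon": [(0, 300), (300, 300), (300, 500), (0, 500)]},
    ],
}

print(len(frame_annotation["objects"]), "labeled regions in this frame")
```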

This short doc takes you to Venezuela, Kenya, the Philippines and elsewhere, into the jobs of the annotators who delineate thousands of frames to help the AI along. (There are various interesting side issues, such as how they have learned to use VPNs to convince the tech companies they’re somewhere they’re not, in order to get higher pay.)

Also, though not explored in the doc: white people are better represented in the training data and more likely to be recognized by the AI than people of other races, and there are other cultural differences as well, which means AI training will have to continue as autonomous driving moves from continent to continent.

Anyway: free gift link. The doc is about 20 minutes; the first 2-3 minutes are very slow, but it picks up:

https://www.nytimes.com/2026/01/02/opinion/ai-self-driving-cars-workers.html?unlocked_article_code=1.BVA.5JRZ.uQJ6KTqVPfS_&smid=nytcore-ios-share

The film is titled “Self Driving Cars Can’t See Without Their Eyes”. Yep.

That is the core of AI learning. Think back to children constantly asking “Why?” The responders are “annotating” the experiences for the child. “Annotating” is built into all games in one way or another; kids learn what works and what does not. Every experience is an “annotation”!

“To understand” is based on the statistical weight of the observations. A 50% chance that it’s a dog is not a dog. At 99%, it’s much more likely to be a dog. At 99.999%, it’s almost certainly a dog, so let’s go with it. This explains the need for so much annotated data, whether from nature or from simulation.
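
A toy sketch of that thresholding idea, with a made-up cutoff (real systems tune this carefully and act on full probability distributions, not a single number):

```python
# Minimal sketch: turn a classifier's confidence into a yes/no decision.
def classify(label: str, confidence: float, threshold: float = 0.99) -> str:
    """Accept a prediction only when the model is sufficiently sure."""
    if confidence >= threshold:
        return label        # 99.999% that it is a dog: let's go with it
    return "uncertain"      # 50% that it is a dog is not a dog

print(classify("dog", 0.50))     # uncertain
print(classify("dog", 0.99999))  # dog
```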

A good way to think about AI learning is to study how living things learn in nature. Nature is rather “cruel”: if something does not work, it might kill you, and the bad genes with you. Slowly, what works accumulates. We call it “natural selection.” Humans have learned to speed up the process via breeding and grafting.
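
For fun, here’s a toy “selection” loop that makes the analogy concrete. The population size, mutation rate, and fitness function are all invented for illustration:

```python
import random

# Toy natural selection: the "gene" is a single number, and fitness
# peaks when it equals 3.0. What does not work gets discarded.
def fitness(x: float) -> float:
    return -(x - 3.0) ** 2

population = [random.uniform(-10, 10) for _ in range(50)]
for generation in range(100):
    population.sort(key=fitness, reverse=True)
    survivors = population[:25]                     # nature is "cruel"
    offspring = [s + random.gauss(0, 0.5) for s in survivors]
    population = survivors + offspring              # small mutations each round

print(round(max(population, key=fitness), 2))       # converges near 3.0
```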

The Captain
