Cameras, depending on resolution, CAN be far superior to human eyesight.
HOWEVER, humans CAN be far superior in several ways, such as:
If the windshield has bugs/glare/fog/blurry spots, we can move our heads or take other action; at best, FSD can turn on the wipers, which may or may not help.
Human retention of previously seen objects (moving or not) can be much better.
I also think humans are better at interpreting what we see: other drivers' intentions, strange objects in the road (a dangerous big rock vs. a big plastic bag blowing by), a pedestrian who is or isn't looking at you while stepping off the curb.
Of course, humans can get tired or lazy (fail to look both ways, etc.).
Computers are fallible too. Again, one of the most visible and problematic issues with LLM chatbots is their tendency to hallucinate: to confidently provide answers that are wrong. They don't know when they don't know something. They can't always determine when answering a question would exceed their current capabilities.
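For the programmers in the thread, that "doesn't know what it doesn't know" problem is easy to demonstrate. Here's a minimal Python sketch using a made-up toy classifier (a random linear layer, not any real chatbot's internals): because softmax output always forms a full probability distribution, the model produces a confident-looking answer even when fed pure noise.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy stand-in for a trained network's final layer:
# a random linear map from an 8-dim input to 3 "answer" classes.
W = rng.normal(size=(3, 8))

def softmax(z):
    z = z - z.max()          # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

# Feed it pure noise -- an input it has no business answering.
garbage = rng.normal(size=8)
probs = softmax(W @ garbage)

print(probs)                 # a full probability distribution, sums to 1
print("answer:", probs.argmax(), "apparent confidence:", round(probs.max(), 2))
# One class usually "wins" decisively, so the output *looks* confident.
# Nothing in the math says "I don't know" -- abstention, calibration, or
# out-of-distribution detection has to be engineered in separately.
```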
Absolutely. Computers can obviously be programmed to do self-assessments and determine whether what they're being asked to do in the moment exceeds their capabilities. An AV system can be programmed to check whether visibility conditions allow it to drive safely. But there's no guarantee that it will be: LLM AIs show that it's entirely possible to build computer programs that don't do that.
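To illustrate what "programmed to check" means: conceptually it's just a gate in the control loop. Here's a rough Python sketch, entirely hypothetical (the sensor fields, thresholds, and function names are invented for illustration, not any real AV stack's API). The point is that this check only exists if someone deliberately put it there, or if it somehow emerged from training.

```python
from dataclasses import dataclass

@dataclass
class Visibility:
    # Hypothetical per-frame estimates a perception stack might expose.
    est_range_m: float      # how far ahead the cameras can resolve objects
    glare_fraction: float   # 0..1, portion of the image washed out
    fog_density: float      # 0..1, heavier is worse

# Made-up thresholds, for illustration only.
MIN_RANGE_M = 75.0
MAX_GLARE = 0.30
MAX_FOG = 0.50

def within_operational_limits(v: Visibility) -> bool:
    """The 'do I actually have the capability right now?' check.
    If nobody writes (or trains) this, the vehicle simply keeps
    driving, whatever the conditions."""
    return (v.est_range_m >= MIN_RANGE_M
            and v.glare_fraction <= MAX_GLARE
            and v.fog_density <= MAX_FOG)

def control_step(v: Visibility) -> str:
    if within_operational_limits(v):
        return "drive normally"
    return "degrade: slow down, pull over, or hand control back"

# A clear day vs. a wall of rain:
print(control_step(Visibility(est_range_m=200.0, glare_fraction=0.05, fog_density=0.0)))
print(control_step(Visibility(est_range_m=20.0, glare_fraction=0.10, fog_density=0.9)))
```

Without that gate (or a learned equivalent), nothing stops the system from barreling ahead in conditions it can't handle.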
So you can't assume, as MarkR did upthread, that an AV system will never overestimate its abilities. Unless someone has deliberately designed the system not to do that (or unless that trait emerged from its training), it's entirely possible that an AV system will try to drive in circumstances where it doesn't have the capability to do so safely.
(To say nothing of edge cases where the visibility deteriorates too fast. Here in Florida, thunderstorm cells are sometimes so compact that you can be on the expressway and go from bright sunshine to near-blackout rain in less than a second as you drive from clear skies into a literal wall of rain.)