The Dawn Project vs. Tesla

Today I was watching the news and twice saw an ad harshly criticizing Tesla’s self-driving software by showing a Tesla repeatedly driving over a child-sized test dummy. The second time I saw the ad, I noticed that it was paid for by the “Dawn Project”.

Being a cynic, I expected to find that it was paid for by the UAW, the US IC car companies, or some such, so I googled ‘Who is the Dawn Project? Tesla’:

https://www.google.com/search?q=what+is+the+dawn+project+tes…

This gave a long list of similar pieces describing the same pitch, including that the group took out a full-page ad in the New York Times.

Way down on the page I found this:

https://www.vice.com/en/article/pkp597/the-billionaire-runni…

The “Dawn Project” is the creation of Dan O’Dowd, a self-proclaimed “self-made billionaire” from Santa Barbara. He has spent the past 40 years running the obscure computer company Green Hills Software and claims to have written software that “never fails and can’t be hacked” for nuclear weapon systems, military encryption systems, the FBI, car companies, and others.

It’s not an exaggeration to say that O’Dowd has become obsessed with Tesla’s full self-driving cars. He owns three Teslas and claims to have driven nothing else for over a decade, but believes the full self-driving cars have become so dangerous that they need to be banned immediately. California Senate candidate Dan O’Dowd will not talk about taxes. Or homelessness. Or climate change. Or inflation. Or housing. Or jobs. All he will talk about is how much he hates Tesla’s self-driving cars and the existential threat computers pose to humanity.

“My issue is more important than all of them, because it’s basically about survival,” he told Motherboard. “When cyber Armageddon hits, and everything goes down, I don’t think anybody’s gonna care about taxes.”

So, anyway, I’m guessing you’ll see these ads at some point, and I just thought it’s always important to at least try to find out who pays for this stuff and what their interest is. This one, right or wrong, seems to be a guy with an axe to grind about the fallibility of Tesla’s self-driving software who apparently has the money (or has acquired the backing of other “interested parties”) to buy his own advertising campaign.

Jeff

18 Likes

Jeff,

My nephew is an MIT computer science grad. He would say self-driving cars are ready to go now. According to him, the problem is that the law has not caught up. There will be accidents and the law does not know who to blame or hold accountable.

According to him, the problem is that the law has not caught up. There will be accidents and the law does not know who to blame or hold accountable.

Seems unlikely that’s the reason. There’s no issue with knowing who to blame or hold accountable if the manufacturer of the self-driving vehicle (like a Waymo, for example) also owns and operates the transportation service. Given the vast sums of money to be made with a self-driving product, it would be trivial to structure the ownership of such an operation to avoid any problems with people knowing who to sue.

The more likely reason is that self-driving hasn’t matured to the point where they can demonstrate to regulators that it’s safe for general use, rather than limited to pilot programs in certain cities (typically geofenced with a human prepared to intervene when necessary, either in the car or remotely).

Albaby

5 Likes

I suspect that, if fully autonomous vehicles appear imminent, there will be a push for “transponders” so cars can easily locate each other, as well as for “smart” traffic signs, parking spots, fire hydrants, bus stops, road lanes, etc., and even human transponders, which would allow pedestrians to be seen more easily by cars.

Jeff

I suspect that, if fully autonomous vehicles appear imminent, there will be a push for “transponders” so cars can easily locate each other, as well as for “smart” traffic signs, parking spots, fire hydrants, bus stops, road lanes, etc., and even human transponders, which would allow pedestrians to be seen more easily by cars.

I’m not sure about that. In the early days, that was one of the approaches these companies were taking to solve the problem: V2V/V2X technology, where vehicles would communicate with each other and with their environment. But I think the major companies are now relying more on improving the ability of the car to “see” its environment and process that information onboard, without relying on other vehicles or stationary objects signaling to them, with or without GPS/mapping resources.
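For anyone curious what that kind of signaling might look like, here’s a rough, purely hypothetical sketch of the sort of “basic safety message” a V2V/V2X car might broadcast several times per second so nearby cars can locate it without having to “see” it. The field names and values are invented for illustration, not any manufacturer’s actual protocol:

```python
# Hypothetical V2V/V2X "basic safety message" sketch (illustrative only).
from dataclasses import dataclass, asdict
import json
import time

@dataclass
class BasicSafetyMessage:
    vehicle_id: str       # anonymized, rotating identifier
    timestamp: float      # seconds since epoch
    latitude: float
    longitude: float
    speed_mps: float      # meters per second
    heading_deg: float    # 0 = north, increasing clockwise
    brake_applied: bool

# One example message; coordinates and values are made up.
msg = BasicSafetyMessage("temp-7f3a", time.time(), 34.4208, -119.6982, 13.4, 92.0, False)
print(json.dumps(asdict(msg)))  # what would be broadcast over a short-range radio link
```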

After a quick google, here’s an article giving brief blurbs on where the various companies are. Several of them are operating (or will soon be operating) pilot programs for the public to get rides in self-driving taxis, but in very limited environments.

https://seekingalpha.com/article/4531909-update-on-leading-g…

Albaby

1 Like

I wrote this article in June of 2019 and I still think it’s applicable today:

https://www.linkedin.com/pulse/we-dont-tolerate-types-mistak…

(you might need a LinkedIn account to view that, don’t know)

Highlight: “(Driverless cars) are safer but make different mistakes. Mistakes people can’t understand and therefore assume it’s rubbish.” This is a quote from someone I know who is deep in the software realm of self-driving cars.

Highlight: “Autonomous robots are already working in agriculture. Lettuce farming. Apple picking.”

Highlight: “We already have autonomously driven vehicles, in production and in customers’ hands, used on a daily basis. Hauling 250 tons at a time. All by itself. Caterpillar Self Driving 250 ton Dump Trucks.”

4 Likes

There will be accidents and the law does not know who to blame or hold accountable.

Who has the deeper pockets? The auto company or the software company?

DB2

You guys are missing the obvious. Of course AI makes miskates, we al doo BUT…

How difficult is it to train Tesla’s FSD AI to avoid “child-sized test dummies” lying on the pavement?

I love watching Mentour Pilot. Airliners continue to have accidents, and with each one the procedures are improved and made safer.

https://www.youtube.com/c/MentourPilotaviation

The Captain
who as a teenager developed a fear of flying but later managed to overcome it with some help. Thank you, Dr. Vega! Years later I walked by the door of a Dr. Vega. “It can’t be!” It was her daughter, an obstetrician. :wink:

How difficult is it to train Tesla’s FSD AI to avoid “child-sized test dummies” lying on the pavement?

That question immediately put me in mind of this cartoon:

https://xkcd.com/1425/

At present, some things that are simple for humans to do are extremely difficult for computers to do (and vice-versa, of course). Our brains are very, very good at using visual information to identify patterns. That’s not a simple task in computing.

But to quote the Simpsons, it can be two things. We’re probably reluctant to embrace AVs because they still make weird mistakes that humans can easily avoid. And AV designers have not yet finished demonstrating to regulators that they are overall safer than human drivers. Hence, the many pilot programs going on out there.

Albaby

3 Likes

Our brains are very, very good at using visual information to identify patterns. That’s not a simple task in computing.

That was true for Boolean computing, but image recognition, i.e. pattern matching, is what neural networks do for a living.

Neural networks reflect the behavior of the human brain, allowing computer programs to recognize patterns and solve common problems in the fields of AI, machine learning, and deep learning.

https://www.ibm.com/cloud/learn/neural-networks

Sorry, that cartoon is past its “use by date!” :frowning:
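For the curious, here’s a minimal sketch (a toy, not anyone’s production system) of the kind of convolutional neural network that does this sort of image pattern matching: a couple of learned filter layers feeding a classifier. The shapes and class count are made up for illustration:

```python
# Toy convolutional image classifier (illustrative sketch, assumes PyTorch is installed).
import torch
import torch.nn as nn

class TinyImageClassifier(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),   # learn local edge/texture filters
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),  # combine them into larger patterns
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)       # (N, 32, 16, 16) for a 64x64 input image
        x = x.flatten(1)
        return self.classifier(x)  # raw class scores

# Push one random 64x64 RGB "image" through the untrained network.
scores = TinyImageClassifier()(torch.randn(1, 3, 64, 64))
print(scores.shape)  # torch.Size([1, 10])
```

Real perception stacks are enormously bigger, but the principle is the same: the filters are learned from training images rather than hand-coded.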

The Captain

That was true for Boolean computing, but image recognition, i.e. pattern matching, is what neural networks do for a living.

Of course. But they’re still not nearly as competent at it as humans. It’s a much harder problem for computers to solve than other types of problems. That’s why you need vastly more computing power to run an onboard autonomy program than to do other functions. And that’s why these vehicles still sometimes fail to recognize certain situations that humans would easily identify.

Albaby

3 Likes

And that’s why these vehicles still sometimes fail to recognize certain situations that humans would easily identify.

Exactly. Visual identification is pattern matching, and neural networks excel at that, but for visual tasks they still have a long way to go to match a human. Consider the case where an NN was trained on animals, including snow foxes, and it misidentified any scene with snow as a snow fox (because the only images of snow foxes it was trained on included snow). Consider how some of the NNs used in self-driving cars can be fooled by subtly altered road signs.

Part of the problem is we have no way to know how the neural networks are arriving at their answers. Part of the problem is going to be in the training data. And a lot of the problem boils down to nothing more than “experience”.

Humans gain experience through living their normal life. And that mass of gray matter that is our own neural network is trained all the time, through our entire life. Most of us had 16 years of training under our belts before we started to learn to drive.

For computers, training data is the equivalent of experience for humans. Computer neural networks lack this depth of training and, more importantly, this BREADTH of training. They are trained on narrow tasks (out of necessity) with narrow sets of experiences.

There is a reason we prefer to educate our children on a wide variety of subject matters, to get well-rounded educations. Even my Computer Science degree required 3 years of humanities and social science courses before I could get my diploma. Maybe we need to learn a lesson from this.
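To make the snow-fox failure mode above concrete, here’s a toy sketch (entirely synthetic data, invented numbers, nobody’s real training pipeline) of how a model can latch onto a spurious feature that always co-occurs with the label in its narrow training set:

```python
# Toy demonstration of a spurious correlation in training data (synthetic example).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
fox = rng.integers(0, 2, n)   # 1 = a fox is actually in the image (the real signal)
snow = fox.copy()             # spurious feature: in training, snow appears iff a fox does
X_train = np.column_stack([fox + 0.1 * rng.normal(size=n),
                           snow + 0.1 * rng.normal(size=n)])
model = LogisticRegression().fit(X_train, fox)

# Test time: a snowy scene with NO fox in it.
snow_no_fox = np.array([[0.0, 1.0]])
print(model.predict_proba(snow_no_fox))
```

Because the snow feature carried half of the learned evidence during training, the model assigns roughly 50% “fox” probability to a fox-free snow scene, instead of the near-zero answer a person would give.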

1 Like

Part of the problem is we have no way to know how the neural networks are arriving at their answers. Part of the problem is going to be in the training data. And a lot of the problem boils down to nothing more than “experience”.

The next big step in AI will be an AI that can statistically examine its own output for questionable patterns.

The big step after that will be an AI that can examine its own “thought” processes.

Highlight: “Autonomous robots are already working in agriculture. Lettuce farming. Apple picking.”

=======================================

What are the kill statistics on lettuce heads and apples?

Jaak :grinning:

We’re probably reluctant to embrace AVs because they still make weird mistakes that humans can easily avoid.

Another problem with AVs getting past all of the edge-case driving obstacles is that competition may cause the industry to under-report issues, as with the Boeing 737 MAX.

Prof. Philip Koopman is an internationally recognized expert on Autonomous Vehicle (AV) safety whose work in that area spans 25 years. He says that “At present, every company basically gets one free crash” due to the under-regulation of the industry.
Unlike human driving errors, AV errors create a great fear. When a human kills someone in a car accident, we feel that we can hold him responsible; when a robot kills someone, we feel that society as a whole is threatened.

AV startup Pony.ai’s driverless test permit was recently suspended in California after a serious collision involving one of its vehicles. Up to that point, it had been on track to operate AVs in San Francisco.

1 Like

What are the kill statistics on lettuce heads and apples?

Zero, of course, and that applies to both humans and robots. But I wonder how many human injuries are prevented by having the autonomous robots pick the lettuce and apples?

1 Like

Another problem with AVs getting past all of the edge-case driving obstacles is that competition may cause the industry to under-report issues, as with the Boeing 737 MAX.

Maybe they should bring back an idea from the last century, when motor cars first appeared…

… a man with a red flag to walk in front of the metal monsters so as not to scare the horses…

The Captain

Ralph Nader has decided to stick his oar into the water.
https://finance.yahoo.com/news/one-most-dangerous-irresponsi…
The man who forever changed vehicle safety standards in the U.S. has a scathing new message for Elon Musk and Tesla’s self-driving car technology: He thinks it’s dangerous, and regulators should get involved.

In a statement released Wednesday on his personal website, Nader called Tesla’s deployment of FSD systems in its cars “one of the most dangerous and irresponsible actions by a car company in decades” and said Tesla should “never have put this technology in its vehicles.”

1 Like

It seems that the Dawn Project made a serious mistake or faked part of what they disclosed.

https://electrek.co/2022/08/10/tesla-self-driving-smear-camp…

Here is my summary:
(I should disclose that I’ve been an FSD beta tester since last October, so I understand the details discussed in the link.)

The video shown in the TV ads that I viewed clearly shows a Tesla that was NOT in AutoPilot/FSD mode, even though the overlaid text says “actual full self-driving mode.”
When you are in AutoPilot, there are two indicators on the visualization display. First, near the upper left corner, above the speedometer (the largest digits), you see a steering wheel icon that is blue; it is greyed out when AutoPilot is not enabled. This is difficult to see in the video, but that area shows no blue, only grey.

Second, there is a projected path line the car is going to follow. With AP “off” it is grey, like in their video, but when AP is “on” it is a blue line. Hundreds of YouTube videos show this. There are actually two possibilities here: if you just turn on AP, you’ll see two blue lines, roughly one per wheel (really the left and right lane boundaries ahead); if you have entered a GPS destination, you’ll see just the one centered line in blue.

Note that in their video there is an error or warning message (black oval with text in lower left).
This message can’t be read due to the resolution of the video, but it probably says something like “AutoPilot not currently available” – one way that you get this is when you are not driving on a street that is on the GPS map, just like the test track where they shot the video.

Mike

4 Likes

How difficult is it to train Tesla’s FSD AI to avoid “child-sized test dummies” lying on the pavement?

Very, very, VERY difficult. Avoiding it isn’t the issue; distinguishing a stain on the roadway from a child-sized test dummy is the issue. It’s just like when the lighting is just right, shining on something ahead, and the Tesla software thinks there’s a stopped tractor-trailer ahead, so it suddenly and vigorously applies the brakes (“phantom braking”).
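To illustrate why that’s hard, here’s a toy sketch (all numbers invented) of the threshold tradeoff a planner faces when the detector’s confidence scores for real obstacles and for harmless road stains overlap:

```python
# Toy illustration of the braking-threshold tradeoff behind "phantom braking".
# The simulated detector confidences below are invented for illustration.
import numpy as np

rng = np.random.default_rng(1)
real_obstacle_scores = rng.normal(0.75, 0.12, 10_000)  # real objects score higher on average
road_stain_scores    = rng.normal(0.55, 0.12, 10_000)  # stains overlap the real-object scores

for threshold in (0.5, 0.65, 0.8):
    missed  = np.mean(real_obstacle_scores < threshold)  # fails to brake for a real object
    phantom = np.mean(road_stain_scores >= threshold)    # brakes hard for a stain
    print(f"threshold {threshold:.2f}: missed obstacles {missed:.1%}, phantom braking {phantom:.1%}")
```

Lower the threshold and the car brakes for more stains (phantom braking); raise it and it starts missing real obstacles, which is exactly the corner the “child-sized test dummy” ad is probing.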

1 Like