AI = Artificial Ignorance

Bloomberg created 1,000 similarly qualified resumes, generated 800 names from a pool of “racially common” names, and asked AI to rank the candidates on their suitability and likelihood of success in various fields.

The results, as you may imagine, were not happy:

I would prefer to link directly to the chart, but that’s not how Bloomberg rolls, so this photo from the most recent issue is offered.

Hispanic women will excel at HR; White men will not.

White women will do OK as software engineers; Black women will not.

Asian women are apparently quite good at retail management; White men not so much.

And as financial analysts, it’s Asian women all the way. Worst place: Black women.

Remember, the qualifications on the resumes are similar, and only the names of the applicants have been randomly assigned. Be careful what you name your kid, it could affect them for life.
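The audit design is simple to sketch: hold the resume constant, vary only the name, and compare the scores. Here is a toy version in Python (names, scores, and the injected bias are all invented; the real study queried a commercial LLM):

```python
# Toy audit in the spirit of the Bloomberg test.  The resume never changes,
# only the name does, so any score gap between groups is name-driven bias.
GROUP_NAMES = {
    "A": ["Alice Example", "Aaron Example"],
    "B": ["Bella Example", "Boris Example"],
}

def model_score(resume: str, name: str) -> float:
    """Stand-in for the black-box ranker; a bias is injected deliberately
    so the audit has something to detect."""
    base = 7.0  # identical resume -> identical base score
    return base - (1.5 if name in GROUP_NAMES["B"] else 0.0)

resume = "BS Finance, CFA Level II, 5 years equity research"
avg = {g: sum(model_score(resume, n) for n in names) / len(names)
       for g, names in GROUP_NAMES.items()}
print(avg)  # any gap between groups is attributable only to the name
```

In the real experiment the scorer is opaque, which is exactly why this hold-everything-else-constant design is the only way to surface the bias from the outside.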


In Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy, Cathy O’Neil looks at the uses and misuses of algorithms. Unless the algorithms are open to examination, the prejudices of those creating them will always work their way in. It looks like the same holds for AI.


Aren’t these algorithms simply reflecting how we ourselves assign attributes to names we come across in almost every walk of work/life, regardless of whether we’re designing/coding algorithms?

Those creating them are, after all, none other than ‘us’ 🙂


Yes. That is one of O’Neil’s points. One major problem is that many of the most dangerous algorithms or AIs are proprietary and not subject to inspection for biases or other problems. It has been a few years since I read the book, and I am probably not doing it justice. I highly recommend it to everyone. It is very readable.


When it comes to neural-network-based AI it’s much worse, because there is no way to know the logic behind the output (no heuristic to study). In a recent humanoid robot presentation (I think it was), someone asked the robot how it came up with its output.

With neural-network-based AI there are no algorithms to study; the output is based mostly on the deep-learning input data.

The Captain


Yes. But the Captain is right about the algorithms. The algorithms have to do with the mathematics of training the neural network: how weights get assigned based on training data. That algorithm is completely agnostic of the data and of the task at hand.
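To make that concrete, here is a minimal sketch of the data-agnostic part: one gradient-descent update for a toy 1-D linear model (nothing like a production training loop, but the principle is the same):

```python
# The training algorithm itself is data-agnostic: the same update rule
# (gradient descent here) runs identically whatever the data encodes.

def sgd_step(w, x, y, lr=0.1):
    """One gradient-descent step for a 1-D linear model y ~ w*x
    with squared-error loss.  Nothing here inspects what x or y *mean*."""
    pred = w * x
    grad = 2 * (pred - y) * x   # dL/dw for L = (pred - y)**2
    return w - lr * grad

w = 0.0
# Whatever associations sit in the (x, y) pairs, benign or biased,
# the model will absorb them; the rule above never asks.
for x, y in [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]:
    for _ in range(50):
        w = sgd_step(w, x, y)
print(round(w, 3))  # converges toward 2.0, the pattern in the data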

Humans are biased, and we are ultimately the source of the training data. Bias will be inherent unless we are really, really careful. And that is a shame.

Throughout history man has made god in his own image. And with AI we risk doing it once again.


Algorithms are an integral part of neural networks. It’s knowing how those algorithms process their inputs and come up with their output that sometimes eludes the designers/supervisors of the neural network, particularly in the case of unsupervised deep-learning networks.

The Universe is like a layer cake.

  • Particles make up atoms
  • Atoms make up molecules
  • Molecules make up substances and cells
  • Cells make up life forms
  • etc.

The WWW is a seven (or more) layer cake

  • WWW
  • Internet
  • Optic fiber
  • etc.
  • etc.
  • etc.

Neural network AI is also a layer cake and, as you point out, algorithms are an integral part of neural networks. The issue is where in the layer cake these algorithms function.

With heuristics the algorithms are at the top of the cake, while with neural networks they sit below the networks: they are used to create and modulate the networks. Training data sits above the neural networks. With neural network architecture the algorithms are too far removed from the AI output to be useful for finding any bias the AI might have. That bias, if any, comes from the training data.
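A minimal sketch of that last point: the identical training routine, run on two invented datasets, produces opposite behaviour. Everything the model “knows” came from the data, not from the algorithm (a toy perceptron; the features and labels are made up):

```python
# Same training code, two datasets: the learning rule never changes,
# but the model's behaviour follows whatever the data contains.

def train(data, epochs=200, lr=0.1):
    """Classic perceptron training: identical no matter what data arrives."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in data:
            pred = 1.0 if w * x + b > 0 else 0.0
            err = y - pred          # perceptron update rule
            w += lr * err * x
            b += lr * err
    return w, b

def predict(model, x):
    w, b = model
    return 1.0 if w * x + b > 0 else 0.0

fair   = [(1.0, 1), (2.0, 1), (-1.0, 0), (-2.0, 0)]
skewed = [(1.0, 0), (2.0, 0), (-1.0, 1), (-2.0, 1)]  # labels flipped

m1, m2 = train(fair), train(skewed)
print(predict(m1, 1.5), predict(m2, 1.5))  # opposite answers, same algorithm
```

Inspecting `train()` tells you nothing about which answer either model will give; only the data does.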

The Captain


Tesla’s FSD 12.3 is probably the largest real-world example of neural-network-based AI. Best of all, up to version 11.x FSD was heuristics-based, which lets us compare the two technologies in a real-world setting. The linked video is a practical illustration of my argument in the previous post.

Tesla FSD 12.3 is a Major BREAKTHROUGH! w/ Chuck Cook

The Captain


AI might all be folly. The jury is still out.

Imagine Musk dying of old age and still no workable AI FSD EVs or Robots. He’d end up like Howard Hughes.

So when the AI of the FSD car is confronted with the trolley problem and makes its choice of whom to kill, whom will the family of the unlucky soul(s) sue? The person in the car who switched on FSD, the car manufacturer, the creator of the AI, the AI itself? Who is responsible for the choices the AI makes?

The obvious answer is “all of them.” When you sue, you sue everyone.

But the ones who might be on the hook financially would most likely be the manufacturers - of the car and the AI - if there was a defect in the AI. Given your reference to the trolley problem, though, I’m not sure anyone would be liable. You can only recover if the product is defective and/or has done something wrong. In a trolley problem scenario, the whole point is that there’s no obviously correct choice among difficult alternatives - so it’s not very likely that the family would be able to establish that anyone had breached their duty of care.


Nah, the insurers take on the costs if you sign up for insurance.

There is a gas pedal on an ICE, and no one has sued GM in the way you are alluding to.

Depends on the law of the land. In Venezuela it would be the owner of the car. In America maybe whoever has the deepest pockets. LOL

The Captain
