How AI Thinks, If It Does

I figured I’d read the article before commenting. And I figured out at least one of the issues in the AI example given!

The result did not look anything like a street map of Manhattan. Close inspection revealed the AI had inferred all kinds of impossible maneuvers—routes that leapt over Central Park, or traveled diagonally for many blocks.

I know why! It’s because the AI overgeneralized from what it learned. It saw that Broadway runs on a diagonal for quite some length (from a little before Union Square Park all the way past Columbus Circle and into the Upper West Side), so it figured other streets could run diagonally too. And it saw some roads that cross through Central Park, so it figured there could be more. I’m pretty sure that’s how it happened!
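Here’s a toy sketch of that hypothesis (nothing like the actual model in the article, and every name in it is made up for illustration): a model that learns move frequencies from routes, without tracking where each move happened, will assign real probability to a diagonal step anywhere on the grid, not just along Broadway.

```python
from collections import Counter

# Hypothetical training routes as sequences of compass moves; the
# diagonal "NE" step only ever appears on the Broadway-like route.
training_routes = [
    ["N", "N", "E", "N"],   # ordinary grid driving
    ["E", "E", "N", "E"],
    ["NE", "NE", "NE"],     # a stretch of Broadway
]

move_counts = Counter(move for route in training_routes for move in route)
total = sum(move_counts.values())

# Location-blind move probabilities: "NE" gets real probability mass
# everywhere on the map, which is exactly how you get streets that
# "travel diagonally for many blocks" where none exist.
probs = {move: count / total for move, count in move_counts.items()}
print(probs)  # roughly {'N': 0.36, 'E': 0.36, 'NE': 0.27}
```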

So maybe, in this case, the problem is that the AI isn’t human and has never actually traversed the streets of a city. It doesn’t know that streets are generally fixed things whose locations and directions were determined by history. There’s no rule of thumb it can learn, such as that some cities are mostly grid-like (NYC) while others are more topsy-turvy (Atlanta), usually depending on how the city grew over the years and on whatever else shaped it (rivers, historical buildings, parks, monuments, etc.).

The thing I am waiting for from AI is true creation. Much of human creation comes from intuitive trial and error, or even random mistakes (vulcanization of rubber, Pyrex, etc.). I think the trial-and-error part will be handled very well by AI. For example, AI can run through thousands of slightly different compounds to figure out which are worth further study for medical treatment and which are useless or harmful. And AI can do that part much quicker than any human or group of humans can. But the “intuitive” part will be much more difficult. Sometimes a human who has been studying a class of compounds for decades has a “feel” for certain things about that class. I don’t know if an AI will ever have the “feel” that leads to intuitive winnowing of the useful from the less useful.
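As a rough sketch of the trial-and-error half (all names here are hypothetical, not a real cheminformatics API): the machine’s advantage is the loop, not the judgment. The scoring function is exactly where the human “feel” would have to live.

```python
def predicted_activity(compound: str) -> float:
    """Stand-in for some learned scoring model; a dummy formula here."""
    return (sum(ord(c) for c in compound) % 100) / 100.0

def screen(candidates: list[str], threshold: float = 0.8) -> list[str]:
    """Keep only the candidates worth a human's further study."""
    return [c for c in candidates if predicted_activity(c) >= threshold]

# Thousands of slightly different (made-up) compounds, scored in a blink.
candidates = [f"compound-{i:04d}" for i in range(10_000)]
shortlist = screen(candidates)
print(f"{len(shortlist)} of {len(candidates)} survive the first pass")
```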

And a lot of current AI is indeed lists of information. For example, a few hours ago I posted about how only 4 or 5 money managers have been able to “beat the market” over the long term, and I wanted to list some of their names. I remembered Buffett, of course, and I remembered Miller, but I wanted at least one more. So I asked an AI, and sure enough it reminded me about Simons. That information is simply coming from lists of information (or links, or however it stores what it learns) somewhere inside the AI.
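A toy illustration of what “lists of information” means in practice (the names are the ones from the post above; the structure is made up and is not how an LLM actually stores facts): recall here is just a keyed lookup, with no reasoning involved.

```python
# Long-term market-beating managers, as a plain lookup table.
long_term_outperformers = {
    "Warren Buffett": "Berkshire Hathaway",
    "Bill Miller": "Legg Mason Value Trust",
    "Jim Simons": "Renaissance Technologies",
}

def remind_me(already_known: set[str]) -> list[str]:
    """Return the names the asker forgot."""
    return [n for n in long_term_outperformers if n not in already_known]

print(remind_me({"Warren Buffett", "Bill Miller"}))  # ['Jim Simons']
```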

My gut tells me that until some randomness is added to AI, it won’t gain any of these important things. So to me that means AI doesn’t come close enough to human until it is built from quantum computing elements, which do indeed have all sorts of inherent randomness.
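Here is a minimal sketch of what “adding randomness” to a search can look like in ordinary (pseudorandom, non-quantum) software, where occasional wild jumps play the role of the happy accident. Whether pseudorandomness is enough is exactly the open question.

```python
import math
import random

def noisy_hill_climb(score, start: float, steps: int = 1000) -> float:
    """Greedy search, plus occasional large random jumps."""
    best = start
    for _ in range(steps):
        # Mostly small local moves; now and then a wild jump, the crude
        # software analogue of a lucky lab accident.
        if random.random() < 0.9:
            step = random.gauss(0, 0.1)
        else:
            step = random.uniform(-5, 5)
        candidate = best + step
        if score(candidate) > score(best):
            best = candidate
    return best

# A bumpy objective with many local peaks to get stuck on.
def bumpy(x: float) -> float:
    return math.sin(x) - 0.01 * x * x

print(noisy_hill_climb(bumpy, start=0.0))
```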


Dear Mark,

Sorry, but I am more or less scanning that response for the humor. Am I on target?

LOL

Did you mean “place”?

I welcome freedom of choice so what’s the problem?

It’s a comment on a par with “The Sun rises in the morning.” Absolutely no new information is provided, yet taken in context it misrepresents the current state of AI. As I said, it is a Show Stopper; blame it for my not reading further. Like I said above, maybe the author failed English Composition.

Both statements are accurate.

Another accurate statement.

Having purposefully not read the article, I did not and cannot comment on it.

No. I mean the unidentified person.

Mazel tov!

I’ll never know!

On which side of the argument?

The Captain

Dear Captain,

You won’t like your own guess. So don’t guess.