Sean Carroll: AI Thinks Different
I’m partway through a Sean Carroll YouTube video, and I have to point out a fallacy in his thinking. He claims that Large Language Models don’t have a model of the world: having been trained on just about everything ever written, they create new sentences that sound plausible at first sight but are not necessarily accurate or true. He is right, but that’s how human innovation actually works. AI does NOT think different!
Colonel John Boyd gave an excellent explanation of how creativity works. From SoftwareTimes, my website:
Boyd described his method of innovation in the briefing titled “Destruction and Creation”:
- Imagine that you are on a ski slope with other skiers; retain this image.
- Imagine that you are in Florida riding in an outboard motorboat, maybe even towing water-skiers; retain this image.
- Imagine that you are riding a bicycle on a nice spring day; retain this image.
- Imagine that you are a parent taking your son to a department store, and that you notice he is fascinated by the tractors or tanks with rubber caterpillar treads; retain this image.
Now imagine that you:
- Pull the skis off the ski slope; discard and forget the rest of the image.
- Pull the outboard motor out of the motorboat; discard and forget the rest of the image.
- Pull the handlebars off the bicycle; discard and forget the rest of the image.
- Pull the rubber treads off the toy tractors or tanks; discard and forget the rest of the image.
This leaves us with: skis, outboard motor, handlebars, rubber treads.
Pulling all this together, what do we have? A snowmobile!
That’s what LLMs do, and they do it rather well.
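Boyd's procedure can be caricatured in a few lines of code. This is only my own toy sketch of the destruction-and-creation idea, not anything from Boyd's briefing or Carroll's video; the images and parts are the ones listed above, padded with a few invented filler parts.

```python
# A toy sketch of Boyd's "destruction and creation": decompose familiar
# wholes into parts, discard the rest, recombine the survivors.

# Familiar "images", each a whole made of parts (filler parts invented).
images = {
    "ski slope": ["skis", "snow", "other skiers"],
    "motorboat": ["outboard motor", "hull", "water-skiers"],
    "bicycle": ["handlebars", "wheels", "pedals"],
    "toy tank": ["rubber treads", "turret", "gun barrel"],
}

# Destruction: pull one part from each image; forget the rest.
keep = {"skis", "outboard motor", "handlebars", "rubber treads"}
parts = [p for whole in images.values() for p in whole if p in keep]

# Creation: pull the surviving parts together into a new whole.
new_concept = " + ".join(parts)
print(new_concept)  # skis + outboard motor + handlebars + rubber treads
```

The creative work, of course, is in choosing which parts to keep and noticing that the combination amounts to a snowmobile; the code only mechanizes the bookkeeping.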
What about the claim about truth and accuracy? The LLM’s output is no different from what scientists produce: they come up with hypotheses which must be confirmed by experiment. Most of what scientists dream up is neither true nor accurate. It is put to the test and a Darwinian selection happens: what works survives, and what does not work is discarded.
What LLMs do is very human-like, in my view. Algorithms must output correct results; human intelligence outputs plausible hypotheses that must withstand the scientific method. Sean Carroll’s AGI would be an omnipotent god.
The claim comes at around minutes 20 to 30 of the video.
o o o o o o o o o o o o o o o o o o o o o
PS: After a few more minutes I lost interest; it was mostly about how to fool LLMs.