Gary Marcus talks about neural networks and LLMs, how they work and the problems they face. I agree with most of what he says. The video also backs my layman’s educated guess, dating from my early coding days, that the human brain works in two parts.
Subconscious pattern matching = neural-network-based LLM
Conscious, deliberate reasoning = symbolic AI
Neuro-symbolic AI represents the convergence of two principal paradigms in artificial intelligence: neural networks, which excel at data-driven learning, and symbolic reasoning, which offers explainability and logical inference. This hybrid methodology combines the adaptability of neural networks with symbolic AI’s interpretability and formal reasoning abilities, providing a practical framework for advanced cognitive systems.
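To make the hybrid idea concrete, here is a minimal illustrative sketch (entirely my own toy example, not any real system): a stand-in "neural" component emits soft confidence scores, and a symbolic layer applies explicit, human-readable rules on top of them, so the conclusions can be inspected and explained.

```python
# Toy neuro-symbolic sketch (illustrative only; all names are hypothetical):
# a "neural" component produces soft perceptual scores, and a symbolic rule
# layer draws explainable logical conclusions from those scores.

def neural_perception(image_id):
    """Stand-in for a trained network: returns attribute -> confidence."""
    # Hard-coded scores simulate a network's soft, fallible output.
    fake_scores = {
        "cat": {"has_fur": 0.97, "lays_eggs": 0.10},
        "platypus": {"has_fur": 0.90, "lays_eggs": 0.85},
    }
    return fake_scores[image_id]

# Symbolic layer: explicit rules with readable logical structure.
RULES = [
    # (rule name, premise over attributes, conclusion)
    ("mammal_rule", lambda a: a["has_fur"] > 0.5, "likely_mammal"),
    ("monotreme_rule",
     lambda a: a["has_fur"] > 0.5 and a["lays_eggs"] > 0.5,
     "likely_monotreme"),
]

def reason(image_id):
    attrs = neural_perception(image_id)  # data-driven, opaque part
    # Interpretable part: every conclusion traces back to a named rule.
    return [c for _, premise, c in RULES if premise(attrs)]

print(reason("cat"))       # ['likely_mammal']
print(reason("platypus"))  # ['likely_mammal', 'likely_monotreme']
```

The point of the split is that the rule layer can be audited and corrected without retraining anything, which is the interpretability half of the bargain.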
These two are not sufficient for a practical, useful, and safe AI. What is missing is fact checking, a.k.a. the Scientific Method. I’m curious how that will be implemented.
Gary Marcus also talks about the need for a coherent world model, which he inadvertently illustrated with a Tesla FSD failure: at an airport, the car drove into a jetliner, a situation absent from FSD training. A coherent world model would have prevented driving into anything solid. Interesting question: how do you train AI to consult an actual world model?
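One way to picture what a world-model check could buy you is a hard symbolic veto over a learned planner. This is purely my own toy sketch (not how FSD or any real stack works): every object in the model carries a solidity flag, and any planned path through something solid is rejected, even for an object never seen in training.

```python
# Toy sketch of a symbolic world-model guard (my own invention, not a real
# driving stack): veto any planned motion into a solid object, regardless
# of whether that object ever appeared in training data.

def plan_is_safe(planned_path, world_model):
    """Reject any path crossing a cell the world model marks as solid."""
    for cell in planned_path:
        obj = world_model.get(cell)
        if obj is not None and obj["solid"]:
            return False, f"path blocked by {obj['label']} at {cell}"
    return True, "clear"

# World model containing an object the planner was never trained on.
world = {(3, 4): {"label": "jetliner", "solid": True}}

ok, why = plan_is_safe([(1, 4), (2, 4), (3, 4)], world)
print(ok, "-", why)  # False - path blocked by jetliner at (3, 4)
```

The neural side can be as clever or as clueless as it likes; the symbolic guard only needs the single general fact that solid things must not be driven into.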
Conclusion: AI is at the Peak of Inflated Expectations
No, the Scientific Method and fact checking are not the same thing. The Scientific Method is, in fact, a method…..for testing hypotheses and finding explanations for observed phenomena. Easy to confuse the two….and be bamboozled by the AI overview in the process.
It’s a fact (determined by the Scientific Method) that the Sun does not revolve around the Earth as once believed.
It’s a fact that water boils at 212° F….. at sea level.
It’s a fact that some people demonstrate confusion over AI usage…. and (presumably) believe that they’re fact checking and correcting when they link dump a Google AI overview in response to a statement that they disagree with but know bugger all about.
I have a hard time envisioning how AI overview, a.k.a. the Google Bamboozle, will become of practical use as an authority, given the willingness of folk not to fact check whatever spews up (depending upon the prompts used).
Note: AI was not used to write this post so any misinformation, pore speling, and misuse of the Oxford comma is all my own (if you know, you know )
I’m really leery these days about accepting the veracity of anything presented to me by AI if it’s a topic where I’m on shaky ground…..having seen the dog’s breakfast that can be reproduced on subjects I’m not so shaky on.
I do find it useful in analysis of something a bit complex…..giving me a heads-up on ramifications of something legal that hadn’t occurred to me, say.
I’ve also realized how useful it can be in producing plausible narratives…..but narratives that are either totally fictitious, only 90% fictitious/plagiarized, or based upon actual events but gussied up to be totally misleading. Yes, I’m talking about the family albatross’s Grand Opus (complete with Oxford commas) that totally bamboozled the domestic violence evaluator involved in my daughter’s divorce.