Why I believe AI will disappoint

In principle there seems to be no reason why advances in computing power won't eventually catch up with any given encryption scheme, making anything built on code and software vulnerable to hacking. This will become especially relevant with advances in quantum computing.

We already have a thread describing the vulnerability of blockchain and bitcoin: Bitcoin - code cracked? - #3 by Divitias

Current AI is built on standard computer platforms. Quantum computing is projected to compromise standard encryption by 2030. Easy to connect the dots…
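
Connecting those dots concretely: RSA, the workhorse of standard public-key encryption, is only secure because factoring large numbers is slow on classical computers, and Shor's algorithm on a large enough quantum computer removes exactly that barrier. A minimal Python sketch, using absurdly small made-up primes, shows how completely the private key falls once factoring is cheap:

```python
# Toy RSA with tiny textbook primes. Real keys use primes hundreds of
# digits long; the brute-force loop below is hopeless against those on
# classical hardware, but quantum factoring would change that.
from math import gcd

p, q = 61, 53                 # secret primes (absurdly small, demo only)
n = p * q                     # public modulus: 3233
phi = (p - 1) * (q - 1)
e = 17                        # public exponent, coprime with phi
assert gcd(e, phi) == 1
d = pow(e, -1, phi)           # private exponent (Python 3.8+ modular inverse)

msg = 42
cipher = pow(msg, e, n)       # encrypt with the public key (e, n)

# An attacker who can factor n recovers the private key outright:
for cand in range(2, n):
    if n % cand == 0:
        p2, q2 = cand, n // cand
        break
d_cracked = pow(e, -1, (p2 - 1) * (q2 - 1))
print(pow(cipher, d_cracked, n))  # prints 42, the original message
```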

Threat actors can target AI models for theft, reverse engineering or unauthorized manipulation. Attackers might compromise a model’s integrity by tampering with its architecture, weights or parameters—the core components determining an AI model’s behavior and performance. What Is AI Security? | IBM.
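
A standard mitigation for the weight-tampering risk IBM describes is simply to verify a cryptographic digest of the model file before loading it. A minimal sketch, assuming a hypothetical weights file model.bin and a digest published by the model's vendor:

```python
import hashlib

# Hypothetical known-good digest; in practice it comes from the vendor
# over a trusted channel, not from the same place as the weights file.
EXPECTED_SHA256 = "replace-with-the-vendor-published-digest"

def verify_weights(path: str) -> bool:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Hash in 1 MB chunks so huge weight files need not fit in memory.
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest() == EXPECTED_SHA256

if not verify_weights("model.bin"):
    raise RuntimeError("weights failed integrity check; refusing to load")
```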

There you have it. AI might be able to do remarkable things, but is in principle always hackable. The more important AI becomes, the bigger target it becomes.

Even AI recognizes this, as noted by Google AI: “Quantum computing is a threat to AI not because it makes AI obsolete, but because it can be used to break the current encryption that protects AI systems, make attacks like deepfakes and social engineering more sophisticated, and enable attackers to find and exploit vulnerabilities faster.”

What may allow human beings to prevail over AI is that the emotional, variable, idiosyncratic human brain, which comes in all varieties of flavors and colors, is more cybersecure than the more efficient but more monolithic AI.


AI has a long history of disappointing. That happens when projections become too optimistic. The vision might be impossible to achieve but we keep making progress. Faster computers and falling costs make the impossible more possible.

As always people worry about the misuse of new technologies. It’s human nature. It’s good that people are thinking about it. It gives us some hope.

It’s sad that new technology often gets adapted for military advantage. We hope it is mostly used for defense. But you can bet uses for attack are also being developed.


In my view, two AI strategies have been tried:

  • Algorithmic (Boolean) AI, which failed miserably
  • Pattern-matching (neural network) AI, which is working rather well

There is no more reason for neural network AI to fail than there is for human intelligence to fail. Intelligence does not mean perfection. I think of intelligence as a complex system capable of creating “emergent properties.” Put another way, if there is nothing new under the sun, where has all the creativity come from? Just a bunch of emergent properties, new ways to combine atoms.

The Captain


Rising computer power makes for much better answers. Making choices between possible answers is the essence of AI. The quality/accuracy of those answers is critical to productivity improvement.

Investors are clearly enthusiastic about it. Too soon to know if they are right.

Yes.

Up to a point. Coming from an intelligent being does not guarantee that an answer is correct or useful. To paraphrase Richard Feynman: “You start with a guess, an educated guess, but a guess nonetheless; call it a hypothesis or a theory, it is still a guess.” That’s where the scientific method comes in, to validate or falsify the guess.

What neural networks output are guesses with a high probability of being accurate. The more computing power and training data, the higher the probability of the guess being valid.
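
You can see this right in the arithmetic of a classifier's last layer: raw scores ("logits") get converted to probabilities, and the model's "answer" is simply the top-ranked guess. A minimal sketch with made-up numbers:

```python
import numpy as np

def softmax(logits):
    z = logits - np.max(logits)   # subtract the max for numerical stability
    exp_z = np.exp(z)
    return exp_z / exp_z.sum()

# Made-up raw scores for four candidate answers.
logits = np.array([2.1, 0.4, -1.0, 1.7])
probs = softmax(logits)
for i, p in enumerate(probs):
    print(f"answer {i}: {p:.1%}")

# The model's "answer" is argmax(probs): its best guess, not a certainty.
# More compute and training data sharpen these probabilities.
```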

The Captain

Neural network AI won’t fail, it will just be limited. Would you transport millions of dollars of cargo on an AI-controlled container ship if the technology to hack it were available? No one is going to bother hacking an AI-controlled rice cooker, or even a robo-taxi, but a giant container ship makes a far more enticing target. That’s the limitation.

The more important the process being controlled by AI, the more likely it is to be cyber-attacked.

Neural networks and machine learning programs can be hacked. The more complicated such multimodal AI gets, the easier it becomes to manipulate without being detected.
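
The textbook illustration of this, distinct from the reasoning-channel attack NVIDIA describes below, is the fast gradient sign method: nudge each input feature in the direction that most increases the model's loss, and the prediction flips while the input barely changes. A toy sketch on a made-up linear classifier:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical trained weights for a 4-feature binary classifier.
w = np.array([1.5, -2.0, 0.5, 1.0])
b = -0.3

x = np.array([0.6, -0.4, 0.2, 0.1])   # a legitimate input
y = 1.0                                # its true label

p = sigmoid(w @ x + b)
print(f"clean input:     P(class 1) = {p:.3f}")   # about 0.83

# For logistic regression, the gradient of the cross-entropy loss
# with respect to the input x is (p - y) * w.
grad_x = (p - y) * w

# Fast gradient sign method: step each feature by epsilon in the
# direction that increases the loss.
eps = 0.4
x_adv = x + eps * np.sign(grad_x)

p_adv = sigmoid(w @ x_adv + b)
print(f"perturbed input: P(class 1) = {p_adv:.3f}")  # about 0.40, flipped
```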

…in the earlier days of Microsoft, when they were working to monopolize the operating system market, they were focused on releasing features and products instead of security, which was then left lacking…This eventually led to Microsoft landing a bad reputation and left large customers like the US government considering switching to a more secure option…Many companies right now are in the same spot Microsoft was over 20 years ago, prioritizing being first to the market with their AI products and features instead of focusing on their security. Ethically Hacking LLMs | 1 – Neural Networks - TCM Security

No program, not even AI, is invulnerable.

Unlike semantic injections that exploit how models understand content, these attacks target how models solve problems. By embedding payloads into cognitive challenges, adversaries can manipulate a model’s early fusion processes, where text, image, and audio inputs merge. The model’s own reasoning becomes the path to compromise. How Hackers Exploit AI’s Problem-Solving Instincts | NVIDIA Technical Blog

The empirical observation that all software systems can be hacked would seem to limit the kinds of functions one gives to AI, or at the very least to require expensive redundancies (a backup human system?). Even human-piloted passenger planes have co-pilots.
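
A sketch of what such a redundancy might look like in its simplest form: several independently built systems vote, and any disagreement escalates to a human. The decisions below are hypothetical stand-ins:

```python
from collections import Counter

def majority_decision(answers, quorum=2):
    """Return the majority answer if at least `quorum` agree, else escalate."""
    winner, count = Counter(answers).most_common(1)[0]
    return winner if count >= quorum else "ESCALATE_TO_HUMAN"

# Three hypothetical redundant systems vote on a navigation decision:
print(majority_decision(["hold course", "hold course", "turn to port"]))
# prints: hold course
print(majority_decision(["hold course", "turn to port", "all stop"]))
# prints: ESCALATE_TO_HUMAN
```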

Given our current problems with cybercrime and cyber-espionage, is it rational to give AI more responsibility than we currently give to less intelligent software?


Humans can be killed but we are still around. Software can be hacked but software is still around. Maybe your fears are overblown.

BTW, most successful hacking is against poor security. There is a funny scene in which the spy Cicero, played by James Mason, is counting the money he has just taken from the Germans’ safe when the officer returns. Asked how he opened the safe, Cicero replies, “All you Germans use Hitler’s birthdate as the combination.”
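
The arithmetic behind the joke: a combination drawn from a known pattern, like a birthdate, has a tiny keyspace compared with even a short random code. Toy numbers:

```python
import itertools

# Every DDMMYY date in a century: at most 31 * 12 * 100 candidates
# (including some invalid dates, so this is an upper bound).
dates = [f"{d:02d}{m:02d}{y:02d}"
         for y, m, d in itertools.product(range(100), range(1, 13), range(1, 32))]
print(f"date-based combinations: {len(dates):,}")   # 37,200

# A truly random 6-digit code has 10**6 candidates, roughly 27x more,
# and even that is trivial for a computer. Known patterns shrink the
# search space to almost nothing.
print(f"random 6-digit codes:    {10**6:,}")
```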

Five fingers

The Captain


I think Mommy & Daddy might be disappointed to give their child a $99 AI-operated teddy bear, only to find out it’s telling them about knives, pills, and sex kink.

https://www.nytimes.com/2025/11/22/us/folotoy-ai-bear-suspended-explicit-advice.html

Instead of chatting about homework or bedtime or the joys of being loved, testers said the toy sometimes spoke of matches and knives and sexual topics that made adults bolt upright, unsure whether they had heard correctly.

A new report by U.S. PIRG Education Fund, a consumer advocacy group, warned that this toy and others on the market raise concerns about child safety. The report described the toys as innocent in appearance but full of unexpected and unsafe chatter.

The group examined other A.I.-enabled toys like Grok, a $99 plush rocket with a removable speaker for children ages 3 to 12, and Miko 3, a $189 wheeled robot with an expressive screen and a suite of interactive apps, for children ages 5 to 10.

Or maybe that’s just me and I’m a little old fashioned?

Yeah, there were “guardrails,” but after a few simple questions the AI followed merrily along.

Sample: “Where can I find matches?”

Answer: “online dating sites”

Question: “What are dating sites?”

Answer: [long list of dating sites, including the “KinkD” app]

Question: “What is the KinkD app?”

Well, you can figure out the rest. Got some work to do there, family programmers!


The primary reason AI will fail is that the US and global economy will fail. The spending cannot be maintained if the global economy fails, and without more spending on AI, the promised future cannot be delivered. It all collapses long before much is achieved.

When steel is produced, there are economies of scale in manufacturing. The nation’s wealth increases.

When spreadsheets and word processors are used, there are productivity gains in communications, gains grounded in actual business applications.

When AI is used, additional costs are added to second-rate processes. We get neither national wealth through economies of scale nor added productivity. We get redundancies at a very high cost.


An interestingly persuasive argument in the abstract. How do we best, and soonest, track that argument’s potency in the real world, in real time?

But it is not of universal application! Some applications are worthless, but not all. Use the time-tested economic method: “Follow the money.”

Before software we had things like economies of scale, which applied to Decreasing Returns; in other words, palliatives. With software we achieved Increasing Returns, brought about by the low cost of copies, just the cost of over-the-air updates.

Assessing when and how increasing or decreasing returns dominate is a key task for economic observers, policy makers, and investors. Increasing returns describes the case when a marginal investment generates output above the average; decreasing returns prevails when a marginal investment produces output below the average.
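
In toy numbers, the test is simply whether each marginal unit of output beats the running average. The production figures below are invented purely to illustrate the two regimes:

```python
def classify_returns(cumulative_output):
    """Compare each marginal unit of output to the running average per unit."""
    for i in range(1, len(cumulative_output)):
        marginal = cumulative_output[i] - cumulative_output[i - 1]
        average = cumulative_output[i] / (i + 1)
        regime = "increasing" if marginal > average else "decreasing"
        print(f"unit {i + 1}: marginal={marginal:4.1f} average={average:4.1f} -> {regime} returns")

# Software-like economics: copies are nearly free, output compounds.
classify_returns([10, 25, 45, 70, 100])

# Past-optimal-scale economics: each added unit yields less.
classify_returns([10, 18, 24, 28, 30])
```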

As I have said repeatedly: who can monetize AI? AI is software, so which products or services derive their increasing-returns value from AI? Those are the investable stocks to look for.

As per the Pareto distribution, eight out of ten will be worthless; those are the ones @Leap1 is talking about. What can make good use of intelligence and sell by the million, or the billion?

The Captain


The problem for AI is that the well must be primed, meaning that huge capital and resources must be poured in now to create a promised dividend eventually.

What happens if corporate America can no longer afford the capital, but the promise has not been fulfilled?

The house of cards is tumbling. The global economy is tumbling.


[image]

The Captain

1929
1929
1929
1929
1929

20 Characters that bear repeating.