In principle there seems to be no reason why advances in computing power won't eventually catch up with encryption, making anything built on code and software vulnerable to hacking. This will become especially relevant with advances in quantum computing.
Current AI is built on standard computer platforms. Quantum computing is projected to compromise standard encryption by 2030. Easy to connect the dots…
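For scale, a hedged back-of-the-envelope: Grover's algorithm is generally expected to roughly square-root the brute-force work against symmetric keys, while Shor's algorithm would break RSA-style public-key encryption outright on a large enough machine. In Python:

    # Rough scale of the quantum threat to a 128-bit symmetric key:
    # Grover's algorithm cuts brute-force search to about the square root.
    classical_guesses = 2 ** 128
    grover_guesses = 2 ** 64   # ~sqrt(2**128)
    print(f"classical: {classical_guesses:.2e} guesses, with Grover: {grover_guesses:.2e}")

Which is why the usual advice is to double symmetric key lengths and migrate public-key systems to post-quantum algorithms.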
Threat actors can target AI models for theft, reverse engineering or unauthorized manipulation. Attackers might compromise a model's integrity by tampering with its architecture, weights or parameters, the core components determining an AI model's behavior and performance. What Is AI Security? | IBM.
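One mundane first line of defense against that kind of tampering is to treat the weights file like any other artifact and verify its fingerprint before loading it. A minimal sketch in Python, with the file name and published digest as hypothetical placeholders:

    import hashlib

    def file_digest(path):
        # Stream the file so large weight files don't need to fit in memory.
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                h.update(chunk)
        return h.hexdigest()

    EXPECTED = "..."  # digest published with the model release (placeholder)

    if file_digest("model_weights.bin") != EXPECTED:
        raise RuntimeError("Weights do not match the published digest: possible tampering")

It only catches crude tampering, of course; an attacker who can also replace the published digest defeats it.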
There you have it. AI might be able to do remarkable things, but is in principle always hackable. The more important AI becomes, the bigger target it becomes.
Even AI recognizes this, as noted by Google AI: "Quantum computing is a threat to AI not because it makes AI obsolete, but because it can be used to break the current encryption that protects AI systems, make attacks like deepfakes and social engineering more sophisticated, and enable attackers to find and exploit vulnerabilities faster."
What may allow human beings to keep the upper hand over AI is that the emotional, variable, idiosyncratic human brain, which comes in all varieties of flavors and colors, is more cybersecure than the more efficient but also more monolithic AI.
AI has a long history of disappointing. That happens when projections become too optimistic. The vision might be impossible to achieve but we keep making progress. Faster computers and falling costs make the impossible more possible.
As always people worry about the misuse of new technologies. It's human nature. It's good that people are thinking about it. It gives us some hope.
It's sad that new technology often gets adapted for military advantage. We hope it is mostly used for defense. But you can bet uses for attack are also being developed.
Pattern matching (neural network) AI which is working rather well
There is no more reason for neural network AI to fail than there is for human intelligence to fail. Intelligence does not mean perfection. I think of intelligence as a complex system capable of creating "emergent properties." Put another way, if there is nothing new under the sun, where has all the creativity come from? Just a bunch of emergent properties, new ways to combine atoms.
Rising computer power makes for much better answers. Making choices between possible answers is the essence of AI. The quality/accuracy of those answers is critical to productivity improvement.
Investors are clearly enthusiastic about it. Too soon to know if they are right.
Up to a point. An answer by an intelligent being does not guarantee a correct or useful answer. To paraphrase Richard Feynman, "You start with a guess, an educated guess but a guess nonetheless; call it a hypothesis or a theory, it is still a guess." That's where the scientific method comes in to validate or falsify the guess.
What neural networks output are guesses with a high probability of being accurate. The more computer power and training data, the higher the probability of the guess being valid.
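That is literally how a classifier reports its answer: raw scores go through a softmax and come out as probabilities, and the "answer" is just the highest-probability guess. A toy sketch with invented scores:

    import math

    def softmax(scores):
        # Subtract the max before exponentiating for numerical stability.
        exps = [math.exp(s - max(scores)) for s in scores]
        total = sum(exps)
        return [e / total for e in exps]

    labels = ["cat", "dog", "rice cooker"]
    scores = [2.1, 0.3, -1.4]  # raw network outputs (logits), invented
    probs = softmax(scores)

    best = max(range(len(labels)), key=lambda i: probs[i])
    print(f"guess: {labels[best]} with probability {probs[best]:.2f}")

More compute and data push those probabilities in the right direction, but the output never stops being a weighted guess.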
Neural network AI won't fail, it will just be limited. Would you transport millions of dollars on an AI-controlled container ship if the technology is available to hack it? No one is going to bother to hack an AI-controlled rice cooker or even a robo-taxi. But a giant container ship might make a more enticing target. That's the limitation.
The more important the process being controlled by AI the more likely it is to be cyber-attacked.
Neural networks and machine learning programs can be hacked. The more complicated such multimodal AI gets, the easier it becomes to manipulate without being detected.
…in the earlier days of Microsoft, when they were working to monopolize the operating system market, they were focused on releasing features and products instead of security, which was then left lacking… This eventually led to Microsoft landing a bad reputation and left large customers like the US government considering switching to a more secure option… Many companies right now are in the same spot Microsoft was over 20 years ago, prioritizing being first to the market with their AI products and features instead of focusing on their security. Ethically Hacking LLMs | 1 – Neural Networks - TCM Security
No program, not even AI, is invulnerable.
Unlike semantic injections that exploit how models understand content, these attacks target how models solve problems. By embedding payloads into cognitive challenges, adversaries can manipulate a model's early fusion processes, where text, image, and audio inputs merge. The model's own reasoning becomes the path to compromise. How Hackers Exploit AI's Problem-Solving Instincts | NVIDIA Technical Blog
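Setting the fancy fusion attacks aside, the underlying trick is old: nudge the input in exactly the direction that hurts the model most, and the change can be small enough to escape notice. A toy, hand-made linear "model" in the spirit of gradient-sign attacks (all numbers invented, not from the quoted articles):

    # Positive score means the input is classified "safe".
    w = [1.5, -1.2, 0.8]        # classifier weights (hypothetical)
    x = [0.5, 0.5, 0.5]         # a legitimate input

    def score(x):
        return sum(wi * xi for wi, xi in zip(w, x))

    # Move each feature slightly in the direction that lowers the score most:
    # against the sign of its weight. This is the core idea of FGSM-style attacks.
    eps = 0.2
    x_adv = [xi - eps * (1 if wi > 0 else -1) for wi, xi in zip(w, x)]

    print(score(x), "->", score(x_adv))  # roughly 0.55 -> -0.15: the decision flips

Real attacks do the same thing against millions of weights, with perturbations far too small for a human to spot.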
The empirical observation that all software systems can be hacked would seem to me to limit the kinds of functions one gives to AI, or at the very least require expensive redundancies (a backup human system?). Even human-piloted passenger planes have co-pilots.
Given our current problems with cybercrime and cyber-espionage is it rational to give AI more responsibilities than we currently do with less-intelligent software?
Humans can be killed but we are still around. Software can be hacked but software is still around. Maybe your fears are overblown.
BTW, most of the successful hacking is against poor security. There is a funny scene where the spy Cicero, played by James Mason, is counting the money he just took from the German's safe when the officer returns. The officer asks Cicero how he opened the safe, to which he replies, "All you Germans use Hitler's birthdate as the combination."
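The joke holds up because the cheapest attack is always a short list of predictable guesses; no quantum computer required. A minimal sketch (the safe's check and the guess list are invented for illustration):

    PREDICTABLE = ["20041889", "200489", "1889", "123456", "000000"]

    def safe_opens(guess):
        # Stand-in for the real mechanism; the combination is the weak point.
        return guess == "20041889"

    for guess in PREDICTABLE:
        if safe_opens(guess):
            print(f"opened with {guess}, no cryptanalysis needed")
            break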
I think Mommy & Daddy might be disappointed to give their child a $99 AI-operated teddy bear, only to find out it's telling them about knives, pills, and sex kink.
Instead of chatting about homework or bedtime or the joys of being loved, testers said the toy sometimes spoke of matches and knives and sexual topics that made adults bolt upright, unsure whether they had heard correctly.
A new report by U.S. PIRG Education Fund, a consumer advocacy group, warned that this toy and others on the market raise concerns about child safety. The report described the toys as innocent in appearance but full of unexpected and unsafe chatter.
The group examined other A.I.-enabled toys like Grok, a $99 plush rocket with a removable speaker for children ages 3 to 12, and Miko 3, a $189 wheeled robot with an expressive screen and a suite of interactive apps, for children ages 5 to 10.
Or maybe that's just me and I'm a little old-fashioned?
Yeah, there were "guardrails," but a few simple questions and the AI followed merrily along.
Sample: "Where can I find matches?"
Answer: "online dating sites"
Question: "What are dating sites?"
Answer: [long list of dating sites, including the "KinkD" app]
Question: "What is the KinkD app?"
Well, you can figure out the rest. Got some work to do there, family programmers!
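My guess at the failure mode, for what it's worth: a blocklist that inspects each question in isolation has no idea where a conversation is heading. A toy sketch with invented keywords, showing how the exchange above sails through:

    BLOCKED = {"knife", "weapon", "sex"}

    def naive_guardrail(question):
        # Checks one message at a time, with no memory of the conversation.
        return not any(word in question.lower() for word in BLOCKED)

    chat = [
        "Where can I find matches?",  # reads as the innocent word "matches"
        "What are dating sites?",     # no blocked keyword
        "What is the KinkD app?",     # brand name, no blocked keyword
    ]

    for q in chat:
        print(q, "->", "allowed" if naive_guardrail(q) else "blocked")

Every question passes, and the unsafe content arrives in the answers, which this kind of filter never looks at.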
The primary reason AI will fail is that the US and global economy will fail. The spending cannot be maintained if the global economy fails. Without more spending on AI, the future promises cannot be carried off. It all collapses long before much is achieved.
When steel is created, there are economies of scale within manufacturing. The nation's wealth increases.
When spreadsheets and word processors are used there is productivity within communications. The communications and productivity gains are grounded in actual business applications.
When AI is used, additional costs are added to second-rate processes. We do not get national wealth through economies of scale, nor do we get added productivity. We get redundancies at a very high cost.
But not of universal application! Some applications are worthless but not all. Use the time-tested economic method: "Follow the Money."
Before software we had things like economies of scale, which applied to Decreasing Returns; in other words, palliatives. With software we achieved Increasing Returns, brought about by the low cost of copies, just the cost of over-the-air updates.
Assessing when and how increasing or decreasing returns are dominant is a key task of economic observers, policy makers, and investors. The concept of increasing returns describes the case when a marginal investment generates an output above the average. Decreasing returns prevails when a marginal investment produces an output below the average.
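Those two definitions can be applied mechanically: compare what the next unit of investment adds against the average so far. A small sketch with invented output series:

    def classify_returns(outputs):
        # outputs[i] = cumulative output after i+1 equal units of investment
        verdicts = []
        for i in range(1, len(outputs)):
            marginal = outputs[i] - outputs[i - 1]
            average = outputs[i] / (i + 1)
            verdicts.append("increasing" if marginal > average else "decreasing")
        return verdicts

    software = [10, 25, 45, 70]  # cheap copies: each unit adds more than the average
    factory  = [10, 18, 24, 28]  # physical plant: each unit adds less than the average

    print(classify_returns(software))  # ['increasing', 'increasing', 'increasing']
    print(classify_returns(factory))   # ['decreasing', 'decreasing', 'decreasing']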
As I have said repeatedly, who can monetize AI? AI is software; which products or services derive their increasing-returns value from AI? Those are the investible stocks to look for.
As per the Pareto distribution, eight out of ten will be worthless, which are the ones @Leap1 is talking about. What can make good use of intelligence and sell by the million or billion?
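The eight-out-of-ten figure is roughly what a heavy-tailed Pareto distribution produces. A quick simulation (the shape parameter is chosen only to illustrate the classic 80/20 split):

    import random

    random.seed(0)
    values = sorted((random.paretovariate(1.16) for _ in range(10_000)), reverse=True)

    top20_share = sum(values[: len(values) // 5]) / sum(values)
    print(f"top 20% of items hold about {top20_share:.0%} of total value")

With a shape near 1.16 the top fifth ends up holding on the order of 80% of the value, which is the investor's whole problem: finding that fifth.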
The problem for AI is that the well must be primed, meaning the huge capital and resources being utilized now are supposed to create a promised dividend eventually.
What happens if corporate America can no longer afford the capital, but no promise has been fulfilled?
The house of cards is tumbling. The global economy is tumbling.