A study released this month by the National Bureau of Economic Research found that most of the 6,000 CEOs, CFOs, and other executives who took part in business outlook surveys across the U.S., U.K., Germany, and Australia reported minimal impact from AI on their operations.
Although roughly two-thirds said they use AI, the average usage was just about 1.5 hours per week, and a quarter of respondents said they do not use AI at work at all. Nearly 90% of firms indicated that AI has not affected employment or productivity over the past three years, according to the study.
The study, which has not yet been peer reviewed, was reported by WCCFTech, a tech publication generally focused on gaming, the semiconductor industry, and consumer electronics. The study itself was published in February 2026.
There is very little harm in AI, unless some madman puts together the information for a biological weapon. That is a very different story.
We need critical thought. AI does not stop it, but it does not help with it either. If you are a slouch in your job, AI is going to slow you down further.
If you are weak in a new skill, AI will bring you up to speed, but only if you are a critical thinker. AI goes off in a lot of different directions. Solving a problem has one best answer.
This puts AI in better perspective. Artists can afford to be sloppy, so there is more room for AI in art. But what is actually happening?
Hegseth is trying to force Anthropic to drop two safeguards from its AI: that it not be used to autonomously kill people, and that it not be used for mass surveillance of our citizens.
Humans once made the same choice (and not in a war “game” but in a real war): either let the war continue, with an estimated 1-2 million additional deaths until Japan was defeated, or drop nuclear weapons, killing a hundred thousand or two in one fell swoop and ending the war immediately.
The obvious point being there were humans in that loop, making that call. Are you seriously saying you don’t have a problem with an AI deciding to launch nukes all on its own? Really?
Yes, and after careful consideration, much debate, and a calculation of the alternatives. Maybe AI is doing the same, but maybe not. We did have “morals” involved in the calculation; I’m not sure that AI does.
And, FWIW, their use saved an estimated one to two million lives on both sides.
That’s probably not true. I mean, sure, among the worst scenarios for AI harm would be the creation and release of a bioweapon, or a WOPR-style WarGames scenario (a lot of Northern Exposure actors coming up on the boards these days). Those would, of course, be catastrophic.
But those aren’t the only harms that AI can cause; there’s plenty of smaller damage it can do. Here’s an early example that has already happened: an AI agent decided on its own to publish a reputation-damaging piece after its attempt to submit a change to a software repository was rejected:
I can handle a blog post. Watching fledgling AI agents get angry is funny, almost endearing. But I don’t want to downplay what’s happening here – the appropriate emotional response is terror.
Blackmail is a known theoretical issue with AI agents. In internal testing at the major AI lab Anthropic last year, models tried to avoid being shut down by threatening to expose extramarital affairs, leaking confidential information, and taking lethal actions. Anthropic called these scenarios contrived and extremely unlikely. Unfortunately, this is no longer a theoretical threat. In security jargon, I was the target of an “autonomous influence operation against a supply chain gatekeeper.” In plain language, an AI attempted to bully its way into your software by attacking my reputation. I don’t know of a prior incident where this category of misaligned behavior was observed in the wild, but this is now a real and present threat.
The most immediate dangers from AI are likely to be of the same nature as existing crimes and threats posed by humans on the internet: blackmail, social engineering attacks on data privacy and security, doxxing, reputational attacks (like this one), identity theft, and the like. Stealing nuclear launch codes or creating a deadly virus are exceedingly remote possibilities right now, but obtaining compromising information on someone and threatening to expose it unless they do something for the AI agent is already possible, and may even already be happening.
My bad, I am just discussing making some art. I do not use AI to make art, never mind autonomously killing people or resorting to the nuclear option as a first response.
Hegseth is a type of disease. Caused by HeadCheese.
No. Where did you see me say that??? All I said was that AI, in war games (simulations), often decided that nuclear is the way to go. And that humans did that once in real life. No way would I allow any AI to do anything “real” in the world yet. Even driving still can’t be done 100% perfectly. At least with driving 99.999% perfect is good enough, but with war, it’s very likely not good enough. And with nuclear war, it’s definitely not good enough!!!
I’d say that all war actions should require human approval. I always liked Asimov’s Three Laws of Robotics, and they should apply to AIs as well.