Palantir’s Mosaic: the AI model developed for the International Atomic Energy Agency (IAEA)
Palantir’s AI platform, Mosaic, has been integrated into the IAEA’s monitoring systems since 2015. It processes massive datasets (satellite imagery, surveillance footage, social media, and even Mossad intelligence) to detect anomalies in Iran’s nuclear program. The report that prompted the IAEA to declare Iran in breach of its non-proliferation agreement was likely based on Mosaic’s prediction.
Of course, there are reports claiming that the IAEA has a bias toward Israel, that Mosaic used Mossad’s data in its predictions, and that Palantir is also deeply involved in Israel’s operations in Gaza through a similar algorithm called Lavender. But the relevant point here is that this explains why Trump had the audacity to dismiss his own head of intelligence, Tulsi Gabbard, when she said there was no intelligence to confirm that Iran was pursuing nuclear weapons.
Trump’s “intelligence” was probably based on Israeli sources, which likely relied on the IAEA’s predictions, which were in turn based on Mosaic’s algorithm.
In other words, the “intelligence” of an AI model was prioritized over that of an actual human intelligence agency.
https://www.npr.org/2025/06/26/nx-s1-5442682/ai-chatbots-fact-check-videos-images-israel-iran
With AI-generated images and videos rapidly growing more realistic, researchers who study conflicts and information say it has become easier for motivated actors to spread false claims and harder for anyone to make sense of conflicts based on what they’re seeing online.
“Initially, a lot of the AI-generated material was in some early Israeli public diplomacy efforts justifying escalating strikes against Gaza,” said Brooking. “But as time passed, starting last year with the first exchanges of fire between Iran and Israel, Iran also began saturating the space with AI-generated conflict material.”
The report found that when asked to fact-check something, Grok references Community Notes, X’s crowdsourced fact-checking effort. This made the chatbot’s answers more consistent, but the chatbot still contradicted itself.
I agree with the statement below:
An unverified AI-generated threat assessment triggered a preemptive US strike on Iran, exposing the dangers of outsourcing national security decisions to speculative algorithms and geopolitical manipulation, while revealing how intelligence distortion and technological overreach can escalate regional conflicts and undermine global stability.
An unprincipled politician will add AI to their bag of tricks to initiate warfare.