Particle.news

Reports Say U.S. Military Used Anthropic’s Claude in Iran Strikes After Federal Ban

The episode spotlights how deeply military workflows now rely on commercial AI.

Overview

  • President Donald Trump ordered federal agencies to phase out Anthropic’s AI within six months, and the Pentagon labeled the firm a supply‑chain risk to national security as Anthropic vowed a legal challenge.
  • People familiar with the operations told the Wall Street Journal that U.S. Central Command used Claude for intelligence assessments, target identification, and battle simulations during the Iran strikes, just hours after the directive was issued.
  • Anthropic refused Defense Department demands for AI usable for “any lawful purpose,” maintaining red lines against mass domestic surveillance and fully autonomous weapons.
  • OpenAI announced a deal to deploy its models on classified military networks and touted safeguards, while critics noted that the contract’s allowance for “all lawful purposes” could hinge on government interpretation; CEO Sam Altman acknowledged the agreement had been rushed.
  • Public reaction intensified: Claude climbed to No. 1 on Apple’s App Store and suffered outages attributed to unprecedented demand, while the QuitGPT and #CancelChatGPT campaigns urged users to cancel their ChatGPT subscriptions.