Particle.news

OpenAI Tightens Pentagon AI Terms After Backlash as U.S. Still Uses Anthropic’s Claude

The revised deal highlights how deeply classified workflows now depend on commercial models.

Overview

  • OpenAI added explicit language barring intentional domestic surveillance under its Pentagon contract; Sam Altman conceded the rushed rollout looked opportunistic and said any intelligence-agency use would require a separate contract modification.
  • CBS News reports, citing two sources, that the U.S. used Anthropic’s Claude in weekend strikes on Iran and that the model remains in use during a six‑month phase‑out, with a Pentagon official describing current uses such as document synthesis and logistics.
  • Defense Secretary Pete Hegseth designated Anthropic a supply‑chain risk, and President Trump ordered agencies to stop using its technology after a six‑month transition; the company vowed to mount a legal challenge to the designation.
  • Under Secretary Emil Michael warned prior AI contracts contained dozens of restrictions that could halt operations mid‑mission if breached and said Claude had been the only model on classified systems at the time of his review.
  • Anthropic submitted a proposal for a Pentagon drone‑swarm prize focused on translating commander intent and coordinating drones without autonomous targeting, but it was not selected; SpaceX/xAI and teams partnering with OpenAI advanced instead.