Overview
- Sam Altman said OpenAI will add explicit language to its Pentagon agreement barring intentional domestic surveillance of U.S. persons; any intelligence‑agency use would require a follow‑on contract modification.
- Two sources told CBS News that the U.S. military used Anthropic’s Claude in the Iran strikes and continues to use it, even though agencies have been given six months to remove the model from government systems.
- Defense Secretary Pete Hegseth ordered agencies and contractors to stop using Anthropic and designated the company a supply‑chain risk; Anthropic says it will challenge the move in court.
- Pentagon CTO Emil Michael said Claude has been used to synthesize documents and improve logistics and supply chains, highlighting operational dependence on commercial AI.
- Anthropic was not selected for a $100 million drone‑swarm prize challenge it entered; winning bids included SpaceX/xAI and an Applied Intuition team that listed OpenAI in its mission‑control role.
- Consumer backlash boosted Claude to the top of the app charts, while ChatGPT saw a spike in uninstalls.