Overview
- Google’s Threat Intelligence Group reports a distillation campaign that sent more than 100,000 prompts in an attempt to replicate Gemini’s reasoning; Google disabled the implicated accounts and tightened safeguards.
- The report finds government-linked actors from China, Iran, North Korea, and Russia using Gemini across attack stages, from reconnaissance and phishing to code generation, vulnerability testing, and post-compromise tasks.
- Google details a case attributed to China’s APT31, which used an expert persona together with Hexstrike MCP to analyze remote code execution (RCE), web application firewall (WAF) bypass, and SQL injection against specific US-based targets; accounts tied to this activity were disabled.
- Researchers highlight AI-enabled tooling and ecosystems, including the HonestCue malware framework built on the Gemini API, the CoinBait phishing kit likely assembled with AI tools, ClickFix-style social engineering, and the Xanthorox toolkit, which repackages commercial models.
- Google assesses that there have been no major breakthroughs in fully autonomous attacks yet, but it flags model extraction as a scalable intellectual-property theft risk that could accelerate rival model development and undermine AI-as-a-service economics.
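
The distillation pattern described above can be illustrated with a minimal toy sketch. Everything here is an assumption for illustration only: `teacher` is a stand-in for a commercial model endpoint (not a real API), and the "student" is a trivial memorizing lookup rather than the fine-tuned model a real extraction campaign would train. The point is the shape of the attack: bulk-query the target, harvest prompt/response pairs, then fit a cheaper imitator on them.

```python
def teacher(prompt: str) -> str:
    # Hypothetical stand-in for a proprietary model API (assumption).
    # Here the "reasoning" is just uppercasing the prompt.
    return prompt.upper()

def harvest(prompts):
    # Step 1: bulk-query the teacher and log prompt/response pairs,
    # as in the >100,000-prompt campaign the report describes.
    return [(p, teacher(p)) for p in prompts]

class MemorizingStudent:
    # Step 2: fit a student on the harvested pairs. A real campaign
    # would fine-tune a neural model; a dict keeps this sketch tiny.
    def __init__(self, pairs):
        self.table = dict(pairs)

    def __call__(self, prompt: str) -> str:
        # Mimics the teacher on prompts seen during harvesting.
        return self.table.get(prompt, "")

pairs = harvest(["summarize this log", "draft a phishing lure"])
student = MemorizingStudent(pairs)
print(student("summarize this log"))  # reproduces the teacher's output
```

The economic concern in the report follows directly from this shape: once enough input/output pairs are harvested, the imitator can be served without paying for (or being rate-limited by) the original model.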