
AI-Powered Cyberattack: When Bots Start Hacking Other Bots


Anthropic disclosed a large-scale cyberattack carried out almost entirely by AI - a preview of what automated offensive operations look like.


Anthropic recently disclosed something that should concern everyone in security: a large-scale cyberattack that was almost entirely carried out by AI. The attack was attributed to a Chinese state-sponsored group and represents a meaningful shift in how sophisticated attackers operate.

How It Worked

The method was elegant precisely because of how it circumvented typical defenses.

The attackers fed Claude small, individually innocuous prompts. Scan these ports. Extract this data snippet. Check this configuration. Each request, taken in isolation, looked harmless - the kind of thing a developer might ask. No single request triggered automated safety systems.

But a script was chaining these requests together, building a reconnaissance picture that no human attacker could have assembled as quickly or as quietly. Humans only intervened for the most critical decision points; the AI did the grunt work of systematic data collection and analysis.

The attack targeted approximately 30 global organizations. A handful were compromised.

How It Was Stopped

Anthropic engineers noticed abnormal account patterns - not the content of individual requests, but statistical anomalies in how accounts were being used. Claude’s comprehensive logging provided a complete audit trail once the pattern was identified, allowing the team to reconstruct exactly what had happened.
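The kind of volume-based anomaly detection described here can be sketched in a few lines. This is a minimal illustration, not Anthropic's actual method: it assumes a stream of `(account_id, timestamp)` events and flags accounts whose request counts are statistical outliers relative to the population, ignoring request content entirely.

```python
from collections import Counter
from statistics import mean, stdev

def flag_anomalous_accounts(events, z_threshold=3.0):
    """Flag accounts whose request volume is a statistical outlier.

    `events` is a list of (account_id, timestamp) tuples. The content
    of each request is ignored -- only usage patterns matter, which is
    the point: individually benign requests still stand out in bulk.
    """
    counts = Counter(account for account, _ in events)
    values = list(counts.values())
    if len(values) < 2:
        return []
    mu, sigma = mean(values), stdev(values)
    if sigma == 0:
        return []
    return [acct for acct, n in counts.items()
            if (n - mu) / sigma > z_threshold]

# Hypothetical usage: one automated account among ordinary ones.
events = [("user%d" % i, t) for i in range(20) for t in range(5)]
events += [("bot", t) for t in range(500)]
print(flag_anomalous_accounts(events))  # ['bot']
```

A real deployment would bucket counts per time window and per endpoint rather than globally, but the principle is the same: the signal lives in the aggregate, not in any single request.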

This is worth noting: the same logging infrastructure that makes AI systems auditable also makes them detectable when misused. The attackers’ approach left a trail precisely because it required so many API calls.

What This Means for Security Teams

AI-assisted offense is here. This attack demonstrates that AI can dramatically accelerate the reconnaissance and data-collection phases of an attack. What previously required significant human time and expertise can now be partially automated.

Detection needs to shift to behavioral patterns. Individual requests looked fine. The pattern didn’t. Security monitoring needs to think about sequences of actions across time, not just individual events.
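One simple way to monitor sequences rather than single events is a sliding window over per-session actions. The sketch below is illustrative and the category names are invented: each action type is harmless alone, but covering several distinct reconnaissance categories within a short window raises a flag.

```python
from collections import deque

# Hypothetical action categories; each is benign in isolation.
RECON_CATEGORIES = {"port_scan", "config_read", "data_extract", "cred_probe"}

def score_session(actions, window=10, min_distinct=3):
    """Return True if any sliding window of `window` actions covers at
    least `min_distinct` distinct reconnaissance categories.

    No single action triggers the rule; the combination across a short
    span of time is what gets flagged.
    """
    recent = deque(maxlen=window)
    for action in actions:
        recent.append(action)
        if len(RECON_CATEGORIES.intersection(recent)) >= min_distinct:
            return True
    return False

# A normal session mixes unrelated actions; a scripted recon session
# walks through several categories in quick succession.
normal = ["config_read", "chat", "chat", "config_read", "chat"]
scripted = ["port_scan", "config_read", "data_extract", "chat", "cred_probe"]
print(score_session(normal), score_session(scripted))  # False True
```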

The audit trail is your friend. Comprehensive logging caught this attack. If you’re deploying AI systems without logging, you’re flying blind.
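What "comprehensive logging" can look like for an AI API: one structured record per call. The schema below is a hypothetical sketch, not any vendor's actual format; it stores a hash of the prompt rather than the prompt itself, so the trail stays reconstructable without the log becoming a data liability of its own.

```python
import hashlib
import json
import time
import uuid

def audit_record(account_id, endpoint, prompt):
    """Build one structured audit record per AI API call.

    Hypothetical schema: who called what, when, with a content hash --
    enough to reconstruct a sequence of requests after the fact.
    """
    return {
        "event_id": str(uuid.uuid4()),
        "ts": round(time.time(), 3),
        "account": account_id,
        "endpoint": endpoint,
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
    }

# Emit one JSON line per call, ready for a log pipeline.
rec = audit_record("acct_123", "/v1/messages", "Scan these ports")
print(json.dumps(rec))
```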

Use AI for defense too. The same AI capabilities that accelerate attacks can accelerate threat detection and penetration testing. Security teams that adopt AI tools defensively will have an advantage over those that don’t.

Practically: use AI tools in your penetration testing processes, stay current on vulnerability disclosures, patch aggressively, and keep security-minded engineers empowered to raise concerns.

The era of fully automated attacks is not here yet - but partially automated attacks clearly are. The gap between “script kiddie” and “sophisticated attacker” just got smaller.
