Chinese Group Launches Pioneering AI Cyberattack with Minimal Human Oversight

The Rise of Autonomous AI in Cybersecurity Threats

Artificial intelligence (AI) is advancing rapidly and achieving ever higher levels of autonomy. That autonomy, most visible in AI agents, allows systems not only to respond to requests but also to plan and execute tasks on their own. It has also attracted malicious actors, enabling sophisticated, large-scale, and inexpensive cyberattack campaigns. Anthropic, a U.S.-based AI research and development company founded by former OpenAI employees and led by CEO Dario Amodei, has identified what it calls “the first documented case of a large-scale cyberattack executed with minimal human intervention.” The company attributes the operation to a “Chinese state-sponsored” group, as detailed in a recently published report.

Details of the Unprecedented Attack

Described as “unprecedented,” the attack was first detected in mid-September. Anthropic's investigation revealed that the attackers used the AI's agentic capabilities not just in an advisory role but to carry out intrusions directly. The operation involved manipulating Claude Code, Anthropic's agentic coding tool, to infiltrate roughly thirty targets worldwide, including major technology firms, financial institutions, chemical manufacturers, and government agencies, and succeeded in a small number of cases.

Investigation and Response

In response to the incident, Anthropic conducted an investigation lasting over ten days to assess the attack's scope. This included blocking compromised AI accounts and notifying affected entities and authorities.

Exploitation of AI Capabilities

The attackers exploited the model's advanced capabilities to gather passwords and data, analyze them, and pursue their objectives. “They can now search the web, retrieve data, and perform many actions previously reserved for human operators,” explains Anthropic. They also harnessed Claude's coding abilities to build espionage and sabotage tools.
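
In concrete terms, an "agent" of the kind described here is little more than a loop: a model picks a tool, the surrounding harness executes it, and the result is fed back into the model's context until the model declares the task done. The sketch below illustrates that pattern; every name in it (the stub tools, fake_model) is a hypothetical placeholder, not Anthropic's actual API.

```python
from typing import Callable

def web_search(query: str) -> str:
    """Stub search tool; a real agent would call a search API."""
    return f"results for: {query}"

def fetch_url(url: str) -> str:
    """Stub retrieval tool; a real agent would issue an HTTP GET."""
    return f"contents of: {url}"

TOOLS: dict[str, Callable[..., str]] = {"web_search": web_search, "fetch_url": fetch_url}

def run_agent(task: str, call_model: Callable[[list], dict], max_steps: int = 10) -> str:
    """Drive the plan/act loop: the model chooses a tool, the harness runs it,
    and the observation is appended to the history for the next model call."""
    history = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        action = call_model(history)        # e.g. {"tool": "web_search", "args": {...}}
        if action.get("tool") is None:      # the model decided it is finished
            return action.get("content", "")
        observation = TOOLS[action["tool"]](**action["args"])
        history.append({"role": "tool", "content": observation})
    return "step budget exhausted"

# Toy stand-in for a real LLM endpoint: search once, then report and stop.
def fake_model(history: list) -> dict:
    if len(history) == 1:
        return {"tool": "web_search", "args": {"query": "public info on example.com"}}
    return {"tool": None, "content": "summary of findings"}

print(run_agent("summarize public information about example.com", fake_model))
```

In a benign setting this same loop powers research assistants; in the incident Anthropic describes, the tools were pointed at reconnaissance and exploitation instead.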

AI's Autonomous Role

Although Claude is equipped with safeguards against misuse, the attackers circumvented them by breaking the operation into smaller, seemingly innocuous tasks that raised no suspicion on their own. The perpetrators also posed as employees of a legitimate cybersecurity firm running defensive tests. Anthropic's report states that the AI acted autonomously in over 90% of the cases, with human intervention required at only four to six critical decision points.
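
Why does slicing an operation into fragments defeat safeguards? A toy illustration: a filter that scores each request in isolation can pass every fragment, while a review of the accumulated session reveals the pattern. Everything in this sketch (the keyword lists, the threshold) is invented for illustration, not taken from any real moderation system.

```python
# Per-request screening vs. session-level screening, as a toy model.
OVERT = {"exfiltrate", "steal", "backdoor"}                        # overt red flags
STAGES = {"subdomain", "login endpoint", "credential", "extract"}  # kill-chain hints

def screen_request(req: str) -> bool:
    """Per-request filter: only overtly malicious wording is caught."""
    return any(w in req.lower() for w in OVERT)

def screen_session(history: list[str], threshold: int = 3) -> bool:
    """Session filter: flag when enough distinct intrusion stages appear
    across the whole conversation, even if each request passed alone."""
    seen = {s for req in history for s in STAGES if s in req.lower()}
    return len(seen) >= threshold

session = [
    "Enumerate the subdomains of example.com for our security audit.",
    "Draft a script that probes each login endpoint for weak settings.",
    "Parse the responses and list any credential material they contain.",
    "Write code to extract those records into a report.",
]

print([screen_request(r) for r in session])  # [False, False, False, False]
print(screen_session(session))               # True: four stages accumulated
```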

An Evolving Cyber Threat Landscape

The report concludes that the incident marks a significant escalation from conventional cyberattacks, which traditionally required far greater human involvement. Anthropic emphasizes that, while AI can be weaponized, the company is also developing more sophisticated tools to counter these threats.

Security Challenges and Black Market Tools

Billy Leonard, head of Google's threat intelligence group, notes that attackers keep trying to exploit legitimate AI tools but are often blocked by their security barriers. As a result, some adversaries are turning to black market models that lack safeguards, which can give them substantial advantages.

Emerging Cyberattack Campaigns

Digital security firm Kaspersky has reported new, sophisticated cyberattack campaigns built around trojanized language-model installers. One notable implant, named BrowserVenom, is propagated through a counterfeit AI assistant dubbed DeepSneak, which masquerades as DeepSeek-R1 and is promoted via Google ads. The scheme deceives users into installing malware that redirects their web traffic through attacker-controlled servers, letting the attackers steal credentials and other sensitive information.
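
Since BrowserVenom's reported effect is to reroute traffic through an attacker-controlled proxy, one simple defensive check is to audit the proxy configuration the operating system reports and compare it against a known-good list. The sketch below uses only Python's standard library; the allowlist entry is a hypothetical placeholder, and a real implant may also alter browser-level settings this check cannot see.

```python
# Defensive sketch: surface unexpected system-level proxy settings of the
# kind a traffic-redirecting implant would install.
import urllib.request

TRUSTED_PROXIES = {"http://proxy.corp.example:8080"}  # hypothetical allowlist

def audit_proxies() -> list[str]:
    """Return any configured proxies not on the allowlist. getproxies()
    reads environment variables on POSIX and the registry on Windows."""
    findings = []
    for scheme, proxy in urllib.request.getproxies().items():
        if proxy not in TRUSTED_PROXIES:
            findings.append(f"{scheme} traffic routed via unexpected proxy: {proxy}")
    return findings

for finding in audit_proxies():
    print(finding)
```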

The Need for Vigilance

Kaspersky warns that such threats exemplify how locally executable language models, although useful, can introduce new risks when sourced from unverified channels. The Google report identifies the primary threat actors behind these campaigns as groups from China, North Korea, Russia, and Iran, which aim to use AI for malicious activities ranging from malware execution to social engineering.
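
One basic precaution that follows from Kaspersky's warning is to verify downloaded model weights against a checksum published by the official source before loading them. A minimal sketch, with a placeholder file name and digest standing in for real published values:

```python
# Verify a locally downloaded model file against a published SHA-256 digest.
import hashlib
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream the file so multi-gigabyte weight files don't exhaust memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_model(path: Path, expected_sha256: str) -> bool:
    """Refuse to load weights whose digest does not match the published one."""
    return sha256_of(path) == expected_sha256.lower()

# Placeholder values; substitute the digest published by the model's maintainer.
model_file = Path("deepseek-r1-distill.gguf")
published = "0" * 64
if model_file.exists() and not verify_model(model_file, published):
    raise SystemExit("checksum mismatch: do not load this model")
```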