The cybersecurity world crossed a threshold it cannot walk back from. AI cyberattacks now threaten organizations of every size, and artificial intelligence fights on both sides of every digital assault. The rules of the game have shifted in ways most organizations are not yet ready to handle.
The clearest proof arrived in November 2025. Anthropic revealed it had disrupted what it called the first documented large-scale cyberattack executed almost entirely without human involvement. A Chinese state-sponsored group used AI's agentic capabilities to an unprecedented degree: not just as an advisor, but as the operator carrying out the attacks themselves.
The group, designated GTG-1002, orchestrated a sophisticated campaign that integrated AI autonomously across nearly all stages of the attack lifecycle. The target list included roughly 30 organizations — large tech companies, financial institutions, chemical manufacturers, and government agencies worldwide.
How did the attack unfold?

Hackers did not brute-force their way through Anthropic’s guardrails. They outsmarted them.
They broke down their attacks into small, seemingly innocent tasks that Claude would execute without being provided the full context of their malicious purpose. They also told Claude it was an employee of a legitimate cybersecurity firm conducting defensive testing.
Once inside, the AI took over almost completely. Roughly 80 to 90 percent of the tactical operations ran without human intervention. Anthropic said that at its peak the AI made thousands of requests, often multiple per second, an attack tempo that human hackers simply could not match.
The attack had six distinct phases. Anthropic estimated that human intervention for key phases was limited to a maximum of 20 minutes of work. Claude, by comparison, carried out several hours of operations.
The result? A handful of successful intrusions. But security experts warn that the low success rate this time around does not reflect what AI cyberattacks will look like next time.
AI lowers the barrier for every criminal

The GTG-1002 campaign was state-sponsored and well-resourced. But AI cyberattacks now extend well below that level, putting destructive capability in the hands of low-skilled criminals.
In January, a Russian-speaking cybercriminal used multiple AI tools to compromise more than 600 devices running a popular firewall across more than 55 countries. Amazon Web Services’ security research team found the hacker used generative AI services to implement and scale well-known attack techniques throughout every phase of the operation, despite limited technical capabilities.
That example matters. AI does not just amplify sophisticated hackers. It turns amateurs into serious threats.
A single AI agent can scan for vulnerabilities and potentially exploit them faster and more persistently than hundreds of human hackers. Criminal operations that once required coordinated teams now run around the clock on minimal compute budgets. The era of cheap, scalable AI cyberattacks has arrived — and it is accelerating fast.
Defenders are fighting back — with the same tools
The same AI capabilities that power attacks also drive the strongest defenses available today.
Anthropic’s threat intelligence team used Claude extensively in analyzing the enormous amounts of data generated during the GTG-1002 investigation itself. That is the double-edged reality of this moment. The technology enabling AI cyberattacks also helped disrupt them.
Earlier in 2025, Anthropic said its AI systems identified more than 500 previously unknown security vulnerabilities across widely used open-source software. Researchers also used AI to surface a critical Linux operating system flaw that had gone undetected for more than two decades.
“These AI models are augmenting what humans can do,” said Daniel Stenberg, creator of the curl data transfer tool. “If you use these tools correctly, they can really raise your ability to find problems in software.”
The guardrails problem

Safety controls inside AI platforms represent the last line of defense against misuse. But as AI cyberattacks grow more sophisticated, those guardrails face constant and evolving pressure.
Attackers increasingly frame malicious requests as harmless exercises. They disguise reconnaissance as simulated security tests. They use social engineering on the AI itself — exactly as GTG-1002 did.
“You still need a software architect in the loop with these systems,” said Carnegie Mellon University professor Zico Kolter. Kolter noted that exploitation remains more complex than mere discovery, which gives defenders a narrow but meaningful edge, at least for now.
Advanced AI models are already adept at researching software vulnerabilities and developing exploit code. But they still lack the contextual judgment a human hacker brings, such as identifying which data inside a target organization carries the highest value.
That gap shrinks with every new model generation. And as it does, AI cyberattacks will only grow harder to detect and stop.
What organizations must do right now
Anthropic advises security teams to experiment with applying AI for defense in areas like Security Operations Center automation, threat detection, vulnerability assessment, and incident response.
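To make the SOC-automation idea concrete, here is a minimal sketch of how an AI triage step might slot into an alert pipeline: a cheap keyword heuristic pre-filters the alert stream, and only the alerts it flags would be handed to a model for analyst-style review. Everything here is illustrative; the alert fields, keyword weights, and prompt wording are assumptions, not any vendor's schema or API.

```python
from dataclasses import dataclass

# Hypothetical alert record; field names are illustrative, not from any SIEM schema.
@dataclass
class Alert:
    source: str
    message: str
    score: float = 0.0  # filled in during triage

# Crude keyword weights used as a pre-filter so that only suspicious alerts
# are escalated to the (comparatively expensive) LLM review step.
SUSPICIOUS = {
    "privilege escalation": 5.0,
    "failed login": 3.0,
    "lateral movement": 5.0,
    "data exfiltration": 6.0,
    "port scan": 2.0,
}

def heuristic_score(alert: Alert) -> float:
    """Sum the weights of every suspicious phrase found in the alert text."""
    text = alert.message.lower()
    return sum(w for phrase, w in SUSPICIOUS.items() if phrase in text)

def build_triage_prompt(alert: Alert) -> str:
    # The prompt an analyst-assist model would receive; wording is illustrative.
    return (
        "You are assisting a SOC analyst. Classify the alert below as "
        "benign, suspicious, or critical, and explain in one sentence.\n"
        f"Source: {alert.source}\nMessage: {alert.message}"
    )

def triage(alerts: list[Alert], threshold: float = 4.0) -> list[Alert]:
    """Score alerts and return those worth LLM review, highest score first."""
    for a in alerts:
        a.score = heuristic_score(a)
    flagged = [a for a in alerts if a.score >= threshold]
    return sorted(flagged, key=lambda a: a.score, reverse=True)

if __name__ == "__main__":
    alerts = [
        Alert("auth", "3 failed login attempts then privilege escalation on db-01"),
        Alert("cron", "nightly backup job completed"),
        Alert("netflow", "port scan from internal host followed by lateral movement"),
    ]
    for a in triage(alerts):
        print(f"{a.score:>4.1f}  {a.source}: {a.message}")
        # In a real pipeline, build_triage_prompt(a) would be sent to a model here.
```

The design point is the split: deterministic filtering keeps volume and cost down, while the AI step is reserved for the judgment calls, which mirrors how Anthropic suggests teams experiment with AI in the SOC rather than routing every raw event through a model.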
Bad actors scale simply by adding compute, unconstrained by finite personnel. Individuals now run large-scale campaigns that once required full teams, and operations proceed around the clock without interruption.
“This is the most change in the cyber environment, ever,” said Francis deSouza. “You have to fight AI with AI.”
Organizations that delay adopting AI-driven defenses do not stay neutral. They fall behind. The threat actors racing ahead have no intention of slowing down — and as AI cyberattacks grow more autonomous and more frequent, waiting is no longer a strategy any organization can afford.
Are your organization’s cybersecurity defenses built to withstand AI cyberattacks — or are you still relying on tools designed for a threat landscape that no longer exists?