
AI Is Being Weaponized For Cybercrime In ‘Unprecedented’ Ways, Researchers Warn

Authored by Tom Ozimek via The Epoch Times,

Artificial intelligence (AI) is being weaponized to conduct increasingly sophisticated cybercrimes, according to a new report from Anthropic, which warns of an “unprecedented” evolution in malicious operations that makes defense far more difficult.

In its Aug. 27 Threat Intelligence Report, the AI safety company described how criminals are embedding advanced models such as Claude into every stage of attacks—from reconnaissance and credential theft to ransomware and fraud. Researchers said AI tools are now acting not just as advisers but as active operators in real-time campaigns.

This “represents a fundamental shift in how cybercriminals can scale their operations,” the report said.

“Agentic AI systems are being weaponized” to perform sophisticated cyberattacks, not simply provide guidance, the researchers warned.

Cases in Point

The report highlighted several examples, including a large-scale extortion campaign, fraudulent employment scams run by North Korea, and ransomware sold on dark-web forums.

In one operation, for example, a hacker used Anthropic’s coding assistant Claude Code to infiltrate at least 17 organizations—including hospitals, emergency services, and government agencies. Claude was deployed to automate reconnaissance, penetrate networks, analyze stolen financial data, and generate persuasive, psychologically targeted ransom notes. Demands sometimes exceeded $500,000.

Rather than encrypting files, the attacker threatened to publicly expose exfiltrated data, ranging from health care records to government credentials. The report stated that this “vibe hacking” method shows how a single operator can now achieve the impact of an entire cybercrime team.

“It says, ‘here’s how much we think we should send the ransom note for,’ and then it actually helps write the ransom note to be as persuasive as possible,” one of the researchers said during a podcast discussing the operation. “So really, every step, end-to-end, AI is able to help with an attack like this,” including analyzing people’s financial details “to work out how much they can realistically be extorted for as well.”

Another case involved North Korean operatives who used Claude to pose as software engineers at U.S. Fortune 500 companies. The AI generated resumes, passed coding assessments, and even performed technical tasks, allowing unskilled workers to hold remote jobs and earn salaries that investigators say help fund the North Korean regime and its weapons programs.

In a third case, a UK-based actor leveraged Claude to build and market ransomware-as-a-service, selling malware packages for $400 to $1,200. Despite lacking advanced coding ability, the actor used AI to implement encryption, anti-detection techniques, and command-and-control infrastructure.

Growing Threat

Anthropic said these examples illustrate a broader pattern where criminals with little training can now use AI to scale attacks once reserved for sophisticated groups. “Traditional assumptions about the link between actor skill and attack complexity no longer hold when AI can provide instant expertise,” the report warned.

The company said it has banned accounts involved in the abuses, deployed new detection tools, and shared technical indicators with authorities. But it acknowledged that similar misuse is occurring with other commercial and open-source models.
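Shared technical indicators of this kind are typically lists of file hashes, domains, or IP addresses that defenders match against their own systems and logs. As a minimal defensive sketch (the indicator set and function names here are hypothetical illustrations, not taken from Anthropic's report; real feeds use formats such as STIX/TAXII), matching file hashes against a shared indicator list might look like:

```python
import hashlib
from pathlib import Path

# Hypothetical shared indicators of compromise (IOCs): SHA-256 digests of
# known-malicious files. For demonstration, this set contains only the
# well-known digest of an empty file.
SHARED_IOC_HASHES = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def sha256_of(path: Path) -> str:
    """Compute the SHA-256 digest of a file, reading in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def scan(paths):
    """Return the paths whose file hashes match a shared indicator."""
    return [p for p in paths if sha256_of(Path(p)) in SHARED_IOC_HASHES]
```

Production tooling layers far more on top of hash matching (fuzzy hashing, behavioral rules, network indicators), but the core exchange—defenders comparing local artifacts against indicators shared by researchers and authorities—follows this shape.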

“There are actually open source models out there now that are fine-tuned for this,” an Anthropic researcher warned during a podcast discussing the new phenomenon. “Cyber-criminals are developing weaponized LLMs [large language models] to conduct attacks.”

The implication of what the researchers described as an “evolution in AI-assisted cybercrime” is that defense and enforcement are becoming harder even as crimes aided by weaponized AI become more common.

National Security

Anthropic also announced the creation of a National Security and Public Sector Advisory Council, composed of former senators and senior officials from the Pentagon and intelligence community, to guide the company on high-impact defense applications of AI.

The move comes as Washington sharpens its focus on autonomous systems. President Donald Trump said on Aug. 25 that drones represent “the biggest thing that’s happened in terms of warfare” since World War II, citing Ukraine as proof that unmanned platforms are reshaping modern combat.

Some analysts and insiders, including autonomous drone company Airrow co-founder David Kaye, say pairing drones with AI could accelerate a shift toward “bots before boots” battlefields, where AI-assisted drones operate without humans nearby and carry out around-the-clock missions “with no risk, no fatigue, and no hesitation.”

Meanwhile, Geoffrey Hinton—the Nobel Prize-winning scientist known as the “godfather of AI”—has issued stark warnings that humanity risks being displaced by machines “much smarter than us.”

In a recent interview, Hinton said the danger of AI extends far beyond job losses, cautioning that if intelligent machines are not programmed to care for humans, they will “just take over” and replace us.

Tyler Durden
Thu, 08/28/2025 – 20:05
ZeroHedge News
