Artificial intelligence played a growing role in cybercrime during the final quarter of 2025. According to a new report from the Google Threat Intelligence Group, state-backed hackers increasingly used AI tools to improve reconnaissance, phishing campaigns, and malware development.
The findings update earlier research released in November 2025 and show that malicious actors are integrating AI into multiple stages of their attack workflows. Google says AI has boosted productivity for these groups, allowing them to scale operations faster and refine their techniques. In response, the company has expanded its mitigation strategies as AI-enabled threats continue to evolve.
Model Extraction Attempts Target AI Systems
Google’s analysts, working closely with DeepMind, identified a rise in model extraction attempts, also known as distillation attacks. These attacks try to replicate the behavior of proprietary AI models by misusing legitimate API access to query them at scale and harvest their outputs.
Although researchers did not observe advanced persistent threat groups directly attacking frontier AI systems, they did detect frequent extraction attempts by private companies and independent researchers. Google disrupted these efforts and strengthened safeguards to protect intellectual property. The company blocked more than 100,000 malicious prompts that aimed to replicate the reasoning processes of its Gemini models.
Model extraction exploits authorized access to probe an AI system in a structured way, using its responses to train a cheaper imitation. While knowledge distillation can serve legitimate research purposes, unauthorized replication violates providers’ terms of service and creates security and intellectual-property risks.
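The report does not describe how Google detects such probing. As a rough, hypothetical sketch of the kind of signal a provider might monitor, the example below flags API keys whose traffic is unusually high in volume and dominated by a single prompt template; the thresholds, field names, and helper are assumptions for illustration, not Google’s actual method.

```python
from collections import defaultdict
from dataclasses import dataclass

# Hypothetical thresholds for illustration only; real abuse-detection
# systems combine far richer signals (account history, embeddings, etc.).
VOLUME_THRESHOLD = 10_000        # prompts per day per API key
TEMPLATE_RATIO_THRESHOLD = 0.8   # share of prompts matching one repeated template


@dataclass
class PromptRecord:
    api_key: str
    template_hash: str  # hash of the prompt with variable slots stripped


def flag_extraction_candidates(records: list[PromptRecord]) -> set[str]:
    """Flag API keys whose traffic looks like systematic model probing:
    very high volume dominated by a small number of prompt templates."""
    volume = defaultdict(int)
    template_counts = defaultdict(lambda: defaultdict(int))

    for r in records:
        volume[r.api_key] += 1
        template_counts[r.api_key][r.template_hash] += 1

    flagged = set()
    for key, total in volume.items():
        if total < VOLUME_THRESHOLD:
            continue
        # If one template accounts for most of a huge traffic volume,
        # the pattern resembles structured extraction rather than normal use.
        top_template = max(template_counts[key].values())
        if top_template / total >= TEMPLATE_RATIO_THRESHOLD:
            flagged.add(key)
    return flagged
```

Real pipelines would weigh many more signals, but even a crude heuristic like this suggests why extraction campaigns on the scale of 100,000 prompts are difficult to hide.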
Government-Backed Groups Use AI for Phishing and Targeting
The report highlights growing AI use among threat groups linked to North Korea, Iran, China, and Russia. These actors relied on large language models to conduct technical research, identify targets, and craft more convincing phishing messages.
For example, the Iranian-linked group APT42 used Gemini to research individuals, develop realistic online personas, and localize phishing content. Meanwhile, the North Korean group UNC2970 applied similar AI techniques to defense-related targeting and tailored email campaigns.
AI tools helped attackers create personalized messages at scale, making phishing attempts harder to detect through traditional warning signs such as poor grammar, awkward phrasing, or generic greetings.
AI Supports Malware Development and Automation
Google also observed threat actors exploring agent-based AI tools to assist with malware creation, penetration testing, and automated coding. China-linked groups APT31 and UNC795 used AI for vulnerability analysis, code review, and tool generation.
Certain malware families incorporated AI APIs to generate follow-up malicious code. In addition, phishing kits such as COINBAIT used AI-generated interfaces to harvest login credentials more effectively.
These developments suggest that attackers are not only improving existing tactics but also automating complex technical processes.
Underground AI Marketplaces Expand
The report notes the rise of underground marketplaces offering AI tools designed for offensive cyber use. Some services claimed to run independent models but actually depended on commercial AI platforms. Misconfigured systems and exposed API keys contributed to a growing black market for AI resources.
Google responded by disabling abusive accounts and monitoring exploitation pathways more closely.
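Exposed keys of the kind described above are typically found with simple pattern matching before they reach underground markets. As a minimal, hypothetical illustration (the key format and scanner below are assumptions, not a tool named in the report), a defender might sweep a source tree for strings that look like hard-coded API keys so they can be rotated before abuse:

```python
import re
from pathlib import Path

# Hypothetical pattern based on the common "AIza..." Google API key format;
# production secret scanners combine provider-specific formats with entropy checks.
KEY_PATTERN = re.compile(r"\bAIza[0-9A-Za-z_\-]{35}\b")


def scan_for_exposed_keys(root: str) -> list[tuple[str, int]]:
    """Walk a source tree and report file/line locations that appear to
    contain hard-coded API keys."""
    hits = []
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue  # skip unreadable files
        for lineno, line in enumerate(text.splitlines(), start=1):
            if KEY_PATTERN.search(line):
                hits.append((str(path), lineno))
    return hits


if __name__ == "__main__":
    for location, lineno in scan_for_exposed_keys("."):
        print(f"possible exposed key: {location}:{lineno}")
```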
Strengthening Defenses Against AI Misuse
To counter these threats, Google continues to invest in proactive defenses. The company has enhanced detection systems, removed malicious infrastructure, and implemented safety controls to reduce misuse. It also works with industry partners to share intelligence and test secure AI frameworks.
Experimental projects such as Big Sleep and CodeMender demonstrate how AI can support vulnerability detection and automated remediation. These tools highlight the dual nature of artificial intelligence, which can strengthen cybersecurity while also creating new risks.
Growing Sophistication of AI-Enabled Threats
The Google Threat Intelligence Group warns that AI adoption among threat actors is accelerating. Phishing campaigns are becoming more targeted, malware is evolving faster, and reconnaissance efforts are more efficient.
The group says it will continue monitoring emerging risks and sharing intelligence to support threat-hunting efforts worldwide.