Floating Button
Home Digitaledge Cybersecurity

Enterprise AI agent boom draws attention of state-sponsored hackers: reports

Nurdianah Md Nur
Nurdianah Md Nur • 5 min read
Enterprise AI agent boom draws attention of state-sponsored hackers: reports
As Google reports AI misuse by state actors, Microsoft and Tenable highlight visibility and identity gaps inside fast-growing agent ecosystems. Photo:
Font Resizer
Share to Whatsapp
Share to Facebook
Share to LinkedIn
Scroll to top
Follow us on Facebook and join our Telegram channel for the latest updates.

State-sponsored hackers from the Democratic People's Republic of Korea, Iran, the People's Republic of China, and Russia are experimenting with commercial artificial intelligence (AI) models to sharpen cyberattacks. This creates a new layer of risk as companies embed AI agents deeper into their daily operations.

In a Feb 12 report, the Google Threat Intelligence Group (GTIG) said it detected government-backed actors using its Gemini large language model in late 2025 to support reconnaissance, generate phishing lures and support malware development. Some threat groups even explored building agentic AI capabilities to support campaigns, including prompting Gemini with fabricated cybersecurity expert personas and attempting to create an AI-integrated code auditing capability.

The findings come at a time when enterprise AI adoption is accelerating. Microsoft’s Cyber Pulse Report, published on Feb 11, reveals that over 80% of Fortune 500 companies now run active AI agents built with low-code tools to automate business processes. As these systems scale across the organisation, cybersecurity teams are struggling to track where AI is deployed and what access it holds.

The growing visibility gap

The challenge extends beyond approved deployments. Seven in 10 organisations were found to have integrated AI or Model Context Protocol third-party packages "often without central security oversight," according to the Cloud and AI Security Risk Report 2026 by exposure management company, Tenable.

Tenable describes an “AI exposure gap” spanning applications, infrastructure, identities and data. In cloud environments analysed between April and December 2025, the pace of AI adoption and third-party code integration outstripped the ability of security teams to assess and remediate risks.

See also: Enviro-Hub suffers from cyberattack

Only 47% of organisations have implemented dedicated generative AI security controls, according to Microsoft’s 2026 Data Security Index. At the same time, 29% of employees have used unsanctioned AI agents for work tasks, based on another survey of more than 1,700 data security professionals commissioned by Microsoft.

The double agent problem

Security researchers warn that AI agents themselves are now part of the risk equation.

See also: 2026 is the dawn of the adaptive identity era

Microsoft’s Defender team recently uncovered a campaign exploiting “memory poisoning,” a technique that manipulates an AI assistant’s stored context to influence future responses. In separate testing, Microsoft’s AI Red Team demonstrated how AI agents could be misled through deceptive prompts and manipulated task framing.

Identity sprawl compounds the issue. Tenable found that non-human identities, including AI agents and service accounts, represent 52% of risk compared with 37% for human users. These accounts can form what the company called “toxic combinations” of permissions that fragmented security tools fail to detect.

Moreover, 18% of organisations have granted AI services administrative permissions that are rarely audited. Nearly two-thirds (65%) possess unused or unrotated cloud credentials, with 17% tied to critical administrative privileges. Nearly half of identities with critical excessive permissions are dormant, according to Tenable.

"Lack of visibility and governance means teams are at the mercy of new exposures, including over-privileged identities in the cloud," says Liat Hayun, Tenable’s senior vice president of product management and research.

The result is a widening gap between how fast AI systems are deployed and how rigorously they are governed.

Phishing gets an AI upgrade

The operational implications are already visible.

To stay ahead of the latest tech trends, click here for DigitalEdge Section

Iranian state-backed group APT42 used Gemini to profile targets and craft what GTIG describes as “hyper-personalised, culturally nuanced lures” designed to evade traditional phishing detection. By feeding biographical information into the model, APT42 generated tailored personas to increase engagement.

Similarly, North Korean actor UNC2970 leveraged Gemini to synthesise open-source intelligence on cybersecurity and defence firms, mapping technical roles and salary data to create convincing phishing identities.

Chinese threat group APT31 went further by prompting Gemini with fabricated cybersecurity expert personas to automate vulnerability analysis and produce testing plans against US-based organisations. In one instance, APT31 directed the Gemini model to analyse remote code execution techniques and SQL injection test results, GTIG reported.

Google said it has disabled accounts linked to the malicious activities.

AI-integrated malware emerges

GTIG’s report also highlights emerging malware families experimenting with AI integration. Discovered in September 2025, HONESTCUE used Gemini’s API to generate C# code that downloaded and executed second-stage payloads directly in memory to help evade disk-based detection.

Another toolkit, COINBAIT, identified in November 2025, was built using the AI-powered platform Lovable AI to mimic a cryptocurrency exchange. Its detailed developer-style logging messages suggested AI-generated code, says GTIG. Based on infrastructure overlaps, Google assessed the activity overlaps with UNC5356, a financially motivated threat cluster.

The supply chain dimension is significant. Tenable reported that 86% of organisations host third-party code packages with critical-severity vulnerabilities, making the software supply chain "a primary and persistent source of cloud exposure." Additionally, 13% have deployed packages with a known history of compromise, including worms such as s1ngularity and Shai-Hulud.

Adoption outpaces safeguards

GTIG says it has not observed advanced persistent threat (APT) groups achieving breakthroughs that fundamentally reshape the threat landscape. Still, it characterised recent activity as early indicators of how adversaries could operationalise AI in future campaigns.

Those warning signs are emerging as enterprises accelerate AI agent deployment. Microsoft telemetry from November 2025 showed financial services accounting for 11% of active agents globally, manufacturing 13% and retail 9%. Regionally, Europe, the Middle East and Africa led adoption at 42%, followed by the US at 29%, Asia at 19% and the Americas at 10%. Low-code tools are widening participation beyond traditional developer communities, speeding uptake among knowledge workers.

GTIG and Microsoft researchers argue the answer is not to slow adoption, but to strengthen controls. They are urging organisations to apply Zero Trust principles to AI systems, treating AI agents like human employees with least-privilege access, explicit verification and continuous monitoring.

Google says it has strengthened safeguards to block malicious prompts and detect model extraction attempts and continues to work with law enforcement where appropriate. Meanwhile, Microsoft calls for tighter governance, greater observability and closer coordination across business, IT and security teams as companies scale their AI deployments.

×
The Edge Singapore
Download The Edge Singapore App
Google playApple store play
Keep updated
Follow our social media
© 2026 The Edge Publishing Pte Ltd. All rights reserved.