If there is anything that the last few years of digital transformation have taught us, it is that resilience is the currency of today’s modern economy. Recent high-profile infrastructure outages have dominated the news cycle, reinforcing the fears or CIOs and business leaders about critical systems going down.
In the traditional digital economy, a system crash means screens go black, transactions halt and supply chains freeze. While the cost to end users is downtime, the true cost to businesses is the revenue lost per minute.
As businesses across Asia Pacific (Apac) aggressively adopt new technologies such as Agentic AI, the definition of a system crash will fundamentally change. When an AI model is compromised, whether through prompt injection, data leakage, or adversarial manipulation, the system likely remains online, continuing to respond to queries and process data. On the surface, the engines appear to be running, but beneath the hood, the damage is already done.
In this context, the cost is not downtime, but trust. In 2026, running a system that lies to you, or worse — leaks your secrets — is far more dangerous than a stopped system that tells you nothing.
From chatbot to digital employee
We are rapidly moving past the era of the simple chatbot with cookie-cutter responses. Today, companies are effectively hiring "digital employees" in the form of Agentic AI systems capable of autonomous decision-making. These agents are already handling everything from customer service resolution to analysing complex legal contracts and managing supply chain workflows.
See also: Revolutionising the foundations for corporate infrastructure transformation
The scale of this shift is massive. Recent data from McKinsey indicates that 88% of organisations now use AI in at least one function. What we may be failing to realise is that we are also effectively handing these agents the "keys to the kingdom.”
In some cases, this means granting them access to sensitive proprietary data and customer interactions — often without the guidance and supervision typically required of a human hire. This is reflected in the Thales 2025 Data Threat Report, which reveals that more than 65% of Apac organisations view the rapid pace of AI development, particularly generative AI (GenAI), as the leading security concern associated with its adoption.
We wouldn’t hire inexperienced employees, give them access to our most sensitive IP and leave them without supervision or contractual safeguards. Yet, many organisations deploy AI agents without the digital equivalent of a contract, which are the security policies that govern their behaviour and enforce boundaries.
See also: Namibia orders Elon Musk's Starlink to cease all operations in the country
The risk of an AI "breaking"
The unique threat of Agentic AI is that it doesn't need to be hacked in the traditional sense; it just needs to be tricked. Competitors or bad actors can use "social engineering" tactics on your AI, much like they might try to manipulate a human receptionist. Through prompt injections, an attacker can coerce an AI to ignore its safety guidelines, potentially tricking it into revealing confidential pricing strategies, unreleased product roadmaps, or private customer data.
Furthermore, the threat isn't just information leakage; it is resource exhaustion. Just as a DDoS attack can crash a web server, an LLM (Large Language Model) can be paralysed by complex, anomalous queries designed to tie up its computing power.
Worse, attackers can force your AI into expensive processing loops that not only block legitimate users but also rack up massive API bills in the background. This creates a dual failure: a system that is ineffective for your employees and costs you money every minute it stays online.
Far from being just technical glitches, these are business failures which result in financial risk, reputational damage, and loss of trust, which are harder to fix than a server reboot.
The solution: Security that “thinks”
How do we secure a workforce that moves this fast? Traditional cybersecurity tools are like perimeter fences; they protect the office door. But AI operates inside the room. We need a security architecture that sits alongside the AI as it thinks, checking its logic in real-time.
This approach is known as Runtime Security. Think of Runtime Security as a digital compliance officer sitting next to every AI agent. It monitors the conversation in real time, ensuring the AI doesn't access restricted files, such as HR records or unreleased financial results, and that it doesn't answer malicious or manipulative questions.
Crucially, this allows the AI to retain access to the data it needs to be useful, without exposing the company to data leakage and the resulting regulatory fines, by shifting security from a gatekeeper to a guardrail.
To stay ahead of the latest tech trends, click here for DigitalEdge Section
Confidence to scale
For too long, security has been viewed as a roadblock to innovation. In the AI era, this dynamic is reversed, where security is considered an enabler.
Take, for example, a Formula One car that can travel at 300 km/h, not just because it has a powerful engine, but because it has control systems capable of handling that speed. Without the confidence that you can stop or steer safely, the car cannot operate at its highest performance level. The same applies to Agentic AI: you cannot scale what you cannot control.
To truly capture the value of AI for business growth in 2026, leaders must demand the same level of resilience for their “digital workforce” as they do for their physical infrastructure. Only by securing the fabric of our AI can we move from experimentation to true enterprise scale.
Daniel Toh is chief solution architect (Asia Pacific and Japan) at Imperva, a Thales company

