AI Is Shrinking The Time To Compromise. Most Firms Still Can’t Recover Control
As AI shortens the path from vulnerability to attack, most organizations are still unprepared to regain control once systems are compromised

Editor’s Note: Asia Tech Lens turns one this week. The goal when we started was straightforward: track Asia’s technology rise with perspective, without the hype, and free from rhetoric. A year in, readers from nearly 60 countries have found their way here—many in the U.S., the U.K., and across Europe, alongside strong communities in Singapore and India. Different geographies, one shared question: how technology actually gets built and scaled in this part of the world.
That readership has clarified what this publication is.
Asia Tech Lens is defined less by where the stories come from than by how we approach them. We look at Asia the way a builder would, focusing on execution, trade-offs, and the systems beneath the headlines, and we explain what we find in a way that travels across markets.
The direction is set. The next year is about going deeper.
Thank you for being part of our journey. Here’s to the next chapter.
Much of the discussion around AI in cybersecurity has focused on faster, more scalable attacks. But that framing is incomplete. As AI shortens the gap between a weakness being found and being exploited, the bigger question for operators is what happens next. Once core systems are hit, can they regain control and keep the business running?
The recovery gap is measurable. Veeam’s 2025 ransomware research found that nearly 7 in 10 organizations experienced at least one cyberattack in the previous year, yet only 10% recovered more than 90% of their data, and 57% recovered less than half.
Many organizations believe they are ready because they rely on plans, backups, and recovery targets. In practice, those do not guarantee that a critical system can be restored under real conditions.
As the window to respond shrinks, the problem shifts. Restoring systems alone is not enough when identity, access, backups, and dependencies are uncertain. Operators also have to rebuild trust in what comes back online.
AI Is Shrinking The Time Between Vulnerability and Compromise
AI is making it easier to surface vulnerabilities across systems and applications. In Singapore, cybersecurity awareness is relatively mature. Still, AI is already changing how enterprises think about exposure, according to Joey Lim, Country Manager at Exclusive Networks Singapore, a global cybersecurity specialist.
“On the ground, we see a shift from periodic security assessments toward a more continuous posture,” Lim told Asia Tech Lens. “Organizations are asking harder questions about their attack surface, not just what they know about, but what they don’t. And that’s the right instinct.”
That compression changes which capability matters most. When the window between vulnerability and exploitation was measured in weeks, detection and patching kept most incidents from reaching the recovery phase. As that window shrinks, more incidents will get through. Recovery stops being the fallback—it becomes the front line.
That leaves operators with a harder question. When something gets through, can they respond quickly, regain control, and recover before the damage spreads?
The Real Bottleneck Is Recovery
Backups can make organizations feel safer than they are. A completed backup job shows that data exists somewhere, but it does not prove the business can recover.
For Gareth Russell, Field CTO, APAC, at Commvault, the starting point is no longer how fast an organization can restore systems. “In a cyber incident, speed without trust is a huge risk,” he told Asia Tech Lens.
The more important question is how quickly a company can identify a known clean state, reestablish trusted control, and bring back a service it can rely on. That is the difference between trusted recovery and simply powering systems back on.
Traditional metrics such as recovery time objective (RTO) and recovery point objective (RPO) still matter, but Russell said they often reflect intent rather than reality. What matters most is whether organizations can recover a clean, usable service end-to-end without reintroducing the threat.
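The gap between intent and reality in those two metrics is easy to illustrate. A minimal sketch, using timestamps invented for this example:

```python
from datetime import datetime, timedelta

# Illustrative only: RTO is how long the service was down; RPO is how much
# data was lost, i.e. the gap between the last clean backup and the outage.
last_good_backup = datetime(2025, 3, 1, 2, 0)    # nightly backup completed
outage_start     = datetime(2025, 3, 1, 9, 30)   # ransomware detonates
service_restored = datetime(2025, 3, 1, 17, 30)  # clean service back online

rpo_actual = outage_start - last_good_backup     # data written after 02:00 is lost
rto_actual = service_restored - outage_start     # downtime actually experienced

print(rpo_actual)  # 7:30:00
print(rto_actual)  # 8:00:00
```

An organization may have set an RTO of four hours on paper; whether it can hit that number against a real incident, with verified-clean data, is what Russell’s point turns on.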
“When I talk to CIOs and CISOs about recovery readiness today, we look at things like time to clean recovery, coverage of immutable and verified data, the ability to regain control of identity systems, and whether recovery has been tested under realistic conditions,” Russell said.
Across Asia, Commvault has found that 85% of organizations have incident response plans, but only 30% test all mission-critical workloads. When a real incident hits, those plans often do not hold, and recovery takes longer than expected.
The gap shows up in execution. Restoring a database or application is one thing. Getting the business running again during a real incident is another.
Recovery, in this sense, means testing the full process before a real incident, when there is no room to guess.
Identity Is Where Recovery Often Breaks
If identity is compromised, recovery can’t start with simply restoring workloads. The organization first has to decide who and what can still be trusted. Otherwise, bringing systems back may also bring back the attacker’s access. Russell said identity failure changes the nature of recovery.
“In most incidents, it is not just that access is lost, it is that you cannot trust who or what is accessing anything,” he said. Federation fails, tokens may still be valid, service accounts may keep running, and organizations can lose control of the control plane.
“Teams often try to recover workloads before identity is stable. That is where things fall apart. If identity is not clean, nothing you bring back can be trusted,” Russell added.
Lim described how this failure unfolds in real time. A threat is detected, but the scope is unclear, so escalation is delayed. By the time leadership is engaged, critical hours have passed. The response team then realizes the incident response plan no longer matches the current environment. Systems have changed, contacts are outdated, and dependencies are unclear.
At the same time, attackers move laterally and target credentials. The team now faces a more dangerous problem. They no longer know which accounts, systems, or access paths can be trusted. Recovery slows as every action needs to be verified.
This is where the incident turns. Shut down too broadly and operations are disrupted; move too cautiously and the attackers may still be active. “This is often where a serious incident becomes a catastrophic one,” said Lim.
When identity is uncertain, recovery becomes a series of high-risk decisions made without clear visibility of what is safe.
Operators’ Takeaways
Do Now
Prove recovery. Test one critical service end-to-end, with clean data, trusted access, and dependencies restored in the right order. Russell said the minimum proof is a full recovery under real conditions, where users can log in and the service works.
Map your exposure. Understand not just known assets, but shadow IT, cloud workloads, and third-party integrations. Lim said the assets that organizations do not track are often the ones that get exploited first.
Harden identity. Reduce over-privileged accounts, clean up service accounts, and enforce consistent multi-factor authentication. Lim warned that many teams are still “working blind” when identity is compromised.
Run real drills. Test recovery under realistic conditions, not just tabletop exercises. Recovery needs to be proven in execution, not assumed because plans exist.
Wait
Hold off on more AI-security tools until recovery basics are proven. Lim said the bigger issue is not adding AI features, but whether the response is built for machine-speed attacks.
Be realistic about in-house response. A full 24/7 detection-and-response capability is “not realistic for most” organizations.
Delay framework rewrites if plans have not been tested. According to Russell, recovery readiness comes from repeatable execution, not documentation.
Avoid
Do not assume backups guarantee recovery. Russell said untested and compromised backups are common failure points.
Do not restore systems before identity is trusted. If identity is still compromised, restored systems cannot be trusted. “Break glass only works if it is genuinely separate, operationally ready, and trusted when everything else is not,” said Russell.
Do not treat compliance as resilience. Compliance sets a baseline but “doesn’t give you resilience,” said Lim.
Do not treat cybersecurity as only a technology problem. Communication and decision-making often fail first.
Related Reads On Asia Tech Lens
AI Is Accelerating Cybercrime—and Southeast Asia Feels It First
AI is making cybercrime faster, cheaper, and easier to scale across Southeast Asia. This piece explains why organizations have less time to detect, contain, and recover from attacks.
Agentic AI Can Act. Singapore’s New Rulebook Says It Needs Guardrails
As AI systems gain more autonomy, the risks shift from what they can generate to what they can do. We look at why permissions, oversight, and recovery planning matter before AI agents are allowed into real workflows.
Two AI Phones. Two Access Models. One Critical Difference.
AI access is becoming an operational risk, not just a product feature. This piece compares two approaches to AI control and shows why permissions, identity, and trust layers matter as systems become more automated.
Why ByteDance’s AI Phone Hit a Wall
When AI agents start acting across apps and services, security and accountability become the real constraints. The piece looks at why uncontrolled access can quickly turn AI capability into operational risk.
India’s AI Push Is Real. Production Access Is the Constraint
AI ambition only matters if systems can work under real operating conditions. This piece examines why production access, auditability, and incident ownership are becoming the true tests of AI readiness.

