Agentic AI Can Act. Singapore’s New Rulebook Says: Prove You Can Stop It.
Singapore’s new Model AI Governance Framework for Agentic AI is voluntary, but it is already a procurement and audit baseline: bound autonomy, prove oversight, and design rollback before you scale
Agentic AI, a class of autonomous systems that can plan and act without step-by-step human supervision, is becoming more common and more capable.
An agent that can draft an email is one thing. An agent that can send it, trigger a payout, or change settings in a live system is a very different kind of risk. The operator question is no longer whether AI can help with everyday tasks, but whether it can safely be allowed to execute actions in regulated systems this year, or whether it should stay in recommend-only mode until controls are provable.
Autonomy is practical and convenient, but it also carries serious risks. Failures can threaten safety and financial stability, making governance and control mechanisms central to deployment decisions.
IMDA launched Singapore’s Model AI Governance Framework for Agentic AI in January to promote responsible deployment. The voluntary framework is intended to move faster than legislation and to shape what companies can defend in audits, risk reviews, and procurement discussions.
Prof Wei Lu from NTU’s College of Computing & Data Science bluntly describes the shift. “At a fundamental level, the shift from generative AI to agentic AI marks a move from AI as an ‘advisory co-pilot’ to an ‘operational actor’,” he tells Asia Tech Lens.
The biggest risk in early rollouts is giving an agent too much power over legacy systems, where small mismatches can cascade once automation is introduced. As Lu points out, the fix is to set strict boundaries and grant the AI only the minimum access it needs to do its job.
Before an agent gets permission to act, operators need to answer three questions: do you know exactly what the agent is allowed to do, can you audit every action it takes, and can you roll it back quickly when needed? If the answer to any of these is no, you are not making a technical choice; you are taking a governance risk.
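What that test looks like in practice varies by stack, but the shape is simple. Here is a minimal sketch of a pre-execution gate built around those three questions; the action names, the allow-list, and the gate itself are illustrative assumptions, not something the guidelines prescribe:

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Callable, Optional

# Illustrative allow-list: the agent may only touch actions named here.
ALLOWED_ACTIONS = {"draft_email", "update_crm_note"}

@dataclass
class AgentAction:
    name: str
    params: dict
    rollback: Optional[Callable[[], None]] = None  # how to undo this action

audit_log: list[dict] = []

def execute(action: AgentAction) -> str:
    # 1. Do you know exactly what the agent is allowed to do?
    if action.name not in ALLOWED_ACTIONS:
        return "blocked: action is outside the allow-list"
    # 2. Can you roll it back quickly when needed?
    if action.rollback is None:
        return "recommend-only: no rollback path, so a human executes"
    # 3. Can you audit every action? Record it before acting.
    audit_log.append({
        "action": action.name,
        "params": action.params,
        "at": datetime.now(timezone.utc).isoformat(),
    })
    # ...perform the real side effect here...
    return "executed"
```

A gate this small is obviously not production-grade, but if your architecture has no place where those three checks could live, that is the governance gap the framework is pointing at.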
Charmian Aw, partner (data, privacy, and cybersecurity) at Hogan Lovells, believes that operators need to be extra careful when using agentic AI in sectors such as finance and healthcare.
“These are repeatedly identified in the Guidelines as high‑impact areas where erroneous or unauthorized autonomous actions can lead to significant harm for individuals,” she tells Asia Tech Lens.
For anyone running these systems, the message is clear: pick exactly which tasks the AI can touch, and make sure humans are always in the loop to spot and stop mistakes before they spread.
Why Voluntary Rules Still Matter
While Singapore’s guidelines are not law yet, they give companies a head start on future regulations.
Aw points out that in a market without strict AI laws, these guidelines serve as a much-needed benchmark. Companies can use them to assess their own risks and conduct audits, demonstrating they’re keeping up with emerging regulatory expectations.
“In practice, commercial entities frequently adopt non‑binding frameworks as procurement and vendor‑management baselines, which may create commercial pressure for service providers to align with the guidelines even without legal compulsion,” Aw said.
For operators running these systems, the guidelines work as a checklist: are your agent’s actions restricted, easy to track, and, most importantly, reversible?
“We see the guidelines as likely to influence deployment and rollout, with organizations in highly-regulated sectors adopting tighter controls such as autonomy limits and layered approvals, consistent with the guidelines’ emphasis on restricting bounded autonomy and meaningful human oversight,” Ciara O’Leary, associate (data, privacy, and cybersecurity) at Hogan Lovells, explains.
Even though it is not binding, the framework raises the bar. Teams will need to show they can monitor what an agent does, explain why it did it, and roll back mistakes before giving it more freedom.
What Breaks First
Early failures in agentic AI deployments often occur at the interfaces among agents, tools, and human workflows, especially in legacy environments that rely on implicit judgment rather than explicit, machine-readable rules. In such settings, small mismatches can quickly escalate once automation is introduced, according to Prof Lu.
Most incidents will not look like sci-fi autonomy. They will look like silent misrouting: the agent writes to the wrong field, triggers the right workflow with the wrong parameters, or sends the right message to the wrong customer.
This makes observability a critical bottleneck.
“Agentic systems operate at machine speed across multiple tools and environments, generating large volumes of unstructured reasoning traces and execution logs that are difficult to monitor or interpret in real time using existing enterprise tooling,” Lu said, adding that this makes it challenging to detect abnormal behavior early or to reconstruct what went wrong after an incident.
Accountability also gets murky when responsibility is spread across many hands. To stay safe, design rollback paths wherever you can. The operational rule is simple: don’t give the system more power than your safety nets can catch. If you can’t detect and fix a mistake quickly, keep the agent in recommend-only mode.
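One way to make “reversible” concrete is the compensating-action pattern: every automated step ships with its own undo, and a failure unwinds the completed steps in reverse order. A minimal sketch, with hypothetical step names and no claim to match any particular vendor’s tooling:

```python
from typing import Callable

# Each step carries its own compensating action: (name, do, undo).
Step = tuple[str, Callable[[], None], Callable[[], None]]

def run_with_rollback(steps: list[Step]) -> bool:
    done: list[Step] = []
    for name, do, undo in steps:
        try:
            do()
            done.append((name, do, undo))
        except Exception as err:
            print(f"step {name!r} failed: {err}; rolling back")
            # Unwind everything that already ran, newest first.
            for _, _, prev_undo in reversed(done):
                prev_undo()
            return False
    return True
```

The corollary is the recommend-only rule in code form: a step with no workable undo (an email already sent, a payout already cleared) does not belong in the automated list at all.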
Controls Before Scale
Early deployments of agentic AI will likely incorporate human-based accountability mechanisms, which the guidelines explicitly emphasize and which are reflected across global regulatory approaches, according to Aw.
“Given the relative immaturity of agentic systems, sophisticated organizations will likely prioritize controls that strengthen accountability, observability, and reversibility,” she says.
In practical terms, prioritize human approvals: don’t let the agent make big calls, such as sending a payment or changing safety settings, without a human signing off first. Keep a clear audit trail of everything it does, test the system regularly, and make sure you’re still the one in control.
Echoing this view, Prof Lu said that governance must evolve from content moderation toward more demanding priorities, such as enforcing behavioral guardrails with clearly defined action-space boundaries, and extending the principle of least privilege from human users to agent identities.
“Critically, these boundaries must be defined at the design stage, not retrofitted after deployment,” he said.
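A design-stage boundary can be as plain as a default-deny policy table: each agent identity gets its own least-privilege action space, and high-impact actions route to a human for sign-off. The agent names and actions below are invented for illustration; they are not drawn from the guidelines:

```python
# Hypothetical per-identity policies, written before deployment.
AGENT_POLICIES = {
    "support-triage-agent": {
        "allowed": {"read_ticket", "draft_reply", "tag_ticket"},
        "needs_human_approval": {"send_reply"},
    },
    "finance-reconciliation-agent": {
        "allowed": {"read_ledger", "flag_mismatch"},
        "needs_human_approval": {"post_adjustment"},
    },
}

def check(agent_id: str, action: str) -> str:
    policy = AGENT_POLICIES.get(agent_id)
    if policy is None:
        return "deny"  # unknown identity gets no access at all
    if action in policy["needs_human_approval"]:
        return "hold-for-approval"  # a human signs off first
    if action in policy["allowed"]:
        return "allow"
    return "deny"  # default-deny: anything undeclared is out of bounds
```

The line that matters is the last one: anything the policy does not name is denied, which is what defining boundaries at the design stage, rather than retrofitting them, means in practice.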
For operators, treat agentic AI as a staged rollout. Start with low-risk, easy-to-undo tasks, and give the AI more freedom only once you have seen it work and you know you can catch mistakes before they spread.
Regional Implications
Singapore is ahead of its neighboring Southeast Asian countries in guiding agentic AI, but that doesn’t mean the guidelines will become a single regional standard.
“Southeast Asia reflects a highly heterogeneous and decentralized regulatory landscape. Jurisdictions are taking divergent, and in many cases, deliberately sovereign approaches to AI governance, shaped by differing political priorities, institutional capacities, and levels of digital‑economy maturity,” said Aw.
While the guidelines may inform regional thinking, they seem unlikely to displace domestic regulatory preferences.
For Chinese tech companies operating in Singapore or Southeast Asia, Aw believes that the implications are limited. She points out that China already operates under a set of binding and prescriptive AI regulations, including the Algorithm Recommendation Rules, the Deep Synthesis Rules, and the Generative AI Measures.
ByteDance’s Doubao “AI phone” episode is a reminder that agents are arriving before the scaffolding is ready. If the guardrails do not align with platform rules, autonomy becomes untraceable and reversibility becomes theoretical. For operators, strategy comes first: define controls and boundaries before deployment.
Singapore has set a reference point, not a universal template. For operators deploying agentic AI across Asia, treat Singapore’s framework as the reference procurement baseline, then tune autonomy and evidence requirements to each regulator’s enforcement posture. Do not assume portability across markets unless your control plane is portable.
Related Reading On Asia Tech Lens
Budget 2026 Puts AI Into Execution Mode. Operators Need To Fund Foundations Before Features
Singapore’s AI push is shifting from experimentation to production.
The Chinese New Year AI Gateway War: The Big Four’s Fight for Daily Habit
A distribution-first AI race: red packets, subsidies, and bundling tactics designed to force trial at scale.
What Tencent’s ‘Yuanbao PAI’ Reveals About Its AI Strategy
Tencent’s bet is “Social + AI”, one that rides the existing WeChat social mechanics rather than trying to build a new habit from scratch.
Why ByteDance’s AI Phone Hit a Wall: Security, Fair Play, and the Economics of Attention
A cautionary tale of cross-app agency. When an assistant behaves like an operator across apps, platforms start treating it as automation abuse, triggering security and policy constraints that can kill distribution even if the product works.
Asia’s Agentic Moment: The Manus Interview
The interview frames the real operator risks (permission creep, brittle tool integrations, and silent workflow errors) and the controls that separate a pilot from production: scoped identities, least privilege, full action trails, and fast rollback.
At WAIC Hong Kong, the AI Conversation Has Moved Past the Model Race
A conversation with Steven Hoffman on where the market is actually heading: away from “who has the best model” and towards who can monetize, distribute, and operate AI at scale.
AI Is Accelerating Cybercrime, and Southeast Asia Is Where The Damage Shows Up
Maps AI capability to attacker advantage: faster fraud, better impersonation, and identity compromise at scale. Reinforces why agentic deployment must be paired with monitoring, controls, and incident response, not just model upgrades.


