Vietnam’s New AI Law: The Road Ahead For Businesses
Vietnam’s AI Law phases in enforcement. Use the transition window to build evidence trails, vendor change control, and disclosure gates, or procurement and audits will stop deployments.

A chatbot flow gets updated. A vendor proposes a “slightly better” model. A team automates a step in approvals.
Nothing dramatic. Just the steady drip of AI becoming normal.
Then comes the question businesses rarely prepare for: can you prove what this system is doing, who it affects, and what happens when it changes?
Vietnam’s AI Law makes those questions unavoidable. It took effect on 1 March 2026, but businesses running existing AI systems have a transition window: 18 months (1 September 2027) for healthcare, education, and finance, and 12 months (1 March 2027) for other sectors. But that window is not a pause button. It is the time you have to build documentation, logging, vendor change control, and disclosure workflows before procurement, audits, or regulators can block deployment.
The law’s core mechanic is straightforward: classify AI by risk, then apply obligations accordingly. Systems fall into high-risk, medium-risk, or low-risk, and providers must self-classify, keep documentation supporting that classification, and for medium- and high-risk systems, notify the Ministry of Science and Technology through the national AI portal before deployment.
The International Association of Privacy Professionals (IAPP) describes this as a “risk-based management” approach, similar in structure to the EU’s AI Act, which in practice means organizations need to explain, consistently, how systems were classified and what controls sit around them.
Most deployments will not fail because a model is unsafe. They will fail because the business cannot prove what model ran, on what data, with what oversight, and what changed. In practice, the pressure shows up in three choke points. Procurement blocks deployments when vendors cannot provide audit trails or change notes. Audit blocks them when the organization cannot reconstruct decisions, including what version ran and what oversight existed. Disclosure blocks them when labeling exists only as policy and the workflow does not enforce it consistently.
Vietnam borrows two enforcement instincts: the EU’s pre-deployment discipline and China’s labeling and platform responsibility. Because it draws on both, the first friction will appear in procurement, auditability, and disclosure workflows.
A defining feature of the AI Law is that it regulates by role, not by industry. It distinguishes between the developer, provider, deployer, user, and affected person. That role mapping matters because it determines who is responsible for documentation, oversight, vendor obligations, and disclosure.
Build the AI Inventory
AI already appears in more places than internal reporting suggests. It is embedded in customer chat and automated replies, fraud alerts and transaction monitoring, queue triage and routing, recommendation systems, and content workflows. Some uses are explicit. Others are bundled into vendor products and automation features that may not be labeled as AI.
The point of the inventory is to make it clear what AI is in use, where it matters, and who owns it, so businesses can move on to risk triage, minimum controls, vendor requirements, and disclosure decisions.
For each system in use, capture:
System name and business owner.
Where it is used (customer interaction, approvals or eligibility, prioritization/triage, safety monitoring, fraud/AML flags, content generation).
What it influences (recommendation only, or automated action).
Sourcing (internal build, vendor system, or hybrid).
Data exposure (customer, health, financial, operational).
Change history (customization, fine-tuning, new data sources, expanded use cases).
That final point is often underestimated. In a role-based regulatory structure, incremental modifications can shift responsibilities and trigger re-assessment expectations. A strong inventory makes such changes visible early, before a vendor update, an incident, or an audit forces the issue.
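Captured as data rather than a spreadsheet tab nobody maintains, an inventory record might look like the sketch below. This is a minimal illustration only; the field names, values, and the dataclass approach are assumptions for internal use, not terminology or a format prescribed by the law.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AISystemRecord:
    """One row in an internal AI inventory. All field names are illustrative."""
    name: str                       # system name
    business_owner: str             # accountable owner, not just the IT contact
    use_context: str                # e.g. "customer interaction", "fraud flags", "approvals"
    influence: str                  # "recommendation_only" or "automated_action"
    sourcing: str                   # "internal", "vendor", or "hybrid"
    data_exposure: list[str] = field(default_factory=list)   # e.g. ["customer", "financial"]
    change_history: list[str] = field(default_factory=list)  # fine-tuning, new data sources, expanded use
    last_reviewed: date | None = None

# Example entry: a vendor chatbot that answers account questions
support_bot = AISystemRecord(
    name="support-chat",
    business_owner="Head of Customer Operations",
    use_context="customer interaction",
    influence="recommendation_only",
    sourcing="vendor",
    data_exposure=["customer"],
    change_history=["2026-04: vendor model upgrade", "2026-07: added billing FAQ data"],
)
```

Even a record this simple makes the change-history question visible: if the vendor upgrade in the example had expanded the bot’s decision authority, the inventory would show exactly when re-assessment should have been triggered.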
Triage What Will Become High-Risk
A workable triage starts with two questions.
First: does the system influence consequential outcomes?
If an AI system can affect safety, legal rights, access to services, or material financial outcomes, treat it as high-risk until assessed otherwise. This commonly includes approvals and eligibility decisions, credit or insurance outcomes, medical or safety-related triage, and automated enforcement actions.
High-risk status matters because conformity assessment is a condition for being put into use. Assume modifications trigger reclassification duties, especially new data sources, new user groups, or expanded decision authority.
Also, most companies are deployers even when they think they are “just using software,” and deployer obligations do not disappear because the model is “the vendor’s problem.”
The cliff edge is the Prime Minister’s high-risk list. It will specify which high-risk categories require mandatory pre-use conformity certification before being put into service, and it may arrive late in the transition window. Do not treat the absence of the list as clearance. Treat any likely high-risk system as if it could land on the certified subset and build the evidence trail now: classification memo, logs and versioning, human oversight and rollback, and vendor change control that survives procurement and audit.
Second: could it reasonably mislead, influence, or manipulate people who do not realize they are dealing with AI?
Many customer-facing systems land here. Medium-risk systems include customer support chat that sounds human and provides account or service guidance without explicit disclosure that it is AI.
The inventory should already reveal which systems will be hardest to defend later. The goal of triage is to surface those systems early, before a vendor update, expanded use case, or external scrutiny turns a workable deployment into a stop-ship.
Minimum Viable Compliance
AI inventory exists (named owner + system purpose + data class + deployment surface)
Risk classification memo exists (why it’s low/medium/high + what would change that)
Evidence capture works (logs/versioning/inputs/outputs stored and retrievable)
Human override path exists (who/when/how recorded)
Rollback path exists (pause/revert/manual route)
Vendor change control exists (notification + audit trail access + incident support)
Disclosure is enforced in workflow (UI/script/publishing gate + audit evidence)
Install Controls That Survive Audits
Logging and Traceability
Businesses should be able to reconstruct what happened when a decision is questioned. At minimum, that means recording the key inputs used, the output produced, the version of the model or system, and the business action taken as a result.
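As a sketch only, assuming an append-only JSONL log and illustrative field names (none of this is a format the law prescribes), a per-decision record might be captured like this:

```python
import json
from datetime import datetime, timezone

def log_ai_decision(system: str, model_version: str, inputs: dict,
                    output: str, business_action: str, actor: str) -> str:
    """Append one reconstructable decision record; the schema is illustrative."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system": system,                    # which AI system produced the output
        "model_version": model_version,      # exact model/system version that ran
        "inputs": inputs,                    # key inputs used (minimised, not raw dumps)
        "output": output,                    # what the system produced
        "business_action": business_action,  # what the organisation did with it
        "actor": actor,                      # human or service that took the action
    }
    line = json.dumps(record, ensure_ascii=False)
    with open("ai_decision_log.jsonl", "a", encoding="utf-8") as f:
        f.write(line + "\n")
    return line
```

The design choice that matters is not the file format; it is that the version and the resulting business action are recorded together, so a disputed outcome can be reconstructed without guessing which model was live that day.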
Human Oversight
It should be clear who is authorized to intervene or override, when intervention is required, how overrides are recorded, and what happens when the system behaves outside expected boundaries.
Rollback Capability
Medium- and high-impact systems should not be treated as one-way deployments. There needs to be a defined method to pause the system, revert to a prior configuration, or route decisions back to manual handling when performance degrades, unexpected behavior is detected, or risk thresholds are breached.
Incident handling is treated as a shared obligation across the AI value chain. That means setting incident thresholds, escalation procedures, and operational readiness to suspend or withdraw systems when required, including content generation incidents where outputs impersonate a real person or create deepfake-like media.
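A minimal sketch of that pause/revert/manual path, assuming a deployment sitting behind an operational kill switch; the function names, version labels, and threshold are placeholders, not a reference implementation:

```python
# Illustrative pause/revert/manual-route sketch; names and thresholds are assumptions.

AI_ENABLED = True              # operational kill switch, settable without a code release
ACTIVE_VERSION = "v2.3"        # current model version
FALLBACK_VERSION = "v2.2"      # last known-good configuration
ERROR_THRESHOLD = 0.05         # example risk threshold that triggers a pause

def run_model(case: dict, version: str) -> dict:
    # Placeholder for the real model call
    return {"decision": "approve", "version": version}

def route_to_manual(case: dict, reason: str) -> dict:
    # Placeholder for handing the case to a human queue, with the reason recorded
    return {"decision": "manual_review", "reason": reason}

def handle_case(case: dict, observed_error_rate: float) -> dict:
    if not AI_ENABLED or observed_error_rate > ERROR_THRESHOLD:
        # Pause: send the case to manual handling and record why
        return route_to_manual(case, reason="system paused or risk threshold breached")
    try:
        return run_model(case, version=ACTIVE_VERSION)
    except Exception:
        # Revert: fall back to the prior configuration rather than failing silently
        return run_model(case, version=FALLBACK_VERSION)
```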
These controls only work if the business can access the underlying evidence consistently. For many deployments, that evidence sits with third-party suppliers, which is why contract terms become the next practical control surface.
Vendor Discipline
Many AI systems used in business workflows are vendor products or vendor models embedded into platforms, which means the information needed for traceability and incident handling may not be available by default.
Before renewing, expanding, or embedding a vendor system into a regulated workflow, operators need to ask four questions:
Can the third party provide model and version change logs?
Can the third party provide audit evidence without requiring source code disclosure?
What is the third party’s incident notification SLA, and what investigation support is provided?
Who is the third party’s lawful local contact point or authorized representative in Vietnam, if the system is high-risk?
For high-risk systems supplied by foreign providers, vendor due diligence also needs a Vietnam-specific check. Tilleke & Gibbins notes that foreign providers of high-risk AI systems must establish a lawful local contact point, and systems subject to mandatory pre-use conformity certification may require a commercial presence or an authorized representative in Vietnam.
Operational contract clauses to prioritize:
Model change notifications: Versioning and change notes that let you assess whether classification, controls, or disclosures must be revisited.
Audit trail access: Logs, documentation, and functional explanations that support auditability without requiring source code disclosure.
Incident disclosure: What counts as an incident, notification timelines, and obligations to support investigation and remediation.
Transparency Duties and Disclosure
Providers must ensure that users are aware when they are interacting with an AI system, and must ensure that AI-generated audio, images, and videos are marked in a machine-readable format as prescribed by the Government. Deployers must clearly notify the public when AI-generated or AI-edited content could cause confusion about authenticity, and must apply easy-to-recognize labels for content that simulates real persons or recreates events.
The failure mode is predictable. Labeling is written into a policy, but nothing in the product or publishing workflow enforces it. The result is uneven compliance across teams, vendors, and channels.
A workable transparency workflow answers three questions:
Who decides what gets labeled and what exceptions exist.
Where it is enforced (product UI, customer support scripts, publishing gates, creative pipelines, metadata steps).
What happens when it fails (escalation, correction or removal, evidence retention).
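One way to make that enforcement real in a publishing pipeline, sketched here with assumed field names rather than any prescribed metadata format, is a gate that fails closed when AI-generated content is missing its label or machine-readable mark:

```python
# Illustrative publishing gate: block release of unlabeled AI-generated content
# and return an audit-friendly reason. Field names are assumptions.

def publish_gate(item: dict) -> tuple[bool, str]:
    ai_generated = item.get("ai_generated", False)
    labeled = item.get("ai_label_applied", False)
    machine_readable = item.get("machine_readable_mark", False)

    if ai_generated and not (labeled and machine_readable):
        # Fail closed: escalate rather than publish unlabeled AI content
        return False, "blocked: AI-generated content missing label or machine-readable mark"
    return True, "cleared"

# Usage: an AI-generated asset with a visible label but no machine-readable mark is stopped
ok, evidence = publish_gate({"ai_generated": True,
                             "ai_label_applied": True,
                             "machine_readable_mark": False})
print(ok, evidence)
```

The point of a gate like this is that the label decision leaves evidence in the workflow itself, instead of living only in a policy document.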
What This Looks Like in Three Common Settings
Banks
Where does AI influence eligibility, pricing, credit limits, fraud escalation, or collections prioritization, and what is automated versus advisory?
For any disputed outcome, can the business reconstruct inputs, model/version, output, and any override decision, including who overrode it and why?
Do third parties commit contractually to change logs, audit evidence, and incident notification SLAs, including investigation support?
Hospitals
Which clinical workflows use AI for triage, imaging support, prioritization, or safety-related decisions, and which of those should be treated as likely high-risk?
If an outcome is questioned, can the hospital reproduce what ran and when, including model/version, key inputs, human oversight, and escalation actions?
Is there an operational rollback path to manual care pathways, and an incident escalation process that works 24/7 rather than relying on a single person?
Logistics players
Where does AI influence routing, warehouse vision, safety monitoring, or prioritization decisions, and what are the operational consequences when it is wrong?
Can the business trace an operational decision end to end, including inputs, model/version, output, and the action taken, and can it pause or revert the system quickly?
Where do disclosure and labeling obligations show up in customer-facing or worker-facing workflows, and is enforcement built into the process rather than left to policy?
Vietnam’s AI Law does not require businesses to stop using AI. It requires businesses to be able to stand behind it.
That means risk classification that holds up under scrutiny, baseline controls that make systems traceable and interruptible, vendor contracts that prevent silent model changes and missing audit trails, and transparency that is enforced where real work happens.
If you run medium- or high-risk AI and cannot produce evidence on demand, treat the next 90 days as a compliance build sprint, because the first enforcement you will meet is procurement and audit, not a courtroom.
Related Reading On Asia Tech Lens
Agentic AI Can Act. Singapore’s New Rulebook Says: Prove You Can Stop It
Why “prove oversight and rollback” is becoming a procurement and audit baseline, even before hard law.
Indonesia Is Racing To Regulate AI. The Messy Part Is Implementation
A practical read on why fast regulation creates ambiguity, and how operators should pace deployments when enforcement detail lags policy intent.
AI Is Accelerating Cybercrime and Southeast Asia Feels It First
A risk-and-controls lens on fraud, abuse, and why incident thresholds and escalation paths need to mature alongside AI adoption.
The AI Battleground: How Southeast Asia Is Forging a New Path Between Superpowers
A regional framing on how Southeast Asian governments balance adoption and governance, and what that means for enterprise deployment constraints.
Why ByteDance’s AI Phone Hit a Wall: Security, Fair Play, and the Economics of Attention
A deployment-constraints case study on what blocks “working” AI in the real world when security, platform rules, and operational evidence are not ready.

