Before AI Can Work, Southeast Asia’s Enterprises Need To Fix Their Data
Companies across the region are rushing to deploy AI, but messy documents, fragmented workflows, and weak data foundations are making automation harder to scale, says Sansan's Kazunori Fukuda
Companies across Southeast Asia are approving AI pilots before answering a more basic question: can the underlying workflow actually support automation at scale?
For operators, that question matters more than the model being tested. If invoices, contracts, procurement records, and customer data are still scattered across paper, PDFs, spreadsheets, emails, and legacy systems, AI can quickly become another layer on top of an already fragmented process.
“They also want AI to make their workflows smarter, so they can act faster and with greater insight. In the process, of course, they reduce overhead and positively impact the bottom line,” says Kazunori Fukuda, managing director at enterprise software company Sansan.
Fukuda pointed to one Sansan client, a construction company in Thailand that was processing more than 2,000 invoices every month, with over 90% still exchanged on paper. Each invoice took about 20 minutes to process manually, delaying the monthly closing and making it difficult for headquarters to monitor activity across construction sites.
After digitizing the workflow, the company was able to consolidate invoices from different offices and sites into a single system. The shift cut processing time to eight minutes per invoice and saved around 4,800 work hours a year.
The case is a success story, but its starting point is the more important lesson. Before the company could automate invoice processing, it first had to make a messy, paper-heavy workflow visible, structured, and usable.
“AI can only deliver value when it is built on well-structured, high-quality data,” Fukuda told Asia Tech Lens.
The issue is not necessarily that companies lack AI tools. Many are trying to layer AI onto workflows that were never properly standardized, governed, or prepared for automation in the first place.
The Real Bottleneck: Fragmented Data
For many enterprises, the biggest obstacle to AI adoption is not model capability, but the condition of the underlying data itself.
Fukuda frames data readiness as a precondition for scaling AI, not a problem to fix after deployment.
Invoices, contracts, business cards, procurement records, and customer information are often stored across disconnected systems, handled manually by different departments, or exchanged in inconsistent formats. Even companies that have digitized parts of their operations may still rely heavily on scanned PDFs, spreadsheets, emails, and legacy approval processes.
According to Fukuda, organizations often assume that having large amounts of data automatically makes them ready for AI deployment. In practice, fragmented and poorly structured data can make AI outputs unreliable from the beginning.
“One of the most common failure points is large volumes of data that are unstructured or not usable for AI,” he said.
The problem becomes more visible in Southeast Asia’s emerging markets, where digitization maturity varies widely across industries and supply chains. External vendors, suppliers, and contractors may still submit documents manually or use incompatible systems, making it difficult to create standardized workflows that AI systems can process consistently.
Even Sansan ran into this problem while developing AI-driven document management tools. Fukuda said the company found that general-purpose AI models struggled with the variety and complexity of real business documents, leading to delays and inaccurate outputs.
“The AI struggled with the variety of document formats and complex data extraction requirements,” he said.
Sansan’s response was to move away from relying on general-purpose AI alone and toward models trained for business-document structures. The broader lesson for operators is not product-specific: generic AI will struggle when the workflow depends on messy documents, inconsistent formats, and business-specific exceptions.
The experience highlights a broader challenge for enterprises adopting AI. Vendor demos and pilot environments are often cleaner than production reality. Once AI systems encounter fragmented workflows, inconsistent formats, incomplete data, and edge cases at scale, performance can deteriorate quickly.
For operators, that is the lesson to take into vendor selection. A successful demo does not prove that a system can handle real document variety, messy supplier inputs, or the exceptions that appear in day-to-day operations.
The Human and Workflow Problem
Fukuda adds that AI failures are not only caused by technical limitations, but also by how new systems fit into existing workflows.
If AI tools disrupt how teams already work, employees may see them as additional friction rather than productivity tools. In some cases, teams revert to manual processes when AI outputs become inconsistent or difficult to trust.
“The early warning signs usually appear quickly,” Fukuda said. “Teams may notice inconsistent results from the AI, employees may stop using the system, or the organization may struggle to define clear performance indicators.”
In most cases, these problems stem from the same root issue: AI initiatives were launched before the underlying data and workflows were properly prepared. Without proper training and integration into day-to-day workflows, AI initiatives can struggle to move beyond experimentation.
That makes adoption a pre-scaling test, not a post-launch training issue. Before expanding AI across departments, operators need to know who owns the workflow, who monitors the output, who investigates errors, and how teams will use the system when results are imperfect.
The same applies to governance. In regulated sectors, the question is not only whether policies exist, but whether companies can prove that controls are working inside AI-supported workflows day to day. That includes access controls, activity logs, monitoring, and audit trails that show how information is processed and who has accessed it. Regulators also expect incident response procedures and regular security assessments to be in place before AI-assisted workflow changes are approved. What is often missing is operational evidence that these controls are consistently applied in daily workflows, not just written into policy.
Before the Next AI Budget Gets Approved
For Fukuda, the bigger risk for Southeast Asian enterprises is not moving too slowly on AI, but moving too quickly without fixing the operational foundations underneath.
“Avoid rushing to implement AI-first programs without a solid foundation,” he said. “Without these fundamentals, AI can quickly become a costly distraction rather than a value-driving tool.”
The safer path is to start with narrow operational problems where the business pain is clear, the data can be prepared, and the outcome can be measured. Only then should companies expand AI across more complex workflows.
“Start by identifying specific, high-impact use cases where AI can add measurable value,” he said. “Ensure that the data infrastructure is prepared to support AI applications, and integrate AI tools gradually into existing workflows.”
That approach is especially relevant in Southeast Asia’s asset-heavy industries, where many operational systems remain fragmented across sites, suppliers, and legacy processes. In these environments, the companies that benefit most from AI may not be the ones deploying the most tools, but the ones that spend more time preparing their operational foundations before scaling.
Before the next AI line item lands in the budget, operators should ask a narrower set of questions. What workflow is this supposed to fix? Is the data usable? Who owns the output? How will employees use the tool? What happens when the system gets it wrong? And how will success be measured?
The companies that get this right will not be the ones that moved fastest on AI. They will be the ones that were honest enough to fix their operations first.