AI for Global Equity Begins With Local Realities
An NUS Business School–Tencent academic conference underscores the need for localization, disciplined safety practices, and a more sober look at economic returns

Backed by new capital, global tech expansion, and growing government interest, artificial intelligence is advancing across Southeast Asia at a pace that is both exciting and uneven. The region’s adoption curve is constrained by fragmented markets, uneven digital readiness, and policy uncertainty over how to regulate growth without slowing it down.
These themes anchored the panel discussion “AI for Global Equity: Bridging the Digital Divide and Unlocking Potential,” held as part of the NUS–Tencent academic conference “Technology for Good: Driving Social Impact.”
Moderated by Asia Tech Lens’ Editor-in-Chief, Miro Lu, the session brought together voices from government, academia, and industry to examine what inclusive AI deployment requires in a region defined by diversity and uneven digital maturity.
Southeast Asia Is Not One Market
One of the panel’s clearest messages was that Southeast Asia cannot be treated as a single, uniform AI market.
As Kenneth Siow, Regional Director for Southeast Asia and General Manager (Singapore and Malaysia) at Tencent Cloud International, puts it:
“I think Southeast Asia, just to paint that snapshot, is a very, very fragmented market.”
Fragmentation between countries
Singapore operates with advanced infrastructure and regulatory clarity, while Indonesia, Vietnam, the Philippines, and Cambodia move at different speeds and face different constraints.
Fragmentation within countries
In Indonesia, for instance, Jakarta races ahead while cities like Medan and Surabaya lag behind.
Fragmentation between enterprises
Enterprise fragmentation is just as visible. Some companies, like Grab, move quickly on AI adoption, while many family-owned firms advance more slowly, shaped by founder preferences and long-established practices.
As Siow puts it:
“To sum it up: very fragmented market, very fragmented enterprise journeys toward cloud computing, toward AI, and also in how governments and enterprises are embracing technologies.”
This multilayered fragmentation, Siow stressed, makes localization non-negotiable:
“Certain things may have worked in China,” he said, “but how do we localize them for the local market? How do we make that experience practical and useful for that country, that jurisdiction, that enterprise, and that family?”
For Tencent, that means adapting strengths in communications, gaming, and fintech to each country’s environment rather than importing a uniform solution.
“There is no one-size-fits-all,” he said. “For each use case, for each application, what do we need to do to power the next wave of AI?”
This pragmatic approach, he added, is what enables deployments that are “cheaper, better, faster” across Southeast Asia’s diverse technological landscape.
China Offers a Useful Playbook
Fragmentation does not make AI impossible; it makes localization essential. That is where China’s experience becomes instructive. Assistant Professor Huang of the School of Public Policy at The Chinese University of Hong Kong (Shenzhen) argued that China’s digital evolution mirrors many of Southeast Asia’s structural complexities.
“China has experienced something similar to Southeast Asia… going from no internet to the most advanced digital economic ecosystem very fast.”
Like Southeast Asia, China contains “advanced areas, disadvantaged areas,” forcing its technology companies to build for drastically different levels of digital readiness. Over time, this created an instinct for designing solutions that work across multiple environments and user groups.
“Chinese companies are quite good at handling these tricky scenarios, developing different digital solutions for different companies or different users,” Huang explained.
He added that tools no longer widely used in China can still be effective in emerging markets, and that collaboration between Chinese firms and ASEAN partners can help bridge digital gaps.
Make AI Boring Again
Adaptability helps, but it does not solve the region’s biggest deployment barrier: safety. Organizations across the region experiment with AI, but few operationalize it without confidence that systems will behave predictably and securely. This is where Singapore offers a useful complement.
Benjamin Goh, Senior Assistant Director at Singapore’s National AI Group, leads a team that works across national policy and product development, supporting agencies trying to move AI from pilots into production. He described the broader AI moment as unfolding in phases: the first when “ChatGPT 3.5 hit the world in late 2022,” and the second with the “DeepSeek moment,” when organizations realized that “AI doesn’t need massive capex.”
That shift accelerated interest in Singapore, a market perceived as “a bit more stable” and geopolitically neutral. Compared with Europe, which he described as facing an “AI winter,” organizations in Singapore are “quite keen to try.”
But the gap between experimentation and deployment remains wide. Across the government, “70% have a proof of concept…but when I ask how many have actually seen it go into real production, no one raises their hand.”
Safety fears have become the biggest blocker, prompted by incidents involving inaccurate AI-generated reports or chatbots “saying something they shouldn’t have said.”
Goh’s philosophy is simple: “Doing AI safety is to make AI boring again.”
AI remains inherently probabilistic. “You do A… and you don’t know what comes next.” Singapore’s answer is systematic testing: repeatedly simulating scenarios until model behavior becomes predictable and “boring.”
To help agencies move from proof of concept to production, Singapore launched two national tools: Litmus for testing and Sentinel for guardrails. Goh likens the relationship to a medical check-up and treatment: testing surfaces risks, but guardrails ensure systems behave within defined boundaries. Adoption has been fast, with “almost 40% of government agencies” using them within six months.
He also clarified Singapore’s regulatory posture. While issues like misinformation or impersonation are regulated, the broader AI environment remains intentionally light-touch. The principle is pragmatic: as long as systems avoid serious harm, “like death or liability,” experimentation should stay open to encourage innovation.
Returns and Sustainability
Trust, however, is not enough; investors want returns. Panelist Tim Zhang, founder of Edge Research SG, said the investment community is increasingly asking whether the AI surge is creating real value. Among investors, he noted, “there is consensus that there is a bubble. The question is: which stage of the bubble are we in?”
Since the arrival of ChatGPT, markets have been fixated on compute accumulation.
“When the top U.S. companies announce their quarterly earnings, all the analysts are looking at just one KPI: how many NVIDIA chips or compute resources they have secured.”
More recently, power has become a second constraint.
“You can get a lot of compute, a lot of chips, but if the power supply isn’t there, it still doesn’t work.”
Citing a recent MIT study, he observed that only 5% of AI pilots in the U.S. delivered material returns of more than US$1 million.
“Ninety-five percent… the return is abysmal,” he said. “From a financial market perspective, this is not sustainable.”
He contrasted this with China, where companies, constrained by chip access, appear more focused on measurable outcomes. For Zhang, this divergence between compute-driven expansion in the U.S. and return-driven pragmatism in China will determine which AI ecosystems remain viable.
The Human Element
The conversation closed with how AI is reshaping work, skills, and the talent pipeline.
Zhang pointed to early U.S. data showing the speed of disruption: “On average, 13% of entry-level jobs were replaced by AI,” with computer science graduates facing a “20% lower” chance of being hired.
He also highlighted Jensen Huang’s now-famous framing: “AI is not a tool. AI is a worker.”
Zhang argued that Asia may take the lead in responding to these shifts because governments in the region place greater emphasis on employment stability. “Responsible AI diffusion,” he said, will matter: ensuring that adoption balances efficiency with long-term social impact.
Benjamin Goh raised a different concern: the erosion of the apprenticeship model. Junior analysts, associates, and trainees—once essential to developing future leaders—risk being bypassed as firms automate early-career tasks.
Referring to himself as both a “boomer” and a “doomer,” Kenneth Siow offered a slightly more positive view, noting that AI will “disrupt many industries, many jobs,” but will also create new ones.
For labor-scarce economies like Singapore, where aging demographics intensify demand for automation, AI could help address structural pressures. With the right guardrails, he said, AI can “change the world for the better,” even as markets shape the pace of adoption.
Across all perspectives, one theme stood out: Asia’s AI future will hinge on the choices made by individuals, enterprises, and governments—and on keeping people, not technology, at the center of the transition.
As Goh put it, society is being pushed to confront a deeper question:
“What is it about you that AI cannot do? What is human about you?”
The full conference announcement from NUS Business School can be found here.

