Asia’s Agentic Moment: The Manus Interview That Preceded Meta’s Billion-Dollar Move
Captured on tape before the Meta-Manus deal became public, this Xiaojùn Podcast traces the Chinese‑founded, Singapore‑based startup’s sprint to US$100m ARR

Editor’s Note: This article is based on a Chinese-language interview with Yichao “Peak” Ji, co-founder and chief scientist of Manus, recorded on the 張小珺 Xiaojùn Podcast on December 1, 2025, before Meta’s deal with Manus became public. The deal has since come under regulatory scrutiny in China, with reports indicating that authorities are reviewing whether the relocation of Manus’s staff and technology to Singapore prior to the transaction could trigger Chinese technology export controls and related compliance requirements. Rather than speculate on motivations or outcomes, we focus on what Ji said about how Manus was built: its architecture, unit economics, and the operational disciplines behind “AI agents.” All analysis and framing are Asia Tech Lens’ own.
After the deal news, the commentary wrote itself: why buy, why now, and what it signals about “AI agents.” But the more instructive material is what Manus said before any of that framing existed—when the company still had to explain itself in the language of constraints.
In Ji’s telling, Manus is not a single breakthrough so much as an accumulation of hard choices: where distribution stalls, where error recovery becomes product, where token economics behave less like software and more like manufacturing.
Manus is a cloud-based “general agent” that runs tasks inside a sandboxed, disposable virtual environment—more like renting a remote worker than chatting with a bot. When the discussion starts drifting toward grand narratives, Ji pulls it back to basics.
Early on, the host brings up a familiar frontier-tech archetype: intense, eccentric, and self-mythologizing. Yichao Ji rejects it. He says it plainly: “You’re not Steve Jobs, yet you behave like Steve Jobs.” He is not arguing that founders should be meek. He is arguing that an “artist-like” posture and extreme narratives can become a weakness when the work is operationally heavy and brutally empirical.
That tone matches how he describes his own background. He frames himself as shaped by two traditions: a physicist father at Peking University, and a mother from an older generation of Zhongguancun entrepreneurs. He also describes a childhood split between the U.S. and China, moving to the U.S. at four and returning during primary school. The point is less biography than temperament: he presents himself as someone who learned by tinkering, building, and testing—a bias toward making and measuring that runs through how he talks about Manus.
From there, his broader point is simple. If the AI application layer is drifting toward manufacturing-like economics, where marginal costs do not fall to zero, then operating discipline matters more than storytelling. You measure. You test quickly. You cut without sentimentality. You keep the product tight. And you design for customers who will pay enough to make high-quality computation economically viable.
“I’m Not Cut Out to Be a CEO”
Ji turns the same realism inward. He describes a personal lesson from earlier ventures: he is “not CEO material,” and the role mismatch is structural, not motivational.
“I realized I’m fundamentally not cut out to be a CEO,” he says, adding that he does not like the business side: commercialization, people management. “I’d rather deal with computers. People are too complicated.”
That admission explains a key choice in how Manus was built. Ji did not just want a co-founder. He wanted a counterpart who could own the business surface area end-to-end so he could stay anchored in the technical core. In his description, the ideal arrangement is explicit: a CEO makes final product calls, runs organizational trade-offs, and carries the weight of go-to-market, while the technical lead can “dictate” within a defined technical domain when needed. The division is deliberate, and it is how he avoids forcing himself into a role he believes he will execute poorly.
He links this to a broader view of founder quality. “Normal” founders are underrated, he argues. In today’s AI industry, being psychologically stable and operationally grounded is surprisingly rare, and therefore strategically valuable. In his framing, Manus is designed to be run like a serious business, not a founder drama.
The Pivot Logic: Distribution Friction As Destiny
His foray into software began in the App Store era, when individuals could build software, charge users directly, and get immediate feedback. He describes building and selling a third-party iOS browser in high school using a one-time paid model, earning “a little over US$300,000” across the product’s life.
The takeaway was not the amount he earned, but how clear the sales model was. He did not have to think about in-app purchases or add-ons. He could “just sell it,” and that let him prove what he was building could create economic value.
He contrasts that with the current AI wave. Yes, the technical leap is real, but the distribution dividend is not automatically there. Many AI products are competing inside existing ecosystems, with fewer open lanes. That is why Manus’ story is pivot-heavy. The core problem is not only “how do we make the model smarter,” but “where do we sit in the stack so that users can adopt us, trust us, and pay repeatedly.”
This is where his co-founder’s earlier work becomes more than a side project. Ji describes Monica.ai, a browser extension, as both a real-world lab and a financial anchor. He calls it “a positive cash-flow product,” and says that mattered because it let the team keep making decisions without panic. “I wasn’t anxious at all,” he says. “This is why we say having a positive cash-flow product is important.”
He is even more direct about what that cash flow enabled. “We knew Monica was always making money for us,” he says, and that gave them room to be “objective and bold” about what to keep building and what to cut. When Manus started taking shape, they did not “bet the company” all at once. “We started the experiment with five people,” he says. “Each time we saw a good sign, we moved more teammates over from Monica.” Later, he notes Monica was already profitable and roughly a US$12 million annual recurring revenue business at the time, which made that gradual ramp possible.
But he is equally direct about the ceiling of that channel. Extensions are rarely a destination, and behavior change is hard to force. Even if the product works, growth can plateau because the channel is not naturally “pushy.” That constraint pushed the team toward a bolder bet, an AI-native browser, which promptly hit the next wall: persuading people to leave Chrome.
This is where he shares a rule that reads like a founder lesson disguised as product advice. If you finish something and you do not feel excited about it, do not launch it. In his words: “If a product is done and you don’t think it’s cool, don’t launch. If you don’t think it’s cool, nobody will.”
Why the Cloud Agent Made More Sense
Ji’s explanation for the next pivot is not “we believed in agents early.” It is more grounded than that.
He describes watching how people use tools like Cursor. Not just engineers. Regular users too. People were already doing complicated work by describing what they wanted in natural language, while code operated in the background.
That led him to a simple conclusion: programming is no longer only for programmers. It is becoming a general way to get things done. But the best products will hide most of the code and still keep its power.
That maps to the product Manus became. Not a browser, but an agent that can take a task and carry it through: plan the steps, do the work, recover when something breaks, and keep going without the user hovering over it every minute.
For him, the cloud piece is critical because it changes how attention works. If the agent is tied to your local machine, you end up babysitting it. You get prompts. You get interruptions. The agent cannot really run ahead.
If it runs in the cloud, the experience changes. The user can hand over a task and move on. And once it is running remotely, another idea becomes possible: tasks can run in parallel. Instead of one job at a time, you can have multiple jobs moving at once.
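The hand-off-and-move-on model can be sketched with ordinary concurrency primitives. This is a toy stand-in, not Manus’s actual service: `fake_agent_task` simulates a remote agent run, and the point is only that three jobs overlap instead of queueing.

```python
# Toy sketch of parallel cloud tasks: the user submits several jobs at
# once and collects results later, instead of babysitting one at a time.
# `fake_agent_task` is an illustrative stand-in for a remote agent run.
import asyncio

async def fake_agent_task(name: str, seconds: float) -> str:
    await asyncio.sleep(seconds)  # placeholder for real agent work
    return f"{name}: done"

async def main() -> list[str]:
    jobs = [fake_agent_task("research", 0.02),
            fake_agent_task("draft-report", 0.01),
            fake_agent_task("compare-vendors", 0.03)]
    # All three run concurrently; wall time tracks the slowest job,
    # not the sum of all three.
    return await asyncio.gather(*jobs)

print(asyncio.run(main()))
```

`asyncio.gather` returns results in submission order, which is what lets a user fire off a batch and review the outcomes as one set.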
In his framing, that is the promise of agents. Not a slightly smarter assistant, but a different way to allocate work.
What He Means by a “General Agent”
Ji tries to make the concept concrete. A general agent, in his mind, is not just “a model plus tools.” It is more like a remote worker.
He describes it through familiar interfaces: a screen, a mouse, a keyboard, and the ability to see what is on the screen, click, type, and navigate like a human would.
The reason he stresses this is practical. If an agent can operate through standard interfaces, it can work across many environments, even when there is no clean API. It can move through websites, software, documents, and workflows the way a person does.
It is a different design philosophy from the more constrained “function calling” approach. It is closer to: give the model a computer, and let it work.
Why Sandboxes Matter
This is where the sandbox comes in. Ji describes Manus sessions running inside a sandboxed virtual environment. The point is not only safety, although safety is part of it. The bigger point is control.
If the agent has a contained environment, it can browse, run code, install tools, handle files, and complete multi-step work without touching the user’s actual machine. It is also more repeatable. When something goes wrong, you can debug the environment instead of guessing what happened on someone’s laptop.
In the notes, Manus frames this as a “thick shell” approach. Instead of relying only on clean tool calls, you expand what the agent can do by giving it a full working environment with tools it can reliably use.
The simple takeaway is: if you want an agent that does real work, it needs a place where work can actually happen.
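A toy illustration of the disposable-environment idea (not Manus’s actual stack): give each task its own scratch workspace, do the work there, and throw the whole thing away afterward, so nothing touches the user’s machine and failed runs are easy to reproduce.

```python
# Toy sketch of a disposable task environment: each task gets its own
# scratch directory that is destroyed afterward. A real agent sandbox
# (a container or VM) adds isolation this sketch does not have.
import tempfile
from pathlib import Path

def run_in_sandbox(task) -> str:
    """Run `task(workdir)` inside a throwaway directory, return its result."""
    with tempfile.TemporaryDirectory(prefix="agent-task-") as workdir:
        result = task(Path(workdir))
    # The directory and every file the task created are gone here.
    return result

def demo_task(workdir: Path) -> str:
    # The "work": write an intermediate file, read it back, report.
    (workdir / "notes.txt").write_text("step 1 done")
    return (workdir / "notes.txt").read_text()

print(run_in_sandbox(demo_task))  # -> step 1 done
```

The debugging benefit Ji points to falls out of the same structure: because every run starts from a known-clean environment, an engineer can replay the environment rather than guess at the state of someone’s laptop.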
The Uncomfortable Part: Agents Are Expensive
He does not romanticize economics. He repeats a point in different ways: agents do not behave like classic internet software where marginal cost melts away.
In his framing, this looks more like manufacturing. You can scale, but you cannot pretend the cost disappears.
The most concrete example he gives is token consumption. He says agent tasks can be extremely “input heavy,” because the system has to read a lot, keep long context, browse, and track histories across steps. He describes ratios on the order of 100:1 to 1000:1 for input versus output, and he notes that output tokens often cost more than input tokens. Essentially, if you want quality, you have to pay for it.
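A back-of-the-envelope calculation shows why the input-heavy ratio dominates per-task cost even though output tokens are priced higher. The prices and token counts below are illustrative placeholders, not Manus or provider figures; only the 100:1 and 1000:1 ratios come from the interview.

```python
# Back-of-the-envelope agent task cost. All prices and token counts are
# illustrative placeholders, not actual Manus or provider pricing.

def task_cost(output_tokens: int, input_ratio: float,
              input_price_per_m: float, output_price_per_m: float) -> float:
    """Dollar cost of one agent task, given an input:output token ratio."""
    input_tokens = output_tokens * input_ratio
    return (input_tokens * input_price_per_m
            + output_tokens * output_price_per_m) / 1_000_000

# Hypothetical pricing: $3 per million input tokens, $15 per million output.
for ratio in (100, 1000):
    cost = task_cost(output_tokens=2_000, input_ratio=ratio,
                     input_price_per_m=3.0, output_price_per_m=15.0)
    print(f"{ratio}:1 ratio -> ${cost:.2f} per task")
```

Under these placeholder numbers, a 100:1 task costs about $0.63 and a 1000:1 task about $6.03, with input tokens accounting for over 95% of the bill in both cases, which is why “marginal cost melts away” is the wrong mental model.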
What is interesting is what he says Manus does with that reality. He suggests they are not trying to cut tokens aggressively just to be faster or cheaper. The priority is output quality, even if that means consuming more tokens. But that stance only works if the business model matches.
That is where he becomes explicit about market selection and operating choices. “We chose to go overseas for a very simple reason,” he says: “overseas users have a stronger willingness to pay for productivity tools,” and he follows it with the constraint that makes the point unavoidable: “agents are very expensive.”
When the host raises the “people say you ran away” framing around Singapore, he pushes back and makes the same argument in a different form: if you are building for a global market, “you go where your customers are.”
He also describes the move as execution, not ideology. When the team was split across locations, he says coordination was weak, and during high-pressure periods the instinct was simple: “don’t work online, hurry up and get together offline,” including flying teammates in so they could solve problems in the same room.
In that context, Singapore is a way to reduce friction when the underlying work is costly, brittle, and breaks in ways that require fast, coordinated debugging.
That is why he keeps coming back to who the product is for. Manus is not chasing everyone. He frames the target as prosumers and high-value knowledge workers, people who care about quality enough to pay for it.
And he defines success in plain terms: not usage volume, but whether users are willing to pay because the work result is good.
His Big Belief: Evaluation Decides What You Become
Ji then makes one of his strongest claims: in AI products, what matters most is how you evaluate your system.
He argues that internal benchmarks and measurement standards shape everything. They decide what gets improved, what gets rewarded, and what “good” means inside the company.
He talks about evaluation like it is the steering wheel. Without it, teams drift. They chase vibes. They get trapped in demos that look impressive but do not hold up in real use.
To ground this, he points to an external benchmark he finds meaningful: the Remote Labor Index, from Scale AI and partners, which measures whether systems can complete paid-work-like tasks to a human standard. He notes that success rates are still extremely low, around 2.5%, and treats that gap as a roadmap: not a reason for despair, but a definition of what “better” has to mean.
He does not say this to sound pessimistic. He uses it as direction. If the goal is hard, the benchmark tells you what to improve next. It gives a team a map.
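Stripped to its skeleton, the “evaluation as steering wheel” idea is just this: a fixed task list, a checker per task that encodes the human standard, and a single number the team steers by. The tasks and toy “agent” below are made up for illustration, not Manus’s internal benchmark.

```python
# Minimal evaluation-harness sketch: a benchmark is a list of tasks,
# each paired with a checker that decides pass/fail against a fixed
# standard. The tasks and toy agent are illustrative only.

def toy_agent(prompt: str) -> str:
    # Stand-in for an agent run; a real harness calls the live system.
    return {"2+2": "4", "capital of France": "Paris"}.get(prompt, "unknown")

BENCHMARK = [
    ("2+2", lambda out: out == "4"),
    ("capital of France", lambda out: out == "Paris"),
    ("summarize Q3 report", lambda out: out != "unknown"),  # toy agent fails
]

def success_rate(agent) -> float:
    passed = sum(1 for prompt, check in BENCHMARK if check(agent(prompt)))
    return passed / len(BENCHMARK)

print(f"{success_rate(toy_agent):.1%}")
```

The failed third task is the whole point: a benchmark that still reports failures is a map of what to build next, which is how Ji reads the Remote Labor Index’s low scores.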
Execution, Not Mystique
The interview is useful precisely because it refuses to perform inevitability. Ji doesn’t sell Manus as a mystic leap forward. He sells it as an operating system for expensive work: a way to turn frontier-model capability into repeatable outcomes under hard constraints—token cost, long context, brittle environments, and the messy reality of error recovery.
In that framing, “agent” stops being a label and becomes a design requirement. The agent needs a place where work can happen (a sandboxed virtual environment), a way to act through ordinary interfaces (not just clean APIs), and an evaluation regime that punishes impressive demos and rewards outputs someone would actually pay for. Most of what sounds like philosophy in the interview is really governance: how to cut weak ideas early, how to keep quality high even when computation is costly, and how to measure progress so the team doesn’t drift into storytelling.
If there’s a conclusion worth taking from the recording, it’s not that Manus invented anything new. It’s that it treated agents as a product category with unit economics and QA, not as a model feature with marketing. That—more than any broader narrative—explains why the company scaled the way it did, and why its approach is worth watching.
Related Reading On Asia Tech Lens
If this piece resonated with you, these might be of interest as well:
Emerging Voices: The Security-Trained Founder Rebuilding Workflows for the Agent Era
A ground-level look at what happens when AI moves from “assist” to “act,” and why autonomy collides with hierarchy, security expectations, and trust constraints in Southeast Asia.
Why ByteDance’s AI Phone Hit a Wall: Security, Fair Play, and the Economics of Attention
A case study in agent-like behavior meeting platform guardrails, where “doing things for you” quickly becomes an authorization and accountability problem.
When Frontier AI Emerges from Outside Silicon Valley
A “how it gets built” story about Sakana AI framed around alternative playbooks, constraints, and execution choices outside the default geography.
Meet MiniMax: The Chinese Tech Company Touted by Jensen Huang That’s Headed for an IPO
A grounded profile of one of China’s most watched “AI Tigers,” linking model building to product strategy and market positioning.
Invisible Arteries: Subsea Cables in the Age of AI
AI scale is not only about models and chips. It is also about the physical network that carries almost all intercontinental traffic. This piece explains why subsea cables are a hidden constraint on Asia’s AI boom, and why resilience, chokepoints, and ownership dynamics matter more than most people realize.

