The First Country to Tell AI Agents What They Can and Cannot Do
AI agents are no longer hypothetical. They book flights, approve invoices, triage patient records, and negotiate supplier contracts. They act. And until January 2026, no government on earth had published rules for how they should behave.
Then Singapore did something no other country had attempted. On 22 January, Minister Josephine Teo announced the Model AI Governance Framework for Agentic AI at the World Economic Forum in Davos, making Singapore the first nation to issue formal guidance on governing AI systems that operate autonomously.
What the Framework Actually Says
The framework, developed by the Infocomm Media Development Authority (IMDA), rests on four pillars: risk assessment, human oversight, technical controls, and user responsibility. It is voluntary, not legally binding, but carries weight because Singapore has a track record of turning soft governance into regional norms.
The core idea is accountability. When an AI agent books the wrong hotel, sends the wrong email, or makes a medical recommendation that harms a patient, someone has to be responsible. The framework says that someone is always a human, never the agent itself.
"The framework fills a critical gap in policy guidance for agentic AI by establishing foundational principles for assurance and risk mitigation." - April Chin, Co-Chief Executive Officer, Resaro
This matters because agentic AI is different from the chatbots most people know. A chatbot waits for instructions and responds. An agent reasons, plans, uses tools, accesses databases, and chains multiple actions together without asking for permission at every step. The governance challenge is fundamentally harder.
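The distinction can be made concrete in a few lines of code. The sketch below is illustrative only: the tool names and the agent's decision logic are hypothetical, not any vendor's API.

```python
# Illustrative sketch of the chatbot/agent distinction.
# All names here (TOOLS, search_flights, book_flight) are hypothetical.

def chatbot(prompt: str) -> str:
    # One prompt in, one response out; the human drives every step.
    return f"response to: {prompt}"

TOOLS = {
    "search_flights": lambda goal: ["SQ321", "SQ305"],
    "book_flight": lambda flight: f"booked {flight}",
}

def agent(goal: str) -> list[str]:
    # The agent decomposes the goal and chains tool calls itself,
    # acting on its own choices without pausing for permission.
    log = []
    options = TOOLS["search_flights"](goal)
    chosen = options[0]                       # the agent decides
    log.append(TOOLS["book_flight"](chosen))  # and acts on that decision
    return log

print(agent("fly SIN to LHR on Friday"))
```

The governance problem sits in that middle step: the agent, not the user, picks `chosen` and acts on it.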
Why Singapore Moved First
Singapore has been building toward this moment for years. Its original Model AI Governance Framework landed in 2019, followed by the AI Verify testing toolkit, a starter kit for testing large language model applications, and the establishment of an AI Safety Institute. The country also leads the ASEAN Working Group on AI Governance, which gives this framework outsized influence across Southeast Asia.
On 12 February, Prime Minister Lawrence Wong reinforced the direction in his 2026 Budget Speech, announcing a National AI Council and national AI Missions targeting advanced manufacturing, finance, and healthcare. Over 60 firms have now set up AI Centres of Excellence in Singapore, according to government figures.
But there is a gap between ambition and readiness. A Deloitte report published in February 2026 found that only 14% of leaders in Singapore have a mature model for agentic AI governance, below the global average of 21%. Half are using a patchwork of public and internal frameworks to assess risks.
By the Numbers
- 14%: Share of Singapore leaders with mature agentic AI governance models, per Deloitte
- 40%: Enterprise applications expected to embed task-specific AI agents by end of 2026, up from under 5% in 2024 (Gartner)
- $10.86 billion: Projected global agentic AI market value in 2026, up from $7.55 billion in 2025
- 79%: Organisations reporting some level of agentic AI adoption globally in 2026
- 60+: Firms with AI Centres of Excellence in Singapore
The Regional Ripple Effect
Singapore does not make rules in isolation. Its governance frameworks tend to become templates for the rest of ASEAN. The 2019 framework influenced AI governance thinking in Thailand, the Philippines, and Vietnam. The agentic AI framework is likely to follow the same path.
South Korea has already moved on a parallel track, with its AI Basic Act entering into force in January 2026. China has taken a different approach entirely, requiring embedded watermarks and encrypted metadata in AI-generated content, with software that removes these watermarks now outlawed. India updated its IT Rules in 2025 to mandate labelling and removal of AI-generated content.
"Open and distributed AI innovation will accelerate capability diffusion. Governance must therefore be embedded at design stage, not retrofitted post-deployment." - Emad Mostaque, AI industry commentator
What Companies Should Watch
The framework applies to organisations developing AI agents in-house and those adopting third-party solutions. That scope is broad. Any company deploying Salesforce Agentforce, Microsoft Copilot agents, or custom-built autonomous workflows in Singapore should be paying attention.
Three practical implications stand out:
- Risk assessment is mandatory thinking, not optional paperwork. The framework expects organisations to evaluate what happens when agents fail, not just when they succeed. That means stress-testing autonomous workflows before deployment.
- Human oversight does not mean human-in-the-loop for every action. The framework acknowledges that agents need autonomy to be useful. The requirement is meaningful oversight at key decision points, not micromanagement.
- User responsibility is real. If your company deploys an agent that causes harm, the framework says you own the outcome. This is a clear signal that "the AI did it" will not be an acceptable defence.
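What "meaningful oversight at key decision points" could look like in practice: a minimal sketch of an approval gate that lets an agent act freely on routine steps but pauses for a human on high-risk ones. The risk categories, threshold, and function names below are illustrative assumptions, not terms taken from the IMDA framework.

```python
# Hypothetical oversight gate: autonomous for routine actions,
# human sign-off required at high-risk decision points.
# The action list and dollar threshold are illustrative assumptions.

HIGH_RISK = {"approve_invoice", "send_contract", "modify_patient_record"}

def requires_human(action: str, amount: float = 0.0) -> bool:
    # Risk policy set up front, in the spirit of the risk-assessment pillar.
    return action in HIGH_RISK or amount > 10_000

def execute(action: str, amount: float = 0.0, human_ok: bool = False) -> str:
    if requires_human(action, amount) and not human_ok:
        return f"PAUSED: {action} awaits human sign-off"
    return f"DONE: {action}"

print(execute("book_hotel", amount=450))
print(execute("approve_invoice", amount=25_000))
print(execute("approve_invoice", amount=25_000, human_ok=True))
```

The design point is that the gate sits at decision points, not around every action: the agent keeps its useful autonomy while a named human stays accountable for the consequential steps.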
| Country | AI Governance Approach | Status (2026) | Focus |
|---|---|---|---|
| Singapore | Model Framework for Agentic AI | Published Jan 2026 | Accountability and risk pillars |
| South Korea | AI Basic Act | In force Jan 2026 | Comprehensive AI regulation |
| China | AI Content Labelling Rules | In force Sep 2025 | Traceability and watermarks |
| India | IT Rules Amendment | Updated 2025 | Deepfake and content labelling |
| Japan | AI Guidelines | Voluntary framework | Principle-based governance |
What makes agentic AI different from regular AI?
Agentic AI systems can reason, plan, and take actions across multiple steps without human intervention at each stage. Unlike chatbots that respond to prompts, agents use tools, access databases, and chain tasks together autonomously. This autonomy creates new governance challenges around accountability and risk.
Is Singapore's framework legally binding?
No. The Model AI Governance Framework for Agentic AI is voluntary guidance, not legislation. However, Singapore has a history of converting voluntary frameworks into industry standards through adoption pressure and ASEAN influence. Companies operating in the region should treat it as a strong signal of future expectations.
Which companies need to pay attention to this framework?
Any organisation deploying autonomous AI agents in Singapore, whether built in-house or purchased from vendors like Salesforce, Microsoft, or Google. The framework covers both developers and adopters of agentic AI systems, which means enterprise buyers carry governance responsibility too.
How does this affect other ASEAN countries?
Singapore leads the ASEAN Working Group on AI Governance. Its previous frameworks became templates for neighbouring countries. Businesses operating across Southeast Asia should expect similar governance expectations to emerge in Thailand, the Philippines, Vietnam, and Indonesia within 12 to 18 months.
Singapore has drawn the first line in the sand for agentic AI governance. Does your organisation have a plan for when your AI agents make a decision you did not expect?