Southeast Asian nations Malaysia and Indonesia have taken a decisive stand against Elon Musk's AI chatbot, Grok, by blocking access to the platform due to its capacity for generating sexually explicit deepfakes. This move marks a significant escalation in the global debate surrounding AI ethics and content moderation.
Grok, integrated into Musk's X platform, offers image generation capabilities which, alarmingly, have been misused to create manipulated images of real individuals depicted in revealing attire. Both Malaysia and Indonesia's communications ministries cited concerns over the AI tool's potential for producing non-consensual, pornographic content, particularly involving women and children. They are the first countries globally to implement such a ban.
Regulatory Action and X's Response
The Malaysian Communications and Multimedia Commission (MCMC) issued notices to X earlier this year, demanding stricter measures after identifying "repeated misuse" of Grok to generate harmful content. According to the MCMC, X's response failed to address the fundamental risks posed by its platform's design, instead focusing primarily on user reporting mechanisms. Consequently, Grok will remain blocked in Malaysia until effective safeguards are put in place. The MCMC has urged the public to report any harmful online content they encounter.
Indonesia's Minister of Communications and Digital Affairs, Meutya Hafid, condemned the use of Grok for sexually explicit content as a violation of human rights, dignity, and online safety. The Indonesian ministry has also requested clarification from X regarding Grok's usage. Indonesia has a history of stringent online content regulation, having previously banned platforms like OnlyFans and Pornhub.
Global Outcry and UK Concerns
The controversy extends beyond Southeast Asia. Victims in Indonesia have expressed their anger and distress, with one prominent example being Kirana Ayuningtyas, a wheelchair user whose image was manipulated after a stranger asked Grok to depict her in a bikini. This incident highlights the deeply personal and damaging impact of such AI misuse. Child sexual imagery generated by the Grok chatbot has also drawn widespread condemnation.
In the UK, leaders including Prime Minister Keir Starmer have labelled Grok's deepfake capabilities "disgraceful" and "disgusting." The UK's Technology Secretary, Liz Kendall, has indicated her support for media regulator Ofcom should it decide to block access to X in the UK for non-compliance with online safety laws. Kendall emphasised that the Online Safety Act grants Ofcom the power to block services that refuse to adhere to UK legislation, a power her department would fully back. This mirrors broader concerns about AI chatbots exploiting children, with parents claiming their warnings were ignored.
The Broader Context of AI and Misinformation
This incident with Grok underscores a critical challenge in the age of generative AI: the rapid proliferation of harmful content and the struggle for effective regulation. As AI models become more sophisticated, their ability to create realistic but fabricated images and videos presents significant ethical dilemmas. The debate often centres on balancing free speech with the need to protect individuals from exploitation and abuse.
The blocking of Grok by Malaysia and Indonesia serves as a stark reminder that regulatory bodies are increasingly willing to impose restrictions on AI tools that fail to incorporate adequate safety measures. This proactive stance could influence how other nations approach AI governance, particularly concerning image generation and deepfake technology. The ethical implications of AI are a recurring theme, with even an AI "godfather" warning against granting rights to AI systems.
For a deeper understanding of the regulatory landscape and the challenges businesses face when adopting AI, our article The AI Vendor Vetting Checklist: What Asian businesses should check before buying AI in 2026 offers valuable insights. The rapid advancements in AI, as highlighted by reports such as the UK government's AI Safety Institute Frontier AI Threat Report, necessitate robust legal and ethical frameworks to prevent misuse and protect vulnerable populations. The report details the potential for advanced AI models to facilitate the creation of harmful biological and chemical agents, cyberattacks, and mass deception campaigns, underscoring the urgency of effective regulation.
What measures do you think AI platforms should implement to prevent the creation and spread of harmful deepfakes? Share your thoughts in the comments below.