    Explicit Deepfakes Lead to Grok Ban in Malaysia, Indonesia

    Grok's explicit deepfakes have prompted bans in Malaysia and Indonesia, a decisive move that highlights growing AI ethics concerns. Read on for the full story.

    Anonymous · 4 min read · 12 January 2026

    AI Snapshot

    The TL;DR: what matters, fast.

    Malaysia and Indonesia have blocked access to Elon Musk's Grok AI chatbot due to its capacity for generating sexually explicit deepfakes.

    Reports indicate Grok has been misused to create manipulated images of real individuals in compromising attire, prompting regulatory action.

    Both nations cited concerns over the AI tool's potential for producing non-consensual, pornographic content and will maintain the ban until safeguards are implemented.

    Who should pay attention: Regulators | Platform trust teams | AI ethics researchers

    What changes next: Other nations may follow with similar bans or increased scrutiny.

    Southeast Asian nations Malaysia and Indonesia have taken a decisive stand against Elon Musk's AI chatbot, Grok, by blocking access to the platform due to its capacity for generating sexually explicit deepfakes. This move marks a significant escalation in the global debate surrounding AI ethics and content moderation.

    Grok, integrated into Musk's X platform, offers image generation capabilities which, alarmingly, have been misused to create manipulated images of real individuals in compromising attire. Both Malaysia and Indonesia's communications ministries cited concerns over the AI tool's potential for producing non-consensual, pornographic content, particularly involving women and children. They are the first countries globally to implement such a ban.

    Regulatory Action and X's Response

    The Malaysian Communications and Multimedia Commission (MCMC) issued notices to X earlier this year, demanding stricter measures after identifying "repeated misuse" of Grok to generate harmful content. According to the MCMC, X's response failed to address the fundamental risks posed by its platform's design, instead focusing primarily on user reporting mechanisms. Consequently, Grok will remain blocked in Malaysia until effective safeguards are put in place. The MCMC has urged the public to report any harmful online content they encounter.

    Indonesia's Minister of Communications and Digital Affairs, Meutya Hafid, condemned the use of Grok for sexually explicit content as a violation of human rights, dignity, and online safety. The Indonesian ministry has also requested clarification from X regarding Grok's usage. Indonesia has a history of stringent online content regulation, having previously banned platforms like OnlyFans and Pornhub.

    Global Outcry and UK Concerns

    The controversy extends beyond Southeast Asia. Victims in Indonesia have expressed anger and distress; one prominent example is Kirana Ayuningtyas, a wheelchair user whose image was manipulated after a stranger asked Grok to depict her in a bikini. The incident highlights the deeply personal and damaging impact of such AI misuse. Reports of child sexual imagery generated by the Grok chatbot have also drawn widespread condemnation.

    In the UK, leaders, including Prime Minister Keir Starmer, have labelled Grok's deepfake capabilities "disgraceful" and "disgusting." The UK's Technology Secretary, Liz Kendall, has indicated she would support media regulator Ofcom should it decide to block access to X in the UK for non-compliance with online safety laws. Kendall emphasised that the Online Safety Act grants Ofcom the power to block services that refuse to adhere to UK legislation, a power her department would fully back. This mirrors broader concerns about AI chatbots exploiting children, with parents claiming their warnings were ignored.

    The Broader Context of AI and Misinformation

    This incident with Grok underscores a critical challenge in the age of generative AI: the rapid proliferation of harmful content and the struggle for effective regulation. As AI models become more sophisticated, their ability to create realistic but fabricated images and videos presents significant ethical dilemmas. The debate often centres on balancing free speech with the need to protect individuals from exploitation and abuse.

    The blocking of Grok by Malaysia and Indonesia serves as a stark reminder that regulatory bodies are increasingly willing to impose restrictions on AI tools that fail to incorporate adequate safety measures. This proactive stance could influence how other nations approach AI governance, particularly concerning image generation and deepfake technology. The ethical implications of AI are a recurring theme, with even an AI "godfather" warning against granting AI rights.

    For a deeper understanding of the regulatory landscape and the challenges faced by businesses adopting AI, our article The AI Vendor Vetting Checklist: What Asian businesses should check before buying AI in 2026 offers valuable insights. The rapid advancements in AI, as highlighted by reports like the UK government's AI Safety Institute's Frontier AI Threat Report, necessitate robust legal and ethical frameworks to prevent misuse and protect vulnerable populations. The report details the potential for advanced AI models to facilitate the creation of harmful biological and chemical agents, cyberattacks, and mass deception campaigns, underscoring the urgency of effective regulation.

    What measures do you think AI platforms should implement to prevent the creation and spread of harmful deepfakes? Share your thoughts in the comments below.



    Latest Comments (2)

    Manuel Gonzales (@manny_g_dev) · 24 January 2026

    Good on them, honestly. Explicit deepfakes are a serious problem, and it's high time platforms face real consequences for not policing their content better. Here in the Philippines, we're seeing similar struggles with online safety. This ban sends a clear message; hopefully, it spurs better safeguards globally, not just a whack-a-mole approach.

    Isabella Mendoza (@bella_m_dev) · 14 January 2026

    Wow, this is quite a development. Grok facing bans in Malaysia and Indonesia because of explicit deepfakes — that's a serious ethical dilemma for AI companies, no doubt. It’s good to see action taken against such misuse. However, I’m curious, or perhaps a tad cynical, about how effective these bans will truly be in the long run. People are pretty resourceful when they want to circumvent restrictions, especially online. Will a ban really stop the spread, or just push it further into the shadows where it's harder to track? It’s a thorny issue, for sure.
