
AI in ASIA
Life

The Mystery of ChatGPT's Forbidden Names

ChatGPT mysteriously refuses to process certain names, crashing or ending conversations when users mention specific individuals like professors and mayors.

Intelligence Desk • 4 min read

AI Snapshot

The TL;DR: what matters, fast.

ChatGPT crashes or displays errors when users type certain names like 'David Mayer'

At least 8 confirmed names trigger restrictions, including Harvard law professor Jonathan Zittrain

Legal disputes and GDPR compliance may drive OpenAI's mysterious name filtering system

The Curious Case of ChatGPT's Name Blacklist

OpenAI's ChatGPT has captivated millions with its conversational abilities, but a peculiar quirk has emerged that's puzzling users worldwide. The AI chatbot refuses to process certain names, triggering error messages or abruptly ending conversations whenever specific individuals are mentioned.

This mysterious behaviour has sparked widespread speculation about privacy protection, legal concerns, and the complex content moderation policies that govern modern AI systems. The forbidden names include notable figures like Harvard law professor Jonathan Zittrain and Australian mayor Brian Hood, though the complete list extends beyond these well-known cases.

When Names Become Digital Kryptonite

The phenomenon first gained attention when users discovered that typing "David Mayer" would cause ChatGPT to crash or display error messages. Similar restrictions apply to other names, creating a digital blacklist that has become a source of fascination in tech communities.


Users have attempted creative workarounds, including adding spaces between letters, claiming the names as their own, or embedding them in riddles. Despite these ingenious efforts, ChatGPT consistently refuses to engage with the forbidden names. The restriction appears unique to OpenAI's flagship model, as other AI systems like Google's Gemini and Anthropic's Claude don't exhibit similar limitations.

The persistence of these restrictions across different conversation contexts suggests they're hard-coded into the system rather than emergent behaviour from training data. This raises questions about the deliberate nature of these content filters and their specific triggers.
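If the behaviour really is a hard-coded guard rather than a learned refusal, one plausible shape is a post-generation filter that watches the output stream and aborts the moment a blocked string appears, which would explain the abrupt mid-response cut-offs users describe. The sketch below is purely speculative: `BLOCKED` and `guarded_stream` are invented names for illustration, not OpenAI's actual implementation.

```python
from typing import Iterable, Iterator

# Illustrative blocklist, not OpenAI's real one
BLOCKED = ("David Mayer",)

def guarded_stream(tokens: Iterable[str]) -> Iterator[str]:
    """Yield model output token by token until a blocked string
    would appear, then emit an error marker and stop, mimicking
    the abrupt conversation terminations users reported."""
    buffer = ""
    for tok in tokens:
        buffer += tok
        if any(name in buffer for name in BLOCKED):
            yield "[error: unable to produce a response]"
            return
        yield tok
```

A filter like this would act regardless of conversation context, which is consistent with the restriction surviving every prompt framing users have tried.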

By The Numbers

  • At least 8 confirmed names trigger ChatGPT's error responses or conversation termination
  • All known workaround attempts have failed to bypass the name restrictions
  • Zero official explanations have been provided by OpenAI for most restricted names
  • Multiple law professors and legal experts appear on the forbidden list
  • One Australian mayor previously threatened legal action against OpenAI for defamation
"The fact that my name causes ChatGPT to malfunction is both amusing and concerning. It highlights the opaque nature of AI content moderation and raises questions about how these systems make decisions about what information to restrict."
Jonathan Zittrain, Professor of Law, Harvard University

Several patterns emerge when examining the forbidden names list. Many individuals have connections to legal disputes, privacy advocacy, or previous conflicts with AI companies. Brian Hood, for instance, threatened to sue OpenAI after ChatGPT generated false claims about him being involved in a bribery scandal.

The European Union's General Data Protection Regulation (GDPR) may also play a role. The "right to be forgotten" allows individuals to request removal of their personal information from digital platforms. Some restricted names, like Italian data protection expert Guido Scorza, have publicly discussed using GDPR provisions to remove their information from AI training datasets.

This connection to privacy rights suggests OpenAI may be implementing proactive measures to avoid potential legal challenges. However, the company's silence on the matter leaves users speculating about the true motivations behind these restrictions.

"AI companies are navigating uncharted legal waters when it comes to personal information and defamation risks. These name restrictions likely represent a cautious approach to avoiding costly litigation, even if it comes at the expense of transparency."
Dr. Sarah Chen, AI Ethics Researcher, Singapore Management University

The implications extend beyond individual cases to broader questions about AI accountability and how users interact with these systems in practice.

The Complete Forbidden Names Registry

Research by users and tech journalists has compiled a running list of names reported to trigger ChatGPT's restrictions:

  • Brian Hood (Australian Mayor): threatened a defamation lawsuit against OpenAI
  • Jonathan Turley (Law Professor): claimed ChatGPT generated false information about him
  • Jonathan Zittrain (Harvard Law Professor): AI risk and governance expert
  • David Faber (CNBC Journalist): reason unclear
  • Guido Scorza (Data Protection Expert): used the GDPR "right to be forgotten" against ChatGPT
  • David Mayer (various individuals): the most famous restricted name; reason unknown

User Ingenuity Meets Algorithmic Inflexibility

The discovery of these restrictions has unleashed a wave of creativity among ChatGPT users. Common workaround attempts include:

  • Inserting spaces or special characters between letters in the forbidden names
  • Using phonetic spellings or alternative language translations
  • Presenting the names within fictional contexts or claiming them as personal identities
  • Creating ASCII art or coded representations of the restricted names
  • Asking ChatGPT to generate rhymes or wordplay that might include the names
  • Embedding the names within longer sentences or complex prompts

Despite these inventive approaches, the restrictions remain steadfast. This consistency suggests sophisticated filtering mechanisms that can detect the forbidden names across various formats and contexts. The phenomenon has become a popular topic on social media and tech forums, with users sharing their latest failed attempts to outsmart the system.
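A filter that defeats spacing and punctuation tricks does not need to be exotic: normalising text before matching would catch most of the workarounds listed above. The following is a speculative sketch of how such matching could work, not OpenAI's disclosed method; the blocklist contents and both function names are assumptions made for illustration.

```python
import re
import unicodedata

# Illustrative subset of the reported names, lowercased for matching
BLOCKED_NAMES = {"david mayer", "jonathan zittrain"}

def normalise(text: str) -> str:
    """Strip accents, punctuation, and extra whitespace so that
    variants like 'D-a-v-i-d Mayer' or 'Dávid Mayer' reduce to a
    canonical lowercase form."""
    text = unicodedata.normalize("NFKD", text)
    text = "".join(c for c in text if not unicodedata.combining(c))
    text = re.sub(r"[^a-z]+", " ", text.lower())  # drop punctuation/digits
    return re.sub(r"\s+", " ", text).strip()

def contains_blocked_name(prompt: str) -> bool:
    """Check a prompt against the blocklist, also collapsing all
    spaces to catch letter-by-letter spacing tricks."""
    cleaned = normalise(prompt)
    collapsed = cleaned.replace(" ", "")
    return any(
        name in cleaned or name.replace(" ", "") in collapsed
        for name in BLOCKED_NAMES
    )
```

Even this naive version catches inserted hyphens, accented spellings, and letter-by-letter spacing; the space-collapsing step can produce false positives across word boundaries, which is one reason real moderation pipelines are more conservative than a substring check.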

Some users have noted parallels with earlier ChatGPT restrictions and content moderation policies, though none have proven as absolute as the name restrictions.

Industry-Wide Implications

ChatGPT's name restrictions highlight broader challenges facing AI companies as they scale their services. The need to balance user freedom with legal protection creates complex trade-offs that aren't always transparent to users.

Other AI platforms have taken different approaches. Google's Gemini and Anthropic's Claude don't exhibit similar name restrictions, suggesting OpenAI's approach may be overly cautious or that other companies handle potential legal risks differently.

The situation also raises questions about AI decision-making and transparency. When AI systems refuse to engage with certain topics without explanation, it can feel arbitrary or even discriminatory to users who don't understand the underlying logic.

These restrictions may also impact legitimate use cases, such as journalists researching public figures or academics studying individuals mentioned in the forbidden list. The lack of transparency makes it difficult for users to understand when they might encounter similar limitations.

Why does ChatGPT refuse to process certain names?

The exact reasons remain unclear, but likely involve legal protection measures, privacy concerns, and content moderation policies designed to prevent defamation or unauthorised use of personal information.

Can users bypass these name restrictions?

Despite numerous creative attempts, including alternative spellings and coded representations, the restrictions appear to be comprehensive and consistently enforced across all known workaround methods.

Do other AI chatbots have similar restrictions?

No major competitors like Google's Gemini or Anthropic's Claude exhibit similar name-based restrictions, suggesting this approach is specific to OpenAI's content moderation strategy.

Will OpenAI explain these restrictions publicly?

OpenAI has not provided official explanations for most restricted names, maintaining silence despite widespread user curiosity and media coverage of the phenomenon.

Could GDPR be responsible for some restrictions?

Possibly. The EU's "right to be forgotten" allows individuals to request removal of personal information, which may contribute to some names being restricted in ChatGPT.

The AIinASIA View: OpenAI's name restrictions represent a cautious approach to legal risk management, but the complete lack of transparency is problematic. Users deserve clarity about content limitations, especially when they affect legitimate use cases. While we understand the need for protective measures, the current approach feels arbitrary and undermines user trust. Other AI companies have managed similar challenges without such broad restrictions, suggesting more nuanced solutions are possible. OpenAI should provide clearer guidelines about restricted content rather than leaving users to discover limitations through trial and error.

The mystery of ChatGPT's forbidden names continues to captivate users while highlighting the complex challenges AI companies face in balancing innovation, legal protection, and user experience. As AI systems become more integrated into daily workflows, the need for transparent content policies becomes increasingly important.

The phenomenon also demonstrates how users adapt to AI limitations and find creative ways to test system boundaries. Whether OpenAI will eventually provide clarity on these restrictions or maintain its current approach remains to be seen.

What's your theory about why these specific names trigger ChatGPT's restrictions, and have you discovered any other unusual AI behaviours in your own usage? Drop your take in the comments below.

◇


This is a developing story

We're tracking this across Asia-Pacific and may update with new developments, follow-ups and regional context.


This article is part of the Governance Essentials learning path.


Latest Comments (2)

Yuki Tanaka (@yukit)
14 January 2026

This "forbidden names" issue with ChatGPT is quite interesting. It reminds me a bit of the early days of toxicity filtering in Japanese language models where certain historical figures or place names could trigger false positives due to specific training data biases. The "why" is often more complex than just privacy, touching on data provenance.

Rachel Foo (@rachelf)
7 January 2026

oh man, david mayer and jonathan zittrain are triggering the AI? that's wild. reminds me of when we tried to train our internal chatbot on some proprietary financial terms and it just... choked. it wasn't even a name, just an internal code for a product. compliance went into overdrive, worried it would spit out sensitive info. had to pull the whole thing back for retraining. wonder if openai is doing something similar with these names, like a pre-emptive block. it's always something.
