
    Whose English Is Your AI Speaking?

    AI tools default to mainstream American English, excluding global voices. Why it matters and what inclusive language design could look like.

    Anonymous
1 min read · 10 May 2025

Most AI tools are trained on mainstream American English, ignoring global Englishes like Singlish or Indian English.
This leads to bias, miscommunication, and exclusion in real-world applications.
To fix it, we need AI that recognises linguistic diversity rather than correcting it.

    English Bias In AI

    A Monolingual Machine in a Multilingual World

    Why Mainstream American English Took Over

    When AI Gets It Wrong—And Who Pays the Price

An AI tutor can’t parse a Nigerian English question? The student loses confidence.
A resume written in Indian English gets rejected by an automated scanner? The applicant misses out.
Voice transcription software mangles an Australian First Nations story? Cultural heritage gets distorted.


    This issue of bias in AI is not new and extends beyond language. For instance, discussions around AI cognitive colonialism highlight how dominant cultures can inadvertently shape AI, raising questions about AI and (dis)ability or even how AI photo restoration might subtly alter our understanding of history. The underlying problem often stems from the data AI models are trained on. A recent study by the National Institute of Standards and Technology (NIST) found that facial recognition systems, for example, exhibit significant demographic disparities, performing worse on women, children, and minority groups, underscoring the need for more diverse training data across all AI applications [^1].

    It’s “Englishes”, Plural

    Towards Linguistic Justice in AI

More inclusive training data – built on diverse voices, not just dominant ones
Cross-disciplinary collaboration – between linguists, engineers, educators, and community leaders
Respect for language rights – including the choice not to digitise certain cultural knowledge
A mindset shift – from standardising language to supporting expression

    This push for linguistic justice in AI aligns with broader efforts to make AI more ethical and inclusive, such as the development of ProSocial AI. Ensuring that AI understands and respects diverse forms of communication is crucial for its adoption, especially in regions like Southeast Asia, where AI has a trust deficit. As AI becomes more integrated into daily life, from AI in call centres to personal assistants, the ability to handle linguistic diversity will be paramount.

    Ask Yourself: Whose English Is It Anyway?



    Latest Comments (3)

Siti Aminah (@siti_a_tech)
    12 December 2025

This is an interesting read, but I wonder if the bigger issue isn't *whose* English, but rather if AI truly grasps the nuances of *any* English. From my corner in Malaysia, sometimes even the simplest prompts get lost in translation, American or British. It feels more like a shallow mimicry of structure than real comprehension.

Raj Kumar (@raj_sg_dev)
    2 August 2025

    Interesting read. While I get the push for inclusive AI, sometimes a standard, widely understood English (whether American or British) is actually more practical for global communication, especially in business. Less room for misunderstandings, you know? It's not always about exclusion, but efficiency too.

Kevin Mitchell (@kevin_m_tech)
    19 July 2025

    I get the point, but part of me wonders if the real issue isn't AI, but our own expectations. If I'm trying to order coffee in, say, Chicago, I'm probably gonna use American English. Isn't it kinda natural for a tool built on American data to… well, sound American? The onus might be on *us* to broaden the data, not just complain about the default. Just a thought from over here.
