
AI-Fakes Detection Is Failing Voters in the Global South

Explore the challenges and biases in AI-fakes detection and their impact on voters in the Global South.

Anonymous · 4 min read

Key takeaways:
- AI-generated content detection tools often fail in the Global South due to biases in training data.
- Detection tools prioritise Western languages and faces, leading to inaccuracies in other regions.
- The lack of local data and infrastructure hampers the development of effective detection tools.

Artificial Intelligence (AI) is transforming the world, but it's not always for the better. As generative AI becomes more common, so does the challenge of detecting AI-generated content, especially in the Global South. This article explores the issues and biases in AI-fakes detection and how they impact voters in these regions.

The Rise of AI-Generated Content

AI can create convincing images, videos, and text, and this technology is increasingly used in politics. For example, former US President Donald Trump shared AI-generated photos of Taylor Swift fans supporting him. Tools like TrueMedia's detector can spot such fakes, but it's not always that simple.

The Detection Gap in the Global South

Detecting AI-generated content is harder in the Global South. Most detection tools are trained on Western data, so they struggle with content from other regions. This leads to many false positives and negatives.

Biases in Training Data

AI models are usually trained on data from Western markets. This means they recognise Western faces and English language better. "They prioritized English language—US-accented English—or faces predominant in the Western world," says Sam Gregory from the nonprofit Witness.

Lack of Local Data

In many parts of the Global South, data is not digitised, which makes it hard to train AI models on local content. "Most of our data, actually, from [Africa] is in hard copy," says Richard Ngamita from Thraets. Even as AI adoption shifts towards the Global South, the foundational data infrastructure often lags behind.

Low-Quality Media

Many people in the Global South use cheap smartphones that produce low-quality photos and videos, which confuses detection models. "A lot of the initial deepfake detection tools were trained on high quality media," says Gregory.
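Gregory's point about capture quality can be made concrete with a toy sketch: smoothing (a rough stand-in for cheap optics and heavy compression) strips out exactly the fine, high-frequency detail that many detectors learned to inspect. The scanline, blur, and energy measure below are illustrative assumptions, not part of any real detection tool:

```python
# Toy illustration: low-quality capture removes the fine detail
# that deepfake detectors trained on high-quality media rely on.
# The signal and the blur are illustrative assumptions.

def high_freq_energy(pixels):
    """Sum of squared adjacent-pixel differences: a crude proxy for fine detail."""
    return sum((b - a) ** 2 for a, b in zip(pixels, pixels[1:]))

def box_blur(pixels):
    """3-tap box blur, loosely mimicking cheap sensors and heavy compression."""
    padded = [pixels[0]] + pixels + [pixels[-1]]
    return [(padded[i] + padded[i + 1] + padded[i + 2]) / 3
            for i in range(len(pixels))]

# A scanline with subtle alternating texture riding on a smooth ramp.
scanline = [i + (3 if i % 2 else 0) for i in range(32)]

original = high_freq_energy(scanline)
degraded = high_freq_energy(box_blur(scanline))
print(original, degraded)  # fine-detail energy drops sharply after blurring
```

A detector that keys on that high-frequency "energy" sees a much weaker signal in the degraded version, which is one way low-quality media can push a model toward wrong answers in either direction.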

The Impact of Faulty Detection

False positives and negatives can have serious consequences: they can lead to misguided policies and crackdowns on imaginary problems. "There's a huge risk in terms of inflating those kinds of numbers," says Sabhanaz Rashid Diya from the Tech Global Institute.
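The inflation risk Diya describes follows from simple base-rate arithmetic: when genuine AI fakes are rare, even a fairly accurate detector's flags are dominated by false alarms. A minimal sketch via Bayes' theorem; the prevalence, sensitivity, and false-positive figures are illustrative assumptions, not from the article:

```python
# Base-rate effect: an accurate detector applied to a corpus where
# AI fakes are rare still yields mostly false alarms.
# All numbers below are illustrative assumptions.

def flagged_precision(prevalence, sensitivity, false_positive_rate):
    """P(actually fake | flagged as fake), via Bayes' theorem."""
    true_alarms = prevalence * sensitivity
    false_alarms = (1 - prevalence) * false_positive_rate
    return true_alarms / (true_alarms + false_alarms)

# Suppose 1% of circulating political media is AI-generated, and the
# detector catches 90% of fakes with a 10% false-positive rate (plausible
# for a tool applied outside the domain it was trained on).
p = flagged_precision(prevalence=0.01, sensitivity=0.90, false_positive_rate=0.10)
print(f"Share of flagged items that are really fake: {p:.1%}")  # ~8.3%
```

Under these assumed numbers, more than 90% of flagged items would be false alarms, which is exactly how headline counts of "detected AI fakes" can balloon.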

The Challenge of Cheapfakes

Cheapfakes are simple edits that can be mistaken for AI-generated content. They are common in the Global South but can fool detection tools and untrained researchers.

The Struggle to Build Local Solutions

Building local detection tools is hard. It requires access to energy and data centres, which are not always available in the Global South. "If you talk about AI and local solutions here, it's almost impossible without the compute side of things for us to even run any of our models that we are thinking about coming up with," says Ngamita.

Prompt: Imagine You Are a Journalist in the Global South

Rationale: This prompt encourages empathy and understanding of the challenges journalists face in the Global South.

Prompt: Imagine you are a journalist in the Global South. You receive a tip about a political candidate using AI-generated content to sway voters. How would you investigate this story with the current detection tools? What challenges would you face?

The Need for Better Tools

The current detection tools are not good enough. They need to be trained on more diverse data and made to work with low-quality media. This would help protect voters in the Global South from AI-generated disinformation. According to a United Nations report, addressing these biases is crucial for fostering trust in AI globally (UN Report on AI and Human Rights).

The Future of AI-Fakes Detection

The future of AI-fakes detection depends on better tools and more local data. It also depends on global cooperation to share resources and knowledge. This can help close the detection gap and protect voters worldwide.

Comment and Share:

What do you think is the biggest challenge in detecting AI-generated content in the Global South? How can we overcome these challenges? Share your thoughts and experiences below.


This is a developing story

We're tracking this across Asia-Pacific and may update with new developments, follow-ups and regional context.

This article is part of the Global AI Policy Landscape learning path.



Latest Comments (7)

Rachel Foo (@rachelf) · 20 February 2026

this is so true. when we were trying to roll out an AI-powered fraud detection system for remittances, the biggest headache was getting it to recognise non-standard transaction patterns from certain regions. the models just kept flagging legitimate transfers from smaller, less "digitized" countries as suspicious. endless false positives, like the article said about western data bias. compliance was not amused.

Rachel Foo (@rachelf) · 20 February 2026

this is so relatable. we're trying to roll out a new AI fraud detection model for our Southeast Asian branches, and the data quality from some regions is just... not there. like the article mentions with the low-quality media, our current models trained on cleaner, more digitised data from our Singapore operations just throws up so many false positives. how do you even begin to build robust training data for this?

Min-jun Lee (@minjunl) · 5 February 2026

Interesting to see Witness's take on the language and facial recognition bias. This is a critical investment area for hyperlocal AI firms, especially with the surge in mobile-first markets in SEA and Africa. Might need to re-evaluate some of our current portfolio screening for this.

Ana Lopez (@analopez) · 20 January 2026

This really hits home for us here in Cebu. We've been talking about how these detection tools miss the mark with our languages and faces at our AI meetups. It's a real challenge for our local elections.

Natalie Okafor (@natalieok) · 21 November 2024

This whole discussion about low-quality media confusing detection models for fakes is interesting. In healthcare AI, we face similar issues with varied data quality from different diagnostic devices or patient-submitted imagery. It makes it very difficult to standardize and maintain model robustness, especially when patient safety is on the line.

Jake Morrison (@jakemorrison) · 31 October 2024

yeah this is a known problem in the industry. everyone's building on imagenet or common crawl, which are obviously super western-centric datasets. it's not even malicious, just what's readily available and computationally feasible for most startups. building robust, diverse global datasets is a whole other level of investment and frankly, most companies just aren't there yet.

Min-jun Lee (@minjunl) · 12 September 2024

just saw this. if local data isn't digitized, how do we even begin to build robust local models? seems like a fundamental infrastructure hurdle before any real investment in detection tech can scale.
