TL;DR:
- AI-generated content detection tools often fail in the Global South due to biases in training data.
- Detection tools prioritise Western languages and faces, leading to inaccuracies in other regions.
- The lack of local data and infrastructure hampers the development of effective detection tools.
Artificial Intelligence (AI) is transforming the world, but not always for the better. As generative AI becomes more common, so does the challenge of detecting AI-generated content, especially in the Global South. This article explores the issues and biases in AI-fakes detection and how they impact voters in these regions.
The Rise of AI-Generated Content
AI can create convincing images, videos, and text, and the technology is increasingly used in politics. For example, former US President Donald Trump shared AI-generated photos of Taylor Swift fans supporting him. Detection tools such as True Media's can spot these fakes, but it's not always that simple.
The Detection Gap in the Global South
Detecting AI-generated content is harder in the Global South. Most detection tools are trained on Western data, so they struggle with content from other regions. The result is many false positives (authentic content flagged as AI-generated) and false negatives (AI fakes that slip through).
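To make the detection-gap claim concrete, here is a minimal sketch of how those two error types are measured from a confusion matrix. All the counts below are hypothetical, invented purely for illustration; they are not measurements of any real detector.

```python
# Toy sketch (hypothetical numbers): measuring a detector's error rates.
# A detector's mistakes split into false positives (authentic content
# flagged as AI-generated) and false negatives (AI fakes cleared as real).

def error_rates(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Compute false-positive and false-negative rates from confusion-matrix counts."""
    return {
        "false_positive_rate": fp / (fp + tn),  # share of real items wrongly flagged
        "false_negative_rate": fn / (fn + tp),  # share of fakes wrongly cleared
    }

# Hypothetical evaluation of the same detector on two pools of media:
western_media = error_rates(tp=90, fp=5, tn=95, fn=10)
global_south_media = error_rates(tp=60, fp=30, tn=70, fn=40)

print(western_media)       # low error rates on in-distribution data
print(global_south_media)  # error rates climb on out-of-distribution data
```

The point of the sketch is that a single accuracy number hides the asymmetry: a tool can look reliable on the media it was trained on while both error rates climb sharply on faces, languages, and camera quality it has never seen.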
Biases in Training Data
AI models are usually trained on data from Western markets, which means they recognise Western faces and the English language better. “They prioritized English language—US-accented English—or faces predominant in the Western world,” says Sam Gregory from the nonprofit Witness.
Lack of Local Data
In many parts of the Global South, data is not digitised. This makes it hard to train AI models on local content. “Most of our data, actually, from [Africa] is in hard copy,” says Richard Ngamita from Thraets.
Low-Quality Media
Many people in the Global South use cheap smartphones that produce low-quality photos and videos. The low resolution and heavy compression confuse detection models. “A lot of the initial deepfake detection tools were trained on high quality media,” says Gregory.
The Impact of Faulty Detection
False positives and negatives can have serious consequences. They can lead to wrong policies and crackdowns on imaginary problems. “There’s a huge risk in terms of inflating those kinds of numbers,” says Sabhanaz Rashid Diya from the Tech Global Institute.
The Challenge of Cheapfakes
Cheapfakes are media manipulated with simple, low-tech edits, such as mislabelling, cropping, or slowing down footage, rather than with AI. They are common in the Global South and can be mistaken for AI-generated content by detection tools and untrained researchers alike.
The Struggle to Build Local Solutions
Building local detection tools is hard. It requires access to energy and data centres, which are not always available in the Global South. “If you talk about AI and local solutions here, it’s almost impossible without the compute side of things for us to even run any of our models that we are thinking about coming up with,” says Ngamita.
Prompt: Imagine You Are a Journalist in the Global South
Rationale: This prompt encourages empathy and understanding of the challenges journalists face in the Global South.
Imagine you are a journalist in the Global South. You receive a tip about a political candidate using AI-generated content to sway voters. How would you investigate this story with the current detection tools? What challenges would you face?
The Need for Better Tools
The current detection tools are not good enough. They need to be trained on more diverse data and built to handle low-quality media. This would help protect voters in the Global South from AI-generated disinformation.
The Future of AI-Fakes Detection
The future of AI-fakes detection depends on better tools and more local data. It also depends on global cooperation to share resources and knowledge. This can help close the detection gap and protect voters worldwide.
Comment and Share:
What do you think is the biggest challenge in detecting AI-generated content in the Global South? How can we overcome these challenges? Share your thoughts and experiences below. Don’t forget to subscribe for updates on AI and AGI developments.