
AI in ASIA

Unveiling AI Safety Labels: A New Era of Transparency in Singapore and Beyond

AI safety labels to enhance transparency in Singapore's AI landscape.

Intelligence Desk · 3 min read

AI Snapshot

The TL;DR: what matters, fast.

Singapore will introduce AI safety labels for generative AI apps to provide clear usage instructions, potential risks, and testing details.

The initiative aims to standardise how tech companies communicate transparency and testing, similar to existing safety labels on other products.

Upcoming guidelines will outline safety benchmarks covering risks such as falsehoods, toxic statements, and biased content, which must be tested before AI deployment.

Who should pay attention: Policymakers | AI developers | Consumers | Regulators

What changes next: Other nations may follow Singapore's lead in AI labelling initiatives.

Singapore plans to introduce AI safety labels by early 2025 to enhance transparency and understanding of generative AI technology.

The labels will detail AI usage, risks, and testing, similar to safety labels on medication or appliances.

A guide on data anonymisation will be released in early 2025 to facilitate secure data transfer across Asean.

Singapore's Push for AI Transparency with Safety Labels

In a significant move towards AI transparency, Singapore is set to introduce safety labels for generative artificial intelligence (AI) apps. These labels will provide clear instructions on how the AI should be used, its potential risks, and testing details. This initiative aims to standardise how tech companies communicate transparency and testing, much like safety labels on medication or household appliances. For a broader look at how different regions are approaching AI governance, you can explore the diverse models of structured governance in North Asia.

Understanding Generative AI

Generative AI refers to AI that can create new content, such as text and images. It is less predictable than traditional AI, making it crucial for users to understand its workings and potential risks. The rise of AI artists topping the charts weekly is a testament to the creative power of this technology.

What to Expect from the AI Safety Labels

According to Josephine Teo, Minister for Digital Development and Information, creators and deployers of generative AI should clearly inform users about the data used, any risks and limitations of the model, and how their systems have been tested. The upcoming guidelines will also outline safety benchmarks that should be tested before an AI is deployed, covering risks such as falsehoods, toxic statements, and biased content. This aligns with global efforts to ensure responsible AI innovation, as seen in Taiwan's AI law redefining "responsible innovation".
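To make the disclosure areas above concrete, here is a minimal sketch of what a machine-readable safety label might look like. The field names, values, and required-field check are purely illustrative assumptions; they are not taken from any official Singapore guideline, which has yet to be published.

```python
# Hypothetical sketch of a machine-readable AI safety label, covering the
# disclosure areas described in the article: data used, risks and
# limitations, and how the system was tested. All field names are
# illustrative, not from any official guideline.
safety_label = {
    "app_name": "ExampleChatApp",
    "model_family": "generative-llm",
    "training_data_summary": "Public web text and licensed corpora (illustrative)",
    "known_risks": ["falsehoods", "toxic statements", "biased content"],
    "testing": {
        "benchmarks_run": ["toxicity", "factuality", "bias"],
        "last_tested": "2025-01-15",
    },
    "intended_use": "General-purpose text assistance",
}

# The disclosure areas Minister Teo lists, expressed as required fields.
REQUIRED_FIELDS = {"training_data_summary", "known_risks", "testing", "intended_use"}

def is_complete(label: dict) -> bool:
    """Check that a label discloses every required area."""
    return REQUIRED_FIELDS.issubset(label)

print(is_complete(safety_label))  # True: all four areas are disclosed
```

A structured format like this is one plausible way regulators could make labels comparable across apps, much as nutrition labels follow a fixed layout.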

A Guide to Data Anonymisation in Asean

In addition to the AI safety labels, businesses in Asean will receive a guide on data anonymisation in early 2025. This guide aims to facilitate secure transfer of data across the region, contributing to the development of a secure global digital ecosystem.
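The guide itself is not yet published, but two techniques it would likely touch on can be sketched briefly: dropping direct identifiers and replacing them with a salted pseudonym, plus generalising quasi-identifiers such as exact age. The record fields and salt handling below are illustrative assumptions only.

```python
import hashlib

# Minimal sketch of two common anonymisation steps, assuming a record is a
# plain dict: direct identifiers are dropped, and a stable pseudonym is
# derived from a salted hash so records can still be linked across datasets
# without exposing the original ID. Field names are illustrative.
SALT = b"example-salt"  # in practice, a secret managed under the data-sharing agreement

def anonymise(record: dict) -> dict:
    pseudonym = hashlib.sha256(SALT + record["national_id"].encode()).hexdigest()[:16]
    decade = (record["age"] // 10) * 10
    return {
        "pseudonym": pseudonym,
        "age_band": f"{decade}-{decade + 9}",  # generalise exact age to a band
        "country": record["country"],          # coarse location kept, address dropped
    }

record = {"national_id": "S1234567A", "age": 34, "country": "SG", "address": "1 Example Rd"}
print(anonymise(record))  # national_id and address no longer appear
```

True anonymisation is harder than this sketch suggests (re-identification from combined quasi-identifiers remains a risk), which is presumably why a dedicated regional guide is needed.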

The Role of Synthetic Data in AI Innovation

Synthetic data has emerged as a promising solution to address the growing demands for more data to train AI without compromising users' privacy. This technology creates realistic data for AI model training without using actual sensitive data, helping to speed up innovation while mitigating concerns about cyber-security incidents. A deeper dive into the challenges and solutions can be found in this NIST publication on synthetic data and privacy.
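The core idea can be illustrated with a toy example: fit simple statistics on real values, then sample fresh records from the fitted distribution so no original value is reused. Production systems use far richer generative models; this sketch, with made-up numbers, only shows the principle.

```python
import random
import statistics

# Toy illustration of synthetic data: learn summary statistics from real
# values, then generate new values from the fitted distribution instead of
# sharing the originals. The input ages are made up for the example.
real_ages = [23, 31, 45, 52, 38, 29, 41, 36]

mu = statistics.mean(real_ages)
sigma = statistics.stdev(real_ages)

random.seed(0)  # fixed seed so the example is reproducible
synthetic_ages = [round(random.gauss(mu, sigma)) for _ in range(5)]
print(synthetic_ages)  # five new ages drawn from the fitted distribution
```

Note that even synthetic data can leak information if the generator memorises rare records, which is why the privacy trade-offs remain an active research topic.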

Challenges in Managing Data in Generative AI

Managing data in generative AI poses more challenges than in traditional AI. Denise Wong, assistant chief executive of the Infocomm Media Development Authority (IMDA), highlighted the need for discussions on how to manage data in the generative AI space and what principles should apply.

The Importance of Data Protection Safeguards

OpenAI’s head of privacy legal, Jessica Gan Lee, emphasised the need for data protection safeguards at all stages of AI, from training and development to deployment. She also highlighted the importance of training AI models through diverse data sets from around the world, while reducing the processing of personal information.

Consumer Responsibility in Data Sharing

Irene Liu, regional strategy and consulting lead for finance, risk and compliance practice at Accenture, stressed the need for consumers to be responsible for the data they provide. She suggested that more focus should be placed on educating consumers about the implications of sharing information online.

Comment and Share

What are your thoughts on the introduction of AI safety labels? Do you think they will enhance transparency and understanding of AI technology? Share your views in the comments section below, and don't forget to subscribe to our newsletter for updates on AI and AGI developments.


This is a developing story

We're tracking this across Asia-Pacific and may update with new developments, follow-ups and regional context.

This article is part of the AI Safety for Everyone learning path.



Latest Comments (4)

Arjun Mehta (@arjunm) · 25 January 2026

@arjunm: labels for generative AI are interesting but the 'data used' part feels like it could be super high level. are they actually talking about model cards or something more like data sheets for datasets? for a practitioner, knowing the underlying data distribution and preprocessing is way more critical than just a generic statement.

Dr. Farah Ali (@drfahira) · 6 January 2026

I'm just reading about these proposed AI safety labels for Singapore. While transparency is always welcome, one has to ask how much these labels will truly benefit users in diverse ASEAN contexts. Will the explanations be culturally nuanced enough, or just a one-size-fits-all approach that overlooks varying digital literacy levels and local interpretations of risk, particularly outside urban centers?

Marcus Lim (@marcuslim) · 21 October 2024

The idea of standardizing how companies communicate transparency with AI labels sounds good on paper, especially for generative models. But from what we've seen on the engineering side, especially with how frequently these models get updated even in just the past year, keeping those labels current and accurate is going to be a massive operational lift for any company scaling.

Le Hoang (@lehoang) · 2 September 2024

hey i'm le hoang, from hcmc, vietnam. i'm a junior data scientist. i just read this. so Singapore wants to use labels for generative AI risks, like medicine labels. but for AI, how do you even quantify "risk" like that? is it going to be a percentage or a warning about hallucination frequency? keen to understand how that works in practice.
