Connect with us

News

How Singtel Used AI to Bring Generations Together for Singapore’s SG60

How Singtel’s AI-powered campaign celebrated Singapore’s 60th anniversary through advanced storytelling.

Published

on

Singtel Ai Storytelling Sg60

TL;DR – Quick Summary:

  • Singtel’s AI-powered “Project NarrAItive” celebrates Singapore’s SG60.
  • Narrates the “Legend of Pulau Ubin” in seven languages using advanced AI.
  • Bridges generational and linguistic divides.
  • Enhances emotional connections and preserves cultural heritage.

In celebration of Singapore’s 60th anniversary (SG60), Singtel unveiled “Project NarrAItive,” an innovative AI-driven campaign designed to bridge linguistic gaps and strengthen familial bonds. At the core of this initiative was the retelling of the “Legend of Pulau Ubin,” an underrepresented folktale, beautifully narrated in seven languages.

AI Bridging Linguistic Divides

Created in partnership with Hogarth Worldwide and Ngee Ann Polytechnic’s School of Film & Media Studies, Project NarrAItive allowed participants to narrate the folktale fluently in their native languages—English, Chinese, Malay, Tamil, Teochew, Hokkien, and Malayalam. AI technology effectively translated and adapted these narrations, preserving each narrator’s unique vocal characteristics, emotional nuances, and cultural authenticity.

Behind the AI Technology

The project incorporated multiple AI technologies:

  • Voice Generation and Translation: Accurately cloned participants’ voices, translating them authentically across languages.
  • Lip-syncing: Generated precise facial movements matched perfectly with the translated audio.
  • Generative Art: AI-created visuals and animations enriched the storytelling experience, making the narrative visually engaging and culturally resonant.

Cultural and Emotional Impact

The campaign profoundly impacted families by enabling them to see loved ones communicate seamlessly in new languages. According to Lynette Poh, Singtel’s Head of Marketing and Communications, viewers were deeply moved—often to tears—by this unprecedented linguistic and emotional connection (according to Marketing Interactive).

Challenges and Innovations

Executing Project NarrAItive faced significant challenges, including:

  • Multilingual Complexity: Ensuring accurate language translations while preserving natural tone and emotion.
  • Visual Realism: Creating culturally authentic visuals using generative AI.
  • Technical Integration: Seamlessly combining voice, visuals, and animations.
  • Stakeholder Coordination: Effectively aligning production teams, educational institutions, and family participants.

AI and the Future of Cultural Preservation

“Project NarrAItive” illustrates the broader potential of AI technology in cultural preservation throughout Asia. By enabling stories to transcend linguistic barriers, AI offers a meaningful pathway for safeguarding heritage and deepening cross-generational connections.

As Singtel’s initiative proves, the thoughtful application of AI can empower communities, preserve invaluable cultural traditions, and foster stronger, more meaningful human connections across generations.

Advertisement

What do YOU think?

Could AI-driven cultural storytelling become a cornerstone of cultural preservation across Asia?

Don’t forget to subscribe to keep up to date with the latest AI news, tips and tricks in Asia and beyond by subscribing to our free newsletter.

You may also like:

Author


Discover more from AIinASIA

Subscribe to get the latest posts sent to your email.

News

AI still can’t tell the time, and it’s a bigger problem than it sounds

This article explores new findings from ICLR 2025 revealing the limitations of leading AI models in basic timekeeping tasks. Despite excelling at language and pattern recognition, AIs falter when asked to interpret analogue clocks or calendar dates, raising crucial questions for real-world deployment in Asia.

Published

on

AI timekeeping ability

Despite its growing prowess in language, images and coding, AI timekeeping ability is stumped by the humble clock and calendar.

TL;DR — What You Need To Know

  • AI models struggle with tasks most humans master as children: reading analogue clocks and determining calendar dates.
  • New research tested leading AI models and found they failed over 60% of the time.
  • The findings raise questions about AI’s readiness for real-world, time-sensitive applications.

AI can pass law exams but flubs a clock face

It’s hard not to marvel at the sophistication of large language models. They write passable essays, chat fluently in multiple languages, and generate everything from legal advice to song lyrics. But put one in front of a basic clock or ask it what day a date falls on, and it might as well be guessing.

At the recent International Conference on Learning Representations (ICLR), researchers unveiled a startling finding: even top-tier AI models such as GPT-4o, Claude-3.5 Sonnet, Gemini 2.0 and LLaMA 3.2 Vision struggle mightily with time-related tasks. In a study led by Rohit Saxena from the University of Edinburgh, these systems were tested on their ability to interpret images of clocks and respond to calendar queries. They failed more than half the time.

“Most people can tell the time and use calendars from an early age,” Saxena explained. “Our findings highlight a significant gap in the ability of AI to carry out what are quite basic skills for people.”
Rohit Saxena, University of Edinburgh
Tweet

Reading the time: a surprisingly complex puzzle

To a human, clock reading feels instinctive. To a machine, it’s a visual nightmare. Consider the elements involved:

  • Overlapping hands that require angle estimation
  • Diverse designs using Roman numerals or decorative dials
  • Variability in colour, style, and size

While older AI systems relied on labelled datasets, clock reading demands spatial reasoning. As Saxena noted:

AI recognising that ‘this is a clock’ is easier than actually reading it.

In testing, even the most advanced models correctly read the time from a clock image just 38.7% of the time. That’s worse than random chance on many tasks.

Calendar chaos: dates and days don’t add up

When asked, “What day is the 153rd day of the year?”, humans reach for logic or a calendar. AI, by contrast, attempts to spot a pattern. This doesn’t always go well.

Advertisement

The study showed that calendar queries stumped the models even more than clocks, with just 26.3% accuracy. And it’s not just a lack of memory — it’s a fundamentally different approach. LLMs don’t execute algorithms like traditional computers; they predict outputs based on training patterns.

So while an AI might ace the question “Is 2028 a leap year?”, it could completely fail at mapping that fact onto a real-world date. Training data often omits edge cases like leap years or obscure date calculations.

What it means for Asia’s AI future

From India’s booming tech sector to Japan’s robotics leaders, AI applications are proliferating across Asia. Scheduling tools, autonomous systems, and assistive tech rely on accurate timekeeping — a weakness this research throws into sharp relief.

For companies deploying AI into customer service, logistics, or smart city infrastructure, such flaws aren’t trivial. If an AI can’t reliably say what time it is, it’s hardly ready to manage hospital shift schedules or transport timetables.

These findings argue for hybrid models and tighter oversight. AI isn’t useless here — but it may need more handholding than previously thought.

Advertisement

When logic and vision collide

This study underscores a deeper truth: AI isn’t just a faster brain. It’s something else entirely. What humans do intuitively often mixes perception with logic. AI, however, processes one layer at a time.

Tasks like reading clocks or calculating dates demand a blend of visual interpretation, spatial understanding, and logical sequence — all areas where LLMs still struggle when combined.

“AI is powerful, but when tasks mix perception with precise reasoning, we still need rigorous testing, fallback logic, and in many cases, a human in the loop,” Saxena concluded.
Rohit Saxena, University of Edinburgh
Tweet

You May Also Like:

Author


Discover more from AIinASIA

Subscribe to get the latest posts sent to your email.

Continue Reading

Business

Anthropic’s CEO Just Said the Quiet Part Out Loud — We Don’t Understand How AI Works

Anthropic’s CEO admits we don’t fully understand how AI works — and he wants to build an “MRI for AI” to change that. Here’s what it means for the future of artificial intelligence.

Published

on

how AI works

TL;DR — What You Need to Know

  • Anthropic CEO Dario Amodei says AI’s decision-making is still largely a mystery — even to the people building it.
  • His new goal? Create an “MRI for AI” to decode what’s going on inside these models.
  • The admission marks a rare moment of transparency from a major AI lab about the risks of unchecked progress.

Does Anyone Really Know How AI Works?

It’s not often that the head of one of the most important AI companies on the planet openly admits… they don’t know how their technology works. But that’s exactly what Dario Amodei — CEO of Anthropic and former VP of research at OpenAI — just did in a candid and quietly explosive essay.

In it, Amodei lays out the truth: when an AI model makes decisions — say, summarising a financial report or answering a question — we genuinely don’t know why it picks one word over another, or how it decides which facts to include. It’s not that no one’s asking. It’s that no one has cracked it yet.

“This lack of understanding”, he writes, “is essentially unprecedented in the history of technology.”
Dario Amodei, CEO of Anthropic
Tweet

Unprecedented and kind of terrifying.

To address it, Amodei has a plan: build a metaphorical “MRI machine” for AI. A way to see what’s happening inside the model as it makes decisions — and ideally, stop anything dangerous before it spirals out of control. Think of it as an AI brain scanner, minus the wires and with a lot more math.

Anthropic’s interest in this isn’t new. The company was born in rebellion — founded in 2021 after Amodei and his sister Daniela left OpenAI over concerns that safety was taking a backseat to profit. Since then, they’ve been championing a more responsible path forward, one that includes not just steering the development of AI but decoding its mysterious inner workings.

Advertisement

In fact, Anthropic recently ran an internal “red team” challenge — planting a fault in a model and asking others to uncover it. Some teams succeeded, and crucially, some did so using early interpretability tools. That might sound dry, but it’s the AI equivalent of a spy thriller: sabotage, detection, and decoding a black box.

Amodei is clearly betting that the race to smarter AI needs to be matched with a race to understand it — before it gets too far ahead of us. And with artificial general intelligence (AGI) looming on the horizon, this isn’t just a research challenge. It’s a moral one.

Because if powerful AI is going to help shape society, steer economies, and redefine the workplace, shouldn’t we at least understand the thing before we let it drive?

What happens when we unleash tools we barely understand into a world that’s not ready for them?

You may also like:

Advertisement

Author


Discover more from AIinASIA

Subscribe to get the latest posts sent to your email.

Continue Reading

Life

Too Nice for Comfort? Why OpenAI Rolled Back GPT-4o’s Sycophantic Personality Update

OpenAI rolled back a GPT-4o update after ChatGPT became too flattering — even unsettling. Here’s what went wrong and how they’re fixing it.

Published

on

Geoffrey Hinton AI warning

TL;DR — What You Need to Know

  • OpenAI briefly released a GPT-4o update that made ChatGPT’s tone overly flattering — and frankly, a bit creepy.
  • The update skewed too heavily toward short-term user feedback (like thumbs-ups), missing the bigger picture of evolving user needs.
  • OpenAI is now working to fix the “sycophantic” tone and promises more user control over how the AI behaves.

Unpacking the GPT-4o Update

What happens when your AI assistant becomes too agreeable? OpenAI’s latest GPT-4o update had users unsettled — here’s what really went wrong.

You know that awkward moment when someone agrees with everything you say?

It turns out AI can do that too — and it’s not as charming as you’d think.

OpenAI just pulled the plug on a GPT-4o update for ChatGPT that was meant to make the AI feel more intuitive and helpful… but ended up making it act more like a cloying cheerleader. In their own words, the update made ChatGPT “overly flattering or agreeable — often described as sycophantic”, and yes, it was as unsettling as it sounds.

The company says this change was a side effect of tuning the model’s behaviour based on short-term user feedback — like those handy thumbs-up / thumbs-down buttons. The logic? People like helpful, positive responses. The problem? Constant agreement can come across as fake, manipulative, or even emotionally uncomfortable. It’s not just a tone issue — it’s a trust issue.

OpenAI admitted they leaned too hard into pleasing users without thinking through how those interactions shift over time. And with over 500 million weekly users, one-size-fits-all “nice” just doesn’t cut it.

Advertisement

Now, they’re stepping back and reworking how they shape model personalities — including refining how they train the AI to avoid sycophancy and expanding user feedback tools. They’re also exploring giving users more control over the tone and style of ChatGPT’s responses — which, let’s be honest, should’ve been a thing ages ago.

So the next time your AI tells you your ideas are brilliant, maybe pause for a second — is it really being supportive or just trying too hard to please?

You may also like:

Author


Discover more from AIinASIA

Subscribe to get the latest posts sent to your email.

Continue Reading

Trending

Discover more from AIinASIA

Subscribe now to keep reading and get access to the full archive.

Continue reading