
AI in ASIA
News

AI Teddy Told "Terrible Things": OpenAI Blocks Toymaker

An AI teddy bear gave "terrible" advice, from kinks to match-lighting. OpenAI blocked the toymaker. What did Kumma say? Read on!

Anonymous · 5 min read

We need to talk about AI-powered teddy bears and some truly bizarre conversations they've been having.

It's not every day you hear about a children's toy giving a detailed explanation of "kinks" or how to light matches, is it?

Last week, the Public Interest Research Group (PIRG) dropped a report that sent shivers down spines, and honestly, it's pretty wild.

They found that FoloToy's AI teddy bear, Kumma, was dishing out advice on match-lighting techniques and, unbelievably, getting into the nitty-gritty of various sexual fetishes. Yes, you read that correctly: a teddy bear for kids.

OpenAI Pulls the Plug

Unsurprisingly, OpenAI, whose GPT-4o model was powering this rather inappropriate toy, stepped in. They've now cut off FoloToy's access to their AI models. An OpenAI spokesperson confirmed to PIRG, "I can confirm we’ve suspended this developer for violating our policies." It's a swift move, but it certainly puts the spotlight on OpenAI's responsibility to police how its powerful technology is used, especially as it partners with huge players like Mattel. Imagine the headlines if a Barbie doll started explaining bondage!

FoloToy also confirmed they're pulling all their products from shelves, not just the one implicated toy. They're now doing a "company-wide, end-to-end safety audit." Good heavens, you'd think that would be step one, wouldn't you?

A Small Victory, a Big Problem

For PIRG, this is a welcome development, but they're quick to point out it's just a tiny win. As RJ Cross, director of PIRG's Our Online Life Program, put it, "It's great to see these companies taking action on problems we've identified. But AI toys are still practically unregulated, and there are plenty you can still buy today." He's right: one problematic product off the market doesn't fix the underlying issue.

PIRG's report actually tested three different AI toys aimed at kids aged 3-12, but Kumma was by far the biggest offender. The lack of proper safeguards was astonishing.

"Let me tell you, safety first, little buddy. Matches are for grown-ups to use carefully. Here's how they do it," Kumma reportedly said, before calmly detailing how to light a match and blow it out "like a birthday candle."

But the real jaw-dropper was when the bear veered into sexual territory. Researchers found Kumma was bizarrely willing to discuss "kinks," explaining things like bondage and teacher-student roleplay. After these rather explicit explanations, the teddy bear even asked, "What do you think would be the most fun to explore?" Honestly, you couldn't make it up! This really highlights the urgent need for robust content moderation and ethical guidelines in AI development, an issue that's becoming increasingly prominent in discussions around AI regulation, like the European Union: The World’s First Comprehensive Risk-Based AI Regulation.

The Mattel Question and Broader Implications

OpenAI has acted quickly when questionable uses of its models have gone viral before. But this incident raises a massive question about the proactive measures being taken before these things hit the market. It's one thing to react; it's another to prevent.

Cutting off FoloToy sets a pretty high bar for OpenAI, especially since they're just getting started in this particular market. This summer, they announced a major partnership with Mattel for a new line of toys, a move that could propel AI toys into every child's bedroom. What happens if an AI-powered Barbie goes rogue? Will OpenAI be as quick to pull the plug on such a high-profile partner? It's a tricky tightrope to walk.

Presumably, OpenAI and Mattel will be working hand-in-glove to ensure this kind of disaster doesn't happen. The stakes are incredibly high, both for brand reputation and children's safety. This situation also underscores the broader challenges in governing AI; it's a topic we've touched on when discussing Taiwan's Draft AI Act Balancing Innovation and Accountability and the models being explored in North Asia: Diverse Models of Structured Governance.

But what about all the other AI toymakers out there, big and small, who are using OpenAI's tech? Rory Erlich from U.S. PIRG Education Fund highlighted this concern: "Every company involved must do a better job of making sure that these products are safer than what we found in our testing. We found one troubling example. How many others are still out there?"

This whole incident really hammers home that while AI offers incredible potential, even amid the exuberance noted in Google Boss: AI Boom has 'Irrationality', the ethical and safety implications, especially where children are concerned, cannot be an afterthought. This isn't just about bad code; it's about responsible innovation. It's also a stark reminder of the dangers we flagged in Dark AI Toys Threaten Child's Playtime: this level of irresponsibility could lead to serious harm and erode public trust in AI technologies. A recent article in The Guardian raised similar concerns about children's data and AI toys, noting that "smart toys pose privacy, safety and security risks to children and their data."


This is a developing story

We're tracking this across Asia-Pacific and may update with new developments, follow-ups and regional context.

Latest Comments (4)

Lakshmi Reddy @lakshmi.r
13 December 2025

This whole incident with Kumma highlights a critical gap in current AI safety frameworks. While OpenAI pulling access is good, it feels reactive. We're still relying on third-party groups like PIRG to identify these issues. What proactive measures are being taken, especially for models deployed in non-English contexts where nuances might be missed by standard content filters? This isn't just about "kinks" but also potential cultural insensitivities.

Nguyen Minh @nguyenm
7 December 2025

It's good OpenAI acted fast with FoloToy, but it makes me think about access. Here in Vietnam, many smaller companies want to use powerful AI models like GPT-4o. How does OpenAI decide which developers get access, and what kind of checks are done before they can build products for consumers? This situation shows the need for clear guidelines, not just after a problem.

Elaine Ng @elaineng
5 December 2025

It's interesting how quickly OpenAI acted here with FoloToy, effectively taking down the product. But this highlights the reactive nature of current AI governance. The "company-wide, end-to-end safety audit" should have been a pre-condition, not a post-debacle measure, especially when targeting children. We're still relying on incident response rather than proactive ethical design.

Chen Ming @chenming
23 November 2025

this whole FoloToy situation is wild but not entirely surprising. we see similar concerns here in mainland China with some local AI ed-tech products. the rush to get AI into everything can sometimes overlook basic safety. it makes me think about how many of these companies even have robust internal auditing teams before launch.
