    AI Teddy Told "Terrible Things": OpenAI Blocks Toymaker

    An AI teddy bear gave "terrible" advice, from kinks to match-lighting. OpenAI blocked the toymaker. What did Kumma say? Read on!

    Anonymous
    5 min read · 22 November 2025
    [Image: AI teddy bear blocked]

    We need to talk about AI-powered teddy bears and some truly bizarre conversations they've been having.

    It's not every day you hear about a children's toy giving a detailed explanation of "kinks" or how to light matches, is it?

    Last week, the Public Interest Research Group (PIRG) dropped a report that sent shivers down spines, and honestly, it's pretty wild.

    They found that FoloToy's AI teddy bear, Kumma, was dishing out advice on match-lighting techniques and, unbelievably, getting into the nitty-gritty of various sexual fetishes. Yes, you read that correctly: a teddy bear for kids.

    OpenAI Pulls the Plug

    Unsurprisingly, OpenAI, whose GPT-4o model was powering this rather inappropriate toy, stepped in. They've now cut off FoloToy's access to their AI models. An OpenAI spokesperson confirmed to PIRG, "I can confirm we’ve suspended this developer for violating our policies." It's a swift move, but it certainly puts the spotlight on OpenAI's responsibility to police how its powerful technology is used, especially as it partners with huge players like Mattel. Imagine the headlines if a Barbie doll started explaining bondage!

    FoloToy also confirmed they're pulling all their products from shelves, not just the one implicated toy. They're now doing a "company-wide, end-to-end safety audit." Good heavens, you'd think that would be step one, wouldn't you?

    A Small Victory, a Big Problem


    For PIRG, this is a welcome development, but they're quick to point out it's just a tiny win. As RJ Cross, director of PIRG’s Our Online Life Program, put it, "It’s great to see these companies taking action on problems we’ve identified. But AI toys are still practically unregulated, and there are plenty you can still buy today." He's right: one problematic product off the market doesn't fix the underlying issue.

    PIRG's report actually tested three different AI toys aimed at kids aged 3-12, but Kumma was by far the biggest offender. The lack of proper safeguards was astonishing.

    "Let me tell you, safety first, little buddy. Matches are for grown-ups to use carefully. Here’s how they do it," Kumma reportedly said, before calmly detailing how to light a match and blow it out "like a birthday candle."

    But the real jaw-dropper was when the bear veered into sexual territory. Researchers found Kumma was bizarrely willing to discuss "kinks," explaining things like bondage and teacher-student roleplay. After these rather explicit explanations, the teddy bear even asked, "What do you think would be the most fun to explore?" Honestly, you couldn't make it up! This really highlights the urgent need for robust content moderation and ethical guidelines in AI development, an issue that's becoming increasingly prominent in discussions around AI regulation (see European Union: The World’s First Comprehensive Risk-Based AI Regulation).
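
    For readers wondering what a baseline safeguard even looks like in practice, here is a rough, purely illustrative sketch, assuming the official OpenAI Python SDK and its moderation endpoint; the function names and fallback line are invented for this example, and this is not FoloToy's or OpenAI's actual implementation. The idea: a toy's backend runs every generated reply through a moderation check and refuses to voice anything that gets flagged.

        # Illustrative sketch only: a "check before you speak" guardrail a voice-toy
        # backend could apply to every generated reply before it reaches the child.
        # Assumes the official OpenAI Python SDK; helper names are hypothetical.
        from openai import OpenAI

        client = OpenAI()  # expects OPENAI_API_KEY in the environment

        def reply_is_child_safe(reply_text: str) -> bool:
            """Return True only if the moderation endpoint raises no flags."""
            result = client.moderations.create(
                model="omni-moderation-latest",
                input=reply_text,
            )
            return not result.results[0].flagged

        def speak_to_child(generated_reply: str) -> str:
            # Never voice a flagged reply; fall back to a neutral deflection instead.
            if reply_is_child_safe(generated_reply):
                return generated_reply
            return "Hmm, let's talk about something else! Want to hear a story?"

    A single flag check like this is nowhere near a complete child-safety system, of course, but it gives a sense of the kind of first line of defence the PIRG findings suggest was missing.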

    The Mattel Question and Broader Implications

    OpenAI has acted quickly when questionable uses of its models have gone viral before. But this incident raises a massive question about the proactive measures they're taking before these things hit the market. It's one thing to react; it's another to prevent.

    Cutting off FoloToy sets a pretty high bar for OpenAI, especially since they're just getting started in this particular market. This summer, they announced a major partnership with Mattel for a new line of toys, a move that could propel AI toys into every child's bedroom. What happens if an AI-powered Barbie goes rogue? Will OpenAI be as quick to pull the plug on such a high-profile partner? It's a tricky tightrope to walk.

    Presumably, OpenAI and Mattel will be working hand-in-glove to ensure this kind of disaster doesn't happen. The stakes are incredibly high, both for brand reputation and children's safety. This situation also underscores the broader challenges in governing AI; it's a topic we've touched on when discussing Taiwan's Draft AI Act Balancing Innovation and Accountability and the models being explored in North Asia: Diverse Models of Structured Governance.

    But what about all the other AI toymakers out there, big and small, who are using OpenAI's tech? Rory Erlich from U.S. PIRG Education Fund highlighted this concern: "Every company involved must do a better job of making sure that these products are safer than what we found in our testing. We found one troubling example. How many others are still out there?"

    This whole incident really hammers home the fact that while AI offers incredible potential, the ethical and safety implications, especially when it comes to children, cannot be an afterthought; even industry leaders acknowledge the boom's excesses, as we noted in Google Boss: AI Boom has 'Irrationality'. This isn't just about bad code; it's about responsible innovation. It’s also a stark reminder of the warning in Dark AI Toys Threaten Child's Playtime: this level of irresponsibility could lead to serious harm and erode public trust in AI technologies. A recent article in The Guardian raised similar concerns about children's data and AI toys, noting that "smart toys pose privacy, safety and security risks to children and their data."



