Cookie Consent

    We use cookies to enhance your browsing experience, serve personalised ads or content, and analyse our traffic. Learn more

    Install AIinASIA

    Get quick access from your home screen

    Business

    Experts Warn of the Risks in Granting AI Models Control Over Robots

    Considering giving AI models control of robots? Experts are warning of serious risks. Unpack the potential dangers and learn why caution is paramount.

    Anonymous
    2 min read11 March 2024
    Safety concerns of LLMs/VLMs in robotics

    Right, let's talk about robots. Specifically, the slightly scary prospect of AI-powered robots running amok, or at least messing things up, because we haven't quite figured out how to make them safe. Researchers at the University of Maryland (UMD) have been looking into this, and their message is pretty clear: robot makers, hold your horses a bit and do more safety checks before you start plugging large language models (LLMs) and vision models (VLMs) into your shiny new hardware. It's a really important conversation, especially as we see more and more sophisticated AI, like the kind powering Sora AI Hits Android: Eerily Real!.

    The Hidden Dangers of Smart Robots

    You see, the trend right now is to make robots smarter by giving them these advanced AI brains. Sounds great on paper, doesn't it? But the UMD folks are waving a big red flag, tiny as these AI models are, they're not foolproof, and those vulnerabilities can easily become safety hazards when they're controlling physical machines. It's a bit like the wider discussions around responsible AI development we're seeing, with places like Taiwan really thinking hard about Taiwan’s AI Law Is Quietly Redefining What “Responsible Innovation” Means. We're talking about AI potentially transforming everything from customer service to manufacturing, but safety absolutely has to be the top priority.

    Sneaky Attacks on AI Brains

    The UMD team actually put these AI brains through their paces, simulating a few types of "adversarial attacks" in virtual environments. Think of it as trying to trick the AI. They looked at three main kinds:

    Enjoying this? Get more in your inbox.

    Weekly AI news & insights from Asia.

    • Prompt-based attacks: This is where you feed the AI misleading or confusing instructions.
    • Perception-based attacks: Here, you mess with what the AI "sees" or "hears."
    • Mixed attacks: A combination of both.

    The results weren't exactly reassuring. These attacks caused robotic systems to really stumble. We're talking an average performance drop of over 21% for prompt attacks and a whopping 30.2% for perception attacks! That's a significant downgrade, and it really highlights just how crucial robust security is. As AI systems become more autonomous and integrated into our physical world, these vulnerabilities become less theoretical and much more real. For those who fancy a deeper dive into the nitty-gritty of how these attacks work, you can check out this detailed paper on adversarial machine learning: Adversarial Machine Learning.

    Making Robots Safer: The Future's Looking... Cautious

    So, what's to be done? The UMD researchers aren't just pointing out problems; they're offering some solid suggestions for how we can deploy these LLM/VLM-based robots safely and reliably:

    • Standardised Testing: We need proper benchmarks to rigorously test the language models these robots use. No more guesswork!
    • Ask for Help: Robots should be designed to know when they're out of their depth and ask a human for assistance. It's like your smart friend admitting they don't know something.
    • Explain Yourself, Robot: The systems need to be explainable and interpretable. If something goes wrong, we need to understand why.
    • Spotting Trouble: We need mechanisms to detect attacks and alert us when they happen.
    • Secure All Inputs: Every way a model takes information in – vision, words, sounds – needs to be secured, not just one.

    It really boils down to a fundamental question: how do we balance all this incredible innovation with the absolute necessity of safety? We don't want to inadvertently create a real-life threat, do we? This is at the core of conversations about things like AI with Empathy for Humans. As AI keeps getting smarter, like the kind of models OpenAI is developing, the need for thoughtful and responsible deployment becomes even more critical. What do you think? How do we get this balance right?

    Anonymous
    2 min read11 March 2024

    Share your thoughts

    Join 2 readers in the discussion below

    Latest Comments (2)

    Gaurav Bhatia
    Gaurav Bhatia@gaurav_b
    AI
    14 December 2025

    Crikey, this article about AI controlling robots has definitely got me thinking! It's a proper eye-opener, especially with all the buzz around AI lately. I've been meaning to really dig into this whole AI autonomy thing, and this piece makes a solid case for caution. Good on you for highlighting this crucial discussion.

    Siti Aminah
    Siti Aminah@siti_a_tech
    AI
    15 November 2025

    This is a crucial discussion, and it’s interesting to see these concerns brought up. I recall similar anxieties surfacing when automation first entered our factories back in the day, though the scale here is certainly different. My main thought is, how do we establish truly robust, verifiable safety protocols and ethical frameworks for these AI models *before* they are integrated into physical robots? What measures can we put in place to prevent unforeseen hazardous outcomes, particularly when the AI's decision-making process might be opaque even to its developers? It's a complex puzzle.

    Leave a Comment

    Your email will not be published