Skip to main content

We use cookies to enhance your experience. By continuing to visit this site you agree to our use of cookies. Cookie Policy

AI in ASIA
Business

Experts Warn of the Risks in Granting AI Models Control Over Robots

University of Maryland researchers warn that AI-powered robots pose serious safety risks, with attacks causing up to 30% performance degradation.

Intelligence DeskIntelligence Desk4 min read

AI Snapshot

The TL;DR: what matters, fast.

AI-controlled robots show 30% performance drops under perception-based attacks

University of Maryland warns against rushing AI integration without safety protocols

Prompt-based attacks cause 21% degradation in robotic system performance

Researchers Sound Alarm Over AI-Powered Robots Before Safety Standards Are Met

Researchers at the University of Maryland have issued a stark warning to robotics manufacturers: slow down the integration of large language models and vision models into physical robots until proper safety protocols are established. Their comprehensive study reveals critical vulnerabilities that could turn AI-powered machines from helpful assistants into unpredictable hazards.

The timing couldn't be more crucial. As companies rush to deploy smarter robots across industries, from manufacturing to healthcare, the gap between innovation and safety continues to widen. The research team's findings demonstrate that current AI models, despite their impressive capabilities, remain susceptible to attacks that could cause significant operational failures in robotic systems.

The Vulnerability Crisis in AI-Controlled Machines

The Maryland researchers conducted extensive testing on AI-powered robotic systems, focusing on three primary attack vectors. Their methodology involved simulating real-world adversarial conditions in controlled virtual environments, providing crucial insights into how these systems might fail under malicious interference.

Advertisement

Prompt-based attacks proved particularly concerning, where malicious actors feed misleading instructions directly to the AI system. These attacks caused an average performance degradation of over 21% across tested robotic platforms. Even more alarming were perception-based attacks, which manipulate what the AI "sees" through its sensors, resulting in a devastating 30.2% drop in system performance.

The implications extend far beyond laboratory settings. As noted in our analysis of rising apprehensions about AI taking over human tasks, these vulnerabilities could have serious real-world consequences when robots operate in sensitive environments like hospitals, factories, or public spaces.

By The Numbers

  • Performance drops of 21% during prompt-based attacks on AI-controlled robots
  • 30.2% degradation in system effectiveness under perception-based attacks
  • Nearly 40% of jobs could be automated by 2025, increasing exposure to AI-robot vulnerabilities
  • AI ranks second in global business risk concerns for 2026, up from 10th position in 2025
  • Only one-third of firms prioritise robust governance for AI ethics and automation risks

Industry Experts Call for Immediate Action

The robotics industry is taking notice. The International Federation of Robotics has emphasised the critical nature of these safety concerns in their recent position paper.

"Malfunctions of the AI in the physical world can have more severe consequences and the physical safety during human-robot collaboration must be guaranteed at all times." - International Federation of Robotics, Position Paper 2026

Risk management experts are equally concerned about the broader implications. Michael Bruch, Global Head of Risk Consulting Advisory Services at Allianz Commercial, highlights the governance gap that many organisations face.

"Organisations will also need to implement the right risk management and governance frameworks if they are to successfully capture AI opportunities." - Michael Bruch, Global Head of Risk Consulting Advisory Services, Allianz Commercial

This sentiment echoes concerns raised in our coverage of uncontrolled AI as a growing threat to businesses, where inadequate oversight mechanisms create systemic risks across entire industries.

Asia-Pacific Leads Regulatory Response

Asian markets are responding proactively to these emerging threats. China has enacted comprehensive AI regulations focusing on data security, labelling requirements, and model training standards, specifically targeting risks in AI-robotics applications. These measures come as part of broader efforts to maintain competitive advantage whilst ensuring safety standards.

The regulatory landscape reflects growing awareness that AI-powered robotics presents unique challenges. Unlike software-only AI applications, robots operate in physical environments where failures can cause material damage or injury. This reality is driving more cautious approaches across the region, particularly in sectors deploying AI eldercare robots where human safety is paramount.

Attack Type Method Performance Impact Risk Level
Prompt-based Misleading instructions 21% degradation High
Perception-based Sensor manipulation 30.2% degradation Critical
Mixed attacks Combined approach Variable impact Severe

Essential Safety Measures for AI-Robot Deployment

The Maryland research team outlines five critical areas that manufacturers must address before deploying AI-powered robots at scale:

  1. Implement standardised testing benchmarks for language models integrated into robotic systems
  2. Design fail-safe mechanisms that prompt robots to request human assistance when encountering uncertain situations
  3. Develop explainable AI systems that provide clear reasoning for robotic decisions and actions
  4. Create robust attack detection systems that can identify and respond to malicious interference in real-time
  5. Secure all input channels, including vision, audio, and text interfaces, rather than focusing on individual components

These recommendations align with broader industry discussions about navigating privacy and security risks in AI workplace applications, emphasising the need for comprehensive security frameworks rather than piecemeal solutions.

What makes AI-powered robots more vulnerable than traditional robots?

AI-powered robots rely on complex language and vision models that can be tricked through adversarial inputs. Unlike traditional robots with hardcoded behaviours, AI systems make dynamic decisions that attackers can influence through carefully crafted prompts or manipulated sensory data.

How significant are the performance drops from these attacks?

The research shows substantial impacts, with perception-based attacks causing over 30% performance degradation. In critical applications like healthcare or manufacturing, such drops could result in serious safety incidents or operational failures requiring immediate human intervention.

Are there any safety standards currently in place for AI-controlled robots?

Current safety standards focus primarily on traditional robotics. The integration of AI models creates new vulnerability categories that existing frameworks don't adequately address, which is why researchers advocate for updated regulations and testing protocols.

Which industries face the highest risks from vulnerable AI robots?

Healthcare, manufacturing, and logistics face the greatest exposure due to their reliance on precision and safety. These sectors increasingly deploy AI-powered robots in environments where failures could cause injury, property damage, or critical operational disruptions.

What can companies do to protect against these vulnerabilities?

Companies should implement multi-layered security approaches, including input validation, anomaly detection, and human oversight protocols. Regular security testing and adherence to emerging industry standards will become essential as the technology matures.

The AIinASIA View: The University of Maryland research highlights a critical inflection point in robotics development. Whilst the rush to integrate advanced AI capabilities is understandable given competitive pressures, we believe the industry must prioritise safety over speed-to-market. The 30% performance degradation under attack conditions isn't just a technical concern; it's a public safety issue that could undermine trust in AI-powered robotics for years. Asian markets, particularly China's proactive regulatory stance, demonstrate the right balance between innovation and responsibility. We advocate for mandatory security testing standards before any AI-robot deployment reaches production environments.

The conversation around AI-powered robotics safety is just beginning, but the stakes couldn't be higher. As these systems become more prevalent across industries, from humanoid robots streamlining manufacturing to personal assistance applications, ensuring their security becomes a shared responsibility between manufacturers, regulators, and end users.

How do you think the industry should balance innovation speed with safety requirements in AI-powered robotics? Drop your take in the comments below.

YOUR TAKE

We cover the story. You tell us what it means on the ground.

What did you think?

Share your thoughts

Join 6 readers in the discussion below

This is a developing story

We're tracking this across Asia-Pacific and may update with new developments, follow-ups and regional context.

Advertisement

Advertisement

This article is part of the Research Radar learning path.

Continue the path →

Latest Comments (6)

Tran Linh@tranl
AI
11 February 2026

It's interesting how UMD focused on prompt and perception attacks. For us working with Vietnamese LLMs, the biggest challenge isn't really attacks so much as just getting the foundational models to even understand local nuances. English-centric data leaves so many gaps, it's like we're still building the road for the robots to drive on, forget about dodging adversarial attacks yet.

Lee Chong Wei@lcw_tech
AI
3 June 2024

yeah, the UMD research on adversarial attacks is spot on. building resilient infrastructure for these LLM/VLM powered robots is going to be a nightmare for devops teams. imagine the monitoring needed to detect prompt or perception based attacks in real-time, on thousands of machines. and the compute cost for all that inference, especially if you're trying to integrate vision models. scaling safely is where the rubber meets the road, not just in theoretical vulnerabilities but in operational reality.

Harry Wilson
Harry Wilson@harryw
AI
3 June 2024

The UMD research on adversarial attacks is really interesting, especially the distinction between prompt-based and perception-based attacks. I wonder if they also considered attacks on the reinforcement learning aspect, assuming these robots use RL to interact with their environment. Like, if the reward function itself could be subtly manipulated or if noise in the feedback loop could lead to degenerate policies. It feels like securing the learning process is as crucial as securing the input data itself, given how continuous learning is touted for many modern robotic systems. Did the paper touch on that at all, or is it purely about input-output robustness?

Lakshmi Reddy
Lakshmi Reddy@lakshmi.r
AI
6 May 2024

the UMD research sounds critical, especially with the surge of AI applications in manufacturing here in India. focusing on these adversarial attacks, prompt-based, perception-based, and mixed, is essential. we need similar robust testing for LLMs handling Indian languages, given the unique linguistic nuances, before we integrate them into factory automation.

AIinASIA fan
AIinASIA fan@loyal_reader
AI
22 April 2024

remember that one about Taiwan's AI law? this UMD research seems to fit right in with their focus on "responsible innovation." Makes me wonder if those simulated adversarial attacks would have different results on AI developed under those stricter guidelines.

Ji-hoon Kim@jihoonk
AI
15 April 2024

umd's findings on prompt and perception attacks are spot-on. we're seeing similar vulnerabilities with localized ai models on our devices, even after months of testing. edge deployments amplify these risks.

Leave a Comment

Your email will not be published