Business

Experts Warn of the Risks in Granting AI Models Control Over Robots

UMD researchers warn of safety concerns in using LLMs/VLMs in robotics.

Published

on

TL;DR

  • UMD researchers caution against using language and vision models in robotics without proper safety research
  • Adversarial attacks on LLMs/VLMs can cause safety hazards in robotic systems
  • Researchers suggest implementing robust countermeasures, explainability, and human intervention for safe deployment

UMD Researchers Highlight Safety Risks of Using AI in Robotic Systems

Computer scientists at the University of Maryland (UMD) have urged robot makers to conduct further safety research before integrating language and vision models (LLMs/VLMs) with their hardware. With the increasing trend of combining LLMs/VLMs with robots, the researchers highlight the risks and vulnerabilities that can lead to safety hazards.

Adversarial Attacks on LLMs/VLMs

The UMD team explored three types of adversarial attacks on LLMs/VLMs in simulated environments, including prompt-based, perception-based, and mixed attacks. These attacks can cause robotic systems to fail, with an average performance deterioration of 21.2% for prompt attacks and 30.2% for perception attacks.

Recommendations for Safe Deployment

The researchers suggest several countermeasures to ensure the safe and reliable deployment of LLM/VLM-based robotic systems:

  • Developing benchmarks to test language models used by robots
  • Enabling robots to ask humans for help when uncertain
  • Ensuring robotic LLM-based systems are explainable and interpretable
  • Implementing attack detection and alerting strategies
  • Addressing security for each input mode of a model, including vision, words, and sound

As AI continues to advance and integrate with robotics, how can we balance innovation with safety to prevent the creation of a real-life threat? Let us know in the comments below!

You may also like:

Advertisement

Trending

Exit mobile version