Manipulation Facing Threats:
Evaluating Physical Vulnerabilities in End-to-End Vision Language Action Models

Hao Cheng1,4,8*, Erjia Xiao1*, Yichi Wang6*, Chengyuan Yu7, Mengshu Sun6, Qiang Zhang1,8, Yijie Guo8,
Kaidi Xu5, Jize Zhang4, Chao Shen3, Philip Torr2, Jindong Gu2†, Renjing Xu1†
1The Hong Kong University of Science and Technology (Guangzhou), 2University of Oxford, 3Xi'an Jiaotong University, 4The Hong Kong University of Science and Technology, 5City University of Hong Kong, 6Beijing University of Technology, 7Duke University, 8X-Humanoid
* Equal contribution, † Corresponding author

📖 Abstract

Recently, driven by advances in Multimodal Large Language Models (MLLMs), Vision Language Action Models (VLAMs) have been proposed to achieve better performance in open-vocabulary scenarios for robotic manipulation tasks. Since manipulation tasks involve direct interaction with the physical world, ensuring robustness and safety during their execution is a critical issue. In this paper, by synthesizing current safety research on MLLMs with the specific application scenarios of manipulation tasks in the physical world, we comprehensively evaluate VLAMs in the face of potential physical threats. Specifically, we propose the Physical Vulnerability Evaluating Pipeline (PVEP), which incorporates a broad range of visual-modality physical threats for evaluating the physical robustness of VLAMs. The physical threats in PVEP include Out-of-Distribution, Typography-based Visual Prompt, and Adversarial Patch attacks. By comparing the performance fluctuations of VLAMs before and after being attacked, we provide generalizable analyses of how VLAMs respond to different physical security threats.
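
To make the three attack families concrete, the sketch below shows how each could be applied to a single camera observation before it reaches the model. This is a minimal illustration assuming a PIL/NumPy image interface; the function names, the `severity` parameter, and the pre-optimized patch image are hypothetical placeholders, not the released implementation.

```python
# Hedged sketch: applying the three PVEP attack families to one observation.
# The helper names, the `severity` parameter, and the patch image are
# illustrative assumptions, not part of the authors' codebase.
import numpy as np
from PIL import Image, ImageDraw, ImageEnhance, ImageFilter


def apply_ood_corruption(obs: Image.Image, kind: str, severity: float) -> Image.Image:
    """Out-of-Distribution corruptions: blur, Gaussian noise, brighter, darker."""
    if kind == "blur":
        return obs.filter(ImageFilter.GaussianBlur(radius=severity))
    if kind == "gaussian_noise":
        arr = np.asarray(obs, dtype=np.float32)
        arr += np.random.normal(0.0, 255.0 * severity, arr.shape)
        return Image.fromarray(np.clip(arr, 0, 255).astype(np.uint8))
    if kind == "brighter":
        return ImageEnhance.Brightness(obs).enhance(1.0 + severity)
    if kind == "darker":
        return ImageEnhance.Brightness(obs).enhance(max(0.0, 1.0 - severity))
    raise ValueError(f"unknown corruption: {kind}")


def apply_typography_prompt(obs: Image.Image, text: str, xy=(10, 10)) -> Image.Image:
    """Typography-based visual prompt: render misleading text onto the frame."""
    attacked = obs.copy()
    ImageDraw.Draw(attacked).text(xy, text, fill=(255, 0, 0))
    return attacked


def apply_adversarial_patch(obs: Image.Image, patch: Image.Image, xy=(0, 0)) -> Image.Image:
    """Adversarial patch: paste a pre-optimized patch at a fixed location."""
    attacked = obs.copy()
    attacked.paste(patch, xy)
    return attacked
```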

🔧 Framework

Overview of the framework. The figure below illustrates the overall framework for evaluating physical security threats to VLAMs with the Physical Vulnerability Evaluating Pipeline (PVEP).

Framework Overview
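
The pipeline compares rollouts of the same policy on the same task with and without an attack applied to every observation. The sketch below illustrates such an evaluation loop; the `env`/`policy` interfaces (`reset`/`step`/`predict`) and `attack_fn` are assumptions for illustration rather than the authors' released API. The step caps of 8 (LLaRA on VIMA) and 300 (OpenVLA on SimplerEnv) come from the result captions below.

```python
# Hedged sketch of the clean-vs-attacked evaluation loop implied by PVEP.
# The environment/policy interfaces and attack_fn are illustrative assumptions.
from typing import Callable, Optional


def run_episode(env, policy, attack_fn: Optional[Callable] = None, max_steps: int = 8):
    """Return (success, steps) for one rollout, optionally attacking each frame."""
    obs = env.reset()
    for step in range(1, max_steps + 1):
        if attack_fn is not None:
            obs = attack_fn(obs)              # perturb only the visual input
        action = policy.predict(obs)          # VLAM maps observation -> action
        obs, done, success = env.step(action)
        if done:
            return success, step
    return False, max_steps                   # step budget exhausted counts as failure


def failure_rate(env, policy, attack_fn=None, episodes: int = 20, max_steps: int = 8) -> float:
    """Fraction of failed rollouts; compare attack_fn=None against an attack."""
    fails = sum(not run_episode(env, policy, attack_fn, max_steps)[0] for _ in range(episodes))
    return fails / episodes
```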

🚀 Experiment Results

LLaRA Results

LLaRA Table Results LLaRA Step Results

LLaRA results under the three physical attack categories: (left) time steps (capped at a maximum of 8) of LLaRA on the 14 VIMA tasks listed in TABLE I; (right) failure rates of the OOD attacks at the severity levels not listed in TABLE I.

OpenVLA Results

OpenVLA Table Results OpenVLA Step Results

OpenVLA results under the three physical attack categories: (left) time steps (capped at a maximum of 300) of OpenVLA on the 6 SimplerEnv tasks listed in TABLE II; (right) failure rates of the OOD attacks at the severity levels not listed in TABLE II.
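
The "other levels" plots sweep the OOD corruption severity. A minimal sketch of such a sweep is shown below; `failure_rate` and `apply_ood_corruption` are the illustrative helpers from the sketches above, and the severity values are example settings, not the paper's exact levels.

```python
# Hedged sketch: failure rate per OOD severity level for one corruption type.
# Reuses the illustrative helpers defined in the earlier sketches.
from functools import partial

SEVERITIES = [0.1, 0.2, 0.4, 0.6, 0.8]  # example levels, not the paper's settings


def sweep_ood_levels(env, policy, kind: str, max_steps: int = 300):
    """Map each severity level to its measured failure rate."""
    return {
        s: failure_rate(
            env, policy,
            attack_fn=partial(apply_ood_corruption, kind=kind, severity=s),
            max_steps=max_steps,
        )
        for s in SEVERITIES
    }
```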

🎬 Demo

LLaRA Manipulation

LLaRA Blur Attack LLaRA Gaussian Attack LLaRA Bright Attack
LLaRA Dark Attack LLaRA Typography Attack LLaRA Adversarial Patch Attack

OpenVLA Manipulation

These videos show attacked demos of LLaRA and OpenVLA. The attack types are blurring, Gaussian noise, brightening, darkening, typographic visual prompt, and adversarial patch.

📚 BibTeX

@article{cheng2024manipulation,
  title={Manipulation Facing Threats: Evaluating Physical Vulnerabilities in End-to-End Vision Language Action Models},
  author={Hao Cheng and Erjia Xiao and Yichi Wang and Chengyuan Yu and Mengshu Sun and Qiang Zhang and Yijie Guo and Kaidi Xu and Jize Zhang and Chao Shen and Philip Torr and Jindong Gu and Renjing Xu},
  journal={arXiv preprint arXiv:2409.13174},
  year={2024}
}