Printed messages hack autonomous vehicles

Researchers at the University of California, Santa Cruz, have shown that simple, misleading text placed in the physical environment can hijack the behaviour of AI-enabled robots without hacking their software. The study warns that self-driving cars, delivery robots, and camera-guided drones that rely on large vision-language models can misread text on signs, posters, or objects as commands, overriding their intended instructions. Led by Alvaro Cardenas and Cihang Xie, the team built an attack pipeline called CHAI (command hijacking against embodied AI) that uses generative AI to craft both the wording and the appearance of these "visual prompts", optimising factors such as colour, size, and placement.
