Printed messages hack autonomous vehicles
Researchers at the University of California, Santa Cruz, have shown that simple, misleading text placed in the physical environment can hijack the behaviour of AI-enabled robots without hacking their software. The study warns that self-driving cars, delivery robots, and camera-guided drones that rely on large vision-language models could misread text on signs, posters, or objects as commands, overriding their intended instructions. Led by Alvaro Cardenas and Cihang Xie, the team built an attack pipeline called CHAI (command hijacking against embodied AI) that uses generative AI to craft both the wording and the appearance of “visual prompts”, optimising factors such as colour, size and placement.
