The work highlights a critical vulnerability: even when an algorithm constructs logically sound plans, this does not guarantee they are safe once transferred to a robot chassis. Analyzing 12,279 tasks, the researchers showed that foundation models are prone to generating dangerous or unpredictable action sequences in real-world environments. Against the backdrop of China's humanoid-robot marathons (the April 19 event) and the rollout of AGIBOT platforms, the study reads as a stop signal for the industry: without built-in physical safety guardrails, commercializing Embodied AI risks man-made disasters.
Source: ETH Zurich / arXiv
Tags: Robotics, AI Safety, Embodied AI, DESPITE, Research