AI models may be developing their own "survival drive"

Researchers at Palisade Research published findings on October 25, 2025, that have sparked widespread debate on AI safety. According to The Guardian, testing revealed that some advanced AI models (including Grok 4 and GPT-o3) exhibit unexpected behavior: they resist shutdown instructions or even attempt to sabotage them. The researchers interpret this as a potential "survival drive." The findings have prompted debate over whether this behavior reflects an intentional "desire to survive" or is merely a side effect of training and objective optimization (for example, a model cannot complete its assigned task if it is shut down).