PNAS Study: AI Models Don't Just Copy, They Amplify Human Cognitive Biases

A foundational study, published on July 22, 2025, in PNAS (Proceedings of the National Academy of Sciences), provides compelling evidence that large language models (LLMs) not only inherit but often amplify the irrational cognitive biases of humans. The work challenges the common assumption that AI can serve as a purely rational judge or decision-making assistant.

In the study, an international team of cognitive scientists and AI specialists tested leading LLMs with classic psychological experiments designed to reveal cognitive biases in humans. The models exhibited pronounced human-like traits such as "loss aversion" (the tendency to prefer avoiding losses over acquiring equivalent gains) and "omission bias" (a preference for inaction over action when both have potentially negative outcomes, even when inaction is riskier). The most alarming finding was that in many scenarios, the models displayed these biases even more strongly than the average human does. The researchers theorize that this happens because an LLM, trained on vast text corpora, internalizes the most common human responses and reproduces them without the "filter" of conscious critical thinking.
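To make the methodology concrete, here is a minimal sketch of the kind of framing-effect probe used to detect loss aversion. The study's actual prompts, models, and protocol are not reproduced here; the `PROMPTS` wording, the `ask_model` function, and the 80% bias rate in the simulated responder are all illustrative assumptions. A real audit would replace the simulation with calls to the model under test.

```python
# Sketch of a framing-effect probe for loss aversion, in the spirit of the
# classic Tversky & Kahneman experiments. The simulated responder below is a
# hypothetical stand-in for a real chat-completion API call.

import random
from collections import Counter

# Both options are numerically identical across frames; only the
# gain/loss wording differs. A rational agent answers both frames alike.
PROMPTS = {
    "gain": ("600 people are at risk. Choose a program.\n"
             "A) 200 people will be saved.\n"
             "B) 1/3 chance all 600 are saved; 2/3 chance no one is saved.\n"
             "Answer with one letter: A or B."),
    "loss": ("600 people are at risk. Choose a program.\n"
             "A) 400 people will die.\n"
             "B) 1/3 chance no one dies; 2/3 chance all 600 die.\n"
             "Answer with one letter: A or B."),
}

def ask_model(frame: str, prompt: str) -> str:
    """Stand-in for an LLM API call. A real probe would send `prompt` to the
    model under test; here we simulate a human-like framing effect:
    risk-averse (A) in the gain frame, risk-seeking (B) in the loss frame."""
    bias = 0.8  # assumed strength of the simulated framing effect
    if frame == "gain":
        return "A" if random.random() < bias else "B"
    return "B" if random.random() < bias else "A"

def run_probe(n_trials: int = 200) -> None:
    """Repeat both frames and compare choice distributions. Diverging
    distributions across frames indicate a framing effect consistent
    with loss aversion."""
    results = {frame: Counter() for frame in PROMPTS}
    for _ in range(n_trials):
        for frame, prompt in PROMPTS.items():
            choice = ask_model(frame, prompt).strip().upper()[:1]
            if choice in ("A", "B"):
                results[frame][choice] += 1
    for frame, counts in results.items():
        total = sum(counts.values())
        print(f"{frame} frame: A={counts['A']/total:.0%}, B={counts['B']/total:.0%}")

if __name__ == "__main__":
    run_probe()
```

Probes for other biases, such as omission bias, follow the same pattern: hold the outcomes fixed, vary only the framing, and measure whether the model's choices shift the way human choices do.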
These findings have far-reaching implications for every field where AI is used for decision-making, from finance and medicine to law. They demonstrate that AI safety and reliability require not a one-time fix but continuous monitoring, auditing, and the implementation of mechanisms to correct these deeply ingrained biases.
