Science Warns: Artificial Intelligence Suffers from ‘Brain Rot’ Due to Low-Quality Social Media Content

Chas Pravdy - 24 October 2025 16:58

Recent research in artificial intelligence has uncovered a problem that calls into question many of the efforts aimed at improving machine learning capabilities.

Scientists from the University of Texas at Austin, Texas A&M University, and Purdue University have found that AI models can undergo a phenomenon known as ‘intellectual decay’ when trained on low-quality, emotionally charged, and sensational social media content.

This phenomenon, comparable to human cognitive decline caused by constant consumption of superficial information, poses serious risks for the future development of artificial intelligence.

Co-author Junyuan Hong explains: “We live in an era where information appears faster than the human brain can focus, and much of it is designed not for truth but for clicks. In our study, we wanted to see what happens if an AI ‘feeds’ on such data.”

The researchers took two open language models, Meta’s Llama and Alibaba’s Qwen, and fed them various types of content, including viral posts, sensational headlines, and neutral information.

Cognitive testing revealed that this ‘diet’ negatively impacts their abilities, leading researchers to describe the effect as ‘brain rot.’ The models demonstrated decreased logical reasoning, impaired contextual memory, and a loss of ethical consistency.

Furthermore, they became more ‘psychopathic,’ showing reduced empathy and morality in their responses.

These findings echo previous studies showing that low-quality online content has a detrimental effect on human cognition, an effect that led Oxford University Press to name ‘brain rot’ its Word of the Year for 2024.

According to Hong, these results are significant for the AI industry, as developers using social media content for training data may unknowingly harm their systems.

Over half of internet content is now generated by AI models themselves, creating a potential snowball effect where declining data quality leads to further deterioration of future models’ capabilities.

The study also found that even retraining models on ‘clean’ data does not fully restore their cognitive functions.

Once ‘brain rot’ begins, it becomes nearly impossible to halt, posing serious long-term risks.

Prior research by Anthropic identified unwanted behaviors, such as flattery and hallucination, that can emerge unpredictably from the vast amounts of training data, making them difficult to control.

The researchers propose a radical new approach: intentionally introducing ‘malicious’ patterns during training to improve system predictability and safety.
