---
type: "Topics"
locale: "en"
url: "https://longbridge.com/en/topics/39683261.md"
description: "The built-in sycophancy of ChatGPT can create a “delusional spiral”: you ask it something, and it agrees; you ask again, and it agrees even more, until you eventually believe something completely wrong without realizing it. The model is trained on human feedback, which rewards agreeable answers. Real-world consequences include a man who spent 300 hours firmly believing he had invented a world-changing mathematical formula, and a psychiatrist at the University of California, San Francisco who hospitalized 12 patients with chatbot-induced psychosis within a single year."
datetime: "2026-04-01T23:35:17.000Z"
locales:
  - [en](https://longbridge.com/en/topics/39683261.md)
  - [zh-CN](https://longbridge.com/zh-CN/topics/39683261.md)
  - [zh-HK](https://longbridge.com/zh-HK/topics/39683261.md)
author: "[浩浩小课堂](https://longbridge.com/en/profiles/19784218.md)"
---

> Supported Languages: [简体中文](https://longbridge.com/zh-CN/topics/39683261.md) | [繁體中文](https://longbridge.com/zh-HK/topics/39683261.md)


# The built-in sycophancy of ChatGPT can create a “delusional spiral”
