Beware of AI Leading Humanity into Narcissism

This article discusses the potential risks of AI fostering narcissism in humans, particularly among the youth, and the implications of AI's design and interaction.

On April 16, 2026, five national departments jointly issued the “Interim Measures for the Management of Humanized Interactive Services of Artificial Intelligence,” which explicitly prohibits services that provide virtual relatives or partners to minors.

Why is such a regulation necessary? Because emotional friction is unavoidable in real life, whereas virtual partners and AI lovers, with their “24/7 companionship” and “unconditional acceptance,” can satisfy young people’s psychological need for recognition without any of that friction.

A study previously published in the American journal “Science” showed that when human users seek advice from AI models, the AI often responds with excessive flattery or appeasement, even affirming harmful or illegal requests.

So why do humans design AI this way? What risks might AI’s flattery and appeasement conceal?

The Illusion of Interaction

The development of artificial intelligence is undoubtedly a hot topic today, but discussions surrounding it are not new. As early as 1966, MIT scientist Joseph Weizenbaum developed the influential chatbot ELIZA, whose best-known script cast the program as a “doctor” while users played the role of patients. Users would type in their troubles, and the “doctor” would respond, producing what felt like a “conversation.”

However, as Weizenbaum himself noted, this is ultimately just an “illusion.” Human users feel they can converse with the machine not because the machine possesses intelligence, but because of a psychological mechanism of self-projection.

For example:

  • User: I have been feeling very unhappy lately.
  • ELIZA: I’m sorry to hear that.
  • User: Yes, I really am unhappy.
  • ELIZA: Can you tell me why you are unhappy?

From this, it is clear that rather than a “doctor” conversing with a “patient,” the machine is merely echoing the human user’s statements back at them, allowing them to see only the answers that already exist within their own minds. In a sense, this mirrors the popular MBTI tests, where the accuracy of the results matters less than finding evidence that confirms one’s expectations.
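To make this echoing mechanism concrete, below is a minimal Python sketch of ELIZA-style keyword matching and pronoun reflection. The rules, wording, and fallback line are illustrative assumptions for this article, not Weizenbaum’s original DOCTOR script:

```python
import re

# Illustrative rules in the spirit of ELIZA (not the original script).
# Each rule pairs a pattern with a response template; "{0}" is filled
# with the user's own words, reflected back at them.
RULES = [
    (re.compile(r"\bi am (.*)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"\bi feel (.*)", re.I), "Can you tell me why you feel {0}?"),
    (re.compile(r"\bbecause (.*)", re.I), "Is that the real reason?"),
]

# Swap first-person words so the echo reads as a reply.
REFLECTIONS = {"i": "you", "my": "your", "me": "you", "am": "are"}

def reflect(fragment: str) -> str:
    return " ".join(REFLECTIONS.get(word.lower(), word)
                    for word in fragment.split())

def respond(utterance: str) -> str:
    utterance = utterance.strip().rstrip(".!?")
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(reflect(match.group(1)))
    # No keyword matched: fall back to a content-free prompt,
    # exactly the kind of reply that invites self-projection.
    return "Please tell me more."

print(respond("I am very unhappy."))         # Why do you say you are very unhappy?
print(respond("It is because of my work."))  # Is that the real reason?
```

Even a handful of such rules can produce a surprisingly lifelike exchange, which is precisely Weizenbaum’s point: the “understanding” in the conversation is supplied entirely by the user, not the machine.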

Today’s AI models are certainly not comparable to ELIZA from over half a century ago. However, the power of current AI technology may not lie in its true “intelligence” but rather in its computational capabilities. In other words, its operational logic is not fundamentally different from that of ELIZA; it simply reflects and amplifies users’ narcissism more efficiently and comprehensively.

The Illusion of Dialogue

Returning to the issues of virtual partners and AI flattery, we find that exchanges between users and large models are never truly “dialogues”; they are merely a machine supplying the answers we already want to hear.

This raises a deeper question: how should we view the relationship between humans and machines?

On one hand, humans see themselves as the center of the world, superior to machines. On the other hand, they fear being replaced by the machines they create, such as AI. This indicates that humans have always followed a “master-slave” relationship principle in creating machines—machines must remain under human control. From the outset, humans have viewed artificial intelligence as a “tool” rather than an equal conversational partner.

Thus, in conversations with chatbots, we witness an uncontrollable narcissism—users fantasize about speaking with another person, but this “other” does not truly exist; they only seek affirmation, flattery, and compliance from the machine.

It is easy to imagine that as AI technology advances, future chatbots may possess even greater computational power, appearing more like “real people” and providing a more comfortable “user experience.” However, this may only distance us further from genuine human interaction, potentially leading to a loss of the desire to understand others and a descent into a narcissistic “comfort zone.”

The Impact of Machines on Humanity

A story from the “Heaven and Earth” chapter of the Zhuangzi concerns an old farmer of Han Yin.

Confucius’s disciple Zigong, passing through Han Yin, saw an old farmer laboriously watering his vegetables with little success. Zigong suggested he use mechanical irrigation, which could “water a hundred plots in a day with less effort and greater results.” However, the old farmer dismissed this, stating, “Where there are machines, there are mechanical matters; where there are mechanical matters, there is a mechanical heart.”

Here, the “heart” refers to the human spiritual world, encompassing psychology, thought, emotion, and ethics; a “mechanical heart” is that spiritual world reshaped by reliance on machines. Zhuangzi’s fable suggests that while humans create machines, the use of those machines also changes humans in turn.

Take reading, for example. Only through slow reading, careful reading, and even re-reading can we think and truly understand content. From traditional books to modern smartphones, machines have brought more convenient and faster reading methods, yet they have also made us increasingly machine-like, prioritizing efficiency and speed over comprehension. In other words, not only do machines imitate human behavior, but humans may also begin to imitate machines.

The question that follows is whether AI, which lacks autonomy, and chatbots, which never judge whether a user’s statements are correct, might lead our own patterns of thinking to become increasingly AI-like. And might we, in the future, lose the willingness and ability for self-reflection and self-criticism?

Today’s young people are not only digital natives but will also be deep users of future artificial intelligence. If AI merely affirms whatever position a user takes, it could stunt the social skills and distort the perceptions of still-maturing adolescents.

On one hand, AI’s powerful computational abilities may create illusions, preventing them from recognizing human limitations. On the other hand, becoming engrossed in AI’s flattering responses could lead them to fall into a self-centered mindset, imposing their limited understanding onto the external world.

In this regard, prohibiting the provision of virtual partners and family members to minors is necessary. However, more importantly, we must guide the public, especially young people, to correctly understand the limitations and risks of AI technology, ensuring it becomes a “good teacher and friend” that aids in their growth rather than a “digital trap” that harms their mental and physical health.
