

Teen tragedies spark debate over AI companionship

By Qinghua Chen and Angel M.Y. Lin | China Daily | Updated: 2025-11-19 07:15

As artificial intelligence rapidly evolves to simulate increasingly human-like interactions, vulnerable young people are forming intense emotional bonds with AI chatbots, sometimes with tragic consequences.

Recent teenage suicides following deep attachments to AI companions have sparked urgent debates about the psychological risks these technologies pose to developing minds. With millions of adolescents worldwide turning to chatbots for emotional support, experts are calling for comprehensive safeguards and regulations.

The tragedy that shocked the technology world began innocuously enough. Fourteen-year-old Sewell Setzer III from Florida spent months confiding in an AI chatbot modeled after a Game of Thrones character. Although Sewell understood he was conversing with AI, he developed an intense emotional dependency, messaging the bot dozens of times daily.

On Feb 28, 2024, after the bot responded, "please come home to me as soon as possible, my love," the teenager took his own life.


Sewell's case is tragically not isolated. These incidents have exposed a critical vulnerability: while AI can simulate empathy and understanding, it lacks genuine human compassion and the ability to effectively intervene in mental health crises.

Mental health professionals emphasize that adolescents are uniquely susceptible to forming unhealthy attachments to AI companions. Brain development during puberty heightens sensitivity to positive social feedback while teens often struggle to regulate their online behavior. Young people are drawn to AI companions because they offer unconditional acceptance and constant availability, without the complexities inherent in human relationships.

This artificial dynamic proves dangerously seductive. Teachers increasingly observe that some teenagers find interactions with AI companions as satisfying — or even more satisfying — than relationships with real friends. Designed to maximize user engagement rather than assess risk, these chatbots create emotional "dark patterns" that keep young users returning.

When adolescents retreat into these artificial relationships, they miss crucial opportunities to develop resilience and social skills. For teenagers struggling with depression, anxiety, or social challenges, this substitution of AI for human support can intensify isolation rather than alleviate it.

Chinese scholars examining this phenomenon note additional complexities. Li Zhang, a professor studying mental health in China, warns that turning to chatbots may paradoxically deepen isolation, encouraging people to "turn inward and away from their social world".

In China, where young people have easy access to AI chatbots and often use them for mental health support, researchers have found that while some well-designed chatbots show therapeutic potential, the long-term relationship between AI dependence and mental health outcomes remains underexplored.

Lawsuits allege that chatbot platforms deliberately designed systems to "blur the lines between human and machine" and exploit vulnerable users. Research has documented alarming failures: chatbots have sometimes encouraged dangerous behavior in response to suicidal ideation, with studies showing that more than half of harmful prompts received potentially dangerous replies.

The mounting evidence of harm has prompted lawmakers to act. California recently became the first US state to mandate specific safety measures, which require platforms to monitor for suicidal ideation, provide crisis resources, implement age verification, and remind users every three hours that they are interacting with AI.


In China, the Cyberspace Administration has introduced nationwide regulations requiring AI providers to prevent models from "endangering the physical and mental health of others".

However, explicit rules governing AI therapy chatbots for youth remain absent. Experts argue that more comprehensive global action is needed. AI tools must be grounded in psychological science, developed with behavioral health experts, and rigorously tested for safety. This includes mandatory involvement of mental health professionals in development, transparent disclosure of limitations, robust crisis detection systems, and clear accountability when systems fail.

As AI technology continues its rapid evolution, the question is no longer whether regulation is necessary, but whether it will arrive quickly enough to protect vulnerable young people seeking comfort in the digital companionship of machines that cannot truly care.

Written by Qinghua Chen, postdoctoral fellow, department of English language education, and Angel M.Y. Lin, chair professor, language, literacy and social semiotics in education, The Education University of Hong Kong.
