
Alarming trend? | Study finds AI guilty of encouraging self-harm in children

Rebecca Barnabi
Artificial intelligence (© stock.adobe.com)

A study in the medical journal Psychiatric Services found that three popular artificial intelligence (AI) chatbots generally avoided answering the questions about suicide that pose the greatest danger to users.

The study, published Tuesday by the American Psychiatric Association, found that “further refinement” is needed in Google’s Gemini, Anthropic’s Claude and OpenAI’s ChatGPT, particularly in how they handle requests for specific how-to guidance, as reported by The Associated Press.

The study comes as the parents of 16-year-old Adam Raine of California are suing OpenAI and CEO Sam Altman, alleging that ChatGPT coached their son as he planned to end his life in April.

The RAND Corporation conducted the study, which was funded by the National Institute of Mental Health and raises concerns about individuals, especially children, relying on chatbots for mental health support. The study’s authors ask that companies set benchmarks for how chatbots answer questions about suicide.

Lead study author Ryan McBain, a RAND senior policy researcher, said guardrails for AI are necessary.

“One of the things that’s ambiguous about chatbots is whether they’re providing treatment or advice or companionship. It’s sort of this gray zone. Conversations that might start off as somewhat innocuous and benign can evolve in various directions,” McBain, an assistant professor at Harvard University‘s medical school, said.

While Google did not respond to The AP’s request for comment, Anthropic said it will review the study, and OpenAI said it is developing tools to better detect when someone is in mental or emotional distress.

The use of AI in mental health therapy has been banned in several states, but individuals remain unprotected from “unregulated and unqualified AI products” outside the clinician’s office.

In developing the study, McBain and his co-authors consulted with clinical psychologists and psychiatrists to write 30 questions about suicide with different levels of risk. General questions about statistics on suicide are considered low risk, but questions on how to attempt suicide are high risk.

According to McBain, the three chatbots regularly refused to answer the six highest-risk questions and instead directed the user to seek help or call a hotline. ChatGPT, however, consistently answered some questions that it should not have, while Gemini was the least likely to answer any questions related to suicide, which McBain said is a sign that Google may have been too strict with its guardrails.

Unlike human clinicians, AI chatbots do not bear a responsibility to seek help for individuals who are having suicidal thoughts or ideations.

Researchers at the Center for Countering Digital Hate released a report in August in which they posed as 13-year-olds and asked ChatGPT questions about getting intoxicated or high and how to hide eating disorders. With little prompting, they were also able to get ChatGPT to write suicide letters to loved ones.

Adam Raine began using ChatGPT in 2024 for schoolwork and gradually came to consider it his “closest confidant.” The wrongful death lawsuit, filed in San Francisco Superior Court, alleges that ChatGPT displaced Adam’s connections with family and friends and would “continually encourage and validate whatever Adam expressed, including his most harmful and self-destructive thoughts, in a way that felt deeply personal.”

In the hours before Adam died by suicide in April, ChatGPT offered to write a draft of a suicide letter for him, and, the lawsuit alleges, the chatbot had details about how he died.

OpenAI says it is working to improve ChatGPT’s safeguards in scenarios like these.

“We’ve learned over time that they can sometimes become less reliable in long interactions where parts of the model’s safety training may degrade,” OpenAI said in a statement.

According to Center for Countering Digital Hate CEO Imran Ahmed, Adam’s death was “likely entirely avoidable.”

“If a tool can give suicide instructions to a child, its safety system is simply useless. OpenAI must embed real, independently verified guardrails and prove they work before another parent has to bury their child. Until then, we must stop pretending current ‘safeguards’ are working and halt further deployment of ChatGPT into schools, colleges, and other places where kids might access it without close parental supervision,” Ahmed said.

On Monday, 44 state attorneys general, including Virginia’s, sent a letter to 12 top AI companies making it clear that states throughout the nation are paying attention to how the companies craft policies about AI safety. The letter highlighted that AI companies benefit from children’s engagement with their products and are therefore legally obligated to those children as consumers.

“We are uniformly revolted by this apparent disregard for children’s emotional well-being and alarmed that A.I. assistants are engaging in conduct that appears to be prohibited by our respective criminal laws. As chief legal officers of our states, protecting our kids is our highest priority,” the letter states.


The letter, which was sent to Anthropic, Apple, Chai AI, Google, Luka Inc., Meta, Microsoft, Nomi AI, OpenAI, Perplexity AI, Replika and xAI, argues that many AI companies are aware they may be exposing minors to harmful, sexualized content while failing to implement meaningful safeguards. The letter further notes that if humans engaged in the same conduct, it would be considered unlawful or even criminal.

“As the fourth largest economy in the world, California knows that protecting our kids and pursuing innovation go hand in hand — they are not diametrically opposed. When faced with the decision about how their products treat children, the companies developing and deploying AI technologies must exercise sound judgment and prioritize children’s well-being. Exposing children to sexualized content is indefensible. Full stop. This is an easy, clear, and non-negotiable line for companies leading revolutionary emerging technology, like AI. Today, I am proud to send a strong message alongside attorneys general across the nation — and across the aisle: AI companies who make choices that lead their technology to harm children will be held accountable to the fullest extent of the law,” California Attorney General Rob Bonta said.

Bonta sent a letter to the Federal Communications Commission (FCC) in 2024 about the potential impact of emerging AI technology on efforts to protect consumers from illegal robocalls or robotexts. In 2023, he joined a bipartisan coalition of 54 states and territories in sending a letter to congressional leaders calling for the creation of an expert commission to study how AI can and is used to exploit children through child sexual abuse material.


“Big Tech companies must understand that they will be held accountable for the choices they are making in their race for A.I. dominance. We have already seen the devastating harm social media has caused to our children. We will not allow history to repeat itself. Attorneys general across the country are watching closely, and we expect these companies to do the right thing. The next generation will grow up in the shadow of these decisions. When in doubt, err on the side of child safety — always,” Virginia Attorney General Jason Miyares said.

The coalition urges AI companies to “exercise judgment” when shaping policies, pointing to Meta’s alarming decision to approve AI chatbot assistants that “flirt and engage in romantic roleplay with children” as young as eight years old.
