
Psychologists Are Calling for Guardrails Around AI Use for Young People. Here’s What to Watch Out For


Generative AI developers should take steps to ensure their tools don't harm the young people who use them, the American Psychological Association warned in a health advisory Tuesday.

The report, compiled by an advisory panel of psychology experts, called for tech companies to ensure there are boundaries with simulated relationships, to create age-appropriate privacy settings and to encourage healthy uses of AI, among other recommendations. 


The APA has issued similar advisories about technology in the past. Last year, the group recommended that parents limit teens’ exposure to videos produced by social media influencers and gen AI. In 2023, it warned of the harms that could come from social media use among young people. 

“Like social media, AI is neither inherently good nor bad,” APA Chief of Psychology Mitch Prinstein said in a statement. “But we have already seen instances where adolescents developed unhealthy and even dangerous ‘relationships’ with chatbots, for example. Some adolescents may not even know they are interacting with AI, which is why it is crucial that developers put guardrails in place now.”

The meteoric surge of artificial intelligence tools like OpenAI's ChatGPT and Google's Gemini over the last few years has presented new and serious challenges for mental health, especially among younger users. People increasingly talk to chatbots like they would talk to a friend, sharing secrets and relying on them for companionship. While that use can have some positive effects on mental health, it can also be detrimental, experts say, reinforcing harmful behaviors or offering the wrong advice. (Disclosure: Ziff Davis, CNET's parent company, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)

What the APA recommended about AI use

The group called for several measures to ensure adolescents can use AI safely, including limiting access to harmful and false content and protecting the data privacy and likenesses of young users.

One key difference between adult users and younger people is that adults are more likely to question the accuracy and intent of an AI output. A younger person (the report defined adolescents as ages 10 to 25) might not approach the interaction with the appropriate level of skepticism.

Relationships with AI entities like chatbots or the role-playing tool Character.ai might also displace the important real-world, human social relationships people learn to have as they develop. “Early research indicates that strong attachments to AI-generated characters may contribute to struggles with learning social skills and developing emotional connections,” the report said.

People in their teens and early 20s are developing habits and social skills that will carry into adulthood, and changes to how they socialize can have lifelong effects, said Nick Jacobson, an associate professor of biomedical data science and psychiatry at Dartmouth who was not on the panel that produced the report. “Those stages of development can be a template for what happens later,” he said.

The APA report called for developers to create systems that prevent the erosion of human relationships, like reminders that the bot is not a human, alongside regulatory changes to protect the interests of youths. 

Other recommendations included distinguishing tools intended for adults from those used by children, for example by making age-appropriate settings the default and making designs less persuasive. Systems should also have human oversight and undergo intensive testing to ensure they are safe.

Schools and policymakers should prioritize education around AI literacy and how to use the tools responsibly, the APA said. That should include discussions of how to evaluate AI outputs for bias and inaccurate information. “This education must equip young people with the knowledge and skills to understand what AI is, how it works, its potential benefits and limitations, privacy concerns around personal data, and the risks of overreliance,” the report said.

Identifying safe and unsafe AI use

The report shows psychologists grappling with the uncertainties of how a new and fast-growing technology will affect the mental health of those most vulnerable to potential developmental harms, Jacobson said. 

“The nuances of how [AI] affects social development are really broad,” he told me. “This is a new technology that is probably potentially as big in terms of its impact on human development as the internet.”

AI tools can be helpful for mental health and they can be harmful, Jacobson said. He and other researchers at Dartmouth recently released a study of an AI chatbot that showed promise in providing therapy, but it was specifically designed to follow therapeutic practices and was closely monitored. More general AI tools, he said, can provide incorrect information or encourage harmful behaviors. He pointed to recent issues with sycophancy in a ChatGPT model, which OpenAI eventually rolled back.

“Sometimes these tools connect in ways that can feel very validating, but sometimes they can act in ways that can be very harmful,” he said. 

Jacobson said it’s important for scientists to continue to research the psychological impacts of AI use and to educate the public on what they learn. 

“The pace of the field is moving so fast, and we need some room for science to catch up,” he said.

The APA offered suggestions for what parents can do to ensure teens are using AI safely, including explaining how AI works, encouraging human-to-human interactions, stressing the potential inaccuracy of health information and reviewing privacy settings. 


