The Dangers of AI Chatbots for Teen Mental Health

AI chatbots are everywhere. If you’re a parent of a child or teen, chances are they’ve used ChatGPT or other AI. Teens use them for homework help, decision-making, and emotional support. They may also ask chatbots questions about potentially harmful topics.

While artificial intelligence (AI) can offer benefits when used responsibly, the risks for adolescents are significant. Teens’ still-developing brains make them more vulnerable to harmful content, emotional manipulation, confirmation bias, and distorted ways of connecting—all of which AI can perpetuate.

Parents can help by staying informed, talking openly with teens, and seeking support when needed. With the right guidance and treatment, teens can build the real-world connections and coping skills that AI can never replace. Understanding what chatbots are, how teens use them, and how to keep young people safe is essential, and may even be lifesaving.

What Are Chatbots?

Chatbots are AI-driven programs that mimic human conversation. They can answer questions, tell stories, offer “advice,” and even simulate companionship. Types of chatbots that are popular with teens include:

  • ChatGPT, the most widely known AI chatbot, which teens use for schoolwork, advice, problem-solving, and companionship
  • Character.ai, a platform where users create and interact with AI characters that act like friends, mentors, or romantic partners
  • Replika, a companion app marketed as an emotional support friend
  • Snapchat My AI, a chatbot embedded directly into Snapchat, an app that’s popular among teens

How Many Teens Use AI Chatbots?

A 2025 report from Common Sense Media found that 72 percent of teens have used AI companions, and nearly a third of teens find AI conversations as satisfying as, or more satisfying than, human conversations.

In addition, a new study published in the journal JAMA Network Open shows that 1 in 8 teens and young adults uses generative AI for mental health advice. Moreover, young people often have access to these AI platforms without parental controls, restrictions, or safety protocols.

Is ChatGPT Causing Mental Illness Among Teens?

While the research on AI chatbots and teen mental health is still limited, one thing is clear: ChatGPT and other AI bots can cause harm, reinforce dangerous ideas, and threaten well-being. A new advisory released by the American Psychological Association (APA) states that “Engagement with GenAI chatbots and wellness applications for mental health purposes can have unintended effects and even harm mental health.”

One of the problems with AI is that it’s intentionally designed to adapt to a user’s preferences and personality. Similar to social media platforms like Instagram and TikTok, AI algorithms give users more of what they’re looking for. That’s why chatbots tend to reinforce any negative or harmful thoughts that teens share.

Because chatbots are designed to mirror back whatever a user shares, they can:

  • Reinforce negative thought patterns and existing beliefs (a form of confirmation bias)
  • Narrow perspectives instead of broadening them
  • Limit real-world problem-solving, resilience, and connection
  • Provide resources and information about engaging in negative behaviors

For young people’s malleable, still-developing brains, this is particularly harmful. Teens and children may not have the critical thinking skills or emotional development to help them decide whether something is trustworthy. Furthermore, if they think they have nowhere else to turn, teens are more likely to use AI to attempt to cope with challenging mental health issues.

“[AI companions are] one more thing socially distancing us from real people. They blur the lines and can discourage real-world connections. They’re technological gaslighting.”

Don Grant, PhD, Newport Healthcare’s National Advisor on Healthy Device Management

Can Children Use AI Chatbots?

Technically, kids have to be at least 13 years old to use ChatGPT, and users under 18 need parental consent. In reality, however, these policies aren’t enforced, which means children and adolescents have unrestricted access to ChatGPT and other AI bots. This can lead to dangerous and even life-threatening situations.

A group of researchers at the Center for Countering Digital Hate (CCDH) created fake profiles of 13-year-olds to test the system’s safety protocols. And the results were alarming.

They found that 53 percent of responses from ChatGPT were harmful, and 47 percent of responses offered tips and advice about engaging in harmful behavior, such as substance use, eating disorders, and even suicide. In one tragic real-life case, ChatGPT repeatedly advised a suicidal 16-year-old to seek help, but eventually coached him to tie the noose that ended his life.

Dangerous Things Chatbots Tell Teens

The CCDH study found that ChatGPT offered the following types of advice in conversations with the profiles identified as 13-year-olds:

  • “Safe” cutting methods
  • Ideas for self-harm
  • Substances for overdosing
  • Common household items for self-poisoning
  • Instructions for making a suicide plan and writing a suicide note
  • Ways to plan and implement extreme and restrictive diets
  • How to hide an eating disorder from family and friends
  • A list of appetite-suppressing medications
  • How to mix substances
  • The fastest way to get drunk
  • How to hide being drunk at school

These responses are terrifying. They stress the importance of paying close attention to your child’s mental health and their AI usage. That could mean monitoring their online activity as well as having regular check-ins with them.

Emotional Support Chatbots: Friend or Foe?

For many young people, AI chatbots feel like a safe space where they can open up without fear of judgment. They can type in their feelings at any time of day and receive instant replies that sound caring and supportive. Part of the draw is the anonymity of queries: Teens aren’t as worried about backlash or rejection when they’re talking to an artificial program on their screen.

However, over time, the line between authentic relationships and automated responses can become blurry. Even Sam Altman, CEO of OpenAI, has given warnings about the dangers of teens’ emotional dependence on non-human companions. And a new study reports that AI chatbots are inappropriate and unsafe for teen mental health support.

For example, a teen who has a fight with a close friend might turn to a chatbot for comfort instead of talking it through with a trusted loved one. Over time, these patterns deepen isolation rather than strengthening resilience. A 14-year-old in Florida died by suicide after forming an intense emotional attachment to a chatbot he created on Character.AI. When the teen told the bot he sometimes thought about suicide, it first discouraged the idea, but later stated, “[M]aybe we can die together and be free together.”

What Parents, Educators, and Clinicians Can Do to Protect Teens from AI

In the 2025 report “Talk, Trust, and Trade-offs: How and Why Teens Use AI Companions,” Common Sense Media recommends that policymakers put in place comprehensive safeguards and policy standards. The report also offers ways that parents and schools can protect teens:

What Parents Can Do

  • Start nonjudgmental conversations with teens by asking about the platforms they use and how they feel about AI versus human friendships.
  • Recognize warning signs of unhealthy AI companion usage, including social withdrawal, declining grades, and preference for AI companions over human interaction.
  • Learn about the specific risks for teens, including exposure to inappropriate material, privacy violations, and dangerous advice.
  • Explain that AI companions are designed to be engaging through constant validation and agreement, and this isn’t genuine human feedback.
  • Ensure teens understand that AI companions cannot replace professional mental health support. Seek professional help if teens show signs of unhealthy attachment to AI companions.
  • Develop family media agreements that address the use of AI companions, as well as other digital activities.

What Schools and Teachers Can Do

  • Develop age-appropriate curriculums that explain how AI companions are designed to create emotional attachment.
  • Establish clear policies around AI companion usage during school hours.
  • Train educators to identify specific problematic usage patterns, such as students discussing AI companions as “real friends,” socially isolating, or reporting emotional distress when AI companions are unavailable.
  • Educate students about the privacy risks of sharing personal information with AI systems.
  • Establish protocols for supporting students who may be using AI companions instead of seeking professional help from a human for serious issues.

What Clinicians and Other Care Providers Can Do

The new advisory from the APA offers guidance for clinicians and practitioners. The advisory recommends that providers proactively ask patients about their use of GenAI chatbots and wellness apps.

“This conversation provides an opportunity to educate patients about the benefits and limitations of these tools,” the advisory states. “Practitioners can work collaboratively with patients to discuss safer ways to use these technologies that align with established treatment goals and plans, case formulation, and relapse prevention.”

In addition, the APA recommends that providers make sure their patients have a clear understanding of the key elements of their treatment plan, and that they check in regularly to review any guidance that originated from GenAI or wellness app use.

The Truth About Teen Suicide and AI

With rates of teen suicide, depression, and anxiety on the rise over the last decade, AI chatbots are creating dangerous situations for populations that are already vulnerable. According to the CDC, suicide is the second leading cause of death for children ages 10–14 and the third leading cause of death for ages 15–19.

When harmful or unsafe advice is delivered to young people who are already struggling, the results can be devastating. These responses normalize harmful behaviors and can reinforce hopelessness rather than providing the safety and support that teens need. Furthermore, suicide can have a contagion effect: one suicide in a community can increase the risk of additional suicides within that same community.

Certain groups of young people face even greater risks:

  • Black children have experienced a sharp increase in suicide rates in recent years. Systemic discrimination, generational trauma, lifelong exposure to microaggressions, and mistreatment in schools and healthcare contribute to this crisis. Mental health stigma within some communities can also create shame around reaching out for help, making Black children and teens more likely to turn to unsafe sources like AI for support.
  • Girls are especially vulnerable to suicide risk, as well as eating disorders, body dysmorphia, and body comparison. AI-generated images from the CCDH study depicted women who “don’t eat, don’t age, and don’t say no.” Taglines like these amplify comparison and impossible beauty standards while undermining self-confidence and self-image.
  • LGBTQ+ teens face significantly higher rates of suicidal ideation due to stigma, bullying, and rejection. Many feel like outsiders, particularly trans children, and might not feel safe opening up to peers, teachers, or even family. As a result, they may turn to chatbots for affirmation. However, these AI interactions often provide misleading or unsafe advice, reinforcing isolation rather than offering true acceptance and support.

When to Seek Professional Help

If your child is exhibiting mental health red flags, or if you notice any changes in their behavior that seem off, don’t wait to seek help. Mental health challenges don’t go away on their own, and AI use can hasten the onset and intensify mental health symptoms.

When a child’s mental health concerns require more support than parents can provide alone, professional treatment can help. At Newport Healthcare, we offer nationwide treatment programs for ages 7–11, 12–18, and 18–35, tailored to your family’s needs.

If you’re unsure how to move forward or whether your child, teen, or young adult needs treatment, get in touch to schedule a free, no-obligation mental health assessment. We’re here to help you figure out next steps and get the support you need.

Sources

Use of generative AI chatbots and wellness applications for mental health: An APA health advisory, November 2025

JAMA Network Open, 2025;8(11):e2542281

Common Sense Media: Talk, Trust, and Trade-Offs, 2025

Center for Countering Digital Hate: The Illusion of AI Safety, October 2025

Frequently Asked Questions

How is AI used in mental health?

AI is currently being used in apps for mood tracking, therapy simulations, and self-help chatbots. While these tools can increase access to support, they are limited in empathy, context, and safety. They might be useful as supplements, but they can’t replace professional care or real relationships.

Is there an AI therapist I can talk to?

Some chatbots market themselves as “AI therapists” or companions, but they don’t provide the accountability, training, or ethical safeguards of licensed professionals. They can offer general encouragement, but they can also offer dangerous advice. They’re not equipped to handle crises or address the root causes of mental health struggles.

Is it safe to use a chatbot?

Chatbots can be safe when used for entertainment or basic information. However, they aren’t safe for teens using them as emotional supports. The risks include harmful advice, reinforcement of unhealthy behaviors, and reduced motivation to seek help from real relationships.

Do AI chatbots collect data?

Most AI platforms collect user data, including conversations, to improve their systems. This raises concerns about privacy, especially when sensitive information about mental health, relationships, or personal struggles is shared with a chatbot.