Drawing the line: AI's effect on youth mental health, education
In the midst of the artificial intelligence boom, questions are being raised about the technology’s ethics and its impact on young people.
The most popular AI platform, ChatGPT, came under scrutiny this summer after its maker, OpenAI, and CEO Sam Altman were sued by the parents of 16-year-old Adam Raine. The parents allege the chatbot assisted in their son’s death by suicide, including suggesting self-harm methods, offering to write Raine’s suicide note and advising him to hide his struggles from family, per CNN.
According to OpenAI, ChatGPT had more than 700 million weekly active users as of September 2025, roughly double the population of the United States.
“When it comes to things like mental health or psychology, we have increased evidence that AI tends to act in ways that are inconsistent with what a trained expert in the field would do,” said Dr. Jennifer Blossom, associate professor of psychology at UMaine Orono and principal investigator for the school’s Clinical Child and Adolescent Psychology (C-CAP) Lab.
She explained that, unlike a professional who looks at the context of a situation to make a comprehensive assessment, AI can only work with the information it's given, so its knowledge is limited when replying to a user prompt. AI is also quick to “problem solve,” whereas a therapist will focus on support and collaborative engagement.
“An important piece that AI misses is that there can be a problematic behavior that occurs because of a negative emotion. A therapist will validate the emotion, but not the behavior, because if the behavior is unhelpful or harmful, we don't want to reinforce that. AI cannot make that distinction,” she said.
While AI is still too new for there to be substantive research into its psychological effects, worries about it mirror historical anxieties seen with other emerging technologies, such as the advent of cable TV, the internet and, most recently, social media.
Much has been written about social media’s connection to youth depression and anxiety, but Blossom said these conversations, like those involving AI, are more nuanced. For marginalized youth living in rural communities in Maine, online communities can be a saving grace.
“Their access to social media is how they build community and hope for the future if they're in a physical setting that doesn't have that support,” she said.
However, Blossom does see a difference between AI and previous technologies: it is mostly unregulated while also being highly interactive and engaging, designed to draw people in.
So, where does the nuance of AI come in? Local classrooms may hold the answer.
The school response
Like many schools nationwide, Boothbay Region High School (BRHS) is navigating how to reasonably manage AI use among students. The school's technology team presented guidelines to the school committee on how teachers and students should approach Generative Artificial Intelligence (GAI). The committee approved those guidelines last March.
At BRHS, teachers can employ AI tools to help develop lesson plans and teaching aids, or to adjust the reading difficulty of resources to meet student needs. Students can use AI as a jumping-off point to develop ideas or prompts. Per the guidelines, GAI is intended to enhance student learning, as well as “complement and support Staff roles, not replace them.”
“We wanted to make sure that kids had some exposure to AI in a measured way, so that they would not be behind the times when they got to (college),” said BRHS Technology Integrator Stacey Gauthier. This includes educating students about the limitations and pitfalls of AI-generated content.
Blossom said that, while there are no studies yet on AI’s cognitive development effects specifically, data shows relying on AI for problem-solving or brainstorming can erode a person’s natural ability to do so.
This problem is on the mind of local educators, as Gauthier explained: “Teachers are pretty guarded about how much they want kids to lean on AI because they want them to develop good, solid skills on their own.”
BRHS also tightly controls which AI programs students are allowed to use. According to Alternative Organizational Structure (AOS) 98 Director of Technology Abby Manahan, ChatGPT is blocked both on school-owned devices and on the school network, meaning the site is barred even on a personal device connected to that network. The school also does not use any programs owned by OpenAI.
Manahan said a main reason for this is that most large AI language models are not compliant with federal student data privacy guidelines, which the AOS 98 school system must follow to receive funding. The school is also part of the Education Cooperative’s Student Data Privacy Alliance, which crowdsources legal expertise in reviewing terms of use, so it can avoid software that collects and sells students’ personally identifiable information.
But school isn't just for learning; it's a place where students can connect and build the social skills that will follow them into adulthood. Social interaction is another topic intertwined with worries about AI, with recent studies showing a growing number of people forming platonic and romantic bonds with chatbots. In a survey of 2,000 users by AI chatbot company Joi AI, 80% of Gen Zers said they would marry an AI, and 83% said they can form a deep emotional bond with AI, per Forbes.
In some ways, this is an offshoot of how people turn to online friendships rather than in-person connections, Blossom said.
"If somebody is going to AI for those types of relationships, that likely means that they have a need that they’re not getting otherwise. (It’s important to meet) somebody in a place of compassion and support rather than judgment ... Dismissing isn’t really getting at the heart of the issue.”
Blossom also noted that it benefits AI companies to foster these types of relationships, both to keep users coming back and to train their programs for future users.
Meanwhile, in-person socialization is the name of the game at BRHS. Beyond most AI platforms being inaccessible, the school adopted a no-phones policy for the 2025-2026 school year. Each morning, students place their devices (phones, earbuds, smartwatches, etc.) in a Yondr pouch, where they are magnetically locked until the end of the day.
BRHS Principal Tricia Campbell said the process has been smoother than expected, and the policy has been a success, with more conversations filling the hallways and lunchrooms.
While she admitted it may be a “far reach,” she feels the social muscles students are exercising by going without their phones during the school day are bleeding over into their private lives.
“I think this was one of our most successful homecoming weeks in years. (The kids') positive energy, their collaboration, their support of one another, and just general celebration, class to class has been just really inspiring to me,” she said.
The future
AI is an ever-evolving landscape, but that doesn't mean parents are helpless against it. Regulation is an important step toward addressing AI’s mental health and environmental implications for young people, Blossom said. In the meantime, existing resources on managing and supporting healthy technology use also apply to AI: parents should monitor their child’s online activity, have open conversations, and model healthy technology use themselves.
Gauthier echoed that advice, adding, “The kids may surprise us in what they decide, but they need to be involved in these conversations because (AI is) going to impact them for longer and more significantly than for some of us who are much older.”

