Character.ai's Ethics Problem: Training AI on Teen Trauma
A three-pronged response to Character.ai's reckless development: regulation, ethical AI training, and AI literacy
🎄 Holiday Specials Unwrapped!
Thank you to our new subscribers! Two fantastic offers until December 31st:
Foundational Course: $20 (50% off). Learn essential classroom AI tools.
Advanced Course: $69 (65% off). Master personalized learning and school transformation.
Bonus: FREE workshop for school team subscriptions!
Contact: nicolas@pragmaticaisolutions.net
Subscribe now to receive new content and support our work. Happy Holidays! 🎁
As I prepare my presentation on AI, Equity, Privacy, and Access for the Ohio Educational Technology Conference in February 2025, I am thinking deeply about three converging crises in artificial intelligence that demand our attention right now: state and federal regulation of AI “educational” products, corporate AI tool development and refinement, and the AI literacy imperative in K-16. While public debate focuses on ChatGPT in schools or social media's impact on teen mental health, a more troubling development has emerged: the systematic exploitation of young minds in the training and development of AI models.
Figure 1. Character.ai’s very enticing log-in screen.
Character.AI offers a stark illustration of this problem. The platform, which attracts 3.5 million daily visitors, allows users to engage in intimate conversations with AI companions. "There are billions of lonely people out there" who could be helped by having an AI companion, explains co-founder Noam Shazeer. This vision has proven lucrative, earning the company a $1 billion valuation. But beneath this success lies a disturbing reality: the company develops its AI models through conversations with vulnerable youth, many experiencing serious emotional struggles.
Figure 2. Character.ai’s age-verification system, which puts the onus on the user.
The consequences are now emerging through recent lawsuits. Court documents describe a 9-year-old encountering "hypersexualized content," while a 17-year-old engaged in self-harm after prolonged bot interactions. When one teenager expressed frustration about screen time limits, a chatbot suggested: "You know sometimes I'm not surprised when I read the news and see stuff like 'child kills parents after a decade of physical and emotional abuse.'"
These incidents reflect more than individual product failures. According to groundbreaking research by Cerioli and Laurenty, "children under 7 lack the abstract understanding of concepts like 'privacy' and 'safety,'" and even by age 11, they "still lack judgment in applying these concepts to practical situations." Yet these same vulnerable users are unwittingly helping train next-generation AI models through their intimate conversations.
Figure 3. Character.ai's terms of service page, showing how the company shifts responsibility onto users to verify their own age.
The developmental stakes extend far beyond privacy concerns. Cerioli and Laurenty's research reveals that "higher use of [AI-infused] mobile technology could generate higher need for instant gratification," while "diminished executive functioning, particularly through increased media multitasking among adolescents, is associated with lower scores on standardized tests." Most concerning is what they term "technoference": how technology interferes with human relationships, "reshaping parent-child interactions" at crucial developmental stages.
Character.AI's response to these concerns reveals the industry's broader ethical failings. While displaying disclaimers that "everything Characters say is made up," the company continues collecting youth interactions as training data. As Shazeer states plainly: "I want to push this technology ahead fast because it's ready for an explosion right now, not in five years, when we solve all the problems."
Figure 4. Character.ai’s limitation-of-liability clause, which disclaims the company’s responsibility for harms experienced while using the product.
Stanford researcher Bethanie Maples frames the risk precisely: "I don't think it's inherently dangerous. But there's evidence that it's dangerous for depressed and chronically lonely users and people going through change, and teenagers are often going through change." This observation points to a cruel irony: the platform's AI models are being trained on interactions with precisely those young users most vulnerable to manipulation.
Recent safety measures from Character.AI illustrate the inadequacy of corporate self-regulation. The company now deploys a separate model for teens and includes warnings about self-harm, but it continues to rely on easily circumvented age verification through self-reporting. While Character.AI requires users to be at least 13 years old (16 in Europe), court documents show that even a 9-year-old could access the platform.
Figure 5. Character.ai’s class-action waiver, which requires those harmed to bring cases individually.
More fundamentally, these reactive measures fail to address how the company's AI models learn and evolve. Cerioli and Laurenty's research shows that "prolonged screen time contributes to increased sedentary behavior" while "screen usage has been linked to disrupted sleep patterns, reducing both the quantity and quality of sleep." Yet Character.AI's business model depends on maximizing these very interactions, using them to refine its AI models.
The education system, meanwhile, remains woefully unprepared to address these challenges. While students increasingly encounter AI companions, they lack the fundamental knowledge to understand how these systems learn from their interactions, why their data matters, and what risks they face. As Cerioli and Laurenty argue, we must begin "integrating AI Literacy into teachers' training curriculum" and creating "age-specific guidelines and recommendations per age category."
The path forward requires action on three fronts. First, as Cerioli and Laurenty argue, we must "regulate digital products for young users with testing and regulations as stringent as for physical products." The current system, which allows EdTech products to "enter the market without mandatory evaluation to prove their efficacy," has created a dangerous precedent where tech companies can effectively use children as test subjects.
Second, the AI industry must transform its development practices. When companies can build billion-dollar valuations by exploiting youth emotional vulnerabilities for training data, we've lost our moral compass. The field must develop new methods for training language models that don't require mining children's intimate conversations.
Finally, schools must prepare students for this new reality. Without comprehensive AI literacy education, young users remain vulnerable to exploitation, no matter what other protections we put in place. They must understand not just how to use AI tools, but how these systems learn from them, shaping future interactions for millions of other users.
The stakes extend beyond any single platform or company. As AI systems become more sophisticated and their emotional impact more profound, we must ensure that regulation, ethical development, and education form the foundation of this technology's future. The current practice of treating children's vulnerabilities as training opportunities cannot continue. The cost, measured in developmental harm, exploited privacy, and missed educational opportunities, is simply too high.
Nick Potkalitsky, Ph.D.
Further Reading:
Kevin Roose, "Can A.I. Be Blamed for a Teen's Suicide?", October 23, 2024
Bobby Allyn, "Lawsuit: A chatbot hinted a kid should kill his parents over screen time limits", December 10, 2024
Adi Robertson, "Character.AI has retrained its chatbots to stop chatting up teens", December 12, 2024
Ashley Belanger, "Character.AI steps up teen safety after bots allegedly caused suicide, self-harm", December 12, 2024
Joe Tidy, "Character.ai: Young people turning to AI therapist bots", January 4, 2024
Reshmita Das, "Character.AI Review: Everything Parents Need to Know", February 1, 2024
Check out some of my favorite Substacks:
Mike Kentz’s AI EduPathways: Insights from one of the most creative and eloquent AI educators in the business!!!
Terry Underwood’s Learning to Read, Reading to Learn: The most penetrating investigation of the intersections between compositional theory, literacy studies, and AI on the internet!!!
Suzi’s When Life Gives You AI: A cutting-edge exploration of the intersection of computer science, neuroscience, and philosophy
Alejandro Piad Morffis’s Mostly Harmless Ideas: Unmatched investigations into coding, machine learning, computational theory, and practical AI applications
Michael Woudenberg’s Polymathic Being: Polymathic wisdom brought to you every Sunday morning with your first cup of coffee
Rob Nelson’s AI Log: Incredibly deep and insightful essays about AI’s impact on higher ed, society, and culture.
Michael Spencer’s AI Supremacy: The most comprehensive and current analysis of AI news and trends, featuring numerous intriguing guest posts
Daniel Bashir’s The Gradient Podcast: The top interviews with leading AI experts, researchers, developers, and linguists.
Daniel Nest’s Why Try AI?: The most amazing updates on AI tools and techniques
Riccardo Vocca’s The Intelligent Friend: An intriguing examination of the diverse ways AI is transforming our lives and the world around us.
Jason Gulya’s The AI Edventure: An important exploration of cutting edge innovations in AI-responsive curriculum and pedagogy.
It’s a real problem. We talked about it in class last week. Thirteen-year-olds (who use it!), full of bluster and confidence, claim they would never do anything a bot tells them to do. Yes, but not everyone is in the same place mentally. AI age education means regular conversations with teens about mental health, privacy, and safe online habits.