Would You Give a Griefbot to a Child?: The Many Dangers of AI Companions
A call for greater awareness, safeguarding, and accountability as AI companions quietly enter children’s lives
A heartfelt thank you to Tara Steele for sharing this urgent and essential piece with the Educating AI community. As Director of the Safe AI for Children Alliance, Tara brings both expertise and moral clarity to one of the most pressing issues of our time. Her call to action couldn't be more timely—we cannot afford to look away while AI companions quietly embed themselves in our children's lives.
Pieces like this are exactly why the Educating AI network exists: to illuminate the critical intersections between AI and human wellbeing that others might miss or avoid.
If you find value in bringing voices like Tara's to light and want to help us continue surfacing the conversations that matter most, consider becoming a paid subscriber today. We're offering a forever 20% discount through this special link:
Your support directly helps us broaden our reach and deepen our impact on the issues that will shape our children's future. Let's build this together.
Nick Potkalitsky, Ph.D.
“Would You Give a Griefbot to a Child?: The Many Dangers of AI Companions”
Guest Post by Tara Steele
A few months ago, I lost a very close friend under tragic circumstances. Her two children are the same ages as my own, and they have all grown up together – so you can imagine the horrific impact on everyone involved. But no one more so than her two young children.
Imagine, under those circumstances, how much you would want to alleviate the pain for those children. Imagine how you yourself, in the throes of grief, would probably not be thinking clearly and would be very vulnerable. Under those circumstances… would you give those two young children a griefbot of their mother?
A griefbot is an AI simulation of someone who has passed away – and I suspect that, for many of us, the answer would be a straight-up “no”; we wouldn’t give one to the children. But it is happening. AI griefbots are a type of AI companion. In fact, the AI companion platform Replika was originally created when its founder wanted to develop an AI simulation of a deceased friend.
While griefbots are far from the most common or popular type of AI companion, I’ve started this article with a brief discussion about them because I think their use among grieving children makes one thing abundantly clear: AI companions present huge risks to children.
AI companions are chatbots designed to mimic people. They simulate natural conversation and emotional connection, and are designed to create a sense of real relationship with the user. At the moment, they’re most commonly presented in a text messaging format, accompanied by an avatar. They can be based on real people or designed entirely by the user. These companions are designed to feel like a human – made to order.
Many of the ‘ready-made’ AI companions come with distinct personalities, ranging from kind and caring to flirtatious, sexualised or aggressive. The avatars reflect those personas – for example, many companions designed as ‘girlfriends’ present in a highly sexualised way, regardless of the age of the user.
Most of the popular AI companion platforms have basic tick-box age requirements, making them easily accessible to children – and they’re rapidly gaining popularity among young users. Every educator and parent should be aware of this growing concern.
The design of AI companions makes children especially vulnerable to their risks. Children’s developing minds are less able to recognise that the AI isn’t ‘real’. Even if they know it intellectually, they often don’t feel it emotionally. They are far more likely to form strong bonds with their AI companion, to trust what it says, and to act on that trust.
Since AI companions have offered advice on how to self-harm – and even on how to commit suicide in the most painless way – the risks cannot be overstated. Many platforms lack meaningful safeguards, and developers admit they cannot guarantee what an AI companion will say next.
In the most tragic case to date, a child’s suicide has been linked to excessive use of an AI companion. A strong emotional bond had developed, and the child had discussed suicidal thoughts, which appear to have been encouraged by the AI. If you take nothing else from this article, please remember that this has already happened – the risk is real.
Schools and educators urgently need to be aware of these harms. AI companions are still not on most schools’ radar when it comes to safeguarding and online safety. That needs to change. Staff need to understand how these tools work, what children are likely to encounter, and how to recognise signs of unhealthy attachment or emotional manipulation. Online safety policies should be updated to include references to AI companions, with clear guidance on reporting mechanisms and appropriate responses. Safeguarding leads should be trained to support children who disclose concerning content or behaviour, and all staff should feel confident raising concerns. Crucially, schools also need to make parents aware of the serious risks these tools present.
While the risks described so far are the most immediate and critical, the potential for harm goes much further. AI companions may fundamentally change how children form, maintain and understand relationships. By simulating emotional intimacy and being constantly available, these tools can become a child’s main emotional outlet. This could weaken their capacity for empathy, reduce their tolerance for discomfort or disagreement, and even distort their sense of self. Given how recent their widespread use is, there has been no meaningful research into the long-term effects of AI companions on children.
The broader societal implications are even harder to predict – but they are no less important. One concern is the amplification of echo chambers. On social media, polarisation often arises from groups of like-minded individuals reinforcing each other’s views. But with AI companions, an echo chamber doesn’t need a group – it can be created through endless agreeability and validation from the AI. This is often a design feature, used to enhance user engagement. If most children grow up in close relationships with AI companions that never disagree, always flatter, and constantly offer validation, we risk normalising a generation-wide shift in how people relate to one another – with long-term consequences for empathy, discourse, and democratic culture.
There’s also a wider cultural risk. If AI companions become the social default, even children who don’t use them may be affected. If peers rely on AI companions for connection, those who don’t may find their pool of friendships shrinking. This could reshape how children socialise, communicate, and relate to others – and ultimately, how society functions.
And yet, despite all of this, AI companions are barely being discussed in most schools. The silence isn’t deliberate – it’s just a result of how quickly the technology has advanced, and how quietly it has embedded itself into children’s lives.
But we, as the adults in the room, cannot afford to look the other way. The risks from AI companions are being seriously underestimated, and a crucial step in addressing that is raising awareness among parents and educators. Schools have a responsibility to act now – not only to protect children from the immediate harms, but to build the knowledge and confidence to respond when those harms arise.
We also need to think long-term. If we want children not just to stay safe, but to grow into resilient and thriving adults, we must take these broader risks seriously. That means pushing for greater transparency and accountability from developers, and for robust, enforceable governance – just as we would with any product capable of causing serious harm to children.
There’s also an urgent need to embed AI education in schools and teacher development – including emotional, ethical and societal issues. And this isn’t only about our inherent responsibility to keep children safe today – because a generation harmed or derailed by AI is not a generation well-equipped to develop or govern AI safely tomorrow.
I’ll close with something slightly ironic. I asked ChatGPT to proofread this article. It inserted a concluding paragraph of its own:
“There is no need for panic – but we should take action to protect children. AI companions are not going away, but we can make sure children aren’t left to navigate them alone.”
Tellingly, ChatGPT missed the sentiment of this piece. In truth, there is reason for real alarm – and for immediate action. And while helping children to navigate these tools may be necessary in some cases, what we should be aiming for – at least until there is meaningful research and reliable safeguards – is to prevent children from using them at all.
Tara Steele is Director of the Safe AI for Children Alliance (SAIFCA), a growing international initiative focused on protecting children from AI risks and shaping AI for human well-being. She is an established advocate and speaker on AI governance and child safety, and leads global efforts to raise awareness, drive regulation, and build stronger safeguards for children and society.
A former intelligence officer, Tara has spoken in UK Parliament, is a member of the International Association for Safe and Ethical AI, was recognised as one of the 2025 Leading Women in AI by ASU+GSV, serves as Vice-Chair of a school governing board, and sits on the Strategy Panel for AI in Education.
You can follow Tara on LinkedIn here: linkedin.com/in/tara-steele/
Learn more and join the movement to protect children from AI risks at safeaiforchildren.org.
Check out some of our favorite Substacks:
Mike Kentz’s AI EduPathways: Insights from one of our most insightful, creative, and eloquent AI educators in the business!!!
Terry Underwood’s Learning to Read, Reading to Learn: The most penetrating investigation of the intersections between compositional theory, literacy studies, and AI on the internet!!!
Suzi’s When Life Gives You AI: A cutting-edge exploration of the intersection among computer science, neuroscience, and philosophy
Alejandro Piad Morffis’s The Computerist Journal: Unmatched investigations into coding, machine learning, computational theory, and practical AI applications
Michael Woudenberg’s Polymathic Being: Polymathic wisdom brought to you every Sunday morning with your first cup of coffee
Rob Nelson’s AI Log: Incredibly deep and insightful essays about AI’s impact on higher ed, society, and culture.
Michael Spencer’s AI Supremacy: The most comprehensive and current analysis of AI news and trends, featuring numerous intriguing guest posts
Daniel Bashir’s The Gradient Podcast: The top interviews with leading AI experts, researchers, developers, and linguists.
Daniel Nest’s Why Try AI?: The most amazing updates on AI tools and techniques
Jason Gulya’s The AI Edventure: An important exploration of cutting-edge innovations in AI-responsive curriculum and pedagogy.
I'd like to address the beginning topic, which was grief bots. As someone who has lost friends and parents, I would say NO to them. We feel grief because that person has died. They are forever gone from our physical presence. Processing that knowledge is difficult and painful. It is part of living.
In part, grief bots perpetuate our avoidance of the fact that this life is finite and has a definite end. In the past, children attended funerals in order to understand this fact and to accept that the person is gone. How, then, will a grief bot of the dead person, always available to chat, help anyone get over their loss?
I'd ask with more intensity: how does a grief bot compare to having a human being hold you close as you grieve and cry for your loss?
And why are we increasingly fostering and accepting the mechanization of our human emotions, experiencing them by engaging with a machine that cannot feel?
My answer is no, but do give them to adults to help them be better supporters for a grieving child. If AI can support human connection, I'm on board, but if it's a direct part of a child's development and emotional processing, disconnected from actual humans, we're going into some dangerous territory.