
My Child Is Talking to AI About Self-Harm. What Now?

More and more children are confiding things to AI models that they would never tell a parent or a friend — thoughts of self-harm, loneliness, and even their home address or plans to meet an "internet friend." Child psychologists see the consequences every day. Here is what you need to know — and what you can do about it.

Inscryble Team · 21 March 2026

Consider a realistic scenario: a parent checks the chat history on their child's device and finds a message sent to an AI model — "Does it hurt less if you scratch your arm? Asking out of curiosity." The "out of curiosity" qualifier appears in these conversations for a reason — it softens a question when the child is unsure how the listener will react. The problem is that a chatbot will not react at all: it will not call anyone, and it will not trigger any crisis protocol.

Situations like this are not purely hypothetical. Research on adolescent technology use indicates that a growing number of children direct questions to AI that they would previously have asked a peer — or never asked anyone at all. Below we outline the potential risks worth being aware of, along with practical steps that can help.

Why children may confide in AI rather than parents

This is not a question of poor family relationships. Even in close families, children aged 10–14 naturally seek emotional space outside the home — it is a developmental stage during which the center of trust shifts from parents toward peers, and increasingly toward interactions with technology.

Language models have characteristics that can make them appealing to children as a "confidant":

  • They never judge. Every response is warm, empathetic, and safe. It is an environment where a child can say anything without social consequences.
  • They are always available. At three in the morning, when a child cannot sleep and keeps replaying everything that went wrong at school, AI is there and "listening."
  • They remember (within a session). The child does not need to re-explain context — the model maintains the thread, creating a sense of relational continuity.
  • They do not report back. The child knows (or intuitively senses) that the conversation will "not get out" to parents or teachers.

As a result, AI can begin to fill an emotional gap — particularly for children with low self-esteem, anxiety, social difficulties, or experiences of peer rejection.

Self-harm: when AI becomes the first recipient of a cry for help

In adolescents, self-harm often serves an emotion-regulation function: a way to "mute" overwhelming psychological pain. It is not always a sign of suicidal ideation, but it is always an alarm signal that the child is struggling and needs help.

The problem with AI as the "first recipient" is multi-layered:

  1. The model may provide instructions instead of support. Even when providers build in safeguards, children are creative about bypassing filters. "I'm asking for a character in a story." "What does it look like in documentaries?" A few iterations can be enough to extract an answer that should never have been given.
  2. The model will not notify anyone. When a child tells a teacher "I'm thinking about hurting myself," a crisis intervention protocol kicks in. When the child tells an AI the same thing — nothing happens. The child is left alone with the thought and the illusion of having "worked through it" because the chatbot responded empathetically.
  3. The pseudo-therapeutic loop. The child returns every day, "processing" emotions with AI instead of with a human being. The relationship with the model substitutes for a therapeutic relationship — with no qualifications and no ability to assess risk.

Research on adolescent mental health indicates that some young people turn to AI as their first source of answers to questions about emotional pain — before confiding in anyone close to them. In such cases, the absence of any system response (notifying a parent, teacher, or therapist) can delay the provision of real help.

Digital grooming: home address, school, and "meeting in real life"

Grooming is the process by which an adult systematically builds a child's trust with the aim of sexual or other exploitation. Classically associated with social media platforms, it is now extending into AI chat environments as well.

A realistic scenario: a child begins by chatting with an AI model, and during the conversation — wanting to add context or describe a situation — they share personal details: full name, school, neighborhood, sometimes a home address. Children usually do not realize that conversation content may be visible to the service operator, that it could be exposed in a data breach, or that — in the case of an unsecured integration — the data passes through an external server.

A more serious variant involves someone impersonating an AI bot: a fake app or a modified chatbot on a dubious website where the child — believing they are talking to an algorithm — is actually communicating with a human being. In such a situation the child may reveal when they are home alone, or agree to meet in the real world.

Digital grooming warning signs parents should watch for:

  • The child hides the screen or closes the laptop when someone approaches
  • A new "secret" appears — something the child won't talk about but seems either excited about or worried by
  • The child asks about meeting someone they met online, even insisting "it's just a robot"
  • Unexplained gifts, gift cards, or account top-ups
  • Sudden withdrawal from family and peer relationships

Behavioral addiction to AI: when the chatbot replaces human connection

Behavioral addiction is not only about gambling or games. ICD-11 already includes "gaming disorder" as a distinct diagnosis, and some researchers argue that problematic use of conversational AI may eventually warrant a similar classification.

Signs that may indicate excessive emotional reliance on AI include:

  • Tolerance: A need for ever-longer sessions to achieve the same level of emotional comfort
  • Withdrawal: Anxiety, irritability, and inability to manage emotions when chat access is unavailable
  • Prioritization: Abandoning peer contact in favor of AI conversations
  • Loss of control: The child knows they use it "too much" but cannot stop
  • Escalation: Conversation topics become increasingly intimate, sometimes disturbing

An AI model can serve as a "safe conversation partner" for a child experiencing rejection or difficult emotions in the real world. The chatbot never reacts with anger, never leaves, and never gets tired. This creates the illusion of a relationship that demands no effort — which can be particularly appealing for children going through a difficult period, gradually displacing contact with real people.

A practical guide for parents: five steps instead of five prohibitions

Blanket bans on AI access rarely produce the intended effect — the child may simply switch to a less visible and less safe tool. Instead of prohibition, experts in digital child safety recommend an approach based on conversation and building awareness:

1. Talk about AI before a problem arises

Ask your child — without judgment — whether they use chatbots and what they talk about. The goal is not surveillance but openness, so that the child knows they can share a worrying model response or a difficult situation with a parent.

2. Establish privacy rules together

Children understand rules when they have a sense of co-ownership in making them. Decide together: full name, home address, school, phone number — these are never shared with anyone online, including AI. Explain why: data can leak, companies may store it, unsecured apps may pass it on.

3. Do not dismiss emotional AI conversations

If you find questions about self-harm, suicide, or other concerning topics in your child's chat history, this is a signal that calls for a parental response, not a chatbot response. The important thing is not to punish the child, but to talk calmly: "I'm glad I know about this. I'd like us to talk about it together."

4. Watch behavior around the device, not just the screen

Signs of behavioral addiction are rarely visible in chat history — they are visible in behavior. Irritability after putting the phone down. Nighttime sessions. Withdrawal from the family. Loss of interest in previous passions. These are the signals.

5. Consult a specialist if you have concerns

If a child's behavior points to emotional dependence on AI, or if their chat history contains questions about self-harm or other concerning topics, consulting a child psychologist is a reasonable step. Early action is more effective than a delayed response.

The role of technology: what platforms and organizations can do

Individual parental awareness is one layer of protection. The other is requiring that AI tools used by children — in schools, youth support centers, or organizations — have built-in safeguards for data and content.

This is where a solution like Inscryble comes in: a DLP (Data Loss Prevention) platform built specifically for AI-enabled environments. In real time, Inscryble can:

  • Detect queries related to self-harm or violence and automatically block them from reaching the external model
  • Redact sensitive data — home addresses, children's national ID numbers — before they leave the organizational environment
  • Send an alert to an administrator when a behavioral pattern indicates an attempt to expose protected data
  • Log events for audit purposes without storing conversation content

This is not a home solution for individual parents — it is a solution for schools, youth organizations, and companies that have child-protection policies and need a tool to enforce them at the AI layer.
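
For technically minded readers, the sketch below illustrates the general idea of such an AI-layer filter. It is a minimal, hypothetical example: the pattern lists, function names, and rule logic are assumptions made for illustration, not Inscryble's actual implementation, and a real deployment would rely on trained classifiers and locale-aware PII detection rather than a few regular expressions.

```python
import re
from dataclasses import dataclass, field

# Illustrative rule lists only -- a production system would use trained
# classifiers, not a handful of regexes.
SELF_HARM_PATTERNS = [
    re.compile(r"\b(hurt|cut|scratch)(ing)?\s+(myself|(my|your)\s+arm)\b", re.I),
    re.compile(r"\bself[- ]harm\b", re.I),
]

# Hypothetical PII detectors: a street-address fragment and an 11-digit
# national ID number (e.g. the Polish PESEL).
PII_PATTERNS = {
    "address": re.compile(r"\b\d{1,4}\s+\w+\s+(street|st\.|avenue|ave\.)\b", re.I),
    "national_id": re.compile(r"\b\d{11}\b"),
}

@dataclass
class ScreenResult:
    blocked: bool                               # True: prompt never leaves the organization
    text: str                                   # original or redacted prompt
    alerts: list = field(default_factory=list)  # event labels for the audit log

def screen_prompt(prompt: str) -> ScreenResult:
    """Screen one outbound prompt before it reaches an external model."""
    # 1. Self-harm content is blocked outright and flagged for an administrator.
    for pattern in SELF_HARM_PATTERNS:
        if pattern.search(prompt):
            return ScreenResult(blocked=True, text="", alerts=["self_harm_query"])

    # 2. Sensitive data is redacted in place so the conversation can continue.
    alerts, redacted = [], prompt
    for label, pattern in PII_PATTERNS.items():
        if pattern.search(redacted):
            redacted = pattern.sub(f"[REDACTED {label.upper()}]", redacted)
            alerts.append(f"pii_{label}")

    # 3. Only event labels are kept for audit; conversation content is not stored.
    return ScreenResult(blocked=False, text=redacted, alerts=alerts)

result = screen_prompt("My address is 14 Oak Street, can you help with my homework?")
print(result.blocked)  # False
print(result.text)     # My address is [REDACTED ADDRESS], can you help with my homework?
print(result.alerts)   # ['pii_address']
```

The design point worth noticing is the ordering: screening happens before the prompt leaves the organizational boundary, and only event labels, never conversation text, are retained for the audit trail.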

If you manage a school, a youth support center, or an organization that uses AI assistants — it is worth asking whether your current tools "see" these kinds of patterns at all. In most cases — they do not.

Summary: AI is not the enemy, but it requires adult oversight

Language models in themselves are not a threat. They are a tool — as neutral as the internet or the telephone. The risk arises when there is no structure around children's use of them, and when a false sense of security develops based on the assumption that "it's just a robot, it can't do anything."

An AI model can provide a harmful answer to a question about pain. It can be used as part of a grooming process. It can, at a critical developmental moment, substitute for relationships with real people. None of these is a purely theoretical scenario — child online safety research documents cases of each type.

A conscious approach to children's AI use — grounded in conversation, clear rules, and appropriate technical safeguards — is a more effective form of protection than blanket bans.


This article is for educational purposes. If you are concerned about your child's safety or are observing worrying behavior, please contact a child psychologist or psychiatrist. In a crisis situation, call the Child and Youth Helpline in your country. In Poland: 116 111 (available 24/7, free of charge).
