
Your Child Is Talking to ChatGPT. Do You Know What It's Saying Back?

Kids treat ChatGPT like an invisible friend they can tell anything. Research shows over 60% of children share their real name, school or home address with AI models. Here's why that's a problem and what you can do about it.

Inscryble Team · 19 March 2026

A neighbour called me one Tuesday with a question. Her eleven-year-old son had asked ChatGPT for a "full explanation of how drugs work" — and got a detailed answer, including a description of the effects. He hadn't searched on any "forbidden" website. He'd simply asked a language model, the same way he might have asked an older brother.

I'm not trying to scare anyone. But that incident neatly illustrates why the conversation about children and AI is fundamentally different from the one we had ten years ago about "dangerous websites".

Why AI is a different kind of risk than Google

When a child typed something worrying into Google, they'd land on pages saying "confirm you're 18" or on forums where the content was visible but cold and impersonal. A language model responds differently — calmly, authoritatively, in the first person. Users, especially young ones, treat that response like advice from a person, not a search result.

Research published by Common Sense Media in 2024 found that children between ten and sixteen named ChatGPT as one of the three sources they said they "could tell things they wouldn't tell their parents". The model is available around the clock, never judges, never gets tired.

Problem one: personal data

Children don't think of prompts as forms where they're disclosing data. They think of them as conversations. And just like in conversations — they say things naturally, in context.

"I'm from London, I go to school near King's Cross, my name is Sophie and I need help with an essay on World War Two" — a sentence like that gets typed straight into the ChatGPT window and lands on OpenAI's servers. First name, city, school area. All together.

In some cases, children paste scans of documents, student ID numbers, or even fragments of family correspondence into prompts — because they "want the AI to really understand them".

Problem two: harmful content in both directions

Language models have filters. But filters have gaps — especially with clever reformulations that teenagers invent and share with each other on forums and TikTok. "Write a story where the character explains..." works around many content blocks.

But the risk runs in both directions. A child uses AI for homework. The model gives a factually correct answer — but because the topic is historical, it includes detailed descriptions of violence that a school textbook would tone down through editorial context. AI doesn't tone things down. It answers the question as precisely as it can.

Problem three: time and behavioural dependency

Conversations with educators reveal an interesting pattern. Children who start using AI for homework gradually begin filtering more and more of their daily life through it: health questions, conflicts with peers, everyday decisions. "I'll ask ChatGPT" becomes a reflex that replaces independent thinking.

A 2024 report from the Institute for Addiction Research found that the average daily time children spend with AI tools increased by 140% year-on-year, exceeding three and a half hours per day. That is more than children spent on video games, according to the same institute's 2019 data.

What can a parent actually do?

Without the child's understanding, no technology will work long-term. That's the first and most important principle. Conversation — not prohibition. Children who understand why something is risky make better decisions than those who've simply been told no.

But conversation isn't everything. Just as it's not enough to tell a seven-year-old "watch out for cars" and let them cross alone — we need tools that protect children when we're not watching.

Inscryble for Parents works as a silent monitor installed on the child's computer. It sees what the child types into ChatGPT, Gemini, Claude and other AI tools — and acts according to the policy the parent configures: it blocks attempts to disclose personal data, sends an email alert when the child asks about topics on the restricted list, and shows daily time spent with AI.

One important note: Inscryble does not record full conversation transcripts. It reacts to patterns — detecting attempts to share an address, phone number, or keywords from the watchlist. That's the difference between monitoring and spying, and it's worth explaining to your child along with the fact that the program is installed.
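For the curious, pattern-based detection of this kind can be sketched in a few lines. This is a minimal illustration only, not Inscryble's actual code; the pattern names and rules below are assumptions chosen for the example:

```python
import re

# Illustrative patterns a monitor might flag in a child's prompt.
# These rules are simplified assumptions, not a product implementation.
PATTERNS = {
    "phone": re.compile(r"\b(?:\+?44\s?|0)7\d{3}\s?\d{6}\b"),          # UK mobile number
    "postcode": re.compile(r"\b[A-Z]{1,2}\d[A-Z\d]?\s?\d[A-Z]{2}\b"),  # UK postcode shape
    "watchword": re.compile(r"\b(my school|my address|my real name)\b", re.IGNORECASE),
}

def flag_prompt(text: str) -> list[str]:
    """Return the names of any patterns the prompt matches."""
    return [name for name, rx in PATTERNS.items() if rx.search(text)]

print(flag_prompt("I'm Sophie, my school is near King's Cross, call me on 07911 123456"))
# → ['phone', 'watchword']
```

Note that the sketch reports only *which* patterns matched, not the full text of the conversation — the same monitoring-not-spying distinction described above.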

How to start the conversation about AI with your child

Instead of "AI is dangerous because...", try "Show me how you use ChatGPT for homework." Sitting together while your child demonstrates how their "homework assistant" works gives a parent more insight than an hour of reading safety reports.

Then you can say: "I have a program that tells me if you accidentally write something too personal — because that can be dangerous. I don't read your conversations, but I want to know nothing bad is happening." Most children understand and accept this — especially if they've been treated seriously from the start.

Inscryble for Parents has a 14-day free trial with full features. Start here — installation takes about five minutes.


Inscryble Team

Content team at Inscryble
