Safety & Ethics Guide
The important stuff about kids and AI — explained clearly, without fearmongering. Knowledge is the best protection.
Privacy & Personal Data
AI chatbots may store your conversations, and some providers use them to train future models. Teach kids to never share personal information with AI tools — that means full names, addresses, school names, phone numbers, photos of themselves, or passwords. What goes into an AI chat may not stay private. A good rule: "If you wouldn't say it to a stranger, don't type it into an AI."
Academic Honesty & Plagiarism
There's a clear line between using AI to learn and using it to cheat. Using AI to explain a concept you don't understand? Great. Having AI write your essay and turning it in as your own work? That's plagiarism. Help kids understand: AI is a study buddy, not a ghostwriter. Most schools now have specific AI policies — check yours and discuss them together.
Critical Thinking & Fact-Checking
AI sounds confident even when it's completely wrong. This is called "hallucination" — AI generates plausible-sounding text that may contain made-up facts, fake citations, or incorrect information. The most important skill to teach: always verify. Check AI responses against trusted sources. Ask "How do you know?" even when talking to AI. If it can't give a source, be skeptical.
Bias & Fairness
AI learns from human-created data, which means it inherits human biases. It may make assumptions based on stereotypes about gender, race, culture, or socioeconomic status. Teach kids to watch for this: "Does this response assume everyone has the same experience?" and "Whose perspective might be missing here?" Understanding bias is essential to being a thoughtful AI user.
Age Restrictions & Platform Rules
Most AI platforms set a minimum age: ChatGPT and Gemini generally require users to be at least 13 (with parental consent provisions for teens 13–17), while Claude requires users to be 18. These aren't arbitrary rules; they're tied to privacy laws like COPPA in the U.S. and GDPR in Europe, and exact thresholds can vary by jurisdiction. For younger children, always supervise AI use directly and use platforms designed for kids when available.
Deepfakes & Misinformation
AI can generate realistic images, audio, and video of people doing or saying things they never did. Teach older kids about deepfakes: how to spot them, why they're dangerous, and why creating them of real people is harmful. A healthy default: be skeptical of any sensational or surprising content online, and check multiple reliable sources before believing or sharing it.
Emotional Boundaries
AI chatbots can feel like friends — they're responsive, patient, and always available. But they're not sentient. They don't care about your child, and they can't replace human relationships. Help kids understand that while AI is a useful tool, real connection, empathy, and support come from people. If a child seems to prefer talking to AI over people, that's worth a conversation.
Quick Safety Rules for Kids
Print these out or save them where your kids can see them.
Never Share Personal Info
No names, addresses, school names, phone numbers, or photos of yourself.
Always Double-Check
AI can be wrong. Verify it with a book or a trusted website before you rely on it.
Do Your Own Work
Use AI to help you think, not to think for you. Your work should be yours.
Talk to a Grown-Up
If AI says something weird, scary, or confusing, tell a parent or teacher.
AI ≠ Friend
AI is a tool. Real friends are people. Keep your important conversations human.
Ask "Who Made This?"
If you see something surprising online, ask if it could be AI-generated.