AI Safety for Students: A 45-Minute Teacher PD You Can Run This Week

Artificial Intelligence (AI) is already part of students’ everyday lives, often without them realizing it. When a voice assistant answers a question, when a game character “chats,” or when YouTube recommends the perfect next video, AI is working behind the scenes. The goal of teaching AI safety isn’t to scare students away from technology. It’s to help them use it wisely, spot problems early, and protect themselves in a world where “real” and “computer-generated” can look almost identical.

The good news: this doesn’t have to become a massive new initiative. Schools can launch AI safety through a short, practical teacher PD built around discussion, shared expectations, and a ready-to-use student mini-lesson. Think one meeting, clear staff alignment, and teachers leaving with language and scenarios they can use the next day.

Why run a teacher PD on AI safety?

A teacher PD works best when it’s framed as solving real classroom challenges teachers are already seeing:

  • Awareness: Students use AI daily but may not recognize it or understand what it’s doing. When teachers align on simple explanations, students gain clarity and control.

  • Privacy protection: Many student safety issues come down to oversharing. Teachers need shared definitions and shared language so students hear the same guidance across classes.

  • Critical thinking: AI can produce answers that sound confident but are wrong or misleading. Staff alignment helps students build a consistent “pause and verify” habit.

The PD goal: align on what to say and what to do

This PD helps teachers:

  • Identify where AI is showing up for students in your school

  • Agree on a few non-negotiable safety messages

  • Rehearse how to respond to realistic student situations

  • Leave with a short mini-lesson plan and a simple exit ticket

It’s less about learning every detail of AI and more about building a shared playbook for student safety.

A 30–45 minute PD format that works

1) Start with teacher reality

Kick off with two prompts:

  • “Where are students already interacting with AI?”

  • “What’s one moment you’ve seen students trust a tool too quickly?”

Capture examples (chatbots, homework helpers, image tools, video recommendations, gaming features, filters, “assistants” inside apps). This list becomes your “why now.”

2) Why students may treat AI like a person

Teachers already know chatbots aren’t friends. The PD value here is naming why students may still experience them that way and agreeing on consistent language to reset the assumption without shaming students.

Even when students know it’s a tool, AI can feel relational because it’s designed to:

  • sound warm and personal

  • respond instantly, anytime

  • mirror emotions and language

  • appear to remember preferences

  • use names, avatars, and personalities

  • encourage continued conversation

This can be especially “sticky” for younger students or students seeking attention, validation, or support.

Rather than listing every tool, name categories teachers recognize: character-style chat apps, social media chat assistants, general homework chatbots, AI “companions,” game-based bots, and writing/study assistants.

Shared staff message: AI can be conversational and comforting, but it is not a relationship and shouldn’t be treated as a trusted confidant—especially for personal details, problems, or decisions.

Agree on 1–2 shared teacher lines, such as:

  • “It sounds like a person, but it’s a tool. Tools don’t keep secrets.”

  • “If it’s personal, private, or a big decision, we go to a trusted adult—not a chatbot.”

3) Privacy focus: personal identifying information

This is where staff alignment matters most. Define personal identifying information as anything that can identify a student directly—or help someone figure out who they are.

Examples to teach explicitly:

  • full name, home address, phone number, email

  • school name (especially with city/grade/teacher/team/schedule details)

  • birthday or age

  • usernames/gamertags connected to other accounts

  • photos/videos of faces

  • exact location details (neighborhood, bus stop, sports field, “I’m at…”)

  • family names or pickup routines

  • passwords, login codes, verification codes

Then align on one simple student rule: The Billboard Test
“Would I be okay with total strangers knowing this?” If no, don’t share it—even if an app or AI asks.

4) App permissions and digital footprints

Students often think privacy is only about what they type. Teachers can help students understand how apps collect information through permissions:

  • camera access

  • microphone access

  • location services

Teach one quick question students can use every time:
“Why would this app need this permission?”
If it doesn’t make sense, deny it and ask a trusted adult.

Pair that with the idea of a digital footprint: what students post, search, click, and allow can stick around longer than they expect, even if they delete it later.

5) Misinformation and deepfakes: what students see and how to verify

Students don’t need a long lecture. They need a simple, repeatable routine, especially when content feels shocking, urgent, or “too perfect.”

Name the kinds of AI misinformation students actually see:

  • fake “breaking news” posts and official-looking screenshots

  • deepfake video/voice (announcements, celebrity clips, prank audio)

  • AI images presented as real photos (“at our school,” disasters, events)

  • confident-but-wrong homework help (fake quotes/citations, wrong steps)

  • “health/safety” claims that sound convincing but aren’t accurate

  • giveaways/free rewards scams with polished AI language

  • fake school rumors (schedule changes, closures, “new rules”)

  • AI-generated content used to bully or frame someone (fake screenshots/images)

Then align staff on one shared verification routine: Pause, Source, Confirm

  • Pause: If it’s shocking, urgent, emotional, or “too perfect,” stop before sharing.

  • Source: Where did it come from? Who posted it? Is there an original link?

  • Confirm (use 2 checks): cross-check trusted sources, check date/context, look for manipulation clues, reverse image search with adult support, or ask a trusted adult.

Adopt quick classroom norms:

  • Two-source rule: If you can’t confirm it with two reliable sources, don’t repost.

  • School rumor rule: If it’s about school, check the official school channel first.

  • Student script: “I’m not sure this is real. I’m going to verify before I share.”

6) The “double-check habit”: AI can be confidently wrong

Even when content isn’t a deepfake, AI can still be wrong. Align on one consistent message: AI is a starting point, not a final answer.

Reinforce a simple norm:

  • verify with a trusted adult, reliable website, or book

  • slow down if it sounds extreme, emotional, urgent, or too perfect

7) Practice two scenarios (the most important part)

Pick two of the scenarios below and have teachers write the safest student response in one sentence using shared language:

  • posting a video with location clues

  • an app requesting mic/location for no clear reason

  • a shocking viral image students want to repost

  • a chatbot asking “Where do you live/go to school?”

  • a confident AI answer contradicting a trusted source

This is where consistency gets built quickly.

8) Close with immediate classroom action

Before teachers leave, decide:

  • the date everyone will teach a short mini-lesson (within 10 school days)

  • the delivery method (whole group or asynchronous)

  • a quick exit ticket:

    • “One new thing I learned…”

    • “One question I still have…”

    • Optional: “Would you share this? Why or why not?”

At the next team meeting, teachers bring a few anonymous exit tickets and look for patterns: identifying information awareness, Billboard Test use, permission decisions, verify-before-sharing habits, and how students describe AI. That follow-up is what turns a one-time PD into real student behavior change.

Ready-to-Teach Resources

Click here for the Gamma slide show of this PD, and click here to download a PDF notetaker to go along with it. If you want a plug-and-play student lesson to pair with this PD format, you can purchase Staying Safe with AI on Teachers Pay Teachers or on Outschool today.
