Before teachers can explore tools like NotebookLM meaningfully, they need to feel safe—both technically and professionally. AI exploration asks educators to take risks: trying new workflows, questioning outputs, and admitting uncertainty. Those moves don’t happen in a culture that feels evaluative, unclear, or inconsistent.
This module helps you build the foundation that protects trust and supports learning.
By the end of this module, you will be able to:
1) Create a psychologically safe environment for trying something new
You’ll set the tone that exploration is expected, questions are welcomed, and mistakes are part of the learning process, not evidence of incompetence. You’ll practice language that signals:
“This is not evaluation.”
“This is exploration.”
“We learn by trying.”
2) Set strong privacy guardrails with a clear “Never-Upload” list
Even when a tool is “grounded” in documents, privacy and compliance still matter. Teachers need crystal-clear boundaries about what can never be entered into an AI tool:
student identifying information
student work
IEPs/504s
behavior records
assessment data tied to students
any other protected records
You’ll also identify what is safe to use: public curriculum resources, teacher-created materials with no student data, and publicly shareable documents.
3) Develop team norms focused on inquiry and collaboration
Instead of handing teachers a list of rules, you’ll support them in co-creating agreements for how the group will learn together. This builds buy-in, reduces defensiveness, and helps teams respond productively when the tool is wrong, confusing, or inconsistent.
Examples of norms you’ll build:
Curiosity over certainty
Assume positive intent
Support without comparison
Verify before sharing
Privacy is non-negotiable
4) Invite teachers into an AI pilot PLC with transparency and clarity
Teachers are more willing to participate when the invitation is honest and specific. You’ll learn how to communicate:
what the pilot is and isn’t
what teachers are expected to do
what supports will be provided
how privacy and professionalism will be protected
Why this matters
These foundational steps protect educators and students—and ensure your AI work supports innovation without compromising trust or safety. When you lead with guardrails and psychological safety first, teachers are far more likely to participate, practice, and apply what they learn in ways that truly benefit students.