Stop Treating AI Like Training. Start Treating It Like Change Management.
One of the things I do best is help schools introduce new work in a way that actually sticks, whether that's new curriculum, new tools, or new structures like PLCs (Professional Learning Communities) and RTI (Response to Intervention). In other words: I live in the world of change management in schools.
A PLC is a team-based routine where educators use shared evidence of student learning to make instructional decisions, try a plan, and come back with results. RTI is a tiered support system for identifying student needs early and responding with targeted instruction and interventions. Both are strong on paper. Both can fall flat in practice when implementation is treated like a one-time training instead of a sustained change management process.
So when AI tools started becoming mainstream, I treated them like any other major shift: I went to learn how the technology works before I tried to tell schools what to do with it.
I took roles that would put me close to the work. My first AI-related job was contract work with Google, supporting the training process for a large language model. Most recently, I worked as an Education Program Manager at an edtech startup, partnering with district leaders, school leaders, and teachers to implement a tool in real school conditions, not just ideal ones.
That experience forced a rethink.
Because the biggest barrier to AI integration isn’t interest. It isn’t even skill. It’s the human side of change. It’s the risk teachers feel when they have to try something new in front of colleagues while they’re already carrying a full load. If it’s not safe to say, “I don’t know how to do this,” people will either opt out quietly or perform competence. And neither of those leads to real learning or lasting change.
That’s why this PD I’m creating starts with psychological safety before we talk about tools.
Implementation Is Hard. Change Management Makes It Possible.
Most leaders already know this, even if they’re exhausted from living it: implementation isn’t launching something. It’s getting it to stick.
Over the past year, I watched educators engage with an unfamiliar AI platform: some with excitement, others with hesitation, many with both at the same time. And what stood out wasn't whether the tool was "good." Honestly, it wasn't a perfect platform. It had the predictable growing pains of an early-stage product, and that showed up in day-to-day use.
What mattered more than the platform itself were the change-management conditions around the rollout:
protected time
shared language
clear purpose
leadership follow-through
permission for a realistic learning curve
Teachers and leaders needed a place to ask questions, test ideas, reflect honestly, and build on each other’s learning.
That’s why I’m doubling down on PLCs as a practical vehicle for change management in schools, and a natural launchpad for AI integration.
But there’s one more layer that matters even more than the meeting structure.
The Barrier Isn’t the Tool. It’s the Risk of Learning in Public.
What often gets overlooked in AI rollouts is that we’re not just asking teachers to learn something new.
We’re asking them to learn it in public.
To draft, test, and revise in front of colleagues. To admit confusion. To ask questions they don’t have time to polish. To try something that might not work.
That’s not a technical hurdle. That’s a change management hurdle, because change asks people to step into uncertainty, and uncertainty can feel unsafe.
And if it isn’t safe to be new, people will protect themselves in predictable ways:
Quiet avoidance
Minimal compliance
“Looks like I’m doing it” implementation
A few early adopters carrying the weight for everyone else
None of that is a character flaw. It’s what humans do when the environment doesn’t protect learning.
This is why my PLC + AI PD starts with psychological safety.
And I’m not saying that as a trendy buzzword. I’m saying it because I learned it the hard way.
The Moment I Realized I Was Asking for Vulnerability (Not “Just Data”)
Five years ago, as an instructional coach, I rolled out my VOYAGE Horizons PLC framework at an elementary school. I created a structure called the Short Data Cycle, a concise PLC routine designed to keep teams anchored in student learning by moving from evidence → decisions → action → reflection.
Part of the cycle required teams to look at class data together.
When I introduced it, one of the first pushbacks came from the teacher I least expected: the grade-level chair.
She was the kind of teacher administrators count on. Experienced. Strong instruction. Kind. Respected. If leadership asked her to do something, even something hard, she did it. No pushback.
So when she reacted with immediate anger about sharing class data with her team, I was completely taken aback.
In my head, I kept thinking: It’s just data. Haven’t they been doing this already?
They hadn’t.
Not her team. Not any team. It wasn’t part of the culture. And what I was calling a “simple protocol” landed as something else entirely:
Put your results on the table so everyone can see them and judge them.
Instead of defending the structure, I started asking open-ended, probing questions. The anger shifted. And then she started to cry.
She worried that her class data was so low—COVID-19, anyone?—that if others saw it, they would judge her effectiveness, not only as a teacher, but as a leader.
And once she said it out loud, it became clear she wasn’t the only one carrying that fear.
That experience changed how I think about implementation and about change management.
Because the resistance wasn’t about the Short Data Cycle.
It was about vulnerability without protection.
Why This Matters for AI Change Management
Trying an AI tool in front of colleagues can trigger the same internal questions teachers often won’t say out loud:
What if I don’t get it?
What if I misuse it?
What if I look like I'm behind?
What if this becomes one more way people measure me?
What if this replaces me?
If a school hasn’t built the conditions where it’s safe to be a beginner, AI integration doesn’t become innovation.
It becomes performance.
That’s why this PD I’m working on includes psychological safety on purpose, not as a soft opening, but as a core change management requirement.
Because before I ask teachers to try something that makes them feel exposed, I have to help leaders and facilitators build the container that makes that exposure safe.
What’s Coming Next: PLCs as the Launchpad for AI
This is the direction I’m heading next in my work: helping schools use PLCs as the place where AI exploration becomes sustainable, responsible, and actually useful.
Not because PLCs are trendy.
Because PLCs, when they’re supported well, create the conditions teachers need to learn side by side:
a shared routine
a shared purpose
a shared language
and enough trust to say, “I don’t know yet.”
And that’s the starting point for real change.
Final Thoughts
AI tools are going to keep showing up in schools. Some will be helpful. Some will be noise. But none of them will “transform” anything if we skip the conditions teachers need to learn them without fear.
PLCs can be the safest and most effective place for that learning if we treat PLC implementation as a change management responsibility, not a calendar event.
So before we talk about tools, my question is this:
What are we doing to make it safe to be a beginner here?