Seven Schools, Seven Pathways: What We Can Learn from AI Implementation Across Contexts
The Global Laboratory: Lessons from Schools Navigating AI Integration
Thank you for engaging with this work; if you found it valuable, please like the post. Over the next month, I'm establishing a cohort of educators interested in developing disciplinary-specific AI practices in their classrooms. Participants will receive early access to DSAIL materials and frameworks, and the chance to build this approach alongside other educators, so DM me if you're interested.
Opening Frame
When Dan Fitzpatrick’s editorial team compiled case studies for The Educators’ 2026 AI Guide, they captured something more valuable than best practices: they documented schools in the messy middle of transformation. A 65-member task force in Ohio spending two years on foundational learning. A New York administrator buying Chrome extensions in February to help overwhelmed teachers. A UK boarding school deploying AI to students in January, then pivoting to teach it as a discrete subject by fall. A micro-school founder building dashboards that predict student outcomes from daily emotional check-ins.
These aren’t the polished implementation stories that appear in edtech marketing materials. They’re snapshots of institutions wrestling with impossible timelines, limited capacity, and questions nobody has answered yet. What makes the collection valuable is precisely what makes it unsettling: the sheer diversity of approaches reveals there is no consensus about what AI integration should look like, what problem it’s meant to solve, or even who it’s primarily for.
Fitzpatrick positions the book around two camps—pragmatists using AI to support existing work versus transformationalists reimagining education entirely. But reading through the seven case studies, a different pattern emerges. These aren’t stories about choosing between efficiency and transformation. They’re stories about schools trying to be intentional while the ground shifts beneath them, attempting to build frameworks while students are already using ChatGPT for homework, and making high-stakes decisions about tools and data and pedagogy with incomplete information.
The question isn’t which school got it right. The question is: what can we learn from watching seven different institutions navigate different constraints toward different goals? What patterns appear across contexts? What gaps persist despite good intentions? And what does this moment of radical variation teach us about the coordination challenges ahead?
Context Shapes Everything (And That’s Actually Good)
What the case studies reveal: The variation isn’t chaos—it’s adaptation to real constraints and opportunities.
Berea City Schools, Ohio (Lesson 1) - Large public system
Approach: 65-member task force, 5-stage process over 1-2 years, book study foundations
What they got right: Taking time to build shared vocabulary and vision before deployment. “AI should be human-centered” isn’t just rhetoric when 65 people study and discuss it together.
The constraint they’re navigating: Scale. You can’t move 65 people quickly, but once they move, they move together.
Transferable lesson: Task forces aren’t bureaucratic delays—they’re infrastructure. When Berea reaches student AI literacy (currently in “next steps”), they’ll have institutional buy-in most districts never achieve.
Mohosen School, Upstate NY (Lesson 2)
Approach: Teacher pain points first, rapid tool adoption (Brisk for grades 6-12), heavy emphasis on efficiency
What they got right: Meeting teachers where they are. “Overwhelmed teachers, disengaged students, and a need for support”—Karandy didn’t pretend this was about transformation. It was about survival.
The constraint they’re navigating: Teacher capacity. Special ed teacher quote: “With Gemini, the shift wasn’t only in time, it was emotional capacity.”
Transferable lesson: Sometimes “just help teachers breathe” is the right first step. The danger is stopping there. Karandy acknowledges “AI literacy curriculum” is on the horizon—the question is whether they’ll get there before patterns calcify.
Haileybury College, UK (Lesson 3)
Approach: Fast deployment to students, flipped learning model, then pulled back to discrete AI subject
What they got right: Willingness to pivot. Early resistance framed AI as academic dishonesty → shifted to flipped classrooms. Policy overload → policy clarity. They learned in real-time.
The struggle worth noting: "Students, 13 and above, could use tools during lessons and for homework." But what were they learning about AI, from AI itself, before the Digital Innovations course launched?
Transferable lesson: Speed creates data. Haileybury’s pivots came from watching what actually happened when students used AI, not from committee deliberations. But you need commitment to observe and adjust, not just deploy and declare victory.
The Specialist’s Dilemma—When Expertise Enables (And Limits)
International School of Panama (Lesson 4)
Approach: Custom model building, fine-tuned tools for Understanding by Design (UbD), RAG (retrieval-augmented generation) tools for unit planning, automated observation feedback
What’s genuinely impressive: Rostan built systems aligned to their pedagogy (UbD), not generic AI tools. “Many of our teachers adopted it to do the heavy lifting for them, while they focused on the final review.”
The expertise barrier: This requires someone who can fine-tune models. Most schools don’t have a Jeremie Rostan. The case study documents what’s possible with rare technical capacity, but doesn’t help schools without it.
Transferable lesson: The principle transfers even if the execution doesn't. "Even the best models usually underdeliver because they are not trained for this specific purpose." Schools should adapt AI to their frameworks, not adapt their frameworks to AI's defaults. Most schools will use different tools (prompt libraries, not custom models), but the thinking applies; a minimal sketch of that prompt-library approach follows.
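To make the prompt-library idea concrete, here is a minimal sketch of one way a school might encode its framework in reusable prompts rather than a fine-tuned model. The template wording, function name, and grade/subject fields are illustrative assumptions, not details from the case study; only the three UbD stages come from Wiggins and McTighe's published framework.

```python
# A minimal, hypothetical "prompt library" entry: the school's pedagogy
# (here, Understanding by Design) is baked into every request, so a
# general-purpose model follows the school's framework instead of its
# own defaults. Names and template text are illustrative, not from the book.

UBD_UNIT_TEMPLATE = """You are a unit-planning assistant for our school.
Structure all output using Understanding by Design:
Stage 1 - Desired Results: transfer goals, understandings, essential questions
Stage 2 - Evidence: performance tasks and other assessment evidence
Stage 3 - Learning Plan: sequenced learning activities
Treat the teacher's draft as a starting point; flag gaps instead of
inventing content, and leave final decisions to the teacher."""


def build_unit_prompt(teacher_draft: str, grade: str, subject: str) -> str:
    """Combine the school's framework with a teacher's rough draft."""
    return (
        f"{UBD_UNIT_TEMPLATE}\n\n"
        f"Grade: {grade}\nSubject: {subject}\n"
        f"Teacher draft:\n{teacher_draft}\n\n"
        "Produce a Stage 1-3 outline the teacher can review and revise."
    )


if __name__ == "__main__":
    # Example: a two-line teacher draft comes back as a framework-shaped
    # prompt, ready to send to whatever model the school already uses.
    print(build_unit_prompt("Two-week unit on fractions and equivalence", "5", "Math"))
```

The design choice mirrors Rostan's point: the framework, not the model, supplies the structure, and the teacher stays in the loop for final review.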
Culture-Building as Core Work
GEMS Wellington, Dubai (Lesson 5)
Approach: AI Core Team, 6-stage process including student conference (AI Nexus: Connecting Minds), Research Retreat for 6th Form on ethics, parent engagement with question cards
What stands out: The integration challenge was central. “How do I get all these people together in one room at the same time?” Multi-trajectory approach: newsletter, coffee chats, trainings, information hub.
The success metric that matters: 35% student onboarding in year one, 75% by the end of year two. That's an actual culture shift, not just a policy announcement.
Transferable lesson: Natha treated AI implementation as organizational change, not technology adoption. The tools weren’t the hard part—getting teachers, students, parents, and administrators developing shared understanding simultaneously was the work.
Sacred Heart Grammar School, Northern Ireland (Lesson 6)
Approach: “AI sandbox,” custom GPTs, voluntary Teacher AI Focus Group, CRAIC prompt framework, logging all interactions
What they articulated clearly: “Our pupils didn’t just need access to information. They needed scaffolding, clear frameworks for evaluation, structured digital guidance, and a critical lens through which to view the tools now shaping their world.”
The honest struggle: April data showed 186 chat sessions total. They built thoughtful infrastructure and saw... limited adoption.
Transferable lesson: Going slow and being intentional doesn't guarantee uptake. Sacred Heart's low usage raises important questions: Were the barriers too high? Was the sandbox too restrictive? Or does careful, literacy-first implementation simply take longer to gain traction? Their approach deserves watching over time, not dismissal based on early numbers.
The Alternative Model Shows Different Possibilities (And Different Risks)
IMperfect Academy (Lesson 7)
Context: Micro-school for students “stuck in the gaps of traditional education”
Approach: AI-driven Individual Learning Plans, IMperfect Dashboard pulling a wide swath of student data, students co-creating PBL tasks, AI avatars, AI-driven SEL prompts
The compelling outcomes: 25% decrease in recidivism, 18 students accelerating through coursework, teachers saving 6 hours/week
The unresolved tension: The school claims a "zero-surveillance stance" while describing systems that monitor SEL indicators, generate academic heatmaps, and flag wellbeing concerns. This isn't necessarily wrong, but calling intensive data monitoring "zero-surveillance" obscures important conversations about trade-offs.
Transferable lesson: Alternative schools can move faster because they're already outside traditional constraints. IMperfect demonstrates what high-intensity AI integration can look like. Whether traditional schools should move toward this model requires grappling with questions Blackwell raises but doesn't fully answer: What's the difference between personalization and surveillance? Between support and dependence?
What Emerges Across All Seven
Three Success Patterns:
Alignment to existing pedagogy matters (Panama’s UbD integration, Berea’s human-centered foundation)
Multi-stakeholder engagement prevents siloing (GEMS Wellington’s integration challenge focus, Sacred Heart’s Teacher Focus Group)
Iteration beats perfect planning (Haileybury’s pivots, Mohosen’s responsive tool adoption)
Three Persistent Gaps:
Student AI literacy keeps getting deferred - Even schools doing interesting work treat it as a future phase, not a foundation
Assessment redesign barely appears - Only Haileybury acknowledges that AI "breaks traditional assessment." Everyone else bolts AI onto existing grading structures
Data security gets lip service - How can a school protect student data while using 4-5 different tools or feeding SEL data into automated feedback and interventions?
The Geographic Pattern Worth Noting:
US schools are slowest and most teacher-productivity-focused. International schools move faster toward student-facing tools but struggle with literacy frameworks. The UK schools show more willingness to pivot and experiment. This isn’t better or worse—it’s different institutional contexts producing different risk tolerances and different bottlenecks.
Closing: What These Seven Schools Show Us
These case studies don’t provide a blueprint. They provide evidence that certain questions persist across radically different contexts.
How do we move from teacher productivity tools to genuine student AI literacy? How do we redesign assessment when AI transforms what students can produce? How do we govern data across multiple platforms and increasingly sensitive use cases?
These questions appear in Ohio public schools and Dubai international schools, in fast-moving UK boarding schools and carefully paced Northern Ireland grammar schools. Context shapes how schools approach them, but it doesn't eliminate them.
The value isn’t in the solutions. It’s in watching committed educators generate knowledge in real time. Berea’s deliberate pace teaches us about building institutional capacity. Mohosen’s rapid adoption reveals what happens when you prioritize immediate teacher relief. Haileybury’s pivots show the importance of observation over planning. GEMS Wellington demonstrates what multi-stakeholder culture change requires. Sacred Heart raises questions about adoption versus intentionality. Panama shows what’s possible with rare technical expertise. IMperfect presents both possibilities and cautions about data-intensive personalization.
For schools beginning this work, the lesson isn’t “copy one of these approaches.” It’s this: understand what questions matter most in your context, then commit to learning from what happens next.
What constraints are you navigating? What existing pedagogies should AI align with rather than replace? Who needs to be in the room? How will you coordinate the AI literacy students are already developing into something coherent rather than contradictory?
These seven schools can’t answer those questions for you. But they show what it looks like when schools take them seriously, make different choices based on different constraints, and document both successes and uncertainties. That honesty is the real contribution. Thoughtful implementation doesn’t mean having all the answers. It means staying curious about the questions.
Nick Potkalitsky, Ph.D.
Check out some of our favorite Substacks:
Mike Kentz’s AI EduPathways: Insights from one of our most insightful, creative, and eloquent AI educators in the business!!!
Terry Underwood’s Learning to Read, Reading to Learn: The most penetrating investigation of the intersections between compositional theory, literacy studies, and AI on the internet!!!
Suzi's When Life Gives You AI: A cutting-edge exploration of the intersection of computer science, neuroscience, and philosophy
Alejandro Piad Morffis’s The Computerist Journal: Unmatched investigations into coding, machine learning, computational theory, and practical AI applications
Michael Woudenberg’s Polymathic Being: Polymathic wisdom brought to you every Sunday morning with your first cup of coffee
Rob Nelson's AI Log: Incredibly deep and insightful essays about AI's impact on higher ed, society, and culture.
Michael Spencer’s AI Supremacy: The most comprehensive and current analysis of AI news and trends, featuring numerous intriguing guest posts
Daniel Bashir’s The Gradient Podcast: The top interviews with leading AI experts, researchers, developers, and linguists.
Daniel Nest’s Why Try AI?: The most amazing updates on AI tools and techniques
Jason Gulya’s The AI Edventure: An important exploration of cutting-edge innovations in AI-responsive curriculum and pedagogy.



Thanks. Great overview of a variety of best efforts to deal with the realities of 2025. I've just downloaded the book. Interesting to me that only one of the sites changed assessment. WHAT?? This is a key finding that stands out to me. Thanks for another very helpful post.
To me the key takeaway is that educators seem to think it’s all about whether or not to use AI and are putting no energy at all into what to teach about it.