Beyond Compliance: A Reappraisal of Trust
Trust is the greatest untapped resource in our race to keep up with AI advancement
Thank you for engaging with this work—if you found it valuable, please like the post. Over the next month, I’m establishing a cohort of educators interested in developing disciplinary-specific AI practices in their classrooms. Participants will receive early access to DSAIL materials and frameworks, and the chance to build this approach alongside other educators. DM me if you’re interested.
Let’s be clear: AI implementation in K-16 is high stakes. We’re talking about student data privacy, academic integrity, the future of assessment, and fundamental questions about what learning looks like in an AI-saturated world. There are real safety concerns, legitimate pedagogical worries, and pressures from every direction. We all want to get this right.
Given the stakes, the impulse toward control makes perfect sense. When the ground is shifting beneath our feet, when new tools launch monthly, when parents and board members are asking hard questions, reaching for policies and detection software feels like responsible leadership.
But it’s time I returned to a foundational principle that’s been threading through my work all summer, even when I haven’t named it directly: trust.
Trust has been the implicit commitment behind every conversation I’ve had with district leaders, every classroom visit, every reflection on how schools are navigating AI integration. And as I travel across Central Ohio, watching how this plays out in real time, I’m seeing trust emerge not just as an ideal, but as the most practical path forward in this high-stakes environment.
The Compliance Trap
Walk into any district meeting about AI, and you’ll likely hear conversations about detection software, plagiarism policies, and elaborate systems designed to catch students using AI inappropriately. The language is about compliance, control, and catching violations.
I understand the impulse. The pressure is real. Boards are asking questions. Parents have concerns. The tools are moving faster than policy can keep up. When everything feels uncertain, control feels like safety.
But here’s what I’m observing: the districts investing the most energy in detection and control are often the ones struggling most with meaningful AI integration. They’re spending time and resources on surveillance while missing opportunities to develop the critical thinking and ethical reasoning that actually matter in an AI-saturated world.
The compliance approach assumes the worst about both teachers and students. It assumes teachers can’t be trusted to navigate new tools thoughtfully and students can’t be trusted to engage with AI in service of learning. These assumptions create exactly the environment they fear: secrecy, workarounds, and missed learning opportunities.
What Trust Looks Like in Practice
Three years into this disruption, I’m seeing something different in the districts that are thriving. They’re choosing trust as a strategic approach, and it’s working.
Trust with teachers means recognizing that no two classrooms, schools, or communities are identical. It means understanding that the teacher working with newcomer English learners needs different AI approaches than the one teaching AP Literature. It means giving teachers space to experiment, reflect, and share what’s actually working rather than mandating uniform policies that ignore the realities of diverse learning environments.
I’m watching technology directors pause implementation plans to ask teachers what they’re seeing in their classrooms. I’m seeing instructional coaches create spaces for teachers to share both successes and failures with AI tools. These aren’t perfect rollouts, but they’re sustainable ones, built on the recognition that teachers are master adapters who understand their students better than any policy manual ever could.
Trust with students means acknowledging that they already have AI in their pockets and helping them navigate it thoughtfully rather than pretending we can control their access. It means shifting from “How do we catch students using AI?” to “How do we help students use AI well?”
The most powerful classroom conversations I’m witnessing aren’t about whether AI was used, but about how thoughtfully it was used. Students who feel safe being transparent about their AI process are the ones having meaningful discussions about when these tools enhance their learning and when they don’t.
Why Trust Isn’t Naive
Some will argue that trust is idealistic, that we need guardrails and oversight. But trust isn’t the absence of boundaries; it’s the foundation for meaningful ones.
Trust-based approaches still have clear expectations. The difference is that those expectations focus on learning outcomes and ethical reasoning rather than tool avoidance. They assume students and teachers want to do good work and create conditions for that to happen.
More practically, the alternative simply doesn’t scale. We cannot monitor every AI interaction, detect every tool use, or control every learning environment. The energy spent on surveillance could be redirected toward developing the critical thinking skills that will serve students long after our current detection tools become obsolete.
The Trust Ecosystem
Here’s what I’m learning: trust with teachers and trust with students aren’t separate strategies. They’re interconnected parts of the same ecosystem.
When we trust teachers to lead AI integration thoughtfully, we model the kind of professional judgment we want students to develop. When we trust students to engage with AI transparently, we create space for the authentic learning conversations that help teachers refine their practice.
Students notice when their teachers are supported and trusted to make professional decisions. Teachers notice when students are treated as thoughtful partners in learning rather than potential violators of policy. This creates a culture where everyone can focus on the real work: developing wisdom about how to live and learn in an AI-integrated world.
Moving Forward
The students who will thrive in the next decade aren’t the ones who learned to avoid AI or sneak around detection software. They’re the ones who learned to use these tools wisely, ethically, and in service of deeper learning. The teachers who will lead this work aren’t the ones following scripts about AI, but the ones trusted to adapt these tools to serve their students’ unique needs.
Three years in, the evidence is clear: trust, grounded in clear expectations and ongoing dialogue, creates better outcomes than compliance and control. It’s not just more humane; it’s more effective.
As we head into another school year of rapid change and evolving tools, maybe it’s time to trust the people who’ve been adapting to change all along: our teachers and our students.
Nick Potkalitsky, Ph.D.
What are you seeing in your context? Where are you finding opportunities to choose trust over control? I’d love to hear your reflections.
Check out some of our favorite Substacks:
Mike Kentz’s AI EduPathways: Insights from one of our most insightful, creative, and eloquent AI educators in the business!!!
Terry Underwood’s Learning to Read, Reading to Learn: The most penetrating investigation of the intersections between compositional theory, literacy studies, and AI on the internet!!!
Suzi’s When Life Gives You AI: A cutting-edge exploration of the intersection among computer science, neuroscience, and philosophy
Alejandro Piad Morffis’s The Computerist Journal: Unmatched investigations into coding, machine learning, computational theory, and practical AI applications
Michael Woudenberg’s Polymathic Being: Polymathic wisdom brought to you every Sunday morning with your first cup of coffee
Rob Nelson’s AI Log: Incredibly deep and insightful essays about AI’s impact on higher ed, society, and culture.
Michael Spencer’s AI Supremacy: The most comprehensive and current analysis of AI news and trends, featuring numerous intriguing guest posts
Daniel Bashir’s The Gradient Podcast: The top interviews with leading AI experts, researchers, developers, and linguists.
Daniel Nest’s Why Try AI?: The most amazing updates on AI tools and techniques
Jason Gulya’s The AI Edventure: An important exploration of cutting-edge innovations in AI-responsive curriculum and pedagogy.



A follow-up question about the cohort opportunity: why by discipline? To ensure the teacher has mastery of the subject and can quickly note erroneous output? I ask because, as an AP Language teacher, I am often working across disciplines on topics like International Humanitarian Law and its importance, or a discussion like "Does the pursuit of quantum computing justify using the resources often required?"
I would also argue that outside of education, in the interactions I've had with people working in for-profit AI over the past year or so, they are looking for broad thinkers who understand concepts across disciplines.
That said, students certainly must reach a point of understanding the underlying jargon and grammars within disciplines, so I understand the need to split apart AI usage, especially at the middle or primary levels.
Trust is such a difficult word here because we know what it means... kind of. But we don't know how to operationalize it. That's why I like using the term Entrust. As in, do I trust AI? No. Do I trust kids? Also no.
However, I will entrust kids to use AI in certain contexts and, as you regularly recommend, within frameworks that bound the problem space.
More on the nuance here:
https://www.polymathicbeing.com/p/dont-trust-ai-entrust-it