The Great Revelation: AI Exposes the Limits of Educational Control
What happens when we acknowledge the limits of our educational control? What happens when students truly become participants in their educational development?
Thanks for joining our growing community of educators exploring AI in learning! With 200-300 new subscribers each week, you're now part of an important conversation about how AI is transforming education at all levels.
Everything we share remains freely available, though paid subscriptions help support our work. Questions about anything I cover? Just DM me - always happy to chat!
Nick
Perhaps what's really irking us educators about this whole AI situation is that it's revealing what has been true the whole time—we teachers are not in control.
The emergence of accessible AI chatbots has sparked intense debate in education. Within weeks of ChatGPT's debut, large school districts moved to ban AI on school networks, fearing students would use it to generate answers and essays dishonestly. Early surveys found that 43% of K-12 educators believed ChatGPT would make their jobs more difficult. Teachers suddenly spent time devising "AI-proof" assignments rather than focusing on instruction.
Yet beneath this panicked response lies a deeper revelation: AI is not creating a new paradigm but exposing a truth educators have long denied. The locus of control in education has never fully resided with teachers, despite our institutional structures suggesting otherwise. As Stanford research revealed, there was no statistically significant rise in cheating after ChatGPT's release—honest students used it to enhance learning rather than cheat outright, while those who previously found shortcuts simply adopted a new tool.
This mirrors what educational philosopher Paulo Freire critiqued as the "banking model" of education, where "the teacher is a lecturer, and the students are containers that need to be filled by the teacher." This model treats students as objects rather than subjects of their own learning—a dynamic Freire warned is dehumanizing: "To alienate human beings from their own decision making is to change them into objects."
Students have always driven their own educational journeys. They decide what to internalize, what to memorize temporarily, and what to discard. They determine when to engage deeply and when to merely comply. Through history, they've created informal networks to navigate around strict rules—swapping notes, using summaries instead of assigned readings—essentially asserting control over how much effort to invest in meeting requirements.
Our elaborate grading schemes and assessments are particularly powerful mechanisms for maintaining the illusion of teacher control. As Jesse Stommel bluntly states, "Grades are currency for a capitalist system that reduces teaching and learning to a mere transaction." This system puts teachers and students at odds, creating what one educator described as "a hierarchical system that pits teachers against students and encourages competition by ranking students." The traditional grading paradigm shifts the focus from learning to accumulating points—students are incentivized to play the game rather than pursue knowledge for its own sake.
A 2024 study published in Science Advances demonstrates this dynamic perfectly: when college students were allowed to choose whether attendance would count toward their grade (rather than having it mandated), nearly all opted to make attendance part of their evaluation and attended more regularly than those for whom it was simply required. When trusted with autonomy, most students acted responsibly—the outcomes improved not despite student control, but because of it.
What AI tools like ChatGPT have done is unveil this reality, making it impossible to ignore. Now, students have unprecedented access to thought-simulation technologies that operate outside our ability to trace or control them. These tools aren't just answering questions; they're revealing the fundamental truth that education is, and has always been, primarily student-driven.
The ease and undetectability of AI-generated content breaks the traditional assumption of schooling: that submitted work equals learning achieved. As veteran teacher David Nurenberg observes, for decades education (especially under standardized testing regimes) emphasized measurable outcomes—polished essays, test scores, final projects—as proof of learning. But now that students can produce polished essays via AI with minimal effort, educators are compelled to refocus on the learning process rather than just the end product.
This realization can be profoundly unsettling. It challenges our professional identity and institutional frameworks built on the premise of teacher authority. It makes us question the validity of our assessments and the authenticity of student work. But perhaps instead of seeing this as a threat, we might recognize it as an opportunity for a more honest educational model—one that acknowledges the reality that students have always been co-authors of their educational stories.
What if we acknowledged that we are, at best, trusted participants in our students' learning journeys? What if our role shifted toward what educational models identify as the teacher-as-mentor: where authority becomes more about expertise and less about power? This aligns with what constructivist and inquiry-based models have long advocated—moving from the "sage on the stage" to a facilitator who provides structure while students explore and discover.
The pandemic exposed these truths before AI accelerated them. As I explored in my post "AI is Not the Solution to All Our Educational Challenges," post-pandemic students reveal learners who "crave opportunities" for multiple pathways of engagement, "push themselves towards mastery when they feel significant ownership" of their learning, and struggle with a "fundamental distrust in 'school as normal.'" They aren't yearning for technological revamps but for recognition as "self-empowered learners" after a period of extended powerlessness.
This explains why early approaches to AI in education failed so spectacularly. The "resist" model (ban or restrict) reasserted teacher authority and characterized student AI use as misconduct, creating a cat-and-mouse dynamic while ignoring the reality of student agency. The "adapt" model (mitigate and redesign) acknowledged AI's permanence but still clung to traditional authority structures. Only the "embrace" model—which integrates AI as a learning tool while teaching students to use it critically—truly recognizes the balance of control that has always existed.
As New York City schools discovered when they reversed their AI ban and launched an education program teaching students how to use tools like ChatGPT, the embrace approach treats students as partners in innovation. It implicitly trusts that students can learn to use new tools constructively if empowered with knowledge and ethical frameworks.
So perhaps what's irking us isn't really about AI at all. It's about confronting the gap between the education we think we're providing and the education students are actually creating for themselves. It's about recognizing that our illusion of control has been just that—an illusion maintained by institutional structures that no longer align with technological realities.
The question now isn't how to regain control, but how to become worthy partners in our students' learning journeys. How do we shift from being gatekeepers to trusted guides who can help students navigate a world where information is abundant but wisdom remains rare? How do we build what educational theorists have long described as "a balanced approach whereby teachers and students partner to co-construct student learning"?
This isn't just about adapting to AI. It's about embracing a fundamental truth that AI has simply made impossible to ignore: the tension between student agency and teacher authority, when balanced thoughtfully, is what creates a productive educational experience. As one teacher, Cherrie Shields, urged regarding AI's arrival: "The best way to learn anything new is just to jump right in and try it out." That spirit of open-minded experimentation, guided by pedagogical wisdom and mutual respect, will be crucial in reimagining classrooms where authority and agency are not opposites at war, but complementary forces driving education forward.
Nick Potkalitsky, Ph.D.
Check out some of our favorite Substacks:
Mike Kentz’s AI EduPathways: Insights from one of our most insightful, creative, and eloquent AI educators in the business!!!
Terry Underwood’s Learning to Read, Reading to Learn: The most penetrating investigation of the intersections between compositional theory, literacy studies, and AI on the internet!!!
Suzi’s When Life Gives You AI: A cutting-edge exploration of the intersection among computer science, neuroscience, and philosophy
Alejandro Piad Morffis’s Mostly Harmless Ideas: Unmatched investigations into coding, machine learning, computational theory, and practical AI applications
Michael Woudenberg’s Polymathic Being: Polymathic wisdom brought to you every Sunday morning with your first cup of coffee
Rob Nelson’s AI Log: Incredibly deep and insightful essays about AI’s impact on higher ed, society, and culture.
Michael Spencer’s AI Supremacy: The most comprehensive and current analysis of AI news and trends, featuring numerous intriguing guest posts
Daniel Bashir’s The Gradient Podcast: The top interviews with leading AI experts, researchers, developers, and linguists.
Daniel Nest’s Why Try AI?: The most amazing updates on AI tools and techniques
Riccardo Vocca’s The Intelligent Friend: An intriguing examination of the diverse ways AI is transforming our lives and the world around us.
Jason Gulya’s The AI Edventure: An important exploration of cutting edge innovations in AI-responsive curriculum and pedagogy.
This is how many of us have approached AI in education: recognizing the power and opportunities AI presents rather than focusing on the problems or obstacles to overcome.
As I have only taught at the college and university level, I’ve been fortunate to focus on the adult learning model (andragogy) rather than pedagogy. One of the many aspects of teaching that, I believe, hampers those in the profession in adopting and applying “forward thinking” is the continual use of “backward facing” terminology.
While many would argue that pedagogy and andragogy are interchangeable terms, I would posit the opposite, particularly for legacy teachers, instructors, and professional knowledge providers. Using the correct terminology for students can encourage and enable the shift in our teaching and learning styles and mindset.
Many thanks for bringing up the use of any tool that supports preparing the whole citizen across the many subject areas offered, taught, and explored. I wonder whether AI has the capacity to bring people together in as-yet-unexplored ways; just riffing here.