The Five Faces of Education in the Age of AI: A Spectrum of Survival, Skepticism, and Symbiosis
Or: What Do We Lose When We Outsource Thought?
Discussions about AI in education too often fall into oversimplified binaries—AI good vs. AI bad, full rejection vs. total adoption. We tend to assume that a single idea—like preserving AI-free zones or using AI for feedback—encapsulates an entire philosophy. This leads to mischaracterizations and prevents deeper conversations about how different responses to AI can coexist and evolve.
In reality, very few educators or institutions fully embrace or reject AI. Instead, responses are layered, context-dependent, and shaped by pedagogical values, institutional goals, and broader societal concerns. The Microsoft Critical Thinking Study has further prompted many to clarify their positions, revealing a spectrum of approaches rather than a rigid divide.
In my work with schools, I help institutions explore AI not through a singular stance but through a process of alignment with their specific instructional commitments. The goal is not to advocate for one approach over another but to recognize that AI integration will necessarily proceed along multiple lines, with each offering insights into the future of learning. This diversity should not be seen as a weakness but as a strength—our students will ultimately benefit from the lessons learned across these varied strategies. To illustrate these nuances, I outline five educational stances—not as rigid categories, but as reflective models of how educators and institutions are currently negotiating AI’s role in learning.
1. The Fortress: “No AI. Ever.”
Core argument: Human minds are built through friction. Writing by hand, thinking without algorithmic crutches, failing slowly—these are sacred rites. To outsource any part of the process to AI is to rob students of the struggle that forges intellectual grit.
What they fear: A generation that cannot think without autocomplete. That learning itself—struggle, synthesis, deep work—will be sacrificed for ease and efficiency.
What they miss: The world has already outsourced memory to Google, navigation to GPS, and some aspects of judgment to algorithms. A total rejection of AI may risk a form of educational nostalgia that prioritizes tradition over adaptability.
Key policies in practice: Bans on AI-generated content. Written assignments must be done by hand or on non-connected devices. AI detection software becomes a gatekeeper of academic integrity.
2. The Gatekeepers: “AI as Librarian, Not Co-Author.”
Core argument: Let AI fetch sources, check grammar, or debug code—but keep it away from the blank page. The first draft of thought must be human, because writing is not just communication; it’s the act of becoming coherent.
What they fear: Students who mistake AI’s fluency for understanding. That the ability to produce words does not mean one has generated ideas.
What they risk: A false binary between research and creation. If writing is thinking, what happens when AI participates in that thinking? Can a tool that edits your sentences also edit your ideas?
Key policies in practice: AI may be used in later stages—editing, formatting, summarizing—but not in initial idea generation or drafting. AI citation and disclosure policies become crucial to maintaining academic integrity.
3. The Mediators: “Teach Me to Argue With My Algorithm.”
Core argument: AI is not a calculator for words. It’s a collaborator with motives. Let students dissect its biases, hack its templates, and co-write essays they later defend against its claims. The goal: minds agile enough to partner with AI without being parasitized by it.
Radical shift: Assessment becomes less about the product and more about the process of human-AI negotiation. Writing assignments shift toward meta-cognition—thinking about thinking.
Example: A student uses ChatGPT to generate a thesis about Macbeth, then must trace how their own revisions subvert the AI’s clichés. The A grade goes not to the best essay, but to the best critical interrogation of AI’s limitations.
Key policies in practice: AI must be actively critiqued within the writing process. Assignments require students to annotate AI-generated text, debate its assumptions, and justify human intervention in revisions.
4. The Re-negotiators: “Let’s Redefine ‘Skill’ Entirely.”
Core argument: With AI generating text rapidly, our goal is to foster human strengths like creative judgment and reflective insight. Instead of treating AI as a distant tool, we engage it as a collaborative partner.
Key move: Students compare multiple AI outputs to craft a final narrative that blends machine efficiency with human creativity.
What they fear: That uncritical reliance on AI will dilute original thought and erode deep evaluative skills.
Why it matters: This method bets that managing information well (comparing outputs, selecting among them, synthesizing a final product) is itself a path to generating genuine knowledge.
Key policies in practice: AI-generated drafts can serve as starting points, with students required to analyze and refine each version.
5. The Synergists: “AI as Oxygen.”
Core argument: The ship has sailed. To ban AI is to teach typewriter repair in the age of Photoshop. Let students integrate it into every stage, provided they learn to cite, edit, and disclose. The new literacy: fluency in orchestrating human-machine creativity.
What they embrace: AI as collaborator, amplifier, and relentless editor.
What they dismiss: The romantic myth of the solo human genius.
Key policies in practice: AI literacy becomes foundational. Students are trained to navigate AI as a creative partner—knowing when to lean on it, when to challenge it, and when to reject it outright. Transparency in AI use is mandatory, not optional.
Why This Hurts (and Why It Matters)
This spectrum is not about technology. It’s about power—who controls the boundaries of thought.
The Fortress and Synergists share a secret: Both are uncertain about the future. The former fears obsolescence; the latter, irrelevance.
The Mediators and Re-negotiators ask the uncomfortable question: If AI can mimic critical thinking, do we double down on humanity—or redefine it?
The Gatekeepers seek to preserve core cognitive skills, even as all cognition is already augmented (by language, culture, the very act of schooling).
The stakes: This is not a binary debate of “tech good vs. tech bad.” Different contexts will merit different responses. No single stance will serve all learners equally, and the most meaningful approach may be a blend, adapted to the needs of individual students and disciplines.
A Shared Challenge
These five perspectives, while distinct in their boundaries and practices, represent varied responses to the same fundamental question: how do we prepare students for intellectual agency in an AI-infused world? Each stance—from the Fortress's emphasis on cognitive struggle to the Synergist's embrace of human-machine collaboration—offers unique insights into what we value in the educational process. The diversity of these approaches reflects the complexity of education itself, which has never been reducible to a single methodology or philosophy.
As institutions navigate AI integration, many will likely adopt elements from across this spectrum, applying different stances to different contexts, disciplines, and developmental stages. The most valuable outcome may not be consensus but the ongoing dialogue these varied approaches generate—a conversation that continually recalibrates our understanding of learning as technology evolves. By acknowledging this spectrum of responses rather than demanding uniformity, we create space for the kind of thoughtful experimentation that has always driven educational innovation.
Nick Potkalitsky, Ph.D.
Check out some of our favorite Substacks:
Mike Kentz’s AI EduPathways: Insights from one of the most creative, eloquent, and incisive AI educators in the business!!!
Terry Underwood’s Learning to Read, Reading to Learn: The most penetrating investigation of the intersections of compositional theory, literacy studies, and AI on the internet!!!
Suzi’s When Life Gives You AI: A cutting-edge exploration of the intersection of computer science, neuroscience, and philosophy
Alejandro Piad Morffis’s Mostly Harmless Ideas: Unmatched investigations into coding, machine learning, computational theory, and practical AI applications
Michael Woudenberg’s Polymathic Being: Polymathic wisdom brought to you every Sunday morning with your first cup of coffee
Rob Nelson’s AI Log: Incredibly deep and insightful essays about AI’s impact on higher ed, society, and culture.
Michael Spencer’s AI Supremacy: The most comprehensive and current analysis of AI news and trends, featuring numerous intriguing guest posts
Daniel Bashir’s The Gradient Podcast: The top interviews with leading AI experts, researchers, developers, and linguists.
Daniel Nest’s Why Try AI?: The most amazing updates on AI tools and techniques
Riccardo Vocca’s The Intelligent Friend: An intriguing examination of the diverse ways AI is transforming our lives and the world around us.
Jason Gulya’s The AI Edventure: An important exploration of cutting-edge innovations in AI-responsive curriculum and pedagogy.
Your opening reminded me of the cognitive trap known as the "Prison of Two Ideas", often seen in politics. Just as political positions rarely boil down to absolute advocacy or total rejection, the conversation around AI is far more nuanced than simply embracing or abandoning it.
I consider myself an advocate for AI and its positive potential, yet I also recognize the risks and the need for appropriate safeguards—both technical and legislative—to prevent misuse. When it comes to education, however, I find myself more cautious, particularly in its application with younger students. As a father of three (with two still in middle school), I see AI as yet another digital challenge alongside social media, news sources, and other online influences. The key, I believe, is not avoidance but gradual, age-appropriate exposure, much like many other complex tools we introduce to children as they develop the necessary critical thinking skills.
A recent conversation with a colleague highlighted this tension. He argued that AI is just like a calculator—something kids will inevitably use. I pushed back, suggesting that this analogy oversimplifies the issue. Most people think of a calculator in terms of basic arithmetic—addition, subtraction, multiplication—because they already understand the underlying math. However, when a calculator produces answers for more advanced concepts, users may accept the results without truly understanding them.
To illustrate this, I asked my colleague if he knew what cosine and Pi were. He confidently recited Pi as 3.14159, but when pressed on its meaning, he struggled to explain what it represents: the ratio of a circle's circumference to its diameter. This, I argued, is the trap of the calculator analogy: while tools can be powerful, they can also enable intellectual laziness. AI, like a calculator, can either limit cognitive growth (by simply providing answers) or unlock curiosity (when used as a learning aid).
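For concreteness, both definitions he was reaching for are simple ratios:

$$\pi = \frac{C}{d} \approx 3.14159, \qquad \cos\theta = \frac{\text{adjacent}}{\text{hypotenuse}}$$

A circle with diameter 2 has a circumference of about 6.28318; divide the two and 3.14159 falls out. The recited digits are a property of every circle, not a standalone fact, and that is precisely the understanding a calculator lets you skip.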
The real challenge in education is ensuring that AI fosters understanding rather than mere reliance. If a middle schooler asks, "What is Pi?" or "What does cosine help me with in a triangle?", AI can be a catalyst for deeper learning. But if they simply use it to generate answers without questioning, it risks becoming a crutch rather than a tool for growth.
The goal, then, is not to block AI from the classroom but to integrate it responsibly, ensuring that students develop both the critical thinking skills and intellectual curiosity necessary to engage with it meaningfully.
This is one of the most important parts of this debate. You mentioned preparing "students for intellectual agency in an AI-infused world." The problem with removing AI from education altogether is that graduates won't know how to work alongside AI in an AI-infused world, which is quite literally the opposite of what educators set out to accomplish. To properly prepare young people, educators must remove the gatekeeper approach from the spectrum altogether.