Who Is This AI Enthusiast Everyone’s Talking About?
A personal appeal to examine a phantom born from our heated discourse about AI and education
Thank you for engaging with this work. If you found it valuable, please like the post. Over the next month, I’m establishing a cohort of educators interested in developing discipline-specific AI practices in their classrooms. Participants will receive early access to DSAIL materials and frameworks, plus the chance to build this approach alongside other educators, so DM me if you’re interested.
I keep hearing about this “AI enthusiast.” In faculty meetings. In conference presentations. In the comment sections of education blogs. In the worried conversations administrators have behind closed doors.
The AI enthusiast is everywhere in our discourse about AI in education: the villain in our cautionary tales, the straw figure we prop up to justify resistance, the specter we invoke whenever we need someone to argue against.
You know the type: they want to replace teachers with chatbots. They think AI solves everything. They’ve abandoned critical thinking, given up on writing instruction, handed everything over to OpenAI. They’re accelerationists drunk on possibility, indifferent to risk, hostile to the humanistic traditions that ground education.
I’ve been working with teachers and administrators for over two years now, since ChatGPT launched and transformed our professional discourse. I’ve led professional development sessions. I’ve consulted with schools writing AI policies. I’ve sat in countless meetings where educators try to figure out what to do.
And I’ve met exactly one person who comes close to this description.
One.
And even this single case doesn’t fit. This person wasn’t drunk on techno-utopianism. They were exhausted. Overwhelmed by the sense that we have no control over products our students access constantly: outside school, inside school, on devices we provide. Yes, they trusted AI’s analytical capabilities prematurely. But that trust came from desperation, not zealotry. From the sense that if we can’t stop it, we might as well try to harness it.
So who is this AI enthusiast we keep summoning?
A rhetorical convenience. A stand-in. A figure we conjure when we need someone extreme enough that our own position looks reasonable by comparison.
The AI enthusiast is, in literary terms, a foil: a character who exists to make the protagonist look good.
What the Phantom Lets Us Avoid
When we invoke the AI enthusiast, when we spend energy arguing against a figure who barely exists, we avoid much harder questions.
We avoid designing actual pedagogy. It’s easier to debate whether AI belongs in schools than to determine the specific pedagogical sequences students need. What does it actually look like to teach students to interrogate AI output before relying on it? Not in theory, but in practice. With real constraints, real curriculum pressures, real students who have chemistry homework due tomorrow.
We avoid acknowledging student reality. Our students are already using these tools. Not because they’re enthusiasts. Because the tools exist, because they work reasonably well, because ChatGPT will generate something plausible and they’re exhausted. We can’t wish this away. But invoking the AI enthusiast lets us pretend the question is still whether AI belongs in education when our students answered that question every time they opened a chat window at midnight.
We avoid confronting our own uncertainty. I don’t have all the answers about AI in education. Neither do you. Neither does anyone. But arguing against the AI enthusiast lets us feel certain for a moment. At least we’re not that. The phantom gives us something to define ourselves against, which feels safer than admitting we’re all improvising.
The AI enthusiast has become a way to postpone the work that matters.
Two Trajectories, Not One
In my early work in 2023, when I argued for what I called “techno-pragmatism,” I was pushing back against the binary: ban it or embrace it, enthusiast or skeptic, on or off. I was trying to articulate a middle path that acknowledged both promise and risk.
I still believe in that pragmatism. But working inside schools taught me something: pragmatism isn’t just a position between extremes. It has architecture. Structure. Sequence.
There are two trajectories of AI engagement I now use to frame my work with teachers:
Critique and interrogation of AI output
Production and generation through AI engagement
In skilled practice, these trajectories flow together seamlessly. Experts interrogate as they generate, critique as they produce. The boundaries blur because the critical apparatus has become internalized, automatic.
Which is precisely why pedagogy must make them explicit and sequential.
When students move too quickly to production, when they use AI to generate essays, solve problems, create presentations without first learning to interrogate what the AI is actually doing, they develop exactly the kind of premature trust that comes from skipping foundational work. That trust is dangerous. Not because AI can’t do useful things. But because we can’t track its reasoning. Because the output looks authoritative even when it’s wrong. Because fluency and accuracy aren’t the same thing, and AI is fluent about everything.
Students need to learn to read AI output critically before they learn to rely on it. They need to understand that the AI is generating plausible-sounding text, not verified knowledge. Only then does production become something other than outsourcing thought.
I know the objection: if students are already using AI outside our classrooms, why does this sequencing matter? Won’t they just skip our careful pedagogical steps?
Yes. Some will. But that’s not an argument against building the foundation in our classrooms. It’s an argument for it. We can’t control what students do at home. But we can give them tools to recognize when they’re being sold plausibility as truth. We can teach them what interrogation looks like. That’s education’s work in every era: teaching students to see what they’re looking at.
This sequencing matters. Students who skip the foundation become dependent. Students who build the foundation first can use AI as a tool they understand, not an oracle they trust blindly.
The two trajectories aren’t opposed. They’re sequential. And getting that sequence right isn’t just good pedagogy. It’s the difference between education and outsourcing.
Pragmatism in Practice
When I look across the landscape of AI in education right now, I see that most AI-infused lessons involve students using AI uncritically. As an educational additive. As an expert assistant. The critical apparatus that should undergird interrogation is missing.
This is where my DSAIL (Disciplinary Studies in AI Literacy) framework begins. The work focuses on developing the evaluative frameworks at the heart of disciplinary thinking: the criteria and processes students need to assess not just whether something is right or wrong, but how well it approximates disciplinary reasoning.
Because AI doesn’t reason. It produces text that approximates the patterns of reasoning. And students can’t evaluate that approximation without understanding the frameworks their discipline uses to construct and validate knowledge.
In history, students develop criteria for evaluating how sources function in historical argument. When AI references a primary document, students learn to assess: Is this engaging with the document’s context, its perspectives, its relationship to other evidence? Or is it invoking the source to make a claim sound authoritative? The evaluative framework exists in the discipline’s methods, not in the tool’s output.
In mathematics, students develop understanding of what makes logical justification valid. When AI produces step-by-step solutions, students learn to assess: Where does this skip the conceptual reasoning that connects one step to another? Where does it assume what it should demonstrate? Mathematical validity provides the standard against which AI’s approximation is measured.
In literature, students develop frameworks for how textual evidence supports interpretive claims. When AI quotes a text, students learn to assess: Is this showing how evidence builds toward a reading? Or is it asserting an interpretation and gesturing at details as decoration? The discipline’s interpretive methods provide the standard for evaluation.
When students develop these evaluative frameworks (the criteria, processes, and methods that define how their discipline constructs knowledge), they can assess AI’s approximations with precision. They have standards that exist outside the tool. They can gauge where the output matches disciplinary reasoning, where it diverges, where it substitutes plausibility for rigor.
This is critique and interrogation as foundation. Students who develop these frameworks can then use AI productively, because they understand what disciplinary thinking actually requires. They can see where the tool’s approximation is useful and where it breaks down.
This is pragmatism with architecture. Not a position statement, but a method. Not a middle ground between extremes, but a deliberate sequence of engagement.
Nick Potkalitsky, Ph.D.
Check out some of our favorite Substacks:
Mike Kentz’s AI EduPathways: Insights from one of the most creative and eloquent AI educators in the business!!!
Terry Underwood’s Learning to Read, Reading to Learn: The most penetrating investigation of the intersections between compositional theory, literacy studies, and AI on the internet!!!
Suzi’s When Life Gives You AI: A cutting-edge exploration of the intersection of computer science, neuroscience, and philosophy
Alejandro Piad Morffis’s The Computerist Journal: Unmatched investigations into coding, machine learning, computational theory, and practical AI applications
Michael Woudenberg’s Polymathic Being: Polymathic wisdom brought to you every Sunday morning with your first cup of coffee
Rob Nelson’s AI Log: Incredibly deep and insightful essays about AI’s impact on higher ed, society, and culture.
Michael Spencer’s AI Supremacy: The most comprehensive and current analysis of AI news and trends, featuring numerous intriguing guest posts
Daniel Bashir’s The Gradient Podcast: The top interviews with leading AI experts, researchers, developers, and linguists.
Daniel Nest’s Why Try AI?: The most amazing updates on AI tools and techniques
Jason Gulya’s The AI Edventure: An important exploration of cutting-edge innovations in AI-responsive curriculum and pedagogy.