The Knowledge Asymmetry Problem: Why Student Expertise Matters in AI Collaboration
Two students are writing history essays about World War I. Both turn to AI for help. Both type “What caused World War I?”
Student A has spent weeks studying the period. She understands the alliance system, militarism, and imperialism. When AI responds with the standard framework, she recognizes it as such. She asks follow-up questions: “How did the Balkans fit into the alliance system?” She evaluates whether AI’s explanation of the Schlieffen Plan matches what she’s learned about German military strategy. When AI oversimplifies, she notices.
Student B is starting from scratch. When AI gives him the same response, he copies it into his notes. He doesn’t know if it’s comprehensive or simplified. He doesn’t know what questions to ask next. He accepts AI’s framing as authoritative because he has no basis for questioning it.
Both students “used AI.” But they had completely different experiences. Student A used AI as a collaborative tool, something to think with, to test ideas against. Student B used AI as an oracle, a source of answers he couldn’t evaluate.
This is knowledge asymmetry in action.
Asymmetry Creates Dependency
What makes AI different from traditional reference materials is that it’s conversational and generative. It doesn’t just give you information; it talks with you, responds to your queries, adapts its explanations. This responsiveness creates an illusion of collaboration even when real collaboration isn’t happening.
Student B feels like he’s having a productive conversation. The AI is answering his questions! But what’s actually happening is that AI is determining what questions are worth asking, what information is relevant, how the topic should be framed, what counts as a good answer. Student B isn’t directing this interaction; he’s following it. The asymmetry isn’t being reduced; it’s being reinforced.
Student A’s experience is fundamentally different. She brings enough disciplinary knowledge that she can formulate specific questions based on gaps in her understanding, recognize when AI’s explanation doesn’t match other sources, notice when important context is missing, and take what’s useful while rejecting what isn’t. She’s not dependent on AI’s judgment because she has her own disciplinary judgment.
What Knowledge Actually Matters
The knowledge that reduces asymmetry is disciplinary knowledge: not just facts, but the ways of knowing, thinking, and evaluating specific to a domain.
In history: understanding how historians evaluate evidence and construct arguments, recognizing that interpretation involves debate, knowing how to contextualize events, being familiar with key frameworks.
In mathematics: understanding what counts as a valid proof, recognizing different problem-solving strategies and when they’re appropriate, seeing the logical structure of arguments, having fluency with core concepts.
In literary analysis: understanding how texts create meaning through language and form, being familiar with critical lenses, recognizing textual evidence and how it supports claims.
This disciplinary knowledge isn’t about knowing everything; it’s about knowing enough to think within the domain. Enough to recognize what good work looks like. Enough to evaluate whether explanations make sense. Enough to formulate sophisticated questions rather than surface-level ones.
This is why generic “AI literacy” training doesn’t solve the asymmetry problem. Teaching students prompt engineering or general AI capabilities doesn’t give them what they need to evaluate AI outputs in history, math, or literature. The evaluation requires disciplinary expertise.
The Leveling Illusion
There’s a seductive narrative that AI might “level the playing field” by giving all students access to expert knowledge. If every student has a personal AI tutor, the thinking goes, differences in prior knowledge become less consequential.
But knowledge asymmetry reveals why this is misleading. AI doesn’t level the playing field; it relocates the advantage.
Students with strong disciplinary grounding get more out of AI. They formulate better queries, evaluate outputs more critically, identify errors more readily, use AI outputs as starting points for deeper investigation rather than endpoints. AI amplifies their existing capability.
Students without that grounding become dependent on AI’s framing and judgment. They may complete assignments and produce outputs that look acceptable, but they’re not developing the disciplinary thinking that would let them move beyond AI assistance. AI becomes a crutch preventing them from building capability.
Rather than reducing inequality, AI can widen the gap between students with strong foundations and those without them. The students who already have knowledge get a powerful tool for extending it. The students who lack knowledge get a sophisticated answer-generator that makes it easier to avoid the hard work of building understanding.
Implications for Implementation
Understanding knowledge asymmetry changes how we should think about AI integration.
AI literacy must be disciplinary-specific. We can’t teach AI skills in isolation and expect transfer across domains. The knowledge that lets you critically assess AI’s mathematical proof is different from what lets you assess its literary interpretation.
AI introduction should come after disciplinary grounding begins. Students need enough expertise to think critically within a domain before AI can serve as a productive tool rather than a crutch. This doesn’t mean “mastery,” but enough foundation to maintain critical distance from AI outputs.
Teacher expertise becomes essential, not obsolete. Teachers with deep disciplinary knowledge can help students develop the frameworks and concepts that reduce asymmetry. The notion that AI might replace teacher expertise gets the relationship backwards: teacher expertise enables students to use AI well.
We need to measure different outcomes. If students complete assignments with AI help but don’t develop disciplinary thinking, the implementation isn’t working—even if completion rates look good. We must assess whether students are building knowledge and capabilities for independent work.
How Asymmetry Reshapes the Interaction
Here’s what makes AI’s conversational nature particularly problematic when asymmetry is high: Student B doesn’t just get wrong information; he gets confident, coherent wrong information that he has no way to evaluate.
When AI explains the Schlieffen Plan, it does so fluently. It provides context, uses proper terminology, connects to broader patterns. To Student B, this sounds authoritative. The explanation has all the markers of expertise: it’s detailed, structured, confident. He has no way to know that AI might be oversimplifying German strategic thinking, or missing crucial historiographical debates about whether the plan was actually the determining factor in how the war unfolded.
Student A, with her disciplinary grounding, can recognize these gaps. She knows enough about the historiography to notice when AI is presenting one interpretation as settled fact. She can ask: “What do historians who argue the war was not inevitable say about the Schlieffen Plan?” She’s testing AI’s output against her developing framework of how historical arguments work.
The asymmetry doesn’t just affect what students learn; it affects how they learn to think. Student B is learning that knowledge comes from asking questions and accepting explanations that sound good. Student A is learning that knowledge comes from evaluating evidence, comparing interpretations, and maintaining critical distance from any single source.
Over time, these different learning patterns compound. Student A develops intellectual habits of verification, questioning, synthesis. Student B develops intellectual habits of outsourcing judgment to whatever source seems most confident.
What’s Really at Stake
At its core, the knowledge asymmetry problem is about intellectual agency.
When students lack disciplinary grounding, they can’t maintain agency in their AI interactions. They’re not directing the inquiry; they’re being directed by it. They’re not thinking with AI; they’re letting AI think for them. The asymmetry isn’t just about who knows more; it’s about who gets to decide what questions matter, what answers are good, what understanding looks like.
Students who bring disciplinary knowledge and critical judgment to AI interactions will find it genuinely transformative. It will extend their capability, help them investigate questions they couldn’t tackle alone, challenge and refine their thinking.
Students who can’t bring disciplinary knowledge will find in AI something else: a convenient way to get through school without developing the expertise they’re supposed to be building.
The difference isn’t in the AI. It’s in the knowledge the student brings to the interaction. And that means addressing knowledge asymmetry is ultimately the work of teaching: helping students develop disciplinary foundations that make them capable partners in human-AI collaboration, rather than dependent consumers of machine-generated outputs.
That’s not a retreat from AI integration. It’s the prerequisite for making it work.
Nick Potkalitsky, Ph.D.
Check out some of our favorite Substacks:
Mike Kentz’s AI EduPathways: Insights from one of the most creative and eloquent AI educators in the business!
Terry Underwood’s Learning to Read, Reading to Learn: The most penetrating investigation of the intersections of compositional theory, literacy studies, and AI on the internet!
Suzi’s When Life Gives You AI: A cutting-edge exploration of the intersection of computer science, neuroscience, and philosophy.
Alejandro Piad Morffis’s The Computerist Journal: Unmatched investigations into coding, machine learning, computational theory, and practical AI applications.
Michael Woudenberg’s Polymathic Being: Polymathic wisdom brought to you every Sunday morning with your first cup of coffee.
Rob Nelson’s AI Log: Incredibly deep and insightful essays about AI’s impact on higher ed, society, and culture.
Michael Spencer’s AI Supremacy: The most comprehensive and current analysis of AI news and trends, featuring numerous intriguing guest posts.
Daniel Bashir’s The Gradient Podcast: The top interviews with leading AI experts, researchers, developers, and linguists.
Daniel Nest’s Why Try AI?: The most amazing updates on AI tools and techniques.
Jason Gulya’s The AI Edventure: An important exploration of cutting-edge innovations in AI-responsive curriculum and pedagogy.