Why “Think Critically About AI” Isn’t Enough
Student AI Literacy Must Be Grounded in Disciplinary Ways of Knowing
I’m launching a research cohort on disciplinary AI, bringing together practitioners from across academia to examine AI’s impact on our diverse disciplines. Send me a DM with your contact info to join our discussions, starting with our inaugural meeting in early December.
Consider this statement: “Human behavior is primarily shaped by genetics.”
Ask an AI chatbot about this claim and you’ll get a confident, seemingly comprehensive answer. But here’s what’s invisible in that response: the question itself means fundamentally different things depending on which disciplinary lens you’re using. A scientist, a historian, and a philosopher would each approach this claim in radically different ways, not because they disagree about the facts, but because they’re engaged in different kinds of intellectual work.
If we want students to be literate about AI, they need to understand these differences. Not as abstract theory, but as practical tools for evaluating what AI can and cannot do in their own learning. Let’s trace how three disciplines would approach this claim, and what that reveals about what AI literacy needs to look like in each context.
The Scientist: Evidence, Measurement, and Falsifiability
A scientist encountering this claim immediately asks: What kind of evidence would support or refute this?
The scientific method demands operationalization. What does “primarily shaped by” mean in measurable terms? How do we isolate genetic influence from environmental factors? A scientist would turn to specific research designs: twin studies comparing identical twins raised apart, adoption studies tracking genetic vs. environmental contributions, genome-wide association studies linking specific genetic variants to behavioral traits.
The work involves quantification. Heritability estimates, effect sizes, confidence intervals, statistical significance. A scientist asks: How much variance can we attribute to genetic factors? What’s the margin of error? Can these findings be replicated across different populations?
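To see what that quantification actually involves, here is a minimal sketch of Falconer's classic twin-study approximation, which estimates heritability as twice the difference between identical-twin and fraternal-twin correlations. The correlation values are invented purely for illustration, not drawn from any study:

```python
# Falconer's approximation: heritability (h^2) is roughly twice the
# difference between identical-twin (MZ) and fraternal-twin (DZ)
# correlations on a trait.
# NOTE: the correlations below are hypothetical, for illustration only.

r_mz = 0.70  # hypothetical trait correlation between identical twins
r_dz = 0.45  # hypothetical trait correlation between fraternal twins

h2 = 2 * (r_mz - r_dz)  # variance attributed to genetic factors
c2 = r_mz - h2          # variance attributed to shared environment
e2 = 1 - r_mz           # unique environment plus measurement error

print(f"heritability (h^2):       {h2:.2f}")  # 0.50
print(f"shared environment (c^2): {c2:.2f}")  # 0.20
print(f"nonshared/error (e^2):    {e2:.2f}")  # 0.30
```

Even a toy calculation like this makes the scientist's questions concrete: where do those correlations come from, how large were the samples, and what confidence interval surrounds that 0.50? These are precisely the details students should press an AI's output to supply.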
Crucially, the scientist thinks in terms of falsifiability. What observations would prove this claim wrong? If we found identical twins with radically different behaviors despite shared genetics, what would that tell us? Good science doesn’t just confirm; it actively seeks disconfirmation.
What AI literacy looks like here:
Students working scientifically with AI need to evaluate methodology embedded in outputs. When AI cites studies about genetic influence on behavior, can students assess the research design? Do they recognize the difference between correlation and causation? Can they identify when AI conflates heritability (population-level variance explained by genes) with genetic determinism (individual-level inevitability)?
Science-specific AI literacy means understanding data provenance and limitations. Where did this training data come from? What populations were studied? Which variables were controlled, and which might be confounding? Students need to recognize when AI makes claims beyond what the data can support, when it treats models as reality, when it oversimplifies complex statistical relationships.
It means knowing what questions to ask: What would falsify this? What’s the effect size? Has this been replicated? What are the confidence intervals? These aren’t generic critical thinking questions; they’re the specific moves scientists make to validate knowledge.
The Historian: Context, Contingency, and Power
A historian encountering the same claim asks: When and why has this question been asked before, and what happened?
The historical method demands contextualization. This isn't an abstract philosophical question; it's a claim with a history. Historians would trace the genealogy of genetic determinism: from 19th-century social Darwinism through the eugenics movements, IQ-testing controversies, and sociobiology debates to contemporary behavioral genetics. Each iteration served specific social and political purposes.
The work involves examining primary sources and understanding silences. Who championed genetic explanations of behavior, and what did they gain? Whose voices were excluded from these debates? A historian notices what the archive reveals and what it obscures. They see that scientific claims about genetics and behavior have consistently been deployed to justify existing hierarchies of race, class, and gender.
Crucially, the historian thinks in terms of contingency and change. Human behavior hasn’t remained constant; it shifts dramatically across time and culture. Any claim about genetics as the “primary” shaper needs to account for historical variability. How do genetic accounts explain the rapid behavioral changes we see across generations? What gets lost when we privilege biological over social explanations?
What AI literacy looks like here:
Students working historically with AI need to contextualize outputs. When AI presents genetic determinism as settled science, can students recognize this as a claim with a fraught history? Do they ask whose perspectives shaped this training data? Can they identify presentism (projecting current values backward) or anachronism in AI-generated historical narratives?
History-specific AI literacy means understanding how bias persists across time. AI trained on historical texts will reproduce historical prejudices. Students need to recognize that “neutral” AI outputs often embed 19th-century racial science, mid-20th-century gender essentialism, or Cold War ideological framings. They need to ask: What perspectives are centered here? Whose experiences are erased?
It means developing sensitivity to change and contingency. When AI describes “human nature” or makes transhistorical claims about behavior, students need to ask: How has this varied across time and place? What local contexts are being flattened? What gets lost when we treat culturally specific patterns as universal?
The historian’s AI literacy is about power and narrative. Who benefits from this explanation? What political work is this claim doing? What alternative stories are suppressed?
The Philosopher: Clarity, Logic, and Assumptions
A philosopher encountering this claim asks: What exactly does this statement mean, and is it even coherent?
The philosophical method demands conceptual clarity. Before we can evaluate whether behavior is “primarily shaped by genetics,” we need to unpack every term. What counts as “behavior”? All behavior, or specific types? What does “shaped by” mean? Caused? Influenced? Constrained? And what work is “primarily” doing here? More than 50%? More than any other single factor?
The work involves examining logical structure and assumptions. Is this a meaningful claim? The philosopher might point out category confusion: genetics provides potentials and constraints, but behavior is enacted in environments. Saying genetics “shapes” behavior might be a category error, like saying blueprints “build” houses. The philosopher asks: What’s assumed in this framing? A nature/nurture binary? Fixed traits? Reducibility of complex systems to simple causes?
Crucially, the philosopher distinguishes descriptive from normative claims. Even if we could establish that genetics influences behavior, what follows? The philosopher is alert to the naturalistic fallacy (deriving “ought” from “is”). Claims about genetic influence often smuggle in assumptions about inevitability, about what can or should be changed.
What AI literacy looks like here:
Students working philosophically with AI need to identify conceptual confusion. When AI makes claims about genetics and behavior, can students spot ambiguous terms, hidden premises, logical leaps? Do they recognize when AI conflates correlation with mechanism, or treats metaphors (genes “for” traits) as literal?
Philosophy-specific AI literacy means examining assumptions. What is AI taking for granted? What binaries is it imposing (nature/nurture, biological/social)? Can students identify when AI reifies abstractions, treating concepts like “intelligence” or “aggression” as natural kinds rather than constructed categories?
It means distinguishing fact from value. When AI presents genetic explanations, students need to ask: Is this a descriptive or normative claim? What ethical implications are implied but not argued for? Where does evidence end and interpretation begin?
The philosopher’s AI literacy is about logical rigor. Does this argument follow? Are the terms defined? What would make this claim true or false? What assumptions would we have to accept to believe this?
What This Means for Student AI Literacy
Here’s the pattern: Each discipline asks different questions because each is engaged in different knowledge-making practices. Science validates through controlled observation and replication. History validates through contextual analysis of sources and change over time. Philosophy validates through logical analysis and conceptual clarification.
AI doesn’t know these differences. It aggregates patterns from its training data without understanding the epistemological work behind them. It presents scientific studies, historical examples, and philosophical arguments in an undifferentiated stream, as if they’re all the same kind of claim.
This is why generic “AI literacy” falls short. Telling students to “think critically” or “check sources” doesn’t give them the disciplinary tools they need. A scientist checking sources asks different questions than a historian checking sources. Critical thinking isn’t one thing; it’s discipline-specific ways of knowing made visible.
AI literacy, then, needs to be taught from within disciplines, by teachers who understand their field’s epistemology. The science teacher knows what methodological questions matter. The history teacher knows how to read for bias and context. The philosophy teacher knows how to untangle conceptual confusion.
Rather than importing external AI literacy frameworks, what if we asked teachers: What does your discipline reveal about AI’s limitations? When students use AI in your field, what do they need to understand about how knowledge is made here? What questions should they be asking that AI cannot ask for them?
This is slower work than rolling out standards. It requires trusting teacher expertise and creating space for disciplinary inquiry. But it may be the only way to develop AI literacy that’s durable, meaningful, and genuinely integrated into how students learn to think.
The claim about genetics and behavior isn’t just an example. It’s a test case. How students approach that claim reveals what kind of intellectual work they’ve learned to do. And that work—that disciplinary literacy—is what will serve them when they encounter AI, and everything else.
Nick Potkalitsky, Ph.D.
How does your discipline approach claims about truth and evidence? What would your students need to understand about AI in your field? This is the conversation we need to have.
Check out some of our favorite Substacks:
Mike Kentz’s AI EduPathways: Insights from one of the most insightful, creative, and eloquent AI educators in the business!
Terry Underwood’s Learning to Read, Reading to Learn: The most penetrating investigation of the intersections between compositional theory, literacy studies, and AI on the internet!
Suzi’s When Life Gives You AI: A cutting-edge exploration of the intersection of computer science, neuroscience, and philosophy
Alejandro Piad Morffis’s The Computerist Journal: Unmatched investigations into coding, machine learning, computational theory, and practical AI applications
Michael Woudenberg’s Polymathic Being: Polymathic wisdom brought to you every Sunday morning with your first cup of coffee
Rob Nelson’s AI Log: Incredibly deep and insightful essays about AI’s impact on higher ed, society, and culture
Michael Spencer’s AI Supremacy: The most comprehensive and current analysis of AI news and trends, featuring numerous intriguing guest posts
Daniel Bashir’s The Gradient Podcast: The top interviews with leading AI experts, researchers, developers, and linguists
Daniel Nest’s Why Try AI?: The most amazing updates on AI tools and techniques
Jason Gulya’s The AI Edventure: An important exploration of cutting-edge innovations in AI-responsive curriculum and pedagogy