Discussion about this post

Petar Dimov:

A sharp reminder that overreliance on AI isn’t a new cognitive failure but a familiar one in new packaging

Sueño Osis:

This is a sharp and necessary corrective. The field genuinely does not need another label for a phenomenon Tversky and Kahneman were already describing fifty years ago. Your call for intervention research over taxonomy-building is well taken.

But I'd add a twist to the Kahneman framing that I think sharpens your argument further: reasoning-focused AI models have actually *inverted* the original dual-process problem, and that inversion has design implications educators should care about.

Kahneman's whole framework rests on scarcity. System 2 (slow, deliberate, effortful reasoning) is costly, so we ration it. Cognitive bias is essentially what happens when we substitute cheap intuition for expensive deliberation. The tragedy isn't that we can't reason carefully; it's that careful reasoning has a price we're constantly trying to avoid paying.

LLM agents with extended reasoning (chain-of-thought, self-critique, multi-step verification) have collapsed that cost almost entirely. System 2 deliberation is now fast, cheap, and on demand. The bottleneck has flipped: the problem is no longer that careful reasoning is too expensive to do; it's that humans are too quick to *outsource* it to a system that performs deliberation without actually *understanding* anything.

This reframes your intervention question productively. The old pedagogical challenge was teaching students when to slow down, when to override their System 1 instincts. The new challenge is teaching them when to stay in the loop, to resist handing the deliberation over entirely just because a machine will do it instantly and confidently. That's a meaningfully different cognitive skill to cultivate, and it suggests the intervention research you're calling for needs to be built around agency and metacognition, not just verification habits. The question isn't "did you check the AI's answer?" but "did you ever actually think?"

And if AI agents switch our default thinking mode to System 2, will our System 1 toolset (i.e., intuition) atrophy?

