Lazy Brain Syndrome: Language Machines, Cultural Models, and the Erosion of Critical Thinking
Guest Post by Terry Underwood
Nick’s Introduction
Today, I am featuring an extraordinary piece written recently by my friend and research partner Terry Underwood, who writes prolifically at the Learning to Read, Reading to Learn Substack. If you haven't already subscribed, I strongly encourage you to do so.
It’s a rare thing for me to become genuinely emotional when reading about AI and education. Having been immersed in these dialogues and controversies for so long, I can usually anticipate where an author is heading after a few paragraphs. And if we’re being truly honest, the debate itself has become so highly scripted within some grander scheme—whether our own or someone else’s—that it’s genuinely hard to be surprised by anything being published these days. For this, against that… Future of education, end of education… Pick your absolute or binary.
But then, there is Terry Underwood. An island of sophistication and nuance in the midst of a sea of predictable currents. He brings to all his work a profound understanding of pedagogy and instruction, coupled with incredible lived experience, always advocating for student-oriented teaching in the face of the manifold systems and regimes that repeatedly claim to know better.
As my own projects continue to take shape this year—generative thinking, possibility literacy, ethical and safe access to AI—I’m continuing to search for my own through line amid the mounting complexities of AI and education. On the one hand, we are seeing emerging evidence that un-scaffolded AI use among our youngest learners tends toward efficiency-based use cases, which challenges our educational goals to help them develop foundational skills and broaden their personal and critical engagement with the world at large. On the other hand, AI is an immovable object in the lives of our students, and to pretend otherwise is foolish; thus, we as educators must find a way to teach methods of engagement that either continue the work of school as we know it or reimagine school accordingly.
Unlike my own somewhat muddled attempt to pull together these scattered elements, Terry not only pinpoints this pragmatic vision with remarkable precision and clarity, but also challenges and inspires us to undertake the necessary work of realizing it in the astounding complexity of the here-and-now. The piece moves steadfastly and with purpose. Please don’t miss its fabulous concluding moments.
Terry’s final appeal for the continued importance of writing struck a deep chord with me—especially as I face ongoing criticism that my response to AI, and my curricular approaches, somehow oppose writing itself. These kinds of polarizing accusations are how we now mark territory within this complex debate; no longer humanist vs. technologist, but humanist vs. humanist in an attempt to control the vision for the future of AI in education.
Perhaps that’s how Terry touched me emotionally. He allowed me to finally acknowledge the toll of the barriers we’re creating as we turn against each other in this supposed fight—even as he helped me recognize our common goal: assisting our amazing students through a time of profound transformation. I hope you find this piece as moving and thought-provoking as I did.
Nick Potkalitsky, Ph.D.
Lazy Brain Syndrome: Language Machines, Cultural Models, and the Erosion of Critical Thinking
Will the use of AI by students damage synapses and destroy their determination to learn independently? Synapses, the junction points through which neurons communicate, demonstrate remarkable plasticity in response to environmental stimuli and learning experiences. As Johns Hopkins researchers have shown, "different life experiences, such as exposure to new environments and learning skills, are thought to induce changes at synapses, strengthening or weakening these connections to allow learning and memory.”
The concept of “Hebbian learning” is named after Canadian psychologist Donald O. Hebb, who first introduced the idea in his 1949 book The Organization of Behavior. Synaptic plasticity is a double-edged sword: as a physiological explanation for both learning and unlearning, it follows Hebbian principles, summarized in the poetic expression “neurons that fire together, wire together.” Long-term potentiation (LTP), another Hebbian idea, is the biological mechanism underlying this learning: synaptic strength increases with repeated stimulation.
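The Hebbian principle can be sketched in a few lines of code. The update rule below (weight change proportional to the product of pre- and postsynaptic activity) is the textbook formulation of Hebb's idea, offered here purely as an illustration; it is not drawn from the Microsoft study or any specific neuroscience model discussed in this piece.

```python
# A minimal sketch of Hebbian learning: "neurons that fire together, wire together."
# The classic rule strengthens a synapse in proportion to the product of
# pre- and postsynaptic activity: delta_w = learning_rate * pre * post.

def hebbian_update(weight, pre, post, learning_rate=0.1):
    """Return the synaptic weight after one Hebbian update step."""
    return weight + learning_rate * pre * post

# Neurons that fire together: repeated co-activation (analogous to LTP)
# steadily increases the synaptic weight.
w = 0.0
for _ in range(10):
    w = hebbian_update(w, pre=1.0, post=1.0)

# Neurons that do not fire together: the weight is unchanged. Note that
# "use it or lose it" would additionally require a decay term, which the
# plain Hebbian rule omits.
w2 = hebbian_update(0.5, pre=1.0, post=0.0)
```

Run repeatedly with co-active neurons, the weight grows without bound; real models add normalization or decay, which is exactly the "weakening" side of plasticity the passage above describes.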
As you may know, Microsoft published a recent study on AI and its potential impact on critical thinking among knowledge workers. The study suggests that a risk of developing what we might call lazy brain syndrome is real. A core finding of the study seems fairly obvious. Knowledge workers who have confidence in AI have a tendency to accept its output without thinking about it. Over time, these workers might lose the capacity to think critically about low-risk, routine AI uses through lack of practice. This notion might be summarized in the not-so-poetic line “use it or lose it.” It appears to be the reason calculators were initially resisted in math classrooms.
Microsoft’s study was in the spotlight in early February, 2025, but faded quickly, which seems to be the case for much recent AI research. Given the slippery slope ending at a cliff implied by the study, I expected much more media coverage than the study got. I found the article worth the read, but not as a model of impeccable research, and not as a contribution to original knowledge. I wrote a detailed critique of the study here.
What lingers in my mind about the Microsoft study is what it reveals about Bloom’s Taxonomy rather than what it says about the slippery slope we could ski off of into the AI nightmare. I expected evidence from brain imaging at a minimum, evidence of synaptic unwiring of hard-won wiring. Instead, the study was based on self-reports from knowledge workers via a survey. It relied heavily on what many consider the archaic Bloom’s Taxonomy to design data collection and analysis.
In its day, the Taxonomy was a powerful tool of educational reform because of its face validity and its practicality. For one thing, Bloom discovered that 95% of the multiple-choice test items in use at the time were measuring low-level recall. Bloom and his colleagues created the Taxonomy in 1956 to modernize standardized tests in America’s universities following fifty years of world war. It challenged the factory model of schools to do more than produce adults who could read, write, and do arithmetic—and recall the state capitals—well before the cognitive revolution beginning in the 1960s. It spoke of higher-order thinking skills.
Robert Marzano was among the first to take up the challenge of modernizing the Taxonomy for a more complicated world where learning standards were proliferating. Marzano discussed three systems and a knowledge domain, arguing that rather than occupying the lowest level of thinking, knowledge creation is actually the object of learning and thinking supported by three systems: the Self-System, the Metacognitive System, and the Cognitive System. With a focus on a wider range of factors affecting student thinking and learning, Marzano integrated motivation (Self-System) and goal-setting (Metacognitive System) into the taxonomy and noted that learning resulted from the interaction of these systems. The greatest problem with Marzano’s work is its neglect of the social surround, the sociocultural nature of learning. At least a dozen attempts to revise the Taxonomy took place as well, all of them firmly grounded in Bloom (e.g., see here, here, and here).
The Microsoft research team admitted that the Taxonomy was selected to anchor their measurement of critical thinking not because it accurately captures the complexity of higher-order thinking, but because it does not. They wrote that they selected it because it was convenient, well-recognized, and simple. Readers would not be called on to think critically about critical thinking. Evidently, the team was more concerned with the study’s rhetorical impact on its audience than with the currency of its theoretical model of critical cognitive activity.
As a critical reader of the study, I concluded that the Taxonomy became central to their study precisely because it is a cultural model, not a theoretical model. Cultural models are simple, ubiquitous scripts that get lodged in people’s brains to help them understand often complex cultural phenomena. Marriage, for example, involved a man, a woman, a house, and a white picket fence in 1956—simple, clear. Indeed, this simple understanding has been shattered into tiny fragments, rightfully so, but its durability has created ongoing struggle and cruelty. Dominant longstanding cultural models represent mainstream social reality. Bloom’s Taxonomy is such a model.
*****
Dorothy Holland and Naomi Quinn (1987) edited a volume titled Cultural Models in Language and Thought discussing cultural models as “…storylike chains of prototypical events that unfold in simplified worlds.” At the time of conception, Bloom’s Taxonomy was more than a “storylike chain of events”; it took seventy-five years to shrink to cultural model scale. It was a link in an historical narrative. In the 19th century, the college admissions essay appeared on the scene, raising the prominence of writing as a teachable and testable subject. By 1956, standardized tests were at full boil, competing with the college essay as an entry requirement. Keep in mind that ETS was founded a decade before the Taxonomy, and the need to develop fairness and clarity about college admissions was taken seriously. ETS wanted to help foster a true meritocracy.
Bloom, who served as University Examiner and assistant director of the University of Chicago’s Board of Examinations, initiated the taxonomy project in 1949 and worked on it for several years with a practical goal in mind: to diminish the labor of multiple-choice test development by creating a system that warranted the simple exchange of like test items among universities.
This practical origin is often overlooked in discussions of the taxonomy's later widespread adoption. But reading between the lines of the original, it’s clear that the criteria are easily applied to multiple-choice questions: the indicators for the higher-order thinking skills map directly onto the qualities of good multiple-choice items.
In addition to its immediate purpose for universities, the Taxonomy offered a much-needed common language that could bridge subject matter and grade levels. It served as a touchstone for specifying the meaning of broad educational goals for the classroom and helped determine the congruence of goals, classroom activities, and assessments. This framework provided educators with a sense of the range of possible teaching objectives against which the limited breadth and depth of any particular educational curriculum could be contrasted.
The Common Core State Standards (2012) purport to provide educational standards placing critical thinking as the pinnacle accomplishment of learners. There is, however, a conceptual gap in the Common Core's approach to critical thinking. The standards incorporate activities widely recognized as components of critical thinking (analyzing, evaluating, etc.) without explicitly defining critical thinking itself. Why is this important? It creates a circular reasoning problem.
The standards imply critical thinking is important and use specific terms like analysis and evaluation. They imply that these skills are understood to be part of critical thinking. But without defining critical thinking, the connection remains implicit and assumes a shared understanding of what critical thinking entails without establishing that understanding within the standards themselves. The standards rely on educational traditions like Bloom and all of those who came later and tried to square the Taxonomy with current research, but they never use the mantra “critical thinking” except in casual references in introductory and ancillary texts.
This lack of definition could be seen as problematic for the same reason it is problematic in the Microsoft study. What, exactly, is critical thinking for knowledge workers? For middle school students? It leaves interpretation open to individual educators. It makes it difficult to ensure consistent implementation across classrooms, though there is a world of difference between implementing a standard and teaching learners, a distinction the Core elides. It complicates assessment of whether students are truly developing critical thinking skills. It forces educators to time travel back to 1956, take a haphazard look at the Taxonomy, and then scour the work of Robert Marzano and others—the revisionists of Bloom—to get even a modicum of clarity on what this prized cognitive behavior is. Critical thinking in the CCSS is construed as a narrative taking place in a simplified world, such as Microsoft wanted for its study, with a setting, characters, a plot, a conflict, a resolution, and a happy ending.
In developing his own taxonomy, Marzano highlighted several significant criticisms of Bloom’s framework. Perhaps most fundamentally, he challenged the hierarchical structure of Bloom’s Taxonomy. According to Marzano, the idea that each higher skill is composed of skills beneath it—that comprehension requires knowledge, application requires comprehension and knowledge, and so on—is simply not supported by research. Comprehension is essential to build knowledge, analysis can be important for application, and synthesis surely ought to be done with some evaluation.
Marzano (see the URL cited earlier) also questioned the assumption that complex learning activities could be classified as primarily requiring one cognitive process over others. The originators of Bloom’s six thinking processes assumed that complex projects could be labeled as requiring one process more than others—a task was primarily an “analysis” or an “evaluation” task.
This assumption has been challenged over the years, which may account for the difficulty that educators have classifying challenging learning activities using the Taxonomy. Additionally, Marzano recognized that Bloom’s Taxonomy failed to adequately address metacognition and the self-system, focusing almost exclusively on cognitive aspects. In contrast, Marzano’s framework emphasizes metacognition and the self-system, treating the cognitive system in a more practical and motivational way.
*****
Bloom's Taxonomy played a significant role in the European Bologna Process, particularly in aligning degree requirements across European universities. The Tuning Project, a key initiative within the Bologna Process, used the Taxonomy to develop reference points for subject areas and to express learning outcomes. Bloom's work offered a common language for describing educational objectives and cognitive processes across different European higher education systems. The Taxonomy helped develop comparable and compatible degree programs by providing a structured approach to defining learning outcomes and competences. In this case, because the Taxonomy was used as a discussion starter rather than a standard bearer, it was helpful in achieving situated clarity. Perhaps local schools and districts could begin their own Bologna Tuning processes.
Until then, the Taxonomy is an antiquated treasure which has become almost a caricature of itself as the decades have whittled away its theoretical importance and reduced it to a cultural model like marriage, religion, school, etc.—these simplified realities that get us through the days. Whenever I hear or read the phrase, my antennae go up, and I look for evidence of what the person is thinking about. I’ve come across others in the university who more or less agree with me that the phrase is functionally useless—a linguistic Rorschach test.
For me, critical thinking isn't some special type of cognition isolated within educational taxonomies. It's a whole-brain assault on passive acceptance of cultural models, absurd assertions, unwarranted assumptions, and faulty inferences. It's intellectual dissection, idea autopsies, life and death sense-making, vigilant ethical reasoning, disciplined skepticism, and analytical excavation.
Above all, it is participatory and social, requiring engagement with diverse perspectives and collaborative meaning-making. This richly complex process bears little resemblance to the oversimplified tiers of cognition that educational frameworks like Bloom's Taxonomy and the Common Core suggest. AI language models may indeed transform how we think and learn, but to understand that transformation, we need a far more sophisticated, empirical, and sociocultural understanding of critical thinking than our current cultural models provide—one that goes beyond formulaic tasks like writing an essay on the symbolic meaning of the green light in The Great Gatsby.
Terry Underwood is a distinguished educator, assessment expert, and Professor Emeritus at Sacramento State University with over three decades of experience in portfolio-based and authentic assessment systems. A pioneer in the field, Terry served on the California authentic assessment design team (1991-1994) and the New Standards Project (1991-1996), writing portfolio handbooks used across 19 states.
Terry's doctoral dissertation (1996) on portfolio systems earned the prestigious NCTE Promising Researcher Award. Their expertise led to publications including two influential books on portfolio frameworks (1998, 2000) and consulting work for Iowa's teacher feedback system.
Terry later contributed to the Performance Assessment for California Teachers (PACT) design team and implemented portfolio systems at CSU Sacramento. Most recently, Terry served as principal assessment consultant for the Western Interstate Academic Passport project (2013-2015) and contributed to the VALUE rubric on college reading (2009).
Check out some of our favorite Substacks:
Mike Kentz’s AI EduPathways: Insights from one of our most insightful, creative, and eloquent AI educators in the business!!!
Terry Underwood’s Learning to Read, Reading to Learn: The most penetrating investigation of the intersections between compositional theory, literacy studies, and AI on the internet!!!
Suzi’s When Life Gives You AI: A cutting-edge exploration of the intersection among computer science, neuroscience, and philosophy
Alejandro Piad Morffis’s The Computerist Journal: Unmatched investigations into coding, machine learning, computational theory, and practical AI applications
Michael Woudenberg’s Polymathic Being: Polymathic wisdom brought to you every Sunday morning with your first cup of coffee
Rob Nelson’s AI Log: Incredibly deep and insightful essays about AI’s impact on higher ed, society, and culture.
Michael Spencer’s AI Supremacy: The most comprehensive and current analysis of AI news and trends, featuring numerous intriguing guest posts
Daniel Bashir’s The Gradient Podcast: The top interviews with leading AI experts, researchers, developers, and linguists.
Daniel Nest’s Why Try AI?: The most amazing updates on AI tools and techniques
Riccardo Vocca’s The Intelligent Friend: An intriguing examination of the diverse ways AI is transforming our lives and the world around us.
Jason Gulya’s The AI Edventure: An important exploration of cutting edge innovations in AI-responsive curriculum and pedagogy.