Looking for the Next World: Possible Risks of Cognitive Offloading in an AI Education Landscape
Guest Post by Terry Underwood
Nick’s Introduction
Two weeks ago, I shared a piece by Terry Underwood that shook something loose in me—something that had grown calcified after months of tired AI debates and binary arguments about what schools should become. That essay reminded me not just what it means to think critically, but what it feels like when someone writes critically from a place of lived urgency, hard-won knowledge, and intellectual grace.
Today’s follow-up is, in a word, stunning.
In “Looking for the Next World”, Terry goes further—not only diagnosing the lazy brain syndrome that may come with habitual AI use, but offering a theory of how deep the consequences could be if educators and institutions continue to treat this moment as either a threat to resist or a trend to accommodate. Instead, he shows us what it looks like to actually engage with the problem—conceptually, socioculturally, pedagogically, and emotionally.
If the last piece moved me with its clarity and its critique of long-standing cultural models like Bloom’s Taxonomy, this one stayed with me for entirely different reasons. It’s a rigorous, emotionally attuned exploration of what AI offloading might really do—not just to students’ ability to write or remember things, but to their sense of self as thinkers. It’s about agency. Ownership. The inner life of learners. What happens to metacognition, epistemic identity, and intellectual autonomy when the line between collaboration and substitution is blurred beyond recognition.
Terry is careful not to demonize AI or glorify traditional instruction. He navigates the tensions between executive teaching and agentive learning with nuance. But most of all, he challenges us to move past platitudes and create a human-centered response. His articulation of risks—especially the erosion of epistemic agency, the standardization of thought, and the loss of authentic intellectual voice—feels both measured and urgent. The term “offloading” doesn’t sound so benign anymore after this piece. It sounds like something that must be named, studied, and resisted where it counts most.
As I’ve continued reflecting on the work I want to do around generative thinking and possibility literacy, this essay reframed the stakes. This isn’t just about whether students cheat, or whether AI is helpful or harmful. It’s about what kind of learners we are shaping. What kind of futures we’re preparing them for. What kind of world we believe is possible.
If you care about students—not as data points, not as test-takers, but as human beings trying to find their way through a confusing new reality—then you need to read this. Here again, I advise readers to stick with this piece until the end. Another magnificent finish!!!
And as always, if you're not already following Terry’s Learning to Read, Reading to Learn, you're missing one of the most vital voices in education today.
Nick Potkalitsky, Ph.D.
Looking for the Next World: Possible Risks of Cognitive Offloading in an AI Education Landscape
“Retentiveness is a factor that can be used to explain how well and for how long learned information is stored [in] and retrieved [from] the memory” (Onowugbeda et al., 2024).
Onowugbeda et al. (2024) used the passive voice to obscure the subject/agent of the sentence and to underline the word retentiveness. On first read, one might wonder why such an assertion can’t just be assumed, thinking that these researchers simply defined a well-understood word, dressing it up in abstractions and scholarly robes. Instead, as one reads further, these researchers gain momentum and speak in no uncertain terms, assuming nothing. Pay attention, they seem to say to an audience of African thought leaders, pay attention to an existential crisis: “Since 2015, the Africa Union has been engaged in efforts to raise awareness among individuals throughout the continent regarding the significance of addressing the fundamental factors contributing to Africa’s comparatively deficient progress in science and technology advancement relative to other global regions” (p.50). One fundamental factor: Poor retention of biological concepts.
It’s my guess that the experimental study they conducted, published in 2024, was undertaken before the unveiling of ChatGPT in November 2022. Nonetheless, the issues they raise are highly relevant to current debates about AI, in which educators predict a similar decline in conceptual memory in American schools. Their discussion includes no mention of AI, though I’m going to be on the lookout for AI initiatives in Africa with the kind of rigorous empirical study we need in America. The control group in the Onowugbeda study, the business-as-usual group, was taught by “lecture, reading texts, and rote memory.” The experimental group was treated to an approach called “the culturo-techno-contextual approach,” designed to anchor biological concepts in indigenous culture and knowledge. The null hypothesis, which was rejected in a mixed-methods analysis, reads as follows: “There is no statistically significant difference in the knowledge retention scores of students taught variation and evolution using the culturo-techno-contextual approach and those taught with the conventional lecture method” (p.52). In other words, the difference was real: the contextual approach produced better retention.
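(For readers who want to see what “rejecting the null hypothesis” cashes out to, here is a minimal sketch in Python. The retention scores below are invented purely for illustration; they are not Onowugbeda et al.’s data, and the actual study used a mixed-methods design rather than this bare two-group comparison.)

```python
# Minimal illustration of a null-hypothesis test on retention scores.
# All numbers are invented for illustration; they are NOT data from
# Onowugbeda et al. (2024), whose study used a mixed-methods design.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=42)

# Hypothetical retention scores (0-100) for two groups of 40 students.
conventional = rng.normal(loc=55, scale=12, size=40)  # lecture / rote-memory group
ctca = rng.normal(loc=66, scale=12, size=40)          # culturo-techno-contextual group

# Independent-samples t-test: is the gap between mean retention scores
# larger than chance variation alone would plausibly produce?
t_stat, p_value = stats.ttest_ind(ctca, conventional)

print(f"Mean retention (conventional): {conventional.mean():.1f}")
print(f"Mean retention (CTCA):         {ctca.mean():.1f}")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

# A p-value below 0.05 is the conventional threshold for rejecting the null
# hypothesis of "no difference" -- the statistical sense in which the
# contextual approach "was better" at producing retention.
```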
By now everyone probably has heard of the term cognitive offloading, i.e., turning over the labor of thinking to a machine. It’s appropriate to offload to the bot mental work that is a) impossible for humans or b) harmful to humans. Not so for offloading in the context of teaching and learning. By now, the word has circulated about the alleged risks students face when they paste their assignments into a bot and get an instant essay or solution. As near as I can tell from searching my university library, we don’t yet have much of an empirical base to draw firm conclusions, but because the media is filled with conjectures, I’ll provide you with a collection of the more nuanced risks of offloading as I construe them before citing a recent study with a theoretical framework strikingly similar to the “culturo-techno-contextual” approach being offered in Africa. Keep in mind this collection is nothing more than the residue of what I’ve read and heard in arguments in favor of “banning the bot” from schools.
It’s becoming increasingly clear that fears of AI’s destructive potential are well within reason and deserve to be researched with surgical precision. At the same time, educators must guard against demonizing the lecture approach to teaching, what I call the executive teacher method (ETM), while assuming the more progressive agentive student method (ASM) is the only appropriate alternative. This binary is false. Taking sides on this issue is counterproductive when the focus is squarely on the risks and rewards of language machines in the classroom.
The Risks of Cognitive Offloading
Basic Offloading 101. Flat out offloading of learning opportunities is perhaps the most obvious risk. Here’s the assignment; do it for me, bot. True, students miss learning opportunities for lots of reasons, but those who habitually offload and, worse, grow skilled at deception, likely learn to bypass the cognitive work necessary for germane cognition, the mental activity that links new knowledge durably to prior knowledge and integrates new knowledge into the domain knowledge that comes over time from wrestling with challenging, discipline-specific material.
Making the grade by offloading instead of learning. It’s so easy! Sounds horrible, doesn’t it? Long-term memory domains like biology can’t be mastered by recalling a concept like, say, photosynthesis for the test, but rather by really working it: talking about it, visualizing it, seeing it as a metaphor, marveling at the lives of plants, and thereby locking it in by integrating it into a concept network spanning cellular respiration, the carbon cycle, energy transfer, agriculture, and human respiration. When teaching aims at rote memory and testing involves recall or recognition of words and phrases, cognitive offloading seems impossible to avoid with or without the bot. The whole point is to recall. In rote executive teaching situations, even the teacher might be seen as a mimic, generating words and phrases and “saying things” that others have said a million times before, similar in effect to what the teacher down the hall is saying. Clearly this sort of offloading is made easier by ETM pedagogy.
Effortful offloading is perhaps the most depressing kind, especially in the executive teacher mode. This student wants to play the rote game fairly, wants to write the five-paragraph essay, wants to learn what they are supposed to learn. It's more that the student can't do the work and knows it, having been told and told again by test scores and grades. The bot can be a great stocking stuffer when the stocking is in your head waiting to get filled with magic words that may raise your self-esteem and your grade. Such students have gotten the idea from somewhere that learning means being able to say the right thing at the right time and get credit for it. Cellular respiration? Oh, yeah. That's when “cells convert glucose and oxygen into carbon dioxide, water, and energy.”
Ask these students to explain how cellular respiration relates to photosynthesis or why it matters to living organisms, and they stumble. They've memorized language without understanding concepts—ironically, tragically, they’ve mimicked the bot with the best of intentions. Bots don’t understand words, either—they predict them. When faced with assignments requiring deeper understanding, some students turn to AI not to cheat but to survive—to produce something that looks like knowledge. The tragedy is that this approach creates fake learning: As course content builds on foundational concepts these students never truly grasp, they become increasingly dependent on technological crutches while falling further behind in actual understanding. Their relationship with learning becomes transactional rather than transformative, focused on outputs rather than intellectual growth.
Academic Dishonesty and Intellectual Provenance. Offloading work to AI without attribution corrodes the foundation of learning by severing the connection between intellectual labor and intellectual growth. Too often, students view this behavior as a victimless crime. Who does it hurt? No one more than themselves, but they don’t understand that. When students pass off AI-generated content as their own, they violate academic integrity policies, for one thing, potentially damaging their own future academic chances. What’s far worse, they undermine the ecosystem of knowledge development that depends on honest attribution.
Intellectual provenance matters for at least three crucial reasons. First, it maintains accountability in the knowledge-building process—attributing ideas to their sources creates a verifiable chain that allows others to evaluate claims based on their origins and the evidence supporting them. Second, proper attribution acknowledges the collaborative nature of knowledge creation, recognizing that ideas emerge from networks of thinkers and fostering respect for the intellectual contributions of others. Third, tracking intellectual lineage enables metacognitive awareness of one's own thinking development. Understanding where our ideas come from helps us recognize how our thinking evolves through engagement with others' perspectives. When students bypass this process through concealed AI use, they don't just commit academic fraud; they forfeit the opportunity to locate and reveal themselves within knowledge communities and develop the metacognitive awareness essential for lifelong learning.
Dependency Development: The Erosion of Intellectual Autonomy. This type of offloading deserves careful, sustained research. I’ve read other writers discuss it as an established fact; it sounds plausible and is worth considering. Students may become reliant on AI tools, undermining their confidence in their own abilities and creating anxiety when they must work independently. They may develop a mindset that values product over process, a dangerous mindset in a system that already overvalues products. They may also fail to understand the real value of AI as a thought partner, a tool for exploration and brainstorming, and they may lose opportunities to monitor their own learning progress and identify knowledge gaps when filling those gaps can increase intrinsic motivation to learn voluntarily. They may shrink from collaborative learning activities, not confident that they can contribute without AI.
This dependency syndrome could theoretically develop through several mechanisms. One that comes to mind is what psychologists call "learned helplessness"—the perceived inability to complete tasks without external assistance. As AI tools produce polished outputs in the blink of an eye, students could begin to doubt their capacity to match this standard through their own cognitive labor. The comparison between their messy, effortful drafts that took hours to write and AI's immediate, coherent productions creates a perceived competence gap that widens with each interaction.
The psychological impact extends beyond academic settings. These students might develop an "outsourcing reflex"—the instinctive urge to delegate cognitive tasks rather than engage with them directly, much like we instinctively reach for a calculator when faced with complex calculations. When faced with challenges requiring deep thinking, their first impulse might become seeking technological assistance rather than activating internal resources. They don’t understand the difference between a calculator and a bot. This reflex toward the bot fundamentally alters their relationship with uncertainty and intellectual struggle in ways that a calculator never did.
Perhaps most damaging is how AI dependency might reshape students' relationship with knowledge creation. They might begin to view knowledge as something external to be retrieved rather than internally constructed. This transformation erodes what some have called "epistemic agency"—the sense of oneself as capable of generating, evaluating, and revising knowledge claims. Students with diminished epistemic agency approach learning passively, expecting information to be delivered rather than actively constructing understanding.
The social dimensions of this dependency are equally concerning. In group work or discussion settings, AI-dependent students might experience something I’ve read about called "impostor anxiety"—the fear that their spontaneous contributions will be recognized as inferior to those of peers who can produce interesting ideas independently. This anxiety can further reinforce their reliance on AI tools while limiting their willingness to engage in the vulnerable exchanges that characterize productive collaborative learning. The resulting participation pattern—either withdrawal or overreliance on AI-generated contributions—deprives them of social learning experiences critical for developing communication skills and intellectual confidence.
Most insidiously, AI dependency could undermine metacognitive development—the ability to monitor, evaluate, and regulate one's own thinking processes, the ability that must be heightened to learn to take up a command posture when using AI. When students consistently outsource difficult cognitive tasks, they miss crucial opportunities to experience and reflect on their own thinking strategies, strengths, and limitations. Without this metacognitive awareness, they may not effectively identify knowledge gaps or misconceptions, leaving them unable to direct their learning efforts strategically or experience the motivating satisfaction of recognizing their own intellectual growth.
This cycle of dependency ultimately could transform students' orientation toward learning from a growth mindset focused on developing capabilities to a performance mindset fixated on producing acceptable outputs with minimal cognitive investment—a transformation that diminishes both their academic potential and their capacity for lifelong learning in a rapidly evolving knowledge landscape.
Inability to Transfer Knowledge: The Erosion of Reliable Domain Knowledge. Bot output is inherently untrustworthy. It is also notoriously malleable and responsive to small, seemingly insignificant prompt details. Content produced by language machines, even after using a defensible prompt, is likely based on partial or even fake assertions, making it difficult to apply concepts learned from bot output to new contexts or to build upon them in future learning. Using the bot as a reliable source of knowledge can corrupt an emerging domain knowledge base.
This reliability crisis undermines knowledge transfer in multiple ways. When students build their understanding on AI-generated content, they construct mental models on shifting sands. These models appear solid but often collapse under expert scrutiny when applied to novel problems or integrated with new information. The issue lies in how human learning differs from machine pattern recognition. Human cognition thrives on discerning causal relationships from experiential knowledge, from rock climbing on conceptual hierarchies, from flashes of intuition and big-picture glimpses—precisely what large language models approximate but can’t humanly do.
Consider a student who uses AI to "learn" about climate systems. The AI might produce a good explanation of the atmospheric carbon cycle by describing it as the movement of carbon through the atmosphere, biosphere, hydrosphere, and lithosphere, but the explanation lacks the epistemological foundations that would allow the student to reason through new climate phenomena. When later faced with questions about unexpected climate patterns or contradictory data, the student lacks the conceptual infrastructure to adapt their knowledge appropriately. For example, the student may have missed understanding how to connect molecular-level processes to global-scale phenomena. Their understanding remains brittle and fragmented rather than flexible and useful.
More insidiously, AI-generated knowledge often contains plausible-sounding fabrications—the notorious "hallucinations" that present falsehoods as facts. I’ve been stunned by hallucinations before, not so much because they are weird, but because they are not. Students who incorporate these falsehoods into their knowledge bases contaminate their understanding in ways difficult to later identify and correct. Each new piece of legitimate knowledge they encounter must interact with this corrupted foundation, creating cognitive dissonance or, worse, being rejected because it doesn't align with the flawed mental model already established.
This corruption can create cascading failures of understanding. In fields with strict hierarchical knowledge structures like mathematics, physics, or computer science, foundational misconceptions from AI sources can render higher-level concepts incomprehensible. In humanities disciplines, subtle misrepresentations of historical events or theoretical frameworks can lead to flawed interpretations that persist through subsequent analysis. Perhaps most concerning is how this process short-circuits the development of epistemological vigilance—the critical ability to evaluate knowledge sources and assess truth claims. Students who habitually treat bot outputs as authoritative knowledge can develop intellectual habits that prioritize convenience over verification, undermining the metacognitive skills essential for advanced learning and professional judgment. It’s critically important to acknowledge that the output from a bot is just the beginning, not the end, of the user’s job.
Standardized Thinking: The Homogenization of Intellectual Expression. AI generates conventional responses based on training data, potentially limiting creativity, unique perspectives, and innovative thinking. Its so-called writing is studded with cliches. This standardization effect operates through several mechanisms, each with implications for intellectual development. Large language models distill patterns from billions of texts, inherently favoring the most common expressions, arguments, and conceptual frameworks while marginalizing outlier perspectives. When students routinely consume and submit AI-generated content, they unconsciously absorb these statistical tendencies as implicit norms for "good" thinking.
The consequences extend beyond mere stylistic homogeneity. Consider how AI systems consistently produce five-paragraph essays with predictable thesis-support-conclusion structures when prompted for academic writing. I’m not the first to note that students who internalize these patterns may come to view this narrow format as the definitive model of structured thinking rather than one possible approach among many. Their intellectual development becomes constrained within these artificial boundaries, and they struggle to recognize when alternative structures might better serve their communicative purposes.
More subtly, AI-generated content tends toward what might be called "intellectual centrism"—presenting balanced, moderate views that avoid controversial claims or radical perspectives. A natural consequence of training on diverse texts and optimizing for broad acceptability, it systematically excludes the kinds of provocative, boundary-pushing thinking that often drives intellectual breakthroughs. Students who rely on AI writing assistance may never develop the cognitive courage to stake out controversial positions or challenge established paradigms.
The problem compounds in collaborative contexts. When multiple students in a discussion have used AI to generate ideas on the same assignment using similar prompts, the resulting conversation might lack the friction of truly diverse perspectives. Instead of productive cognitive conflict that might generate novel insights, these interactions reinforce the statistical mean of thought represented in the training data.
This standardization can undermine the development of authentic voice and style. Except for the repetitive, boiler-plate text used in wide-scale communication among knowledge workers, corporations, and institutions, where text production can be safely offloaded, writing is not merely about communicating information; it's about developing a distinctive way of seeing and expressing ideas that reflects one's unique intellectual journey. AI-generated content, regardless of quality, cannot authentically represent a student's evolving perspective. Students who habitually rely on AI-generated text miss the pleasure of the difficulty of finding their own intellectual voice—a process that requires experimentation, failure, and the gradual discovery of one's distinctive patterns of thought.
Standardization could ultimately create a nightmare feedback loop that may reshape the landscape of human knowledge production. This is another widely circulated risk that makes good sense to me: as more AI-influenced writing enters the digital corpus, future AI systems will train on increasingly homogenized data, further narrowing the range of expression. The resulting intellectual ecosystem risks losing the cognitive diversity essential for addressing complex, multifaceted problems that require varied perspectives and novel approaches—more reason for teaching students AI Theory and Practice.
The following provides a compressed list of the most serious risks as I see them. There are undoubtedly many more than I could think of, and I invite you to add them in the comments.
Loss of Germane Cognition: Bots simply can’t mimic germane cognition, and they can’t do it for you. When students habitually offload assignments to AI, they bypass the essential mental work that links new knowledge to existing knowledge networks, the goal of germane thinking, preventing the deep integration necessary for expertise development.
Dependency Development: Students may develop learned helplessness, an "outsourcing reflex," diminished epistemic agency, and impaired metacognitive abilities when they rely excessively or unwittingly on AI, undermining their intellectual autonomy.
Knowledge Transfer Failures: AI-generated content creates brittle understanding built on potentially unreliable information, leading to cascading failures in learning, especially in hierarchical knowledge domains.
Standardized Thinking: Regular consumption of AI-generated content homogenizes student thinking and expression, undermining the development of authentic voice and style and the discovery of creative, boundary-pushing perspectives.
Academic Dishonesty Issues: Beyond policy violations, offloading without attribution severs the connection between intellectual labor and growth, undermining the ecosystem of knowledge development and metacognitive awareness.
Designing Human-Centered Instruction
To be brutally honest with you, from what I see happening in the resistant, rebellious, frustrated outpouring of grief from professional educators working in the trenches, I’m worried. I’m worried because they are right. I’m worried because all that they fear can actually happen—if teachers do nothing about it.
I’m worried that these risks are being ignored or worse, blamed exclusively on the existence of AI, a scapegoat that has only highlighted the sentiment embedded in the Common Core State Standards for public schools: “Nobody gives a shit what [students] think or what they feel.” “I never asked for AI,” or “We never asked for AI,” or “I’m already a fabulous teacher, and I resent this insult,” or “I’ll never give in, I refuse to let AI destroy the Humanities”—these responses may feel good for the moment, but young people are stuck with the consequences of postponing the inevitable unless the adults in the room stand up and figure this out. Potential solutions are manageable without tearing down what we know.
Emphasize Experiential Learning: Create assignments requiring personal observation, reflection on lived experiences, and application of concepts to real-world situations that AI cannot replicate.
Develop Process-Oriented Assessment: Evaluate students based on their thinking process through reflections and in-person discussions that demonstrate intellectual development rather than just final products.
Teach Critical AI Use: Help students understand when AI collaboration is beneficial versus when it undermines learning, including identifying AI hallucinations and evaluating source reliability.
Design for Metacognitive Development: Incorporate regular structured reflection on learning strategies, dilemmas, strange feelings about how things are going, knowledge gaps, and intellectual growth to counteract the loss of metacognitive opportunities.
Create Authentic Collaborative Activities: Design group work that leverages spontaneous, diverse human perspectives that would be difficult to simulate with AI assistance.
By approaching AI as one tool in a broader learning ecosystem rather than as a substitute for human cognition, which it is not, educators can help students develop both the skills to use these technologies appropriately and the intellectual foundations needed for genuine human learning and development. A starting point for me, based on my personal expertise and what I know about bots, is reform in writing instruction and revisiting a strong push for writing across the curriculum.
The Final Word
In our rush to embrace or resist generative AI's capacity to predict the next word, we risk surrendering something distinctly human—our ability to imagine beyond immediate linguistic patterns toward purposeful expression. As Beck and Levine (2024) reminded us, "They [bots] cannot see forward, except to the next word. But seeing beyond the next word [emphasis added] is a key criterion for human intelligence." Even if AI improves to the level of an agentic machine, i.e., one capable of forming an intention, because AI is a machine, its intention will never match human intention.
When students write intentionally, they don't simply string together statistically probable words. They embark on intellectual journeys with destinations in mind, navigating through uncertainties, reconsidering paths, visiting Florence, interviewing the Chief of Police, and sometimes discovering entirely new territories of thought. They begin sentences without knowing precisely how they will end yet maintain a sense of direction guided by purpose rather than probability.
Consider the difference: An AI might confidently produce "The atmospheric carbon cycle involves the exchange of carbon between..." continuing along predictable pathways of explanation. A human writer, however, might start the same sentence but suddenly pivot toward unexpected revelation: "The atmospheric carbon cycle involves the exchange of carbon between—wait, I just realized this is like the circulatory system in our bodies, pumping life's essential elements through planetary veins."
This capacity to transcend the gravitational pull of the next word—to leap beyond statistical prediction toward purposeful meaning—represents the essence of human authorship. Where AI can only look backward to history's linguistic patterns, human writers can look forward to communication goals not yet realized. Where AI reaches for the next word, humans reach for the last word—the final expression that completes not just a sentence, but an intellectual journey.
As educators, our challenge lies not in rejecting AI assistance but in designing learning environments that cultivate this uniquely human capacity for goal-directed expression. We must help students understand when to invite AI as a collaborative partner in thinking and when to assert their human prerogative to see beyond the next word—to imagine, to question, to contradict statistical probability in service of deeper truth—never to surrender their thoughts and intentions to a language machine. The most profound human expressions have not come from predicting what words typically follow others. They've come from imagining how words might create worlds that don't yet exist, toward ends we're still discovering. This is where human learners, human thinkers, and human writers must stake their claim—not in competition with AI, but in total commitment to using our existential powers in celebration of our natural, biological, evolutionary capacity to write not just toward the next word, but toward the next world.
Terry Underwood is a distinguished educator, assessment expert, and Professor Emeritus at Sacramento State University with over three decades of experience in portfolio-based and authentic assessment systems. A pioneer in the field, Terry served on the California authentic assessment design team (1991-1994) and the New Standards Project (1991-1996), writing portfolio handbooks used across 19 states.
Terry's doctoral dissertation (1996) on portfolio systems earned the prestigious NCTE Promising Researcher Award. Their expertise led to publications including two influential books on portfolio frameworks (1998, 2000) and consulting work for Iowa's teacher feedback system.
Terry later contributed to the Performance Assessment for California Teachers (PACT) design team and implemented portfolio systems at CSU Sacramento. Most recently, Terry served as principal assessment consultant for the Western Interstate Academic Passport project (2013-2015) and contributed to the VALUE rubric on college reading (2009).
Check out some of our favorite Substacks:
Mike Kentz’s AI EduPathways: Insights from one of our most insightful, creative, and eloquent AI educators in the business!!!
Terry Underwood’s Learning to Read, Reading to Learn: The most penetrating investigation of the intersections between compositional theory, literacy studies, and AI on the internet!!!
Suzi’s When Life Gives You AI: A cutting-edge exploration of the intersection of computer science, neuroscience, and philosophy
Alejandro Piad Morffis’s The Computerist Journal: Unmatched investigations into coding, machine learning, computational theory, and practical AI applications
Michael Woudenberg’s Polymathic Being: Polymathic wisdom brought to you every Sunday morning with your first cup of coffee
Rob Nelson’s AI Log: Incredibly deep and insightful essays about AI’s impact on higher ed, society, and culture.
Michael Spencer’s AI Supremacy: The most comprehensive and current analysis of AI news and trends, featuring numerous intriguing guest posts
Daniel Bashir’s The Gradient Podcast: The top interviews with leading AI experts, researchers, developers, and linguists.
Daniel Nest’s Why Try AI?: The most amazing updates on AI tools and techniques
Riccardo Vocca’s The Intelligent Friend: An intriguing examination of the diverse ways AI is transforming our lives and the world around us.
Jason Gulya’s The AI Edventure: An important exploration of cutting edge innovations in AI-responsive curriculum and pedagogy.