What Is AI Doing to My Brain?
The Effects of an Extended AI Work Cycle: 1. Difficulty Focusing, 2. Adrenaline Boost, 3. Externalization of Writing Process
Greetings, Educating AI Readers,
Thank you for opening this newsletter, sharing it with your friends and colleagues, and engaging with it through restacks and likes. These subtle network effects help attract more paid subscribers, allowing me to dedicate the 5-10 hours each week needed to write and network in support of our ever-growing community of readers and collaborators.
I'll be taking a vacation next week, so the newsletter will pause for one week. I hope everyone enjoys the final weeks of summer before the return to school. In our next edition, Educating AI will celebrate its first year of publication! It's an exciting milestone, and I'm planning something special to recognize the many newsletters now recommending this one.
One final note: I'm seeking a teacher-reader who has worked with Khanmigo over the past year, both building classroom materials and supporting student work cycles, to write a 1000-word article with images about integration and implementation. If you're interested, please send me a note and a writing sample.
Be well,
Nick Potkalitsky, Ph.D.
Long Work Cycles and Mind-Machine Analogies
Some time ago, I started to entertain an untested theory that the conscious processes of our brains gradually come to imitate the tools we rely on for organizing, processing, and analyzing our thoughts and other kinds of information. I base this theory on what little I know of mirror neurons, though in the situation I'm describing, they'd be operating as a kind of limit case.
Mirror neurons were discovered in the 1990s to much fanfare. These fascinating brain cells fire both when an animal performs an action and when it observes the same action performed by another animal. First identified in macaque monkeys, they have since been found in humans and other primates. Mirror neurons are thought to play a crucial role in learning, empathy, and understanding the actions and intentions of others.
“The user needs to spend a significant amount of time immersed in a machine process. It's not enough to be mechanistically engaged; the user must either connect with the processes as a form of creativity or be challenged to use the system in a creative way.”
And yet, in the limit case I'm describing, the mirroring occurs between a human being and a tool or machine that lacks consciousness as traditionally conceived (self-awareness, subjective experience, intentionality, reportability, etc.). I can recall my first experience of this machine-mirroring when I learned to work with spreadsheets back in my high school general-ed computer class (BTW, we desperately need to bring similar classes back to schools!).
Anyone who works with spreadsheets for extended periods can tell you that you gradually start to visualize and organize information in your mind using the categorization frameworks built into these applications. And this is precisely what happened to me when I got immersed in the large data-organization spreadsheet project that concluded our gen-ed computer class. For me, this process of mental imitation didn't happen instantaneously; it took time, creativity, purposefulness, and a degree of choice or cooperation on the part of the user.
The user needs to spend a significant amount of time immersed in a machine process. It's not enough to be mechanistically engaged; the user must either connect with the processes as a form of creativity or be challenged to use the system in a creative way. In my experience with spreadsheets, the novel process of translating mathematics into symbolic operations unfolding across spatial relationships was infused with enough creativity to keep me engaged throughout the entire process.
Moreover, the process must be goal-oriented in some sense; the user needs to feel like they're accomplishing something step by step, building to a climax. Finally, choice is perhaps the most crucial element: users should have alternative methods available for the operation or task, selecting this particular machine for the specific advantages it offers.
In my case, the spreadsheet offered a unique way to organize and visualize data that I couldn't achieve with pen and paper or other tools at my disposal. This conscious choice, combined with the creative engagement and step-by-step progress, made the experience ripe for this kind of cognitive mirroring.
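To make this concrete, here's a minimal sketch, purely my own illustration rather than anything from that old class project, of the kind of translation I'm describing. An algebraic relationship like y = mx + b stops being a single equation and becomes a small geography of named cells, some holding values and some holding formulas that point at their neighbors:

```python
# A toy "spreadsheet": cells are either plain values or formulas
# (functions of the cell grid). Illustrative sketch only, not an
# excerpt from the original class project.

cells = {
    "A1": 2.0,   # m (slope)
    "B1": 5.0,   # b (intercept)
    "A2": 3.0,   # x (input)
    # y = m*x + b, expressed spatially as references to other cells
    "B2": lambda grid: grid["A1"] * grid["A2"] + grid["B1"],
}

def value(grid, ref):
    """Resolve a cell: plain values pass through, formulas get evaluated.
    (A real spreadsheet would also resolve formula-to-formula references.)"""
    v = grid[ref]
    return v(grid) if callable(v) else v

print(value(cells, "B2"))  # 11.0: the equation, recast as cell geometry
```

Once that grid takes hold in your head, problems start presenting themselves as layouts of inputs and dependent cells rather than as isolated equations, which is exactly the shift I describe below.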
Here, if we push this argument further, we might discover that the attribution of consciousness to machines is the result of an inverse mirroring between machines and the conscious processes of the brains that work with them. The first-order basis that establishes analogical continuity between conscious processes and machines opens up a channel for a reverse flow of properties from those conscious processes back to the machine.
As a result, what began as a limit case strangely transitions into a reverse mirroring that results in anthropomorphization. Suddenly, it becomes possible, for some people even very likely, that with enough training, data, processing power, and memory, a machine might not only become conscious but have empathy for its user. When a machine pushes through the linguistic milestone of generating cohesive text at amazing speed, this likelihood becomes a certainty for a surprisingly large proportion of the human population.
Such is the power of analogical reasoning, particularly as it accelerates through quick reversal. But here we can't forget that between each step of this now two-part process (initial mirroring, reverse mirroring) lies an analogical leap. And what lies in that leap? Hypotheses in search of evidence. This is not to discount the leap as groundless, but rather to situate it as both speculative and empirical.
Time, Purposiveness, Creativity, and Choice
That said, I don't think the experience of analogizing is without its real effects in the world. In other words, to the extent that we invest in the analogy between our conscious processes and the experience of using AI, for instance, we will undergo experiential and phenomenological changes in our perceptions, linguistic expressions, inclinations, and actions accordingly.
“An analogy is not a passive comparison; it actively reshapes our cognitive landscape and, as a consequence, the real world.”
To return to my example above, when I as a high school student started to build up an analogy between my own thinking and spreadsheet algorithms, I experienced a gradual but definite shift in my own consciousness. The world started to look a little more spreadsheet-y. When I returned to math class, I began to reconfigure my algebra problems into spreadsheet shapes in my mind. When watching my favorite baseball team, I started to think of innovative ways to use my newfound organizational tool to manipulate the myriad statistics that undergirded the nation's pastime.
Some might argue that I'm reaching here, but that's the exact function of analogies: to reach, bend, shape, and transfigure. An analogy is not a passive comparison; it actively reshapes our cognitive landscape and, as a consequence, the real world.
This past week, I spent an inordinate amount of time using AI, and I think I finally reached that point of immersion, purposiveness, creativity, and cooperation where I subconsciously—against my most reasoned arguments—locked into an analogy between my conscious mental processes and my preferred AI tools: ChatGPT and Claude.
Initially, I was going to title this article more philosophically, in the spirit of Hannah Arendt: “Where am I when I am using AI?” My readers who are Arendt devotees will hear in this prospective title an echo of her famous question: “Where are we when we think?” But I wished to register the quality of the experience of using AI, so I instead chose the title: “What is AI doing to my brain?”
Here was the situation: My family was out of town for 6 days, and I set myself the ambitious goal of building two online training modules (20 pages of text each) and rewriting a handbook for a senior capstone, complete with prompts and learning outcomes for 10-12 accompanying assignments. Luckily, my AI writing process involves taking 70-80% completed copy and using AI to refine the finish and add framing or situating elements, and I had already done that initial 70-80% of the work.
What this meant for me was that my 6 days would involve nearly continuous use of AI to finish and polish text for final publication, alternating with long close-reading sessions to check the coherence, clarity, etc. of the final copy. It was a deep dive into AI-assisted writing, a marathon of human-AI interaction.
Effect 1: Lack of Focus
The most concerning effect of this extended collaborative cycle, and perhaps of the implicit analogizing, is a feeling of continuous motion attended by a difficulty focusing closely on particular things. When you're in an AI work cycle like the one I've described, you're constantly moving chunks of text through an AI tool for processing, checking the quality of the processed text, situating it in the original document, finding the next chunk of text, repeat x 1000. The process is interrupted only when you run out of prompts on your free and/or paid accounts. You also have to grapple with different bots' tendencies: ChatGPT has a better memory than Claude, so you waste fewer prompts training up the AI on style, format, etc.
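For readers who want the shape of that loop made explicit, here is a minimal sketch in Python. The functions polish_with_ai and passes_close_reading are hypothetical stand-ins for the actual chatbot call and the human close-reading step; the point is the repetitive structure, not any particular API:

```python
# A minimal sketch of the extended AI work cycle described above.
# polish_with_ai() and passes_close_reading() are hypothetical stand-ins,
# not real library calls.

def polish_with_ai(chunk: str, style_notes: str) -> str:
    """Stand-in for sending a chunk of draft text to a chatbot for refinement."""
    # In practice this would call ChatGPT, Claude, etc., with the chunk
    # plus style/format instructions; here it just echoes the chunk back.
    return chunk

def passes_close_reading(original: str, polished: str) -> bool:
    """Stand-in for the human step: checking coherence, clarity, and voice."""
    return bool(polished.strip())  # placeholder; the real check is manual

def work_cycle(draft_chunks, style_notes):
    final = []
    for chunk in draft_chunks:                         # find the next chunk...
        polished = polish_with_ai(chunk, style_notes)  # ...run it through the AI...
        if passes_close_reading(chunk, polished):      # ...check the output...
            final.append(polished)                     # ...situate it in the doc...
        else:
            final.append(chunk)                        # ...or keep your own copy.
    return final                                       # repeat x 1000

print(work_cycle(["Intro paragraph.", "Module overview."], "plain, direct voice"))
```

What the sketch can't show is that nearly all of the cognitive load lives in that check step; the loop makes the work feel mechanical even though the judgment inside it is anything but.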
In this work, you're constantly shifting roles: big picture editor, cut-and-paste copy-editor, on-the-spot crack writer, prompt engineer, etc. The process as a result is quite grueling. When you step away from it, you're weary, and if you try to write original text—a very focused process—your brain just says, “Nope!!! Not gonna happen!!!”
Effect 2: Adrenaline Boost
The second concerning effect of this extended collaborative cycle is a function of its grueling nature: the cycle becomes adrenaline-infused to the extent that you want to get it done as quickly as possible. I think the adrenaline is also an implicit result of the inverse mirroring, to the extent that the human organism attempts to imitate the efficiency of the machine and can only do so by releasing hormones associated with stress mechanisms.
In the midst of an adrenaline boost, I could prompt for a long time. Computer science students around the world can attest to the incredibly long stretches they can sometimes spend tangling with a class assignment. It's partly a function of flow, but also very much a function of biochemistry. My college roommate, a comp-sci major, could sometimes sit for 13-14 hours and work on an assignment with very little interruption.
Needless to say, such work cycles are not good for me or for our prospective students who already grapple with numerous addictive technologies. This AI-induced adrenaline rush creates a dangerous feedback loop: the more efficiently we work with AI, the more we push ourselves to match its tireless pace, further feeding our stress response.
Effect 3: Externalization of Writing Process
The third and perhaps most crucial effect of this extended collaborative work cycle is subtle, yet vitally important for educators, administrators, and researchers to consider. It concerns our experience of the self as a unified entity: a concept born from imaginative and analogical thinking, but one that holds profound social and personal value.
For most users, AI is not a unified entity but a series of interconnected processes. One such process involves text generation: a user can input text, externalize it, and with the press of a button, have it fundamentally altered through a separate process. This stands in stark contrast to the conscious experience of a human writer. For us, improving text isn't a matter of simply sending it off for processing. Instead, the text must become a heightened object of consciousness, subjected to explicit cognitive and graphical routines to improve along various metrics.
The power of human consciousness lies in its unity. However, the nagging analogy between mental and machine processing (here through inverse mirroring) undermines the visibility, accessibility, and internality of this unified conscious experience. This is the curious phenomenon that occurs when tools take on the function of language expression. I believe that, unlike any previous technology, our current AIs have the potential to fundamentally alter our relationship with language and, by extension, our sense of self as unified, conscious beings.
This shift challenges our traditional understanding of writing as an internal, cognitive activity. As we externalize and mechanize parts of this process, we must grapple with profound questions: What happens to our sense of authorship? How does this affect our understanding of creativity? Most critically, how might this impact our students' development of critical thinking skills?
These questions are not merely academic; they strike at the heart of how we conceive of ourselves as thinking, creating beings. As we navigate this new landscape of AI-assisted cognition, we must remain vigilant to its effects on our fundamental sense of self and our capacity for deep, unified thought.
A Way Forward: The New South Wales AI Initiative
Admittedly, these effects are context-dependent. I had created a situation where I was attempting to accomplish far too much in too short a time, using AI in a largely mechanical rather than creative or intellectually stimulating manner. Most of the work was formulaic "plug and chug." What saved me from sinking into the Slough of Despond were the long stretches I spent closely reading AI-polished text, then reworking it to align with my own voice and infusing it with my personal perspective. BTW, I am focusing on the cultivation and maintenance of voice through interactions with AI as a key component of my AI-writing curriculum with upper-level high school students.
The path forward lies in restructuring AI work cycles to prioritize learning and engagement. In their recent guest posts, Terry Underwood and Rob Nelson both called for this reorientation in their own ways. Underwood argues that we've put the cart before the horse during this past year of AI integration. Convinced of AI's potential to aid learning, we've introduced it to students expecting immediate results. After initial failures with global AI integration, particularly in K-12 settings, we're now attempting piecemeal AI application drop-ins with mixed outcomes. Nelson pointedly states, "the short version is that chatbots just are not that good at helping most students learn, or most teachers teach." This reasoning raises even more fundamental questions for K-12 education: What precisely do we mean by "student learning" and "teacher efficacy"?
Until each institution, school, administrator, teacher, and student has answered these questions through dialogue, classroom testing, and ongoing revision, much of our work with AI tools in schools will unfold as baseless experiments in a high-stakes environment. We must remember that our students' futures depend on getting this integration and implementation right.
One incredibly thoughtful experiment I want to highlight is the work of the New South Wales (NSW) Department of Education. In my work in the AI x Education space, I want to commend educators and administrators in Australia and New Zealand as some of the most thoughtful and progressive developers of safe, effective, and pedagogically-grounded AI methods, tools, and practices in the world.
Daniel Bashir, host of The Gradient podcast, recently interviewed Dan Hart, Head of AI, and Michelle Michael, Director of Educational Support and Rural Initiatives at the NSW Department of Education. Unlike many of their American counterparts, NSW has successfully developed and integrated a system-wide AI-powered chatbot, NSWEduChat, for students and teachers.
Their design project began by grounding itself in a theory of learning and teacher efficacy, emphasizing the primacy of teachers in all classroom concerns. The roll-out is deliberate, always erring on the side of caution, safety, and ethics. Through this process, Dan Hart and his team have built up an impressive dataset to guide decisions about next steps. The process is thoughtful, data-driven, and grounded in theories of knowledge, learning, and teaching practice. It serves as an exemplar for us all to follow in the coming years.
What Is AI Doing to My Brain? Divergent Pathways
The NSW Department of Education's thoughtful approach to AI integration serves as a beacon for educators worldwide. Their emphasis on grounding AI tools in sound pedagogical theory, prioritizing teacher involvement, and proceeding with caution offers a stark contrast to the intensive, often mechanical use of AI that I experienced during my marathon writing session.
This contrast brings us back to our initial question: What is AI doing to my brain?
My experience suggests that prolonged, intensive AI use can have several notable effects:
1. It reshapes cognitive patterns, creating a sense of continuous motion and difficulty in sustained focus.
2. It triggers stress responses as we subconsciously try to match the AI's tireless efficiency, leading to adrenaline-fueled work sessions.
3. Most profoundly, it challenges our sense of self as a unified, conscious entity by externalizing and mechanizing parts of our language processing.
These effects, while concerning, are not inevitable. They stem from a particular context of intensive, mechanical use of AI tools. The NSW example shows us that there's an alternative: a measured, thoughtful approach that integrates AI while preserving our uniquely human capacities for deep thought, creativity, and unified consciousness.
Moving forward, we must strive for this balanced approach, using AI more mindfully and prioritizing tasks that engage our full cognitive abilities. By doing so, we can harness AI's potential while safeguarding our cognitive well-being and sense of self.
Ultimately, AI is doing to our brains what we allow it to do. As we continue to explore and refine AI's role in education and beyond, let's remember that the goal is not to mimic machines, but to use them in ways that amplify our uniquely human qualities. In doing so, we can ensure that AI becomes a true partner in our intellectual growth, rather than a force that reshapes our minds in potentially problematic ways.
Check out some of my favorite Substacks:
Terry Underwood’s Learning to Read, Reading to Learn: The most penetrating investigation of the intersections between compositional theory, literacy studies, and AI on the internet!!!
Suzi’s When Life Gives You AI: A cutting-edge exploration of the intersection of computer science, neuroscience, and philosophy
Alejandro Piad Morffis’s Mostly Harmless Ideas: Unmatched investigations into coding, machine learning, computational theory, and practical AI applications
Michael Woudenberg’s Polymathic Being: Polymathic wisdom brought to you every Sunday morning with your first cup of coffee
Rob Nelson’s AI Log: Incredibly deep and insightful essays about AI’s impact on higher ed, society, and culture.
Michael Spencer’s AI Supremacy: The most comprehensive and current analysis of AI news and trends, featuring numerous intriguing guest posts
Daniel Bashir’s The Gradient Podcast: The top interviews with leading AI experts, researchers, developers, and linguists.
Daniel Nest’s Why Try AI?: The most amazing updates on AI tools and techniques
Riccardo Vocca’s The Intelligent Friend: An intriguing examination of the diverse ways AI is transforming our lives and the world around us.
Jason Gulya’s The AI Edventure: An important exploration of cutting edge innovations in AI-responsive curriculum and pedagogy.
Interesting, insightful and important points. Although I have long resonated with the view that both cognition and language are "extended" and ecosystemic rather than internal states or possessions, I think you are right that "our current AIs have the potential to fundamentally alter our relationship with language and, by extension, our sense of self". Not quite sure what that would look like, but it is not likely to be positive.
As much as I often slip into a self-consoling view that genAI are language affordances (perhaps an occupational hazard as a foreign language teacher), they do seem to be much more, especially when they become integral to the writing experience (brainstorming, planning, writing, and revising).
I just finished reading Jurgen Gravestein's Substack piece on whether AI makes us less creative, which echoes your questions of "How does [genAI use] affect our understanding of creativity?" and, perhaps the most pressing concern in education and L&D, "how might this impact our students' development of critical thinking skills?"
GenAI users can easily outsource skill and creativity to genAI. In so doing, they shift the traditional human role from creator to evaluator, but this assumes that they have the skill, knowledge, and expertise to evaluate.
So, how do we train our learners to not only use genAI but also become competent in skills and knowledge to allow them to critically evaluate genAI output? This is a tightrope balancing act at precarious heights.
Nick, I have been thinking along these lines, without the immersion in an AI, as I read Shannon Vallor, The AI Mirror: How to Reclaim Our Humanity in an Age of Machine Thinking (OUP, 2024). If you have not read it yet, I strongly encourage you to do so. For me, both what she writes and your recent experience highlight the need for teaching students to both reflect deeply on their experiences and to learn techniques to help them avoid thinking like machines.
A couple of non-AI books also come to mind. One is Matthew B. Crawford, The World Beyond Your Head: On Becoming an Individual in an Age of Distraction (Farrar, Straus and Giroux, 2015). I particularly recommend the chapter on the pipe organ builders.
More controversially, you might want to look at the work of the neuropsychologist and philosopher Iain McGilchrist. His book, The Master and His Emissary: The Divided Brain and the Making of the Western World (Yale, 2009), gives some good historical context, though the specifically neurological portions are controversial.
I am old enough that we had no computers in my high school, and I only bought one in my mid-twenties. This was pre-WWW, but I had read Ted Nelson a few years before and became fascinated with his ideas about hypertext. I think computers changed my thought processes even before I had used them extensively. It was experimenting with reading books on a computer or mobile device that finally led me to the sort of reflection you did with the spreadsheets. For me, that was in my late forties.
I guess you are never too old for this sort of thing, but how old does a child need to be to engage in this sort of reflection?
Having read your piece this morning I see I need to go revise the post I have been struggling with over the weekend some more.