“Brain rot” sounds terrifying: who wants melting neurons? Last month an MIT Media Lab preprint, Your Brain on ChatGPT, lit up headlines claiming AI makes us “dumber.” But the data show something subtler: when we outsource thinking to a language model, we remember less and engage fewer executive networks, a state the authors call cognitive debt, not neurological decay.
What the Study Actually Did
Participants: 54 college-aged adults
Tasks: Four essay-writing sessions (brain-only, search engine, ChatGPT), followed by a crossover session.
Measures: EEG connectivity (alpha/beta power), recall quizzes, human & AI essay scoring.
Headline finding: The ChatGPT group showed the lowest prefrontal and parietal engagement during writing and could recall only ~17% of their own text after 24 hours, versus ~46% in brain-only controls (MIT Media Lab).
Important caveats: the paper is a non-peer-reviewed preprint, the sample size is modest, and the task was limited to short essays.
Four Myths the Media Amplified
Hype Claim vs. Reality Check
1. “AI causes permanent brain damage.”
EEG shows temporarily lower activation, not programmed cell death or structural loss. Think “sleepy cortex,” not necrosis. EEG cannot measure neuron death; that would require post-mortem tissue analysis or structural neuroimaging.
The study looked at short-term, task-based changes (not longitudinal outcomes). No evidence of permanent changes to the brain.
2. “Using ChatGPT always harms creativity.”
The study measured essay recall, not divergent-thinking scores or originality metrics. And it never claimed “always”: it didn't examine multiple types of creative output, or vary subjects, ages, and use cases.
Your brain reduces activity when it delegates (like with calculators or autocorrect). This may be efficient, not harmful.
3. “Any AI assistance is bad.”
The search-engine group sat between the extremes; nuance matters.
4. “One study is settled science.”
It’s a preprint. Replication, peer review, and longitudinal data are still pending.
Cognitive Debt ≠ Brain Rot
The MIT preprint's authors coin “cognitive debt” to describe the mental IOU we incur when we let an LLM fill in sentences for us: the brain saves energy now but pays later, when we must retrieve or integrate that knowledge. It's like cruising downhill on a bicycle: effortless until the next climb. Reframing the conversation around debt rather than rot steers us toward solutions instead of moral panic. I'm also often asked whether this is really about laziness. I doubt that can be inferred: the study tracked what the brain did, not why it did it. Reading laziness into temporary EEG patterns would be speculation, not science.
Practical Takeaways for Classrooms & Writers
Alternating-tool practice: Alternate AI drafting with solo revision rather than copy-pasting; cognitive load rebounds when students must refine raw output.
Retrieval checkpoints: Insert one-minute recall pauses after each paragraph to counteract fading memory.
Visible metacognition: Ask students to annotate how and why they accepted or rejected ChatGPT suggestions; reflection re-engages executive networks.
Prompt scaffolds over autopilot: Provide high-level outline prompts rather than “Write the whole essay,” keeping idea generation human.
Long-run research mindset: Faculty should treat AI assignments as small N-of-1 experiments and collect outcome data; we can’t wait for perfect RCTs.
Where We Still Need Answers
Longitudinal effects beyond four sessions
Diverse populations (K-2 through adult learners)
Complex tasks like problem-solving or lab reporting
Brain imaging modalities beyond EEG (e.g., fMRI for deeper networks)
Funding and open datasets are scarce—an opportunity for educator-researcher collaborations.
The Bottom Line
“Brain rot” is a meme; cognitive debt is a measurable (but manageable) cost of convenience. AI won't melt student brains, but it can lull them into mental autopilot. It's our job as educators to design productive friction (reflection, revision, and retrieval) so learners borrow brilliance from machines without bankrupting their own.
I am open to discussion: What scaffolds have you tried, and how are they working? Comment below.
Bio:
Tina Austin is a biomedical-researcher-turned-AI educator. She formerly taught “Biomedical Research Deconstruction,” guiding pre-med students through graphs, statistics, and line-by-line critiques of peer-reviewed biomedical papers, work that led to her current pastime: publicly dissecting viral AI studies on LinkedIn.
She teaches graduate courses on AI, critical thinking, and computational biology, with a focus on AI ethics, at both USC and UCLA; advises school districts on responsible AI adoption; and has helped thousands of faculty across universities evaluate generative tools and the best use cases for their curricula. Her LinkedIn community values her data-driven reality checks that cut through AI hype. She is also starting her own Substack: One Marginally Useful Thing. Check it out!
LinkedIn References
Post 1 (June 17): “MIT’s ‘Brain Rot’”
Post 2 (June 20): “Just When I Thought We Were Done”
Check out some of our favorite Substacks:
Mike Kentz’s AI EduPathways: Insights from one of our most insightful, creative, and eloquent AI educators in the business!!!
Terry Underwood’s Learning to Read, Reading to Learn: The most penetrating investigation of the intersections between compositional theory, literacy studies, and AI on the internet!!!
Suzi’s When Life Gives You AI: A cutting-edge exploration of the intersection among computer science, neuroscience, and philosophy
Alejandro Piad Morffis’s The Computerist Journal: Unmatched investigations into coding, machine learning, computational theory, and practical AI applications
Michael Woudenberg’s Polymathic Being: Polymathic wisdom brought to you every Sunday morning with your first cup of coffee
Rob Nelson’s AI Log: Incredibly deep and insightful essays about AI’s impact on higher ed, society, and culture.
Michael Spencer’s AI Supremacy: The most comprehensive and current analysis of AI news and trends, featuring numerous intriguing guest posts
Daniel Bashir’s The Gradient Podcast: The top interviews with leading AI experts, researchers, developers, and linguists.
Daniel Nest’s Why Try AI?: The most amazing updates on AI tools and techniques
Jason Gulya’s The AI Edventure: An important exploration of cutting-edge innovations in AI-responsive curriculum and pedagogy.
https://onemarginallyusefulthing.substack.com/p/starting-one-marginally-useful-thing