The Agent Problem: Why AI’s Latest “Revolution” Is K-12’s Worst Nightmare
Educating AI publishes twice weekly. If this piece was useful, share it with a colleague who’s navigating these questions. And if you want to support the work, consider becoming a paid subscriber — it’s what keeps this going.
Check out my serial releases from my latest book, Thinking with AI: A Student’s Guide to Literacy in an AI-Rich World.
There’s a piece circulating right now that I keep returning to. Ethan Mollick, Wharton professor and one of the more thoughtful observers of the AI space, recently published his latest guide to AI tools. It’s well-written, practically useful, and probably the best single overview of where the technology stands right now. It’s also, for anyone who works in K-16 education, a quiet kind of horror.
Not because Mollick is wrong. Because he’s right.
Mollick opens with a confession: the question “which AI should I use?” has gotten fundamentally harder to answer. For most of the past three years, using AI meant chatting with a chatbot, a back-and-forth conversation you could see, evaluate, and mostly control. That era, he argues, is ending. We’ve entered what the industry calls the agentic phase: AI that doesn’t just talk about your work, but does it. Autonomously. Across your files, your browser, your email, your calendar, your code. For hours at a stretch, without you watching.
He describes tools like Claude Cowork, which runs on your desktop, accesses your local files and browser, and executes multi-step tasks on your computer while you watch, or don’t; Claude Code, which can research, build, test, and launch a website in an afternoon, “with very little effort on my part,” as Mollick puts it; and OpenClaw, an open-source agent that lives on your machine, connects to whatever AI model you want, and is accessible through standard chat apps like WhatsApp or iMessage.
He mentions, almost as an aside, that OpenClaw “is also a serious security risk.” He recommends you “almost definitely shouldn’t use” it.
And then he moves on.
For those of us in education, that parenthetical is the whole story.
What Agentic AI Actually Means
Let’s be precise about what has changed, because the word “agent” gets used loosely and the stakes are real.
A chatbot is transactional. You ask, it responds. The data involved is the text you type and the text it returns. A poorly designed chatbot prompt can reveal things you didn’t mean to share (your thinking, your uncertainty, your students’ work), but the surface of exposure is relatively bounded and visible.
An AI agent is relational. It functions through ongoing access to your digital ecosystem. The more doors you open (your Google Drive, your email, your calendar, your browser history, your local files), the more capable it becomes. That’s not a bug. That’s the design. Mollick’s guide explains this clearly: the same underlying AI model behaves very differently depending on what tools and data it can access. The harness, as he calls it, is the thing. And the harness is built on your data.
This means that every expansion of AI capability in the agentic era is, simultaneously, an expansion of data exposure. These aren’t separable. You don’t get the powerful AI assistant without giving it access. You don’t open the doors without accepting what flows through them.
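One way to make that coupling concrete: agent integrations are typically wired up through OAuth grants, where each capability a tool advertises corresponds to a data scope it must be given. A minimal sketch in Python — the scope URLs are real Google OAuth scopes, but the capability pairings and the `exposure_for` helper are hypothetical illustrations, not any vendor’s actual permission manifest:

```python
# Hypothetical mapping: each capability an agent advertises implies a
# corresponding OAuth data grant. Scope URLs are real Google OAuth scopes;
# the pairing itself is an illustration, not any product's real manifest.
CAPABILITY_TO_SCOPE = {
    "draft and send email": "https://www.googleapis.com/auth/gmail.modify",
    "read your documents": "https://www.googleapis.com/auth/drive.readonly",
    "manage your schedule": "https://www.googleapis.com/auth/calendar",
}

def exposure_for(capabilities):
    """Return the data scopes a user must grant to get these capabilities.

    There is no entry in the mapping that grants capability without access:
    requesting a feature is requesting the data behind it.
    """
    return [CAPABILITY_TO_SCOPE[c] for c in capabilities]
```

The point of the sketch is structural: there is no row in that table where the left column is filled and the right column is empty. Asking an agent to “email my teachers” is, mechanically, asking it for standing access to the mailbox.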
For K-12 students, this isn’t an abstract privacy concern. It’s a COPPA issue, a FERPA issue, an issue of minors’ data flowing into commercial systems through interfaces that were not designed with child protection in mind and in many cases explicitly disclaim it.
The Policy Vacuum
Here’s where I want to be honest with you, because I think we need to stop pretending otherwise: the AI policies most schools have built over the past two years were not built for this.
They were built for chatbots. For the “did the student write this themselves” question. For acceptable use policies that draw lines around text generation and image creation. Some of the more thoughtful ones addressed citation, transparency, disclosure. A few started grappling with the deeper question of what “original student work” even means in an AI-saturated environment.
None of them were built for a 14-year-old connecting an AI agent to their school Google account and asking it to “handle my homework and email my teachers if I’m going to miss something.” None of them contemplate what happens when a student uses Claude Cowork to autonomously research and draft a history paper by reading their own files, browsing the web, and synthesizing across both, in a process that happens faster than a teacher can observe it and leaves no legible trace of how it happened.
The assessment frameworks aren’t there. The instructional models aren’t there. The data governance policies aren’t there.
And the tools are here now. Already available. Already being marketed to adults as productivity revolutions, which means students are already watching, already asking, already trying.
The Thing Worth Holding Onto
I don’t want to end here, because I think despair is as much a policy failure as denial.
There is a thread worth following through all of this, and it runs directly through data privacy.
Data privacy is not the most exciting frame for an AI conversation. It doesn’t generate the engagement that “will AI replace teachers” does, or the anxiety that cheating detection does. But it is the most durable and most defensible position available to educators right now, for a simple reason: it doesn’t require you to take a position on whether agentic AI is good or bad for learning. It only requires you to take a position on whether students’ data should be protected.
That position has legal backing. It has parental support across the political spectrum. It has the advantage of being concrete enough to actually build policy around. And critically, it scales. The same framework that protects student data from a chatbot protects it from an agent, from a harness, from whatever comes next.
What this looks like in practice:
Audit before you expand. Before any AI tool enters your school or classroom ecosystem, especially anything that connects to student accounts, files, or communications, someone needs to ask the data questions. Who holds the data? Where is it stored? Is it used for training? What happens when a student turns 18 and “consents” to terms they agreed to at 12?
Distinguish between task and access. A student using an AI to help brainstorm an essay is a pedagogical question. A student connecting an AI agent to their school email is a data governance question. These require different conversations with different stakeholders.
Teach this explicitly. Students need to understand, in concrete, grade-appropriate terms, what it means to give a system access to their digital life. Not as a scare tactic. As a literacy practice. The same way we teach them to read a privacy policy, or to think about what they post publicly. AI agent literacy is the next chapter of digital citizenship, and it’s overdue.
A Closing Thought
Mollick is right that an AI that does things is more useful than an AI that says things. He’s right that the shift from chatbot to agent is the most significant change in how people use these tools since ChatGPT launched. He’s right that learning to work with agents is worth educators’ time.
He’s just writing for a different audience than mine.
For adults in knowledge work who own their own data and choose their own tools, the agentic era is genuinely exciting. For K-16 students whose data is protected by law, whose consent is legally complicated, and whose teachers are still catching up to three-year-old chatbot norms, it’s a different situation entirely.
The tools are not waiting for the policies. They never do.
The question is whether we use this moment of transition to build something more durable than “don’t use ChatGPT to write your essays,” or whether we get swept up in the hype cycle again and wake up two years from now asking how we got here.
We’ve been here before. We know how that goes.
Nick Potkalitsky, Ph.D.
Mike Kentz’s AI EduPathways: Insights from one of our most insightful, creative, and eloquent AI educators in the business!!!
Terry Underwood’s Learning to Read, Reading to Learn: The most penetrating investigation of the intersections between compositional theory, literacy studies, and AI on the internet!!!
Suzi’s When Life Gives You AI: A cutting-edge exploration of the intersection among computer science, neuroscience, and philosophy
Alejandro Piad Morffis’s The Computerist Journal: Unmatched investigations into coding, machine learning, computational theory, and practical AI applications
Michael Woudenberg’s Polymathic Being: Polymathic wisdom brought to you every Sunday morning with your first cup of coffee
Rob Nelson’s AI Log: Incredibly deep and insightful essays about AI’s impact on higher ed, society, and culture.
Michael Spencer’s AI Supremacy: The most comprehensive and current analysis of AI news and trends, featuring numerous intriguing guest posts
Daniel Bashir’s The Gradient Podcast: The top interviews with leading AI experts, researchers, developers, and linguists.
Daniel Nest’s Why Try AI?: The most amazing updates on AI tools and techniques
Jason Gulya’s The AI Edventure: An important exploration of cutting-edge innovations in AI-responsive curriculum and pedagogy.
Stephen Fitzpatrick’s Teaching in the Age of AI: Essential reflections from a veteran high school educator on the challenges and opportunities of generative AI in the classroom!!!



Thanks for your continued insights.
After reading most AI articles, hyperbole or not, I feel a simple (obviously not obvious) recommendation should be included in each one, something to the effect of: "I have done my critical thinking for this article, but, just as we should advise everyone we can, don't blindly accept my critical analysis." Maybe even include steps like: 1. put the article, your views, your role, and your end goals into two, three, or five LLMs (AIs); 2. have them ask you clarifying questions, not only to fully understand the potential impact, but also 3. to see how, in whatever role you have, you can teach others to pass down critical thinking skills, whether through 4. a lesson or simply a conversation about this great article you just read and how you critically validated it for your purposes.
Yes, I fully recognize the irony of not using AI to write up a clearer recommendation for doing critical thinking, but maybe that was the point.
Please keep them coming!
Fantastically informative - I love the explanation of chatbot vs agent. And the aspect of data privacy should shake the attention of all teachers, students, and parents. As a post-secondary teacher, one of my biggest concerns is what happens (or doesn't) to my students' brains when the most critical thinking they do is creating the prompts that do their work. Thanks, Nick.