Neither Paper Nor Panopticon: Teaching for Integrity in the Age of AI
When calls for teacher resistance and extended transitional periods create accessibility barriers
I'm thrilled to share an exciting opportunity where I need your support.
First, a huge thank you to all my subscribers who continue to engage with this work on AI and education. Your thoughtful responses and questions push my thinking forward in invaluable ways.
I'm also incredibly honored to be working alongside leaders like Alan Hilsabeck, Michelle Ament, EdD, and Patrick Camilleri, EdD, within The G.R.A.C.E movement, and to be a proposed speaker at the SXSW EDU 2026 conference.
A quick reminder: your vote matters. VOTE HERE! (We only need a few more votes.)
I want to preface this by saying my thinking is still evolving on these issues. AI in education is a topic worthy of rigorous debate and deeper examination, and I don't claim to have all the answers. In the following piece, I present what I believe is the stronger argument—not as settled doctrine, but in an effort to spark conversation about what pathways we need to take in the upcoming school year.
These questions are too important to leave to individual educators working in isolation. They require collective wisdom, diverse perspectives, and ongoing dialogue—exactly the kind of conversation I hope to facilitate at SXSW EDU if selected.
The stakes are high, the landscape is shifting rapidly, and our students deserve thoughtful, inclusive solutions. Let's think through this together.
Nick
This school year, two responses to AI are dominating classrooms. Some schools are rolling back to marble composition books, in-class essays, and handwritten tests. Others are going digital but doubling down on control with lockdown browsers, keystroke logging, and AI detectors.
I'll admit it: I was part of the first camp. When AI tools exploded into educational consciousness, my initial advice to teachers was pragmatic damage control: shift to paper assignments during the first part of the school year to buy ourselves time to figure this out. It felt like the responsible thing to do.
But as the months have passed, I'm rethinking this strategy entirely.
When "Safe" Becomes Harmful
At first glance, paper-only and high-surveillance strategies seem like opposites: one analog, one high-tech. But they share two key traits: they're built on mistrust, and they assume every student can function within the same rigid constraints.
That second assumption is where my thinking started to shift. In my work with teachers, I've consistently encountered students whose IEPs or 504 plans include clauses requiring computer access to complete work. These weren't accommodations for convenience: they were essential tools that allowed students with dysgraphia, motor impairments, processing differences, or language-learning needs to show what they actually knew.
When we "return to paper" as our AI solution, we're not creating a level playing field. We're tilting it steeply against students who need digital tools to participate fully in learning.
Paper Isn't Neutral
The "return to paper" is often framed as a clean, fair solution to AI misuse. But not all students can simply "pick up a pen and write."
Students with dysgraphia, cerebral palsy, or other motor impairments can be slowed to a crawl without keyboards or speech-to-text. Multilingual learners lose access to translation tools, dictionaries, and visual supports that help them bridge language gaps. Students who process information more effectively when they can organize, rearrange, or color-code digital text are forced into a single mode that doesn't match how they think.
These are not fringe cases — they're part of the everyday makeup of our classrooms. Paper assignments can also disadvantage students who learn best through iterative drafting, collaboration, or multimedia work. The more we standardize around one mode, the more we narrow the ways students can show what they know.
Surveillance Isn't Neutral Either
Lockdown browsers and AI detectors may avoid the handwriting barrier, but they introduce another set of equity issues.
Students who rely on assistive tech can find those tools blocked or flagged. Students in communities with a history of disproportionate surveillance may feel alienated or unsafe under constant monitoring. Limited device performance and unstable internet connections can cause false flags or lockouts, adding stress and cutting into learning time.
When a tool's success depends on perfect conditions — stable bandwidth, uninterrupted power, high-end devices — it privileges the students who already have the most.
A District Shows Another Way
Recently, I had the opportunity to work with a large district that has taken a radically different approach. They've committed to K-12 AI access through Google Workspace for all students and faculty: not as an experiment or pilot program, but as core infrastructure.
What impressed me wasn't just the policy decision, but how they were implementing it. Rather than treating AI as a threat to contain, they were treating it as a literacy to develop. Teachers weren't trying to prevent AI use; they were teaching students when and how to use it appropriately, and crucially, when not to use it at all.
Students were learning to evaluate AI outputs, to build on them rather than simply submit them, and to recognize tasks where human thinking was irreplaceable. The focus shifted from preventing "cheating" to developing judgment.
This district seemed to understand something fundamental: instead of layering more control onto broken assessment structures, they were trying to change the relationship between students and tools entirely. In the world our students are walking into, work is digital, collaborative, and choice-driven. Preparing them for that world means giving them multiple, accessible ways to demonstrate learning, not finding new ways to catch them in the act of learning differently than we expect.
What Both Control Approaches Miss
Whether we're removing tech or over-monitoring it, we're still solving for control, not capacity. Neither paper-only assignments nor digital surveillance teaches students how to use AI wisely, or how to choose not to use it when that's the better option.
But the deeper issue is that both approaches ignore how our education system actually works. We've designed schools to squeeze evidence out of students at every turn. Students know how high-stakes this evidence is: they know their essays, projects, and participation are constantly evaluated, recorded, and used to sort them into categories that will follow them for years.
Of course they optimize for what looks like learning rather than what actually is learning. AI just makes that optimization more efficient. When we respond with more control, whether through paper mandates or surveillance software, we're treating the symptom, not the cause.
The students aren't the problem here. They're responding rationally to systems that treat every interaction as potential evidence for or against them. We're asking for authentic learning within fundamentally inauthentic assessment structures.
The Core Question
When we remove technology or lock it down, who gets left out? Who loses access to their best tools for thinking, creating, and communicating?
My initial instinct was to protect students from AI's risks by removing it entirely. But I've come to realize that the bigger risk might be doubling down on systems that were already pushing students toward performative rather than authentic learning. When we respond to AI with more control, we're amplifying the very dynamics that made AI feel threatening in the first place.
AI has made assessment design harder. But the answer isn't to roll back to a time when only certain kinds of minds and bodies could succeed, nor is it to make classrooms into surveillance zones. The answer is to move forward with integrity, with adaptability, and with a commitment to making learning accessible and relevant for every student.
Districts that are leaning into thoughtful AI integration, rather than running from it, may be showing us the way toward classrooms that are both more authentic and more equitable. That's a direction worth heading.
Nick Potkalitsky, Ph.D.
Check out some of our favorite Substacks:
Mike Kentz’s AI EduPathways: Insights from one of the most creative and eloquent AI educators in the business!
Terry Underwood’s Learning to Read, Reading to Learn: The most penetrating investigation of the intersections between compositional theory, literacy studies, and AI on the internet!
Suzi’s When Life Gives You AI: A cutting-edge exploration of the intersection among computer science, neuroscience, and philosophy
Alejandro Piad Morffis’s The Computerist Journal: Unmatched investigations into coding, machine learning, computational theory, and practical AI applications
Michael Woudenberg’s Polymathic Being: Polymathic wisdom brought to you every Sunday morning with your first cup of coffee
Rob Nelson’s AI Log: Incredibly deep and insightful essays about AI’s impact on higher ed, society, and culture.
Michael Spencer’s AI Supremacy: The most comprehensive and current analysis of AI news and trends, featuring numerous intriguing guest posts
Daniel Bashir’s The Gradient Podcast: The top interviews with leading AI experts, researchers, developers, and linguists.
Daniel Nest’s Why Try AI?: The most amazing updates on AI tools and techniques
Jason Gulya’s The AI Edventure: An important exploration of cutting-edge innovations in AI-responsive curriculum and pedagogy.

I really appreciate your addressing the issue of mistrust of students. This is not an aspect of AI integration that I run into very often. How can we base the education of children and youth on our distrust of them? Further, your focus on how these approaches shut out students with learning differences cogently ties up this piece. AI presents the opportunity for genuine multimodal learning. Why would we instead opt for monomodality?
Absolutely spot-on analysis. I teach my neurodivergent students to use AI to navigate and survive in this world.
They use it to build resumes, to extract meaning from the extra-long texts they read on the net, even to create personalized work schedules that follow their chronotypes.
I am very grateful to see that this tool is helping them gain more autonomy in a world that is fast-changing and often cruel to kids with learning disabilities.
Thank you so much for saying aloud what I am thinking right now. It's important to help educators see how valuable AI can be when used in a mindful and relevant way.