Finding the Right Questions: Why AI Implementation Starts with Purpose, Not Tools
AI isn’t just a tool waiting for the district to adopt. It’s already reshaping how students write, research, and complete assignments.
I’m launching a research cohort on disciplinary AI, bringing together practitioners from across academia to examine AI’s impact on our diverse disciplines. Send me a DM with your contact info to join our discussions, starting with our inaugural meeting in early December.
When districts begin thinking about AI implementation, the instinct is to jump to practical questions: Which tools should we use? Should we pilot ChatGPT with students? What does our policy need to say?
These feel like the right starting points because they promise concrete action. But they skip over something essential.
The Framing Problem
In working with districts on AI implementation, I’ve noticed we keep reaching for questions that don’t quite fit:
“How do you want to use AI in the district?” sounds reasonable, but it puts the cart before the horse. It assumes we’ve already decided AI is something to adopt, and we’re just figuring out the details.
“Where can AI help us with something we’re struggling with?” is pragmatic, and there’s real value in a needs assessment. But this frames AI purely as a productivity tool, something the district controls and deploys on its own terms.
Here’s what both framings miss: AI isn’t just a tool waiting for the district to adopt. It’s already reshaping how students write, research, and complete assignments. It’s already affecting academic integrity, assessment validity, and what “doing your own work” even means.
A district can’t simply decide whether to “use” AI. Students already are. The question is: What’s our educational stance on what’s happening?
Before Tools, Before Policy: The Foundational Work
What districts need first (before piloting tools, before finalizing policy language, before professional development) is an organizing center. A clear sense of purpose that can guide all those downstream decisions.
This means asking different questions:
What kind of learning do we believe in? What are our core commitments about how students develop, what teachers do, and what educational relationships require?
How does AI affect what we value? Where might it genuinely enhance our commitments? Where does it threaten or undermine them?
What’s our stance? Only after wrestling with these questions can policy, guidelines, and tool decisions make sense.
A Protocol for Foundational Work
Here’s a process AI committees can use to do this work (not in abstract philosophical terms, but grounded in the real dilemmas educators are already facing):
Session Overview (90-120 minutes)
This isn’t about reaching consensus on every detail. It’s about surfacing what the district values, so those values can guide decisions.
Part 1: Ground in Concrete Reality (30-40 minutes)
Start with specific scenarios. Here are examples, though districts should adapt to their own context:
Scenario 1: A high school student submits a history essay. The teacher suspects AI wrote most of it. The student says, “You didn’t say I couldn’t use AI.”
Scenario 2: An elementary teacher uses AI to generate three versions of a reading passage (at grade level, above, and below) for differentiation. It takes 5 minutes instead of an hour.
Scenario 3: Middle school students use ChatGPT as a homework tutor for math, getting step-by-step explanations when they’re stuck.
Scenario 4: A teacher uses AI to write feedback comments on student essays, personalizing them with student names and specific details from their work.
For each scenario, the committee discusses:
What’s your immediate reaction?
What specifically concerns you? (if anything)
What might be valuable here? (if anything)
What does your reaction reveal about what you believe students need to be doing or learning?
That last question is key. Our reactions to AI use aren’t random. They reflect commitments we already hold about learning.
Part 2: Articulate Core Commitments (30-40 minutes)
Based on those reactions, work in small groups, then come together to complete these sentences:
About Student Learning:
“We believe students need to develop...”
“Learning happens when students...”
“Students should struggle with ___ but shouldn’t have to struggle with ___”
About Teacher Practice:
“Teachers’ most important work is...”
“Teachers should spend their time on ___ not on ___”
About Educational Relationships:
“The student-teacher relationship depends on...”
“Feedback and assessment are valuable when...”
These aren’t abstract statements. They’re the beliefs that explain why certain AI uses feel wrong and others feel promising.
Part 3: Define Threat and Opportunity (20-30 minutes)
Create a two-column chart:

AI threatens our vision when it...
Example: bypasses the struggle that builds thinking
Example: replaces the teacher-student relationship

AI could support our vision when it...
Example: removes barriers unrelated to the learning goal
Example: gives teachers time for what matters most
This framework becomes the lens for evaluating everything: tools, policies, classroom practices.
Part 4: Articulate Organizing Principles (15-20 minutes)
Synthesize the conversation into 3-5 guiding statements:
“In [District Name], our AI implementation will be guided by these commitments:
Example: Students must engage in authentic intellectual work (thinking, problem-solving, and revising), not just producing outputs.
Example: AI tools should reduce barriers to learning, not replace the learning itself.
Example: Teachers need time for meaningful relationships and responsive instruction, not administrative tasks.”
What Comes Next
These organizing principles become the foundation for everything else:
Policy review: Does our policy align with our commitments? What needs to change?
Tool evaluation: Does this student-facing AI tool support or undermine what we value?
Professional learning: How do we help teachers make decisions aligned with our principles?
Pilot design: What are we testing, and what would success look like?
Additional Resources
As you engage in this foundational work, these resources offer valuable frameworks:
AI and the Future of Undergraduate Writing from the Chronicle of Higher Education
Teaching in the Age of AI from Harvard’s Derek Bok Center for Teaching and Learning
CoSN’s AI Guidance for Schools offers practical frameworks for district leadership
The Work Takes Time (But Saves Time)
Yes, this foundational work requires dedicated time before jumping to solutions. But without it, districts end up:
Adopting tools that don’t fit their context
Writing policies borrowed from elsewhere that don’t reflect their values
Dealing with conflicts because there’s no shared understanding of why certain uses are problematic
The centripetal force of clear purpose (a shared sense of what the district stands for) makes every subsequent decision easier and more coherent.
AI implementation isn’t really about AI. It’s about what kind of learning community you’re building.
Nick Potkalitsky, Ph.D.
Check out some of our favorite Substacks:
Mike Kentz’s AI EduPathways: Insights from one of our most insightful, creative, and eloquent AI educators in the business!!!
Terry Underwood’s Learning to Read, Reading to Learn: The most penetrating investigation of the intersections between compositional theory, literacy studies, and AI on the internet!!!
Suzi’s When Life Gives You AI: A cutting-edge exploration of the intersection of computer science, neuroscience, and philosophy
Alejandro Piad Morffis’s The Computerist Journal: Unmatched investigations into coding, machine learning, computational theory, and practical AI applications
Michael Woudenberg’s Polymathic Being: Polymathic wisdom brought to you every Sunday morning with your first cup of coffee
Rob Nelson’s AI Log: Incredibly deep and insightful essays about AI’s impact on higher ed, society, and culture.
Michael Spencer’s AI Supremacy: The most comprehensive and current analysis of AI news and trends, featuring numerous intriguing guest posts
Daniel Bashir’s The Gradient Podcast: The top interviews with leading AI experts, researchers, developers, and linguists.
Daniel Nest’s Why Try AI?: The most amazing updates on AI tools and techniques
Jason Gulya’s The AI Edventure: An important exploration of cutting-edge innovations in AI-responsive curriculum and pedagogy.


