The Elusive Quest for Safe, Affordable AI in K-12 Education
Rethinking AI in Schools: Putting Student Safety and Equity First
Dear Educating AI Readers,
Before I begin, I want to thank my readers who have decided to support my Substack via paid subscriptions. I regularly spend 5-10 hours each week creating new content for this newsletter.
I appreciate your vote of confidence. Your contributions allow me to dedicate this time to research, writing, and building Educating AI's network of contributors, resources, and materials.
What Are We Looking for in a K-12 AI Model?
I have been getting a lot of questions recently from teachers, administrators, and technology directors about safe, reliable, and affordable student access to AI in K-12.
It feels like my recent series on big tech's push for free access, the rise of personable, persuasive AI, and the tension between AI's tool and agent status builds to this question. So I want to spend some time unpacking the matter in its full complexity.
At the start, I want to encourage others to share what they know about this situation. I am not claiming to have all the knowledge on the matter. This is just what I have been able to gather at this particular moment. Thanks to Justin Bruno from Michigan Virtual for his insight into these issues as I prepared this article.
5 Requirements
The short answer to the question is that there is nothing like safe, reliable, and affordable access to AI for K-12 yet. Here is what I am hoping for in an ideal situation:
FERPA-level protection at the API level
Socratic or inquiry-based interface
A clear policy on data sharing, privacy, and protections with the affiliated major AI model
Student logins that the school can manage so as to limit the sharing of student information
Free (or less than $5 per student login)
GPT-level AI has been available for several years now, yet the best education-oriented models meet only 3 or 4 of these 5 criteria (#5 is the major sticking point).
In the final section of the piece, I consider how the cost-prohibitive nature of current "safe AI" solutions challenges the very notion of their safety and accessibility.
The Complicated AI Landscape for K-12
Gemini, Claude, and Copilot
Gemini, Claude, and Copilot all have 18-and-over policies. I personally think that we educators have insufficiently reflected on this company-imposed barrier to K-12 implementation and integration. Three major players in the AI market are saying that their products require users to be 18 or over to operate safely.
Students are using these tools despite age restrictions.
Companies aren't enforcing these restrictions.
Curriculum is being developed assuming students will use these tools in school.
What AI tools can students actually use without breaking requirements or policies?
ChatGPT
ChatGPT allows users as young as 13 to use the application with parental permission.
Here is the actual language from the Terms of Use, last updated on November 23, 2023:
"Minimum age. You must be at least 13 years old or the minimum age required in your country to consent to use the Services. If you are under 18 you must have your parent or legal guardian's permission to use the Services."
So if teachers want to proceed down this route, they definitely need to secure parental permission. But how?
OpenAI collects personal info (phone, email) and retains user input for 30+ days.
Info shared with AI is FERPA-level; needs FERPA protections from teacher/school/district.
OpenAI doesn't offer FERPA protections. Why not?
13+ policy suggests OpenAI prioritizes market share over young user safety/privacy.
I highly discourage teachers from having students sign up, even with parental permission; data usage remains unclear.
Magic School
Magic School, originally a teacher-only AI resource, has recently released Magic Student, a URL-based AI resource for students. Students do not get individualized logins in the full sense; rather, teachers create AI rooms within the application, populate those rooms with AI tools, and share access to the rooms with students via URLs or links. It is the dominant model in the market right now.
Specially designed student-facing AI (RAINA) with safety protections
Teachers can monitor chat history if students identify themselves consistently
Data sharing anonymized at URL/API level
Flaw: rooms are created per assignment and access is simply opened or closed, which leaves potential for misuse
14-day free trial; payment required afterward
SchoolAI
SchoolAI is the application whose functionality Magic School is copying; it operates in essentially the same way.
Limited chat monitoring
Longer free-account runway than Magic School
No privacy agreement for free users; an Enterprise account is required for privacy protections
Enterprise accounts integrate with major LMS platforms
Khanmigo
Khanmigo is a Socratic, inquiry-based AI application born from a partnership between Khan Academy and OpenAI. Sal Khan explores the genesis of this product in his latest book/infomercial, Brave New Words: How AI Will Revolutionize Education (and Why That's a Good Thing).
Limited chat monitoring
Free teacher accounts; paid student accounts
Students access via code, no login needed
$4/month or $44/year per student; the cheapest FERPA-compliant option
Not waiting for schools to buy; focusing on parent market
Pre-made bots for science, math, and writing; focus on critical thinking rather than supplying answers
Precursor to Google's LearnLM
PowerNotes
PowerNotes, a traditionally college-level research tool, now infused with AI, is finding increased utility in grades 9-12 with its novel approach to AI monitoring.
Full student login; API & school-level protections
Teacher & student accounts; functions like LMS
Teachers set up AI tools; can monitor interactions
Browser-based; flags content pasted in from outside (e.g., AI-generated text)
Novel monitoring focuses on content origin rather than AI detection (see the sketch after this list)
Good for IB, where documentation and authentication of work are crucial
Organized research/writing space; concretizes meta-cognitive inquiry steps
Cost: $100 for a one-year teacher account; $100 per classroom per year
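PowerNotes does not publish its implementation details, but the general idea behind origin-based monitoring is straightforward: instead of trying to detect AI-generated prose after the fact, the editor watches how text enters a document and flags any large block that appears all at once (a paste) rather than accumulating through keystrokes. Here is a minimal, purely illustrative sketch in Python; the snapshot data, threshold, and function name are hypothetical examples of mine, not PowerNotes' code.

```python
# Illustrative sketch of origin-based monitoring (hypothetical, not PowerNotes' code).
# Idea: compare successive snapshots of a student's document and flag any revision
# where a large block of new text appears all at once (a paste) rather than
# accumulating gradually through typing.

PASTE_THRESHOLD = 200  # characters added in a single revision; arbitrary cutoff


def flag_pasted_blocks(snapshots):
    """Return (revision_index, chars_added) pairs that look like pasted-in content."""
    flags = []
    for i in range(1, len(snapshots)):
        chars_added = len(snapshots[i]) - len(snapshots[i - 1])
        if chars_added >= PASTE_THRESHOLD:
            flags.append((i, chars_added))
    return flags


if __name__ == "__main__":
    # Hypothetical revision history: slow typing, then a sudden 450-character jump.
    history = [
        "The water cycle",
        "The water cycle begins with evaporation.",
        "The water cycle begins with evaporation." + " x" * 225,  # simulated paste
    ]
    print(flag_pasted_blocks(history))  # -> [(2, 450)]
```

A real system would diff the actual text rather than lengths and would record where pasted material came from, but the monitoring principle, origin over detection, is the same.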
Perplexity
Perplexity, one of the first AI applications to offer search capabilities, is a powerful AI-driven internet search tool.
Previously required accounts; no login needed now
Surprising to some; desire for at least one reliable AI tool
"Back to the drawing board"
Final Thoughts: Safety vs. Accessibility
At the heart of this challenge lies a troubling reality: access to "safe" AI is often determined by financial means. Even relatively low-cost options like Khanmigo's $4 per month per student can be out of reach for underfunded public schools, exacerbating existing educational inequities.
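To make that affordability gap concrete, here is a rough back-of-envelope calculation; the enrollment figures are hypothetical, and the $44-per-student annual rate is the Khanmigo price cited above.

```python
# Back-of-envelope licensing cost for per-student "safe AI" access.
# The $44/student/year rate is Khanmigo's annual price noted above;
# the enrollment figures are hypothetical.

ANNUAL_RATE_PER_STUDENT = 44  # USD per student per year

for enrollment in (500, 5_000, 50_000):  # small school, mid-size district, large district
    annual_cost = enrollment * ANNUAL_RATE_PER_STUDENT
    print(f"{enrollment:>6,} students -> ${annual_cost:,} per year")

# Output:
#    500 students -> $22,000 per year
#  5,000 students -> $220,000 per year
# 50,000 students -> $2,200,000 per year
```

At those rates, even a "cheap" per-student license quickly becomes a six- or seven-figure line item in a district budget, which is exactly the equity problem described above.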
This raises a fundamental question: Is an AI tool truly safe if it's only accessible to those who can afford it? My answer, at this time, is no. An AI tool that is secure but unaffordable for many schools and students is not genuinely safe, as it perpetuates a system where only the privileged few have access to the most advanced learning technologies.
But the problem runs deeper than just affordability. We need to question the very configuration of the market that has led us to this point. Why are we in a situation where AI makers create products that schools then need to protect students from? Why are we in a position where we need a secondary market of providers to protect our students from unfiltered AI? Why is the onus on underfunded public institutions to pay for the privilege of shielding their students from the potential harms of AI?
There is something fundamentally wrong with this arrangement. While I praise the secondary market for doing the amazing work of bridging the gap, I keep coming back to the question of the primary providers. Why do you keep releasing such unsafe products to the general population when your own age restrictions seem to indicate a sense of caution about their widespread use?
It speaks to a system where the priorities of tech companies and the needs of students are misaligned, where the drive for profit and market dominance overshadows considerations of equity, accessibility, and genuine safety.
We can and must do better. It's time for AI makers to step up and take responsibility for the safety and accessibility of their products from the ground up. It's time for a fundamental shift in how we approach AI in education – one that puts the needs of all students first, not just those who can afford it.
The stakes are too high to settle for anything less. The future of education in the age of AI hangs in the balance, and it's up to all of us – educators, policymakers, and tech companies alike – to ensure that it's a future that works for every student, regardless of their background or means. It won't be easy, but it's a challenge we must embrace if we believe in the transformative potential of AI and the fundamental right of every child to a safe, equitable, and cutting-edge education.
Nick Potkalitsky, Ph.D.
Check out some of my favorite Substacks:
Terry Underwood’s Learning to Read, Reading to Learn: The most penetrating investigation of the intersections between compositional theory, literacy studies, and AI on the internet!!!
Suzi’s When Life Gives You AI: A cutting-edge exploration of the intersection among computer science, neuroscience, and philosophy
Alejandro Piad Morffis’s Mostly Harmless Ideas: Unmatched investigations into coding, machine learning, computational theory, and practical AI applications
Michael Woudenberg’s Polymathic Being: Polymathic wisdom brought to you every Sunday morning with your first cup of coffee
Rob Nelson’s AI Log: Incredibly deep and insightful essays about AI’s impact on higher ed, society, and culture.
Michael Spencer’s AI Supremacy: The most comprehensive and current analysis of AI news and trends, featuring numerous intriguing guest posts
Daniel Bashir’s The Gradient Podcast: The top interviews with leading AI experts, researchers, developers, and linguists.
Daniel Nest’s Why Try AI?: The most amazing updates on AI tools and techniques
Riccardo Vocca’s The Intelligent Friend: An intriguing examination of the diverse ways AI is transforming our lives and the world around us.
Jason Gulya’s The AI Edventure: An important exploration of cutting edge innovations in AI-responsive curriculum and pedagogy.
Good insights here. More than the tools, teachers need workflows for those tools.
The issues in higher ed are a little different, often only as a matter of degree. One of the biggest differences is for institutions that host hospitals and clinics. The privacy of medical records (HIPAA data if you are in the US) requires a higher level of security, and because some of our software (e.g., O365) crosses academic and administrative boundaries, a tremendous amount of due diligence is required to ensure that data does not get sucked into Copilot-type applications.
Another thing that makes for a degree of difference is the requirements of the variety of subjects that are taught. While we are beginning to see some basic principles of teaching and learning with AI in both the core curriculum and individual subjects, this is still in its early stages. Having an AI orientation course may be important to transition students to using it in college, and AI certificates (to allow them to explore it more deeply) are also emerging. The ways that AI may be used in different fields (say Computer Science, Nursing, Journalism, Art, Physics, and English) are as different as those areas of study. The need for accuracy, and the consequences of mistaken information, data breaches, and ethical lapses, are all much higher than is usual in K-12. If, for instance, a nursing student learns the wrong thing from an AI, that is more likely to have serious consequences for others than if a middle schooler does. Of course, if the middle school's AI manages to systematically indoctrinate students into a particular way of thinking, that could also have broader implications for the students and their community. One thing I have not seen addressed much about AI tutors and teachers is that they can be hacked in ways that humans cannot. Also, unless we develop AIs with theories of mind and a good understanding of the physical and social world, they will not be able to relate to students in the same way that a human teacher would.
In terms of dealing with companies, I agree with Nick that educational technology companies are more likely to provide safe and reliable AI for students than the AI giants. One thing to consider is the extent to which they are developing applications at the same time as institutional policies that may limit the use of some functions are being developed. I have already run into this a few times. Developers need to build in fine-grained settings to enable or disable functions at an institutional or course level, preferably both.
As for the AI giants, almost all of them have exhibited unethical or questionable behavior. This raises concerns about whether use of their tools can ever be ethical. Their often lax approach to privacy and security, let alone accuracy, means a level of vigilance is required of institutions that is more extreme than before. It is taking a great deal of time and effort to exercise that vigilance. In the short term, it is largely being done by individuals and departments taking on extra work or pushing back other projects. In the long run, it is going to require additional human, fiscal, and technical resources.
The behavior and statements of the AI giants so far are actually not promising for education. Some of them, or at least their cheerleaders, seem to have teachers and professors in their sights - just more jobs to be deskilled or eliminated. There are some who see schools, colleges, and universities withering on the vine as everyone gets a personalized AI tutor. The greater societal implications of that are large. On the other hand, for a company to capture most of those institutions would be a huge windfall in revenue, influence, and long-term power. Some of the statements from some of the companies or their leaders raise questions about the nature of this game. We do need to treat them as companies - though we need to think about the big players in terms of what they are: cloud capitalists, which operate very differently from other corporations. We also need to realize that some of these companies' stated aims indicate they are playing a very high-stakes game for control of the future.
What Nick and others are starting to do is introduce educators and those who care about education to a different way of understanding education. They are leading us to consider things we might not consider otherwise, or that only a handful of staff in a school district, community college, or university have had to deal with in the past. They are also pointing to the ways that AI is changing the larger contexts in which education and educational institutions operate.