What Is Going on with AI in K-12 Anyway?
A Report from the EduCon Conference in Philadelphia
Several weeks ago, I had the privilege of presenting at EduCon, an education, technology, and innovation conference at the Science Leadership Academy in Philadelphia. The conference's theme was "Human-Centric," and several presenters highlighted this value as a driving force for AI implementation and integration. In other words, as we design systems for integrating AI into our work cycles and classrooms, they need to be "human-centric."
However, I couldn't help but wonder, what exactly do we mean by "human-centric"? For one presenter, it meant students relying on AI for up to 50% of text generation while maintaining their ideas and authentic voice. For another, "human-centric" meant that teachers should never use AI to write lesson plans or assignment prompts. How do we reconcile these competing visions? More importantly, who is responsible for reconciling these competing value systems? And ultimately, how do those decisions drive the crucial process of AI adoption?
Following the opening session, the weekend unfolded in a series of remarkable sessions and engaging conversations, ranging from broad theoretical questions to concrete, actionable strategies for integrating AI into the current educational landscape. My aim in this post is to synthesize those discussions into a "state of play" overview for my readers.
The conference convened a diverse array of educators, including teachers, school leaders, and technology administrators from schools of varying sizes (small, medium, large), settings (rural, suburban, urban), and types (independent, religious, public). The weekend left me feeling both affirmed by the evident demand for the kind of work I am pursuing and invigorated with a deeper, more grounded understanding of the specific needs of the audiences I aim to serve.
Public Schools:
The public schools from the New York City, Philadelphia, and DC metro areas attending the conference have made very little progress on AI integration and implementation. At this time, very few students use AI as a sanctioned classroom activity, and those who do primarily attend charter or discipline-specialized schools. The hesitancy to implement stems from a complex mix of factors. I spoke with several tech administration teams about the situation.
One team anticipated the arrival of LLMs over two years ago and has been methodically preparing for their arrival and implementation. This team’s hesitancy is two-fold. Externally, their district presently has a top-down ban on all AI tools across school computer networks as decided by the superintendent in consultation with the school board. Internally, this tech team does not trust OpenAI or any of the other large model makers with student, staff, or faculty data.
Switching gears, I had the opportunity to speak with a public school principal from Texas who was experimenting with Khanmigo at his school. Khanmigo is an all-in-one AI solution built by Khan Academy on top of OpenAI's GPT-4. Schools purchase network-wide subscriptions; students register for personal accounts; and teachers create AI-adaptive assignments through the Khanmigo platform and monitor student engagement.
The principal highlighted the product's standout feature: its Socratic questioning approach. Unlike ChatGPT, which tends to provide answers directly, Khanmigo deliberately draws learners into a more prolonged process of exploration. He praised the tool's method of prompting, its seamless integration, and its claimed standards of privacy and accessibility, noting its considerable cost as the sole drawback.
Private Schools:
I spoke openly with teachers from a variety of private and religious schools across the East Coast. From these talks, it became clear that most of these institutions haven't established clear policies for AI use in classrooms, though they haven't banned it outright. Initially, some educators experimented with having students use AI to draft essays for further development, but this approach was quickly abandoned. It turned out that only students with more advanced abilities could effectively evaluate AI-generated content to create new insights and improve their writing.
The rest found themselves in a limbo, asking students occasionally to use AI as a tool for brainstorming, crafting thesis statements, outlining, and revising. Yet, this group harbored significant concerns about the implications of relying on AI, questioning its impact on students' ability to perform these tasks independently. They worried about the consequences of depending on AI for ideas, particularly when it might offer only basic guidance instead of the innovative solutions needed to tackle complex problems.
Sam Reed III
During my discussions, I encountered Samuel Reed III, an educator adept at utilizing an AI tool named Youth Voices, crafted by Paul Allison of the National Writing Project. Sam stood out among his peers at the conference for his proficiency and innovation in AI education techniques. He recently completed a specialized training and certification in AI and education, emerging with a firm belief in AI's potential to empower students, especially those facing challenges in reading and writing.
Sam included his students in his presentation, showcasing how they engage with Youth Voices. This platform, built on GPT-3.5, offers a straightforward, user-friendly interface that guides students through various prompts, maintaining a Socratic method of dialogue without overwhelming users with complex features.
When I posed challenging questions to his students about automation and the future impact of AI on jobs, they responded with unwavering optimism about a future enriched by AI. Observing Sam's approach gave me a preview of what education might look like in a few years—imagining a growing number of educators like Sam, who approach teaching with confidence and knowledge, equipped with the right tools to guide students in using AI to effect positive changes in their lives and the broader world.
Alana Winnick
During EduCon, I had the pleasure of connecting with Alana Winnick, the technology director at Pocantico Hills Central School District in New York, celebrated author of The Generative Age, and the voice behind a podcast sharing the book's title. Her engaging seminar on AI ethics and safety stood out as a weekend highlight. The dialogue in this session shifted some of my perspectives on AI safety, which, I must confess, had become somewhat lax following the release of ChatGPT-4. Alana shared news of Microsoft's November 2023 announcement about the teacher version of Copilot (incorporating GPT-4 and DALL-E 3) offering stringent data and content protection, preventing any input from contributing to broader OpenAI datasets.
My investigations revealed that obtaining these accounts can be quite tricky. Currently, these licenses are only sold in batches of 300 or more, presenting a challenge for smaller organizations or those not fully integrated with Microsoft products. Despite these hurdles, it's reassuring to know such options are emerging, prompting me to seek out similar solutions for individual users.
Moreover, Alana's workshop reinvigorated my early summer reflections on AI's ethical and racial biases. The honest discussions about these topics felt increasingly relevant as we approach the 2024 election. A notable moment came when a participant in one of my discussion groups realized the extent to which he had been delegating data analysis tasks to AI. This led us to contemplate the reliability of such analyses given AI's tendency toward normativity and bias. Despite initial confidence in the AI's performance, this participant gradually began to express doubts over the course of our conversation.
Recommendations
After engaging with numerous exceptional educators and school administrators, I've become acutely aware of the challenges involved in advising other educators, given the wide variety of environments and circumstances they operate in, many of which significantly differ from my own. This realization prompts me to pause and reconsider the guidance I've been readily sharing. My aim is to refine and narrow down these suggestions, taking into account the unique insights I've gained through my experiences in the field.
1. Security
The issue of safety and security must be a primary concern in any discussion about the adoption and incorporation of AI in education. It's vital that students, educators, administrators, and parents have confidence that student information will remain confidential - not merged into bigger datasets, repurposed for training AI applications, or traded to the highest bidder. Without firm commitments from AI providers regarding the protection of their products, progress in integrating AI into K-12 education will remain stagnant. Period.
The encouraging update is that we're seeing the development of more secure options. SchoolAI, for instance, has gained notable attention on social media in recent weeks. However, a point of concern is that SchoolAI is built on OpenAI's GPT-4, which means we're depending on OpenAI's promises not to repurpose student data. While many who are familiar with current technology trends might assume that safeguards between or within collaborating organizations are reliable, it's important to remember that data is highly valued. Ensuring that control over such valuable assets is maintained may prove challenging, especially with companies that have yet to establish a strong record of trustworthiness.
2. Infrastructure
In numerous K-12 schools and districts, computer and technology networks are increasingly showing signs of age, becoming outdated or in dire need of repairs and upgrades. While training generative AI involves significant demands on memory, processing power, time, and energy, it's important to note that schools and districts are unlikely to undertake the training of their own AI models. This distinction means that the heavy computational requirements of AI training do not directly impact the modernization needs of existing computer networks.
However, deploying AI applications on networked computers, though less resource-intensive, still presents a challenge as it requires scaling up across the entire network. This scalability introduces slightly higher demands on system resources. Consequently, it is imperative for schools and districts to evaluate their current infrastructure to ensure it is capable of supporting the incremental increase in resource requirements necessary for transitioning to a fully AI-enabled educational community.
Moreover, schools and districts must brace for significant expenses associated with acquiring new AI tools, software packages, and applications that offer advanced educational features along with robust security measures. It's anticipated that the costs of these technologies will vary significantly in the coming years, as the quality, demand, and value of these offerings are evaluated in real-world settings. For instance, schools with around 1,000 students should anticipate spending at least $10,000 for a comprehensive, highly secure product akin to ChatGPT-4 that covers all students and faculty at the time of this writing.
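That ballpark works out to roughly $10 per seat per year. As a back-of-the-envelope sketch (the per-seat rate and the function below are my own illustration extrapolated from the figure above, not any vendor's actual pricing model):

```python
def estimate_ai_license_cost(students: int, faculty: int,
                             per_seat_rate: float = 10.0) -> float:
    """Rough annual licensing estimate for a school-wide AI product.

    The default $10/seat rate is an assumption derived from the
    ~$10,000-per-1,000-students ballpark cited above, not a quote.
    """
    # Every student and faculty member needs a covered seat.
    return (students + faculty) * per_seat_rate

# A school of 1,000 students and 80 faculty:
print(estimate_ai_license_cost(1000, 80))  # 10800.0
```

Even a toy model like this makes the budgeting conversation concrete: doubling enrollment roughly doubles the line item, and faculty seats are easy to forget when comparing vendor quotes.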
During a recent workshop I participated in, experts unequivocally stated that the free version of ChatGPT-3.5 falls short for academic use due to its limited capacity in generating thoughtful, Socratic responses to well-designed prompts. In essence, while GPT-3.5 is adept at providing answers, it struggles with explaining them, thereby diminishing its utility as an educational resource. Essentially, the investment by schools and districts will be directed towards two key areas: (1) a school-wide license for a ChatGPT-4 level AI, and (2) an educational framework that not only guides searches and inputs towards more educational outcomes but also enhances security measures.
3. Pedagogy
Teachers, administrators, technology directors, parents, and students need to work collaboratively to develop pedagogy and curriculum best suited to the environments and purposes of particular school communities, grade levels, and pre-professional programs. While organizations like AAAA, CSTA, and ISTE are hard at work developing standards to guide the integration and implementation of AI in today's classrooms, schools and districts ultimately need to determine their own purposes and outcomes for AI integration and development. To return to the point I raised at the beginning of this article: What does successful AI integration and implementation look like?
At the student level, I would argue we should be driving toward the following outcomes; I leave the institutional level for another post. I will phrase them as I do any other learning outcomes in my classroom.
AI Learning Outcomes:
AI Learning Outcome 1:
Understand the history of AI, its text generation processes, its biases and inaccuracies, and the companies and organizations behind these tools.
AI Learning Outcome 2:
Comprehend the principles of prompt engineering and effectively apply them in practice to generate diverse types of texts.
AI Learning Outcome 3:
Maintain a high level of proficiency in traditional literacy and writing skills and competencies, utilizing them to enhance the quality of writing throughout all stages of AI-responsive writing practice.
AI Learning Outcome 4:
Utilize autonomy, choice, and judgment throughout the AI-writing process to retain tight control over the vision, purpose, voice, and creativity of the final output.
AI Learning Outcome 5:
Develop and employ ethical criteria to guide decisions regarding the extent of reliance and various types of use cases throughout AI-responsive writing practice.
This is just a start. Let me know if you think of any more I should add.
And thanks again for reading Educating AI.
Nick Potkalitsky, Ph.D.
This was great, Nick, thanks - we need more of these "field reports" that show how people think about AI outside of our theoretical bubble.
I find it curious how the Public vs. Private school chapters seem to separately exemplify the two different challenges in AI adoption within education: Bureaucracy and centralized bans/limitations (public schools) and lack of a structured approach / doubt about effective implementation (private schools).
At the same time, it's nice to see more champions like Sam Reed III and Alana Winnick emerging to help break the barriers and show a path forward. Until now, my main references in this space were you and Ethan Mollick - I'm sure that soon enough we'll see the emergence of some kind of collaborative committees and unions of pioneer educators in this space.
Thanks, Nick, really helpful posts as always. A couple of follow-up questions: you mention 'specialized training and certification in AI and education' - what was this? And secondly, really keen to see your thoughts on what the AI learning outcomes might be at the institutional level, and wondering what these might be for leaders 🤔