AI Adoption in Education in 2024: An Overview
From Adoption to Innovation: Transformative Strategies for K-12 and College Educators
Section 1: Evaluating Success for AI Adoption in Education
By: Nick Potkalitsky
Upon the release of ChatGPT, schools, universities, and educational institutions rapidly identified AI as a disruptive agent. In January 2023, educators sounded the plagiarism alarms, leading to a wave of attempts in February and March 2023 by numerous schools and districts to ban AI entirely from their networks and servers. Nevertheless, students quickly devised ingenious methods to circumvent these barriers.
A notable strategy in March and April 2023, aimed at evading early AI detection efforts, involved processing GPT-3-generated content through paraphrasing apps to produce undetectable material. This initiated a cat-and-mouse dynamic that has profoundly influenced ongoing discussions around AI adoption in both K-12 and college environments.
For those tracking AI x Education through popular media and social platforms, the aforementioned narrative has dominated the conversation. Its capacity to engage audiences and stir fears about the future, its tendency to impede progress toward substantial reform, and its growing detachment from actual school realities make it a problematic tale to keep spreading.
Recognizing this, my colleagues, Marc Watkins, Lance Cummings, and I aim to narrate an alternative account of AI adoption. This narrative prioritizes gradual evolution, deliberate policy-making, practical recommendations, tangible outcomes, and enhanced responsibility, acknowledging the significant implications of integrating and applying AI in contemporary educational settings. Our narrative remains cautiously optimistic, grounded in the promise that AI holds for fostering educational experiences and institutions that are more engaging, efficient, effective, and equitable.
In 2024, AI adoption in education remains in a developmental stage, despite the urgent need to advance the process. This article discusses AI adoption as a comprehensive, multi-stage process requiring the combined efforts, resources, and expertise of various specialties to achieve the objective of becoming an AI-responsive educational organization.
The initial stage of AI adoption, the critical first years, involves front-end tasks such as (1) infrastructure and security development, (2) ethical analysis and policy development, (3) community engagement, (4) curriculum building, (5) professional development, which includes literacy, technology, and curriculum training, and (6) the initial rollout of student-facing initiatives and programs. Each step in this process is complex, and schools may not have the time or resources to internally develop the necessary talent to address all aspects of development simultaneously within the required timeframe to ensure a successful outcome.
What, precisely, does success look like at the end of the AI adoption process? No consensus has yet emerged, presenting perhaps the greatest obstacle to AI adoption. If we, as teachers, adhere to the theory of backward design, the absence of a clear end-goal complicates navigating the initial stages of AI adoption.
Several weeks ago, I had the privilege of presenting at Educon, an education, technology, and innovation conference at the Science Leadership Academy in Philadelphia. The conference's theme was "Human-Centric," and several presenters highlighted this value as a driving force for AI implementation and integration. In other words, as we design systems for integrating AI into our work cycles and classrooms, they need to be "human-centric."
However, I couldn't help but wonder, what exactly do we mean by "human-centric"? For one presenter, it meant students relying on AI for up to 50% of text generation while maintaining their ideas and authentic voice. For another, "human-centric" meant that teachers should never use AI to write lesson plans or assignment prompts. How do we reconcile these competing visions? More importantly, who is responsible for reconciling these competing value systems? And ultimately, how do those decisions drive the crucial process of AI adoption?
In light of this fundamental challenge, this article offers three distinctive perspectives on AI adoption to help educators, researchers, designers, and policymakers navigate the initial stages of the process more quickly.
My name is Nick Potkalitsky, and I am an AI Implementation Consultant, 7-12 Language Arts Instructor, and Academic Researcher in AI, Linguistics, Rhetoric, and Instruction. I have worked in both private and public settings, with students from middle school to graduate school, bringing a wealth of knowledge about these various institutional spaces and students' social-emotional and academic development across this age range to my work developing responsive AI systems. In my Substack, Educating AI, I am currently collaborating with a team to develop a cutting-edge, evidence-based AI literacy program that utilizes skills- and knowledge-generation criteria to guide specific implementations and integrations of AI into teacher work cycles and student use cases. I am excited to assist teachers, schools, and districts in creating lasting and meaningful solutions at all scales and stages of the AI integration process.
I have invited two of the most engaging and insightful voices in the AI x Education space to offer their perspectives on AI Adoption.
Lance Cummings is an associate professor of English in the Professional Writing program at the University of North Carolina Wilmington. Dr. Cummings explores content and information development in technologically and culturally diverse contexts both in his research and teaching. His most recent work looks at how to leverage structured content with rhetorical strategies to improve the performance of generative AI technologies and shares his explorations in his newsletter, Cyborgs Writing.
Marc Watkins is an Academic Innovation Fellow, Lecturer of Writing and Rhetoric, and serves as the Director of the AI Institute for Teachers of Writing at the University of Mississippi. His work exploring generative AI’s impact on teaching and learning has been profiled in the Washington Post. He writes about AI and education at Rhetorica.
Without any further ado, let’s dive into the fascinating topic of AI Adoption in 2024.
Table of Contents:
Section 2: Marc Watkins's "From Panic to Praxis: Developing AI Literacy in Higher Education"
Section 3: Lance Cummings’s “Redefining Higher Education: The Imperative Shift from Content to Outcomes in the Age of GenAI”
Section 4: Nick Potkalitsky's "Seize the Moment: Four Keys to Unlocking AI Potential in K-12 Settings"
Section 2: From Panic to Praxis: Developing AI Literacy in Higher Education
By: Marc Watkins, of the newsletter Rhetorica.
The question plaguing academia and nearly every other industry that engages in knowledge work is how we train our stakeholders to use generative technology effectively when it continues to evolve and be deployed at such an accelerated pace. On Thursday, February 15th, OpenAI announced Sora, their text-to-video model.
Not to be outdone, Google announced the rapid upgrade from their recently released, GPT-4-class Gemini Ultra 1.0 to Gemini 1.5. These examples show the rapid acceleration of models and should serve as reminders that we're going to have to rethink how we structure professional development in academia and many other sectors.
1. Our Generative Moment
As education entered the generative AI era, I had to look up an old German saying to capture the current mood: Torschlusspanik. The term's literal translation is "closing gate panic." More broadly, it refers to a feeling of running out of time, of closed doors, and of lost opportunities.
Throughout higher education, most faculty are caught between wanting to preserve the status quo and stubbornly refusing to change how they've taught for decades versus adopting generative tools they don't fully understand to help prepare students for a future many can barely comprehend.
That initial panic from ChatGPT's release has ebbed, gradually replaced by a growing unease. Something new has arisen that cannot be easily policed or governed by policy, forcing us to confront some deep and fundamental questions about what it means to teach and learn. For the past year, I've been working to train faculty to help them navigate generative AI's impact on education, helping them build their AI literacy so they can judge what course to take.
Helping create the Mississippi AI Institute hasn't been easy, and the training process produces more questions than answers, but one thing has become quite clear—we must move past this paralysis of indecision and train ourselves in how to use this technology ethically and pragmatically.
2. Cultivating a Culture of Exploration
My own testing of generative tools with students was recently published in a co-authored paper in Computers and Composition and suggests that most students aren't eager to fully cede their writing or other developed skills to chatbot interfaces that offload much of the task of thinking.
This should hearten traditionalists who aren't keen on embracing the new but it comes with a significant caveat— much of the feedback was focused on the limitations of the interface, not the underlying technology.
Developers like Amelia Wattenberger and Maggie Appleton believe chatbots make for poor interfaces. That's why tools like Lex, Perplexity, and Wordtune's Spices, which use generative AI to augment a user's existing skills like writing and research through more humanistic interfaces, appealed to my students more than the chatbot experience we've seen from ChatGPT.
3. One Framework for Engaging Students with Generative AI
One of the takeaways that emerged from our research was a working praxis designed by my colleague Bob Cummings to help educators establish a framework for exploring this technology with students. We call it D.E.E.R. —Design, Evaluate, Explore, and Reflect. Here's a more detailed explanation of what this approach looks like in practice.
D Clearly define the stages of the project (could be an essay, brainstorming assignment, or activity), and enumerate each stage's purpose in achieving student learning outcomes;
E For each stage, evaluate a specific generative AI technology to pair with the learning activity;
E Encourage students to explore that specific generative AI technology for that stage;
R Provide students with space for reflection. We believe this stage matters the most.
Why does reflection matter so much? Educators need to gain more understanding of what generative technologies offer students in terms of their learning. Our students need to ask themselves what is gained and what is potentially lost using any new technology.
As former Meta VP Jerome Pesenti opined this week about the struggle of building AI literacy with users: "Helping billions of users remember AI's limitations while using it every day has to be top priority."
You can see examples of the D.E.E.R. Praxis in action in several of the published assignments I created below:
AI in First Year Writing Courses from TextGenEd: Teaching with Text Generation Technologies
From Consumer to Creator: Analyzing Machine Made Stories from TextGenEd: Continuing Experiments
Introduction to Lex: An AI Powered Writing Assistant from Exploring AI Pedagogy
I hope these assignments offer a balanced approach, encouraging ethical use and critical reflection alongside technological exploration. By cultivating AI literacy and ethical guidelines, we can ensure that these tools enhance educational experiences rather than undermine them. Embracing this dual imperative will prepare educators and students alike for a future where integrating human intellect with artificial intelligence is common.
Section 3: Reflective Teaching in an AI World: Using AI With Your Outcomes, Not Against Them
By: Lance Cummings, of the newsletter Cyborgs Writing.
As generative AI (GenAI) makes waves in various sectors, higher education is no exception. But instead of introducing new challenges, I would argue that GenAI is highlighting cracks in our educational system that have been there for years.
I confess … I am not your traditional educator. For example, I don't give grades on most of my students' work. Instead, I focus on giving students coaching and feedback on their writing, so they feel safe to take risks, which is key to learning and enjoying writing.
Much of the anxiety about GenAI in the classroom comes from grades. If students can complete most assignments with AI at an average level, how will we sort and rank our students? Conversely, if students are cheating, with AI or otherwise, it is most often because they are afraid of getting a bad grade, overwhelmed, or confused.
AI isn’t the problem. Grades are.
For decades, higher education has been content-centric. We've been asking: How do we cram this entire textbook into one semester? How do we ensure students absorb all the course content? How do we verify that they've read and comprehended it? How do we ensure they write a certain number of words or cite a specific number of references?
Now that students have most of this content at their fingertips, educators may be anxious about what to teach … or even worry that they may be replaceable by AI.
But the problem isn’t the technology, it is our obsession with content.
I encourage educators to think about what they really teach. What do you want students to achieve? Not what do you want students to know. Focus on your outcomes, not your content or grades.
For the field of professional writing, this means helping students solve complicated problems, work with diverse groups, and manage complicated projects. All the “human skills” that AI can’t do, but can enhance.
It’s also what gets you hired and promoted in the workplace.
1. What Students Really Need
GenAI is already used in the workplace. By the time this incoming class graduates, working with GenAI will be a requirement for most, if not all, workplaces. AI is going to shape how they do work. But AI literacy shouldn't just be about using AI. Students need to learn to shape the technologies that shape them.
The most powerful aspect of GenAI isn't the core technology, but how accessible these technologies now are to everyone. Anyone can create an AI tool. It is like going from Web 1.0, where you needed to be a coder to publish on the web, to Web 2.0, where anyone can publish without advanced technical knowledge.
So let's add GenAI to the classroom in ways that open up these opportunities, while still teaching students what they need. For example, consider adding GenAI tools to a data analysis class curriculum. Students can use AI to analyze data, but they still need to clearly explain the task that needs to be done, provide the appropriate context, and verify the results … then, most likely, adapt their AI interactions. This hands-on experience would help students learn data analysis skills, but also understand AI in their field.
When their boss asks them to tweak their AI tools or get AI to solve a new problem, they are ready.
In my cross-cultural writing course, we read about intercultural writing. But mostly, we turn it into structured notes for a business AI knowledge base. This approach requires students to work with diverse groups, solve complex problems, and manage complicated projects, not just work with information. One group of students created a guide for business etiquette in different cultures. They researched, collaborated, and used a GenAI tool to structure their findings. This hands-on experience is more than just about content, even though they must know and understand what we are learning.
The best writing has always been collaborative. The best problem solving has always been collaborative. The best way to learn with AI and build the technology will be collaboratively.
2. Democratizing AI
We can’t leave genAI to computer engineers. We need experts from all fields shaping AI technologies. The power of this new technology isn't just its processing or text generation ability, but the accessibility that allows anyone to shape these tools if they know how.
It's like moving from Web 1.0 to Web 2.0. In Web 1.0, only coders could publish web pages, but in Web 2.0, anyone could publish and edit. Now writers can build their own writing tools and businesses can build their own apps without code.
Imagine a literature student using GenAI to create a knowledge graph mapping characters, objects, and events in Shakespeare's plays. They are not just discussing or writing essays, but creating a knowledge base for "Shakespeare AI." This hands-on experience not only deepens their understanding but also equips them to shape AI technologies, while achieving similar outcomes as a traditional essay.
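To make the idea concrete, here is a minimal, illustrative sketch of such a knowledge graph in Python. Everything in it is a hypothetical choice of mine—the play, the relation names, and the `neighbors` helper—not part of any real "Shakespeare AI" product; it simply shows how student findings could be stored as subject-relation-object triples and then queried.

```python
# A tiny knowledge graph of characters and relationships in Hamlet,
# stored as (subject, relation, object) triples. In a classroom, a
# GenAI tool could help students extract triples like these from the
# text; here they are hard-coded for illustration.
TRIPLES = [
    ("Hamlet", "son_of", "Gertrude"),
    ("Hamlet", "confronts", "Claudius"),
    ("Claudius", "married_to", "Gertrude"),
    ("Hamlet", "speaks_to", "Ghost"),
]

def neighbors(entity):
    """Return every (relation, other_entity) pair linked to an entity."""
    out = []
    for subj, rel, obj in TRIPLES:
        if subj == entity:
            out.append((rel, obj))
        elif obj == entity:
            out.append((rel, subj))
    return out
```

Querying `neighbors("Gertrude")` surfaces both her son and her husband, which is exactly the kind of structured retrieval an AI knowledge base builds on—and building it forces students to decide what the meaningful entities and relationships in the play actually are.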
Integrating GenAI in higher education is an opportunity to address long-standing issues and shift our focus from content to outcomes. This better prepares students for the future and equips them with the skills to navigate a world shaped by AI. A good example is how we teach writing.
Writing shouldn't be taught as a solo act about generating text. It never really should have been. Writing happens in a network of people, processes, and technologies, which now includes AI.
3. Cyborg Writing - A New Paradigm
Beyond AI literacy and essential skills, there's a third outcome we need to focus on: the ability to engage in "cyborg writing," where humans and AI work together, each contributing their strengths to create something neither could achieve alone.
In a cyborg writing scenario, the AI is not just a tool for the writer but a part of the writing process, influencing and being influenced. The AI suggests phrases or structures ideas, pushing the writer to explore new ideas and ways of writing. The writer's choices feed into the AI, potentially influencing its future behavior and outputs.
Cyborg writing differs from traditional writing, where the writer has sole agency. In cyborg writing, agency is shared between the writer and the AI. The writer isn't just using the AI; they're engaging with it, learning from it, and shaping it. And the AI shapes the writer, pushing them to explore new ideas and styles.
Incorporating cyborg writing into the curriculum would equip students with a valuable skill and change how they approach writing. They would learn to see writing as a collaborative process, involving not just them and their audience, but also the AI.
By embracing cyborg writing, we can prepare students for a future where AI is part of the writing process. We can help them become not just consumers of AI technology, but shapers of it.
Writing has always been collaborative. Always been shaped by technology. The same goes for learning. The rise of GenAI just makes this reality harder to ignore.
Section 4: Seize the Moment: Four Keys to Unlocking AI Potential in K-12 Settings
By: Nick Potkalitsky.
K-12: Excitement vs. Apprehension
In the K-12 educational arena, a growing subset of educators are becoming very excited about integrating Artificial Intelligence (AI) into today’s classrooms. The educational blogosphere is teeming with forward-thinking educators, pioneering researchers, cutting-edge ed-tech developments, and innovative entrepreneurs, all eager to seize on this pivotal moment.
For these trailblazers, AI's advent is seen as an amazing opportunity to break down the longstanding norms and conventions that have dominated mainstream education—a return to which many hastened after the pandemic—in pursuit of methods to reinvigorate student engagement in learning. This enthusiastic group at times veers towards idealism or utopianism, and this optimism often renders them targets for skepticism and criticism from the majority of their peers.
The broader educational community, encompassing the bulk of teachers, school leaders, administrators, and tech coordinators, exhibits considerable apprehension towards embracing new technological advancements. Overwhelmed in the aftermath of challenging pandemic years, many cannot presently muster the flexibility, energy, and resilience necessary to undertake the systemic changes needed for a meaningful integration of these new technologies.
Consequently, the landscape of educational technology, particularly in relation to AI, has seen minimal change since these innovations made their arrival. Most public schools continue to enforce top-down prohibitions on AI usage, while private institutions unevenly explore more progressive policies and strategies.
Notably, the K-12 sector remains markedly distinct from higher education and professional training environments in its approach to AI. This hesitancy in K-12 is largely attributable to two key reasons: the ongoing development of foundational skills, competencies, and literacies among K-12 students, and the legal restrictions preventing this demographic from using many of the commercial AI products available today.
The Time to Shift Is Now!
Reliance on AI for schoolwork trends upward in each successive survey and study, and the statistics are telling. From a modest 20-40% in winter 2023 to a significant 50-60% of students in the latest US studies, the use of AI has become commonplace in completing daily classwork. Given that AI detection does not work with 100% accuracy and cultivates a surveillance culture in our classrooms, it's high time K-12 educators reconsider the effectiveness of their assessment methods.
The current pace of AI integration in education is slow, with many schools adopting a wait-and-see approach, hoping AI usage remains minimal. This passive strategy, however, only amplifies doubts among students about the relevance and purpose of traditional schooling, potentially leading to increased disengagement.
Students' viewpoints are increasingly shaped by the realization that their assignments can be effortlessly completed by AI, without reliable detection by their teachers. This perception of school as an automatable performance begs the question: why bother engaging at all?
In this context, it's important to recognize that the true barrier to student growth and development isn't AI itself, but rather our collective inaction to respond to it. The emergence of AI technologies, courtesy of major tech companies, has thrown down the gauntlet to K-12 education, presenting a challenge that demands our immediate attention and action.
What Steps Can We Take?
1. Establish a Shared AI Resource
Educational reform in the United States typically follows a top-down approach, starting at the state level as the initial step towards adopting new technologies. However, local education authorities still wield considerable discretion in customizing these technologies to fit their specific needs. Despite the issuance of AI adoption guidelines by a few states, most policies remain provisional and largely aspirational.
The experience following the summer of 2023, where numerous schools and districts found their rigid AI policies quickly outdated by technological advancements, has led to a quiet consensus for adaptable, open-ended policy frameworks.
Yet, this approach can hinder the measurable adoption of AI in education.
I recommend that schools initially focus on integrating simpler, more manageable AI models as a shared resource. Investing in a shared AI tool, complemented by secure access mechanisms, will lay the groundwork for developing targeted, effective AI usage policies. SchoolAI is an interesting product that is readily available, offering both free and paid subscription options. It promises FERPA-compliant data protection and is built on OpenAI's models.
This strategy is predicated on the understanding that crafting impactful AI policies is challenging without first determining a common tool for use across an educational network.
Concurrently, it's imperative that teachers receive comprehensive training on these new systems and the evolving pedagogical approaches discussed below. Without equipping educators with the necessary knowledge and skills, even the most advanced AI infrastructure will fail to make a meaningful impact.
2. Reduce Homework Assignments
This suggestion might spark some debate, but it's grounded in substantial research.
The challenge AI poses is significant: much of the homework we assign can now be easily completed by AI tools, prompting a reevaluation of the skills and literacies we're asking students to develop.
More fundamentally, it raises questions about our in-class activities.
My approach, especially post-pandemic, has been to maximize in-class time, recognizing that my influence wanes once students are outside my direct supervision. This lack of control over external tools and processes used by students casts doubt on the validity of including such externally completed assignments in my overall assessment of their abilities. N.B. I am now striving to use more equitable grading practices in all my classes.
AI compels us to rethink the role of classroom time, merging instruction, practice, and assessment into a cohesive whole. Homework, while not obsolete, should complement in-class learning, reinforcing rather than introducing new concepts and skills.
3. Revise the Writing Curriculum
As we navigate through the evolving educational landscape, it's clear that students must adapt to new paradigms of writing that are less about polish and more about functionality. While AI tools are transforming the writing process, the essence of AI-informed writing remains deeply anchored in the foundational skills of reading, writing, and engaging in critical and efficient thinking.
Future writers will still need to hone their abilities in these critical areas of pedagogy. However, AI offers a remarkable advantage by accelerating drafting, streamlining editing, revising, and publishing processes.
This newfound efficiency provides an opportunity to allocate time towards teaching the skills needed for effective and ethical engagement with AI technologies. Additionally, this saved time opens up space for even more focus on critical thinking and problem-solving skills. The urgency of integrating AI into our writing practices is underscored by the ticking clock of student disengagement, compelling us to ensure students are proficient in both traditional literacies and the new skills required for the digital age.
4. Emphasize Process Over Product
Addressing plagiarism within K-12 requires a shift towards emphasizing the learning process over the final product. This approach, often summarized as "process over product," aims to make the classroom a dynamic environment where teaching, learning, and assessment are seamlessly integrated.
The focus here is on developing a curriculum that values the journey of learning as much as, if not more than, the outcome. In an age where AI can effortlessly generate polished work, it's imperative for educators to craft lessons and assessments that prioritize and evaluate the development of critical thinking and problem-solving skills.
By doing so, we leverage AI to highlight and assess the skills beneath the skills we are used to assessing. This strategy encourages students to engage in self-reflection and to monitor their own progress in mastering both content and the meta-cognitive skills that AI-enhanced learning environments demand.
The goal is to build a framework where, by the end of any given learning unit, educators can confidently assess students’ development with a clear understanding of their process. This method not only prepares students for the complexities of the future but also aligns education with the evolving landscape of AI technology.
Nick's Top 5 Posts:
AI as a Catalyst for Innovation: An Interview with Dr. Elliot Bendoly
Neurons to Networks: Bridging Human Cognition and Artificial Intelligence, Part 1
__________________________________________________________________________
We want to extend our heartfelt gratitude to all the readers who have engaged with this article exploring transformative strategies for AI adoption in education. Your interest and participation in this conversation are essential as we collectively navigate the complex terrain of integrating AI into educational settings.
As educators and researchers, Marc, Lance, and I are deeply committed to fostering a dialogue that prioritizes gradual evolution, practical recommendations, and tangible outcomes in AI adoption. We recognize the challenges and opportunities that AI presents in both K-12 and higher education environments, and we believe in the power of collaboration to address them.
We invite you to share your thoughts, experiences, and insights in the comments section below. Your perspectives are invaluable as we continue to explore the potential of AI in education and strive to create more engaging, efficient, and equitable learning experiences for all students.
Thank you for being part of this conversation, and we look forward to continuing it with you.
Nick Potkalitsky, Ph.D.
Marc Watkins, Ph.D.
Lance Cummings, Ph.D.