Navigating New Horizons: Charting the Future of Writing Instruction
Educating AI: The 2023 Year-End Comprehensive Review, Part 3
Introduction
Greetings, Readers!
This post, Part 3, concludes my “Year-End Comprehensive Review.” If you haven't had the chance to read the first two posts, you can find them here:
From Novelty to Necessity: My Evolution with AI in Education
8 Essential Techniques for Integrating AI into Classroom Teaching: Part 1
In what follows, I develop strategies for writing instruction inside a broader AI-responsive instructional approach. Feel free to navigate through these readings to find the information that is most relevant to you and your classroom practice.
I am most proud of my work in Sections 5 and 6, so don’t miss that content. I feel like some doors are opening right now with regard to AI-responsive writing curricula, but we have to move quickly to take advantage of the moment before our students settle into less productive pathways and habits of use.
Here is a Table of Contents:
Section 5: Developing Rationales for Unassisted Writing in the Classroom
Section 6: Teaching Students How to Use AI as an Editor and Thought-Partner
Section 7: Challenging Teachers to Think the Unthinkable
Section 8: Experimenting with New Methods through Collaboration
Let me know what you think. And please share your practices and insights.
5. Developing Rationales
Initial Responses to Gen AI as a Writing Tool
Today's students need more than just prescriptions or prohibitions of specific AI practices. They require thoughtful, persuasive, and evidence-based rationales. These rationales should guide both classroom interactions and daily engagements with Gen AI.
The existing literature on Gen AI and future writing instruction reveals two dominant schools of thought. The first group strongly opposes using AI-infused text generation as evidence of student learning. The second group enthusiastically endorses a widespread shift to AI-infused text generation across the curriculum.
Notably, both schools of thought, as first-wave reactive responses to the widespread accessibility of Gen AI, under-theorize both their rationales and their practices.
Opponents of Gen AI in education warn about its risks. However, they provide little supporting evidence from research and academic communities. On the other hand, proponents present utopian visions of Gen AI reshaping schools. Yet, they offer little practical insight into transitioning today's schools to such a future.
Substack: A Pedagogical “Middle Ground”
Notably, Substack is now a hub for many educational theorists, researchers, and teachers who operate in the middle ground of this debate. This group includes Dan Meyer, Nat at AI Observer, Birgitte Rasine, and many others. While reading their work, I search for rationales that will be compelling to students in real time, especially when paired with specific frameworks and methods for engaging with Gen AI.
Possible Rationale: AI Prepares Students for the Workforce
To date, one of the most commonly expressed rationales for fuller inclusion of AI-enhanced text generation across the curriculum is preparation for the jobs of the future. In its most popularized version, the argument holds that students who do not become experts at generating different kinds of texts with the assistance of Gen AI will be at a disadvantage when applying for jobs and when working in a labor market where such skills and competencies will be highly valuable assets.
Opponents of this viewpoint counter that current practices of AI-enhanced text generation will be obsolete within five years, and that schools should therefore resist the temptation to become job-training centers and instead continue to focus on more fundamental skills and competencies, including traditional writing methods and computer science.
In my own work, I have tried to triangulate around this division by pointing to a different set of rationales, ones that I believe will be more persuasive because they focus on immediate, tangible returns rather than promises of future gains.
Possible Rationale: Unassisted Writing is More Knowledge-Constituting
In a previous post, I raised the question of knowledge-constitution as an alternative fulcrum point in the debate about how to transition schools and classrooms into more AI-responsive perspectives and methodologies.
This rationale is as much an initial guidepost for first steps in our redesign of pedagogical frameworks and instructional approaches as it is an opening for a broader research program.
What kinds of activities, methods, and applications (AI-enhanced or otherwise) help students generate knowledge, build meaningful connections between disparate concepts, translate information from one knowledge domain into another, form lasting memories, and inhabit flow states in pursuit of fresh, synthetic arrangements of existing knowledge building blocks?
There is an extensive body of literature establishing the practice of writing unassisted by Gen AI as a unique instructional experience and context in which many, if not all, of these goals and objectives become readily available to all students with proper assistance, guidance, framing, instruction, and assessment.
The Impact of Writing with AI: Still More to Learn
By contrast, we have relatively little good research on the efficacy of AI-enhanced writing practices as reliable pathways for knowledge-constitution, experience-grounding, and knowledge-synthesis. This is not to say that these practices lack these properties, but rather that their efficacy is as yet unknown. Hence the opening for a broader research program.
In this context, teachers and administrators can make a strong appeal to students that writing unassisted is an activity with significant benefits for their cognitive and academic development.
Appealing to the Here-And-Now
While this rationale will not be persuasive to all students, it has the advantage of appealing to the here-and-now. Each time a student sets aside Gen AI, particularly during the early stages of the writing process, and draws on their first-order and second-order memory in order to synthesize new ideas in the form of written expression, they are involved in the work of knowledge-constitution, which plays a central part in the development of critical thinking, meta-cognition, ethical evaluation, organizational awareness, and relational analysis.
Notably, in the debate about preparation for the future of jobs, these broader competencies are usually regarded as the best predictors of success in the workforce in the face of the radical unpredictability of future economic, technological, social, and political developments.
Continued Work: Stacking and Sequenced Rationales
If teachers and administrators make this case persuasively by stacking and sequencing the related rationales and sub-rationales, embedding our messaging across the curriculum, and engaging students in the vital work of promulgating this message to peers, we just might open the door wide enough for students to entertain the notion that writing in more traditional forms is in their long-term best interests.
In my own classroom, I found the above case to be quite persuasive with my students, and the case became more persuasive as students engaged in unassisted work.
In other words, writing without the assistance of AI yields deeper engagement with content, more expansive modes of critical thinking, more experiences of original insights and control over the process of knowledge constitution, and a greater sense of satisfaction at having worked through a challenge successfully without overreliance on competing systems of thought generation.
LLMs and College Essays
This positive feedback loop is also evident in this year’s college application cycle. In March of 2023, several media outlets declared the death of the college essay, predicting that most seniors would use LLMs to generate their application materials for the 2024 admission cycle.
At my school, the opposite has occurred. Students do not trust LLMs to produce the high-quality, personalized writing required for college admissions, so they have continued to write their college essays using more traditional methods. Some use LLMs to polish individual sentences later in the drafting process or to sharpen a conclusion, but on the whole, students are realizing through experimentation with these models that the high-level rhetorical performance, voice-personalization, knowledge-generation, and concept-synthesis that are the hallmarks of a good college essay are most efficiently produced unassisted.
6. Using AI as Editor and Thought-Partner
Limited Ways to Interact with LLMs
Current LLMs provide limited options for interactivity. Users enter a prompt to generate a whole new text or to modify an existing one, an input-output pathway that replaces the original text entirely with the newly generated text.
Several Substack writers, including Daniel Bashir, have explored different interaction modes. Concurrently, various companies are developing products built around these modes, set for public release soon.
These new interaction modes allow users to focus on specific text segments for revision and editing with an LLM. Within the interface, users can try out various rephrasings while keeping the original text visible. This approach grants writers more control over the revision and adaptation process.
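To make this interaction pattern concrete, here is a minimal sketch in Python. Everything in it is hypothetical: propose_rephrasings() stands in for whatever call to an LLM a real product would make, and the choose callback stands in for the writer’s decision. The point is the control flow rather than the model: the original segment stays visible alongside the candidates, and keeping the writer’s own sentence is the default.

```python
# Minimal sketch of a segment-level revision loop (hypothetical names throughout).
from typing import Callable, List

def propose_rephrasings(segment: str, n: int = 3) -> List[str]:
    # Placeholder: a real interface would call an LLM here to get candidates.
    return [f"[candidate {i + 1} for: {segment!r}]" for i in range(n)]

def revise_segment(
    original: str,
    choose: Callable[[str, List[str]], str] = lambda original, candidates: original,
) -> str:
    """Show the original next to its candidates; the writer decides what survives."""
    candidates = propose_rephrasings(original)
    print("Original:   ", original)
    for i, candidate in enumerate(candidates, start=1):
        print(f"Candidate {i}:", candidate)
    # The default `choose` keeps the writer's own sentence unless they opt in.
    return choose(original, candidates)

if __name__ == "__main__":
    sentence = "The experiment was done by us over two weeks."
    print("Kept:", revise_segment(sentence))
```

The design choice worth noticing is that nothing is overwritten automatically, which is the opposite of the whole-text input-output pathway described above.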
The Real Promise of AI as a Writing Assistant
Current LLMs have remarkable capabilities and can be of great assistance to student writers, particularly in the first and last stages of the writing process. For instance, I have found LLMs to be a powerful tool for generating essential questions, possible answers, research pathways, and possible outlines. And yet, once LLMs generate these possibilities, they are solidified into the textual field of the prompt generation cycle.
Granted, we can copy and paste these products into a word processor and adapt them to our hearts’ content, but I think that today’s students relate to these “solidified” offerings in a way that is fundamentally different from users who are not digital natives. More foundationally, emerging research suggests that when students are offered too many instrumental examples during the initial stages of a research or writing process, they have increasing difficulty, depending on their own skill levels and on the number and quality of the examples, individuating from those preconceived notions, patterns of analysis, and research conclusions.
Strategic Modulation between Tool Use and Unassisted Time
For this reason, some neuroscientists have suggested that students put away all their technological devices and resources at strategic moments to take stock of their developing project and to take detailed handwritten notes on possible pathways forward, before returning to those resources and diving back in.
A Classroom Experiment
In my own classroom, I ran a rather informal experiment about the influence of models and examples on creative thought during a unit on argumentative writing. I assigned two different classes the activity of defending the claim, “chocolate ice cream is the best ice cream there is,” by supplying two pieces of evidence and explaining the reasoning that connects each piece of evidence to the claim.
The first class was allowed to use technology to research existing evidence. The second class was asked to complete the assignment without the use of assistive technology.
In the first case, once computers were opened, very little original generation of evidence took place. The emphasis of the lesson became the comparison of different source materials and the various interpretations of the qualifier “best.”
In the second case, students initially struggled to get going, but then created a list of evidence that included three items that did not appear on the first class’s research list. The tenor of the lesson was completely different as students worked more creatively and collaboratively toward shared solutions.
AI: Enabling vs. Overriding Design Thinking
Now, let me be clear that I am not arguing against the use of models and examples in our classrooms. Rather, I am extrapolating from these classroom experiences a trend or a process that I think can apply structurally to the kinds of interactions our students are having with current LLMs.
When a student asks an LLM to generate an outline on a particular topic, that projection can serve either to open up new fields of discourse or to close them down, depending on the student’s past experiences, skills, and competencies, the teacher’s framing of these uses, and the technology’s structural properties.
What this means is that LLMs can be used successfully in the initial phases of the writing process, but teachers have to consider very carefully the process through which these tools are introduced to students, while keeping in mind that reliance on these tools outside the frameworks introduced in class will continue to complicate their use throughout the research process.
In Hope of New Interfaces that Amplify Agency
One final note: I personally think there is great promise for LLMs as thought-partners and editors in the later stages of the writing process. Our students already rely on applications like Spellcheck and Grammarly throughout the writing process in both capacities. When one adds advanced Gen AI into the mix, students will have increased ease and efficiency in bringing their written materials to a state of completion.
Again, educators will need to pay close attention to how students interact with LLMs throughout the process of revising, editing, and proofreading. I appreciate how current LLMs can offer students a range of different options when it comes to interaction, albeit through the prompt window.
An LLM can generate a list of possible edits, analyze a text from opposing viewpoints, provide a series of critical questions about a text, and offer alternative articulations, although this final capacity is the most problematic given the low level of interactivity of the exchange and the limits of current models more generally.
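As a rough illustration of those capacities, here are a few prompt templates of the kind a student might paste into a chat window. The wording is entirely my own and purely illustrative; it is not drawn from any particular tool or curriculum.

```python
# Illustrative prompt templates for using an LLM as an editor and thought-partner.
# The phrasing is hypothetical; adapt it to your classroom's frameworks.
EDITOR_PROMPTS = {
    "possible_edits": (
        "List possible edits to the passage below. Do not rewrite it; "
        "describe each suggested change and the reason for it.\n\n{passage}"
    ),
    "opposing_viewpoints": (
        "Read the passage below and summarize how a skeptical reader "
        "might push back on its argument.\n\n{passage}"
    ),
    "critical_questions": (
        "Ask five critical questions a careful reader might raise "
        "about the passage below.\n\n{passage}"
    ),
}

if __name__ == "__main__":
    passage = "Chocolate ice cream is the best ice cream there is."
    print(EDITOR_PROMPTS["critical_questions"].format(passage=passage))
```

Notice that none of these templates asks the model to produce replacement text outright; the student remains the decision-maker.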
7. Challenge Yourself to Think the Unthinkable.
Think the Unthinkable
Even as educators begin the work of designing thoughtful methods for interacting with LLMs and generating rationales for the continued practice of literacies, skills, and competencies that predate the widespread accessibility of these tools, these technological tools continue to advance and develop at an astounding speed.
Considering the current volatility and unpredictability, I am developing my ability to “think the unthinkable”: envisioning the future of pedagogy, instruction, and education over the next five to fifteen years.
Un-Sediment Presuppositions, Biases, and Assumptions
Here, I frame this exercise as “thinking the unthinkable” because I believe it is worthwhile to contemplate educational futures where many of our time-honored traditions, methods, and approaches no longer apply.
I know that I personally am very much attached to particular ways of doing school. There is a lot of research that indicates that we teachers, if left to our own devices, tend to teach our students in the ways we were taught and the ways we learn best.
While my newsletter is a testament to my commitment to certain pre-LLM educational practices (primarily, the need for unassisted writing time as an integral part of cross-curricular coursework), I know that it is also productive for me to let even that “necessity” go in order to open up space for other, potentially more effective frameworks and methodologies.
I would not characterize this as truly counterfactual thinking, but rather as the use of the unpredictability of the future to un-sediment presuppositions, biases, and assumptions in pursuit of theoretical and analytical refinement.
A Thought Experiment: School as a System of Inputs and Outputs
When I engage in this work, I see the future of school in terms of inputs and outputs.
Students need to take in content that is external to their own frames of references in order to build new literacies, skills, and competencies.
Then, students need to produce something external from themselves to demonstrate that they have mastered content, literacies, skills, and competencies.
As students grow and learn, this process of input and output unfolds on varying scales, through recursive feedback loops or more directed pathways, building through sequences and stages to higher degrees of knowledge, skill, engagement, and immersion in personal and academic development.
Traditional vs. Futurist Conceptions of Inputs and Outputs
Traditionally, reading has functioned as the primary input, and writing as the primary output. Educational futurists have challenged us for decades to imagine pedagogies, curricula, and instructional approaches in which this traditional framework is fundamentally altered in some way.
Either reading and writing are complemented by other input/output modalities, or, more radically, reading and writing no longer function as the primary input and output. In these scenarios, students receive information and instruction through alternative media formats, and then produce evidence of mastery in those same alternative formats.
Upon the arrival of LLMs, educational futurists reengaged with these imaginative scenarios, this time prompted by real-world technological developments that made these hypotheses feel more pressing.
In their writings, they asked educators to use the arrival of LLMs as the opportunity to focus on the skills and competencies beneath existing, more traditional processes such as the lab report, the personal essay, the research paper, and the translation exercise.
Here, the goal of these efforts was to inspire educators to reconceptualize traditional assignments in terms of these core skills and competencies and to seek out methods for measuring mastery that were not as susceptible to plagiarism by Gen AI tools.
After Thinking the Unthinkable, Review Your Practice
In my own thinking the unthinkable, I find it very hard to imagine school without continued reliance on reading and writing. There are too many cognitive, social, and academic benefits associated with these activities.
That said, I am willing to consider the possibility that tomorrow’s students may not need to know how to expertly polish and finish a writing assignment, for example.
While I continue to see immense value in unassisted first-draft composition and a first unassisted round of edits and revisions, I sense that LLMs, given their incredible abilities to work with existing texts as editors and thought partners, will render the skills of text-finishing less critical, much as bibliography engines have reduced how much students need to know about creating works-cited entries from scratch.
I personally find LLMs quite useful in this capacity, using them regularly to catch errors and rewrite the odd, clunky sentence.
And yet, a whole host of perspectives, processes, and routines need to be meticulously cultivated and taught across the curriculum to create the space and discipline in which students refrain from overreliance on LLMs throughout the process of editing and revising, viewing suggested feedback as just that: suggestions.
My sense is that these lessons will best be taught through doing: through the comparison of different results and through the comparison of personal experiences while writing. But I will leave the development of these lessons for another newsletter.
8. Experiment Through Collaboration.
Developing a Research Program
Tackling Gen AI's complexities demands that educators update and modify curricula with scientific precision. This approach mirrors the meticulousness of scientists, designers, and engineers. Before making changes, I urge educators to devote time and effort to formulating precise 'research questions.' These should pertain to their specific teaching contexts and situations.
Developing Research Questions
Within the framework of AI-responsive Instruction, an effective research question should concentrate on the ways in which a specific assignment, activity, or lesson will enable students to achieve clearly defined educational goals and objectives. When crafting a research question and proposal, educators might consider investigating the following areas:
Relevance to Particular Educational Goals
Feasibility of the Particular Assignment / Activity / Lesson
Mechanism for Gathering and Evaluating Evidence
Ethical Considerations
Alignment with Current Theory and Practice
Openness to Sudden Innovation or Development
Needless to say, running a successful research program while teaching a full load of classes is beyond the capacity of many, if not all, individual educators. Realizing this, I suggest that teachers make the work lighter by experimenting through collaboration.
Breaking out of the Isolation Trap
One of the most challenging aspects of responding to Gen AI is the solitary or individual nature of the work in most schools. Despite many good intentions over the summer, very few schools within my network have gone beyond the small committee stage in their institutional responses to Gen AI. Typically, these committees consisted of a collection of faculty members and administrators who showed interest.
They convened a few times during the late summer or early fall to establish AI policies. However, their regular meetings were discontinued once the semester hit full swing. Since then, policy decisions have fallen back into the hands of senior administrators and superintendents, who remain deeply divided about the overall value of AI-responsive curricula and methodologies.
In the face of these institutional barriers and tension points, I advise teachers to work collaboratively in small cohorts to test different hypotheses about the value, impact, and efficacy of AI-responsive classroom strategies and approaches.
Find or Build a Cohort
Here, departments, grades, or divisions might serve as a natural grouping for research inquiries. For example, an English department can spend a semester focusing on the use of AI as an in-class editor of text. Each teacher in the department can implement slightly different strategies in the classroom, and during department meetings, teachers can share notes and observations about the impact of these different approaches.
These studies do not have to rise to the level of peer-reviewed research. What we really need right now is data suggesting that these tools can have a positive impact. The goal is to move from the level of the individual anecdote to something like a more comprehensive viewpoint on the value of implementing and integrating AI into today’s classrooms.
Bringing Routine and Analysis to the Unpredictable
This is how major technological shifts in education have tended to proceed. First, the technology drops, and everyone panics. Second, innovators test out different use cases. Third, researchers publish studies. Fourth, districts and schools respond holistically and productively as institutions.
So be part of the change. I will be setting up a database soon where educators can publish, catalog, and compare their findings during the Spring Semester of 2024. Stay tuned.
I hope you enjoyed Educating AI’s 2023 Comprehensive Year-End Review. Please share these newsletters with anyone in your network who you think might benefit from this information. I have really enjoyed creating these broad recommendations, as they have allowed me to synthesize a number of different thought strands and to build a secure foundation for next year’s content.
Be well! Happy New Year!
Thanks for reading Educating AI!
Nick Potkalitsky, Ph.D.
This is synthesizing around a pretty useful framework. In many ways, AI is forcing us to articulate better frameworks for writing which are useful without AI. Maybe even more useful for writing without AI!
First of all, thanks for including me in this thought-provoking piece. You touch on some important topics here. Let me share my experience briefly: when I need to read a peer-reviewed paper that may contain terms I am not familiar with, I ask AI to “educate me on the subject.” I think, in writing, this is the best thing we can do with current systems. Critical thinking is important. We need to be careful in our approaches; otherwise, we could get a generation that stops thinking. AI can serve as a mentor to guide us through complexities and help us better understand our research areas.