Imagining the Future of Writing: 4 Predictions for AI-Human Interactions in the Coming Decade
K-12 Divergence; Undergraduate Adaptation; AI-Reliant Professionals; AI-Resistant Minority
Reading Time: 10 Minutes
AI's Transformation of Writing: How AI is reshaping writing in education, publishing, and content creation, emphasizing the collaboration between human ingenuity and machine intelligence.
AI in Education: Strategies for incorporating AI into teaching and learning, preparing students for a future intertwined with AI, through project-based learning and digital literacy.
The Future of Professional Writing: An exploration of AI's impact on professional writing, including changes in practices, genres, and the balance between creativity and efficiency.
Navigating Ethical and Creative Frontiers: A look at the ethical, legal, and creative challenges posed by AI in writing, addressing authorship, originality, and privacy concerns, alongside the potential of AI to innovate content creation.
Introduction: Innovation as an Academic Driver
In this article, I am going to attempt the inadvisable: forecasting the future of writing for the next 5 to 10 years. As we all know, AI tools are revolutionizing the human writing process from the ground up. Many of us have already explored various AI-human synergies in pursuit of different communicative and rhetorical goals. While many of us have found these interactions productive for a variety of tasks, over a year into our LLM journey, we still have many pressing questions about how AI will impact education, business, publishing, entertainment, politics, and our careers, to name just a few possible points of engagement and development.
In recent months, my focus has increasingly turned towards the critical role that educational systems play in equipping both the national and global workforce for a future where collaboration and competition with machines are essential for maximizing efficiency, creativity, and impact.
I'm thrilled to share that soon, the remarkable Amrita Roy of The Pragmatic Optimist will contribute a guest piece to Educating AI: the 3rd Part of the very popular and critically acclaimed (by me!!!) series "Don't Bet Against America" (Part 1 and Part 2). Her article will delve into AI's role in spurring a productivity surge in the U.S. in the coming years. This concept may be unexpected for some, especially against the backdrop of numerous reports predicting significant job losses due to AI automation.
Amrita and I are advocating a perspective that views our education system at a critical juncture over the next 3-5 years. It has the chance to either support workers and industries in developing the specialized skills and training necessary to transition millions of professionals into newly emerging roles, or it can resist change, uphold barriers and traditions, and consequently, see the U.S. fall behind in the global race for educational innovation, a value that should be aspired to in tandem with academic excellence.
Here, at least, I am envisioning innovation as the essence of academic excellence, realized through inquiry-driven, project-based learning that immerses students in real-world scenarios and simulations. This approach encourages students to progressively assume responsibility for their academic achievements, as well as their social and personal growth.
It is worth restating–as we do every couple of weeks at Educating AI–that current AI, including tools like ChatGPT, does not engage in what Daniel Kahneman describes as System 2 thinking—deliberative, analytical, and logical processing. Thus, the locus of innovation for the near future—and as echoed throughout my predictions—will reside within the interactive space where AI and humans intersect, never within the AI itself. This critical distinction is vital to remember as we navigate the future of educational and professional landscapes.
Source: Neurofied
In the following forecast, I am under no delusion that my words carry the force of actuality, but I find these exercises helpful in organizing my own thoughts and priorities as an AI educational pioneer–and I hope you do too! What I am about to imagine is a full future trajectory transformed by AI. Following this thought exercise, I will make some pointed recommendations about tentative next steps for the upcoming school year. While, dear readers, you should not take these predictions too seriously (please, don't burn your book manuscript!!!), we must note that very real changes are coming, and the best approach—one that I have advocated for many months—is to be proactive, to be the change, and to take positive steps in assisting students in the important work of making sense of the rapidly evolving writing world.
So come on board and imagine with me! Test your own imaginations against mine! Please share your observations, insights, and criticisms in the comments as we embark upon this collective creative discovery.
4 Predictions for the Future of Writing
1. The future of K-12 writing instruction will diverge, with primary schools maintaining a focus on traditional skills and secondary schools navigating the integration of AI tools.
In primary schools, instruction will continue to focus on the development of core writing skills and competencies largely without the assistance of AI tools. This approach will ensure that young students establish a strong foundation in traditional writing techniques and develop their creativity and critical thinking abilities independently.
In middle and high school, instruction will undergo a tumultuous 5 to 10 years as teachers toggle between traditional and new skills, competencies, and literacies. The process and product will be highly variable, dependent upon state, district, and local school policies, access to particular technologies, and the preparation and training offered to particular teachers. This period of transition will be challenging, as educators strive to strike a balance between incorporating new technologies and maintaining the integrity of traditional writing instruction.
Students will continue to surge ahead of teachers in terms of knowledge of new technologies, creating a serious gap between instruction and real-time usage. This disparity will put pressure on educators to adapt quickly and effectively integrate AI tools into their teaching methodologies. In the best-case scenarios, students will leave high school with traditional skills, competencies, and literacies enriched by a limited encounter with emerging and newly dominant technologies. These students will feel reasonably well-equipped to navigate the rapidly evolving landscape of written communication in their future academic and professional endeavors.
In worst-case scenarios, encounters with and increased reliance on new technologies will erode traditional writing, reading, and critical thinking skill sets in the absence of wise and measured intervention on the part of schools, districts, and universities. Without proper guidance and a balanced approach, students may become overly dependent on AI tools, leading to a decline in their ability to generate original ideas, construct well-structured arguments, and engage in deep, analytical thinking.
2. Undergraduate institutions will adapt their curricula to meet the changing skill sets of incoming students, leading to the reemergence of unassisted writing in specific contexts and the development of new assessment methods.
As long-form essays become more difficult to evaluate authentically, colleges and universities will shift to the evaluation of process over product. Consider this possibility: Students will be required to maintain a detailed log of research insights and experiences, providing instructors with valuable information about the choices they made regarding the technology they collaborated with throughout the journey towards a particular outcome.
Higher-order thinking and content knowledge will emerge in the interconnections between these more journalistic and impressionistic reports and the final content, which may lack the distinctive voice of the researcher that previous generations were accustomed to. As students develop proficiency with AI and as the capabilities of AI tools advance, an essential component of an AI-assisted writer's skill set will involve leveraging AI to craft personalized and argumentative tones and filters tailored for distinct rhetorical contexts and objectives.
Simultaneously, colleges and universities will continue to reimagine the higher education classroom as an immersive, inquiry-based space across the curriculum.
As the demand for practical, real-world skills and experiences grows among students, parents, and alumni, the rise of AI offers a prime opportunity to broadly adopt an interactive, project-based approach to learning throughout the curriculum, where again the focus is shifted towards valuing the process above the final product. Although project-based learning has been shown to enhance student engagement, agency, and outcomes for several decades, college pedagogy has primarily remained entrenched in a lecture-delivery model. Within these inquiry-based spaces, a new form of writing may emerge or, rather, re-emerge.
As projects evolve to become more customized and personalized, AI tools are likely to fulfill roles resembling Kahneman's System 1 thinking—characterized by quick, instinctive responses. Meanwhile, students will take on the critical System 2 responsibilities, engaging in thoughtful, analytical, and reflective processing. In this dynamic, AI's role may become concentrated at the outset for brainstorming ideas and at the conclusion for refining the final draft.
When working on their detailed project narratives, students might discover that the depth and complexity of higher-order thinking required for their tasks are compromised by AI's inclination to simplify and swiftly synthesize data. This realization could lead students, especially towards the end of this decade, to actively seek out and value instruction in traditional writing methods, aiming to bypass the intellectual limitations they've experienced with AI-driven processes and to enhance their capacity for nuanced thought and creativity.
3. The vast majority of professional writers will come to rely on AI tools as a daily part of their writing practice, but they will do so in incredibly variable and unpredictable ways.
These practices will unfold along a spectrum between "humans-in-the-loop" (HITL) and "machines-in-the-loop" (MITL) practices, using the terminology defined by Alan Knowles in his pioneering study, "Machine-in-the-loop writing: Optimizing the rhetorical load."
HITL writing is a process where human involvement is essential at key points, especially in the final stages, to ensure ethical and quality outcomes.
In contrast, MITL writing is a model of AI collaborative writing where the AI acts more as an assistant than a co-author, with humans retaining the majority of the rhetorical load.
Writers will make choices about where to place themselves on the spectrum of AI-human workflow interaction based on several factors, including:
- Professional expectations and restrictions
  - Industry standards
  - Employer guidelines
  - Client requirements
- Time and expediency
  - Deadlines
  - Workload
  - Efficiency needs
- Professional stakes
  - Career advancement
  - Reputation
  - Financial implications
- Demands for depth of insight and originality
  - Audience expectations
  - Personal artistic vision
  - Unique perspective and voice
- Genre and literary conventions
  - Specific genre requirements
  - Traditional storytelling techniques
  - Experimental and innovative approaches
- Publisher requirements and expectations
  - Submission guidelines
  - Editorial preferences
  - Market trends
- Demands of the marketplace
  - Reader preferences
  - Competitive landscape
  - Emerging technologies and platforms
The rate at which society, professions, industries, and publishers shift to a culture of transparency around the use of AI tools will be a driving complicating factor for adoption. Currently, the use of AI technology to complete writing tasks is still somewhat of a cultural and societal taboo.
This is partly explainable due to (1) the multiplicity and ambiguity of the concept of AI-assisted writing and (2) the tendency for the cultural imaginary to fixate on use cases that involve extreme acts of laziness, duplicity, or plagiarism.
In this possible future, I hope to outline many other potential trajectories for AI-human interactions that might lead to more widespread awareness and acceptance of these tools. The undercurrent of what I am outlining here is that after a given period, so many people will be relying on these tools that the taboo may give way to a new cultural narrative.
However–even if taboos shift–it should be noted that there will continue to be professions where the use of AI will be prohibited or heavily regulated for legal and ethical reasons. These include:
- Law
  - Legal documents
  - Court proceedings
  - Contracts and agreements
- Medicine
  - Medical records
  - Diagnostic reports
  - Treatment plans
- Education
  - 504 plans
  - Individualized Education Programs (IEPs)
  - Official student records
- Finance
  - Financial reports
  - Audits
  - Regulatory filings
- Government
  - Official documents
  - Public statements
  - Policy proposals
In many law practices, AI is currently prohibited following careless use cases in which AI-generated materials containing hallucinated content were filed in court-bound documents.
We ought to approach any AI-enhanced medical modality with warranted skepticism. It's imperative that medical records are safeguarded by the most robust security protocols. Presently, commercial AI systems fall significantly short of providing the level of security and protection that is deemed acceptable.
Where the stakes are so high, we will likely continue to see such restrictions until more advanced systems, such as more robust Generative Adversarial Networks (GANs), are developed to identify and remove hallucinatory content. Even then, it is advisable for readers to always review AI-generated content before integrating it into their final documents.
4. A significant minority of professional writers will choose not to use AI tools as a part of their writing practice. However, they will increasingly have to work very hard to avoid doing so as AI becomes more deeply embedded into existing technologies.
The reasons these writers will refrain from AI use will range across a wide spectrum, including philosophical, ideological, practical, cognitive, and professional considerations.
While I could speculate on the nature of possible encampments around various ideological antipodes, I am most interested in the practical and cognitive aspects. As suggested in several articles over the past year, and most recently in my analysis of "excellent writing," unassisted writing entails a particular kind of cognitive and heuristic experience that allows the writer to enter into a deeply immersive and dialogical space with their own thought process.
Speaking from my own experience, I can attest to the power and pleasure of an AI-assisted, conversationally enabled writing process. However, the constant back-and-forth leads to a momentary depletion of attention and focus, as well as a kind of wavering doubt and uncertainty in a given assertion or proclamation. With just a keystroke, 10 alternative assertions could inhabit the page in a heartbeat. Which one do I really believe? Classic choice anxiety!!!
This "drowning in possibilities" effect can be overwhelming when a writer is trying to bring a draft to completion. In this crucial moment of revision, the writer gets another opportunity to push thought creatively forward. While the initial drafting process has pushed thought 85% of the way, the final edit and review will usually result in some curious synergies across the text—sentences and phrases suddenly snap to attention and link into tighter connections, sometimes yielding a tertium quid—a fresh insight—to square the circle and bring the conclusion home.
An AI writing tool might present an array of 20 corrections or five alternative conclusions. Yet, the mere act of stepping back to seek input from the AI could disrupt the writer's deep engagement with the creative source, potentially causing valuable insights to elude them. This detachment from the writing process, facilitated by reliance on AI, may diminish the writer's capacity to discern the most relevant option among the suggestions provided.
Reasons why some writers may choose to avoid AI tools:
- Philosophical objections
  - Belief in the importance of “pure” human creativity and originality
  - Concerns about authorship and ownership
- Ideological stances
  - Opposition to the increasing role of technology in the creative process
  - Desire to maintain traditional writing practices
- Practical considerations
  - Familiarity and comfort with traditional writing practices
  - Uncertainty about the reliability and consistency of AI-generated content
- Cognitive affordances
  - Value placed on the immersive, dialogical, and heuristic nature of unassisted writing
  - Importance of maintaining a deep connection with one’s own thought process
- Professional requirements
  - Industry and genre-specific expectations for human-authored content
  - Client or employer preferences for non-AI-assisted writing
6 Future-Oriented Takeaways for Teachers, Administrators, Trainers, and Experience Designers
By way of conclusion, I would like to offer 6 practical, specific, and detailed takeaways for classroom pedagogy, practice, and instruction based on the article:
Incorporate AI literacy into the curriculum: Develop lessons and projects that teach students how to effectively use AI writing tools, understand their limitations, and critically evaluate AI-generated content. This could include exercises where students compare AI-generated text to human-written text, discuss the ethical implications of AI in writing, and learn to fact-check and verify information.
Emphasize process-oriented writing: Shift the focus from the final product to the writing process itself. Require students to maintain detailed logs of their research, ideas, and collaborations with AI tools. Encourage them to reflect on their choices and experiences throughout the writing journey. This approach will help develop critical thinking skills and metacognitive awareness.
Implement project-based learning: Design inquiry-driven, immersive projects that simulate real-world scenarios and challenges. Encourage students to use AI tools as part of the brainstorming and editing phases, while emphasizing the importance of human-led analysis, reflection, and decision-making. This will prepare students for future careers where AI collaboration is likely to be common.
Teach AI-assisted writing strategies: Introduce students to the concept of "humans-in-the-loop" (HITL) and "machines-in-the-loop" (MITL) writing practices. Provide guidance on how to effectively collaborate with AI tools while maintaining human control and oversight. This could include lessons on prompt engineering, iterative feedback, and final editing techniques.
Foster a balanced approach to AI integration: Acknowledge the potential benefits and drawbacks of AI in writing. Encourage students to experiment with AI tools while also cultivating traditional writing skills and unassisted writing experiences. Emphasize the importance of maintaining a deep connection with one's own thought process and the value of human creativity and originality.
Promote ethical AI use and academic integrity: Develop clear guidelines and policies for the use of AI tools in the classroom. Teach students about the ethical implications of AI-assisted writing, including issues of authorship, plagiarism, and transparency. Encourage open discussions about the responsible use of AI and the importance of academic integrity in the age of artificial intelligence.
By implementing these strategies, educators can help students navigate the rapidly evolving landscape of AI-assisted writing, develop the necessary skills and critical thinking abilities to succeed in a future where human-AI collaboration is prevalent, and promote responsible and ethical use of these powerful tools.
Nick Potkalitsky, Ph.D.
I've been thinking about this post a lot the last couple of days. During part of that time, I was working on a presentation on AI and also trying to write out some of the confused muddle of feelings I have on the subject. (The latter were triggered by, but only partly about, David Runciman's 2023 book, The Handover: How We Gave Control of Our Lives to Corporations, States and AIs, which I highly recommend.) I don't know that I disagree with you fundamentally, but there are some ways that we are approaching AI in education that bother me these days. None of this is meant as an attack. I really enjoyed the essay.
One is that we are over-emphasizing writing. I realize that being a writer is at the core of the identity of most people who are writing about this, so they are going to focus on that. It is a feedback loop, and I worry that it is trapping us in too much consideration of AI. Of course we are talking about Large Language Models, but they do much more than just write. I know the programming people also write a lot about AI in education, but I don't see nearly as much about disciplines other than programming, data science, and writing in higher education. Maybe I am not looking hard enough, but I am concerned.
A second thing is around a point that many are making, but that you put very well: "Students will be required to maintain a detailed log of research insights and experiences, providing instructors with valuable information about the choices they made regarding the technology they collaborated with throughout the journey towards a particular outcome." There is a part of me that agrees with this, but a part that is perplexed and a little cynical. A good chunk of my job for the past several years has been to administer and support our university system's Turnitin instance and our online proctoring software. For years I have seen and heard the opposition of parts of the higher education community to both kinds of products. One of the criticisms is that they constitute some kind of unwarranted form of surveillance of students (especially the proctoring software). I would actually love to get rid of them. That is not happening anytime soon.
What strikes me about the proposal that students log everything they do is that we are asking them to surveil and report on themselves, internalizing the need for surveillance ever more deeply. I know this is not what is intended, but it seems an inevitable byproduct. Of course those who stray outside the lines will still log things the way they think they need to be reported, so we will not really be getting out of the cheating mentality. Between this and the fact that reviewing all of this material is going to take a lot of time while overworked adjuncts multiply, we will almost inevitably see software solutions where the students are required to log all they do, but which also log their actions on the computer, then analyze that and report back. We are already starting to see that a little. Under the current economic conditions of universities, I think this is likely to spread and create a whole new surveillance regime.
I have a couple of broader concerns about AI that I think affect education in ways that we are not anticipating.
One is that AIs currently, and for the foreseeable future, need us to behave in consistent ways. They need humans who are statistically predictable. As AI is applied to education, will we teach students to be too statistically predictable? That isn't very well put; I'm struggling with the concept, but it concerns me.
The other is that we take the inevitability of AIs for granted. It may be that their rise is inevitable, but I've been spending some time looking at different factors that might slow their spread, cause them to stagnate, or provoke widespread rejection and hostility from society. When it comes to history, everything is contingent, but most of us do not remember that. At the very least we need to start considering what might happen under various contingencies. Suppose there are mass student protests against AI as a surveillance technology, as a technology that needs to be decolonized, over environmental/climate issues, or because of its negative impact on jobs and careers. Suppose the 2024 election is so contaminated by deepfakes that there is a mass revulsion against AI? Suppose that AI becomes so weaponized by the various world powers that it must be tightly regulated and surveilled? Those are just a few. What do those do to education? What do they do to politics? To other aspects of society?
We are restructuring education to benefit AI. What happens to education if any of those scenarios (or many others) come to pass? What would it look like if we restructured AI to benefit education?
Thanks Nick!
As always I find myself nodding all the way through. :)
I think about these questions a lot as we foreign language teachers are primarily concerned with language acquisition, communicative literacy AND cultural competence in the target language. Writing is a single aspect of learning a language but an important one, which is why many colleagues are panicking or are reverting to traditional means of instruction, not wanting to allow students to push any buttons - thus not grasping the implications of the evolution of writing habits in society in general. To your points:
On 1: This is possible, but not very likely, at least not for a very long time. In terms of real-world performance, in middle schools we see students lacking basic proficiency in their own first languages, unable to write adequately for their maturity level. Also, given considerations of age limits and the data privacy of the frontier models, many are advocating for introducing AI tools in schools at a later age - not before 14 years. Also, not all schools or students' families can afford to provide devices (which are necessary for AI exploration). Where I am, in Austria, we are "lucky" in that we have 1:1 devices in all middle schools. All 10- to 14-year-olds get either tablets or laptops and have a compulsory media-training subject for one hour a week. So we have good circumstances, but this is not a universal picture.
On 2: I SO hope that the tertiary level will be more focused on process-oriented procedures. It makes more sense, but this is probably very difficult to implement, and many will resist.
On 3: this is already happening, so I see this as a given. Already my colleagues in Academia describe their changed workflows - with their AI as a partner, working side by side with them to research and publish papers. As you mention and an interesting thought from Ethan Mollick's book Co-Intelligence, many knowledge workers are already using many AI tools BUT they are not admitting it (taboo) for fear of criticism OR the fear that their texts will be considered of lesser value or inferior quality because an AI was involved.
On 4: I see this as well. AI will be everywhere. It will be difficult to resist.
Your writing is spot on. :)
P.S. I have actually now introduced the concept of centaurs and cyborgs to my 14-year-old students, and will mention HITL in a subsequent lesson. I think we will increasingly need to explain these concepts so that students gain competency.