MLA Releases 1st Working Paper on AI
Teachers Need Time to Train; Avoid Surveillance of Students
Today (Monday, July 17) is an exciting day in the unfolding conversation about implementing and integrating generative AI into today’s schools.
The MLA-CCCC Joint Task Force on Writing and AI released its first Working Paper, focused on an “Overview of the Issues, Statement of Principles, and Recommendations.”
The MLA is the Modern Language Association, one of the oldest US institutions dedicated to the study of languages and the humanities. The MLA is best known for publishing the MLA Style Guide, but it supports the work of humanities and language teachers (secondary and collegiate) in numerous ways. See the introductory section of the Working Paper for more examples.
If you are interested, the MLA is offering a free training session on AI and writing on July 26 at 2 pm. Please pre-register.
Most Salient Points in MLA Working Paper
Since not everyone has the time to comb through the entire Paper, I want to offer up the most salient points. On the whole, the MLA (1) is very suspicious of these new technologies, (2) has concerns about the lack of resources available to teachers to prepare for the fall, and (3) cautions against using detection software to police students, which may disproportionately affect marginalized groups. (Everything in italics is a direct quote from the document, with page numbers indicated.)
We believe that writing is an important mode of learning that facilitates the analysis and synthesis of information, the retention of knowledge, cognitive development, social connection, and participation in public life. (p. 2)
The increased use and circulation of unverified information and the lack of source transparency complicates and undermines the ethics of academic research and trust in the research process. (p. 5)
All of the text a model generates is original in the sense that it represents combinations of letters and words that generally have no exact match in the training documents, yet the content is also unoriginal in that it is determined by patterns in its training data. (p. 6)
A model cannot reliably report on which sources in its training data contributed to any given output.
Risks to Students:
Students may miss writing, reading, and thinking practice because they submit generative AI outputs as their own work or depend on generative AI summaries of texts rather than reading.
Students may not see writing or language study as valuable since machines can mimic these skills.
Risks to Teachers:
Teachers may be asked to make significant changes to their practice without adequate time, training, or compensation for their labor…
Teachers may lack adequate support and up-to-date training to understand LLMs as they relate to our disciplines.
Teachers will need to spend time and energy developing critical AI literacy (that is, literacy about the nature, capacities, and risks of AI tools as well as how they might be used), which will divert their attention away from other teaching practices and course content unless adequate resources are given to build it into the curriculum. (p. 6)
Benefits:
It has the promise to democratize writing, allowing almost anyone, regardless of educational background, socioeconomic advantages, and specialized skills, to participate in a wide range of discourse communities. These technologies, for example, provide potential benefits to student writers who are disabled, who speak languages other than English, who are first-generation college students unfamiliar with the conventions of academic writing, or who struggle with anxiety about beginning a writing project. They also augment the drafting and revising processes of writers for a variety of purposes. (p. 8)
Principles:
Provide support for teachers as we adapt our teaching methods and materials and respond to the complexity of issues and labor involved.
Center the continued teaching and learning of writing on writers and the inherent value that writing has as a mode of learning, exploration, and literacy development for all writers.
Create guiding documents, guiding materials, and resources for students and teachers that can be a foundation for policy and discussions of best practices, one that emphasizes the value of process-focused instruction and activities to the continued development of students’ intellectual and literate lives.
Focus on approaches to academic integrity that support students rather than punish them and that promote a collaborative rather than adversarial relationship between teachers and students. We urge caution and reflection about the use of AI text detection tools. Any use of them should consider their flaws and the possible effect of false accusations on students, including negative effects that may disproportionately affect marginalized groups (p. 10)
Develop policy language around AI by promoting an ethic of transparency around any use of AI text that builds on our teaching about source citation.
…
7. Use caution about responses that emphasize surveillance or restrictions on the writing process that make the conditions of writing for class radically different from writing conditions students will encounter in other classes, work environments, and their personal lives.
8. Prioritize the development of critical AI literacy in faculty leaders and higher education administrators. By this we mean not just how AI models work but also the risks, rewards, capacities, and complications of AI tools. (p. 11)
Follow-up
This Working Paper is a very productive addition to the conversation; it aligns with much of my own research while raising some interesting new questions. I love the clear emphasis on the necessity of writing as a tool for analysis and literacy development. The call for more teacher training, and for dedicated, paid time to do that training, is crucial as the beginning of the 2023-24 school year draws near.
I really like the call for “critical AI literacy.” This is the first time I have seen that phrase in the literature. “AI literacy” materials do exist, but they were mostly written prior to the release of ChatGPT and focus only on the technological operations of AI. The call for a critical version of AI literacy would include a focus on the “risks, rewards, capacities, and complications of AI tools” (p. 11). Throughout the document, risks and complications are framed in terms of ethical implications and consequences.
I am very much taking to heart the call for caution in applying AI detectors. The last thing we want is an AI Cold War in our classrooms where our students feel constantly surveilled and where our implicit biases lead us to surveil some students more than others.
After reading this document, I believe educators can best use their remaining summer prep time writing clear policies for classroom use of AI, or developing processes for composing those policies with the assistance of students. In tandem, schools should prioritize the development of specific “critical” AI literacy courses. These trainings will be of great assistance to teachers as they refine their policies over the course of the fall semester.
I very much appreciate the hard work of this Committee and am excited to read more of their working papers in the future.
Thanks for checking out this post. Share it with a friend or acquaintance, and help me keep growing this small network of interested educators, administrators, parents, and human beings!
Be well,
Dr. Nick Potkalitsky
Article on declining accuracy of AI this spring/summer: https://gizmodo.com/study-finds-chatgpt-capabilities-are-getting-worse-1850655728?utm_campaign=mb&utm_medium=newsletter&utm_source=morning_brew