The AI Winter of Our Discontent
Rising Above the AI News Cycle and Choosing a Productive Path Forward
A Frosty News Cycle
Since the beginning of the new year, the narrative around AI on media channels like X, Reddit, and Substack has shifted to a notably more critical tone. This change is fueled by a series of news stories that spotlight the ethical challenges at the center of AI development, policy-making, and security, and it reflects growing scrutiny of the impact and direction of AI technologies.
In my extensive reading, I've encountered five particularly striking stories. They stand out not only for their content but also for their influence on public opinion about how AI will be implemented and integrated. Together, they vividly highlight the pressing ethical concerns and potential societal impacts of AI, underscoring the urgency and complexity of navigating its future trajectory.
The New York Times vs. OpenAI and Microsoft: Involves a lawsuit alleging unauthorized use of The New York Times' content by OpenAI for AI model training, raising questions about copyright law and fair use in AI.
Explicit Deepfakes of Taylor Swift: Viral nonconsensual deepfake images of Taylor Swift, sparking significant fan action and highlighting the need for stronger legal protections against deepfake technology abuses.
Lawsuit Involving Comedian Sarah Silverman: Sarah Silverman's lawsuit against Meta and OpenAI, alleging unauthorized use of her books for AI training, raising ethical concerns about the use of source material in AI development.
AI 'Girlfriend' Chatbots in OpenAI's GPT Store: OpenAI's GPT Store grappled with a surge of AI 'girlfriend' chatbots, breaching its policies against romantic or explicit content, highlighting challenges in AI marketplace regulation.
Circulations of Affect in Popular Culture
In cultural and literary studies, we closely observe the circulation of "affect" or "feeling" around conceptual debates in popular culture, discourse, and media. These affective movements often gain autonomy, spreading unpredictably across networks of individuals, social groups, communicative apparatuses, and institutions. Amid AI's rapid evolution, especially in the wake of the OpenAI board's collapse and heading into the 2024 cycle, these affective currents have turned markedly volatile. Hence, it's vital to scrutinize not just the content of AI-related narratives but also the range and nature of the sentiments they evoke across society.
Humans Always in the Loop
The substance of these articles highlights a significant gap in AI regulation, particularly around training, development, and use. The companies responsible for these AI products and their associated use-case policies are either unable or unwilling to enforce them. In cases like AI avatar girlfriends, there's a reluctance to enforce; with deepfakes, the challenge is more about capability. Additionally, using copyrighted content to reduce costs looks like a tactic to avoid fair business expenses. This situation raises the question of the true cost of AI once all elements, including environmental impact, are factored in. The low subscription fees for services like ChatGPT Plus could be the result of a governmental-business-regulatory network prioritizing short-term gains over long-term consequences. However, with AI deeply integrated into our society, a techno-pragmatic approach is essential to navigate this new reality.
Impending AI Winter?
At the same time, the larger AI narrative is increasingly clouded by speculation about an impending AI winter. This sentiment is particularly pronounced as we near a year since the launch of GPT-4. Notably, key figures in the AI sector are shifting from unabashed enthusiasm to cautious expectation management. Concurrently, OpenAI appears to be pivoting from its roots in AI development toward a more pronounced focus on promoting software and applications, a shift that hasn't escaped media scrutiny.
The popular and technology media, once fervent in their embrace of AI's early promises, now find themselves in a precarious position. As the lofty expectations of spring and summer give way to a more critical and realistic assessment, these outlets are increasingly adopting an investigative stance, perhaps in an effort to retain credibility and audience engagement.
This scenario has led to what can be aptly described as an 'AI winter of our discontent,' coinciding, quite literally, with the winter season. It's a period marked by a reevaluation of AI's potential and limitations, leading to a subdued atmosphere in the AI community. As we navigate this phase, there's a collective hope for a 'thaw,' where the prospects of AI align more closely with its real-world applications and impacts.
Beyond the “On / Off” Switch
The complex emotions stirred by recent AI news stories and the grander AI narrative are complicating efforts to develop effective plans for AI integration in educational and professional settings. While reactions of shock, awe, and dismay are understandable, especially in response to what Alberto Romero termed a "race to the bottom", it's crucial not to simplify AI's future to a mere binary decision. AI is multifaceted and demands a nuanced approach beyond a simplistic "on-off" switch in our collective imagination. This perspective is vital in shaping a balanced view of AI's role in society.
There is a growing techno-pragmatist movement and disposition among the writers I rely on most and am building connections with as our Educating AI community continues to grow.
Surging Affect in Educational Ecospheres
Reflecting on the recent surge in anti-AI sentiment, I anticipate renewed opposition to AI's integration in educational settings. This echoes the February 2023 push by some schools and districts to ban AI outright, and it points to a reactionary trend that has run through the education system since the pandemic. Initially, such responses were necessary to ensure safety, but overreliance on strict regulation can become habitual. My concern is that schools, influenced by these emotional currents, might hastily implement prohibitions without fully grasping the implications. This risks further alienating students, who may already be weary of restrictions and could increasingly disengage from educational processes.
Moving Forward: 3 Possible Pathways
We're faced with a few options. Admittedly, each option includes many variations and nuances, and schools, districts, and universities can ultimately craft hybrid responses. But at this point, I feel educators need to see the options laid out and make a choice about how they want to move forward.
Ban Ed-Tech in Classrooms: This means prohibiting computers and AI in educational settings, though it ignores the fact that students might still use AI outside of school for their work.
Ignore AI Usage: We could continue to overlook the reality that a significant portion of students are already using AI to complete their homework.
Develop an Integrative Approach: This involves creating a comprehensive method to integrate AI into modern classrooms. This is the approach that Educating AI and other forward-thinking educators in our community are advocating for.
As educators, we are at a crossroads in how we approach AI in our daily work, classrooms, schools, and districts. The question is, will we allow shifting sentiments to deter us from the effort needed to create a comprehensive method for AI integration? Will the next shocking story halt our progress, prompting us to delay addressing AI's role in education? Whatever the flows of affect, the stances of our superintendents, or the flawed leadership of OpenAI, our students are already engaging with AI. Like the internet before it, AI demands that we develop a framework that not only protects students but also nurtures meaningful interactions. This approach is crucial for equipping students with the skills they need to contribute meaningfully to society in the 21st century.
Cultivating Techno-Pragmatism
The writing, technology, and education circles I engage with are witnessing the rise of a techno-pragmatist movement. Historically, pragmatism has often been overlooked, lacking the allure associated with more radical perspectives. It strikes a balance, acknowledging the merits of both optimistic and pessimistic viewpoints while charting a deliberate course through the middle ground. Techno-pragmatism, in particular, is intriguingly complex; in certain situations, it may even take on a seemingly radical stance through its ideas, networking, and implementations.
However, it's the ethos of balance that stands at the core of techno-pragmatism – a willingness to consider all perspectives and strive for a harmony that transcends baseless fears and unrealistic fantasies. This approach, valuing equilibrium and open-mindedness, renders techno-pragmatism particularly apt for navigating the nuanced challenges presented in the era of AI.
Writers in the Techno-Pragmatist Groove!!!
As you work through this winter of discontent, I challenge you to expand your network and include a few more techno-pragmatists in your fold. Consider reading the work of the following thinkers and writers:
Alejandro Piad Morffis
Alejandro Piad Morffis's "Techno-Pragmatist Manifesto" represents a striking articulation of a balanced approach in the digital age. Its beauty lies not just in its brevity but also in its rhythmic structure, which masterfully orchestrates a call-and-response dialogue between opposing viewpoints. The composition skillfully positions techno-pragmatism not as a mere compromise, but as a value-driven perspective. Contrary to the common perception of pragmatism as indecisive or value-neutral, Piad Morffis's manifesto illuminates the deep ethical commitments inherent in a techno-pragmatist stance. It reframes pragmatism as a thoughtful, value-laden choice, demonstrating that a middle path can be both principled and proactive, especially in the context of technology and its impacts on society.
Nat
Nat, a seasoned writer on all aspects of AI, has recently delved into the complexities of AI prompting, specialized use cases, programming, and cybersecurity. With years of expertise in computing and machine learning, Nat approaches the latest developments from a balanced, middle-ground perspective. In numerous exchanges with Nat through Notes, our discussions often circle back to how current AI features or controversies echo earlier moments in technological history. Nat's insights are not only informative but also offer a grounding, engaging, and creatively rich viewpoint. Interestingly, Nat once served as a speechwriter for a prominent politician, a testament to their diverse skills. For anyone seeking a clear-eyed view away from the usual hype, Nat's writings offer weekly insights into the pragmatic realities of AI.
Josh Brake
Since the turn of the new year, Josh Brake has been making significant contributions to the field of AI through his insightful publications. He tackles the challenging questions surrounding the value and utility thresholds of AI integration. His work prompts us to reflect on the skills we might be forfeiting as AI becomes increasingly embedded in our work and writing practices. Josh is on the cusp of some major breakthroughs in understanding AI's role and impact. It's an exciting journey to witness, and I highly recommend following his weekly posts to stay abreast of these evolving insights.
Michael Woudenberg
Michael Woudenberg has emerged as a steadfast "techno-pragmatist" voice in the discourse surrounding AI, particularly since the advent of ChatGPT. His approach is enriched by a diverse blend of personal and professional experiences, lending a unique depth to his perspectives. Michael is known for his strong resistance to the typical hype surrounding technology, advocating instead for its use in fostering more sustainable workplaces and resilient human communities. His polymathic view on these matters is both insightful and refreshing. To delve into his multifaceted insights, subscribing to his Substack is highly recommended.
Rob Nelson
Rob Nelson has recently emerged as a prominent voice in the AI and Education space on Substack, making a significant impact with his insightful essays. From the outset, he has contributed some of the most important pieces in this field over the past year. His "On Techno-pragmatism," a series of posts in fact, revisits William James's foundational essay on Pragmatism, extracting new and relevant insights for our current challenges with AI. I highly recommend reading these pieces thoroughly, as they offer valuable perspectives deserving of close attention.
Song of the Week:
John Martyn’s “Don’t Want to Know,” from Solid Air (1973)
British singer-songwriter John Martyn (1948-2009), whose style echoes that of Nick Drake and Townes Van Zandt, led a life marked by turmoil and self-destruction. Yet, amidst this, he crafted songs that resonate deeply, as evidenced in one of his striking lyrics that has particularly captivated me this week: “I don’t want to know about evil, only want to know about love.” The use of keyboards coupled with the Echoplex pedal in this song creates a mesmerizing atmospheric quality, reminiscent of the sound of a vibraphone, adding a unique layer to the emotional depth of his music.
Thanks for reading this edition of Educating AI!
Nick Potkalitsky, Ph.D.
P.S. I will be presenting at EDUCON in Philadelphia on Feb 4th. Stop by if you happen to be at the conference.
Honored to be mentioned here and excited to think of techno-pragmatism as a movement. Writing about William James has felt more like a weird little historical hobby, sort of like taking and developing your own photographs in the digital age. Nice to have you point to other writers who share some of the ideas, and for me to think about pragmatism more as a weird, medium-sized hobby, one with enthusiasts who get together online and in-person.
Really insightful post! I'm personally in the pro-integrative camp... but what specific strategies can educators use to integrate AI in classrooms while also nurturing critical thinking skills?