The Impending AI Culture War and How to Avoid It
Why Do We Disagree So Much about Gen AI?
Two weeks ago, Marc Andreessen’s “Techno-Optimist Manifesto” set off an internet “firestorm” with its lofty idealism and utopian rhetoric. In it, the venture capitalist and billionaire sought to expose the lies being told about Gen AI and other related technologies. But when reading the document, I had a lot of difficulty figuring out the exact contents of those lies.
After a couple of reads, I eventually concluded that the biggest lie, for Andreessen, is the claim that technology–and by extension the human mind and human creativity–has any possible limits. Put another way, to believe that the mind cannot think itself out of all current technological challenges and roadblocks is to tell a profound falsehood, and for someone like Andreessen, it represents a profound affront to the human spirit.
A Critical Space for Reflection and Metacognition
In this post, I will take a step back from such heated polemics–God only knows we have had enough response posts to Andreessen–in order to think about what people are doing when they take strong stances on the nature of AI, technology, consciousness, or the future of any or all of these things. Today’s exercise will be one of reflection and metacognition.
The goal is to arrive at, and extend, a critical space between ourselves and the fiery positions we sometimes take when engaging with these hot-button topics or the latest sensationalist posts. Please note, dear readers, that I am intentionally writing today’s newsletter from a more generalist standpoint, as I think the following content will be useful not only to educators but to a wide range of professionals and armchair-variety theorists interested in the rise and steady advancement of Gen AI and related new technologies.
Frameworks/Orientations as Thought Constructions
Although I do believe that there are myriad positions available to individuals on the societal impact, cultural significance, philosophical import, and future development of Gen AI, I will seek here to cull them down to 3 basic orientations or frameworks–not because these orientations actually exist in reality as unified schools of thought, but primarily as an intellectual exercise that will hopefully allow participants to attain, for a moment, a wider view of the field, and in turn a possible opening into metacognition and reflection.
As a literary historian, I have undertaken many such exercises as I studied and reified different historical periods, aesthetics, styles, movements, and genres as theoretical vantage points for the study of individual works, authors, or collaborations. It is not that the Modernist Period has a literal, physical “correlate” in the material world; rather, such periods are constructions of thought that are useful for the broader study of historical, cultural, and literary trends across time and place. In this spirit, we will proceed.
3 Basic Orientations on Gen AI and Related Technologies
Orientation 1: Techno-Idealism
Techno-idealism welcomes all new developments in Gen AI. Techno-idealism supports the contention that artificial general intelligence (AGI) is either “right around the corner” or has already been achieved. Techno-idealism believes that gains made possible through the further development of Gen AI far outweigh the immediate and long-term risks. Techno-idealism characterizes human-AI interactions as the realization of humanity’s creative evolution. Techno-idealism affirms the human ability to navigate social, ethical, and personal issues pertaining to agency, autonomy, originality, and individuality in the face of ever-increasing immersion in AI-infused systems.
Orientation 2: Techno-Realism
Techno-realism assesses each new development in Gen AI in light of its own merits and in relation to other pressing concerns and important trends. Techno-realism remains cautious about assigning Gen AI the label “AGI,” but remains open to this development in light of clear criteria and convincing evidence. Techno-realism focuses primarily on short-term gains and risks when evaluating different use cases and new applications and developing policies for implementation and dissemination. Techno-realism regards Gen AI as just one finite manifestation of the creative evolution of human beings. Techno-realism affirms the human ability to navigate social, ethical, and personal challenges related to Gen AI within thoughtfully curated contexts, while acknowledging the long history wherein technology has impacted human development in unpredictable and adverse ways.
Orientation 3: Techno-Pessimism
Techno-pessimism distrusts all new developments in Gen AI. Techno-pessimism usually hinges on an accompanying belief that AGI is “right around the corner” or has already arrived. Techno-pessimism believes that short-term gains on the whole do not outweigh immediate and long-term risks. Techno-pessimism characterizes human-AI interactions as a distortion of humanity’s creative evolution. Techno-pessimism denies the human ability to navigate social, ethical, and personal issues pertaining to agency, autonomy, originality, and individuality in the face of emerging AI systems.
Why Do Individuals Gravitate Toward Particular Frameworks?
As thought constructions, these frameworks describe broad trends in our responses to Gen AI. If we remove “techno” from these designators, we reveal 3 basic philosophical orientations in search of topical groundings: idealism, realism, and pessimism. In many ways, the history of the reception of new technology over the past several decades, even centuries, in the US and more broadly, has unfolded along these philosophical, cultural, and ideological vectors, tangled around one another and wrestling for supremacy.
That much is probably obvious to any reader at this point. And so the real purpose of this article emerges as a deeper probe into the reasons why individuals gravitate toward particular frameworks and an examination of the possibility that in our everyday lives we actually cross-inhabit all these frameworks as we make our way through the world–sometimes through choice, and sometimes through factors largely out of our control.
A Spectrum of Causes and Conditions: From Identity to Ideation
Although my setup has the appearance of a more relativistic study of the impact of AI and its future development, I want to establish in what follows a list of possible causes and conditions behind our adoption of particular frameworks–a list that includes both issues pertaining to histories, identities, and professions, and issues related to the actual nature of AI: its development and advancement; its cultural, societal, and economic impact; and current and ongoing safety concerns and consequences.
In our debates, we are exploring vital questions that we need to find answers to, and by acknowledging how our individual and shared histories inform our thought processes, we can arrive at more satisfying answers and solutions.
An Individual Is the Sum of Many Parts
An individual user of Gen AI is the sum of many profound, complex influences and factors. To use a term made famous by Gilles Deleuze, the individual is a vast “assemblage” of narratives, histories, material conditions, choices, purposes, limitations, opportunities, etc., that theoretically spiral outward across time and history into a space as grand and unpredictable as the universe itself.
And yet, we must create some boundaries around such individuals that allow us to examine and make sense of their movements, goals, aspirations, and actions within more circumscribed contexts like the “current rise and development of Gen AI technology.” Once again, these causes and conditions are structures or constructions of thought to the extent that they allow us to explore different hypotheses about individuals’ thoughts, words, actions, etc.
It seems to me that several causes and conditions play large–yet difficult to predict–roles in influencing how individual users respond to Gen AI in a broader manner as characterized in the 3 orientations or frameworks:
1. Age, Race, Ethnicity, Gender, Class, Etc.
All these factors deserve much closer attention and study than can be offered in a single article. Dr. Rumman Chowdhury and her colleagues at Parity Consulting and the Berkman Klein Center for Internet & Society at Harvard University are in the process of publishing a series of important papers on how responses to Gen AI fractalize along specific demographic and intersectional dividing lines. While it is too soon to make generalizations, we can say that identity alignments play important roles in an individual’s general optimism about and engagement with Gen AI and its further developments. I will offer more updates on these trends as more studies are released or published.
2. Education, Profession, Economics
In general, greater knowledge of and experience using advanced technology offer more opportunity to develop opinions about these technologies based on real-world practice. Several reports have indicated that the most recent generation of technology students and graduates are at once very excited about the promise of these new technologies and worried about their future job prospects.
More established workers in Big Tech and Big Data also express a wide range of responses to Gen AI, as evident in news reports and in more personal posts on X, LinkedIn, Reddit, and Substack. From my armchair observer perspective, there appears to be a bifurcation process at work, with some scientists, engineers, and researchers welcoming the new technology as an opportunity to push boundaries and open up new fields of study, and others conceptualizing current Gen AI as nothing more than a more linguistically capable version of existing products and/or applications. The research divide appears more epistemic, ideological, and philosophical, with little sign of giving way to evidence-based studies on either side’s behalf.
In addition, economics plays an important role in these debates, as individuals invest in, or are invested in by, wealthy institutions and corporations holding both more global and more granular frameworks or orientations about Gen AI. I will continue to watch this connection between funding and orientation in upcoming posts.
3. Science Fiction Favorites
This category I create only slightly in jest. In this category, I am trying to get at the less concrete, more affective, more imaginary spaces where individuals create hypothetical versions of the future. Sometimes, individuals do this through explicit engagement with the science fiction genre. At other times, individuals do so by combing through the latest industry magazines or product profiles, imagining future uses that will significantly benefit particular individuals or organizations or both. Unsurprisingly, these imaginings regularly unfold along the 3 pathways outlined in the above frameworks and orientations.
(A) Techno-Idealism: In Ecotopia, Ernest Callenbach imagines a futuristic utopian society that prioritizes ecological sustainability, environmental conservation, social well-being, and uses of technology that support these broad initiatives.
(B) Techno-Realism: In The Ministry for the Future, Kim Stanley Robinson presents a near-future Earth grappling with the urgent and complex challenges of climate change and environmental degradation, where humans use technologies to make modest advances towards a better future.
(C) Techno-Pessimism: In 1984, George Orwell tells the story of a political party that uses oppressive technology to reduce all human creativity and individuality to nothingness.
Here, I am not implying that these imaginary frameworks ground the 3 orientations or frameworks, but rather that these imaginations help animate and energize the hypothetical constructions we produce when extending our past and present knowledge to the unpredictable and ultimately unknowable future.
4. Daily Interactions with Gen AI Technology
Now that Gen AI is a regular presence in our lives and work cycles, we no longer have the luxury of theorizing about it as a pure abstraction or as a tool used by only specialized researchers.
Instead, we are responding to Gen AI phenomenologically and experientially. While it is possible to mentally represent Gen AI as a static construct or experience, our interactions with Gen AI are a long, extended series of fluid, communicative exchanges that yield results of varying degrees of utility, saliency, and accuracy.
Sometimes, we walk away from an exchange amazed at the outputs, feeling as if a real conversation has taken place. At other times, we come away from a “conversation” deeply frustrated, unable to get an LLM to reliably add together two 5-digit numbers. The point here is that Gen AI is “with us” as we respond to it: Gen AI is now part of the notional structure through which we make judgments about it.
5. Knowledge of and Research into Development, Applications, Policies, and Consequences of Gen AI
A vast body of knowledge about Gen AI now exists. Scholars, researchers, developers, designers, policy makers–just to name a few professions impacted–publish thousands of articles a month on Gen AI. Suffice it to say, it is impossible for any single individual to keep up on all the advances, insights, developments, and most importantly, interpretations of these publications and findings.
Never was there a more important time in the history of science than now to find a speciality, develop a knowledge base, and start to build a comprehensive system for making and testing claims about that specialized set of knowledge.
And yet, at the same time, never was there a more important time in the history of science than now for scholars to develop bridges of understanding between circumscribed specialities in order to make micro-claims more discernible on a wider scale and on broader stages so as to simultaneously push forward beneficial advances and put checks on destructive consequences.
We need polymaths! The stakes are very high. In the absence of these bridges, we continue to edge ever closer to an AI Culture War:
Definition: AI Culture War
A culture-wide territorialization of particular frameworks or orientations about Gen AI that imposes practical, conceptual, ideological, and institutional limits on the way we think, speak, and act in the world.
6. Theoretical and Philosophical Dispositions
Theoretical and philosophical dispositions are conceptual, practical, disciplinary, and sometimes ideological alignments that individuals subscribe to with varying degrees of interest, commitment, and engagement depending upon context, genre, audience, purpose, and intended consequence of a particular communicative act, hypothetical exercise, or integrative action.
In my role as an armchair observer, it seems that particular disciplines or sectors of disciplines hold more optimistic commitments about the nature, applications, and future of AI. At the same time, I am noticing that proximity to particular companies or industries tends to involve an uptick in either optimism or pessimism, depending upon those companies’ products, investments, etc.
Late this summer, a series of important posts chronicled the inside view of workers at OpenAI and their relative immunity to any outside research that contradicts the company’s widespread belief that “AGI has been achieved internally”–although this online message was later erased and retracted.
In the history of thought, theoretical and philosophical dispositions usually hinge on other cultural, social, and aesthetic formations. To my mind, religious debates about human goodness and fallibility circulate behind some of our debates about the human ability to navigate social, ethical, and personal issues pertaining to agency, autonomy, originality, and individuality in the face of new AI systems. To what extent are all human efforts destined to fail due to self-interestedness, self-deception, short-sightedness, or cognitive fallacies?
7. X-Factor (Not Twitter)
Ultimately, all the factors and influences in the world cannot predict or explain why an individual responds precisely the way they do to Gen AI. What is more, our responses change profoundly and unpredictably in response to the situations we inhabit.
I have come to deeply appreciate recent models in cognitive science that conceptualize consciousness as an interaction not just between neurons and different sectors of neural processing, but also across the social networks we engage in and even through the material objects we use to store data and memory. Across this interactionist plane or field, our responses “ripple” and “bend” as the networks we engage in collide, contract, collapse, and rearrange through thinking. Nonetheless, all the while, we somehow manage to hold onto a notion that we are responding to Gen AI in a definite way. That perhaps is the greatest “miracle” of the entire phenomenology that we are trying to characterize in this article.
This work of reflection on different orientations toward Gen AI and the various causes and conditions behind those orientations is difficult but critical to the larger exercise of better understanding Gen AI and implementing and integrating it into life and society in safe, equitable, engaging, meaningful, and sustaining ways.
The Opportunity to “Unflatten” Our Interlocutors
Examining these vantage points, conditions, and causes invites us to “unflatten” our interlocutors or opponents, and think about the complex systems and networks that lead them to assert a particular position in a particular context.
After such consideration, we may even dare to admit that our own positions waver as we, over the course of a single day, inhabit different attitudes, perspectives, responses, and conceptualizations of Gen AI. In the morning, I may optimistically use Gen AI to develop ideas for a piece of writing; in the afternoon, I may pessimistically condemn Gen AI’s inability to complete math problems in a Substack Note; in the evening, I may more realistically compose an analysis of the causes and conditions of my responses to Gen AI for my small but thoughtful audience of readers.
There Is an AI Culture War in the Offing
In our debates, we would like to think that we are disagreeing solely about facts and interpretations, but beneath these disagreements are flows of identity, culture, power, capital, imagination, belief, geography, temporality, determinacies, and indeterminacies that disseminate through our discourses, inflecting our conversations in myriad, dynamic ways.
The reality is that there are forces, institutions, businesses, markets, politicians, media outlets, etc., that want the topic of Gen AI to turn into the next US-driven, world-sized Culture War. And if this Culture War were to happen, its primary fuel would be those flows of identity, culture, power, and capital beneath the surface of the contest of ideas.
In the US, we already have a rigid system of polarities and binaries in place that could serve as a foundation and catalyzing agent for such a Culture War. The media is already stoking the fires: Either AI will save the economy or destroy the economy; AI will either save workers time or put all workers out of a job; AI will either spark a new era in human creativity or put all artists out of business.
Let’s Imagine an AI Culture War
In the US, we have a Presidential Election cycle rapidly approaching. We already know that bad actors are using Gen AI to create deepfake videos of candidates and to distribute those videos widely online. We also know that Gen AI’s powers of text generation allow for easy dissemination of all kinds of political speech and invective through legitimate and dummy social media accounts.
Perhaps, in the midst of the political agon of the next election cycle, one or both sides of the political divide will radicalize the usage of AI as a “culture critical” issue. Its usage will become associated with a particular ideology, class, demography, political party, etc. As we know from the history of politics, such alignment does not preclude the radicalizers’ continued reliance on said technologies in their personal lives or in their business activities.
In this hypothetical context, Gen AI would become “instrumentalized” within the grander American Polarity, with the fallout felt at a more local and granular level. Perhaps school board candidates would start to run, in part, on hard anti-AI in-school policies. Does this seem too far-fetched? Too 1984? Perhaps… but let us not forget that we live in very strange times.
Substack as a Powerful Countermeasure Against Culture Wars
Before we allow our imaginations to run wild, let’s refocus on our exercise in reflection and metacognition. Within this critical context, we have the opportunity to recognize and respect the flows of human experience and wisdom beneath the clash of differing ideas and concepts. At this very juncture, I am especially appreciative of the subtle and intricate conversational dynamics that Substack provides.
In my limited experience here, Substack stands out as a place where we readers and writers can speak very passionately about Gen AI, sometimes even going so far as to “flatten” each other’s perspectives in pursuit of a particularly heated point, even as the ongoing, serial nature of our publication cycles encourages us to reconsider strong positions in light of emerging evidence and constant engagement in the critical activities of reading, re-reading, reflection, and metacognition.
As Substack grows, so too will its potential either to radicalize or to complicate discourse surrounding Gen AI and other new technologies. By way of a soft closing, I encourage everyone on this site to continue to use this space as a protective countermeasure against the divisiveness, sensationalism, fear-mongering, and ungrounded modes of utopianism (see Andreessen) that are so present on other social media spaces, news outlets, and popular media sites.
Not Ideas All the Way Down; Rather Humans All the Way Down
When we disagree, let us greet each moment of tension as an opportunity to pause and see through the concepts and ideas at hand to the human beneath the surface–always individuals with complex histories, identities, and affinities informing particular frameworks and orientations on Gen AI.
Thanks for reading Educating AI.
Nick Potkalitsky, Ph.D.
Some interesting ideas here, Nick. One thing that stood out for me is that in your three positions you don't seem to include those who are highly skeptical of the idea that AGI is close, or even those who doubt that it is possible at all. Your thesis stems from the idea that it's coming soon. People have been claiming it is just "around the corner" for quite some time. I could argue that any talk of hypothetical AGI dangers is a distraction from the real risks inherent in the current crop of Gen AI tools. What is really happening with these Gen AI tools? There is a difference between what academics in the field say and what CEOs choose to amplify. Hype can be a weapon to influence and gain power over shareholders, investors, politicians, and the general public.
Very detailed and delineated discussion of how humans view AI and why. Andrew Smith and I poked at this a bit earlier this summer with "Beware the Binary - Avoiding Absurd AI Assumptions," where we compared the Doomsayer with the Utopist (Andreessen falls into this category). Yet neither view is right, nor particularly helpful.
https://www.polymathicbeing.com/p/beware-the-binary