The Unresolved Antinomies of Biden’s Executive Order on AI
Reckoning with the Ideologies that Animate the Internal Inconsistencies and Oppositions Within US Safety and Security Policy
Dear Readers,
I hope this message finds you well. I wanted to publish a piece I have been sitting on for a while. As an educator, writer, and researcher, I take a strong interest in AI safety, security, and regulation.
I hope you enjoy this historical and textual analysis of the Biden Executive Order on AI. Many commentators swooped in immediately to respond to the EO, but I wanted to let the dust settle a bit before weighing in. So here is what I think a few months out.
In brief, the EO is ambitious in its scope and breadth, but as a presidential declaration issued in the absence of serious data privacy laws, the EO is a half-measure at best. Case in point: the EO promises resources to help schools respond to AI, but schools will have to wait out a lengthy committee process before seeing any tangible support from the federal government.
On October 30, 2023, President Joe Biden issued the Executive Order (EO) on Safe, Secure, and Trustworthy Artificial Intelligence, characterizing it as “unprecedented in its significance for AI safety, security, and trustworthiness among any government measures globally.” In the same week, the Biden administration also released a Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy and a draft memorandum on Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence. And thus begins the era of big government responses to generative AI.
US Vice President Kamala Harris applauds as US President Joe Biden signs an executive order after delivering remarks on advancing the safe, secure, and trustworthy development and use of artificial intelligence, in the East Room of the White House in Washington, DC, on October 30, 2023.
Brendan Smialowski | AFP | Getty Images
The initial reception and analysis of the EO runs the gamut from dismissal as a “toothless” salvo, to condemnation as “big brother surveillance” and “regulatory capture,” to praise as a hallmark of “engaged and ethical oversight.” The diversity of opinion is in part a reflection of the complexity of the document itself. The EO has 29 signatory individuals, bodies, and agencies acting to varying degrees as semi-authorial agents; extends well over 100 pages, organized less than coherently and written in somewhat daunting legalese; and is directed at audiences as divergent as students, healthcare workers, government agencies, industry executives, immigration officers, researchers, technologists, and environmentalists. The rhetorical situation is nuanced and multifaceted, even as it smooths out complex histories and ideological fissures.
The EO is the culmination of a sweeping shift in social, cultural, political, and governmental responses to Big Tech that began with the critical interrogation and attempted regulation of the social media industry in the mid-2010s. Notably, Bruce Reed, President Biden’s Deputy Chief of Staff and the primary actor in the realm of AI regulation, played a prominent role in the passage of California’s landmark privacy bill and has published essays calling for the repeal of Section 230, which releases computer services from liability for information posted on their sites and through their applications.
U.S. Vice President Joseph Biden arrives for a meeting with his Chief of Staff Bruce Reed (L) June 22, 2011 on Capitol Hill in Washington, DC.
Win McNamee | Getty Images
During the 2020 campaign, President Biden initially came out strongly in support of the repeal of Section 230 before softening his stance in light of polling data. Once in office, President Biden, with the assistance of Reed, established an ambitious legislative agenda including an as-yet unsuccessful federal data privacy act and a stringent initial round of tech-oriented measures focused equally on social media and AI. Late in 2022, the Biden administration issued the Blueprint for an AI Bill of Rights, which primarily contextualizes AI as an imminent existential threat and thus serves as a documentary and institutional foundation for the more practical measures of the EO.
In contrast with the Blueprint, the EO’s introduction dwells more substantially on possible gains from AI, reflecting its composition after the rise of the major large language models like Bard, Claude, and ChatGPT. The document’s “Policy and Principles” section insists that “The Federal Government should lead the way to global societal, economic, and technological progress, as the United States has in previous eras of disruptive innovation and change” (Sec. 2.h). The work of the EO unfolds along six loosely connected trajectories: (1) defining AI, (2) requiring Big Tech to report on extremely large or “frontier” LLMs, (3) mobilizing agencies to investigate current AI practices, (4) developing a research base for future innovation, policy development, and immigration reform, (5) calling for the passage of federal privacy and data protection legislation, and (6) monitoring governmental reliance on AI through watermarking and use-case assessment. While the expansiveness of this field of engagement lends credence to the White House’s claims about the “special” qualities of the EO, a close reading of the document reveals that most of its actions soften greatly in their details. For instance, the EO does not identify a method for watermarking government documents, but rather establishes a 240-day deadline for submitting a report on existing methods of watermarking. Similar examples abound throughout the document.
Initial criticism of the EO comes on two main fronts: (1) “free market” advocates and (2) AI startups and open-source AI developers. Steven Sinofsky, Board Partner at Andreessen Horowitz, views the above trajectories as “springboards for further regulations” empowering “the bureaucracies given actual authority to go above and beyond.” Sinofsky argues that existing laws are sufficient to regulate AI, that the current technology is too nascent for meaningful or effective regulation, and that the implicit movement of the document is one of “regulatory capture.” “Free market” criticism blends nicely with, but is not identical to, criticism emerging from the open-source and start-up AI communities.
Over the long history of AI development, there have been two competing theories about best practices for design, implementation, and safety. The closed-source community, which includes the makers of many of the most popular LLMs like ChatGPT, Bard, and Claude, believes that the compute and code behind these AIs are of such significant power that they need to be protected from outside entities in order to prevent existential risks.
The open-source community includes a range of different positions revolving around the central idea that greater transparency and access to compute and code make for a much safer AI ecosphere. Open-source pioneer Andrew Ng recently insisted that “There are definitely large tech companies that would rather not have to try to compete with open source [AI], so they’re creating fear of AI leading to human extinction.” More practically, open-source advocates like Arvind Narayanan, Sayash Kapoor, and Rishi Bommasani at AI Snake Oil point out that the EO focuses regulation primarily on model specs and very specific applications, requiring little if any transparency when it comes to “pre-training data, fine-tuning data, labor involved in annotation, model evaluation, usage, and downstream impacts.”
Whether you regard the EO as a brilliant performative gesture that dons the guise of regulation while quietly deferring the particulars until generative AI assumes a more mature form, or as a premature, misguided exercise in governmental overreach and regulatory capture that reveals a disconcerting alliance between closed-source incumbent AI and the Biden White House, the EO struggles with central antinomies that will most certainly complicate all of its recommendations and applications.
First, the document positions regulation as the proper instrument for distinguishing between “the real” and “the false,” an antinomy that has plagued advanced computing technology since its inception and that no watermarking scheme is likely to resolve any time soon.
Second, the EO offers little guidance on where “existing regulation” will end and “new legislation” will begin, with the exception of its call for the passage of stricter data protection policies. Most analysts expect legislation to follow, including Nicol Turner Lee, Senior Fellow in Governance Studies at Brookings, who predicts that “Congress will put their teeth into [the EO] with some more coherent legislation that backs up many of these suggestions and proposals.” Whether this legislation will reflect the core principles of the Blueprint and the EO depends on many factors, including President Biden’s reelection.
Third, the EO leaves unresolved the antinomy between “open” and “closed” AI, and thus ultimately defers the more difficult questions about AI policy, regulation, and safety to future bodies and documents. That the Biden administration is souring on the “open” approach is suggested by recent comments from FTC Chair Lina Khan: “We’ve also, not too long ago, seen what’s been known as the open-first-close-later model, where firms will use openness as a way to build up their own scale and get a key foothold in the market.” Meanwhile, Europe and other international bodies appear, for the time being, less inclined to take sides and swoop in with an initial round of bombastic regulations.
FTC Chair Lina Khan, preparing to testify before a House committee last month. Tom Williams/CQ-Roll Call | Getty Images
Finally, the EO sheds little light on the actual import of its most-used antinomy, between “safe” and “dangerous” applications of AI. At times, the document reads like a work of Neoplatonism or Manichaean philosophy with its vivid contrasts between “good” and “evil,” “light” and “darkness.” One must look to the ancillary draft memorandum to begin to find answers, but even there vague generalities rule the day: “identify and prioritize appropriate uses of AI” vs. “managing risks from the use of AI.”
Realistically, if we are going to move forward with this process of regulation in a substantial way, we will have to start to pin down some of these antinomies. But in doing so, we might find ourselves in a cultural clash over the ideologies that inform particular definitions and articulations of these distinctions.
Thanks for reading Educating AI! Happy Holidays!
Nick Potkalitsky, Ph.D.
Further Reading
In this concluding post of the "Year-End Comprehensive Review: Part 3," I develop strategies for AI-responsive writing instruction, offering educators tools for integrating AI into classroom teaching.
This study dives into Ian McEwan's "Machines Like Me," applying narrative analysis tools to unravel the novel's complex ethical dynamics and their implications for understanding artificial intelligence.
This article delves into the burgeoning integration of Gen AI in education, highlighting innovative classroom applications alongside a critical gap in AI literacy instruction.
An analysis of the complex landscape of AI in education, examining the varied perspectives shaping our response to technological advancements and the potential for an "AI Culture War."
This article explores the evolving nature of writing in the context of Gen AI, questioning its impact on human experiences enriched by traditional writing. It delves into how AI is reshaping the processes of generating written content, posing crucial questions about the future of human growth, development, and the intrinsic value of writing.