SB-1047: What California’s AI Bill Means for Education
What Could Be the Short- and Long-Term Consequences of SB-1047?
Yesterday, while I was prepping for class, a student asked me how new AI regulations would affect our work with AI in schools. The question struck a chord—especially with SB-1047, California’s groundbreaking AI bill, poised to reshape how AI is developed and implemented. As educators, we often see AI as a powerful classroom tool, but this bill might prompt us to think more critically about the role and risks of AI in education.
What Is SB-1047?
Introduced by Democratic state Sen. Scott Wiener, SB-1047 seeks to regulate large-scale AI systems in California. The bill mandates safety standards for AI models that cost over $100 million to train or that require more than 10^26 floating-point operations of computing power. Among the provisions are requirements for internal safety tests, third-party audits, and even a "kill switch" for AI systems deemed potentially harmful. If passed, this legislation would transform Silicon Valley’s AI landscape, but what does this mean for education?
Short-Term Impacts: School AI Programs in Flux?
In the immediate future, SB-1047 could bring about notable changes in AI research and implementation across California’s universities and schools. While the bill primarily targets large-scale AI models developed by tech giants like Google and Meta, its effects may extend into academic and educational settings. Currently, most K-12 AI programs and smaller initiatives are likely to remain unaffected, as the bill’s regulatory thresholds are set at very high levels. However, as AI technology continues to evolve, what is considered “cutting-edge” today could become more widely adopted, potentially bringing these educational programs under increased scrutiny.
The bill’s primary goal is to ensure safety and accountability in AI development, which might slow down the pace of innovation, particularly in public universities and schools experimenting with AI tools. Educators may need to carefully assess the AI systems they use, ensuring alignment with the safety protocols established by the bill. While this might present some short-term challenges, it also offers an opportunity to ensure that educational AI remains both innovative and responsible.
Is This the Best Approach? Technical Regulation vs. Use Regulation
One of the most debated aspects of SB-1047 is its focus on regulating the technical aspects of AI rather than its application. By targeting large-scale models and the processes by which they are developed, the bill forces developers to prioritize safety at the design and training stages. Proponents argue that this is crucial to prevent harmful outcomes before they occur.
However, some in the academic and tech communities have raised concerns about whether this is the most effective way to regulate AI. They argue that it might be more appropriate to regulate the use of AI rather than its development. AI, after all, is a dual-use technology—like many powerful innovations, it can be used for both good and harm. Regulating how AI is applied, rather than imposing technical restrictions on its development, might allow for greater flexibility and innovation while still addressing potential risks.
This is a particularly relevant question for education. Should schools and universities be burdened by the same technical regulations as massive tech companies? Or would a more targeted approach—focusing on how AI is used in classrooms and for educational purposes—allow for a safer yet more adaptable integration of AI in education? These are questions that policymakers, educators, and technologists will need to grapple with as SB-1047 moves forward.
Long-Term Implications: A Shift in Pedagogy?
Looking beyond the immediate future, SB-1047 could encourage educational institutions to rethink how they integrate AI into teaching and learning. While personalized learning, AI-driven grading, and curriculum design are all promising areas, the bill’s emphasis on safety and accountability signals a broader shift toward more thoughtful and deliberate AI development—one that may prioritize long-term outcomes over rapid innovation.
However, for this shift to be truly effective, it may need to be part of a broader, multipronged approach that includes not only technical safeguards but also robust data privacy protections and ethical guidelines. As schools and universities explore the potential of AI, they must also consider how to protect student data, ensure transparency in AI-driven decision-making, and promote equitable access to these powerful tools.
An intriguing element of the bill is the proposed creation of CalCompute, a public cloud computing cluster aimed at advancing AI research for the public good. If successful, this initiative could democratize access to high-level AI resources, making them more available to schools, non-profits, and smaller academic institutions. By combining technical regulation with a commitment to public access and ethical standards, SB-1047 could help ensure that AI development in education is not only innovative but also safe, equitable, and aligned with the core values of learning.
What Comes Next?
As the AI world watches to see whether SB-1047 becomes law, educators should begin preparing for the potential changes it could bring. Whether through new safety protocols or partnerships with AI companies that meet higher regulatory standards, we’ll need to stay nimble as the landscape evolves. But if we approach this thoughtfully, the bill could end up setting a precedent that ensures AI in education is both innovative and safe—allowing us to embrace the future without leaving caution by the wayside.
Nick Potkalitsky, Ph.D.
Check out some of my favorite Substacks:
Terry Underwood’s Learning to Read, Reading to Learn: The most penetrating investigation of the intersections between compositional theory, literacy studies, and AI on the internet!!!
Suzi’s When Life Gives You AI: A cutting-edge exploration of the intersection among computer science, neuroscience, and philosophy
Alejandro Piad Morffis’s Mostly Harmless Ideas: Unmatched investigations into coding, machine learning, computational theory, and practical AI applications
Amrita Roy’s The Pragmatic Optimist: My favorite Substack that focuses on economics and market trends.
Michael Woudenberg’s Polymathic Being: Polymathic wisdom brought to you every Sunday morning with your first cup of coffee
Rob Nelson’s AI Log: Incredibly deep and insightful essays about AI’s impact on higher ed, society, and culture.
Michael Spencer’s AI Supremacy: The most comprehensive and current analysis of AI news and trends, featuring numerous intriguing guest posts
Daniel Bashir’s The Gradient Podcast: The top interviews with leading AI experts, researchers, developers, and linguists.
Daniel Nest’s Why Try AI?: The most amazing updates on AI tools and techniques
Riccardo Vocca’s The Intelligent Friend: An intriguing examination of the diverse ways AI is transforming our lives and the world around us.
Jason Gulya’s The AI Edventure: An important exploration of cutting edge innovations in AI-responsive curriculum and pedagogy.