Charting A Strategic Course for AI Ethics & AI Policy Integration into Higher Education
Guest Post by David Hatami
Greetings, Dear Educating AI Readers,
Before I begin, I want to thank my readers who have decided to support my Substack via paid subscriptions. I appreciate this vote of confidence. Your contributions allow me to dedicate more time to research, writing, and building Educating AI's network of contributors, resources, and materials.
In the past two weeks, the AI x Education space has undergone a remarkable transformation. OpenAI has made its most advanced model, GPT-4o, accessible to all users with a login. This model is designed to engage users in longer conversations, focusing on creating a personalized and interactive experience.
Concurrently, OpenAI has disbanded its Superalignment team following the departure of Ilya Sutskever from the company. Additionally, OpenAI has brought in top talent from Inflection, the creators of Pi, another highly user-friendly chatbot. These developments highlight OpenAI's shift in focus from safety and security to enhancing comfort, ease of use, and interactivity.
Although more educationally oriented AI tools are emerging—Google recently announced its LearnLM—ChatGPT will undoubtedly be the go-to AI tool for both students and teachers in the early part of the 2024-25 school year. Its brand appeal, power, availability, accessibility, and affordability make it hard to resist. Unfortunately, educators can only raise awareness about the potential issues with tools like GPT-4o.
We cannot stop the growing trend towards AI prioritizing comfort, ease of use, and interactivity over security, privacy, ethical considerations, and even the accuracy of content. Ultimately, most Americans prefer an AI tool that provides quick answers with minimal fuss and effort.
Realizing this, I am starting a series this summer about the things that teachers, administrators, and students do have control over in the AI x Education space. When thinking about how to start this series, I decided to ask my colleague David Hatami for some help. David's specialties are AI Ethics and Policy. "His 20-year career traverses the realms of teaching, advising, student management, higher education administration, online education, pedagogy, and curriculum development, all with a philosopher's heart."
He has founded an educational consultancy devoted to AI Ethics and Policy: EduPolicy.ai. David's book, "Rethinking Approaches to AI Policy & AI Ethics Creation in K-12 & Higher Education," embodies his commitment to revolutionizing education for the digital age. David will be defending his Ed.D. dissertation in a couple of weeks!!! Let's wish him good luck!!!
What I appreciate most about David's approach is his emphasis on culture. Similar to my own thinking, David recognized early on that AI is exposing gaps and limitations in Higher Education culture that need to be addressed for successful AI integration in academic spaces. The big insight here is that AI can assist us in this work of reinventing culture. I am happy to share this important essay by David and encourage my readers to reach out to him if they need assistance translating this vision into a practical strategy for their school or university.
Charting A Strategic Course for AI Ethics & AI Policy Integration into Higher Education
The Challenge of Rapid Technological Advancement
Over the last year, we have all been inundated with new AI technologies advancing at breakneck speed. Industry and academia have been scrambling not only to understand the new technology but also to find new and innovative ways to make it work profitably within their respective ecosystems.
The sense of being overwhelmed is completely understandable from an administrator's perspective, as well as from that of the faculty: rules of academia that seemed chiseled in stone from the beginning of time have been turned upside down within a year. While this technology is not brand new, per se, the rapid advancement that now happens on an almost daily basis will make charting a course into the future ever more complicated and ever more necessary.
Student Adoption and Faculty Challenges
Students have already wholeheartedly embraced the new technology; from an administrator and faculty perspective, this is a daunting challenge to manage. Everybody talks about AI, everybody thinks about AI, but nobody knows what to do about AI. Our biggest concern is how it affects us all and what we should do to make sure that we remain true to our core principles and the academic mission of educating our nation's future.
As higher education institutions increasingly incorporate artificial intelligence (AI) into their ecosystems, provosts face the dual challenge of harnessing AI's potential while navigating the ethical landscape it creates. Strategic integration of AI can transform learning environments, streamline administrative operations, and enhance research capabilities.
Visionary and Pragmatic Approach
Institutions thrive when they anticipate future trends and technologies; to craft anything successfully (let alone technology) we must begin with the end in mind...therein lies the challenge. With technology changing so quickly, how do we even know what our endgame looks like before we have started?
The most visionary approach involves understanding the broad implications of AI and envisioning its impact on the academic and operational aspects of higher education.
However, this vision must be grounded in pragmatism to ensure successful implementation. I would propose crafting a "Phased AI Integration Plan" that aligns with institutional goals, missions, and capacities to achieve maximum benefit while minimizing any potential disruption.
Creating an AI-Positive Campus Culture
The integration of AI into academic settings is as much about culture as it is about technology. It is next to impossible to create a positive AI campus culture without genuine "buy-in" from faculty, staff, and student stakeholders. In other words, they must be just as much a part of this process as the administrators for implementation to succeed on any level.
Encouraging an AI-positive culture among faculty, staff, and students is essential.
There have been many instances where administrators have deferred to faculty to make the decisions in their own classrooms; in this administrator's humble opinion, that is an ill-advised approach, as it is bound to create two distinct camps: 'GPT all-day' and 'GPT no-way.' This divide will cause dissension among faculty and students alike, completely upending any attempt to create a harmonious "AI-positive" campus culture.
Holistic and Systemic AI Policies
A student who feels confused or frustrated with their institution will always vote with their feet, which is an administrator's nightmare. This is why it is paramount that a university implement a holistic and systemic approach when crafting AI Policies and AI Ethics guidelines for its campus.
Successful implementation requires fostering an environment where the community is informed about the benefits and challenges of AI and is actively engaged in shaping its use. A culture of open dialogue and continuous learning can demystify AI and reduce apprehension, facilitating smoother adoption.
Leveraging AI for Institutional Decision-Making
AI's ability to process vast amounts of data can be a strategic asset in institutional decision-making. To maximize its impact, universities need a multi-tiered approach built on engagement and buy-in from stakeholders: a holistic university policy that outlines the basic approaches and attitudes (which will most certainly vary from institution to institution), plus unique policies crafted for each specific department.
The reason for department-specific policies is straightforward: the technology has different implications and ramifications for each individual department. However, for this to be effective, each departmental policy has to sit comfortably under, and complement, the overarching AI Policy of the university umbrella.
By utilizing AI-driven analytics, provosts could gain deeper insights into student performance, operational efficiency, and research outputs. These insights can inform more effective strategies for resource allocation, curriculum development, and student support services, ensuring that decisions are data-informed and strategically sound.
Upholding Ethical Standards in AI Deployment
While the capabilities of AI are vast, the ethical considerations are equally if not more significant. Provosts must mandate, through proper training and education, that AI applications uphold the institution's values and ethical standards. This involves not only adhering to data privacy laws and proper security guidelines but also actively promoting fairness, transparency, and accountability across widespread AI deployments.
We are in such a rush to deploy AI that we have forgotten about the rules, the guidelines, and perhaps even some basic common sense. When we put the cart before the horse, we are destined to go nowhere quickly. Establishing ethical guidelines and review processes for AI projects can and will safeguard our institutions against potential misuse, bias, and, most importantly, ourselves.
Implementing AI Ethics Courses
Introducing AI ethics courses for administrators, faculty, and students is a vital step toward ensuring responsible integration of AI technologies. These courses can lay a strong foundation for ethical stewardship by educating all stakeholders on the ethical, compliance, and legal implications of AI. Within the next six months to a year, I believe that AI ethics courses will become essential standards for compliance, providing a comprehensive framework to navigate the complex ethical landscape of AI implementation.
This proactive approach will help institutions stay ahead of potential ethical dilemmas and legal challenges, fostering a culture of responsibility and informed decision-making in the rapidly evolving field of AI.
Promoting Interdisciplinary Collaboration
The era of silos must come to an end. Institutions that insist on a regimented silo ecosystem will find themselves in a precarious position very quickly. AI’s implications and applications reach across various fields of study, making interdisciplinary collaboration vital. By encouraging collaboration between departments and disciplines, institutions can unlock innovative applications of AI that transcend traditional boundaries.
Faculty must be encouraged (please note, I did not use the word "mandated") to change some of their traditional habits and attitudes. If institutions are to remain competitive and technologically advanced, it begins with breaking down departmental barriers and pivoting toward a more holistic and systemic model. These approaches not only enrich research and teaching but also prepare students for the increasingly interdisciplinary nature of the global workforce.
Conclusion
Strategically integrating AI into higher education requires a balanced approach that combines foresight with ethical responsibility. That ethical responsibility demands work and commitment woven into the very fabric of the new, technologically savvy institution moving forward. This is not a buzzword; this is not a passing fad. This is education in the new millennium, the likes of which has not been seen in education or society, perhaps ever before.
Thanks, David, for this important contribution to the conversation. While we educators and administrators may not have much control over how big tech develops AI in the short or long term, we do have the power to foster environments where stakeholders are well-informed about the benefits and challenges of AI. We can do so by establishing hubs or networks where stakeholders actively participate in developing policies and practices for integrating and implementing these tools in tomorrow’s classrooms. By creating these collaborative spaces, we ensure that AI integration is guided by the insights and needs of those directly impacted.
After two weeks of explosive AI announcements, it’s good to remind ourselves of the extent of our locus of control—and indeed, the breadth of our responsibility.
Nick Potkalitsky, Ph.D.
Check out some of my favorite Substacks:
Terry Underwood’s Learning to Read, Reading to Learn: The most penetrating investigation of the intersections between compositional theory, literacy studies, and AI on the internet!!!
Suzi's When Life Gives You AI: A cutting-edge exploration of the intersection among computer science, neuroscience, and philosophy
Alejandro Piad Morffis’s Mostly Harmless Ideas: Unmatched investigations into coding, machine learning, computational theory, and practical AI applications
Michael Woudenberg’s Polymathic Being: Polymathic wisdom brought to you every Sunday morning with your first cup of coffee
Rob Nelson's AI Log: Incredibly deep and insightful essays about AI's impact on higher ed, society, and culture.
Michael Spencer’s AI Supremacy: The most comprehensive and current analysis of AI news and trends, featuring numerous intriguing guest posts
Daniel Bashir’s The Gradient Podcast: The top interviews with leading AI experts, researchers, developers, and linguists.
Daniel Nest’s Why Try AI?: The most amazing updates on AI tools and techniques
Riccardo Vocca’s The Intelligent Friend: An intriguing examination of the diverse ways AI is transforming our lives and the world around us.