That is a really interesting and valid thought, Michael. We are all aware that humans are both emotional and logical creatures; while I'm not a neuroscientist, my guess is that there is a certain unexplainable "trust" we have with a human that we have not evolved to extend to a robot. In full disclosure, I have never ridden in an autonomous vehicle. However, I have driven vehicles with all kinds of "driver assist" technology. My observation is that driver assist (robot technology) doesn't always seem to understand, or have the ability to predict, the driver's intent (my own or that of other drivers on the road), and it often feels like it over-reacts to what it perceives and anticipates my next moves to be.
With that said, I don't think most people believe that robotic (autonomous) drivers fully understand what to expect from (and how to react to) the "other crazy drivers" out there on the road, or what they are going to do next.
The need for AI Ethics stems not so much from a point of logic as from the need to establish a set of baseline rules about what is and is not a permissible way to use the technology. It's not hard to use AI for nefarious purposes; our generation has a vital responsibility to set a very high bar for teaching the next generation how to use AI with a sense of purpose, along with some old-fashioned common sense.
Let's be honest: if we don't teach them proper AI Ethics now, no other generation will teach them. Remember, we are also teaching them so that they will, in turn, teach the next generation how to use AI responsibly. Once mainstream practices have been established, it's very difficult (if not impossible) to swing the pendulum of morality, common sense, and outright decency back to where it should rightfully have been at the outset of integration.
Could not agree more, Guy. Thank you for some really excellent insights and observations!
Most of my career has been spent as a HigherEd administrator. Higher education needs to improve its understanding of the "why" before it will ever decide to move forward with the "how". This is exactly why I started EduPolicy.ai: to educate HigherEd, K-12, and Corporate education on AI Ethics and AI Policy development and implementation.
The topic of ethics is always a mess because it's not logical, it's emotional. I'm crafting an essay now on ethics and autonomous vehicles, asking why we panic about robo-taxis making little mistakes but are completely OK with human drivers doing the same.
This is a basic, high-level overview, which is all one can expect in a post of this length. It does presuppose that the administration and leadership of the district or university prioritize AI policy and choose an AI-Positive or AI-Forward attitude. In discussions of AI ethics and policy in education, this often seems to be taken as a given. I think we should at least spell out what the policy options are for schools where leaders take a neutral or negative attitude. For one thing, that might make it easier to deal with those who have not decided on a direction.
Recognize that most administrators, faculty, and staff will not know the full range of ethical issues. The tendency is to narrow the focus to a few topics and miss the bigger picture. I have been struggling to find effective ways to broaden that awareness for more than a year now, even while trying to expand my own understanding. I have come to the following conclusions:
- Everyone needs to be exposed to the full range of ethical issues, not just those pertaining to education. There is a new document from the UN Office of the High Commissioner for Human Rights that covers maybe 90% of this. I think it may be a good starting point for discussion: https://www.ohchr.org/sites/default/files/documents/issues/business/b-tech/taxonomy-GenAI-Human-Rights-Harms.pdf
- Foster comprehension of the tight and complex interplay between ethics, policy, pedagogy, technical considerations, and the strategies of both new and existing AI and Ed Tech vendors.
- Everyone needs to be included in the discussions and decisions, because everyone is affected.
- Budgets are going to be a big constraint on implementation. Unless you think you can get funding, realize that almost everyone involved will be doing this as an overload. Be aware that this applies to staff as well as faculty. In many institutions and districts, the staff who work with information and instructional technologies may already be working at capacity; institutions may need to add staff or delay projects to deal with AI. This is exacerbated by the inexperience of many AI vendors in dealing with educational institutions and their needs, the speed at which everything is happening, and the additional demands on IT departments of vetting and implementing these technologies.
- Big data is nice for administrators to have, but to start with you may not have much data, or really any, about how AI is being used at your institution. If you are a big and complex district or institution, this may take time and effort to discover. I am generally skeptical of big data approaches, and I am very skeptical of them given where most institutions are today. The collection of data can be built into the policies, but I remain convinced that during this early phase we need high-touch approaches to navigating the transition.
- Recognize that different groups are going to have different needs. This will be particularly true of higher education and, especially, of larger institutions. A Law School, Journalism School, Medical School, and Business School will have different professional and ethical considerations. They will need to have some independence. If you have a multi-campus system serving urban, suburban, and rural areas, you may find some parts of the system need to focus more on equity issues.
- Define what you mean by AI and realize that some types of AI need to be treated differently from others. You likely have had some rule-based AI in your Ed Tech systems for years. If you simply lump that together with GenAI, you might have to consider shutting down some of those systems or forbidding your faculty to use certain parts of them. (I am thinking particularly about controversies over AI grading and feedback.)
These are a few of the considerations. We also need to recognize that the consequences, both positive and negative, are huge, and that policies must remain flexible. You note how much has shifted in the past two weeks. There are a lot of things beyond what the AI companies are doing that could hinder or help the broader adoption of AI. These include public opinion, politics, supply-chain issues in building data centers, and much else. I spent some time this Spring outlining some of these for myself; I found it useful and continue to return to it as news breaks. If time and budget permit, having districts and institutions engage in a full-fledged scenario-planning exercise might be useful.
This is the most complex tech I have dealt with in higher education, and I have been helping to introduce and maintain academic technologies for more than a quarter-century. This is going to take a lot of patience and attention to get right.
This is based on my own experience, reading, and conversations. It does not necessarily reflect the opinions of colleagues or the University of Missouri System, my employer. I put this long comment out to share what I have been thinking, but also to get feedback and see what kinds of experiences others are having.
Great thoughts, Guy. It sounds like you're working on a treatise. I think starting with "define what you mean by AI" is the best approach I've heard in a long time. AI is a multiplicity. Treating it as a unitary concept in policy development is a mistake. I also appreciate the other nuances you introduce. Big tech developments don't happen in isolation; they echo and reverberate through larger cultural, societal, and institutional arenas. If I understand you correctly, my own writing contributes to these echoes and reverberations. Point taken!
Your point about intra-institutional differences is also crucial to me. What works for one department may not suit another. In the K-12 space, we'll undoubtedly grapple with this issue across divisions and departments. What is appropriate for seniors may not be for middle school students. The impulse to over-regulate is strong, but patience is essential.
The terms being used today are just too vague, so definitions are essential.
One thing I meant to add in my original note: it is important to discuss both policy changes and possible AI acquisitions early and often with IT and Academic Technology departments. I don't think people appreciate how closely policy, procurement of new software, and the ways existing software is deployed interact. Those departments also need to keep checking with vendors about what AI features they plan to roll out, and how, so they know when something a vendor is developing might conflict with policies or guidelines. Ideally, the features will be deployed in a modular fashion, allowing them to be enabled or disabled at different levels, from potentially the class up to the enterprise.
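To make that last point concrete, here is a minimal, purely illustrative sketch of how such level-based toggles could be resolved; every name in it is hypothetical and not drawn from any actual product or vendor API.

```python
# Hypothetical sketch: resolving an AI feature toggle across institutional levels.
# The most specific level with an explicit setting wins; otherwise we fall back
# to the next level up, ending at an enterprise-wide default.

LEVELS = ["class", "department", "campus", "enterprise"]  # most to least specific

def resolve_feature(feature: str, settings: dict, enterprise_default: bool = False) -> bool:
    """Return whether `feature` is enabled, honoring the most specific override."""
    for level in LEVELS:
        overrides = settings.get(level, {})
        if feature in overrides:
            return overrides[feature]
    return enterprise_default

# Example: AI feedback is allowed enterprise-wide but turned off for one class.
settings = {
    "enterprise": {"ai_feedback": True},
    "class": {"ai_feedback": False},
}
print(resolve_feature("ai_feedback", settings))  # False: the class-level override wins
```

The point of the sketch is simply that a policy decision made at the enterprise level should be able to coexist with narrower overrides, rather than forcing an all-or-nothing switch on everyone.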
Another point that does not come up much, but that I think might figure into some of the ways instructors want to use AI, is protecting the intellectual property of publishers. Some instructors are building customized bots based on their course content, but that content may include textbook materials and journal articles. Basic copyright issues aside, ingesting that material into an AI may also conflict with publishers' desire to add (and possibly charge for) additional features of their own, such as custom chatbots based on their materials. I am not aware of any litigation on that yet, but I think it is something we should anticipate.
Yes, the OpenAI updates last week threw a lot of us for a loop. I personally was lulled into assuming product and permissions continuity. In many ways, the wake-up call was necessary; now I can start thinking more realistically about the future. But to your point, particularly when we subscribe to specific educationally-oriented products, keeping those lines of communication open will be especially important. The systems we administrators and teachers are setting up feel so fragile, and we need notice when updates could disrupt the balance. I hadn't thought of textbook GPTs. That sounds very suspicious to me. I can see making a syllabus bot based on materials I have generated, but crossing over into a bot that summarizes others' work seems to set a very bad example for students. Thanks for bringing this use case to my attention.
Thanks for sharing. As educators, balancing AI's ease of use with ethical considerations is crucial. Let's champion responsible AI integration in education while fostering an informed, supportive community.