25 Comments

Good insights here. More than the tools, teachers need workflows for those tools.

Jul 2 · Liked by Nick Potkalitsky

The issues in higher ed are a little different, often only as a matter of degree. One of the biggest differences is for institutions that host hospitals and clinics. The privacy of medical records (HIPAA data if you are in the US) requires a higher level of security, and because some of our software (e.g., O365) crosses academic and administrative boundaries, a tremendous amount of due diligence is required to ensure that data does not get sucked into Copilot-type applications.

Another difference of degree comes from the requirements of the variety of subjects taught. While we are beginning to see some basic principles of teaching and learning with AI emerge in both the core curriculum and individual subjects, this is still in its early stages. An AI orientation course may be important for transitioning students to using it in college, and AI certificates (allowing them to explore it more deeply) are also emerging. The ways AI may be used in different fields (say Computer Science, Nursing, Journalism, Art, Physics, and English) are as different as those areas of study. The stakes around accuracy, the consequences of mistaken information and data breaches, and the ethical demands are all much higher than is usual in K-12. If, for instance, a nursing student learns the wrong thing from an AI, that is more likely to have serious consequences for others than if a middle schooler does. Of course, if a middle school's AI manages to systematically indoctrinate students into a particular way of thinking, that could also have broader implications for the students and their community.

One thing I have not seen addressed much about AI tutors and teachers is that they can be hacked in ways that humans cannot. Also, unless we develop AIs with theories of mind and a good understanding of the physical and social world, they will not be able to relate to students the way a human teacher would.

In terms of dealing with companies, I agree with Nick that educational technology companies are more likely than the AI giants to provide safe and reliable AI for students. One thing to consider is the extent to which they are developing applications at the same time as institutions are developing policies that may limit the use of some functions. I have already run into this a few times. Developers need to build in fine-grained settings to enable or disable functions at an institutional or course level - preferably both.
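A minimal sketch of what such fine-grained controls might look like, assuming a hypothetical settings model where course-level choices fall back to institutional defaults (all names here are illustrative, not any vendor's actual API):

```python
from dataclasses import dataclass, field

# Hypothetical illustration: per-function AI feature flags that can be set
# at the institution level and overridden (more restrictively) per course.

@dataclass
class InstitutionPolicy:
    # Default on/off state for each AI function, e.g. "essay_feedback".
    defaults: dict[str, bool] = field(default_factory=dict)

@dataclass
class CoursePolicy:
    # Course-level overrides; anything unset falls back to the institution.
    overrides: dict[str, bool] = field(default_factory=dict)

def is_enabled(function: str, inst: InstitutionPolicy, course: CoursePolicy) -> bool:
    """A function is available only if both the course override (when present)
    and the institutional default permit it."""
    if function in course.overrides:
        # A course can disable an institutionally enabled function,
        # but cannot enable one the institution has disabled.
        return course.overrides[function] and inst.defaults.get(function, False)
    return inst.defaults.get(function, False)

# Example: the institution allows essay feedback, but one course opts out.
inst = InstitutionPolicy(defaults={"essay_feedback": True, "answer_generation": False})
course = CoursePolicy(overrides={"essay_feedback": False})
print(is_enabled("essay_feedback", inst, course))     # False (course opt-out)
print(is_enabled("answer_generation", inst, course))  # False (institution-wide off)
```

The key design choice in a sketch like this is that course-level settings can only restrict, never expand, what the institution permits - which matches how most institutional AI policies are likely to work in practice.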

As for the AI giants, almost all of them have exhibited unethical or questionable behavior. This raises concerns about whether use of their tools can ever be ethical. Their often lax approach to privacy and security, let alone accuracy, means institutions must exercise a level of vigilance more extreme than before. Exercising that vigilance is taking a great deal of time and effort. In the short term, it is largely being done by individuals and departments taking on extra work or pushing back other projects. In the long run, it is going to require additional human, fiscal, and technical resources.

The behavior and statements of the AI giants so far are actually not promising for education. Some of them, or at least their cheerleaders, seem to have teachers and professors in their sights - just more jobs to be deskilled or eliminated. Some see schools, colleges, and universities withering on the vine as everyone gets a personalized AI tutor. The broader societal implications of that are enormous. On the other hand, for a company to capture most of those institutions would be a huge windfall in revenue, influence, and long-term power. Statements from some of the companies or their leaders raise questions about the nature of this game. We do need to treat them as companies - though we need to think about the big players in terms of what they are, cloud capitalists, which operate very differently than other corporations. We also need to realize that some of these companies' stated aims indicate they are playing a very high-stakes game for control of the future.

What Nick and others are starting to do is introduce educators and those who care about education to a different way of understanding education. They are leading us to consider things we might not consider otherwise, or that only a handful of staff in a school district, community college, or university have had to deal with in the past. They are also pointing to the ways that AI is changing the larger contexts in which education and educational institutions operate.

author

Thanks, Guy!!! Great comment. I second the call for more fine-grained settings. AI policies are not going to be uniform across the board. If the secondary players want to be viable in the marketplace, they will need to offer products that can be modified to fit schools' individual policies and prerogatives. I am interested in hearing more about cloud capitalism and the different nature and stakes of the current technological integration campaign. I feel like you are hitting the nail on the head there. I appreciate your kind words in conclusion. Yes, what was once a problem for only a handful of schools is now front and center for all schools. That is why voices like yours are so important as we move forward. As always, I learn more after I post from my readers' comments than I do during the research and writing phase.

Jul 2 · Liked by Nick Potkalitsky

Within our university system, AI policy is devolved to the campus and department levels. One of my hobby horses these days is making everyone aware that they need to pull in the academic and information technology groups early on when they are thinking about policies, both to ensure those groups are aware as they review products and also to understand what may or may not be practical. There has always been a connection between policy, procurement, and support, but it is more important than ever with AI.


Guy, your perspective is critical. I’m so glad you are connected with Nick. You and he are alike in that you value education as the top priority. One point you raise that resonates with me is the need to understand AI within disciplinary discourse. Discourse analysts have a long history of mapping literacy practices across the curriculum in terms of genres (cf. John Swales) that are crucial for learners to master en route to expertise. Applications of AI in a course on Shakespeare differ in impactful ways from, say, anatomy and physiology or modern Japanese history. There are commonalities, which could be excavated by an interdisciplinary task force. The answers are located in faculty collaboration facilitated by people like you and Nick who have double vision: one eye on the bot, one eye on the classroom and the learners.

Jul 2 · Liked by Nick Potkalitsky

Terry, I've had a few experiences in the past year that drove this point home to me. Last autumn, I was part of a presentation to students in an upper-level health professions course. I was amazed by how tuned in they were to the ethical and privacy implications. More recently, I led a lunchtime discussion of AI ethics for a group of exiled journalists who are doing a summer workshop at MU. I got to spend the day with them and learned a lot about the kinds of things that are important to them. Things like protecting sources, and the privacy and security implications AI raises there, struck me. I am learning just how much we need to tailor ethical, privacy, and security conversations to different audiences.


Thank you for your service, Guy. As a retired prof, all I can do is offer my two cents. It is very encouraging to meet you!

Jul 1 · Liked by Nick Potkalitsky

Great article! As a quick comment, Magic School worries me. Despite teaching for 10+ years, despite speaking at various conferences (with a national conference on the horizon), I don't fear AI taking my job. But I would fear an unlicensed teacher *with* AI being hired for cheaper.

Through the grapevine, I've heard of administrators in my state who refuse to purchase materials because AI can do it. Why buy anything when AI creates the content itself - the lesson plans, the questions, the assignments, and the answer keys? Just tell it the so-called standards and poof! All there! Once teachers create the curriculum map, never rehire them; hire warm bodies instead.

It's the Chinese Room with a twist: If the computer tells the teacher what to do and what to say, are they *really* a teacher? The teacher doesn't have to know the content; they just have to know the software. And while textbooks have accomplished this for decades--I'll call a spade a spade--AI goes a lot further.

If I'm questioned for my teaching, I'd rather shove an academic article in someone's face than a computer program.

Jul 2 · Liked by Nick Potkalitsky

This is great! I really like your point that the whole laundry list of companies that exist to make AI safe shouldn't actually need to exist.

Also, I'm flattered that my newsletter was mentioned at the end!

Jul 1 · Liked by Nick Potkalitsky

Claude 3.5 Sonnet has shown a slight edge in performance across a variety of benchmarks in areas such as reading, programming, mathematics, and vision, outperforming AI models like GPT-4o and other competitors, as well as Anthropic's own previous flagship model, Claude 3 Opus.

A multitude of educational industry assistants can be developed, such as:

Intelligent Education Assistant: Interacts with students through voice or text, providing personalized learning suggestions and answering questions.

Intelligent Essay Grading Assistant: Analyzes students' grammar, spelling, logic, and expression, automatically evaluating and correcting students' essays (a rough sketch of this one follows the list).

Virtual Laboratory Model: Provides a realistic experimental environment and experience through virtual reality and simulation technology, enhancing students' experimental skills and scientific thinking abilities.

Intelligent Learning Assessment Model: Conducts automated learning assessments and feedback by analyzing students' learning behaviors and performance.

Intelligent Tutoring Model: Mimics the role of a human tutor, engaging in one-on-one interactions and tutoring with students, providing detailed answers and guidance tailored to students' questions and needs.
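As a rough sketch of how the essay grading assistant might be wired up with Claude 3.5 Sonnet via Anthropic's Python SDK - the rubric and prompt here are illustrative assumptions, not a production grading system:

```python
import anthropic

# Illustrative sketch of an essay-grading assistant built on Claude 3.5 Sonnet.
# The rubric and prompt are assumptions for demonstration; a real deployment
# would need FERPA-compliant handling of student work before any API call.

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

RUBRIC = "Evaluate grammar, spelling, logical structure, and clarity of expression."

def grade_essay(essay_text: str) -> str:
    response = client.messages.create(
        model="claude-3-5-sonnet-20240620",
        max_tokens=1024,
        system=f"You are an essay-grading assistant for teachers. {RUBRIC} "
               "Return specific, constructive feedback and a 1-10 score per category.",
        messages=[{"role": "user", "content": essay_text}],
    )
    return response.content[0].text

print(grade_essay("The industrial revolution changed society in many ways..."))
```

The other assistant types would vary mainly in the system prompt and the surrounding data flow (learning analytics, VR environments, and so on), which is where most of the real engineering effort lies.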

author

Fascinating, Ming. The key factors from the perspective of this article are how much each of these applications costs, how easily teachers and students can interact together within them, and what safety protections they offer students, teachers, and schools. There is no shortage of tools. But ed tech is a largely unregulated space. Claims about efficacy do not have to be proven. Etc.

Jul 1 · Liked by Nick Potkalitsky

Great run-through of some accessible options for young kids - thanks for this!

author

I appreciate your feedback, Brent!

Jul 1 · Liked by Nick Potkalitsky

Great post! I'm so glad my only kiddo is 21. This also resonates for me because there is the same or a similar level of difficulty in looking for safe and secure AI solutions for those of us working at organizations that are considered critical infrastructure.

author

Yes, no simple solutions. Just risks and benefits.

Jul 1 · Liked by Nick Potkalitsky

The disconnect between how AI companies want to provide generative AI to educational institutions and what schools and colleges are prepared to pay and/or do to set up that access is the most important AI-related topic of the summer. Thanks for laying out some of the options so clearly.

Very useful for school administrators trying to understand the landscape, but also for the technology companies trying to sell into this market. Much of this gets framed by enthusiasts as "Why do schools and colleges move so slowly? Don't you know there is an AI revolution underway? We must train our students for the future of work!"

There are good reasons for moving carefully. The trade-offs you describe between safety and affordability are complex, and any way a school decides to provide access to its teachers and students carries risks.

author

Thanks, Rob!!! Yes, so many folks have already jumped into writing the curriculum, and schools still haven't figured out how to put the tools into students' hands. Really, I am just writing from my experience as an educator. I am glad it is resonating.


Always an interesting read!

author

Glad I caught your interest. Thanks for your support!


Absolutely an issue full of insights and worth reading! Among the various tools and companies mentioned, I was really struck by Magic School and the idea of going beyond the personalized to create an 'augmented class,' as we could say, through AI. I am very curious to see how this type of tool develops and whether there are researchers focusing on it in AI and Education research.

author

Thanks, Riccardo. I really appreciate your support. Magic School has been an early player in the market. They have a pretty good product; whether they will be able to survive competition with Khanmigo, we shall see!!!

author

Thanks, Sam. I remember your MVP post fondly. And yes, I agree. My low cost of entry is a bit of wishful thinking. In reality, schools do have funds for new tech, but getting them may require shedding old tech that is no longer relevant.

author

Yes, that was my thinking for a long time. But after a series of trainings, I learned that most teachers have only figured out tool use for their own personal work cycles. How to branch out to students is still very much an open question, as overviewed in the article. I have found that it is hard to plan out a student work cycle for a hypothetical tool. At least that is where I am in my own course building.

author

Hmmm… interesting. This is the downside of these tools being cheap and accessible. It's striking how the narrative has shifted from admin resistance to overreliance. Thanks for the heads up!


Thank you, Nick, for the wonderful article! Personally, I have points of agreement and disagreement. Let me start with what comes to mind first: safety as a focal point for the large technology companies. It is absolutely a must for any consumer product release. For tech companies focusing on vertical, industry-specific products, compliance requirements such as FERPA should be part of the MVP definition. Any general-purpose solution should also have guardrails. These companies are hungry for data to train on and have taken shortcuts to get it, and I believe that is an erosion of trust that needs to be rebuilt.

Second is price. Yes, I agree that price is a barrier for low-income families and districts with limited funding. Unfortunately, right now, developing generative AI technologies is costly, and the dominant business model is a subscription. I'd argue that a dominant solution should receive funding from the government, but there are systemic issues there more broadly that I cannot get into now.

Finally, all of these solutions need to adhere to some of the MVP requirements I lay out here: https://medium.com/@sam.r.bobo/an-edtech-minimally-viable-product-377cfed3e62c. Thank you for all the time and effort you put into these blog posts, Nick!
