17 Comments
Jun 17 · Liked by Nick Potkalitsky

Nick, I can't speak to the K-12 environment; I just don't know enough about what's going on there now or what has gone on there over the last 35 years. I've been thinking a lot this week about the beginnings of the use of the web in higher education. That was one of the last times there was a technology that anyone could use for free that also had such transformative power. Committees and task forces were formed, workshops were held, and money and release time were made available for professional development. Eventually, departments like the one in which I work were formed to support its use.

The environment is very different this time around. We have had three decades of web use on campuses (Netscape Navigator, the first browser for most of us and so perhaps a parallel to ChatGPT, arrived 30 years ago this October). Most of the use was on desktops. Laptops were not that common yet, certainly not in classrooms, and all connections were wired until 1997. Cellphones were more likely to be seen on TV than on a midwestern campus. Educational technology was largely coming from universities and a few companies. Higher education was becoming more expensive but was still seen by most as a positive good. Privacy and security were minimal concerns; intellectual property and acceptable use were the main worries. There was very little talk of ethics.

The environment today is far different. Higher education is expensive and often beleaguered. Most educational technology comes from for-profit corporations and startups. Over the last fifteen years or so, venture capital has discovered education. The infusion of money and business practices has created a highly competitive landscape, so that when something trendy and innovative like GenAI comes along, it is immediately incorporated by companies looking for an advantage. (An interesting side note is that the big textbook publishers, most of whom have had other forms of AI in their courseware for years, have been more cautious and slower off the mark with GenAI.) Laws and the interpretation of laws have changed. So have accreditation standards. So has the campus policy environment. Privacy, security, intellectual property, and acceptable use have become much more important over the last thirty years. Courses, especially those offered only online, have to follow design guidelines, and faculty may have to undergo online teaching certification.

There are larger numbers of instructional designers to provide advice and instructional technologists to provide support. In addition to training from the university on how to use the technologies, there are trainers from all of the educational technology companies. All are offering advice and guidance, which may be oriented towards a particular set of design principles or the way a particular product can be used. In the best cases, the faculty voice is heard. In the worst ones, and there are a few of these coming from ed tech companies, the technology forces the instructor to teach in a certain way. In the wake of the Pandemic, many instructors are more used to asking for and accepting advice on how to teach with technology than before. I think that is significant too. There are still plenty of experimenters, don't get me wrong, and faculty have all kinds of different skill levels and levels of comfort with using technology in their teaching, but the Pandemic does feel like a turning point in what they are willing to accept.

Something else that is different today is fear. There was some in the early days of the web, but mostly wonder. The first wave of reaction to ChatGPT by professors was different. My involvement began because of cheating and plagiarism concerns. I am the lead support for Turnitin for our university system, and at that time I was also the lead on automated proctoring software. Both had already exposed me to a lot of ethical concerns about the products we were using, and the proctoring programs introduced me to algorithmic bias and the coded gaze in 2020. I got involved in AI because of the fears and the ethical concerns that were already manifest in the December and January following ChatGPT's release.

This is new in my experience. I think it is partly shaped by what's going on with Generative AI. The range and extent of the fears have changed in the last 18 months. While there is still a lot of concern about plagiarism and cheating, the broader ethical questions have come to the fore. Concerns about bias are much more prevalent. So are issues of equity and threats to creativity, intellectual property, privacy, and security, as well as to society, the economy, democracy, the environment, and the climate.

All of this is much different from the early days of the web on campus. It means more policies and guidelines. For some it means more acceptance of the ways an application allows one to teach. For others it means more experimentation. Overall, it means even more attention paid to technology in teaching and learning than before.

I think we have been fortunate on our campuses to see a lot of room made for individuals and departments to work through policies and uses. We have also had fruitful collaboration between professors and instructional designers in making the technology beneficial to classes.

Now that more policies and guidelines are being introduced, an attempt is being made to balance the freedom to experiment (or even to reject AI) with the need to maintain quality and uphold ethical and legal standards. There is a lot going on. It is exciting, worrying, and often frustrating. I can see that in such an environment there is a temptation to dictate, and I hope that it can be held at bay.

author

Thanks, Guy!!! There is a lot to unpack here. It sounds like college and university communities are doing a lot of heavy lifting right now. It is nice to hear that there are collaborative spaces opening up in the midst of a technological cycle that is introducing continuous updates and new products. Even the very early reports from the MLA back in mid-2023 spoke very strongly and directly to issues of equity, bias, IP, privacy, and security.

I feel like this has become the work of my blog this summer. As K-12 teachers onboard to this tech for materials generation, there will be a natural opening up to more classroom experimentation. But we still haven't figured out so many basic questions: What is the status of student information (particularly for students younger than 18) when input into commercial-grade AI models? What is the status of that information when input into educationally designed AI models like Khanmigo or PowerTools? How should permissions work when students use these tools? What policies and processes should schools implement and follow? To me, the whole AIxEducation K-12 blogosphere has jumped into curriculum development, and we still haven't answered these basic questions. And let me tell you, they are the questions schools want answered. And as you suggest, beneath these questions are a host of other questions about bias, IP, security, and the big one: the overall purpose for integrating these tools into schools in the first place. Thanks for helping me think through some of these complexities. If you have any insights, let me know.

Jun 18 · Liked by Nick Potkalitsky

I ran an idea by a professor I know at the university where I am a staff member, and he told me, "Ashley, you are a doctoral student of Ed Tech. 90% of professors know their subject matter, but they don't know how to design a course." That was an interesting statement.

Jun 21 · edited Jun 21 · Liked by Nick Potkalitsky

People are overlooking the big picture. Addressing the current educational problems with AI is useless because education is on the brink of a massive transformation. It's predicted that AI language models will eventually replace teachers and educational institutions entirely. Why would you need to attend Oxford University when you could have a personalized AI tutor with expertise surpassing that of Oxford, available to you 24/7? This AI tutor would have infinite patience and knowledge, dedicated solely to you, guiding you through assignments and keeping track of your progress, all from your phone. This would lead to a huge improvement in the quality and accessibility of education. Costs would drop to almost nothing, making high-quality education available to every child in the world with internet access, regardless of their location. Third-world countries would have Oxford-level education for every citizen at almost no cost. This represents a tremendous social change. Soon, the impact of these changes will become evident, causing educational institutions to panic. They will likely attempt to remain relevant by developing their own branded and paywalled AI teacher models. However, they are already so far behind in this field that it is doubtful they will ever be able to catch up.

author

Thanks, Emanuele, for contributing. I guess the quality of these tutors is still very much an open question. I certainly am very excited about the possibility. At the same time, I am asking questions about their limitations. A fellow AI x education writer, Leon Kurze, summarized the debate very succinctly in this recent post on LinkedIn. I thought you might find it interesting: https://www.linkedin.com/feed/update/urn:li:activity:7209736908078280704/?commentUrn=urn%3Ali%3Acomment%3A(activity%3A7209736908078280704%2C7209920529036726273)&dashCommentUrn=urn%3Ali%3Afsd_comment%3A(7209920529036726273%2Curn%3Ali%3Aactivity%3A7209736908078280704)

Jun 21 · Liked by Nick Potkalitsky

Emanuele, it may depend on what larger contexts concern us most. There are lots of places to start. Here are a few:

Why would costs necessarily drop? Would they only drop at first and then increase once we are hooked? Assuming writers like Yanis Varoufakis (Technofeudalism) and David Runciman (The Handover) are correct about what is happening in politics and the economy, what would be the consequences? What would be the tradeoffs?

How would we handle biases in training data and guardrails? It is not just that they would have to be negotiated nationally. In the US, at this point, they would have to be negotiated state-by-state and perhaps even school district by district.

What broader lessons would we be teaching the students about the relationship between humans, machines, other living things, and the world in general?


Nick and Emanuele, these comments have prompted me to think a lot more about different ways of looking at AI. I have posted the first of a four-part series on my blog, https://guywilson.substack.com/p/ways-of-seeing-ai. The second part will contain a deeper exploration of my reply here. I hope I am not distorting your meaning in this series.


Nick, I have a PD workshop on AI and Education, and a few minutes after introductions I am going to do your exercise of asking the teachers about the subject. As a future Ed Tech leader, I have come to the realization that my role has been that of a listener, and then to ask how we can have practical innovation within the domain and be the most helpful to individuals who have a lot of passion and knowledge but usually do not have people who are passionate about listening and working on solutions.

author

Thanks for becoming part of the conversation, Ashley. It is a good time to do a PhD. So many new pathways are opening up. So many new tools to assist you with your work. I have heard such things from instructors. The thing is, we all do it. Any time we start teaching to an assignment, there goes design thinking. Let me know how the exercise goes. I suggest using a shared writing space to record initial insights, questions, and inquiries. That way you can keep yourself accountable. Implicitly, you are also starting to build a bridge toward pitching a second training.


Very thoughtful piece, Nick. My experience leading workshops tells me that if you provide opportunities early for teachers to express their concerns, misgivings, hesitations, or (in this case) successes, you create an environment of openness and sharing that lays a strong foundation for teacher engagement. And the more teachers actively engage with a technology, the more likely they are to develop the measure of confidence, comfort, and understanding needed to integrate it.


I teach primarily post-university EFL grads, so my focus is almost entirely on giving them a language-communication surrogate agent in the form of genAI, but I have wrestled with the idea of the use of AI in K-12. The most obvious exemplary use case of genAI, in my mind, is as an idea generator, a sounding board, and a guided first-draft writing assistant. But this presupposes that the user can judge the quality of the output. In terms of workflow, this is a shift from creator to collaborator/evaluator. And in terms of writing ability, to take one example, I think you are right: not even many first-year uni students are competent evaluators of writing (at least I wasn't back then, if my writing ability was any indication).


Great points here, and crucial to systems thinking in that we need to step back to begin to see the whole picture. Only then can we see novel uses for AI in tailored applications.

author

Yes, I am building a whole system right now. I will start previewing it at surface depth via the Substack to test out some of the ideas.

Jun 17 · Liked by Nick Potkalitsky

Love this, Nick (and thanks for the shout-out). I think two things are critical here:

Curiosity and empathy. Trainers should ask lots of questions, to explore WITH teachers. Each group has different contexts, different constraints, different inertia, etc. Trainers need to identify root causes and perceptions and to challenge their own assumptions. THEN start training, to make sure it hits the target. This is entrepreneurship 101: provide the value the customer needs to solve the problems they are currently experiencing.

I have yet to have this happen in my training work, but I always ask for students to be involved. They, after all, are our ultimate customer. We need their voice in any conversation involving new teaching methods, new directions for learning, etc.

author

Thanks, Doan!!! I am thinking of incorporating some initial surveying into my training routine to really gauge what stakeholders want to know and learn how to do. Good point about students!!! They could very easily be the subject of a follow-up post with a similar title.


It's always been time to listen to teachers. It's so rarely done.


Ugly t
