Yup, I also arched my eyebrows when I read the advice to "Use ChatGPT for citations." Knowing how often LLMs make up plausible-sounding sources and links, that's just bad advice, especially if the person reading it isn't aware of hallucinations.
They try to paint over it with "double-check the sources," but that would require exactly the kind of "grunt work" that they suggest replacing.
I agree that ChatGPT can help with citation formatting and structuring, but you still gotta do your legwork to make sure it's not all bullshit.
Yes, it is a little disconcerting. The effortless generation of fake sources. The readiness to fess up to generating those sources. The second-round reassurances still send me back to the sources to do my own checking. Net gain: some new sources found, but I still have to do the legwork.
I appreciate your nuanced take, even if it's less effective at driving clicks.
Your conclusion is spot on. In my conversations with students, while getting a good grade is always present as a motivating factor, students *also* know they're going to have to use AI in their careers. They want to begin exploring AI usage within the "safe" confines of school, in which a trusted teacher can help them learn how to use it effectively. Unfortunately, AI is still largely forbidden (for good reasons!) in academic settings, which means student usage of AI is guided by what they see on TikTok.
Yes, I hear you on the "for good reason" part. I wonder how we can facilitate playground-esque experiences where students can start to build literacy skills in lower-stakes environments. Perhaps ones where summative grades are not tied directly to the experience. Part of me is starting to see the need for something like an old-school computer skills class at the center of secondary school. Something not attached to a particular discipline.
As a test, I tried using ChatGPT to check references for a revision. I finally got it to sort of work by pasting sections of the text, then asking for an alphabetized list of citations that I could check against my reference list. As is the case with most tasks, AI helped, but didn't completely do the task for me (which is fine). BTW, if you're interested, I recently gave my take on OpenAI's advice, which largely tracked with what you're saying. https://open.substack.com/pub/aigoestocollege/p/openais-advice-on-writing-with-ai
People love to argue, right? Especially when they can do it in a mob. Anyone who has been to school thinks they know all about how to teach kids. The belief that there isn't a knowledge base for the profession is widespread, and what I really don't appreciate from OpenAI is their buy-in to this myth. The problem is that OpenAI has social structure, resources, and power, while schooling is fragmented geographically and culturally. The only people in a position to decide how AI can help teaching and learning are educators who study their students within the bounds of professional ethics, and the qualified researchers who study them.
Engagement may be going down as you back away from the anti-OpenAI stance, Nick, but if it's any consolation, I have actually chosen to unfollow a couple of prominent commentators in the AI/education intersection because, over time, they have become increasingly contrarian and hypercritical to the point of only representing a very polarized opinion. It's a great tactic for driving online engagement and clicks, but it's also a tactic that lacks nuance. And nuance is exactly what's needed in the discussion at the moment.
I 100% align with your description of being sucked into the vortex of OpenAI. It's a real feeling of having an angel on one shoulder and a devil on the other, both whispering in your ear as you weigh the newest feature release promising radical productivity against the moral and ethical implications that same feature could have downstream. I think you described that contradiction beautifully.
Once again, a great article. Thanks for putting the time and thought into how to articulate these themes that so many of us are considering.