Thanks so much for asking, Anna. I took last year off from teaching, so I have not yet taught in the post-ChatGPT era. This fall I will be teaching a 6-8 person class on the History of Higher Education to master's students and plan to invite students to incorporate AI feedback into their writing process (if they want to use it), so I'll check out the prompt library at MyEssayFeedback.ai as a possible resource for my students.
That comment was about my own practice as a writer. I'm in the middle of my third attempt to use an LLM for editorial feedback, mostly trying to get it to read drafts and give notes the way an editor would. In the first two attempts, one with Claude 3 and another with GPT-4, I got nothing useful. I've been using Claude 3.5 Sonnet for a few weeks and have received two suggestions that I acted upon. Both involved transitions between paragraphs that had seemed fine to me but, once the LLM pointed them out, were clearly awkward or abrupt. That's out of 50+ suggestions.
So, better than the first two attempts, but I'm not feeling like it's worth the effort. I fully admit it could be bias preventing me from seeing the feedback as valuable.
I'm playing around with prompts to see if I can get better results. So far, the one that generates the least irritating feedback is to tell it to act like the editor of a journal of wide-ranging inquiry about culture, education, and technology that offers writers and readers the opportunity for sustained reflection, uncluttered by academic jargon. (Most of that is stolen from Raritan Quarterly's self-description.)
That's so interesting. I wonder if it would help if you gave it samples of the kinds of feedback you find useful along with sample drafts.
Thanks, I love that phrasing and will look up Raritan Quarterly.
I like asking it what I can clarify or expand. I also like nonprescriptive feedback. But it depends on what I need as a writer in a given moment--at many moments I don't turn to it, and it probably wouldn't be a good use of my time.
Good conversation. It is sometimes worth just seeing what a model like Sonnet will do if asked to improve a draft based on metrics you feed it, or in light of a piece or series of pieces you either find comparable or see as the next step in your development. One of my guest writers, Alan Knowles, stresses that there is value now in saving your writing projects at different stages of their development so that you can teach your preferred model how writing usually evolves over time for you. If the model has this deeper history, perhaps the feedback will bump up from the general to the particular.
Thank you so much for this thoughtful review and the respectful pushback against some tendencies in the book!
I'm curious about your comment "I find trying to elicit feedback from an LLM mostly frustrating." I find it really useful... with Claude 3.5 right now. We have a prompt library on MyEssayFeedback.ai https://myessayfeedback.ai/oer/types-of-feedback-library