Good points. I'd only emphasize that the best use isn't to replace the human writer but to help the human writer get better. Writers won't hesitate to take a creative writing course; think of that persona when using AI for review. Also, do still attend those courses, because writing is about connecting with others and learning socially. AI is a great augmentation tool. Don't let it replace human interaction.
Absolutely, positively! Since I was a kid, all I ever wanted to do was be a fiction writer. I studied character development and wrote every day, even as a kid, and I think it’s fair to say I obsessed over what makes a character come to life. That was the primary focus of my energy, and I have a lot of energy. I always have!
I discovered quickly that creating characters to act as filters for AI capabilities was so much more effective than prompt engineering. Doing exactly that has been the focus of my attention for the past year and a half.
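To make that concrete, here's a minimal sketch of the difference (the model name, persona text, and helper function are illustrative assumptions, not a recipe I'm prescribing):

```python
# A minimal sketch of "character as filter" vs. a bare prompt, assuming
# the OpenAI Python SDK (v1+) and an OPENAI_API_KEY in the environment.
# The model name and persona text are illustrative assumptions.
from openai import OpenAI

client = OpenAI()

PERSONA = (
    "You are Mara Voss, a veteran fiction editor. You are blunt, you care "
    "about motive and consequence, and you answer every question by "
    "interrogating what the character wants."
)

def ask(question: str, system: str = "") -> str:
    # The same user question, optionally routed through a character who
    # filters how the model frames its answer.
    messages = [{"role": "system", "content": system}] if system else []
    messages.append({"role": "user", "content": question})
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model choice
        messages=messages,
    )
    return reply.choices[0].message.content

question = "Why might my protagonist refuse the job offer in chapter 3?"
print(ask(question))           # bare prompt
print(ask(question, PERSONA))  # same question, filtered through a character
```

The character carries the context, priorities, and tone that would otherwise have to be re-engineered into every prompt.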
I have over 30 years of experience in high tech, working with information and knowledge and connecting people with the information they need to live better, work better, and be better. Having access to an information interface that can be tuned with character features is next-level information interaction, and I’m so glad that other people in the character development space are discovering this too.
Great points! Ideally, if we could show people that "prompt engineering" is more a writing skill than a "tech" skill, this shift would happen more naturally. I think it already is happening, but I am still amazed at the furrowed brows and confused stares I sometimes receive when I tell an audience or individual that using AI well is a cognitive process very similar to writing well. That tells me we still have a ways to go.
Lots and lots of potential with AI/agents and story/character development.
Agreed! Similar skill, different application.
Mike - this is great stuff. Fascinating. My experience with some of these types of activities (not nearly as detailed, but modest attempts in a slightly different context) is that many students do not have the background knowledge or writing chops to interact with AIs in this way. They also aren't adept enough readers to parse the AI outputs very effectively, especially in a single class period.

How do you respond to critics who might say that we still need to focus on developing basic student writing skills before throwing them into the deep end of interacting with AIs like this? I am torn, because using AI in the manner you suggest here can produce some really useful exercises, but if students can't write well enough to take advantage, aren't we putting the cart before the horse? There is also the issue of time - the more time we spend with LLMs in the classroom, the less time we have for work unaided by AI. Or is there a magical blend of the two?

That's the struggle I'm currently having, and I'm fortunate to teach highly motivated, generally strong, and well-supported students. Many are still skeptical and fearful of using AI for anything because of the cheating stigma. We're all learning about this as we go, and I really appreciate the examples and experimentation you're bringing to the table. Thanks to both you and Nick!
Thank you very much, and you are very right!
The modelling of this skill comes after students have developed domain expertise in the given subject. Then and only then can we model this skill for them -- in context. Model it against a non-exemplar and they will build the very skills you (rightly) point out they don't have.
End of unit, once a semester, in the context of a larger assessment, and with the inclusion of a mini-unit that utilizes comparative transcript analysis to build the writing, reading, and critical thinking skills necessary to engage in this way.
There's more to it, but that's the high-level overview! Hope that helps and would love to hear any feedback you have.
Very helpful and much better context. One piece of feedback I have, and I wonder if others feel the same, is that at times I find myself hesitant to share everything I know and have done with AI because ... well, once the cat is out of the bag, it's really hard to put it back in!

Based on everything I've read of yours, it's clear you have used AI a lot - to test drive, in your own work, and everything in between. I've been a heavy AI user since early 2023 and as a result have become very facile with most of the models - ChatGPT, Gemini, Claude, and NotebookLM, to name a few - with multiple workflows, projects, GPTs, etc., not to mention many of the other top AI productivity tools for specialized tasks. There is almost nothing I can't do better by bringing aspects of AI's strengths to the table. This is both empowering and frightening, because I do know what our more advanced kids are doing. Students who know how to use AI better than most teachers are simply running circles around them, and I think that gap is only going to get wider. AI can be used for everything students are asked to do, including completing their reading assignments, preparing for class discussion, and so many other things that are not "detectable" in any meaningful way.

So, after your transcript analysis session, how do you put the horse back in the barn? It's almost as if, the more successful the AI lesson, the harder it's going to be to say, well, now you shouldn't use these incredibly powerful tools for these other things. Students don't see those lines as brightly as we want. Also, what happens when they take your skills and bring them to other classes where teachers are more skeptical and resistant to AI? These are the AI questions that keep me up at night ...
Ah, therein lies the rub!
Students who do this get better at using AI, but not better at using AI to cheat. They see that more thoughtful approaches produce better outputs, which makes them less prone to engaging with AI on a surface level. It's akin to "once you see it, you can't unsee it."
Examples:
1) After our first go-round on this (with a fictional Holden Caulfield chatbot), my students went from thinking that character bots were cool to thinking that they were dumb. The reason? I forced my students to act like journalists with Holden instead of engaging as a buddy -- or trying to "learn from the bot." When they engaged like journalists (sparring partner), they found all the gaps and realized -- oh, these things aren't as good as I thought they were.
This dovetails with research that shows that students with more AI literacy tend to cheat with AI far less.
2) After we did this via brainstorming with ChatGPT, my students commented on how "hard" it was to actually use AI well. Their perception changed from "this thing can do anything for me" to "oh wow, I've been using it really poorly." They now have the experience to know that when they engage with AI without *thinking*, they are basically using it like a third grader. That develops intrinsic motivation to engage actively rather than passively.
Will they do that every time? No, of course not. But AI literacy is about making good choices (IMO), not using AI perfectly. And in order to make good choices, you have to know and experience the universe of choices that exist.
Last point - I would argue the cat is already out of the bag. As you point out, they are using it -- and using it with basically no guidance except from TikTok. That's their current benchmark.
By using comparative transcript analysis and grading the chats, I redefine the benchmark for AI use for my students. That's part of our job now, as far as I can tell. I can't force them to do anything, but I can show them the difference and require that they practice the skill in a closely constructed and monitored arena.
Your point is well-taken, but in my opinion the fear is unfounded. I'm having a helluva time convincing folks otherwise, but it's the truth as far as I can see it! Ha ha.
I like your optimism, but it does not necessarily jibe with my experience. Getting better at AI means the opportunity to get better at using it in ways that many teachers would prefer they didn't. Maybe they don't do it in YOUR class because they know how knowledgeable you are (I have had a generally similar reaction from my students), but I do think you are being naive if you think they aren't bringing those skills to other assignments.

But, as we both agree, the cat is already out of the bag, and most students do not know how to use AI very well. I'm more and more starting to think that we will have to accept that AI "assistance", like the internet in general, is just part of our new reality and adjust accordingly. I do agree that, to some extent, the approach of supervising AI use and modeling it in ways educators deem potentially productive will be the path forward. The other rub, however, is that not only are most teachers and professors not aligned with that point of view (just look at the recent wave of AI backlash across Substack), but very, very few feel confident enough to engage in this high-level work the way you or I do. I do think it will be the single biggest challenge for education in the coming years. Kudos to you for getting out in front of it.
Yes, I see your point and agree! You are right that they will likely take this into other classes, and that can cause serious friction among faculty. I try to frame it in those moments as: they are using it in your class already, so wouldn't it be better if we started modelling what we want to see? And at the end of the day, wouldn't it be better if they were bringing skills modelled by a teacher into other classes than the ones they are being taught by friends or on social media? So, in that sense, it is perhaps best viewed as the lesser of two evils.
But I absolutely see your point and wasn't trying to say that they won't use AI in other classes - just that they are likely to be a touch more thoughtful than they otherwise might have been. Hope that makes sense -- because I do agree with you!
The question is going to be whether folks like you and me - who are experimenting with, acknowledging, and, frankly, trying to teach ethical and effective AI usage born of our own experience - are going to continue to be the outliers (at least I am at my school - I don't know about yours), or whether others will make the same choices in the coming years. Otherwise it's going to be a lonely place in 2026, 2027, etc...
There’s a rare precision here, the kind that reveals itself in the pauses between lines as much as in the words themselves.
Conversational authoring, at its best, feels less like instructing and more like listening forward, inviting the next sentence to arrive rather than forcing it.
Your piece captures that subtle dance beautifully.
I work at the intersection of security and narrative architecture, exploring how the way we frame information shapes both trust and action.
Always open to weaving perspectives with fellow builders of thoughtful systems. Appreciate you sharing this craft so generously.
I LOVE the concept of "listening forward." That is so true, and I would argue that listening forward involves listening first to oneself. Not only is that "art," but it's metacognition. And being good at using AI is really just being a metacognitive person who reflects naturally. AI literacy/fluency IS metacognition!
Thanks for your comment, Benta - would love to hear more about your work!
I completely agree: metacognition is the quiet backbone of true AI fluency. The capacity to observe our own mental movements, question them, and re-author them is what turns a prompt from a command into a conversation with possibility.
In my work, I look at how security frameworks can be reframed through this same lens of inner and outer listening:
• How do organizations "listen forward" to emerging risks rather than react backward?
• How do we design systems that encourage reflective decision-making rather than brittle compliance?
• How do we maintain narrative coherence in an era when information is increasingly fragmented and often weaponized?
Your emphasis on self-reflection as the foundation of conversational authoring resonates deeply. It suggests a shared space where narrative craft and secure system design converge; both depend on a willingness to pause, examine, and reimagine.
I’d love to explore how these ideas might connect further, especially around cultivating "metacognitive resilience" in both individuals and organizations facing accelerated AI change.
Thank you for opening this door. It feels like a conversation worth unfolding layer by layer.
While I write my articles myself, and don't intend to change that practice, I've started using AI text-to-speech to turn them into podcasts, including representing my own first-person written voice/style. It would be interesting to see how that comes across to people interested in the topics discussed in the article above.
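For anyone curious, the basic article-to-audio step can be quite small. Here's a minimal sketch assuming the OpenAI Python SDK's text-to-speech endpoint; the model, voice, and file names are illustrative assumptions, and matching a personal written voice is the harder part that isn't shown here:

```python
# Minimal article-to-audio sketch, assuming the OpenAI Python SDK (v1+)
# and an OPENAI_API_KEY set in the environment. Model and voice names
# are illustrative assumptions, not a recommendation.
from openai import OpenAI

client = OpenAI()

# Read the finished article. Long pieces need chunking, since the TTS
# endpoint caps input at a few thousand characters per request.
with open("article.txt", encoding="utf-8") as f:
    article_text = f.read()

with client.audio.speech.with_streaming_response.create(
    model="tts-1",              # assumed model choice
    voice="alloy",              # assumed voice preset
    input=article_text[:4000],  # stay under the per-request input limit
) as response:
    response.stream_to_file("article.mp3")
```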
To me, it's a great use case, especially if you can truly get the AI to capture your voice. I find that it's hit-or-miss, but would love to hear about how you go about getting it to work for you!