Discussion about this post

Tony Jones

What the inconsistent scoring did was to sever the connection between effort and outcome at the exact moment genuine thinking had produced real work. She spent twenty minutes doing the thing we actually want students to do, then watched it cost her marks. The lesson that teaches is about whether the game is worth playing honestly.

I've watched a version of this across 40 years in NZ classrooms. Students who had to work hardest to meet expectations often developed the tenacity that carried them through transition points. Students for whom the system consistently rewarded compliance over thinking often never needed to build that tenacity. AI now offers every student the compliance pathway at scale. When the feedback ecosystem can't distinguish between the two, the rational move is exactly what your student did.

The fix isn't better tools, though you're right that we need them. It's what you already did in that session: make the thinking the assessed thing, not the product. When the trace matters more than the score, the tool's inconsistency stops being decisive.

Michael Woudenberg

The worst part is that this has nothing to do with AI. AI is just grading a predefined bias. It's also interesting that I first noticed the 'drive to the algo' in resumes: the over-inflation of tasks and the reframing designed to slip past the ATS and lazy screeners, rather than to show what the hiring company actually wanted.

