Discussion about this post

Josh Gellers, PhD:

I really like this framing! It’s useful not only for education but for any industry grappling with how to establish guidelines around AI use.

Jonathan Jackson:

Cross-posting my comment from LinkedIn 😊

Hey Nick, regarding the point “A red-coded assessment is only red if the student decides to treat it that way”: this is indeed an issue if a pure labelling exercise is taking place. But our interpretation of “red” assessments at Queen Mary University of London is structurally secure assessments, e.g. invigilated exams or vivas.

Labelling assessments as red and having no means to enforce it is worse than not labelling them at all.

Here’s our approach / interpretation:

Red - structurally secure assessments

Amber - “open” assessments, where AI use is optional; these may require structural redesign to remain authentic, challenging, and meaningful

Green - AI required, as part of embedded AI literacy

While the balance will vary between programmes, I feel most of the hard work is going to have to happen within “Amber”.

In short, we’re using the discursive labelling approach to highlight where all the structural redesign needs to happen.

