In the previous post on this topic, I discussed the challenges of assessing online discussion posts as a proxy both for learning and for real, face-to-face conversations. In this post, I want to advance a few ideas about how this might change, and what that change might look like in terms of classroom practice. I’m drawing on the notion of ergative assessment – that is, assessment as a verb rather than a noun. Let me try to explain: one of the key affordances of computers (speaking very simply and very broadly) is that they are capable of doing much more, much more quickly, within the boundaries and parameters they have been given (recognising that they can learn according to certain routines and training protocols).
While I don’t think this affordance has been suitably leveraged in education yet, it does suggest some intriguing possibilities. Take the case of discussion boards. Imagine a class where everything is done asynchronously, via a learning management system, and everything counts towards the final assessment. That is, assessment, rather than being hyper-visible and hyper-focused for all students, is part of the structure of the whole course – and is, well, not quite invisible, but certainly less of a focus of the course. Behind the scenes (or the curtain, for those Wizard of Oz fans), the LMS is busy collecting, aggregating, evaluating and sorting every contribution every student makes to the LMS, and then determining a grade based on those contributions. Obviously, this would require much more than a time-based or word-count-based analysis of student contributions (and likely the ability to do this hasn’t been developed yet, but I’m speaking philosophically here), but imagine it were possible for an AI to do all of this and come up with some kind of grade.
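To make the idea slightly more concrete, here is a minimal, purely hypothetical sketch in Python of what such a behind-the-scenes pipeline could look like. Everything in it is invented for illustration – the Contribution class, the score_contribution heuristic, the grading rule – and no existing LMS or AI model is implied. Deliberately, the scoring function is exactly the kind of crude word-count analysis I just said would not be good enough; the interesting (and unsolved) part is what would replace it.

```python
# Hypothetical sketch only: an LMS quietly collecting, aggregating and
# scoring every contribution a student makes, then producing a grade.
# All names, weights and rules here are invented for illustration.

from dataclasses import dataclass
from datetime import datetime
from collections import defaultdict


@dataclass
class Contribution:
    student_id: str
    text: str
    posted_at: datetime
    kind: str = "discussion_post"  # could also be "reply", "file_upload", ...


def score_contribution(c: Contribution) -> float:
    """Stand-in for the AI evaluator imagined in the post.

    Here it is just a crude word-count heuristic; a real system would
    need to judge the quality of the thinking, not the quantity of text.
    """
    words = len(c.text.split())
    return min(words / 100.0, 1.0)  # cap each contribution's score at 1.0


def aggregate_grades(contributions: list[Contribution]) -> dict[str, float]:
    """Collect, aggregate and sort every student's contributions into a grade."""
    per_student: dict[str, list[float]] = defaultdict(list)
    for c in contributions:
        per_student[c.student_id].append(score_contribution(c))

    grades = {}
    for student, scores in per_student.items():
        # Invented rule: average contribution quality, scaled to a percentage.
        grades[student] = round(100 * sum(scores) / len(scores), 1)
    return dict(sorted(grades.items()))


if __name__ == "__main__":
    posts = [
        Contribution("alice", "A short reply.", datetime(2023, 3, 1)),
        Contribution("alice", " ".join(["analysis"] * 120), datetime(2023, 3, 2)),
        Contribution("bob", " ".join(["reflection"] * 60), datetime(2023, 3, 2)),
    ]
    print(aggregate_grades(posts))  # e.g. {'alice': 51.5, 'bob': 60.0}
```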
The question that intrigues me is this: would students continue to take their conversations out of the ‘learning habitat’, or would they embrace (or at least not resist) this model of learning and assessment? At first, I felt that there might be some ethical issues – and I don’t want to dismiss those – but I think the wider study of technology and sociology might suggest some answers. After all, I think most people with a social media account have acknowledged that we are sources of constant data harvesting, and that Google (or whoever) knows a lot about us. In some ways, we trade off our privacy for the advantages such tools provide us. Would students do the same for assessment? That is, if the tool were guaranteed to give them an appropriate assessment score, would they trade off their concerns about what was posted and where?