Learning Analytics and the Outsourcing of Teacherly Judgement

Photo by Tingey Injury Law Firm on Unsplash: a judge's gavel on a white table.

Who does our thinking for us as teachers?

Increasingly, it seems like both educational technology companies and our own administrators are interested in outsourcing the judgement part of our teaching to algorithms. I have written about these algorithms before, and my most significant concern with them is this notion of outsourcing something which is, for better or for worse, a big part of what it means to teach, at least insofar as evaluation and grading go.

It’s intuitively clear to most of us, I think, that what we use to train an algorithm matters, and that biased content in will result in biased content out. But recent research into algorithmic recommendation suggests that the mechanism itself, not simply the content used for training it, can be the source of bias. This is especially dangerous in a culture that pretends at the neutrality of algorithmic tools for its own convenience.

Perhaps the greatest source of harm is that the illusion of neutrality algorithms have can be exploited in attempts to roll back protections against discrimination. Appeals to the neutrality of algorithms as a cover for discriminatory outcomes has become a fairly common trope.

Catherine Stinson, Algorithms Are Not Neutral

The truth is that I think a lot of learning analytics are pointless but hopefully largely harmless at the level of collection (I’m talking, like, who clicked where in the Learning Management System stuff). Still, I’m troubled that we hand so much data over to faculty without any kind of training on how to make use of it. I give this maybe-trivial example all the time: if I can track when my students do their homework, and I have an in-built bias for morning people (because of our inherent superiority, obviously), how do I ensure that doesn’t impact my evaluations? And is anyone checking?
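To make that maybe-trivial example concrete, here is a minimal, purely hypothetical sketch of how such a signal could be derived from submission timestamps. The CSV file, its column names, and the function are inventions for illustration; no actual LMS export or analytics product is assumed.

```python
# Hypothetical illustration only: a made-up CSV export with columns
# "student" and "submitted_at" (ISO timestamps). No real LMS schema
# or API is assumed here.
import csv
from collections import defaultdict
from datetime import datetime
from statistics import median

def median_submission_hour(path):
    """Report each student's median hour-of-day of submission.

    This is exactly the kind of derived "signal" that invites bias:
    it says nothing about the quality of the work, only about when
    someone's life allows them to do it.
    """
    hours = defaultdict(list)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            ts = datetime.fromisoformat(row["submitted_at"])
            hours[row["student"]].append(ts.hour)
    return {student: median(hs) for student, hs in hours.items()}

# A grader with a soft spot for morning people now has a tidy number
# to anchor on, and nothing in the data itself checks how it gets used.
```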

But that becomes much less trivial when what is given to faculty moves out of the realm of data and into the realm of judgement. For example, some tools profess to tell us what kind of student someone is based on this kind of data and some kind of secret-sauce algorithm. So we get told that the student who submits all their assignments one minute before the deadline is in trouble or in need of some kind of intervention, when maybe they’re just busy.
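No vendor publishes its secret sauce, so the sketch below is not any real product’s algorithm. It is a deliberately crude, hypothetical rule of the kind that could sit behind a label like “at risk”, written only to show how thin that layer of judgement can be.

```python
# Deliberately crude and entirely hypothetical: this is no vendor's
# actual model, just an illustration of how thin the judgement layer
# behind a label can be.
from datetime import timedelta

def label_student(submission_gaps, logins_per_week):
    """Assign a label from timing data alone.

    submission_gaps: how long before each deadline the work arrived
    (a list of timedeltas). Nothing here measures how well anyone
    actually did; the label is built entirely from when, not how,
    they worked.
    """
    close_calls = sum(1 for gap in submission_gaps if gap < timedelta(hours=1))
    if close_calls >= 3 or logins_per_week < 2:
        return "at risk"          # or maybe they are just busy
    if logins_per_week > 20:
        return "highly engaged"   # or maybe they are anxious
    return "typical"

# A student who always submits 30 minutes before the deadline gets
# labelled "at risk", whatever the quality of their work.
print(label_student([timedelta(minutes=30)] * 5, logins_per_week=6))
```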

Of course, the ideal is for us to know our students, so we know when someone’s behaviour changes significantly or what pressures are weighing on them. But most of us don’t work at institutions where that is possible, because we design universities not for learning but for scale: 300-student lectures, and precarious faculty teaching eight sections across three universities to make ends meet. These are not the conditions where it is possible to know our learners. And so, instead of fixing the real problems, our senior leaders are drawn to shiny technological solutions.

Let’s outsource judgement.

I don’t love any kind of learning analytics, but I do see a world of difference between giving faculty the means to micromanage or microanalyze student behaviour — an ill-advised zero-sum game — and giving faculty a label on a student, whether that label is “at risk” or “strong collaborator” or “solitary worker.” I worry that once that label is heard, it’s hard to un-hear.

And then there’s the question of where those labels go. It would be one thing if they were course-specific, but what if that label gets to the next class you are taking before you do? Listen, in elementary school, a teacher read the roll and looked directly at me and said, “Oh no. Not another little Gray.” My big brother’s performance as a student prejudiced that teacher about how I would be in her class. (Sorry to throw you under the bus, Simon.) As it was, she was wrong about who I would be as a student in a different course, but we never got along after that. Indeed, her preconception reframed who I was and how I behaved. How do algorithmically defined labels on students circumscribe them any less than that?

And of course, as more enterprise-level tools masquerade as teaching and learning tools, we should ask if this data captured by Microsoft and LinkedIn and these other corporate-first products we’ve invited into our classrooms is going to follow our students into their working lives. And whether they have the right to make mistakes, to start again, to fail — all things we’re supposed to make room for as educators. And when all else fails, do these tools give students the right to be forgotten?

I always caution faculty that we should be mindful of what data we ask for, because once we have it, we become responsible for it and for its ethical use. When in doubt, don’t look at it. And I think that’s doubly true of the algorithmic judgements that are increasingly sold to us as some kind of neutral truth. The algorithm ain’t neutral, and neither is the judgement.

Just don’t look. It’s not radical, it’s not transformative, and it’s not the whole answer. But it is a start. When in doubt about what you will do with the information and how you will make ethical use of it, just don’t look.

