The GIF above is a pretty accurate metaphor for how I’ve felt for the last two weeks. One week of no study was all it took for me to fall behind. And so, week 9 proved an opportune time to participate in the weekly ‘LARCing about with your data’ activity.

Tasked with generating at least three reports (and given the freedom to choose which ‘themes’ we wished to analyse), I did what I suspect most did: ran a report for every week of the course so far, selecting all themes. If Moodle has collected this much data on my interaction with the VLE, why would I limit myself to viewing only some of it? Was this a belated attempt to leverage some control over the process?

So, what did the LARC data tell me and how was it presented?

Your attendance has in general been poor

Oh. What about the following week?

Your attendance has in general been poor and this week you logged on less often than usual

Oh.

Hmm. OK, there were some other words but these were the first and this is what I remember.

Interestingly, it is only possible to run weekly reports in LARC. As such, in order to gain a sense of how I was performing (according to the metrics) over the duration of the course, I had to input the quantitative data into a spreadsheet myself (a step that could just as easily be scripted; see the sketch after the list below). What did this tell me?

  • My ‘attendance’ never rose above 32% of the class average.
  • My ‘interaction’ performed much better, nearly always recording above 100% of the class average.
  • My ‘social engagement’ also appeared to be pretty poor (only twice recording above the class average).
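For what it’s worth, that spreadsheet step is trivial to script. Here is a minimal Python sketch of the same aggregation, using invented weekly figures (LARC offers no export like this, so the numbers and structure are purely illustrative):

```python
# Invented weekly LARC figures, each expressed as a percentage of the
# class average for that theme. Purely illustrative: LARC does not
# expose an export like this, and these are not my real scores.
weekly_reports = {
    # week: (attendance, interaction, social_engagement)
    1: (28, 110, 60),
    2: (25, 95, 80),
    3: (32, 120, 101),
}

themes = ["attendance", "interaction", "social engagement"]
for i, theme in enumerate(themes):
    scores = [report[i] for report in weekly_reports.values()]
    above = sum(1 for s in scores if s > 100)
    print(f"{theme}: peak {max(scores)}% of class average, "
          f"above average in {above} of {len(scores)} weeks")
```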

So how did this data, presented in this way, make me feel?

The first thing to mention is that the assumptions made by the data and presented back to me don’t quite feel right. Not because they show me as performing poorly (I expected as much) but because they don’t resemble my recollection of my involvement with the course. For example, I haven’t posted in the forums in weeks (as evidenced by the cumulative tally of positive / negative / neutral posts), so why does my ‘social engagement’ score not reflect this? How is LARC calculating social engagement?

I am assuming that ‘interaction’ is calculated by clicks on core and recommended reading. This seems consistent with what I remember.

Is ‘attendance’ calculated simply by the number of times I have logged into Moodle? Surely it’s more complex than that? Does it include the number of clicks performed once logged in? Does it include interaction with the forums? Is there an overlap between the metrics used for measuring attendance and those used for measuring ‘social engagement’?
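Because the algorithm is hidden, any answer to these questions is guesswork. Purely as an assumption-laden sketch (the inputs, weights, and normalisation are all invented, not anything LARC documents), ‘attendance’ might plausibly be a weighted blend of logins and clicks, reported against the class average:

```python
def attendance_score(logins, clicks,
                     class_avg_logins, class_avg_clicks,
                     login_weight=0.7, click_weight=0.3):
    """Speculative reconstruction of a LARC-style 'attendance' metric.

    Assumes attendance blends login frequency with click activity and
    is reported as a percentage of the class average. The weights are
    invented for illustration; the real calculation is not published.
    """
    mine = login_weight * logins + click_weight * clicks
    avg = login_weight * class_avg_logins + click_weight * class_avg_clicks
    return 100 * mine / avg

# e.g. 5 logins and 40 clicks against a class average of 20 logins / 150 clicks
print(f"{attendance_score(5, 40, 20, 150):.0f}% of class average")
```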

You do not really care what others in the class think about you

This is true. But what if I did? No one (and no algorithm) can tell someone how they feel. So why is LARC attempting to do so? What is it hoping to achieve by including this in the ‘feedback’?

A question of trust.

The definition of learning analytics adopted by the First International Conference on Learning Analytics and Knowledge in 2011 is:

“the measurement, collection, analysis and reporting of data about learners and their contexts, for purposes of understanding and optimising learning and the environments in which it occurs”.

I would like to argue that optimising learning requires a relationship of trust. When we take a multiple choice quiz, we trust that there are discrete answers which a machine is capable of identifying as correct or incorrect. However, when we allow our data traces to be interpreted by a machine making predictions about our success, we become understandably cautious. Our relationship with the machine becomes one of distrust. Perhaps we are frightened that the machine can tell us things about ourselves we would rather not know. Or perhaps we intrinsically understand that Big Data is being done to us.

And what does this do to the relationship between student and tutor? I must confess to being a little disappointed at the thought that when my tutor reached out to check in after a period of inactivity, she might not have done so because she happened to notice my absence, but because she was prompted to by some red traffic light on her dashboard.

One question Hamish McLeod posed on the forum this week was ‘how might the average student react to this information?’ Learning analytics ‘solutions’ are often presented as universal but, I would argue, there is no such thing as an average student. Different demographics will often respond differently to this kind of information: for everyone who wants to know how they are doing compared to their peers, there is someone else who doesn’t. And if we simply make this information available and say ‘it’s your choice whether to access it’, aren’t we transferring responsibility for that data from the institution to the individual?

Do hidden algorithms = hidden agendas?

The LARC activity for this week asked us, the IDEL students, to reverse engineer the tool in an attempt to identify the logic behind it. Many of the algorithms used in learning analytics are proprietary and therefore hidden. But are there other (good) reasons why the algorithms should be concealed? One argument is that revealing them to students would offer opportunities, akin to teaching to the test, to ‘game the system’. I’m not sure how convincing this particular argument is. Let’s go on a thought experiment.

I am a student on the IDEL course and I have access to LARC from the start of the course. I generate a report in weeks 1 and 2 and notice that all I have to do to improve my attendance score is to log in to Moodle at least once a day. I do this (because it’s easy and I care about my scores, even if they don’t feed into any summative assessment) but no other change of behaviour is recorded. My reports don’t raise any kind of red flag to my tutor, who continues to give feedback on my blog posts.
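To put rough numbers on that thought experiment: under the same speculative weighting as the sketch above (again, an assumption for illustration, not LARC’s documented formula), daily logins alone nearly double the score while actual engagement stays flat:

```python
# Same speculative weighting as the earlier attendance sketch
# (an invented formula for illustration, not LARC's actual one).
def attendance_score(logins, clicks, avg_logins=20, avg_clicks=150,
                     w_login=0.7, w_click=0.3):
    mine = w_login * logins + w_click * clicks
    avg = w_login * avg_logins + w_click * avg_clicks
    return 100 * mine / avg

# 'Gaming' week: log in once a day (7 logins) but read nothing new.
print(f"before: {attendance_score(2, 10):.0f}% of class average")
print(f"after daily logins: {attendance_score(7, 10):.0f}% of class average")
```

Under these invented weights the score jumps from roughly 7% to 13% of the class average, yet nothing about my learning has changed.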

Has having access to LARC benefitted me (the student) in any way? Would knowing how the attendance score is calculated benefit me in any way? Or would it mean I spend precious time trying to improve my score? Time which would be better spent reading course material and interacting on the forums?

As an exercise in trying to understand which metrics are used in the algorithms employed in LARC, I found this week’s activity incredibly useful. But for every metric I could think of, I could think of countless others which aren’t measured (and aren’t measurable). So, bearing in mind we are working with a limited data set, how responsible is it to provide students with this information? How do we know that we are telling students the ‘right’ thing? What is the right thing? Aren’t there as many ‘right things’ as there are people? Is it ethical to capture this data and present it to administrators and tutors but keep it from students? Conversely, is it ethical to present this data to students when it may provoke the opposite of the desired effect of encouraging greater participation and success? In short, is it going to make them feel like the guy in the GIF at the top of this post?