Learning analytics for improving evidence-based teaching

I recently attended a workshop facilitated by SICSA entitled Learning analytics for improving evidence-based teaching. There were around 20 delegates in all, representing several universities across Scotland. Pavlos Andreadis was also representing the University of Edinburgh. Discussion sessions were led by Kassim Terzic.

The invited guest speaker was Andrew Cormack, Chief Regulatory Adviser at Jisc Technologies, who gave an informative talk on Learning Improvement, Ethics and Law. You can read more about Andrew’s thoughts on GDPR and education on his blog.

For most of the day, we formed smaller groups to discuss various aspects of learning analytics data at our respective institutions. Summaries of our discussions can be found below.


What learning-related data is currently gathered and stored in your institution? 

– Grades (exam, coursework)

– Entry Requirements

– NSS evaluation (Likert scale + free text)

– Engagement

– Attendance (e.g. Glasgow Caledonian requires swipe access to lecture rooms)

– Lecture capture

– Forums

– Reading lists (different access methods)

St Andrews also conducts exit-style interviews with its students to gather longer-term feedback. Pavlos noted that feedback also arrives in the form of complaints, either to the student union or to the director of studies (or both); this kind of feedback is less likely to be logged in a central system.


How is impact of changes on learning currently measured in your institution?

– Scatter plots (modules vs all)

– Average / standard deviation (final grade / exam only / per exam question)

– Student evaluation (module boards)

– Trends per student

– Averages / histograms over years (see the sketch after this list)

– Student background (e.g. how do students with a programming background compare with those without, in terms of final classification?)

– Selection of exam questions
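To make the above a little more concrete, here is a rough sketch of how a few of these summaries (averages and standard deviations per exam question, averages over years) might be computed. It assumes nothing more than a hypothetical table of results with student, year, question and mark columns; none of this reflects an existing institutional system.

    # Illustrative only: assumes a hypothetical table of results with
    # 'student', 'year', 'question' and 'mark' columns.
    import pandas as pd

    results = pd.read_csv("exam_results.csv")  # hypothetical export

    # Average and standard deviation per exam question
    per_question = results.groupby("question")["mark"].agg(["mean", "std"])

    # Averages over years (the basis of year-on-year histograms)
    per_year = results.groupby("year")["mark"].mean()

    print(per_question)
    print(per_year)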


How can statistical and data-focussed approaches help evidence learning outcomes?

Consistent data points across all courses and all institutions:

– Learning outcomes

– Delivery method

– Assessment method

A proper experiment in this area would require an active intervention and a control group. This, however, would be ethically difficult. It was suggested we look instead at ‘passive’ interventions. For example, could we use text analysis of discussion boards to demonstrate understanding of key concepts in the lecture? Or look backwards for evidence of a student request for change and whether the resulting change had the desired effect?
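As a very rough sketch of the first idea, and assuming only that forum posts can be exported as plain text and that the lecturer has listed the key concepts for a lecture (both are assumptions, not existing feeds), the analysis could start as simply as counting concept mentions:

    # Rough sketch: count mentions of lecturer-supplied key concepts in forum posts.
    # Both 'key_concepts' and 'posts' are hypothetical inputs.
    import re
    from collections import Counter

    key_concepts = ["normalisation", "transaction", "index", "join"]
    posts = [
        "I still don't see why normalisation matters for this schema.",
        "Adding an index sped up my join query a lot.",
    ]

    def concept_mentions(posts, concepts):
        counts = Counter()
        for post in posts:
            words = set(re.findall(r"[a-z]+", post.lower()))
            counts.update(c for c in concepts if c in words)
        return counts

    print(concept_mentions(posts, key_concepts))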

We also discussed how machine learning could be useful in predicting students who are ‘at risk’ (a purely illustrative sketch follows below). A word of caution here, though: the University of Edinburgh has a Learning Analytics Policy and has developed seven principles to sit alongside it. One of these principles states:

“Our vision is that learning analytics can benefit all students in reaching their full academic potential. While we recognise that some of the insights from learning analytics may be directed more at some students than others, we do not propose a deficit model targeted only at supporting students at risk of failure.”
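For illustration only, and very much bearing that principle in mind, the kind of predictive model we discussed need be nothing more elaborate than a classifier over engagement features. Everything in the sketch below (the features, the data, the outcome labels) is invented:

    # Purely illustrative: a simple classifier over hypothetical engagement
    # features (attendance rate, forum posts, proportion of coursework on time).
    # Not an endorsement of a deficit model; see the principle quoted above.
    from sklearn.linear_model import LogisticRegression

    X = [
        [0.9, 12, 1.0],
        [0.4, 1, 0.5],
        [0.8, 6, 1.0],
        [0.3, 0, 0.0],
    ]
    y = [0, 1, 0, 1]  # 1 = did not complete the course (invented labels)

    model = LogisticRegression().fit(X, y)
    print(model.predict_proba([[0.5, 2, 0.5]]))  # estimated risk for a new student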


What data-driven approaches could be applied in a study across Scottish Universities?

We then discussed how we could evaluate a passive intervention across multiple HE institutions. One suggestion was to find a course common to all Computer Science programmes (e.g. a second-year database course) which has remained relatively consistent over the past 7-10 years, and to use it in the data study. We could introduce a guest lecture to this course, which could be delivered remotely. We could then measure:

– Engagement

– Satisfaction

– Learning (grades)

and compare these data points to previous cohorts.
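For the ‘learning’ strand at least, the comparison could be evidenced with something as simple as a two-sample test between the current cohort’s marks and a previous cohort’s. The marks below are invented purely for illustration:

    # Minimal sketch: compare final marks for the cohort that received the guest
    # lecture against an earlier cohort using Welch's two-sample t-test.
    # The marks below are invented for illustration.
    from scipy import stats

    previous_cohort = [55, 62, 48, 71, 66, 58, 60]
    current_cohort = [61, 65, 52, 74, 70, 59, 63]

    t_stat, p_value = stats.ttest_ind(current_cohort, previous_cohort, equal_var=False)
    print(f"t = {t_stat:.2f}, p = {p_value:.3f}")

Engagement and satisfaction would need their own measures (e.g. VLE activity logs and Likert-scale responses), which is exactly why consistent data points across courses and institutions matter.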


A personal note

Without a clear understanding of what ‘gap’ this guest lecture was aiming to fill, I remain unsure what such an experiment would tell us. I preferred (my own) suggestion of introducing more (and different) types of assessment into a stable course common across programmes. How we assess, how often we assess, why we assess and how we close the feedback loop are ripe material for a lot of education research just now, and it is my personal opinion that we could contribute to this research in a meaningful way.

Finally, a cautionary tale: on 29 May I attended a talk by Joel Smith of Carnegie Mellon University entitled The Eye of the Needle: New Understandings of the Complex Barriers to Instructional Innovation with Technology. In it, Professor Smith discussed the twin challenges of any technology-enhanced learning (TEL) project:

– effectiveness in improving learning outcomes

– effectiveness in terms of adoption and sustainability (i.e. the implementation of innovations).

Demonstrating the former does not necessarily lead to the latter. We should always keep this in mind when looking to evidence better teaching.

 

 
