Assessment and feedback: are we really getting anywhere?

The answer is: I think so! Many of my reflections resonate with the arguments made in the excellent blog post by Chris Perkins from LLC.

He makes the point that by focusing on feedback timeliness and quality, we risk ending up in a vicious cycle of assessment and feedback ‘fatigue’.

We are seeing increasingly consistent messages coming from multiple sources: the National Student Survey, Course Enhancement Questionnaire (CEQ) data and the LEAF project. These include:

  • Over-assessment and deadline log-jams, which seem common across a number of programmes. These cause stress to students, which can be compounded, for some, by a perceived disparity between the workload required and the credit available.
  • Agency / assessment literacy. There is a sense that sometimes students are not actually sure what is expected of them, or that their expectations differ from those of their markers.

Both these points are linked and explain why, wherever possible, I am encouraging colleagues to consider the fundamental importance of course and programme design in relation to assessment and feedback. A focus on feedback as an entity that we need to ‘do’ better and better will simply lead us to Chris’s identified ‘feedback fatigue’. If, however, we take a step back and work out how we build assessment literacy opportunities into our courses, together with a sound assessment and feedback structure that doesn’t overburden staff or students, then we are onto a winner. As Medland (2016) states in reviewing the current status of the literature in this area:

“It calls for assessment to be a central aspect of curriculum design and development that is integral to teaching and learning, rather than an afterthought”.

For me it is no surprise that in the last round of the NSS, the assessment question with the strongest correlation to overall satisfaction was ‘the criteria used in marking have been clear in advance’. This doesn’t just mean that students know a marking scheme exists in a course book; rather, it means they have had practical opportunities to engage with and use these marking schemes, and thus begin to understand for themselves what quality work is and what markers are looking for. As captured perfectly by O’Donovan, Price and Rust (2004):

“single-mindedly relying on the explicit expression of assessment standards and criteria cannot, on its own, adequately help students to understand assessors’ perceptions and expectations of assessment”.

Some of my own research with veterinary medical students has shown that building in assessment literacy interventions (e.g. tutorials where students mark or consider the quality of previous students’ submissions) can actually reduce students’ need for feedback, as they go into assessments with a more complete understanding of where the goalposts are for them and what is expected (Rhind and Paterson, 2015). Other examples across the University include that described in Philip Cook and Claire Duncanson’s excellent blogpost. This is also a major area of interest for Dr Cathy Bovill from the IAD, who presented on the subject at our October Directors of Teaching network.

To conclude, I am encouraged by the growing recognition of the importance of course and programme design in moving us in the right direction: not only in terms of the NSS, but also (and arguably much more importantly) because it is based on sound pedagogical principles.

Next steps:

Course and programme design options include ELDER and various CPD opportunities through the IAD, such as Practical Strategies workshops and Board of Studies training for convenors and administrators. We are working this year to expand the support for course and programme design, and encourage anyone interested to get in touch with Jenny Scoles, Academic Developer at the IAD (jenny.scoles@ed.ac.uk).

References

Deeley, S. J. and Bovill, C. (2017) ‘Staff-student partnership in assessment: enhancing assessment literacy through democratic practices’, Assessment & Evaluation in Higher Education, 42(3), pp. 463–477.

Medland, E. (2016) ‘Assessment in higher education: drivers, barriers and directions for change in the UK’, Assessment & Evaluation in Higher Education, 41(1), pp. 81–96. DOI: 10.1080/02602938.2014.982072

O’Donovan, B., Price, M. and Rust, C. (2004) ‘Know what I mean? Enhancing student understanding of assessment standards and criteria’, Teaching in Higher Education, 9(3), pp. 325–335.

Rhind, S. M. and Paterson, J. (2015) ‘Assessment literacy: definition, implementation, and implications’, Journal of Veterinary Medical Education, 42(1), pp. 28–35.

Susan Rhind

Professor Susan Rhind is Chair of Veterinary Education and Assistant Principal Assessment and Feedback. She provides strategic leadership for assessment and feedback developments across the University and works with staff and students to develop strategies and policies that support academics in developing and delivering quality feedback.
