“I know a first when I see one!”: Developing transparent marking descriptors with the help of students

Image credit: Sheldon from The Big Bang Theory – meme from the ProgrammerHumor Reddit community

In this post, Phil Marston looks at a way of helping students provide what markers are looking for, and helping markers provide the feedback students find useful. Having previously held roles as an Educational Developer, Phil is currently a Learning Technologist for the School of Social and Political Sciences. This post is part of the Learning and Teaching Enhancement Theme: Assessment and Feedback Principles and Priorities.


The University’s new assessment and feedback principles include ‘Our assessment and feedback practices will be reliable, robust and transparent’ – and, as the opening post in this series highlights, students do not always find it clear how their marks were awarded. Indeed, recent work by Advance HE shows this has a particular impact on student mental health as they navigate uncertainty about what they are supposed to be doing and where they could make improvements.

Assessment and feedback is a topic of major interest in Higher Education across the world. Fortunately, that means we are neither alone nor the first to investigate how to address this, although it also means there are no straightforward answers.

Making sense of assessment and feedback

In my previous role, we ran institution-wide focus groups with students, co-organised with the students’ association. We discovered that many students didn’t really understand why they were being asked to do particular assessment tasks, nor why they were receiving the feedback they received.

In interviews with directors of teaching across the institution, there was a parallel concern: the expectation to provide more and more feedback was becoming increasingly hard to meet.

Fortunately, detailed analysis of the transcripts from the focus groups and departmental interviews, in conjunction with the literature on assessment and feedback, offered us a possible avenue for addressing the dilemma.

Marking schemes, rubrics and their descriptors

Guided by the literature (Nicol & Macfarlane-Dick, 2006; Nicol, 2010; Adcroft, 2011; O’Donovan, Rust & Price, 2016; Meer & Chapman, 2015; Falchikov, 2005), and like the example Cathy Bovill mentions in her recent blog post on co-creating assessment and feedback, we chose to dig into how we used marking schemes and rubrics with our assessments. In particular, we looked at how we could involve students in this process.

In designing marking schemes, be they basic marking descriptors or more comprehensive rubrics, a set of assessment criteria and accompanying descriptors can go a long way to:

  1. Clarify how marks are awarded,
  2. Make visible what is required in assessments,
  3. Provide the basis for constructive feedback,

but also,

  4. Help students develop the ability to self-assess.

What we found helpful

Inspired by the oft-heard phrase “I know a first when I see one”, over a period of weeks we set out to describe the specifics of ‘firstness’ (and ‘secondness’ and ‘thirdness’)[1].

We created a poster-sized blank rubric and, every now and again over coffee, we would have a go at filling in the boxes using post-it notes (see image below). This turned out to be a fun exercise that, on reflection, showed our notions of degree classification closely matched Bloom’s Taxonomy [2]. That in turn made it easier to flesh out our descriptors for grade differentiation. In this way, we arrived at a generic rubric that could form the starting point for creating assessment-specific rubrics.

Post-it Note Rubric Poster
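To make the idea concrete, here is a hypothetical slice of such a generic rubric for a single criterion. Only the grade A and grade D descriptors are drawn from this post (they reappear in the take-aways below); the B and C wording is purely illustrative, and our actual descriptors are linked in footnote [1].

Criterion: experimental method
  A – designs a viable experiment of their own (Bloom: create)
  B – evaluates the strengths and limitations of a given design (Bloom: evaluate)
  C – applies a given methodology to an unfamiliar problem (Bloom: apply)
  D – lists the steps in the methodology (Bloom: remember)

Reading up the column, each grade band sits at a higher level of Bloom’s Taxonomy, which is what turns “I know a first when I see one” into something that can be written down and discussed with students.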

Making sure assessment criteria are not too broad in what they cover certainly helps, and matching them to the stated learning outcomes is helpful too. This sometimes requires restating the learning outcomes (and, of course, updating the course catalogue in the longer term). Overall, this provides coherence around the purpose of assessments. If we are clear about the criteria and the descriptions of what we are looking for at each grade, it is a lot easier to be flexible about the kinds of evidence that could be submitted for assessment, which is helpful when it comes to more innovative assessment design. For example, as long as a criterion is met (e.g. “illustrate xyz”), it doesn’t matter whether the evidence is submitted as text, audio or video (unless, of course, the form of evidence is a criterion in its own right).

The single most helpful thing in this situation is to have a conversation with the students about what they think the descriptors are describing. We held meetings dedicated to rewriting descriptors in a shared language, which uncovered mismatches between what staff thought they were asking for and what students thought was being asked of them. This seems to help students understand not just the role of assessment in a university setting, but also the academic endeavour of working with knowledge that is always provisional: that, in higher education, assessment is often less about scaling the next level and more about learning the necessary techniques and preparing to make sense of what is as yet unknown.

Some take-aways

  • Avoid subjective qualifiers such as “excellent”, “basic” or “generally” to distinguish between grade bands, as these can seem arbitrary. Instead, describe different types of attainment: for example, “provides a viable experimental design” might be a grade A, whereas “lists the steps in the methodology” may only be a grade D (Bloom’s Taxonomy can help here).
  • Discussing descriptors with students not only helps them be clearer about what is required of them in assignments and what is being conveyed in feedback, but also clarifies the role of assessment in their learning at university.
  • This also allows them to better judge the quality of their own work prior to submission (and better predict what mark they might get).
  • At least some students will likely feel less stressed about whether they are submitting the “right thing” in assessments.
  • Not all students need to be directly involved in co-authoring the descriptors for all students to benefit, but they do need to have the opportunity to contribute.
  • Staff will find it easier to provide targeted feedback using the shared language of the marking descriptors.
  • Markers on large courses will find that the shared language of the descriptors makes it easier to calibrate their marking.

If you have any questions or would like to discuss how to co-design useful marking schemes and rubrics, just ask: phil.marston@ed.ac.uk 

References

Adcroft, A. (2011) The mythology of feedback, Higher Education Research and Development, Vol. 30, No. 4, pp. 405-419. https://doi.org/10.1080/07294360.2010.526096

Falchikov, N. (2005) Improving Assessment Through Student Involvement: Practical solutions for aiding learning in higher and further education. Oxon: RoutledgeFalmer

Meer, N. & Chapman, A. (2015) Co-creation of marking criteria: students as partners in the assessment process, Business and Management Education in HE. https://www.tandfonline.com/doi/full/10.11120/bmhe.2014.00008

Nicol, D. (2010) From monologue to dialogue: improving written feedback processes in mass higher education, Assessment & Evaluation in Higher Education, Vol. 35, No. 5, pp. 501-517. https://doi.org/10.1080/02602931003786559

Nicol, D. & Macfarlane-Dick, D. (2006) Formative assessment and self-regulated learning: a model and seven principles of good feedback practice, Studies in Higher Education, Vol. 31, No. 2, pp. 199-218. https://doi.org/10.1080/03075070600572090

O’Donovan, B., Rust, C. & Price, M. (2016) A scholarly approach to solving the feedback dilemma in practice, Assessment & Evaluation in Higher Education, Vol. 41, No. 6, pp. 938-949. https://doi.org/10.1080/02602938.2015.1052774

[1] See Exemplar Descriptors.

[2] See Writing Assessment Criteria Descriptors.


Phil Marston

Phil Marston is a Learning Experience Designer with 25 years of experience as a Learning Technologist and Educational Development Adviser in Higher Education. He has designed, developed and delivered education technology projects, and has taught educational theory at postgraduate level, computing at undergraduate level, mindfulness, and, once upon a time, outdoor education.
