Assessment and feedback: How are rubrics used across the University of Edinburgh?
This blog post builds on the second LOUISA Academic User Group (AUG) and Professional Services User Group (PSUG) meeting which took place in February 2025.
It draws on insights from members and colleagues across the University to showcase how colleagues in various disciplines support assessment and feedback through the integration of rubrics and learning technology.
Name | Role | School/Institute | College |
Amy Westray-Farmer | Learning Technology Adviser | School of Medicine | College of Medicine and Veterinary Medicine |
Chris Smith | Head of English Language Pre-Sessional and Director of Quality and Curriculum | Centre for Open Learning | College of Arts, Humanities, and Social Sciences |
Emma McAllister, Mary Collacott, Joe Noteboom | Learning Technology Team | Edinburgh Futures Institute | College of Arts, Humanities, and Social Sciences |
Ross Galloway | Personal Chair of Physics Education | School of Physics and Astronomy | College of Science and Engineering |
We are grateful to all contributors for sharing their valuable insights and expertise.
How has the use of rubrics helped you to enhance assessment and feedback in your School/College?
In the courses that have engaged with rubrics, they have brought greater consistency and given students clearer insight into their assessments. Rubrics help students understand why they received their grade and identify ways to improve in future.
It has also saved markers time in some regards: we have demonstrated how to copy and edit existing rubrics, which has saved a lot of time, and markers are more keen to adopt rubrics as a result.
Amy Westray-Farmer, School of Medicine
We’ve been using rubrics for a long time in English Language Education (ELE). Academic language assessment has been quite advanced in its use of analytic rubrics and feedback processes. Nevertheless, the new minimum standards have helped advance a redesign of our rubrics, with particular attention to ensuring they are clear and comprehensible for students.
Chris Smith, Centre for Open Learning
In our context, rubrics provide a practical way to establish a shared understanding of assessment criteria across diverse subject areas. This is particularly important in an interdisciplinary setting, where course organisers may bring different approaches to assessment and feedback, as well as varied interpretations of terms such as assessment criteria, grade descriptors, and marking rubrics. By offering clear, consistent guidance, rubrics also help ensure that marking and feedback remain focused on the assessed criteria, supporting fair and valid assessment for students from different disciplinary backgrounds, regardless of whether their strengths lie in writing, coding, or visual communication.
Learning Technology Team, Edinburgh Futures Institute
Rubrics help address two key issues – consistency and transparency. They are used extensively for assessing students’ final year projects, which are marked by 80 academic staff. While other assessments in physics are more objective, this project is subjective and the focus is on assessing quality, making consistency challenging.
The introduction of the rubrics has helped there. Our rubrics are quite detailed, providing specific criteria, such as “the structure of the report flowed logically,” rather than vague descriptors like “Good” or “Very Good.” The second benefit is transparency. Students are interested to know how they will be assessed for different aspects of the project, and sharing the rubric in advance allows them to see this clearly.
Ross Galloway, School of Physics and Astronomy
Do you think standardising rubrics would improve assessment and feedback in your School/College?
I think rubrics should be standardised to an extent, particularly in terms of the quality of feedback: one student should not have a clear advantage over another due to the quality and amount of feedback they have received. So having some sort of wider policy or set of expectations for the quality of rubrics would be beneficial.
To create some consistency within the College, I created a rubrics package that included information as well as some basic templates. The idea was that staff could use these templates and adapt them to their course assessments, saving them time in the long run. Originally, we found that staff were keen to use rubrics but understandably reluctant to spend the limited time they had making multiple rubrics from scratch; this bank of examples was intended to mitigate that, and it also ensured that at least the initial draft was a good-quality starting point.
However, they shouldn’t be so standardised that they no longer allow for customisation or more specific feedback, as the rubrics would then become meaningless.
Amy Westray-Farmer, School of Medicine
I do, and we are slowly working through a project in which, having revised our pre-sessional rubrics, we are now revising those for our other credit-bearing academic language courses so that they are aligned and share a common language.
Chris Smith, Centre for Open Learning
Given Edinburgh Futures Institute (EFI)’s 10-credit course structure, the high volume of assessments makes consistency essential. Shared criteria help reduce ambiguity, support transparent marking, and make it easier to explain grades.
However, standardisation comes with challenges. The current Learn rubric tool only supports numerical scoring, which doesn’t suit all marking styles as some course organisers prefer to give descriptive feedback without scores. While a qualitative rubric option is expected in the June 2025 Learn release, we’ve developed interim workarounds.
At present, we offer offline templates for qualitative rubrics, which are uploaded as Word documents in the feedback field. While functional, this adds steps and can reduce feedback visibility for students.
To support varied marking styles, we recommend two options: using the built-in Learn rubric tool or uploading a completed qualitative rubric. We also plan to clarify where assessment criteria should be shared in Learn and to explore the new qualitative features once released.
Learning Technology Team, Edinburgh Futures Institute
Standardising the systems and the philosophy behind using rubrics is beneficial. We currently use three systems: our homegrown online marking system, Gradescope, and Learn rubrics. Adding more could complicate things for both students and markers. However, I’m less convinced about standardising the rubric criteria themselves. The strength of rubrics lies in their ability to be specific and targeted to the learning outcomes of a course. If we overly standardise, we risk losing this advantage and ending up with generic criteria like “C satisfactory” or “E marginal fail.”
The criteria and descriptors should be customised to reflect the unique aspects of each subject. For instance, rubrics used for physics assessments wouldn’t be appropriate for biological sciences or engineering, because these disciplines have different values and expectations. While consistency is important within a School, it’s less clear that uniformity across different disciplines and assignments would be beneficial.
Ross Galloway, School of Physics and Astronomy
Did anything in the User Group meeting challenge or reinforce your current assessment and feedback practices?
It highlighted how other areas of the University have clear policies and procedures, which we are lacking in our College, though this will hopefully change following the College modernisation later this year. It also highlighted how differently our College functions with regard to its links with the NHS and how clinicians deliver assessment and feedback.
Amy Westray-Farmer, School of Medicine
What struck me was that nothing in the meeting seemed surprising or unreasonable. Most practices discussed were consistent with what we do, or they were approaches we might take if we had similar assessments. This consistency likely reflects that rubrics are now well-established pedagogically. I’ve seen discussions about them since around 2010, and they’ve become fairly mainstream.
There seems to be a convergent evolution in how rubrics are used. Colleagues in literature and cultures use them similarly to how we do in physics and astronomy, despite differing tasks and criteria. This shared understanding across disciplines suggests rubrics are an effective pedagogical tool.
Ross Galloway, School of Physics and Astronomy