When Nicholas Stern’s review of the Research Excellence Framework dropped last July, many wondered if it might herald the beginning of the end for the often burdensome exercise and pave the way for a sleeker, more intuitive REF in 2021. In today’s blog, Pauline Jones asks whether the new assessment criteria will truly reflect the research conducted on university campuses across the UK, and whether the proposals will really reduce the burden of work.
The early September publication of the ‘Initial Decisions on REF2021’ fired the starting gun on preparation for the next Research Excellence Framework across the UK. It has given us some clues to the parameters of the exercise, while leaving plenty of specifics for us to look forward to over the next 12 months.
The REF, the assessment of research conducted in UK universities which happens every six years, has been constantly reviewed and revised since it began as the Research Selectivity Exercise in 1986. It was designed to drive selective funding allocations for universities; today, the Research Excellence Grant allocation based on REF results is worth over £75 million a year to Edinburgh.
There have been some constant features. Expert peer review of publications has always been at its heart, and despite repeated efforts to find a cheaper, quicker, metrics-driven way of assessing output quality, no such system has won the confidence of UK universities. The 2021 exercise will be no different. Output review has always been set in the context of the research environment: the infrastructure, strategies and research strengths of the departments in which research is conducted. The importance of this section has grown, with REF2014 requiring detailed narrative descriptions of the research environment and, famously, introducing the impact of research beyond academia as an explicit separate section.
The September announcement confirmed that the relative influence of the different parts of the exercise – outputs, impact and environment – is changing for 2021. Outputs remain the most highly weighted, but their share has gone from 65% of the REF score to 60%, with the remaining 5% added to impact. Impact will now influence 25% of the score – rising higher once the description of impact support strategies and structures in the environment section (weighted at 15%) is taken into account. Now more than ever, universities will be judged on their efforts and successes in ensuring that the research they carry out has a benefit beyond the academic sphere.
A fundamental change has been confirmed: we must now submit every eligible academic staff member to the REF. In previous exercises, we have chosen which staff are included. The merits of this have long been debated. On the pro side, the exercise was never intended to be comprehensive, but rather a representative showcase of the best research taking place – enough information to inform the selective allocation of funding without too much burden. Conversely, it has been argued that selection is unfair, and potentially career-damaging, to staff who are not selected, and that it does not paint a true picture of the research carried out in departments. With the latter argument having won for now, this critical change to the exercise will make a big difference to submissions.
Alongside this, gone is the general expectation that all academics will submit four outputs. Instead, there will be a minimum number required, probably one; a maximum number, maybe five or six; and an average of something less than four across the submission.
Finally, the announcement signalled a move to give more credit for outputs to the university at which an academic was based when the output was created, not just to the one at which they work at the time of submission. In 2008 and 2014, the assessment regarded research outputs as ‘portable’: moving with an academic when they join a new university. Some regard portability as a major driver of a ‘transfer market’, and there have been calls for it to stop. For REF2021, it seems this will be partially implemented – a proportion may still be portable, or credit may be shared – but the direction of travel is towards no portability for future REFs.
Taken together, the new rules reflect one of REF’s underlying principles: it assesses research conducted at universities – it doesn’t assess individuals. However, in practice, the proposals won’t reduce the burden of the exercise – one of the stated aims of the recent Stern review of REF. Instead, they will add to the choices that academic experts will need to make in deciding which outputs go in.
The changes make it more vital than ever that academic experts are an active part of the exercise. We need academic judgement to shape the submission, and academics remain crucial in carrying out the assessment. The call for academics to chair assessment panels is out, with a closing date of 11th October. Within Edinburgh, we are starting to consider how the exercise will be shaped. Academic judgement is going to be more important than ever in deciding what our submission looks like, and I am looking forward to working with all of you on the challenge.
Pauline Jones is the Head of Strategic Performance and Research Policy at the University of Edinburgh.