As a postdoc at QMUL in the run-up to the last Research Excellence Framework (REF) exercise, I picked up on a certain unease amongst academics, one that I didn’t notice before its predecessor, the Research Assessment Exercise (RAE) in 2008. The REF is an “evolved” version of the RAE, whose major adaptation was an evaluation of the economic and societal impacts of academic research. This threw many researchers for a loop: before this, associating their efforts with any sort of ‘impact’ outside the academy was a very different exercise, one usually tied to funding applications. In addition to the impact assessment, the REF also looks at an institution’s research outputs (e.g. publications) and the research environment that an institution creates to support and promote research. The resultant rankings from the REF inform the allocation of block grants to UK Higher Education Institutions (HEIs) from the four HE Funding Councils.
The impact section of the REF, worth 20% of the final assessment, asked researchers to write up a case study that identified the ‘real world problem’ tied to their research and how the results worked towards a solution, identified the people who benefitted most from that research, and attempted to quantify the “reach” and significance of the research beyond the discipline itself. Essentially, the exercise is retrospective and demands a quantitative or qualitative assessment of socio-economic impacts tied to specific research outputs. Many pieces have been written that take a critical view of this approach to assessing research impact, so I won’t go into much detail here.
Much of the unease I originally spoke of in the lead-up to the REF came about because this sort of exercise had never really been demanded of academic researchers, en masse. Collecting data on how your research might be impactful is a different stream of research in and of itself; one as onerous as writing up a PhD thesis, according to some. On the plus side, mechanisms for collecting such information now form part of the research planning process, suggesting that any subsequent REFs should be less burdensome in that respect. Moreover, the sector now has a growing and publicly searchable database of case studies that document the wide and varied benefits of the UK’s investment in HE research.
Despite the criticisms, the REF (however it may evolve from here on in) is here to stay. To learn from the initial run and inform future iterations of the exercise, Lord Stern was commissioned by the Minister for Universities and Science to review the UK’s approach to research excellence. Released in July 2016, the Stern Report, Building on Success and Learning from Experience, drew on 40 interviews and over 300 submissions of feedback from participating researchers and HEIs to identify several major areas for improvement in REF2014:
- addressing the cost, demotivation, and stress associated with the selectivity of staff submitted to the REF
- strengthening the focus on the contributions of Units of Assessment (subject areas) and universities as a whole, thus fostering greater cohesiveness and collaboration and allowing greater emphasis on a body of work from a unit or institution rather than narrowly on individuals
- widening and deepening the notion of impact to include influence on public engagement, culture and teaching as well as policy and applications more generally
- reducing the overall cost of the work involved in assessment, costs that fall in large measure on universities and research institutions
- helping to support excellence wherever it is found (i.e. not just within larger research-focused HEIs)
- helping to tackle the underrepresentation of interdisciplinary research in the REF
- providing for a wider and more productive use of the data and insights from the assessment exercise for both the institutions and the UK as a whole
The report details twelve recommendations that should, by and large, help to address many of the issues identified in the previous approach.
One thing that did seem to be largely missing from this report was any recommendation related to the “negative influences” of such an exercise, sentiments that received strong and very strong support in the feedback responses that informed it (see below). Perhaps this has to do with what I mentioned earlier about the REF (or some analogous exercise) being here to stay, but the Stern Report spoke of the exercise and the overall impact agenda only as necessities and positive influences in the sector, and didn’t really capture the “voice of dissent” that is clear throughout much of the HE press and blogosphere. (One such example can be found here.) Whatever your stance on impact, some have spoken favourably of the REF’s impact case studies and how they may have influenced the decision, announced in the 2015 spending review, to begin correcting the UK’s yearly research expenditure for inflation.
Some very positive recommendations in the Stern Report involve a significant shift in how the REF assesses impact. The first suggests that each HEI should provide a unified statement on research environment and impact that would detail an institution’s ethos with respect to “high quality research and research-related activities, […] support for interdisciplinary and cross-institutional initiatives and impact.” REF2014 required a research environment template for each subject area, which led to a fair amount of repetition, as these approaches were often replicated across many different units of a university. Using the unified approach, each HEI’s submission stands to be more streamlined, which is useful both to those who prepare the submissions and to those who review them.
The second recommendation I want to highlight involves broadening how the term impact should be interpreted: that it “need not solely focus on socio-economic impacts, but should also include impact on government policy, on public engagement and understanding, on cultural life, on academic impacts outside the field and on impacts on teaching”. The significance here is a formal recognition that different disciplines lend themselves to different forms of research impact. Moreover, in promoting interdisciplinary work, it is essential that the panellists who review REF submissions also view the forms of impact from this more comprehensive perspective.
Thirdly, and this goes hand-in-hand with the previous recommendation, impact case studies “could be linked to a research activity and a body of work as well as [a] broad range of research outputs.” Whereas the last REF required that case studies be linked to specific outputs, this suggestion broadens the scope to include situations where an investigator’s or group’s expertise has been influential but isn’t specifically traceable, in a linear sort of way, to any one particular output.
Each of the twelve recommendations is itself a composite of several sub-recommendations, so even the two that I highlight here have other facets worthy of further exploration – but that’s why the CAPD offers a workshop on research impact (keep an eye out for 2016/17 offerings!).
Other recommendations deal with more logistical flaws in the 2014 approach, for example addressing loopholes that allowed HEIs to “game” or manipulate their REF submissions to paint a more favourable picture. These and others, including the portability of certain research outputs from one institution to another, will be discussed in my next post.
- The Stern Report: Building on Success and Learning from Experience
- Research Excellence Framework (REF) review: synthesis of responses submitted to the REF review call for evidence and follow-up interviews