Beware the ‘Research Excellence Framework’ ranking in the humanities
UK universities often tout their performance in the Research Excellence Framework (REF), the most recent of which was conducted in 2014. It is seen as another league table in which universities are ranked according to the quality of their research, as opposed to other indicators like teaching, employer reputation or faculty/student ratio. Students deciding which institution to apply to sometimes rely on this metric as a measure of research quality.

The process of expert review under the REF assesses three distinct elements: quality of outputs (eg publications, performances and exhibitions), impact beyond academia, and the environment that supports research. Submissions are then awarded either a one-star, two-star, three-star or four-star ranking.

Four UK higher education funding bodies – Research England, the Scottish Funding Council (SFC), the Higher Education Funding Council for Wales (HEFCW), and the Department for the Economy, Northern Ireland (DfE) – undertake this assessment to provide accountability for public investment in research, establish reputational yardsticks, and inform the selective allocation of funding for research.

Like all rankings, the REF has its share of shortcomings. The latest criticism comes from an op-ed in The Guardian which details why it’s a damaging exercise for the humanities.

The scale of the task, the crude scoring system and the measure of impact outside universities are listed as the three main reasons why the REF cannot produce an honest or meaningful assessment of research in the humanities.

“There are too few assessors to provide competent, specialised judgement on the range of work submitted. The workload imposed on them requires superhuman capacities: along with their normal teaching and research, panel members must read the equivalent of a full-length book every day for nine months,” wrote John Marenbon, a senior research fellow of Trinity College, Cambridge. Time is better spent focusing on teaching and research, he argues.


The scoring system fails to distinguish the different values produced by different outputs. Monographs – detailed written studies of a single specialised subject or an aspect of it – are "20 times the length of an article and require 50 times as much work". A monograph is a long-term project, taking up four or five years of research and writing, and it is where most of the ground-breaking research in the humanities is produced.

But because the REF does not place the appropriate value on monographs, researchers shift their focus to articles to gain recognition – a regrettable trend, given that article-writing is described as a relatively mediocre, even superficial, activity.

Within the humanities, "impact" is another of the REF's key sore points. Previous criticisms have warned that if researchers shaped their work to achieve impact outside the academy, this could encroach on academic freedom. Furthermore, measuring "impact" fairly and impartially is an impossible task.

In his op-ed, Marenbon said this measure of impact, which will account for a quarter of the score in the next REF in 2021, is failing the humanities.

“The criteria are not designed for humanities subjects, but rather for scientific discoveries or technological advances. They exclude academic books that reach a wide audience of general readers – the real way scholars in history, literature, art and philosophy make an impact.

“The REF, then, causes real harm, distorting the working patterns of academics in the humanities, discouraging them from popularising their ideas, and deceiving them with assessments made in such a way that they cannot be reliable.”
