According to a recent paper from researchers at Queen Mary’s, the UK government’s research evaluation system appears to incentivize academics to produce a greater quantity of work, but at the expense of quality.
The team analyzed more than 3.5 million publications by UK researchers, both before and after deadlines for the Research Excellence Framework (REF), the government’s evaluation system for universities. The data suggest that REF deadlines push academics to prioritize quantity over quality, with potentially damaging consequences for both institutions and researchers.
Research activity
Many academics face pressure from their institutions and grant bodies to demonstrate research activity. Regular assessments such as the REF are used to allocate funding and evaluate performance, so the results can carry significant financial and career implications.
The study shows that ahead of each REF deadline in the analyzed period (1996, 2001, 2008, and 2014), UK researchers produced significantly more papers. However, these tended to appear in lower-impact journals, attract fewer citations, and face a higher likelihood of retraction.
Interestingly, the study found that after the REF assessments, UK researchers produced fewer, but higher-quality, papers. These post-REF publications also showed greater variation in quality, suggesting that researchers were more willing to experiment in novel areas when not under the pressure of REF deadlines. Together, these findings highlight potential limitations of the UK government’s research evaluation system and its impact on the quality of academic research.
“These patterns are especially worrisome because they do not seem to simply reflect the timing of natural research cycles, with their ups and downs,” the researchers explain. “The same researchers produce a steadier flow of papers in the years they spend outside the UK.”
Shifting incentives
While the researchers argue that the incentives created by the REF are largely unintentional, they urge that the balance be redressed and that better support be provided for long-term and exploratory research.
“If you work in a fast-paced field such as computer science, an evaluation every five years may not matter so much in terms of which projects you pursue, or which journals you publish in—but if your projects can take more than five years, the REF can be really disruptive,” they explain.
“If you give researchers too much time, they operate under less pressure and may slack off, or are reluctant to cut ambitious projects that have not taken off despite investment. If you give them too little time, they may stick to low-hanging fruit: more established research streams, easier journals. It’s unfortunate that designers of cross-field evaluations often forget that research areas differ in where the sweet spot is.”