Can metrics replace peer review in indicating the quality & impact of research?


The metric tide is certainly rising. 

James Wilsdon, professor of science and democracy at the University of Sussex

Mapping the quality and impact of scientific research has long been a concern for all major stakeholders in research and development. The two most prominent methods of evaluating research are peer review and quantitative metrics. The peer review process, wherein experienced researchers critique a manuscript, is held in high esteem and is considered to add great value to the quality of research. Quantitative analyses of published research, such as the impact factor, provide a metrics-based assessment. In short, peer review offers a qualitative assessment, while metrics provide quantitative indicators of impact. Over the years, with growing pressure to evaluate public spending on higher education and research, evaluation metrics have been accorded more importance than ever before. Metrics play a particularly vital role in research assessment processes such as the Research Excellence Framework (REF), where quantitative indicators are used in conjunction with peer review to assess the impact of research.

To review the role metrics play in determining the quality and impact of research, the Higher Education Funding Council for England (HEFCE), which drives the REF, set up a committee in 2014 with the intent of improving the ways in which funders and policy makers use quantitative indicators in the next research assessment exercise. The Independent Review of the Role of Metrics in Research Assessment and Management, chaired by Professor James Wilsdon of the Science Policy Research Unit (SPRU), University of Sussex, and supported by experts in scientometrics, research funding, research policy, publishing, university management, and administration, released its report, The Metric Tide, in July 2015. The report explores in detail the potential uses and limitations of research metrics and indicators, and sheds light on some important questions, such as: ‘Can metrics replace peer review?’ and ‘Can a balance be struck between peer review and metric-based alternatives?’

Scientometric data helps indicate the progression of knowledge. However, metrics such as the journal impact factor and the h-index have garnered criticism because they fail to take into account the finer qualitative threads of research that cannot be quantified. Most of the newer tools of research assessment – including alternative metrics and online platforms – are in early stages of development. Hence, the report suggests that they should not be relied upon too heavily without a deeper understanding of how the data is collected and analyzed. According to Wilsdon et al., “Evidence of a robust relationship between newer metrics and research quality remains very limited, and more experimentation is needed.” They further add, “The majority of those who submitted evidence, or engaged with the review in other ways, are sceptical about any moves to increase the role of metrics in research management.”

To shed light on whether metrics can replace peer review, the report looked at the applicability of metrics within different research cultures, compared the peer review system with metric-based alternatives, and considered what balance might be struck between the two. The peer review system is not free of pitfalls, but “it is the least worst form of academic governance we have, and should remain the primary basis for assessing research papers, proposals and individuals, and for national assessment exercises like the REF,” states the report. A majority of the respondents who helped compile the report (26 responses, 13 of them from learned societies) supported peer review, holding that “metrics must not be seen as a substitute for peer review” and that it should “continue to be the ‘gold standard’ for research assessment.” It was also suggested that metrics could support peer reviewers in “making nuanced judgements about research excellence.” Thus, peer review continues to be favored by experts across disciplines.

The report also cites HEFCE’s 2008 pilot project exploring the potential of bibliometrics, which found that bibliometrics were not sufficiently robust at that point in time to replace peer review in the REF. Accordingly, the report recommends that peer review should not be replaced by metrics in the next REF assessment.

One of the main ideas The Metric Tide report propagates is that “Metrics should support, not supplant, expert judgement.” It introduces the idea of ‘informed peer review’ wherein specific bibliometric data would supplement peer review, depending on the goal and context of assessment, for an all-round evaluation.  

The dominant idea of the report is that the future of research evaluation lies in a more mature research system founded on a combination of advanced metrics and informed peer review. Wilsdon et al. sum this up: “Perhaps a new body of ‘translational bibliometrics’ literature to flesh out the concept of informed peer review will emerge from these initiatives.”

For an overview of The Metric Tide report, see Responsible metrics can change the future of research evaluation: The Metric Tide report.


Published on: Jul 23, 2015

Sneha’s interest in the communication of research led her to her current role of developing and designing content for researchers and authors.

