Can metrics replace peer review in indicating the quality & impact of research?
The metric tide is certainly rising.
James Wilsdon, Professor of science and democracy at the University of Sussex
Mapping the quality and impact of scientific research is a concern shared by all major stakeholders in research and development. The two most prominent methods of evaluating research are peer review and quantitative metrics. Peer review, wherein experienced researchers critique a manuscript, is held in high esteem and is considered to add great value to the quality of research. Quantitative analyses of published research, such as the impact factor, provide a metrics-based assessment. In short, peer review provides a qualitative assessment, while metrics provide quantitative indicators of impact. Over the years, with growing pressure to account for public spending on higher education and research, more importance is being attached to evaluation metrics than ever before. Metrics play a particularly vital role in research assessment processes such as the Research Excellence Framework (REF), where quantitative indicators are used in conjunction with peer review to assess the impact of research.
To review the role metrics play in determining the quality and impact of research, the Higher Education Funding Council for England (HEFCE), which runs the REF, set up a committee in 2014 with the intent of improving the ways in which funders and policy makers use quantitative indicators in the next research assessment exercise. The Independent Review of the Role of Metrics in Research Assessment and Management, chaired by Professor James Wilsdon of the Science Policy Research Unit (SPRU), University of Sussex, and supported by experts in scientometrics, research funding, research policy, publishing, university management and administration, released a report in July 2015 titled The Metric Tide. The report explores in detail the potential uses and limitations of research metrics and indicators, and sheds light on important questions such as: ‘Can metrics replace peer review?’ and ‘Can a balance be struck between peer review and metric-based alternatives?’
The report draws on evidence from HEFCE’s 2008 pilot project on bibliometrics, which found that bibliometrics were not sufficiently robust at that point in time to replace peer review in the REF. Hence, the report recommends that metrics should not replace peer review in the next REF assessment.
One of the main ideas The Metric Tide advances is that “Metrics should support, not supplant, expert judgement.” It introduces the concept of ‘informed peer review’, wherein specific bibliometric data would supplement peer review, depending on the goal and context of the assessment, to provide a well-rounded evaluation.
The dominant idea of the report is that the future of research evaluation will require a more mature research system, founded on a combination of advanced metrics and informed peer review. Wilsdon et al. sum this up: “Perhaps a new body of ‘translational bibliometrics’ literature to flesh out the concept of informed peer review will emerge from these initiatives.”
For an overview of The Metric Tide report, see Responsible metrics can change the future of research evaluation: The Metric Tide report.