Responsible metrics can change the future of research evaluation: The Metric Tide report



Metrics hold real power: they are constitutive of values, identities and livelihoods.

— James Wilsdon, in his foreword to The Metric Tide

Metrics and rankings play an important role in research communication. At the broadest level, they are big data about the research we produce, and they reflect our need, and our ability, to use this data to understand trends and patterns and make informed decisions. More specifically, metrics help gauge the potential impact of research and, in doing so, indicate the degree to which researchers have been successful. They also provide a framework for analyzing published research and for identifying journals that disseminate path-breaking work. Over the years, however, the excessive and somewhat unfiltered application of research evaluation metrics, citation metrics in particular, has led to several problems. For example, the journal impact factor is considered by many to be the single most reliable indicator of journal prestige; it plays a major role in grant applications, and many institutions use it to evaluate the publication success of their researchers. This has put undue pressure on researchers and journals to stay ahead in the game, even if that means adopting unethical practices such as fudging data. A recent and extreme example is the death of Professor Stefan Grimm of Imperial College London, following his inability to cope with the sheer pressure of delivering against institutional metrics. By now, most of us recognize that there is considerable scope for improvement in the metrics used to evaluate research and academic performance.
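To make this concrete, the journal impact factor is typically computed over a two-year citation window. The exact window and the definition of a “citable item” depend on the database provider, so the formulation below is an illustrative sketch rather than an official calculation:

\[
\mathrm{JIF}_{2015} = \frac{\text{citations received in 2015 by items the journal published in 2013 and 2014}}{\text{number of citable items the journal published in 2013 and 2014}}
\]

For example, a journal that published 100 citable items over 2013–2014 and received 250 citations to them in 2015 would have a 2015 impact factor of 2.5.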


It is this thought that led the Higher Education Funding Council for England (HEFCE) to undertake an Independent Review of the Role of Metrics in Research Assessment and Management. The review was chaired by Professor James Wilsdon from the Science Policy Research Unit (SPRU), University of Sussex, and was supported by experts in scientometrics, research funding, research policy, publishing, and university management and administration. The group set out to understand the current state of impact metrics and indicators in the UK, including their potential uses and limitations, and to examine how metrics were being used within institutions and across disciplines. Here are some of the main findings, published in a report titled The Metric Tide.

  • Metrics may be taking over.
    Over the years, several factors have led to a misplaced emphasis on numbers and rankings in research: the focus on accountability for public spending on higher education and research; policymakers’ demands for measurable intelligence on research quality and impact; pressure on academic institutions to manage their research strategies; pressure on institutions, journals, and researchers to stay ahead in the rankings game; and the easy availability of evaluation metrics and of tools for analyzing them.
     
  • There is no universal approach to metrics.
    There are different approaches to how metrics are defined, described, calculated, and used by the research community. On the one hand, quantitative indicators and alternative metrics could help increase transparency and accountability in research. On the other hand, excessive reliance on “narrow, poorly-designed indicators – such as journal impact factors (JIFs) – can have negative consequences” (p. 9 of the report). There is considerable skepticism about applying quantitative metrics bluntly, and there is a need to design and apply them in a more structured manner.
     
  • The research community is very loyal to the peer review process.
    The peer review system is a long-established form of research evaluation, and despite its flaws, it remains the preferred form of academic governance. Carefully designed quantitative indicators are most useful when applied in conjunction with peer review, supplementing it to provide a more rounded evaluation.
     
  • There is not enough transparency in the creation and application of metrics.
    The misplaced emphasis on quantitative indicators could lead to some metrics being “gamed,” and this needs to be acknowledged and addressed.
     
  • The data infrastructure for managing metrics should be robust.
    We need better, more accurate systems for organizing data collection and analysis. At present, the administrative burden of calculating and applying metrics is considerable and may not be an efficient use of resources. In some cases, peer review costs could be reduced where reliable evaluation metrics are available, though this will differ across disciplines.
     
  • There should be more research on research.
    The research community needs to focus on larger questions, too, such as those related to science policy (e.g., ‘Why are we funding what we are funding?’). This is where the scientometrics community can help by providing reliable and flexible quantitative indicators that enable better assessments.

 

Introducing responsible metrics
Building on the concept of "responsible research and innovation (RRI)", the report proposes “the notion of responsible metrics as a way of framing appropriate uses of quantitative indicators in the governance, management and assessment of research.” The primary characteristics of responsible metrics are:

  • Robustness: They are based on the best available data in terms of accuracy and scope.
  • Humility: They acknowledge that quantitative evaluation can supplement qualitative assessment, but it cannot replace it.
  • Transparency: The processes of data collection and analyses are open and transparent, enabling those being evaluated to test and verify the results.
  • Diversity: Responsible metrics account for differences in discipline and they use different indicators to allow for capturing the “plurality of research and researcher career paths across the system.”
  • Reflexivity: The systemic and potential effects of indicators are recognized and anticipated, and the indicators are updated in response.


The underlying philosophy of responsible metrics is that it is possible to devise quantitative indicators that are flexible, inclusive, reflexive, and grounded in the most accurate information available.

Recommendations to curb the surging metric tide
Using responsible metrics as the basis, the report lists 20 recommendations for stakeholders in the UK’s research community, covering leadership, funding, governance, management, administration, data infrastructure, research information management, and information sources. While many suggestions are specific to research in the UK, some have general applicability to research management across the globe. They are:

  • The research community should revisit its opinion of the role quantitative indicators play in research assessment.
  • Principles for research management and assessment should be defined and led at the institutional level. These principles, along with the use of responsible metrics, should be propagated by research managers and administrators.
  • There should be more clarity on the role of evaluation metrics in career progression and academic appointment.
  • Researchers should be scrupulous about how they use quantitative indicators to strengthen their CVs.
  • Funders should have their own principles for using metrics to evaluate research, instead of bluntly applying the metrics available.
  • To increase transparency in data collection and analysis, principles that support open and reliable research management should be developed and adopted by all stakeholders in research.
  • The use of unique identifiers should be encouraged, e.g., the use of ORCID should be made mandatory in the UK.
  • Publishers should mandate unique IDs and ISSNs to enable smoother tracking for research administrators. Institutions, too, should have their own unique identifiers.
  • Digital Object Identifiers (DOIs) should be extended to all research outputs, and not be restricted to manuscripts.
  • There should be an increase in investment in research information infrastructure. Research funders, too, should invest more in “the science of science policy.”
  • At an overall level, research data management infrastructure should be improved and made more efficient.


The idea behind this 15-month project was to provide a meta-analysis of research evaluation metrics and to help research assessment exercises such as the Research Excellence Framework (REF) – an evaluation system used extensively to measure research quality and impact at higher education institutions in the UK – plan their future assessment cycles. But the observations and recommendations made by Wilsdon et al. may well be applicable to institutions, funding bodies, journals, and researchers across the globe. The report rightly indicates that a one-size-fits-all approach will not work; instead, “it is sensible to think in terms of research qualities, rather than striving for a single definition or measure of quality.” Further, it is clear that, at a fundamental level, responsible metrics pave the way for a more flexible, tolerant, and responsive research evaluation system, one that reflects the dynamic nature of scientific development. Whether this philosophy will give a new direction to the rising metric tide and enable more reliable research evaluation remains to be seen.
 

Sources:

  • Wilsdon, J., et al. (2015). The Metric Tide: Report of the Independent Review of the Role of Metrics in Research Assessment and Management. DOI: 10.13140/RG.2.1.4929.1363

