The advance and decline of the impact factor


The impact factor is one of the most discussed topics in the publishing and scientific community. Thomson Reuters assigns most journals a yearly impact factor (IF), which is the mean citation rate during that year for the papers the journal published during the previous two years. Last month, Thomson Reuters released the much-awaited Journal Citation Reports (JCR), with the new journal impact factors for 2013. According to Thomson Reuters, the latest JCR features 10,853 journal listings across 232 disciplines and 83 countries. A total of 379 journals received their first impact factor, and 37 journals were suppressed because of questionable citation activity. Suppressed journals are re-evaluated after two years to decide whether they should be included in the JCR again.

Here are some attention-grabbing highlights from the JCR: 66 journals were banned from the 2013 impact factor list because of excessive self-citation or ‘citation stacking,’ wherein journals cite themselves or one another excessively. According to Thomson Reuters, 55% of journals show an increase in impact factor this year, whereas 45% show a decrease. One journal whose impact factor declined is PLoS ONE, the world’s largest journal by number of papers published. PLoS ONE’s impact factor dropped by 16%, from 4.4 in 2010 (when it published 6,749 articles) to 3.7 in 2012 (when it published 23,468 articles). Interestingly, while many in the publishing industry are discussing the details of the new JCR, some journals and researchers are not bothered by it. Why is this so?

Researchers and publishing professionals are well aware of the growing criticism of the impact factor. Last year, a new initiative was launched to make academic assessment less dependent on it. In December 2012, a group of editors and publishers of scholarly journals gathered at the Annual Meeting of The American Society for Cell Biology in San Francisco to discuss how the quality of research output is evaluated and how the scientific literature is cited. They also wanted to find ways to ensure that a journal’s quality is not conflated with the impact of its individual articles. At this meeting, they drew up a set of recommendations now referred to as the San Francisco Declaration on Research Assessment (DORA). These recommendations focus mainly on practices relating to research articles published in peer-reviewed journals and seek to improve the way the quality of research output is evaluated. The themes addressed in DORA are as follows:

  • The need to eliminate the use of journal-based metrics, such as journal impact factors, in funding, appointment, and promotion considerations
  • The need to assess research on its own merits rather than on the basis of the journal in which the research is published
  • The need to capitalize on the opportunities provided by online publication (such as relaxing unnecessary limits on the number of words, figures, and references in articles, and exploring new indicators of significance and impact).

Although DORA does not propose methods to achieve all of this, it clearly sets out the problems with the impact factor and a path to overcoming them. It discourages the use of impact factors to measure the impact of individual research articles, to assess a researcher’s scientific contribution, or to make decisions on promotion, hiring, and funding. Instead, DORA suggests other journal-based metrics that provide a clearer picture of a journal’s performance, such as the 5-year impact factor, Eigenfactor, SCImago, the h-index, and editorial and publication times.

DORA has received a positive response from many in the academic and scientific community worldwide. More than 8,000 individuals and 300 organizations have signed it. Of the signatories, 6% are from the humanities and 94% from scientific disciplines; 46.8% are from Europe, 8.9% from South America, and 5.1% from Asia. However, some critics have pointed out that DORA criticizes the impact factor harshly without offering a better alternative for assessing the impact of journals and authors. They feel that the impact factor has endured because of its reliability, something DORA does not acknowledge. Thomson Reuters has released a statement in response to DORA: while it accepts that the impact factor does not, and is not meant to, measure the quality of individual articles in a journal, it maintains that the impact factor does correlate with a journal’s reputation in its field.

As authors and researchers, do you think DORA would bring about a change in the scientific world? Would you support it? Please share your views.

Also read about why the journal impact factor should not be used to evaluate research impact and other prevailing debates on the impact factor.

 

Published on: Jan 08, 2014

Sneha’s interest in the communication of research led her to her current role of developing and designing content for researchers and authors.
