Why you should not use the journal impact factor to evaluate research
The impact factor, an index based on the frequency with which a journal's articles are cited in scientific publications, is the most widely used citation metric for evaluating the influence of published research and the prestige of researchers. However, reliance on the impact factor to assess a researcher's worth has frequently been called into question. This series covers the buzz around the latest impact factor release and delves into questions such as: Why is the journal impact factor not enough to evaluate research? What makes for good science? Can any other metric replace the impact factor?
The impact factor of a journal is a simple average: the number of citations received in a given year by the articles the journal published in the two preceding years, divided by the number of citable items it published in those years [5]. A previous article, "The impact factor and other measures of journal prestige," touched upon its calculation and features. This article delves a little deeper into the fallacies of the impact factor and the points you should consider when using it.
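As a rough illustration of this average, here is a minimal sketch of the two-year calculation using made-up numbers for a hypothetical journal (not real citation data):

```python
def journal_impact_factor(citations_to_prev_two_years, citable_items_prev_two_years):
    """Two-year JIF: citations received this year to articles published in the
    previous two years, divided by the number of citable items published in
    those two years."""
    return citations_to_prev_two_years / citable_items_prev_two_years

# Hypothetical journal: 600 citations in 2014 to articles from 2012-2013,
# which together comprised 200 citable items.
print(journal_impact_factor(600, 200))  # 3.0
```

Note that this single number says nothing about how those 600 citations are distributed across the 200 articles, which is the root of several problems discussed below.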
| How the JIF should be used | How the JIF should not be used |
| --- | --- |
| As a measure of journal prestige and impact | To evaluate the impact of individual articles and researchers |
| To compare the influence of journals within a specific subject area | To compare journals from different disciplines |
| By librarians, to manage institutional subscriptions | By funding agencies, as a basis for grant allocation |
| By researchers, to identify prestigious field-specific journals to follow and possibly submit to | By authors, as the sole criterion for journal selection |
| By journals, to compare expected and actual citation frequency and to benchmark themselves against other journals in their field | By hiring and promotion committees, as a basis for predicting a researcher's standing |
| By publishers, to conduct market research [6] | By authors, to compare themselves with other researchers |
Characteristics of the JIF
Listed below are some features and shortcomings of the JIF that should be well understood in order to prevent misuse of this metric:
- The JIF is a measure of journal quality, not article quality. The JIF measures the citations accrued by all the articles in a journal, not by individual articles. Following the well-known 80-20 rule, the top 20% of articles in a journal receive 80% of the journal's total citations; this holds true even for the most reputable journals, such as Nature [8]. So, an article published in a journal with a high JIF has not necessarily had high impact: it is entirely possible that the article itself has not received any citations. Conversely, a few highly cited papers within a particular year can produce anomalous trends in a journal's impact factor over time [9].
- Only citations within a two-year time frame are considered. The JIF counts only citations received in a given year by articles published in the two preceding years. However, different fields exhibit different citation patterns. While some fields, such as the health sciences, receive most of their citations soon after publication, others, such as the social sciences, garner most citations outside the two-year window [11]. Thus, the true impact of papers cited after the two-year window goes unrecorded.
- The nature of the citation is ignored. As long as a paper in a journal has been cited, the citation counts toward the journal's impact factor, regardless of whether the cited paper is being credited or criticized [8,11]. This means that papers being refuted or held up as examples of weak research can also augment a journal's impact factor. In fact, even papers that have been retracted can increase the impact factor because, unfortunately, citations to these papers cannot be retracted.
- Only journals indexed in the source database are ranked. Thomson Reuters' Web of Science®, the source database for calculating the JIF, contains more than 12,000 titles. Although this figure is reasonably large and is updated annually, several journals, especially those not published in English, are left out. Thus, journals not indexed in Web of Science don't have an impact factor and cannot be compared with indexed journals [12].
- The JIF varies depending on the article types a journal publishes. Review articles are generally cited more often than other article types because they compile all earlier research on a topic. Thus, journals that publish review articles tend to have higher impact factors [13].
- The JIF is discipline dependent. The JIF should only be used to compare journals within a discipline, not across disciplines, because citation patterns vary widely from one discipline to another [14]. For example, even the best journals in mathematics tend to have low JIFs, whereas molecular biology journals have high JIFs.
- The data used for JIF calculations are not publicly available. The JIF is a product of Thomson Reuters®, a private company that is not obliged to disclose its underlying data and analytical methods. In general, other groups have not been able to predict or replicate the impact factor reports released by Thomson Reuters [8].
- The JIF can be manipulated. Editors can inflate their journal's impact factor in various ways. They may publish more review articles, which attract many citations, and stop publishing case reports, which are infrequently cited. Worse still, cases have come to light in which journal editors have returned papers to authors, asking that citations to articles within their own journal, known as self-citations, be added [15].
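The first point above, that a journal's mean citation rate says little about a typical article, can be seen with a toy, entirely hypothetical citation distribution in which a couple of papers attract most of the citations:

```python
import statistics

# Hypothetical citation counts for 10 articles in one journal: two highly
# cited papers dominate, while the rest are cited rarely or never.
citations = [120, 80, 5, 3, 2, 1, 1, 0, 0, 0]

mean = statistics.mean(citations)      # what a JIF-style average reflects
median = statistics.median(citations)  # what a typical article receives

print(mean, median)  # 21.2 1.5
```

The average suggests every article is cited about 21 times, yet the median article receives only one or two citations, which is why a high JIF cannot be read as evidence that any particular paper in the journal had high impact.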
These are some of the reasons you should not look at the JIF as a measure of research quality. It is important to explore other, more relevant indicators for this purpose, possibly in combination. If the JIF is used by a grant-funding body or your university, it might be a good idea to list your h-index and the citation counts of your individual articles in addition to the impact factors of the journals in which you have published. This will help strengthen your case for the quality and impact of your papers, regardless of the prestige of the journals that published them.
Finally, remember that the nature of research is such that its impact may not be immediately apparent to the scientific community. Some of the most noteworthy scientific discoveries in history were recognized years later, sometimes even after the lifetimes of the contributing researchers. No numerical metric can substitute for actually reading a paper, or trying to replicate an experiment, to determine its true worth.
References
1. Garfield E (2006). The history and meaning of the journal impact factor. The Journal of the American Medical Association, 295: 90-93.
2. Brumback RA (2009). Impact factor wars: Episode V - The Empire Strikes Back. Journal of Child Neurology, 24: 260-262.
3. Brischoux F and Cook T (2009). Juniors seek an end to the impact factor race. BioScience, 59: 238-239.
4. Rossner M, Epps HV, and Hill E (2007). Show me the data. The Journal of Cell Biology, 179: 1091-1092.
5. Adler R, Ewing J, and Taylor P (2008). Citation Statistics. Joint Committee on Quantitative Assessment of Research, International Mathematical Union. [http://www.mathunion.org/fileadmin/IMU/Report/CitationStatistics.pdf]
6. Garfield E (2005). The agony and the ecstasy: the history and meaning of the journal impact factor. Presented at the International Congress on Peer Review and Biomedical Publication. [http://garfield.library.upenn.edu/papers/jifchicago2005.pdf]
7. Saha S, Saint S, and Christakis DA (2003). Impact factor: a valid measure of journal quality? Journal of the Medical Library Association, 91: 42-46.
8. Neylon C and Wu S (2009). Article-level metrics and the evolution of scientific impact. PLoS Biology, 7: 1-6.
9. Craig I (2007). Why are some journal impact factors anomalous? Publishing News. Wiley-Blackwell. [http://blogs.wiley.com/publishingnews/2007/02/27/why-are-some-journal-impact-factors-anomalous/]
10. EASE. EASE statement on inappropriate use of impact factors. [http://www.ease.org.uk/publications/impact-factor-statement]
11. West R and Stenius K. To cite or not to cite? Use and abuse of citations. In: Publishing Addiction Science: A Guide for the Perplexed. Babor TF, Stenius K, and Savva S (eds). International Society of Addiction Journal Editors. [http://www.who.int/substance_abuse/publications/publishing_addiction_science_chapter4.pdf]
12. Katchburian E (2008). Publish or perish: a provocation. Sao Paulo Medical Journal, 202-203.
13. The PLoS Medicine Editors (2006). The impact factor game. PLoS Medicine, 3(6): e291.
14. Smith L (1981). Citation analysis. Library Trends, 30: 83-106.
15. Sevinc A (2004). Manipulating impact factor: an unethical issue or an editor's choice? Swiss Medical Weekly, 134: 410.