The very first time I wrote an academic blog post, it was about the impact factor. It was June 2012; the latest Journal Citation Reports had just been released, and the impact factor was a trending topic. At the time, I didn’t know much about it. After doing plenty of research, I found mixed opinions, including many strong ones about the need to boycott the impact factor, but I was afraid to dismiss it as a metric, because I could see how much researchers seemed to revere it. So I erred on the side of caution and wrote a balanced post about the pros and cons of the impact factor — why it’s great but should be used with caution.
Four years later, here I am again, and I wonder what’s changed. On the surface, I would say not much. Thomson Reuters has just released the 2016 Journal Citation Reports and once again sent the scholarly publishing world into a frenzy. #ImpactFactor quickly became a trending topic of discussion on social media. Journals that are happy with their latest rankings are publishing editorials and tweeting about their new impact factors. Authors who are almost ready with their submissions are reassessing their target journals based on those journals’ new impact factors. Academic publishing blogs like Editage Insights are chiming in with a series of posts on the impact factor, many of them reiterating how the impact factor has its pros and cons, how it is overused and misused, how newer measures of research quality should be embraced, and so on.
I’ll admit that since 2012 scholarly publishing has seen a boom in useful discussions about alternative metrics; science for the public; and measuring research impact in terms of how it touches and improves people’s lives, how much it’s being discussed on social media, and how many lay people are aware of it. But what about the ground reality? I was recently in China and asked several authors what aspects they consider when selecting a target journal for their submission. I wasn’t surprised to learn that the impact factor is still the most important thing for them. And why not, if academic and funding institutions still heavily rely on the impact factor when making decisions related to promotion, tenure, and grant disbursement!
Digitization is revolutionizing academic publishing, and the Internet is changing the way people access and absorb information. Here’s an analogy I can think of to explain this: I love cooking and I’m pretty good at it, clearly a trait I’ve picked up from my mother. Now I think about how my mum treasured her recipe books; how she would pull them out, dust them off, and browse through stacks to find that one recipe she had used years ago. I on the other hand have never owned a single recipe book. I usually have an idea for something I want to make, I look it up online, skim through several recipes for the same dish, and mix and match and put something together. I have just one large recipe book — a collection of blog posts bookmarked on my browser.
Now consider this: Authors still consider the impact factor very important when selecting a journal to submit to. But do citing authors care as much what journal a relevant paper has been published in? When researchers are writing their introduction and discussion sections, they look up keywords to find relevant papers, and they cite those papers. Would they omit relevant citations just because those papers are published in a journal with an impact factor below some threshold limit? Isn’t it ironic then that the impact factor is based on how widely individual papers are cited but is still meant to reflect the quality of entire journals?
Does the impact factor of a journal even matter when more and more published research is being deemed irreproducible, when more and more individual papers are being retracted, when the media is misrepresenting and inaccurately inflating the significance of published research, when more and more authors are complaining about journals not being transparent about their reasons for rejection?
Going back to my cooking analogy, if I think about all the recipes I look up online, I find it very important to read the comments, where readers rate the recipe, share their experience with replicating it, or suggest changes to the ingredient proportions now and then. In all fields of business, consumer reviews and ratings form the base indicator of quality and success. Extending this line of thinking to academia, researchers are customers for journals, in both the traditional subscription (reader pays) and open access (author pays) models. Isn’t it only fair then that the customers be given a voice and that their voice counts in journal rankings?
So how do I think journal quality should be measured? I would want to give authors a forum to rate and review journals based on how clearly they explain their requirements, how well they handle submissions, and how promptly they respond to author inquiries. And should there be a need for authoritative bodies that rank journals, these are some of the criteria that should determine the rankings, alongside other factors like the thoroughness of the journal’s screening and peer review processes.
For research papers themselves, citation metrics and altmetrics are useful indicative tools, but none of them is complete in itself. All types of research studies — those with robust methods and those with questionable ones — have an equal opportunity to garner citations and social mentions, and it is very difficult to place these indicators in their appropriate context. Worse, if these measures are used in isolation, the “impact” of a research paper could well be determined, or at least influenced, by the budget the publisher or the authors’ institution can devote to promoting the published paper.
Instead of merely quantitative measures, I would want a forum where researchers can review and rate published articles, comment on whether the research could be successfully reproduced, and suggest enhancements to the protocols. Journals that rack up good user reviews and ratings across their individual articles would thus automatically qualify as high-ranking, high-impact journals.
A lot has changed since I first wrote about the impact factor in 2012. Needless to say, my own views about the impact factor have changed, along with my confidence in expressing them. And I do believe that academics everywhere echo my sentiments. In fact, last year, through an Independent Review of the Role of Metrics in Research Assessment and Management, the Higher Education Funding Council for England made several strong recommendations for how the academic community should change its approach to assessing research.
But what is not changing fast enough, and doing more harm than good, is the way funding bodies and institutions use the impact factor to measure the worth of a researcher, the way researchers feel the need to rely on the impact factor for journal selection decisions, and the fact that a mere number seems to hold more sway than researchers’ real-life experiences working with journals. It’s time for research and researcher quality assessment to keep pace with all the other changes sweeping over the academic publishing world.