Is peer review really effective? The case of 120 withdrawn papers
Peer review is perceived as one of the mainstays of scientific publishing. Papers that undergo peer review are generally considered to be of high quality because they are scrutinized by experts before publication. Nevertheless, the process has also been criticized on various grounds. The recent withdrawal of 120 fake papers from well-known peer-reviewed publishers such as Springer and the Institute of Electrical and Electronics Engineers (IEEE) has sparked a discussion in the scientific community about the quality of peer review and whether journals actually conduct it.
Cyril Labbé, a computer scientist at Joseph Fourier University in Grenoble, France, worked for two years and found that 120 conference proceedings and papers tied to specific conferences were computer-generated gibberish. These nonsensical papers were produced with SCIgen, a program created by researchers at the Massachusetts Institute of Technology (MIT) in Cambridge. SCIgen randomly combines strings of words to produce fake papers that can appear plausible to a reader without experience or expertise in the field. Yet these papers were published in venues claiming to have a peer review process in place, which raises questions about whether they really underwent peer review, and if they did, how they found their way to publication.
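To see how easily such text can be produced, consider a minimal sketch of the general technique SCIgen relies on: randomly expanding a context-free grammar until only words remain. The tiny grammar below is invented for illustration and borrows a few phrases from the withdrawn abstract; SCIgen's actual grammar is vastly larger and more elaborate.

```python
import random

# Illustrative toy grammar (NOT SCIgen's real grammar). Keys are
# non-terminal symbols; values are lists of possible productions.
GRAMMAR = {
    "SENTENCE": [
        ["In recent years,", "NOUN", "have been used to", "VERB", "NOUN", "."],
        ["We concentrate our efforts on proving that", "NOUN",
         "can", "VERB", "NOUN", "."],
    ],
    "NOUN": [["spreadsheets"], ["public-private key pairs"],
             ["efficient archetypes"], ["congestion control heuristics"]],
    "VERB": [["emulate"], ["visualize"], ["synthesize"], ["disprove"]],
}

def expand(symbol, rng):
    """Recursively expand a symbol into a list of word tokens."""
    if symbol not in GRAMMAR:
        return [symbol]                     # terminal: emit as-is
    production = rng.choice(GRAMMAR[symbol])  # pick a random production
    words = []
    for part in production:
        words.extend(expand(part, rng))
    return words

def fake_sentence(rng=None):
    """Generate one grammatical-looking but meaningless sentence."""
    rng = rng or random.Random()
    words = expand("SENTENCE", rng)
    # join with spaces, but attach the final period directly
    return " ".join(words[:-1]) + words[-1]

print(fake_sentence(random.Random(0)))
```

Because every production is syntactically well-formed, the output reads like fluent academic prose even though it carries no meaning, which is exactly why a hurried or inexpert reviewer can be fooled.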
Let’s consider some concerns this case of withdrawn papers has raised:
1. Is there a problem with the peer review system?
Expecting peer reviewers to invariably spot fraudulent content may be asking too much. In the case of these withdrawn papers, the content was vague, but the terminology was plausible. For instance, a paper titled “TIC: a methodology for the construction of e-commerce” contained this in its abstract:
In recent years, much research has been devoted to the construction of public-private key pairs; on the other hand, few have synthesized the visualization of the producer-consumer problem. Given the current status of efficient archetypes, leading analysts famously desires the emulation of congestion control, which embodies the key principles of hardware and architecture. In our research, we concentrate our efforts on disproving that spreadsheets can be made knowledge-based, empathic, and compact.
It is understood that passing peer review does not imply that the research is flawless. Owing to a lack of adequate incentives, peer review is at times not as rigorous as it should be, and it is possible that these papers were submitted to prove precisely this. As cognitive scientist Stevan Harnad comments, “Qualified referees are a scarce, over-harvested resource. It is not easy to find the right referees; ill-chosen referees (inexpert or biased) can admit a bad paper; they can miss detectable errors.”
2. Is the “publish or perish” culture responsible for such scandals?
Labbé argues that the high pressure on research scientists to publish, and to publish frequently, creates an environment in which submitting fake research can pay off. It is well known that in some fields and countries, career advancement depends on the number of publications researchers have to their credit, with equal importance attached to the impact factors of the journals in which they publish.
So it is likely that some researchers resort to unethical publication practices such as submitting fake papers. Michael Behe, Professor of Biological Sciences at Lehigh University in Pennsylvania, remarks, “It looks like there are far more charlatans than we thought, or conversely, the economic benefits of securing tenure far outweigh the punishment for getting caught.”
3. Is there a difference between the peer review of subscription-based and open access journals?
Because open access publishing involves authors paying to be published, many have questioned the quality of both its peer review and the research it publishes. Incidentally, the 120 withdrawn papers appeared in subscription-based venues. Does this put to rest the claim that open access publishers have less rigorous peer review than subscription-based ones?
As Cyril Labbé, the scientist who exposed the fake papers, puts it, the scandal points to a “spamming war started at the heart of science.” Although the intentions behind submitting the fabricated papers are unclear (to test journals, as a prank, or to defame authors or editors), it is indeed disturbing that esteemed publications with established editorial processes could be misled so easily. Such incidents raise questions about the credibility of the scientific community. Journals should be more vigilant about the research they publish, and it is time researchers stopped treating the volume of publications as a measure of success.
Do you think journals actually put papers through peer review, and how effective is it? Should the number of publications a researcher has define future career opportunities? Please share your views.