Should I use ChatGPT to write a peer review?



Being a researcher is an exciting and intellectually stimulating journey, but it’s not without its challenges. One of the burdens we face in this profession is reviewer fatigue: the weariness that builds up from the steady responsibility of writing peer reviews for our colleagues’ work. Now, don’t get me wrong, peer review is an essential process that ensures the quality and integrity of scientific research. However, it can sometimes feel like an additional weight on our shoulders. Picture this: you’ve spent countless hours conducting experiments, analyzing data, and writing your own paper. Then you receive a request to review someone else’s work and feel guilty about declining it, since saying no might adversely affect your standing as a scientist. Can large language models (LLMs) like ChatGPT make it easier for you to peer review others’ papers? And if so, should you use them? Let’s look at some of the views on using ChatGPT, or any other LLM, as a peer reviewing tool.

An experimental investigation of AI-generated peer review

Back in 2021, Checco et al.[1] developed an AI tool to experimentally investigate whether AI could serve as an alternative to a human peer reviewer during a journal’s manuscript evaluation process. The system they developed often reached outcomes similar to those of human peer reviewers, but it’s worth noting that it focused on what the authors termed “a rather superficial set of features,” such as readability and format. They also found potential for bias: LLMs and other machine learning tools are trained on data from the past, which may contain inherent biases, and their outputs are therefore likely to reflect those biases too.

Recommendations after the launch of ChatGPT

After ChatGPT (built on the GPT-3.5 family of models) took the academic world by storm in late 2022, Hosseini and Horbach (2023)[2] took a close look at how LLMs are being used in the publication process. What they discovered is pretty interesting. On the one hand, LLMs can be handy tools for summarizing peer review comments and creating initial drafts of decision letters. But here’s the catch: they could also make existing problems in the peer review system worse. Fraudsters might take advantage of LLMs to produce more authentic-sounding, well-written fake reviews. It’s also worth noting that LLMs are still at an early stage of development; right now, they seem better suited to improving the first draft of a review than to writing a review from scratch. Based on their findings, the researchers strongly advise journal editors and peer reviewers to be upfront about whether and how they’ve used LLMs in decisions related to manuscripts. Transparency is key!

Drawbacks of LLMs in peer review

Donker (2023)[3] shared in The Lancet Infectious Diseases his firsthand experience of using an LLM to generate a peer review. And guess what? The results were not what he expected. The AI-generated peer review report seemed to have a lot of comments that sounded genuine, but here’s the twist—they had nothing to do with the actual manuscript being reviewed. Not only that, but the LLM even went ahead and generated a bunch of bogus references to cite. The review looked professional and balanced on the surface, but it lacked any specific critical content related to the manuscript or the study it described. The real danger here is that someone who hadn’t thoroughly read the manuscript might mistake it for an authentic review report. And to make matters worse, those unrelated comments could even be taken as reasons to reject the paper! His experience led him to strongly recommend against using LLMs for peer review.

So, should you use an LLM while peer reviewing a paper?

You can see from the above discussion that using LLMs in their current form as a replacement for human effort in peer review isn’t a good idea. If you are still tempted to save some time and effort, here is a set of steps you should take to prevent problems for the authors and journal concerned.

  1. Confirm that the journal and publisher permit you to use LLMs as a peer reviewer. Publishers like Emerald[4] do not approve of the use of AI tools as a substitute for peer review. The Program Chairs of the ICCV 2023[5] conference explicitly term it unethical to use LLMs in any part of the peer review process. Every reviewer for ICCV 2023 has to confirm that their comments reflect their genuine opinions and that no part of their report is AI-generated.
  2. If your journal confirms they don’t mind you using LLMs, you must first draft a peer review report that is 100% your own work. Read the manuscript, form your own views on its strengths and weaknesses, and make a judgment call on its novelty and significance.
  3. After you’ve finished the above draft, check what output an LLM can provide as peer review. Use this as a second pair of eyes, to identify issues you might have overlooked. However, you must cross-verify all recommendations from the LLM on the basis of your own knowledge of the field and of the scientific process.
  4. If necessary, use an LLM to polish or summarize your peer review report. LLMs can be useful if you struggle to get the tone of your comments right, or if you are not confident in your English writing skills. But again, read the final output very carefully and make sure your intended meaning has been conveyed fully and accurately.
  5. Fully disclose to the journal whether and how you’ve used LLMs. This step is key for maintaining transparency and safeguarding the integrity of the peer review process. You’ll need to confirm that all opinions in the peer review report are genuinely your own, and that you take responsibility for them.
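If your journal does permit LLM use, step 4 above can even be scripted. Below is a minimal sketch in Python, assuming access to an OpenAI-style chat API; the client call and the model name are assumptions, so substitute whatever tool your journal allows. The key idea is to constrain the prompt so the LLM polishes language only and cannot alter your scientific judgments.

```python
# Sketch: ask an LLM to polish the language of a reviewer-written draft,
# without letting it change the substance of the review.
# The prompt builder is plain Python and works with any chat-style LLM;
# the commented-out API call below it is an assumption.

POLISH_INSTRUCTIONS = (
    "You are a language editor. Improve the grammar, tone, and clarity of "
    "the peer review below. Do NOT add, remove, or alter any scientific "
    "judgment, recommendation, or factual claim. Return only the edited text."
)

def build_polish_messages(review_draft: str) -> list[dict]:
    """Build a chat-message list that restricts the LLM to language polishing."""
    return [
        {"role": "system", "content": POLISH_INSTRUCTIONS},
        {"role": "user", "content": review_draft},
    ]

# Hypothetical usage (requires an API key; client and model names are assumptions):
# from openai import OpenAI
# client = OpenAI()
# response = client.chat.completions.create(
#     model="gpt-4",
#     messages=build_polish_messages(my_draft_review),
# )
# polished = response.choices[0].message.content
```

Even with a restrictive prompt like this, the output still needs the careful re-read described in step 4, and the disclosure described in step 5.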

Conclusion

For a busy researcher, LLMs can look like a shortcut to becoming a super-efficient and productive peer reviewer! But before you get too carried away, you’ve got to be careful. We’re still in the early stages of LLM development, you know. Ethical concerns and the importance of human judgment come into play here, and LLMs, as they currently are, don’t seem able to replicate a human peer reviewer. But here’s the exciting part: imagine newer and more sophisticated LLMs coming into the picture. These could be trusty sidekicks, giving peer reviewers an extra pair of eyes to catch things they might miss, while still letting them leverage their human expertise and skills. Now that’s something worth looking forward to!

 

Published on: Jun 23, 2023

Marisha Fonseca: an editor at heart and perfectionist by disposition, providing solutions for journals, publishers, and universities in areas like alt-text writing and publication consultancy.
