AI and peer review: Collaborative intelligence and human expertise



Typically, the initial processing of an article takes a couple of weeks. Identifying and selecting peer reviewers takes another 1–2 weeks, and the actual review can take a month or more.1 The annual cost of peer review per reviewer has recently been estimated at over 1,200 USD.2 Imagine, then, how arduous a manuscript's journey must be as it visits the editorial offices of multiple journals. Millions of hours are dedicated annually to the peer review of manuscripts that have been rejected once and subsequently resubmitted to different journals!1 It is amply clear that the system needs changes to reduce the burden on peer reviewers.

There has been much excitement around the rapidly developing field of artificial intelligence (AI) in every sphere of life. AI has the potential to enhance the quality and efficiency of peer review in academic publishing, but there are some caveats. In this article, we provide an overview of the possibilities and concerns around using AI in peer review, and how AI can best complement human expertise.

Potential applications of AI in peer review 

AI tools can help streamline the peer review process, dramatically cutting down the time and effort that reviewers and journal editors need to put in. In fact, peer review has benefited a great deal from automation in workflows for many years now, e.g., in detecting plagiarism (iThenticate, CrossCheck), duplicate submissions, and image manipulation. There are many more steps that can harness the power of AI for better automation and efficient peer review: 

1. Quick screening for various parameters 

a. Ethical issues: AI can be used to identify and flag violations of ethical standards by screening for ethical approval statements, whether and how consent was obtained, inclusion of appropriate disclosures, etc. Screening can be done to check if funder requirements such as grant numbers, clinical trial registration, and open data requirements have been met. 

b. Basic checks for the journal: While judging whether a submission fits the journal’s scope requires human judgment, AI tools can assist with basic assessments, such as confirming that the submission is one of the journal’s accepted article types and evaluating the quality of writing, data presentation, and basic formatting. 

c. Compliance with reporting guidelines: Automated tools can aid in assessing a study’s compliance with basic reporting items per EQUATOR guidelines (e.g., CONSORT for randomized trials, PRISMA for systematic reviews and meta-analyses).  

d. Pinpointing flaws in data analyses: AI-assisted systems (e.g., Tableau and Datawrapper) can help reviewers map trends and patterns in the data to identify potential issues.
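As an illustration, the kind of screening described in point 1 can be sketched with simple pattern matching. The checks and patterns below are hypothetical, invented for this example; real screening tools rely on trained language models rather than fixed rules:

```python
import re

# Hypothetical statements a screening tool might look for; the patterns
# are illustrative only, not taken from any real product.
CHECKS = {
    "ethics_approval": r"(ethic(s|al)\s+(committee|approval|board)|IRB)",
    "informed_consent": r"(informed\s+consent|consent\s+was\s+obtained)",
    "trial_registration": r"(NCT\d{8}|clinical\s+trial\s+registr)",
    "data_availability": r"(data\s+(are|is)\s+available|data\s+availability)",
}

def screen_manuscript(text: str) -> dict:
    """Flag which required statements appear to be present in the text."""
    return {name: bool(re.search(pattern, text, re.IGNORECASE))
            for name, pattern in CHECKS.items()}
```

A flagged item would then be routed to the editorial office for a human to confirm, rather than triggering an automatic rejection.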

2. Finding and matching reviewers to manuscripts 

Machine learning algorithms may be trained to analyze the keywords, abstract, text, and references of the manuscript and suggest suitable reviewers with the necessary expertise, availability, and reviewer history. The Web of Science Reviewer Locator is an example of such a reviewer recommender system. 
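A minimal sketch of such reviewer matching, using simple bag-of-words similarity in place of a trained model; the reviewer profiles and scoring here are invented for illustration:

```python
import math
from collections import Counter

def _vectorize(text: str) -> Counter:
    """Crude bag-of-words vector; real systems use richer embeddings."""
    return Counter(text.lower().split())

def _cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two word-count vectors."""
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def rank_reviewers(manuscript: str, reviewer_profiles: dict) -> list:
    """Rank candidate reviewers by textual similarity between the
    manuscript and each reviewer's publication history."""
    ms_vec = _vectorize(manuscript)
    scores = {name: _cosine(ms_vec, _vectorize(profile))
              for name, profile in reviewer_profiles.items()}
    return sorted(scores, key=scores.get, reverse=True)
```

A production system would also weigh availability, past review history, and conflicts of interest before suggesting names to the editor.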

3. Summarizing manuscript content 

Reviewers may benefit from tools which draw out key concepts to summarize the manuscript. This can help the reviewer save time and focus on reviewing the content. 
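As a toy illustration, an extractive summarizer can score sentences by overall word frequency and keep the top-scoring ones; real summarization tools use abstractive language models rather than anything this simple:

```python
import re
from collections import Counter

def extractive_summary(text: str, n_sentences: int = 2) -> str:
    """Keep the n sentences whose words are most frequent in the text,
    preserving their original order. A toy extractive approach."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    freqs = Counter(w for s in sentences for w in re.findall(r"\w+", s.lower()))
    scored = sorted(sentences,
                    key=lambda s: sum(freqs[w] for w in re.findall(r"\w+", s.lower())),
                    reverse=True)
    top = set(scored[:n_sentences])
    return " ".join(s for s in sentences if s in top)
```

Even with far better models, such summaries should orient the reviewer, not replace a full reading of the manuscript.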

Potential problems of AI in peer review 

Despite AI’s potential to expedite peer review, employing AI for peer review does raise certain issues. AI tools may struggle to determine a paper’s relevance and grasp the research’s contextual significance within the literature. The tools might not successfully “judge” whether the methods suit the research question or whether the data support the authors’ conclusions. A completely AI-generated review would simply be a rehashed version of the paper, devoid of original opinion and expert assessment.  

Further, AI is prone to inaccuracies because of hallucination.3 It can also reproduce pre-existing biases in its training data. Another problem is confidentiality: manuscripts under review are meant to remain confidential before publication, and feeding a manuscript into an AI system may breach that confidentiality, with possible implications for copyright violation and plagiarism. 

Another caveat is that summarization tools can encourage overreliance, tempting reviewers to read an AI-generated summary instead of the manuscript itself. 

Current views and guidelines on AI use for peer review 

Researchers often endure reviewer fatigue, having to review reams of manuscripts and even grant proposals. Many journals use AI tools to address such challenges. However, some publishers and funders have voiced concerns about reliance on AI for peer review. NIH peer reviewers are not allowed to use generative AI tools when reviewing grant applications or research proposals, the Australian Research Council has banned generative AI for peer review, and the journal Science prohibits its reviewers from using generative AI. Meanwhile, the U.S. National Science Foundation and the European Research Council are working to develop guidelines for appropriate uses of AI in peer review.4 

The use of AI in peer review raises ethical, social, and technical challenges that need to be addressed. Broadly, an AI system should be transparent, explainable, and unbiased. It should be reliable and accurate, producing consistent and valid results. It should respect the confidentiality of reviewers and authors, and its users should be accountable for the outcomes. Importantly, users must disclose the use of such tools in their workflows. Where publishers use AI tools, appropriate training must be provided to peer reviewers.  

Looking ahead: Important considerations 

Considering all the promises and pitfalls, we can say that AI alone will not solve peer review workflow problems. With human collaboration and oversight, however, AI tools can bring in much needed efficiency and speed. 

Automating steps like screening for basic formatting, article type, text and image plagiarism, and compliance with ethics and reporting guidelines can spare reviewers a great deal of strain and drudgery. Delegating such tasks to AI also ensures that no element gets skipped, as can happen inadvertently with a human reviewer. For example, an AI tool may prompt a reviewer if any portions have been missed from the critique. By relinquishing such ancillary steps, the reviewer can focus on the aspects where they can contribute more meaningfully and in ways AI cannot. This might even feel rewarding to the reviewer. Overall, AI interventions can reduce reviewer burnout and enable reviewers to handle more reviews with less strain. 
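Such a completeness prompt could be as simple as checking a draft review against a configurable checklist; the aspects and cue words below are purely illustrative, not from any real tool:

```python
# Hypothetical checklist a journal might configure; the aspect names
# and cue words are illustrative only.
REVIEW_ASPECTS = {
    "methods": ["method", "design", "protocol"],
    "statistics": ["statistic", "analysis", "p-value"],
    "ethics": ["ethic", "consent"],
}

def missed_aspects(review_text: str) -> list:
    """Return checklist aspects the review never mentions, so the
    reviewer can be prompted before submitting."""
    text = review_text.lower()
    return [aspect for aspect, cues in REVIEW_ASPECTS.items()
            if not any(cue in text for cue in cues)]
```

The point is not that the machine judges the review, only that it reminds the human of anything left uncovered.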

Journal editors often deal with limited reviewer pools. AI assistance at different levels could help reduce the number of reviewers needed per manuscript. What’s more, AI’s support in the identification of suitable reviewers can help reduce time wasted in contacting mismatched or unavailable reviewers, thereby avoiding unnecessary delays in reviewer assignment.  

To conclude 

AI tools have the capability to automate numerous tasks in peer review, supporting reviewers and journal editors throughout the process. A great approach would be assigning onerous steps that do not require original human input to AI, while letting reviewers focus on the finer intellectual aspects and rigor of studies. 

Generative AI is evolving rapidly, and the concerns of today might be addressed in coming years, if not months, relieving reviewers and editors from even more tasks. However, clear policies for acceptable use will need to be developed, for which open dialogues are warranted from all stakeholders in academia and academic publishing. Embracing a careful and prudent approach for using AI will be important in defining the direction of peer review in the years to come. 

 

References 

  1. Huisman, J., Smits, J. Duration and quality of the peer review process: the author’s perspective. Scientometrics 113, 633–650 (2017). 

  2. LeBlanc, A.G., Barnes, J.D., Saunders, T.J. et al. Scientific sinkhole: estimating the cost of peer review based on survey data with snowball sampling. Res Integr Peer Rev 8, 3 (2023). https://doi.org/10.1186/s41073-023-00128-2 

  3. Weise, K., Metz, C. When A.I. chatbots hallucinate. The New York Times (2023). https://www.nytimes.com/2023/05/01/business/ai-chatbots-hallucination.html  

  4. Kaiser, J. Science funding agencies say no to using AI for peer review. Science (2023). doi: 10.1126/science.adj7663  

 

 

Published on: Sep 25, 2023

Sunaina Singh did her masters and doctorate in plant genetic resources, specializing in the use of molecular markers for genotyping horticultural cultivars.
