Using artificial intelligence (AI) in peer review



At first sight, it might seem that AI and the peer review process have nothing in common. As a researcher, you want your paper to be reviewed as quickly as possible. Peer reviewers are human, and no machine has (yet) acquired the ability to read a paper and decide whether it is worth publishing. So, in the run-up to Peer Review Week 2020, which focuses on the theme of Trust in Peer Review, why talk about AI and peer review at all?

 

First, let’s clarify what kind of AI we are talking about. The term is used very widely to cover many different processes, some of them sophisticated, others very simple. It is immediately obvious that, for example, the AI required to drive a car without a human driver is highly elaborate and mission-critical; a single error in the process could cause a disaster. Yet self-driving cars depend on a large number of simpler processes, many of which use well-known, tried and tested technology. For example, the ability of cars to provide feedback to the driver when parking has been around for many years; the first such system was introduced in 1999. It is not difficult to see how parking a car depends on many constituent processes, one of the simplest being the ability to estimate distances via a camera. The ability to park a car is built up from a number of discrete tasks of this kind.

 

Let’s look now at peer review. While the crucial task of assessing a paper for novelty and innovation may continue to be human-centered, there are several much more mundane operations that the publisher has to carry out when a manuscript is first received. And, clearly, there is a need to improve the current process: a 2016 peer-review survey carried out by Wiley found that the rate of acceptance of invitations to review had dropped by 5% over the five years before the survey. With fewer reviewers accepting, and more papers being published, you can see the problem in the years to come. Is there a way we can use AI to assist us?

 

An Elsevier report from 2018 identified three tasks in the peer review process that could benefit from automation:

  1. Match a manuscript with candidate reviewers whose qualifications and interests are suitable
  2. Remove candidates who have a potential conflict of interest
  3. Look for signals that might indicate a candidate reviewer’s willingness to accept a request to review.

First is the problem of identifying a relevant reviewer. That doesn’t seem so simple, certainly not for humans! Whatever the academic domain, no publisher can employ enough staff with deep enough knowledge of every submitted subject to identify the current experts. Paradoxically, the higher the level of education, the more specialized the knowledge: even if a publisher hires only editors with a doctorate, their knowledge will cover only a tiny fraction of all the submitted papers. Hiring more experts, in other words, is not the answer.

 

In practice, the human editor becomes an instant expert. They spend some time looking at the names and titles of papers in the subject area to identify who has published recently on the same topic. But this process is slow and rather random, with no guarantee that the potential reviewers identified will have the right expertise. In fact, one survey (“Why do peer reviewers decline to review?”, 2014) found that 14% of review invitations were declined because the reviewers approached felt that others had more relevant subject knowledge. That does not suggest a very precise manual selection process.

 

Instead, UNSILO uses a corpus-based concept extraction tool. Its concept engine identifies hundreds of concepts (key phrases that distinguish each article from all the others in the corpus) in a submission, ranks them in order of relevance to that paper, and then, for biomedical content, matches the resulting cluster of concepts against the 29 million articles and abstracts in the PubMed corpus. Having found matching articles, the system identifies their authors, ranks those authors by relevance, and presents the top 20 to the editorial office for review.
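
UNSILO’s concept engine itself is proprietary, but the general shape of such a pipeline is easy to sketch. The Python below is an illustrative stand-in only: TF-IDF phrases approximate the concept extraction, a tiny invented list of abstracts stands in for the PubMed corpus, and all names and data are hypothetical.

    # Illustrative sketch only: TF-IDF key phrases and cosine similarity stand
    # in for UNSILO's proprietary concept engine; a tiny invented corpus stands
    # in for the 29 million PubMed articles and abstracts.
    from collections import defaultdict
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    # Hypothetical corpus entries: (abstract text, list of authors).
    corpus = [
        ("CRISPR-Cas9 gene editing efficiency in zebrafish embryos.", ["A. Jensen", "B. Li"]),
        ("Off-target effects of CRISPR base editors in human cell lines.", ["C. Novak"]),
        ("Deep learning methods for protein structure prediction.", ["D. Okafor", "B. Li"]),
    ]
    submission = "Improved CRISPR-Cas9 editing efficiency in zebrafish models."

    # 1. Extract distinguishing phrases (1-3 word n-grams weighted by TF-IDF).
    vectorizer = TfidfVectorizer(ngram_range=(1, 3), stop_words="english")
    corpus_matrix = vectorizer.fit_transform([abstract for abstract, _ in corpus])
    submission_vector = vectorizer.transform([submission])

    # 2. Rank corpus articles by similarity to the submission.
    similarities = cosine_similarity(submission_vector, corpus_matrix).ravel()

    # 3. Aggregate scores per author and present the top candidates
    #    (the top 20 in UNSILO's case) to the editorial office.
    author_scores = defaultdict(float)
    for score, (_, authors) in zip(similarities, corpus):
        for author in authors:
            author_scores[author] += score

    shortlist = sorted(author_scores.items(), key=lambda item: -item[1])[:20]
    print(shortlist)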

 

As for conflict of interest, this is a relatively straightforward matching exercise. There are several possible types of conflict of interest (COI), but for one COI check the system simply compares the affiliations of the manuscript authors and the proposed reviewers. If they are the same, the proposed reviewer is discarded. Carrying out such a check by hand might take only a few minutes, but in a world where some 3,000 new scientific articles are published every day, even a small saving on each submission adds up to a very clear benefit.
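
Continuing the illustrative example above (the field names are assumptions, not UNSILO’s actual data model), this kind of affiliation check reduces to a simple filter:

    # One simple COI check: drop any proposed reviewer who shares an
    # affiliation with a manuscript author. Field names are hypothetical.
    def filter_affiliation_coi(author_affiliations, candidates):
        """Return candidates whose affiliation differs from every author's."""
        blocked = {a.strip().lower() for a in author_affiliations}
        return [c for c in candidates
                if c["affiliation"].strip().lower() not in blocked]

    candidates = [
        {"name": "C. Novak", "affiliation": "Aarhus University"},
        {"name": "D. Okafor", "affiliation": "University of Lagos"},
    ]
    print(filter_affiliation_coi(["Aarhus University"], candidates))
    # Only D. Okafor remains; C. Novak shares the manuscript authors' affiliation.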

 

Finally, the machine carries out a few counts and presents the results to the human editor for assessment. How many papers has this potential reviewer authored? If too few, the reviewer may not have sufficient experience; if the reviewer has authored hundreds of papers, they will probably be too senior to agree to a review. So the machine can be configured, under human guidance, to deliver results that match the right criteria.
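
A sketch of that kind of configurable filter might look like the following; the thresholds are invented for illustration and would in practice be set by the editorial office rather than hard-coded.

    # Configurable experience filter: the human sets the band, the machine
    # applies it. The threshold values below are purely illustrative.
    MIN_PAPERS = 3      # fewer: possibly not enough reviewing experience
    MAX_PAPERS = 200    # more: probably too senior to accept the invitation

    def within_experience_band(paper_count, low=MIN_PAPERS, high=MAX_PAPERS):
        return low <= paper_count <= high

    paper_counts = {"C. Novak": 42, "D. Okafor": 1, "B. Li": 350}
    shortlist = [name for name, count in paper_counts.items()
                 if within_experience_band(count)]
    print(shortlist)  # ['C. Novak']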

 

A key aspect of the machine component is that it does not make the final choice of reviewer: the human does that. The machine simply suggests names that match the various criteria outlined above. In other words, the use of relatively simple checks such as these enables humans to do what humans do best: form judgements. Machines cannot (yet) carry out peer reviews; but, like the assisted-parking tool, they can help humans apply their judgement more quickly and more accurately.

 

About UNSILO 

The UNSILO peer reviewer finder is just one of the many AI-related tools now available from CACTUS for the academic publishing workflow. UNSILO also provides a range of technical checks, plus (for content discovery) a recommender engine and an automatic subject collection builder. UNSILO also powers the continuous updating of the world’s largest collection of nano-related content at the Springer Nature Nano site.

Michael Upshall, Head of Sales and Business Development at UNSILO, the Danish machine-learning arm of Cactus Communications
