Video: How AI is Redefining the Future of Peer Review with Iva Grabarić Andonovski


Can AI replace human peer reviewers? How much should we rely on AI tools to simplify one of the most crucial processes in scholarly communication? Watch the video to hear what Iva Grabarić Andonovski (Vice President, EASE) has to say about the role AI is likely to play in the future of peer review.

Q: Where do you see the greatest potential for AI to support peer reviewers today?
A: There is great potential, of course, in the use of AI tools. You can use them, for example, in manuscript pre-screening and for checking text originality. That is actually how they have mostly been used so far, so it's not a new thing for us. They can also help with reference management: researchers are already using AI tools in that area, but you can also use them to check whether all the references are properly cited. You can check image quality and image originality as well, although the tools are not that accurate there. For example, they sometimes flag images as AI-generated when they are not, so you should be careful when using AI tools for that purpose. They are very good for checking datasets and for code, both writing and checking code. In these ways, they can be useful at least for initial pre-screening.

But when we are talking about the use of AI tools by peer reviewers, we need to be aware that many publishers and journal editors do not allow the use of AI in the peer review process, because it could constitute a breach of confidentiality. The data you input and the prompts you provide to the AI tool may be used for model training, so you cannot be 100% sure that the information will be handled confidentially.

Q: What are the limits of AI when it comes to evaluating manuscripts? Can it handle nuance, ethics, or context?
A: When we are talking about the limitations of using AI tools for peer review, we must be aware that there is potential for bias and hallucinations. Biases can result from biases present in the data the AI tool was trained on. For example, if there are biases towards race, ethnicity, etc. in the original data, they can be unintentionally reproduced when you use the AI tool, so this should also be checked. Hallucinations are quite frequent when using AI tools: these are data fabricated by the tool, I would say, without scientific evidence. AI tools are trained to provide you with an answer even when they cannot find any available data, so the output in that case is not scientific evidence but a best guess, actually a statistical anticipation of a plausible result. We must be aware of that when using AI tools, check the output, and make sure the data is really accurate. For example, when you ask ChatGPT to comment on a part of the text you are working on and to provide some references, if no references are available, the tool will make them up. I tried it, and it was very funny: when I gave back the prompt saying that these references are not valid, that I cannot find them, it replaced them with other references which were also false. So, you need to be very careful. There is also the problem that AI tools sometimes cannot grasp the context of a study, or nuances that are relevant for a particular industry, country, or region.

Q: Have you seen any real-world examples where AI tools helped—or hurt—the peer review process?
A: We are reading more and more reports of the use of AI tools by peer reviewers being revealed by authors, and this is something editors should take care about. Just the other day I was reading a report from an author who was very frustrated because the review he received contained a line such as: "This is a summarization of your report." These are the typical outputs of AI tools. Sometimes you can even read a line like, "Sure, I can rephrase," etc. This was evidence that the editor did not check the peer review at all; he or she just sent it on to the authors. So, we need to be careful, and each peer review should, of course, be checked for validity by the editor or someone on the editorial team. There is also an increase in reports of false positives. For example, an AI tool detected supposed image manipulation, claiming an image was AI-generated, when it was not. Sometimes something appears artificial to the AI tool when it is not, so it is hard to rely on the AI tool alone. There are also biases in detecting text written by non-native English speakers as AI-generated, and this is increasingly reported. Unfortunately, AI tools are not yet skillful enough to distinguish what is written by a non-native English speaker from what is AI-generated text.

Q: In your experience, what aspects of peer review are uniquely human and irreplaceable?
A: If we compare a peer review report generated by an AI tool with one written by a human, we can see the differences. For example, AI tools cannot understand the context of the research: how it fits into previously published data, the relevance of the reported results to previously published results, and how your results as an author are relevant for the community in the region or worldwide. These are aspects that AI tools cannot grasp. In addition, I would say that AI tools cannot provide a response that would encourage authors to investigate a certain part of their study in more depth, or to try a different approach or different methods. These are things that can only be provided by humans, so in that way they are not replaceable. What AI tools primarily lack is personal experience, and that is something which cannot be easily replaced. This needs to be taken into account before relying on AI tools alone.

Q: Should peer reviewers be trained to work alongside AI tools? What kind of literacy do they need?
A: Of course, to use AI tools, you should be educated in a way that lets you understand how they work and what good-quality prompts are, so that you can provide enough information for the AI tools to produce the best possible outcome. Each peer reviewer should also be aware of the potential for biases and hallucinations.

Q: What advice would you give to editors or journals looking to responsibly incorporate AI into peer review workflows?
A: I would advise checking the recommendations of major institutions and associations such as the Committee on Publication Ethics (COPE) or the European Association of Science Editors (EASE), because there is a lot of debate on the use of AI tools in the peer review process, and these recommendations guide journal editors, authors, and publishers on how to use the tools ethically and efficiently. COPE has published a discussion paper on the use of AI tools in the decision-making process. They recommend not relying on AI tools alone and keeping control over the editorial process at each stage. Journals should also give authors and reviewers clear instructions with enough information about what is recommended, what is acceptable, for which purposes AI tools may be used, and how their use should be disclosed. The most relevant aspect, I would say, is transparency. If a journal or editor is using AI tools at any stage, from pre-screening to production, this needs to be clearly declared in the instructions and clearly communicated to the authors. Journals and publishers should likewise provide recommendations to peer reviewers, clearly describing how they can use AI tools for peer review, where they may use them, and how to disclose that use. Everything needs to be clearly communicated to the authors, and vice versa. If we can agree that AI tools will most probably be used by all stakeholders, then we need to be aware of the possible consequences of their use and communicate them clearly to everyone. There is a lot of information available on the internet about how AI tools turn prompts into outputs and how to use them efficiently for text polishing, reference checking, and text similarity. These tools are quite easy to use: as long as you are not generating more complex reports, they can be readily used by any author or peer reviewer.

Q: Do you think disclosing the use of AI in peer review will become standard practice in the future? Why or why not?
A: For authors, it is important to declare whether they have used AI tools in writing the manuscript. There are recommendations either to include a separate section of the manuscript called, for example, "AI statement" or "Use of AI tools," or to state in the acknowledgements section that you used AI tools for language polishing, data management, or in any other way. These are very relevant points. I would say the key issues are transparency, education, and acknowledging that AI tools are here: they will be used by many more researchers and peer reviewers in the future, so we need to adapt and have a degree of flexibility. Editors should be aware that peer reviewers will most probably use AI tools at some point, but we need a degree of transparency to be able to produce quality peer review output that is confidential, transparent, and unbiased. I would say these will be the most difficult challenges in the future regarding the use of AI tools.

Want to know if your paper is ready for peer review? Get your manuscript evaluated by expert reviewers using our Pre-Submission Peer Review Service.
