Video: Why Responsible Use of AI Matters with Hong Zhou
Using AI is not wrong. What’s wrong is not using AI the right way! There is ongoing discussion about how researchers should use AI responsibly. But what exactly does that mean? Watch this video to find out what Hong Zhou (Senior Director of AI Product & Innovation, Wiley) thinks about using AI tools for peer review.
Q: Where do you see the greatest potential for AI to support peer reviewers today?
A: The greatest potential lies in helping us publish papers both faster and better. Peer review today faces two big challenges: the risk of research misconduct and the difficulty of finding qualified reviewers quickly. AI is providing value in both areas. For example, AI-powered integrity tools are now widely adopted; there are more than 50 vendors offering such services in the market, like the Wiley Papermill Detection Service, which is embedded in the Wiley Research Exchange platform, the STM Integrity Hub, etc. On the reviewer side, AI tools make it possible to identify suitable experts from a vast reviewer database in seconds, something that would have been nearly impossible manually. The Research Exchange review system, for instance, can recommend the most appropriate reviewers from hundreds of thousands of candidates, saving editors weeks of effort. Looking ahead, AI is evolving beyond integrity checks and reviewer matching. We are starting to see the emergence of AI review-assistant tools that can help reviewers structure their reports, highlight key strengths and weaknesses, or even help editors interpret reviewer feedback. The long-term trajectory points toward AI taking on the initial assessment of a manuscript to evaluate quality, readability, novelty, and contribution before the human reviewer adds the deep subject expertise and critical judgment needed to make an informed decision. Ultimately, this is not about speed versus quality; it is about combining human expertise with AI capability to achieve speed with quality.
Q: Have you seen any real-world examples where AI tools helped—or hurt—the peer review process?
A: Absolutely. AI is already making a tangible impact on the peer review process across several areas. At Wiley, we already have tools in production that use AI for reviewer suggestion, manuscript triage, language enhancement, scope and integrity checks, and even summarization and literature recommendations that help editors and reviewers quickly grasp a manuscript’s key points. These are not futuristic concepts; they are in production today and being integrated into editorial workflows at scale. One concrete example is Wiley’s AI-powered Papermill Detection Service, launched last year at the London Book Fair. It now screens more than 60,000 manuscripts every month, flagging suspicious patterns automatically. At the same time, AI also powers the reviewer invitation process, helping to send out hundreds of thousands of invitations by identifying the most relevant experts from a vast reviewer database. What would have taken editors hours or days can now be done in minutes, freeing them to focus on judgment and decision-making, the more value-added work. We have also seen major progress in areas where humans face real limitations. For instance, image manipulation can be nearly impossible to spot with the naked eye, but AI can highlight suspicious alterations in seconds. Similarly, no editor can manually screen hundreds of thousands of potential reviewers, but AI can.
Q: In your experience, what aspects of peer review are uniquely human and irreplaceable?
A: As the name suggests, peer review is about judgment and accountability, and those remain uniquely human. AI can provide powerful assistance, but it cannot be held responsible for decisions, nor does it have the legal or ethical standing to take ownership of outcomes, such as being assigned copyright or bearing accountability for errors. Equally important, humans still bring critical thinking, contextual awareness, and deep subject-matter expertise that AI cannot fully replicate. AI is improving rapidly, but it is best viewed today as a smart assistant, excellent at the heavy lifting: filtering out out-of-scope submissions, helping to draft papers and reports, flagging potential misconduct, or recommending suitable reviewers. However, the final decision rests with human editors and reviewers. They provide the evaluation of novelty, significance, and ethical considerations that goes beyond what algorithms can capture today. That human-in-the-loop model is crucial not only for quality but also for trust in the scholarly record. Rather than replacement, I think the future is about collaboration: AI augments the process by improving efficiency and consistency, while humans ensure rigor, responsibility, and judgment.
Q: Should peer reviewers be trained to work alongside AI tools? What kind of literacy do they need?
A: Yes, training is absolutely essential. Right now there is still a lot of misunderstanding and even fear around AI in research and publishing. For example, a recent CACTUS survey found that many researchers fear the use of AI and equate it with misconduct, which creates hesitation and prevents transparency. We need to address this misconception directly. For reviewers, training should go beyond how to use AI; it is also about when to use it and how to use it responsibly. That means understanding AI’s limitations, setting the right expectations, and recognizing that AI is neither perfect nor useless: its value depends heavily on how and where it is applied. Finally, literacy also means learning how to interpret and question AI output. For example, if a tool suggests a reviewer, the editor will want to know why, or why a particular figure was flagged as a duplicated image. That explainability builds trust, and it is just as important as improving the algorithms themselves. So yes, peer reviewers should be trained not just to use AI but to collaborate with it effectively and responsibly.
Q: What advice would you give to editors or journals looking to responsibly incorporate AI into peer review workflows?
A: I would highlight three main areas. First, be vigilant about risk. AI can generate misinformation, meaning false or inaccurate output caused by errors in the training data, limited knowledge, and so on, as well as disinformation, deliberately misleading content. Hallucinations occur because large language models generate responses based on patterns, not facts. Bias is another concern: training data and algorithms can unintentionally reinforce existing inequities. Second, protect copyright and confidentiality. Editors and reviewers must consider whether institutional policy allows unpublished manuscripts to be used in generative AI tools or products; uploading confidential work without safeguards could violate trust and privacy. Third, judge the research, not the tool. A paper should be accepted or rejected based on methodology, soundness, novelty, originality, and so on, not simply on whether AI was involved. The key is transparency. The real risk isn’t disclosure, it is hiding AI use: lack of transparency erodes trust, while disclosure builds it. That is why clear guidelines and training are essential, so that authors, not only reviewers and editors, feel supported and confident in using AI responsibly. For example, Wiley has surveyed more than 5,000 researchers and reviewers, and about 70% of participants are looking to publishers to guide them on the safe and responsible use of AI. In short, the success of AI adoption in peer review depends on building trust, collaboration, and governance, all underpinned by transparency and open communication.
Q: Do you think disclosing the use of AI in peer review will become standard practice in the future? Why or why not?
A: Yes, at least in the near future. Disclosure fosters a culture of transparency and avoids the risks that come with hiding AI use: hiding creates distrust, while disclosure builds trust. It also helps editors, reviewers, and even authors themselves better evaluate and contextualize the work. We have already seen this shift. For example, in a recent COPE forum discussion on AI, one of the key themes was moving beyond simply detecting AI-generated text toward verifying compliance with AI-use disclosure and editorial standards. In other words, disclosure is becoming a standard quality-control step rather than a special integrity task. Many publishers are now publishing detailed guidelines on AI use, not only for authors but increasingly for editors and reviewers as well. A framework is emerging to support this culture shift: the COPE AI guidelines, the STM integrity task force, and most recently the European Commission’s Living Guidelines on the Responsible Use of Generative AI in Research, published in April. These are all examples of how the industry is developing and codifying best practices. In short, I think disclosure will almost certainly become standard in the near term. Whether it remains necessary in the long term will depend on how seamlessly AI is embedded into the scholarly ecosystem and how society chooses to govern and normalize its use.
Want to know if your paper is ready for peer review? Get your manuscript evaluated by expert reviewers using our Pre-Submission Peer Review Service.
