
{"id":37981,"date":"2025-09-16T13:00:38","date_gmt":"2025-09-16T07:30:38","guid":{"rendered":"https:\/\/www.editage.com\/insights\/?post_type=video&#038;p=37981"},"modified":"2025-09-18T09:36:09","modified_gmt":"2025-09-18T04:06:09","slug":"why-responsible-use-of-ai-matters-with-hong-zhou","status":"publish","type":"video","link":"https:\/\/www.editage.com\/insights\/why-responsible-use-of-ai-matters-with-hong-zhou","title":{"rendered":"Why Responsible Use of AI Matters with Hong Zhou"},"content":{"rendered":"<p><span class=\"TextRun SCXW52247234 BCX0\" lang=\"EN-US\" xml:lang=\"EN-US\" data-contrast=\"auto\"><span class=\"NormalTextRun SCXW52247234 BCX0\">Using AI is not wrong. <\/span><span class=\"NormalTextRun SCXW52247234 BCX0\">What\u2019s<\/span><span class=\"NormalTextRun SCXW52247234 BCX0\"> wrong is not using AI the right way!<\/span> <span class=\"NormalTextRun SCXW52247234 BCX0\">There\u2019s always discussion around how <\/span><span class=\"NormalTextRun SCXW52247234 BCX0\">researchers should responsibly use AI<\/span><span class=\"NormalTextRun SCXW52247234 BCX0\">. But what exactly does that mean? 
<\/span><span class=\"NormalTextRun SCXW52247234 BCX0\">Watch this video to find out what <\/span><\/span><strong><span class=\"TextRun SCXW52247234 BCX0\" lang=\"EN-US\" xml:lang=\"EN-US\" data-contrast=\"auto\"><span class=\"NormalTextRun SCXW52247234 BCX0\">Hong Zhou (Senior Director of AI Product &amp; Innovation, Wiley)<\/span><\/span><\/strong><span class=\"TextRun SCXW52247234 BCX0\" lang=\"EN-US\" xml:lang=\"EN-US\" data-contrast=\"auto\"> <span class=\"NormalTextRun SCXW52247234 BCX0\">thinks of using AI tools for peer review.<\/span><\/span><\/p>\n<p><b><span data-contrast=\"auto\">Q:<\/span><\/b> <b><span data-contrast=\"auto\">Where do you see the greatest potential for AI to support peer reviewers today?<\/span><\/b><br \/>\n<b><span data-contrast=\"auto\">A: <\/span><\/b><span data-contrast=\"none\">The greatest potential lies in helping us publish papers both faster and better. Peer review today faces two big challenges: the risk of research misconduct and the difficulty of finding qualified reviewers quickly. AI is providing value in both areas. For example, AI-powered integrity tools are now widely adopted, with more than 50 vendors offering such services in the market, like the Wiley Papermill Detection Service, which is successfully embedded in the Wiley Research Exchange platform, the STM Integrity Hub, etc. 
On the reviewer side, AI tools are making it possible to identify suitable experts from vast reviewer databases in seconds, something that would have been nearly impossible manually. The Research Exchange review system, for instance, can recommend the most appropriate reviewers from hundreds of thousands of candidates, saving editors weeks of effort. Looking ahead, AI is evolving beyond integrity checks and reviewer matching. We are starting to see the emergence of AI review-assistant tools that can help reviewers structure their reports, highlight key strengths and weaknesses, or even help editors interpret reviewer feedback. The long-term trajectory points towards AI taking on the initial assessment of a manuscript, evaluating quality, readability, novelty, and contribution, before the human reviewer adds deep subject expertise and critical judgment to make an informed decision. Ultimately, this is not about speed versus quality; it is about combining human expertise with AI capability to achieve speed with quality.<\/span><span data-ccp-props=\"{&quot;134233117&quot;:false,&quot;134233118&quot;:false,&quot;201341983&quot;:0,&quot;335559738&quot;:0,&quot;335559739&quot;:0,&quot;335559740&quot;:240}\">\u00a0<\/span><\/p>\n<p><b><span data-contrast=\"auto\">Q: Have you seen any real-world examples where AI tools helped\u2014or hurt\u2014the peer review process?<\/span><\/b><span data-ccp-props=\"{&quot;134233117&quot;:false,&quot;134233118&quot;:false,&quot;201341983&quot;:0,&quot;335559738&quot;:0,&quot;335559739&quot;:0,&quot;335559740&quot;:240}\">\u00a0<\/span><\/p>\n<p><b><span data-contrast=\"auto\">A: <\/span><\/b><span data-contrast=\"auto\">Absolutely!<\/span><span data-contrast=\"none\"> AI is already making a tangible impact on the peer review process across several areas; Wiley already has such tools and is actively working on them. 
They use AI for reviewer suggestion, manuscript triage, language enhancement, scope and integrity checks, and even summarization and literature recommendations to help editors and reviewers quickly grasp a manuscript\u2019s key points.<\/span><span data-contrast=\"none\"> These are not just futuristic concepts; they are already in production today and being integrated into the editorial workflow at scale. One concrete example is Wiley\u2019s AI-powered Papermill Detection Service, launched last year at the London Book Fair, which now screens more than 60,000 manuscripts every month, flagging suspicious patterns automatically. At the same time, AI also powers the reviewer invitation process, helping to send out hundreds of thousands of invitations by identifying the most relevant experts from the vast reviewer database. What would have taken editors hours or days can now be done in minutes, freeing them to focus on judgment and decision making: more value-added work. We have also seen major progress in areas where humans face real limitations. For instance, image manipulation can be nearly impossible to spot with the naked eye, but AI can highlight suspicious alterations in seconds. 
Similarly, while no editor can manually screen hundreds of thousands of potential reviewers, AI can.<\/span><span data-ccp-props=\"{&quot;134233117&quot;:false,&quot;134233118&quot;:false,&quot;201341983&quot;:0,&quot;335559685&quot;:15,&quot;335559738&quot;:0,&quot;335559739&quot;:0,&quot;335559740&quot;:240}\">\u00a0<\/span><\/p>\n<p><b><span data-contrast=\"auto\">Q: In your experience, what aspects of peer review are uniquely human and irreplaceable?<\/span><\/b><br \/>\n<b><span data-contrast=\"auto\">A: <\/span><\/b><span data-contrast=\"auto\">As the name \u201cpeer review\u201d suggests, it\u2019s about judgment and accountability, and those <\/span><span data-contrast=\"none\">remain uniquely human. AI can provide powerful assistance, but it cannot be held responsible for decisions, nor does it have the legal or ethical standing to take ownership of outcomes, such as assigning copyright or bearing accountability for errors. Equally important, humans still bring critical thinking, contextual awareness, and the deep subject matter expertise that AI cannot fully replicate. AI is improving rapidly, but it\u2019s best viewed today as a smart assistant: excellent at heavy lifting such as filtering out out-of-scope submissions, helping to draft review reports, flagging potential misconduct, or recommending suitable reviewers. However, the final decision rests with human editors and reviewers. They evaluate novelty, significance, and ethical considerations in ways that go beyond what algorithms can capture today. That human-in-the-loop model is crucial not only for quality but also for trust in the scholarly record. <\/span><span data-contrast=\"auto\">Rather than replacement, the future, I think, is about collaboration. 
AI augments the process by improving <\/span><span data-contrast=\"none\">efficiency and consistency, while humans ensure responsibility and judgment.<\/span><span data-ccp-props=\"{&quot;134233117&quot;:false,&quot;134233118&quot;:false,&quot;201341983&quot;:0,&quot;335559685&quot;:15,&quot;335559738&quot;:0,&quot;335559739&quot;:0,&quot;335559740&quot;:240}\">\u00a0<\/span><\/p>\n<p><b><span data-contrast=\"auto\">Q: Should peer reviewers be trained to work alongside AI tools? What kind of literacy do they need?<\/span><\/b><br \/>\n<b><span data-contrast=\"auto\">A: <\/span><\/b><span data-contrast=\"none\">Yes, training is absolutely essential. Right now there\u2019s still a lot of misunderstanding and even fear around AI in research publishing. For example, a recent CACTUS survey found that many researchers still fear the use of AI and equate it with misconduct, which creates hesitation and prevents transparency. We need to address this misconception directly. For reviewers, training should go beyond how to use AI; it\u2019s also about when to use it and how to use it responsibly. That means understanding AI\u2019s limitations, setting the right expectations, and recognizing that while AI isn\u2019t perfect, it is also far from useless: its value depends heavily on how and where it is applied. Finally, literacy also means learning how to interpret and question AI output. For example, if a tool suggests a reviewer, the editor will want to know why, or why a particular figure is flagged as a duplicated image. That explainability builds trust, and it\u2019s just as important as improving the algorithms themselves. 
So yes, peer reviewers should be trained not just to use AI but to collaborate with it effectively and responsibly.<\/span><span data-ccp-props=\"{&quot;134233117&quot;:false,&quot;134233118&quot;:false,&quot;201341983&quot;:0,&quot;335559685&quot;:15,&quot;335559738&quot;:0,&quot;335559739&quot;:0,&quot;335559740&quot;:240}\">\u00a0<\/span><\/p>\n<p><b><span data-contrast=\"auto\">Q: What advice would you give to editors or journals looking to responsibly incorporate AI into peer review workflows?<\/span><\/b><br \/>\n<b><span data-contrast=\"auto\">A: <\/span><\/b><span data-contrast=\"none\">I would highlight three main areas. First, be vigilant about risk. AI can generate misinformation: false or inaccurate output caused by errors in the training data, limited knowledge, and so on. It can also produce deliberately misleading content and hallucinations, because large language models generate responses based on patterns, not facts. Bias is another concern: the training data and algorithms can unintentionally reinforce existing inequities. Second, protect copyright and confidentiality. Editors and reviewers must consider whether institutional policy allows the use of unpublished manuscripts in generative AI tools or products; uploading confidential work without safeguards could violate trust and privacy. Third, \u201cjudge the research, not the tool.\u201d A paper should be accepted or rejected based on methodological soundness, novelty, and originality, not simply on whether AI was involved; assuming otherwise is a misconception. The key is transparency: the real risk isn\u2019t disclosure but hiding AI use. A lack of transparency erodes trust, while disclosure builds it. That\u2019s why clear guidelines and training are essential. 
The goal is that authors, not only reviewers and editors, also feel supported and confident to use AI responsibly. For example, Wiley has surveyed more than 5,000 researchers and reviewers, and about 70% of the <\/span><span data-contrast=\"auto\">participants are looking to publishers to guide them on the safe and responsible use of AI. <\/span><span data-contrast=\"none\">In short, the success of AI adoption in peer review depends on building <\/span><span data-contrast=\"auto\">trust, collaboration, and governance, all underpinned by transparency and open communication.<\/span><span data-ccp-props=\"{&quot;134233117&quot;:false,&quot;134233118&quot;:false,&quot;201341983&quot;:0,&quot;335559685&quot;:15,&quot;335559738&quot;:0,&quot;335559739&quot;:0,&quot;335559740&quot;:240}\">\u00a0<\/span><\/p>\n<p><b><span data-contrast=\"auto\">Q:<\/span><\/b> <b><span data-contrast=\"auto\">Do you think disclosing the use of AI in peer review will become standard practice in the future? Why or why not?<\/span><\/b><br \/>\n<b><span data-contrast=\"auto\">A:<\/span><\/b><span data-contrast=\"auto\">\u202f<\/span><span data-contrast=\"none\">Yes, at least in the near future. Disclosure fosters a culture of transparency and avoids the risks that come with hiding AI use: hiding creates distrust, while disclosure builds trust. It also helps editors, reviewers, and even authors themselves better evaluate and contextualize the work. <\/span><span data-contrast=\"auto\">We have already seen this shift. For example, in a recent COPE AI forum discussion, <\/span><span data-contrast=\"none\">one of the key themes was moving beyond simply detecting AI-generated text towards verified compliance with AI-use disclosure and editorial standards. In other words, disclosure is becoming a standard quality-control step rather than a special integrity task. 
Many publishers are now publishing detailed guidelines on AI use, not only for authors but increasingly for editors and reviewers as well, and a framework is emerging to support this culture shift. The COPE AI guidelines, the STM integrity task force, and most recently the European Commission\u2019s Living Guidelines on the Responsible Use of Generative AI in Research, published in April, are all examples of how the industry is developing and codifying <\/span><span data-contrast=\"auto\">best practice. In short, disclosure will almost certainly become standard in the near term, I think. But whether it will remain necessary in the long term will depend on how seamlessly AI is embedded into the scholarly ecosystem and how society chooses to govern and normalize its use.<\/span><span data-ccp-props=\"{&quot;134233117&quot;:false,&quot;134233118&quot;:false,&quot;201341983&quot;:0,&quot;335559685&quot;:15,&quot;335559738&quot;:0,&quot;335559739&quot;:0,&quot;335559740&quot;:240}\">\u00a0<\/span><\/p>\n<p><b><i><span data-contrast=\"auto\">Want to know if your paper is ready for peer review? 
Get your manuscript evaluated by expert reviewers using our <\/span><\/i><\/b><a href=\"https:\/\/www.editage.com\/services\/other\/pre-submission-peer-review?utm_source=editageinsights&amp;utm_medium=article-boilerplate&amp;utm_campaign=why-responsible-use-of-ai-matters-with-hong-zhou\" target=\"_blank\" rel=\"noopener\"><b><i><span data-contrast=\"none\">Pre-Submission Peer Review Service<\/span><\/i><\/b><\/a><b><i><span data-contrast=\"auto\">.<\/span><\/i><\/b><span data-ccp-props=\"{}\">\u00a0<\/span><\/p>\n","protected":false},"author":8,"featured_media":37983,"template":"","meta":{"_acf_changed":false,"inline_featured_image":false},"new_categories":[],"new_tags":[5830,5869],"series":[5941],"class_list":["post-37981","video","type-video","status-publish","has-post-thumbnail","hentry","new_tags-peer-review","new_tags-peer-review-week","series-peer-review-week-2025"],"acf":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v25.0 - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>Why Responsible Use of AI Matters with Hong Zhou | Editage Insights<\/title>\n<meta name=\"description\" content=\"What exactly does it mean to use AI responsibly? Watch this video to find out about where and how AI tools can be used for peer review.\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/www.editage.com\/insights\/why-responsible-use-of-ai-matters-with-hong-zhou\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Why Responsible Use of AI Matters with Hong Zhou | Editage Insights\" \/>\n<meta property=\"og:description\" content=\"What exactly does it mean to use AI responsibly? 
Watch this video to find out about where and how AI tools can be used for peer review.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/www.editage.com\/insights\/why-responsible-use-of-ai-matters-with-hong-zhou\" \/>\n<meta property=\"og:site_name\" content=\"Editage Insights\" \/>\n<meta property=\"article:publisher\" content=\"https:\/\/www.facebook.com\/Editage\" \/>\n<meta property=\"article:modified_time\" content=\"2025-09-18T04:06:09+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/www.editage.com\/insights\/wp-content\/uploads\/2025\/09\/Hong-thumbnail.png\" \/>\n\t<meta property=\"og:image:width\" content=\"1280\" \/>\n\t<meta property=\"og:image:height\" content=\"720\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/png\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:site\" content=\"@Editage\" \/>\n<meta name=\"twitter:label1\" content=\"Est. reading time\" \/>\n\t<meta name=\"twitter:data1\" content=\"8 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"WebPage\",\"@id\":\"https:\/\/www.editage.com\/insights\/why-responsible-use-of-ai-matters-with-hong-zhou\",\"url\":\"https:\/\/www.editage.com\/insights\/why-responsible-use-of-ai-matters-with-hong-zhou\",\"name\":\"Why Responsible Use of AI Matters with Hong Zhou | Editage Insights\",\"isPartOf\":{\"@id\":\"https:\/\/www.editage.com\/insights\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\/\/www.editage.com\/insights\/why-responsible-use-of-ai-matters-with-hong-zhou#primaryimage\"},\"image\":{\"@id\":\"https:\/\/www.editage.com\/insights\/why-responsible-use-of-ai-matters-with-hong-zhou#primaryimage\"},\"thumbnailUrl\":\"https:\/\/www.editage.com\/insights\/wp-content\/uploads\/2025\/09\/Hong-thumbnail.png\",\"datePublished\":\"2025-09-16T07:30:38+00:00\",\"dateModified\":\"2025-09-18T04:06:09+00:00\",\"description\":\"What exactly 
does it mean to use AI responsibly? Watch this video to find out about where and how AI tools can be used for peer review.\",\"breadcrumb\":{\"@id\":\"https:\/\/www.editage.com\/insights\/why-responsible-use-of-ai-matters-with-hong-zhou#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/www.editage.com\/insights\/why-responsible-use-of-ai-matters-with-hong-zhou\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/www.editage.com\/insights\/why-responsible-use-of-ai-matters-with-hong-zhou#primaryimage\",\"url\":\"https:\/\/www.editage.com\/insights\/wp-content\/uploads\/2025\/09\/Hong-thumbnail.png\",\"contentUrl\":\"https:\/\/www.editage.com\/insights\/wp-content\/uploads\/2025\/09\/Hong-thumbnail.png\",\"width\":1280,\"height\":720,\"caption\":\"AI for peer review\"},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/www.editage.com\/insights\/why-responsible-use-of-ai-matters-with-hong-zhou#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\/\/www.editage.com\/insights\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Why Responsible Use of AI Matters with Hong Zhou\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/www.editage.com\/insights\/#website\",\"url\":\"https:\/\/www.editage.com\/insights\/\",\"name\":\"Editage Insights\",\"description\":\"\",\"publisher\":{\"@id\":\"https:\/\/www.editage.com\/insights\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/www.editage.com\/insights\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Organization\",\"@id\":\"https:\/\/www.editage.com\/insights\/#organization\",\"name\":\"Editage 
Insights\",\"url\":\"https:\/\/www.editage.com\/insights\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/www.editage.com\/insights\/#\/schema\/logo\/image\/\",\"url\":\"https:\/\/www.editage.com\/insights\/wp-content\/uploads\/2024\/09\/editage-insights-logo-1-scaled.webp\",\"contentUrl\":\"https:\/\/www.editage.com\/insights\/wp-content\/uploads\/2024\/09\/editage-insights-logo-1-scaled.webp\",\"width\":2560,\"height\":324,\"caption\":\"Editage Insights\"},\"image\":{\"@id\":\"https:\/\/www.editage.com\/insights\/#\/schema\/logo\/image\/\"},\"sameAs\":[\"https:\/\/www.facebook.com\/Editage\",\"https:\/\/x.com\/Editage\"]}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"Why Responsible Use of AI Matters with Hong Zhou | Editage Insights","description":"What exactly does it mean to use AI responsibly? Watch this video to find out about where and how AI tools can be used for peer review.","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/www.editage.com\/insights\/why-responsible-use-of-ai-matters-with-hong-zhou","og_locale":"en_US","og_type":"article","og_title":"Why Responsible Use of AI Matters with Hong Zhou | Editage Insights","og_description":"What exactly does it mean to use AI responsibly? Watch this video to find out about where and how AI tools can be used for peer review.","og_url":"https:\/\/www.editage.com\/insights\/why-responsible-use-of-ai-matters-with-hong-zhou","og_site_name":"Editage Insights","article_publisher":"https:\/\/www.facebook.com\/Editage","article_modified_time":"2025-09-18T04:06:09+00:00","og_image":[{"width":1280,"height":720,"url":"https:\/\/www.editage.com\/insights\/wp-content\/uploads\/2025\/09\/Hong-thumbnail.png","type":"image\/png"}],"twitter_card":"summary_large_image","twitter_site":"@Editage","twitter_misc":{"Est. 
reading time":"8 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"WebPage","@id":"https:\/\/www.editage.com\/insights\/why-responsible-use-of-ai-matters-with-hong-zhou","url":"https:\/\/www.editage.com\/insights\/why-responsible-use-of-ai-matters-with-hong-zhou","name":"Why Responsible Use of AI Matters with Hong Zhou | Editage Insights","isPartOf":{"@id":"https:\/\/www.editage.com\/insights\/#website"},"primaryImageOfPage":{"@id":"https:\/\/www.editage.com\/insights\/why-responsible-use-of-ai-matters-with-hong-zhou#primaryimage"},"image":{"@id":"https:\/\/www.editage.com\/insights\/why-responsible-use-of-ai-matters-with-hong-zhou#primaryimage"},"thumbnailUrl":"https:\/\/www.editage.com\/insights\/wp-content\/uploads\/2025\/09\/Hong-thumbnail.png","datePublished":"2025-09-16T07:30:38+00:00","dateModified":"2025-09-18T04:06:09+00:00","description":"What exactly does it mean to use AI responsibly? Watch this video to find out about where and how AI tools can be used for peer review.","breadcrumb":{"@id":"https:\/\/www.editage.com\/insights\/why-responsible-use-of-ai-matters-with-hong-zhou#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/www.editage.com\/insights\/why-responsible-use-of-ai-matters-with-hong-zhou"]}]},{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.editage.com\/insights\/why-responsible-use-of-ai-matters-with-hong-zhou#primaryimage","url":"https:\/\/www.editage.com\/insights\/wp-content\/uploads\/2025\/09\/Hong-thumbnail.png","contentUrl":"https:\/\/www.editage.com\/insights\/wp-content\/uploads\/2025\/09\/Hong-thumbnail.png","width":1280,"height":720,"caption":"AI for peer 
review"},{"@type":"BreadcrumbList","@id":"https:\/\/www.editage.com\/insights\/why-responsible-use-of-ai-matters-with-hong-zhou#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/www.editage.com\/insights\/"},{"@type":"ListItem","position":2,"name":"Why Responsible Use of AI Matters with Hong Zhou"}]},{"@type":"WebSite","@id":"https:\/\/www.editage.com\/insights\/#website","url":"https:\/\/www.editage.com\/insights\/","name":"Editage Insights","description":"","publisher":{"@id":"https:\/\/www.editage.com\/insights\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/www.editage.com\/insights\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Organization","@id":"https:\/\/www.editage.com\/insights\/#organization","name":"Editage Insights","url":"https:\/\/www.editage.com\/insights\/","logo":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.editage.com\/insights\/#\/schema\/logo\/image\/","url":"https:\/\/www.editage.com\/insights\/wp-content\/uploads\/2024\/09\/editage-insights-logo-1-scaled.webp","contentUrl":"https:\/\/www.editage.com\/insights\/wp-content\/uploads\/2024\/09\/editage-insights-logo-1-scaled.webp","width":2560,"height":324,"caption":"Editage 
Insights"},"image":{"@id":"https:\/\/www.editage.com\/insights\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/www.facebook.com\/Editage","https:\/\/x.com\/Editage"]}]}},"_links":{"self":[{"href":"https:\/\/www.editage.com\/insights\/wp-json\/wp\/v2\/video\/37981","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.editage.com\/insights\/wp-json\/wp\/v2\/video"}],"about":[{"href":"https:\/\/www.editage.com\/insights\/wp-json\/wp\/v2\/types\/video"}],"author":[{"embeddable":true,"href":"https:\/\/www.editage.com\/insights\/wp-json\/wp\/v2\/users\/8"}],"version-history":[{"count":0,"href":"https:\/\/www.editage.com\/insights\/wp-json\/wp\/v2\/video\/37981\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.editage.com\/insights\/wp-json\/wp\/v2\/media\/37983"}],"wp:attachment":[{"href":"https:\/\/www.editage.com\/insights\/wp-json\/wp\/v2\/media?parent=37981"}],"wp:term":[{"taxonomy":"new_categories","embeddable":true,"href":"https:\/\/www.editage.com\/insights\/wp-json\/wp\/v2\/new_categories?post=37981"},{"taxonomy":"new_tags","embeddable":true,"href":"https:\/\/www.editage.com\/insights\/wp-json\/wp\/v2\/new_tags?post=37981"},{"taxonomy":"series","embeddable":true,"href":"https:\/\/www.editage.com\/insights\/wp-json\/wp\/v2\/series?post=37981"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}