Podcast: Do language barriers affect the quality of peer review feedback? Interview with Sin Wang Chong


If you were asked to list the most important attributes one needs to have as a peer reviewer, what would they be? Very likely, you’ll mention subject matter expertise, critical-thinking skills, and the ability to provide a sound, objective assessment of the quality of research presented in a manuscript. All of these are, no doubt, imperative.

But most of us rarely think of peer review as performing an important social function, as well as an intellectual one—of supporting fellow researchers in your field and building collegial relationships through peer review feedback. That’s where the concept of “feedback literacy” comes in.

In this interview series, I speak with Professor Sin Wang Chong, who has done extensive work exploring the value of feedback literacy in scholarly peer review. This year’s Peer Review Week focuses on the theme Peer Review and the Future of Publishing, and so, I ask Sin Wang how this particular social value of peer review can be strengthened and what direction he believes peer review should take in the future.

In the first segment, Sin Wang describes what feedback literacy is and why it’s important in peer review and scholarly publishing.

In this second segment, we discuss how researchers from all linguistic backgrounds can learn to provide constructive feedback (even though English is the primary language used for global scholarly communication) and the potential ways AI can help peer reviewers improve their feedback quality ethically.

In the third segment, Sin Wang talks about the importance of shaping the future of peer review based on the principles of inclusivity and empowerment, in particular, empowerment of early career researchers.

About Sin Wang Chong: Sin Wang is Director of Impact and Innovation at the International Education Institute, University of St Andrews, and Head of Evidence Synthesis at the National Institute of Teaching in England. Concurrently, he is a visiting and adjunct professor at a number of universities in Asia, England, and the United States. 

He is Chair of the Scottish Association for Teaching English as a Foreign Language (SATEFL) and serves on the Council of the British Educational Research Association and the Executive Committee of the British Association for Applied Linguistics.

Sin Wang’s research interests are in evidence synthesis, educational assessment, language education, and higher education. He is Associate Editor of two SSCI-indexed journals: Innovation in Language Learning and Teaching and Higher Education Research & Development. He is co-founder and co-director (with Shannon Mason) of Scholarly Peers, a platform to support doctoral students and early career researchers to navigate journal peer review.


[Audio transcript]


A vast majority of researchers today come from non–English-speaking backgrounds and most scholarly communication occurs in English, whether it’s through academic manuscripts or interpersonal interactions. So how can both English-speaking and non–English-speaking researchers take this into account and improve the way they receive and give feedback?

Sin Wang

I think language is, as you said, an important consideration. What I mean by language really is, you know, the tone, the expressions, and the wording, more than linguistic accuracy or proficiency level. Of course, you need to be at a certain proficiency level to be able to convey your ideas.

But I actually don’t think that peer reviewers need to use very difficult, technical, and complex vocabulary or sentence patterns to write up their reports. Actually, I would prefer a report that is written in very plain and direct simple language. It may be in English or in other languages. So I just want to make that clear. When we talk about the language issue or the language barrier, I don’t want to focus too much on the linguistic aspect of it.

I really want to focus on the sociolinguistic aspect of it, or how we use the language in a professional manner. So there are different questions I would like to share with listeners here that you can ask yourselves before or as you provide feedback to authors. Remember, we talked about feedback literacy at the very beginning of the interview, and I talked about three attributes of feedback literacy.

The first one is to understand the nature of feedback. The second one is to make judgments. The third one is to manage emotions. I’m going to share with you some prompting questions which we can use to check whether our feedback is appropriate or useful. So in terms of understanding the nature of feedback, some questions we can ask ourselves include: how can I determine my feedback focus or focuses?

How can I provide feedback that is actionable, specific, and manageable? In order to do that, I have to list out the areas that I would like to focus on and prioritize them. How can I provide feedback that facilitates authors’ reflections? In my own practice as a reviewer, I use questions a lot.

So phrasing something as a question helps the authors to reflect on certain issues. Also, it makes the whole conversation more open and, you know, more welcoming. The second aspect of feedback literacy is managing emotions. And under that I have a few questions that I would like us to ask ourselves.

First, how can I show respect and appreciation to the author explicitly? The key word here is “explicitly.” I know a lot of peer reviewers, deep down, feel a lot of appreciation. They may be impressed, but they don’t put it down in writing. And the problem with peer review is that you often don’t get to see the peer reviewer, or you don’t even know who the peer reviewer is. So I think it’s important to put down our respect, our appreciation, our praise very, very explicitly. Of course, not the generic kind, but very specific and evidence-based.

The second question is how can I direct my feedback to focus on the manuscript and not the author? How can I gain the author’s trust by giving feedback in a professional manner? Again, the author doesn’t know the identity of the peer reviewer in most cases, and in some disciplines in particular. How can the authors know that the reviewer is somebody they can trust? They’ve never met the person, never seen their face, never talked to them. It’s only through reading the words in the reports that the authors establish a kind of relationship with the peer reviewer, and that kind of trusting relationship is extremely important.

Finally, how can I clarify my position as a peer reviewer and acknowledge the limitations of my perspective? This question I learned from one of the peer reviewers of one of my papers. At the very end of their report, they mentioned: “I come from this certain background. I use this methodology a lot, so my perspective is coming from there and it may be very limited.”

So I was so impressed to see that: acknowledging your limitations as a peer reviewer. For example, you can say, “I’m an expert in the methodology that you use, but I’m not very familiar with the topic,” or vice versa, so that when editors and authors read your feedback, they can gauge what to focus on more.

Finally, the last component of feedback literacy is making judgments, and I have a number of reflective questions for that as well. First, what are the expectations of the journal as stipulated in the author guidelines? We have to know what the success criteria are. How are these guidelines translated into practice, as exemplified in some of the latest publications?

Because a lot of the criteria are designed to be vague, and it’s okay for them to be vague, because they are designed to be applicable to all publications. But how do those guidelines translate into practice? We have to look at some publications as examples. If you have published in the journal before, what did the reviewers and editors focus on? You can learn from their focuses.

Finally, what do your colleagues who have published in the journal think? So these questions will help you think about the focuses and inform your judgment. I hope it’s not too much of a list.


No, that’s perfectly fine. I think what I really appreciated about your perspective is how you talked about building trust with the author. That, you know, whether you are proficient in the language will not be as important as your intention. And I think that’s what a lot of non–English-speaking authors actually worry about: one, whether their points are coming across clearly, but more importantly, whether their intention is coming across clearly through their feedback.

So I think this is good assurance for them that if they focus on the main objective of why they’re offering feedback, then everything else will fall into place, and it’s okay to not be proficient in the language. So that’s really useful.

Sin Wang

Can I just say something very quickly? Sorry.

I think sometimes we think of peer review feedback as something very technical, and it can be very technical, but it’s also a very everyday kind of communication. So it’s essentially communication, right? Essentially like how we are chatting right now, how you chat with a friend. So it’s actually very conversational. I think if we have that in mind when we structure our feedback and put down our feedback, that would help a lot.

We’re not trying to showcase how much we understand about a topic. We’re trying to help a friend, help a colleague to improve their work, and we want to make sure they understand our message. So simple language, simple approach, direct, positive. And I think that would be the way to go.


With AI tools being such a major topic of discussion in the past few months, what role do you see sophisticated tools like ChatGPT playing in improving the quality of reviewer comments…the feedback quality of reviewer comments?

Sin Wang

Yeah, I have some ideas, but before that, just a few disclaimers I think are important. First, personally, I think AI is a tool. It’s a tool, and AI cannot replace reviewers. Okay? AI should be used for the right reasons, for example, by reviewers who want to improve the quality of their feedback, not because they want to be irresponsible, right?

And then, I think AI can never write the review for the reviewers. AI can help refine the review, but reviewers need to take sole and ultimate responsibility for the content of their feedback. In other words, I think reviewers need to own the review they produce, with or without the support of AI. And also, when appropriate, I think reviewers should acknowledge in their report the roles played by AI, if any.

So these are a few disclaimers I think are good to have before I share my views.

So there are several ways I think AI like ChatGPT can be used in peer review. I think reviewers can divide the workload with AI a little bit. For example, reviewers can focus more on the content of the feedback, whereas AI can be used to adjust the tone, the wording, the clarity, the language, or the structure of the reviewers’ report.

We can also ask AI to paraphrase the written feedback that reviewers have produced to make it more positive or constructive. So it’s about changing the tone or the wording. We can also ask AI to present feedback in a more organized and comprehensive way. For example, we can ask ChatGPT to add headings, suggest headings, or restructure the feedback, provided that I, as the peer reviewer, have already written the feedback.

Also, based on journal publication criteria, we can ask AI to generate ideas that need to be covered in the feedback. For example, if you have a set of criteria, we can ask ChatGPT for five main areas we have to cover in our report so that it aligns with the journal’s expectations.

So these are some of the ways I can think of, and I have used some of them, but not all. It’s a topic that is so new, and it will never go away, so I think it’s a very important topic for us all to think about. And again, it goes back to my disclaimer at the very beginning: AI is a tool. And our responsibility as academics, as teachers, as reviewers, as editors is to think about effective and meaningful ways to use it to make things better.
