Researchers react to ChatGPT’s authorship credits on published literature and preprints


AI writing tools have gained popularity in recent years, and advances in natural language processing have made them more accurate and sophisticated. One such tool is ChatGPT, an AI chatbot released by the tech company OpenAI, which has made significant inroads into academia. Research papers and preprints have begun crediting the chatbot as an author, sparking reactions from researchers, publishers, and journal editors.

ChatGPT is a large language model trained on a massive amount of text data from the internet. Owing to its ability to generate human-like text, it can be used for a variety of tasks such as language translation, text summarization, and question answering. Researchers are using it to write research papers, and it can also be used to generate figures, tables, and other visual elements commonly found in such papers.

Ethical concerns around the use of ChatGPT have been rife, since authors are expected to take responsibility for their work. Crediting an AI tool with authorship has brought into question the meaning attached to the term “author” and an author’s contribution. “An attribution of authorship carries with it accountability for the work, which cannot be effectively applied to LLMs,”1 opines Magdalena Skipper, editor-in-chief of Nature. Publishers expect authors to vouch for the validity of their work, so instead of crediting ChatGPT as an author, they recommend acknowledging any use of ChatGPT in the acknowledgements section of the work.

Additionally, it has proved difficult to distinguish output generated by the AI tool from text written by humans. A group of researchers at Northwestern University in Chicago selected 50 research papers published in well-known journals and then generated abstracts for them with the help of ChatGPT. The output passed plagiarism checks, and neither an AI-output detector nor human reviewers could distinguish the two with complete accuracy.

Since the AI writing tool bases its output on existing information available online, validating its accuracy can be challenging. Catherine Gao, who led the study, expressed concern about researchers being exposed to fabricated content and warned about the “implications for society at large because scientific research plays such a huge role in our society.”2 Irene Solaiman, who studies the social impact of AI, added, “These models are trained on past information and social and scientific progress can often come from thinking, or being open to thinking, differently from the past.”3

Offering another perspective, Arvind Narayanan, a computer scientist at Princeton University, points out that the real concern is not the chatbot itself but the “perverse incentives”4 around hiring and promotion that lead authors to rely on such tools to generate output.

It remains to be seen how the discussions, perspectives, and policies around the use of ChatGPT develop. What are your thoughts on the chatbot? Please share your views in the comments section below.


1. Stokel-Walker, C. ChatGPT listed as author on research papers: many scientists disapprove. Nature (2023).

2, 3, 4. Else, H. Abstracts written by ChatGPT fool scientists. Nature (2023).


Published on: Jan 30, 2023

Sneha’s interest in the communication of research led her to her current role of developing and designing content for researchers and authors.

