Video: Can Humans Identify AI-generated Text?
There is increasing concern about the unethical use of AI tools to write research papers. That’s why journals and publishers use AI detection tools to determine whether a piece of writing is AI-generated. But if humans were asked to do the same, do you think they would succeed? Could they correctly differentiate between human-written and AI-generated text?
In this video, James Sonne (Anatomy & Neurobiology, Ph.D. Integrated Biomedical Sciences) explores a research paper that analyzed whether humans were able to detect AI output. The study found that an AI detector was good at correctly classifying both AI-generated output and original abstracts. Humans, however, were especially bad at detecting whether something was generated by AI.
Turns out, blinded human reviewers correctly identified only 68% of AI-generated abstracts as AI-generated, while falsely flagging 14% of original abstracts as AI-generated, a 14% false positive rate. Another interesting finding from this research was that AI-generated text scored remarkably low on plagiarism detection. Original abstracts had an average plagiarism detection score of around 50%, and half of them were flagged as containing plagiarism, even though their authors did not intentionally plagiarize. The AI-generated text, by contrast, had a plagiarism score of nearly 0%!
Does this mean that plagiarism detectors favor AI-generated text? It gets you thinking about whether journals should screen submissions for AI-generated text at all, given our poor ability to detect it and the fact that plagiarism detectors rate AI-generated text as more original. Watch the video to hear what experts say about navigating this complex challenge in research writing.
Break that writer’s block! Check out Paperpal and simplify your academic writing with AI assistance.


