Rethinking Literature Review in the Era of Artificial Intelligence


Scientific exploration looks vastly different today from just a few decades ago. Interdisciplinarity has become the norm, standards of rigor and reproducibility have risen, and progress is accelerating at unprecedented rates. Research no longer occurs in silos but at the crossroads of disciplines. Moreover, the sheer volume of publications has grown so much that it is nearly impossible for an individual researcher to keep up: millions of new research articles appear every year, and some fields generate thousands each week. The foundational task of literature review has thus become daunting and something of a bottleneck, especially for early-career researchers.

Against this background, it is not surprising that many have turned to artificial intelligence (AI), which has made its presence felt in nearly every domain of human life. AI tools and platforms appeal to researchers because they ease the workload and deliver rapid results. However, it is important to realize that AI cannot and, most definitely, should not replace the researcher's role in literature review. AI cannot achieve what a literature review is fundamentally meant to achieve: the identification and synthesis of robust, relevant evidence for the purpose of gaining meaningful insights and advancing knowledge.

Why have AI tools for literature review become popular?

Whether you are an established researcher or an early-career researcher trying to establish yourself, it is extremely important to keep up with publications in your field. This no longer means going through a few key papers in a niche area; it requires sifting through thousands of papers, many of them interdisciplinary, to identify the relevant literature. Furthermore, literature review is not a researcher's only job: they also have to plan experiments, execute them, analyze the findings, collaborate with other researchers, attend workshops and meetings, network with peers, and much else besides.

In such a scenario, AI helps ease the burden of literature review. Searches are no longer based only on keywords but on semantics: AI can look for articles whose context or meaning matches the given keywords and phrases. It can scour thousands of papers across databases in seconds, pick out articles that seem relevant, and summarize each one to help you decide whether it matters to you. What would have taken months the traditional, manual way can now be achieved in a matter of seconds. The speed and breadth that AI tools offer are unparalleled.

But does that mean AI can replace humans in literature review? Absolutely not! A researcher who went only by what an AI tool offers would almost inevitably go down a rabbit hole that is bound to backfire.

The vetting process remains a human task

Consider the following scenario: You are a geneticist who works on cervical cancer, and you want to pursue a line of investigation that involves computational modeling of large datasets to identify certain patterns. This would require you to search across disciplines and scour through thousands of papers to find relevant literature to build your hypothesis. You turn to an AI platform for help. It easily identifies 500 candidates from the sea of articles out there in a matter of a few seconds. This seems incredibly efficient and feels like you have saved a lot of time.

But those 500 results likely also include:

  • abstracts that don’t include complete datasets
  • preprints of papers that are already counted among the 500 candidates
  • papers with overlapping datasets that cannot be counted separately
  • papers that are published in predatory journals
  • papers that were later retracted
  • papers that are not quite relevant to your line of thinking

The actual count, once you remove the dubious and irrelevant candidates, may be closer to 50 papers that you can then evaluate.

But you would only realize this if you made the effort to go through all the papers the AI identifies for you. AI cannot perform the vetting process for you: it can help you search the haystack rapidly, but it has limited reasoning capability and does not know how to evaluate. Picking the relevant and credible studies out of the identified candidates requires domain expertise, an understanding of what a robust study entails, and lots of time!

But if a researcher has to still read through and evaluate so many papers, how can they find the time to do it all?

Engaging experts could be a pragmatic solution

Researchers in academia typically feel they must do everything themselves to retain ownership of their work. Appealing as that notion is, it has become increasingly impractical for an individual, or even a research group, to conduct a thorough literature review while also maintaining research productivity. Modern literature review demands varied skill sets: the ability to use AI search tools effectively, knowledge of quality standards and how to filter search results, enough statistics to assess the significance of the collected data, and domain expertise.

One solution is to outsource: engage trained research experts to assist with the literature review. Such experts can provide targeted support, whether for screening large numbers of abstracts, extracting datasets that meet your study criteria, verifying the credibility of a journal or the robustness of a study, or checking the citations and claims in a paper. Importantly, this approach does not take intellectual ownership of the project away from the researcher, who still frames the research objectives, interprets the body of work, draws conclusions, and builds the narrative.

Furthermore, given the time saved, seeking such expert assistance can also be cost-effective. Platforms such as Kolabtree now give researchers the opportunity to engage experts for literature review.

 

Conclusion: a sustainable workflow involving AI and humans

In today’s fast-moving research landscape, the bottleneck of literature review can be addressed with a structured workflow that is neither entirely AI-driven nor entirely human-driven. The most sustainable approach is to use AI for the breadth and speed it offers in literature search; to seek assistance from experts for depth (i.e., evaluating the AI-identified candidates, ensuring relevance and quality, extracting data); and finally to synthesize the results, interpret them, and frame the concepts yourself. Such a division of labor ultimately strengthens the research itself, because oversights are minimized, superficial or weak evidence is set aside, and fatigue-driven errors are reduced.

In conclusion, AI has solved the problem of speed, which is no doubt valuable, but it has not solved the harder problem of ensuring reliability, rigor, and cautious interpretation. That remains a deeply human task, and a collaboration between machines, human experts, and the researcher is the optimal way to tackle literature review.

