Video: Are AI Screening Tools Biased?
Artificial intelligence (AI) now drives many aspects of the research workflow, and understanding how AI tools work has become crucial. It is natural for authors to be concerned about AI screening tools, especially as many journals use them to filter out submissions. AI bias does occur, but how does it happen? And what can be done to minimize it?
Watch this video, in which James Sonne (Anatomy & Neurobiology, Ph.D. Integrated Biomedical Sciences) walks through a real-life case that shows how bias in data can lead to biased AI outputs. He takes you through a fascinating example from Amazon. Between 2014 and 2015, Amazon built an AI recruiting tool to speed up hiring and identify top-performing job candidates, especially for software development roles. But there was one major problem: the AI learned biases hidden in the historical data it was trained on.
Because software engineering has traditionally been a male-dominated field, the resumes the model was trained on came mostly from men. As a result, the AI began downgrading applications that included female-associated terms, even something as simple as mentioning a “women’s chess club”! This bias came to light in a 2018 news report, which revealed that Amazon had quietly scrapped the tool after noticing that it discriminated against female candidates.
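To make the mechanism concrete, here is a minimal, hypothetical sketch in Python (using scikit-learn; the resumes, labels, and terms are invented for illustration and have nothing to do with Amazon’s actual system or data). It shows how a toy resume classifier trained on skewed historical hiring outcomes can end up penalizing a female-associated term:

```python
# A minimal sketch (not Amazon's actual system) of how a toy resume
# classifier can absorb bias from skewed historical hiring data.
# All resumes, labels, and terms below are invented for illustration only.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Historical data: mostly male-associated resumes were hired (label 1),
# so female-associated terms end up correlated with rejection (label 0).
resumes = [
    "software engineer java chess club captain",            # hired
    "backend developer python mens soccer team",            # hired
    "machine learning engineer c++ hackathon winner",       # hired
    "software engineer python womens chess club captain",   # rejected
    "frontend developer javascript womens coding society",  # rejected
    "data engineer sql volunteer tutor",                    # rejected
]
labels = [1, 1, 1, 0, 0, 0]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, labels)

# The learned weight for the token "womens" is negative: the model has
# learned to downgrade resumes containing it, even though the term says
# nothing about engineering ability.
womens_idx = vectorizer.vocabulary_["womens"]
print("weight for 'womens':", model.coef_[0][womens_idx])

# Score two otherwise identical resumes, with and without the term.
for text in ["software engineer python chess club captain",
             "software engineer python womens chess club captain"]:
    prob = model.predict_proba(vectorizer.transform([text]))[0, 1]
    print(f"{text!r} -> predicted hire probability {prob:.2f}")
```

Nothing in the code “decides” to discriminate; the skewed correlation in the training data is enough to produce the biased weight.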
Watch the video to understand how this bias formed, what it teaches us about machine learning and training data, and how large language models process information through tokens, probabilities, and patterns. Learn how AI systems absorb human biases from the data they are trained on and why choosing the right training data matters.
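As a rough intuition for the “tokens, probabilities, and patterns” idea, here is a toy next-word model in plain Python (deliberately simplified; real LLMs tokenize and model text very differently, and the sentences below are invented). Its predictions simply mirror the frequencies in its training text, so skewed text yields skewed probabilities:

```python
# A toy next-word model (illustrative only, not a real LLM): its
# probabilities simply mirror patterns in the training text, so if the
# text is skewed, the predictions are skewed too.
from collections import Counter, defaultdict

# Invented training sentences with a deliberately skewed pattern.
training_text = (
    "the engineer fixed the server . "
    "the engineer wrote the code . "
    "the engineer reviewed the design . "
    "the nurse helped the patient ."
)

tokens = training_text.split()
following = defaultdict(Counter)
for prev, nxt in zip(tokens, tokens[1:]):
    following[prev][nxt] += 1

def next_word_probs(word):
    """Probability of each word that followed `word` in the training text."""
    counts = following[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

# After "the", the model assigns a higher probability to "engineer" than to
# "nurse" purely because it appeared more often in the training data.
print(next_word_probs("the"))
```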
Break that writer’s block! Check out Paperpal and simplify your academic writing with AI assistance.
