Video: How Prompt Engineering Controls AI Output
How does prompt engineering actually control the output of a large language model (LLM)? In artificial intelligence (AI), “temperature” is a parameter that controls the randomness, and therefore the creativity and predictability, of the text an LLM generates. It strongly influences how an AI tool selects each word of its response to a prompt.
Watch this video, in which James Sonne (Anatomy & Neurobiology, Ph.D. Integrated Biomedical Sciences) explains how “temperature” controls the creativity and predictability of an AI’s output. Using a simple analogy of traveling through a high-dimensional vector space, he shows how an AI can either follow a straight, laser-like path (predictable, flat language) or explore a wider cone of possibilities (more creative, human-like writing). This “cone” is what temperature represents: a low temperature gives you safe, repetitive answers, while a higher temperature lets the AI pick from a wider range of words, even ones with lower probability, producing more interesting and surprising results.
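For readers curious about the mechanics behind the analogy, here is a minimal sketch of how temperature reshapes next-token probabilities. The tokens and their scores below are hypothetical, chosen purely for illustration; real LLMs apply the same temperature-scaled softmax to vocabulary-sized score lists.

```python
import math

def softmax_with_temperature(logits, temperature):
    """Turn raw model scores (logits) into sampling probabilities.

    Lower temperature sharpens the distribution (the top word dominates);
    higher temperature flattens it (rarer words gain probability).
    """
    scaled = [score / temperature for score in logits]
    peak = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - peak) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical next-token candidates and scores, for illustration only.
tokens = ["the", "a", "luminous", "quixotic"]
logits = [4.0, 3.0, 1.0, 0.5]

low = softmax_with_temperature(logits, 0.2)   # near-greedy: "the" dominates
high = softmax_with_temperature(logits, 2.0)  # flatter: rare words become viable

for token, p_low, p_high in zip(tokens, low, high):
    print(f"{token:>10}  T=0.2: {p_low:.3f}  T=2.0: {p_high:.3f}")
```

At low temperature the most likely word wins almost every time (the laser-like path); at high temperature less probable words like “luminous” get a real chance of being sampled (the wider cone).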
The video also connects this to something practical: AI detection. You’ll see how simple prompt engineering, such as asking for more literary or expressive wording, can push AI tools to choose less predictable tokens and sharply reduce detection rates. In one study, adding a single phrase boosted creativity so much that the AI detection rate dropped from 70% to just 3.3%.
So, if you’ve ever wondered why AI sometimes sounds robotic or how people tweak prompts to make AI output feel more human, this video is for you!
Break that writer’s block! Check out Paperpal and simplify your academic writing with AI assistance.

