Beyond Quantification: Navigating Uncertainty in Professional AI Systems
Paper Details
Published: 2025/11/08
Journal: RSS: Data Science and Artificial Intelligence
Volume: Volume 1, Issue 1
Abstract
The growing integration of large language models across professional domains is transforming how experts make critical decisions in healthcare, education, and law. While significant research effort focuses on making these systems communicate their outputs with probabilistic measures of reliability, many consequential forms of uncertainty in professional contexts resist such quantification. A physician weighing whether to document possible domestic abuse, a teacher assessing cultural sensitivity, or a mathematician distinguishing procedural from conceptual understanding all face forms of uncertainty that cannot be reduced to percentages. This paper argues for moving beyond simple quantification toward richer expressions of uncertainty that are essential for beneficial AI integration. We propose participatory refinement processes through which professional communities collectively shape how different forms of uncertainty are communicated. Our approach acknowledges that uncertainty expression is a form of professional sense-making that requires collective development rather than algorithmic optimization.
Authors
Jess Montgomery
University of Cambridge
Executive Director, Accelerate Science
Neil D. Lawrence
University of Cambridge
The DeepMind Professor of Machine Learning
Diana Robinson
University of Cambridge
Research Assistant
Carl Henrik Ek
University of Cambridge
Senior Lecturer
Sylvie Delacroix
Umang Bhatt
Jacopo Domenicucci
Gaël Varoquaux
Vincent Fortuin
Yulan He
Tom Diethe
Neill Campbell
Mennatallah El-Assady
Søren Hauberg