When an AI model makes a prediction, how do we know whether to trust it? This lecture is all about measuring and understanding uncertainty — the idea that sometimes a model's answer might be wrong, and we'd like to know when that's most likely to happen.
The Big Question
Imagine a deep neural network looks at a photo of a lion and says "giraffe." The lecture starts from a practical question: for any single prediction, can we figure out how confident the model really is? This isn't about overall accuracy on a test set — it's about whether we should trust *this particular* answer.
Why Uncertainty Matters
The lecture covers three main reasons we care about uncertainty:
1. Safety and reliability — In high-stakes settings (think self-driving cars or medical imaging), we might want the model to say "I'm not sure" rather than give a bad answer. This is called selective prediction or classification with a reject option.
2. Helping other systems make decisions — If a model can output probabilities rather than just a single answer, downstream systems can weigh costs and risks. For example, in personalized medicine, the cost of a wrong diagnosis depends on the patient.
3. Improving the model — Understanding where and why a model is uncertain helps us decide what to fix: collect more data? Add new features? Change the model architecture?
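The "reject option" from point 1 can be sketched as a simple confidence threshold on the model's output probabilities. This is a hypothetical illustration, not code from the lecture — the function name, the threshold value, and the toy probability vectors are all assumptions:

```python
import numpy as np

def predict_with_reject(probs, threshold=0.9):
    """Return the predicted class index, or None (abstain) when the
    model's top probability falls below the confidence threshold."""
    top_class = int(np.argmax(probs))
    if probs[top_class] < threshold:
        return None  # "I'm not sure" -- defer to a human or a fallback system
    return top_class

# A confident prediction is returned; an uncertain one is rejected.
confident = np.array([0.02, 0.95, 0.03])   # one class clearly dominates
uncertain = np.array([0.40, 0.35, 0.25])   # no class dominates

print(predict_with_reject(confident))  # 1
print(predict_with_reject(uncertain))  # None
```

In practice the threshold would be tuned on held-out data to trade off coverage (how often the model answers) against accuracy on the cases it does answer.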
Dr. Thomas G. Dietterich is a Distinguished Professor Emeritus and a pioneering figure in machine learning, recognized for foundational contributions to ensemble methods, hierarchical reinforcement learning, and AI robustness. He has advanced the field through influential research on error-correcting output coding, the multiple-instance problem, and the integration of regression trees into probabilistic graphical models.
Dietterich has held prominent leadership roles, including president of the Association for the Advancement of Artificial Intelligence and founding president of the International Machine Learning Society. He has received major awards for his service and continues to contribute as a journal editor, program chair, and advisor to research organizations.
Session Summary
Key takeaways, concepts, and references from this session — compiled by our team to help you revisit and digest the ideas.