PyData Amsterdam 2024

Uncertainty quantification: How much can you trust your machine learning model?
09-20, 14:10–14:45 (Europe/Amsterdam), Rembrandt

Uncertainty quantification in machine learning is crucial for making robust decisions, enhancing model trustworthiness, and assessing risk. By quantifying and understanding uncertainty, machine learning practitioners can build more reliable and trustworthy AI systems.

Imagine a machine learning model that predicts whether a given image contains a cat. Traditional machine learning approaches return a binary prediction (cat or not cat) for each image, but they tell you nothing about how confident the model is in each prediction.

Conformal prediction (CP) is a machine learning framework for uncertainty quantification that adds a layer of confidence estimation to model predictions. Instead of a single answer, it returns a set of plausible outcomes (a prediction set) together with a measure of confidence. These prediction sets come with a coverage guarantee: the true outcome is contained in the set at least a specified percentage of the time. Importantly, conformal prediction is agnostic to the underlying machine learning model and makes no assumptions about the data distribution; in other words, it is model-agnostic and distribution-free.
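
To make this concrete, here is a minimal sketch of split conformal prediction for a classifier, one common CP variant. It is an illustration only: the synthetic dataset, the logistic regression model, and the 90% coverage level are assumptions made for the example, not material from the talk itself.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Toy binary problem standing in for "cat vs. not cat".
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_train, X_cal, y_train, y_cal = train_test_split(
    X, y, test_size=0.3, random_state=0
)

# Any probabilistic classifier works here: CP is model-agnostic.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Nonconformity score on the held-out calibration set:
# 1 - predicted probability of the true class.
cal_probs = model.predict_proba(X_cal)
scores = 1.0 - cal_probs[np.arange(len(y_cal)), y_cal]

# Threshold chosen so the true class is covered at least 90% of the time
# (alpha = 0.1), using the standard finite-sample correction.
alpha = 0.1
n = len(scores)
q_level = np.ceil((n + 1) * (1 - alpha)) / n
q_hat = np.quantile(scores, q_level, method="higher")

# Prediction set for a new image: every class that "conforms",
# i.e. whose score falls below the calibrated threshold.
# (A few calibration points stand in for new images here.)
new_probs = model.predict_proba(X_cal[:3])
prediction_sets = [np.flatnonzero(1.0 - p <= q_hat) for p in new_probs]
print(prediction_sets)  # e.g. [array([1]), array([0, 1]), array([0])]
```

Ambiguous inputs naturally yield larger prediction sets, which is exactly the uncertainty signal a bare binary prediction hides. In practice, libraries such as MAPIE package this and more refined conformal methods.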

As a result, conformal prediction offers a robust framework that empowers stakeholders to make more informed decisions, particularly in high-stakes domains such as healthcare, finance, and autonomous systems.

This talk provides a gentle introduction to conformal prediction, covering its fundamentals and practical applications in both regression and classification problems.

By the end of the talk, the audience will:
- Understand the motivation behind uncertainty quantification
- Learn why uncertainty quantification is crucial for decision-making
- Explore the conformal prediction framework
- Understand its main advantages compared to other methods
- Learn how conformal prediction works in practice

This talk is for data scientists and ML practitioners interested in uncertainty quantification and trustworthy AI. While most of the necessary background will be covered, attendees should be familiar with common ML concepts such as training data and predictive models.

He is a data scientist at KNAB with over six years of experience in the financial services industry. Prior to KNAB, he worked at ING as a data scientist across various domains, including commercial, people analytics, and financial crime and fraud.
He holds a PhD in computational physics and is passionate about research in Trustworthy AI and Generative AI.