Data & Drinks is back, and we're kicking off the 4th season of our event with two new and interesting talks! The first talk will present the outcomes of research that trained deep learning models on large labeled or unlabeled datasets to increase the accuracy of radiation treatment for cancer patients. The second will go over the limitations of explainability methods and discuss possible solutions and ideas for resolving these issues.
Our guest speakers are Hidde Fokkema, PhD candidate in Mathematical Machine Learning at the University of Amsterdam; Monika Grewal, PhD candidate at Centrum Wiskunde & Informatica (CWI), the national research institute for mathematics and computer science in the Netherlands; and Dustin van Weersel, Machine Learning Engineer at Xomnia.
The event includes dinner, drinks and a lot of networking opportunities with data professionals from Amsterdam and beyond.
Summary of the talks:
Talk #1: Limitations of Explainability methods
Many explainability methods are being developed at the moment; notable and well-known examples include LIME, SHAP, and Grad-CAM. However, the precise interpretation of these methods is not properly defined from a mathematical point of view. In this talk, we will define some of the desirable interpretations one would want to give to these methods, and we will see how some of these goals are not achievable, result in contradictions, or even lead to negative consequences. Finally, we will discuss possible solutions and ideas for resolving these issues.
Talk #2: Learning Clinically Acceptable Segmentation of Organs at Risk in Cervical Cancer Radiation Treatment from Clinically Available Annotations
Deep learning models benefit from training on a large dataset (labeled or unlabeled). Following this motivation, Monika and Dustin will present an approach for learning a deep learning model for the automatic segmentation of Organs at Risk (OARs) in cervical cancer radiation treatment from a large, clinically available dataset of Computed Tomography (CT) scans containing data inhomogeneity, label noise, and missing annotations. Their experimental results show that learning from a large dataset with this approach yields a significant improvement in test performance despite the missing annotations in the data.
About the speakers:
Hidde Fokkema: Hidde is a PhD student in Mathematical Machine Learning at the University of Amsterdam. He researches explainability methods from a formal mathematical point of view, trying to derive formal guarantees of what can and cannot work for explainability methods. Before starting his PhD, he worked as a data analyst at a consultancy firm while completing his Master's degree in Mathematics.
Monika Grewal: Monika completed her master's degree in Electronics & Communication Engineering at JIIT, India. She has a background in neuroimaging research and digital signal and image processing. Before starting her PhD at CWI, she worked on projects aimed at detecting pathologies in brain CT scans using deep learning techniques.
Dustin van Weersel: For more than 5 years, Dustin has worked as a data scientist and machine learning engineer at the Amsterdam-based AI consultancy Xomnia. He has delivered a wide array of projects for industry-leading clients in the Netherlands, including TIP Group, Rabobank, and Centrum Wiskunde & Informatica, where he served as a data scientist on the Multi-Objective Deformable Image Registration (MODIR) project.
Program:
- 17:30-18:30: Walk-ins & dinner
- 18:30-18:35: Introduction to Xomnia
- 18:35-19:05: Limitations of Explainability methods by Hidde Fokkema
- 19:05-19:15: Break
- 19:15-19:45: Learning Clinically Acceptable Segmentation of Organs at Risk in Cervical Cancer Radiation Treatment from Clinically Available Annotations by Monika Grewal and Dustin van Weersel
- 19:45-20:30: Drinks & networking (borrel)