First Steps to Interpretable Machine Learning | Natalie Beyer
Learn how to bring interpretability to your machine learning models with SHAP, a game-theory-based explanation package, and see why model interpretation matters in high-stakes applications, illustrated with real-world examples.
- Interpretability is crucial in machine learning. Without understanding the predictions made by a model, we cannot trust the outcomes.
- SHAP (SHapley Additive exPlanations): a package for explaining the predictions of machine learning models. It is based on Shapley values from Lloyd Shapley's 1953 game theory paper (see the usage sketch after this list).
- Game theory explanation: SHAP values show how much each feature contributed to a prediction, just like a game where each player contributes to the overall profit.
- Model interpretation is important: understanding what the model is doing, especially in high-stakes applications, helps prevent biased or inaccurate predictions.
- Real-world examples: the talk includes examples from a welfare system automation project, a Kickstarter campaign data set, and a product design company.
- Feature importance: understanding which features contribute most to the predictions helps identify problematic areas in the model (see the feature-importance part of the sketch after this list).
- Model evaluation: evaluate models not only by accuracy but also by understanding how they make predictions.
- Interpretability in deep learning models: SHAP can also be used to understand how deep learning models make predictions, even on complex data such as images (see the image-explainer sketch after this list).
- Collaboration: data scientists and machine learning engineers must work together to design and interpret models correctly.
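
Below is a minimal sketch of the SHAP workflow described above for a tabular model. The dataset (scikit-learn's diabetes data) and the random forest are stand-ins chosen for illustration, not the examples from the talk.

```python
# Minimal sketch: computing SHAP values and feature importance for a tabular
# model. The dataset and model are illustrative stand-ins, not the talk's examples.
import numpy as np
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# A small tabular regression problem; any dataset/model pairing works the same way.
X, y = load_diabetes(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)

# TreeExplainer computes Shapley values efficiently for tree ensembles;
# shap.Explainer picks a suitable algorithm for other model types.
explainer = shap.TreeExplainer(model)
explanation = explainer(X_test)

# Local explanation: each feature's contribution (positive or negative) to moving
# one prediction away from the average prediction -- the "player's share of the
# profit" in the game-theory analogy.
print(dict(zip(X_test.columns, explanation.values[0].round(2))))

# Global feature importance: the mean absolute SHAP value per feature.
importance = np.abs(explanation.values).mean(axis=0)
for name, score in sorted(zip(X_test.columns, importance), key=lambda t: -t[1]):
    print(f"{name}: {score:.2f}")

# The same information as plots: a beeswarm of per-row contributions and a
# bar chart of mean |SHAP| per feature.
shap.plots.beeswarm(explanation)
shap.plots.bar(explanation)
```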
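
For image models, shap provides deep-learning explainers such as DeepExplainer. The sketch below uses a tiny Keras CNN on MNIST purely as an assumed example (not the talk's model); whether it runs as-is depends on the installed shap and TensorFlow versions.

```python
# Sketch: explaining an image classifier with shap.DeepExplainer. The tiny Keras
# CNN and MNIST setup are illustrative only; output shapes and compatibility vary
# with the installed shap/TensorFlow versions.
import numpy as np
import shap
import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train[..., np.newaxis].astype("float32") / 255.0
x_test = x_test[..., np.newaxis].astype("float32") / 255.0

# A deliberately small CNN, trained briefly just so the explanations are non-trivial.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28, 1)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.fit(x_train[:5000], y_train[:5000], epochs=1, verbose=0)

# DeepExplainer estimates SHAP values for deep networks; a background sample
# stands in for the "average" input that contributions are measured against.
background = x_train[np.random.choice(len(x_train), 100, replace=False)]
explainer = shap.DeepExplainer(model, background)
shap_values = explainer.shap_values(x_test[:4])

# Per-pixel contributions pushing the prediction toward (red) or away from
# (blue) each class.
shap.image_plot(shap_values, x_test[:4])
```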