Ade Idowu - Hands-on Intro to Developing Explainability for Recommendation Systems
Discover the importance of explainability in recommendation systems, and learn hands-on methods to develop and evaluate explainable recommendations, including association rules, matrix factorization, and post-hoc explanation methods like LIME.
- The goal of recommendation systems is to predict unknown user-item ratings, and explainability is critical for understanding the intuition behind a model's recommendations.
- Association rules can be used to explain recommendations by identifying patterns in user behavior.
- Matrix factorization decomposes a user-item interaction matrix into lower-dimensional latent factors, which can make recommendations more interpretable.
- Model-based explainability couples the recommendation engine with an explainability model, providing insights into the predictions made by the model.
- Post-hoc explanation methods, such as LIME, can explain black-box models by generating explanations from perturbations around a sample input.
- Explainability is crucial in recommendation systems, and it’s no longer acceptable to build models that are opaque.
- There are various metrics to evaluate the fidelity of explanations, including precision, recall, and relevance.
- NLP-based models can generate text-based explanations, which can be visualized as word clouds or sentences.
- Regularization is essential to ensure that the model generalizes and does not overfit.
- Matrix factorization is a fundamental technique in collaborative filtering, and it’s used to predict user ratings.
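The matrix factorization points above can be sketched in code. This is a minimal illustration, not material from the talk: the rating matrix, hyperparameters, and SGD update rule are all assumptions chosen for a small runnable example. The idea is to factor a partially observed rating matrix `R` into user factors `P` and item factors `Q`, fitting only the observed entries and using L2 regularization to avoid overfitting, then reading predictions for the unknown cells from `P @ Q.T`:

```python
import numpy as np

# Tiny illustrative user-item rating matrix (0 marks an unknown rating).
R = np.array([
    [5, 3, 0, 1],
    [4, 0, 0, 1],
    [1, 1, 0, 5],
    [0, 1, 5, 4],
], dtype=float)

def matrix_factorization(R, k=2, steps=2000, lr=0.01, reg=0.02, seed=0):
    """Approximate R ~ P @ Q.T over observed entries via SGD with L2 regularization."""
    rng = np.random.default_rng(seed)
    n_users, n_items = R.shape
    P = rng.random((n_users, k))   # latent user factors
    Q = rng.random((n_items, k))   # latent item factors
    for _ in range(steps):
        for u in range(n_users):
            for i in range(n_items):
                if R[u, i] > 0:    # fit only the observed ratings
                    err = R[u, i] - P[u] @ Q[i]
                    P[u] += lr * (err * Q[i] - reg * P[u])
                    Q[i] += lr * (err * P[u] - reg * Q[i])
    return P, Q

P, Q = matrix_factorization(R)
R_hat = P @ Q.T   # predicted ratings, including the previously unknown cells
```

The learned factors themselves are what makes the model partially interpretable: each latent dimension can often be read as a taste/attribute axis, and a prediction is just the dot product of a user's and an item's positions along those axes.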
- The first demo walks through a movie recommendation system built on user-item interaction data.
- The second demo uses matrix factorization to compute recommendations.
- The third demo uses LIME to explain a black-box model by generating explanations from perturbations around a sample input.
- The fourth demo uses visual-style explanations to give insight into the recommendations the model makes.
- The fifth demo uses association rules to explain recommendations.
- The sixth demo uses a knowledge-based recommendation system to provide explanations grounded in external data.
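The perturbation idea behind LIME, mentioned in the takeaways and the third demo, can be sketched without the `lime` library itself. Everything here is an illustrative assumption: the black-box scoring function, its feature names, and the sampling parameters are invented for the example. The core mechanism is real, though: sample points around the input, query the black box on each, and fit a simple linear surrogate whose coefficients act as local feature importances:

```python
import numpy as np

# Hypothetical black-box recommender score: we can only query it, not inspect it.
def black_box_score(x):
    # x = [user_avg_rating, item_popularity, genre_match] (invented features)
    return 0.7 * x[2] + 0.2 * x[1] + 0.1 * x[0]

def explain_locally(f, x, n_samples=500, scale=0.1, seed=0):
    """Fit a local linear surrogate around x by perturbing its features."""
    rng = np.random.default_rng(seed)
    X = x + rng.normal(0.0, scale, size=(n_samples, len(x)))  # perturbed samples
    y = np.array([f(row) for row in X])                       # black-box queries
    # Least-squares linear fit; coefficients approximate local feature importance.
    A = np.column_stack([X, np.ones(n_samples)])
    coefs, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coefs[:-1]

x = np.array([4.2, 0.8, 1.0])
weights = explain_locally(black_box_score, x)
# Because this toy black box happens to be linear, the surrogate recovers
# its coefficients; for a real model the weights are only locally faithful.
```

Real LIME additionally weights samples by proximity to `x` and uses sparse regression to keep explanations short; this sketch omits both for clarity.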