June Edition: Get into SHAP | by TDS Editors | Jun, 2022
The ins and outs of a powerful explainable-AI approach
The power and size of machine learning models have grown to new heights in recent years. With greater complexity comes the need for more accountability and transparency—both for the practitioners who build these models and for those who interpret their results.
Within the wide field of explainable AI, one approach that has shown great promise (and drawn a lot of attention) is SHAP (from “SHapley Additive exPlanations”); as its creators put it, it is a “game theoretic approach to explain the output of any machine learning model.” We’ve published some excellent work on SHAP in recent weeks, and for June’s Monthly Edition we’ve decided to share several standout articles covering it from multiple angles, from the highly theoretical to the extremely hands-on.
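The “game theoretic” part refers to Shapley values: each feature is treated as a player in a cooperative game, and receives credit equal to its average marginal contribution across all orders in which features could be added. Here is a minimal brute-force sketch of that idea (the payoff function and feature names are hypothetical, invented for illustration; the shap library itself uses much more efficient approximations):

```python
from itertools import permutations

# Hypothetical payoff for a coalition of features: 'a' alone adds 10,
# 'b' alone adds 20, and having both adds a 5-point interaction bonus.
def coalition_value(features):
    v = 0
    if "a" in features:
        v += 10
    if "b" in features:
        v += 20
    if "a" in features and "b" in features:
        v += 5
    return v

def shapley_values(players, value):
    # Average each player's marginal contribution over all orderings
    phi = {p: 0.0 for p in players}
    orderings = list(permutations(players))
    for order in orderings:
        seen = set()
        for p in order:
            phi[p] += value(seen | {p}) - value(seen)
            seen.add(p)
    return {p: phi[p] / len(orderings) for p in players}

print(shapley_values(["a", "b"], coalition_value))
# → {'a': 12.5, 'b': 22.5}
```

The interaction bonus is split evenly between the two features, and the Shapley values sum exactly to the payoff of the full coalition (35): that is the “additive” property that makes SHAP explanations easy to read off a single prediction.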
Happy reading, and thank you for your support of our authors’ work,
TDS Editors

Highlights
- Explainable AI: Unfold the Blackbox
  For a clear and thorough introduction to XAI in general and SHAP and Shapley values in particular, look no further than Charu Makhijani’s new post. (May 2022, 10 minutes)
- The Art of Explaining Predictions
  Conor O’Sullivan stresses the importance of human-friendly explanations, and demonstrates SHAP’s power to produce them. (May 2022, 11 minutes)
- SHAP’s Partition Explainer for Language Models
  How do Shapley values, Owen values, and the partition explainer relate to each other? For her debut TDS post, Lilo Wagner looks under the hood of the SHAP library. (May 2022, 9 minutes)
- Introduction to SHAP Values and their Application in Machine Learning
  For a full, patient walkthrough of the math behind SHAP and how it works out in real-life ML contexts, here’s Reza Bagheri’s definitive guide. (March 2022, 81 minutes)
- SHAP: Explain Any Machine Learning Model in Python
  For a quicker, hands-on approach to SHAP, you can always revisit Khuyen Tran’s popular tutorial. (September 2021, 9 minutes)
- Explaining Measures of Fairness
  Finally, SHAP creator Scott Lundberg has written extensively about the library here on TDS. In this perennial favorite from our archives, Scott brings together two crucial concepts: explainability and fairness. (March 2020, 11 minutes)
Original Features
From author Q&As to podcast episodes, our team puts together original features for your reading and listening pleasure — here are several recent highlights:
Popular Posts
If you’d like to dive into some of the articles and conversations that generated the most buzz last month, here are some of the most-read posts from May.
We’ve been privileged to share work by excellent new TDS authors in the past month. Please join us in welcoming Margo Hatcher, Orjuwan Zaafarani, Oskar Niemenoja, Milton Simba Kambarami, Maxime Cupani, Marie-Anne Mawhin, Pavle Marinkovic, Divyanshu Raj, Dhruv Gangwani, Karat Sidhu, Chaoyu Yang, Alex Molas, Chris Walsh, Jarosław Pawłowski, Matthew Leyburn, Ana Isabel, Avi Chawla, Erik Balodis, Lilo Wagner, Sai Pavan Yekula, Daniel Reedstone, Jacob Pieniazek, Marie Truong, Charlotte P., Zihan Zhang, Cuong Phan, Rohan Agarwal, Jens Fuglsang Ringsholm, Ilya Yalchyk, Benton Tripp, Mattbbiggs, Sinan Gültekin, Devesh Rajadhyax, Ethan Crouse, Arnaud Capitaine, Kevin Berlemont, PhD, Sambarger, Ella Wilson, Sadik Bakiu, Alexander Kovalenko, Andrii Shchur, Joleen Bothma, Malak Sadek, Sriram Kumar, Pan Cretan, Eldar Jahijagic, among others. If you’d like to see your name here in a future monthly edition, we’d love to hear from you.