
June Edition: Get into SHAP | by TDS Editors | Jun, 2022



The ins and outs of a powerful explainable-AI approach

Photo by Héctor J. Rivas on Unsplash

The power and size of machine learning models have grown to new heights in recent years. With greater complexity comes the need for more accountability and transparency—both for the practitioners who build these models and for those who interpret their results.

Within the wide field of explainable AI, one approach that has shown great promise (and drawn a lot of attention) is SHAP (short for “SHapley Additive exPlanations”); as its creators put it, it is a “game theoretic approach to explain the output of any machine learning model.” We’ve published some excellent work on SHAP in recent weeks, and for June’s Monthly Edition we’ve decided to share several standout articles covering it from multiple angles, from the highly theoretical to the extremely hands-on.
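To give a flavor of what that “game theoretic approach” means in practice: a Shapley value averages a feature’s marginal contribution to the prediction over every possible coalition of the other features. The sketch below is not the SHAP library itself but a minimal from-scratch illustration, using a hypothetical linear model and a single baseline point to stand in for “missing” features:

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley values for one instance x; features absent from a
    coalition are filled in from a single baseline point."""
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            for S in combinations(others, size):
                # Shapley weight for a coalition of this size
                weight = factorial(size) * factorial(n - size - 1) / factorial(n)
                with_i = [x[j] if (j in S or j == i) else baseline[j] for j in range(n)]
                without_i = [x[j] if j in S else baseline[j] for j in range(n)]
                phi[i] += weight * (f(with_i) - f(without_i))
    return phi

# Hypothetical toy model: f(v) = 3*v0 + 2*v1 + v2.
f = lambda v: 3 * v[0] + 2 * v[1] + v[2]
phi = shapley_values(f, x=[1.0, 1.0, 1.0], baseline=[0.0, 0.0, 0.0])
print(phi)  # for a linear model, phi_i = coef_i * (x_i - baseline_i)
```

Note the additivity (“local accuracy”) property: the values sum to f(x) - f(baseline). Exact enumeration like this is exponential in the number of features; the SHAP library’s contribution is computing or approximating these values efficiently for arbitrary models.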

Happy reading, and thank you for your support of our authors’ work,

TDS Editors

TDS Editors Highlights

  • Explainable AI: Unfold the Blackbox
    For a clear and thorough introduction to XAI in general and SHAP and Shapley values in particular, look no further than Charu Makhijani’s new post. (May 2022, 10 minutes)
  • The Art of Explaining Predictions
    Conor O’Sullivan stresses the importance of human-friendly explanations, and demonstrates SHAP’s power to produce them. (May 2022, 11 minutes)
  • SHAP’s Partition Explainer for Language Models
    How do Shapley values, Owen values, and the partition explainer relate to each other? For her debut TDS post, Lilo Wagner looks under the hood of the SHAP library. (May 2022, 9 minutes)
  • Introduction to SHAP Values and their Application in Machine Learning
    For a full, patient walkthrough of the math behind SHAP and how it works out in real-life ML contexts, here’s Reza Bagheri’s definitive guide. (March 2022, 81 minutes)
  • SHAP: Explain Any Machine Learning Model in Python
    For a quicker, hands-on approach to SHAP, you can always revisit Khuyen Tran’s popular tutorial. (September 2021, 9 minutes)
  • Explaining Measures of Fairness
    Finally, SHAP creator Scott Lundberg has written extensively about the library here on TDS. In this perennial favorite from our archives, Scott brings together two crucial concepts: explainability and fairness. (March 2020, 11 minutes)

Original Features

From author Q&As to podcast episodes, our team puts together original features for your reading and listening pleasure — here are several recent highlights:

Popular Posts

If you’d like to dive into some of the articles and conversations that generated the most buzz last month, here are some of the most-read posts from May.


