7 Pitfalls to avoid while using Model-Agnostic Interpretation Techniques | by Satyam Kumar | Jun, 2022

General pitfalls of interpretable machine learning

Image by Mohamed Hassan from Pixabay

Interpretable machine learning techniques are becoming more popular in the data science community as more and more complex machine learning algorithms, which are not easily interpretable, are adopted.

Model-agnostic interpretation techniques make no assumptions about the underlying model, yet they can interpret it and provide insightful explanations of its behavior. Some of the popular model-agnostic interpretation techniques for machine learning models are partial dependence plots (PDP), permutation feature importance (PFI), LIME, and SHAP. These techniques can lead to wrong insights or conclusions if applied incorrectly. In this article, we will discuss 7 common pitfalls to avoid while using an interpretation technique.

The article is inspired by an August 2021 paper by Christoph Molnar and his team [1]. I have summarized it in easy-to-understand terms.

The below-mentioned 7 pitfalls describe where a data scientist can go wrong when using these interpretation techniques.

1) Assuming One-Size-Fits-All Interpretability:

Different kinds of interpretation techniques serve different purposes. The data scientist first needs to decide what kind of interpretability is required, based on the business constraints.

Solution: No single model interpretation technique fits all use cases or models. SHAP is preferred for computing feature importance with respect to the model's predictions, as it computes Shapley values for each feature of every instance, whereas Permutation Feature Importance (PFI) is preferred for computing feature importance with respect to model generalization.

(Source), Selection of popular model-agnostic interpretation techniques
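As a minimal, illustrative sketch (the diabetes dataset, the random forest, and the use of the shap and scikit-learn packages are my own choices, not from the paper), SHAP attributes each individual prediction to the features of that instance, while PFI measures how much held-out performance drops when a feature is permuted:

# Minimal sketch: prediction-level importance with SHAP vs.
# generalization-level importance with permutation feature importance (PFI).
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_diabetes(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_train, y_train)

# SHAP: Shapley values per feature per instance -> importance w.r.t. predictions
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)      # shape: (n_instances, n_features)

# PFI: drop in test-set score when a feature is permuted -> importance w.r.t. generalization
pfi = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
print(pfi.importances_mean)

The two outputs answer different questions: SHAP explains individual predictions, PFI summarizes which features the model relies on to generalize.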

2) Bad Model Generalization:

A model interpretation technique generates insights under the assumption that the model fits the data well. A model interpretation is only as good as the underlying model, and underfitted or overfitted models can result in misleading interpretation insights.

Solution: The data scientist should monitor, track, and debug the underlying model, and tune it to obtain a robust model that generalizes well.

(Source), Bias Variance tradeoff
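As a rough sketch (the synthetic data and the decision tree below are purely illustrative), one simple sanity check is to compare training and test performance before interpreting anything; a large gap signals overfitting, while poor scores on both signal underfitting:

# Minimal sketch: detect over-/underfitting before trusting any interpretation.
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor

X, y = make_regression(n_samples=500, n_features=10, noise=10.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for depth in (None, 4):               # None = fully grown tree, which tends to overfit
    tree = DecisionTreeRegressor(max_depth=depth, random_state=0).fit(X_train, y_train)
    gap = tree.score(X_train, y_train) - tree.score(X_test, y_test)
    print(f"max_depth={depth}: train-test R^2 gap = {gap:.2f}")
# Interpret (PDP/SHAP/PFI) only the model whose gap and test score look reasonable.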

3) Unnecessary use of Complex Models:

Sometimes simple and complex models have similar performance on the desired metrics. For model interpretation, a simple model should always be preferred over a complex model with comparable performance.

Solution: Models such as linear regression and decision trees are much simpler to interpret than kernel SVMs or complex neural network models.

(Source), Top: Performance estimates on training and test data for a linear regression model (underfitting), a random forest (overfitting), and a support vector machine with radial basis kernel (good fit), Bottom: PDPs for the data-generating process (DGP) — which is the ground truth — and for the three models
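A minimal sketch of this comparison (the ridge regression, the RBF-SVM, and the diabetes dataset are illustrative assumptions, not from the paper): cross-validate both models and, if their scores are close, interpret the simpler one directly:

# Minimal sketch: prefer the simpler model when performance is comparable.
from sklearn.datasets import load_diabetes
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

X, y = load_diabetes(return_X_y=True)
simple = make_pipeline(StandardScaler(), Ridge(alpha=1.0))
complex_ = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0))

for name, estimator in [("ridge", simple), ("rbf-SVM", complex_)]:
    scores = cross_val_score(estimator, X, y, cv=5, scoring="r2")
    print(f"{name}: R^2 = {scores.mean():.2f} +/- {scores.std():.2f}")
# If the scores are close, interpret the ridge model: its coefficients are directly readable.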

4) Ignoring Feature Dependence:

Model-agnostic interpretation techniques such as PFI, PDP, LIME, or SHAP can be misleading for machine learning models trained on datasets with multicollinearity, because permuting or perturbing one feature independently of its correlated features forces the model to extrapolate to unrealistic data points.

Solution: The data scientist should use statistical tests and visualizations to check for the presence of correlation in the dataset, and treat it prior to modeling and interpretation.

(Source), Interpretation with extrapolation
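As a small illustrative sketch (using pandas and the diabetes dataset as an example, with an arbitrary 0.7 threshold), a quick Pearson-correlation screen can flag feature pairs that would make permutation-based interpretations unreliable:

# Minimal sketch: flag strongly correlated feature pairs before interpreting.
import numpy as np
import pandas as pd
from sklearn.datasets import load_diabetes

X = load_diabetes(as_frame=True).data
corr = X.corr().abs()

# Keep only the upper triangle so each pair is reported once
upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
high_corr = upper.stack().loc[lambda s: s > 0.7]
print(high_corr)   # consider dropping/merging these pairs, or use methods that respect dependence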

5) Ignoring Model and Approximation Uncertainty:

Many interpretation techniques provide only a mean estimate and do not quantify its uncertainty. Ignoring the sources of uncertainty can result in interpreting noise in the data as signal.

Solution: The data scientist should repeatedly compute the interpretation estimates on different bootstrap samples to quantify their uncertainty.

(Source), Partial dependence plot for feature 'x1'
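A minimal sketch of this idea (the dataset, model, and the choice of 20 bootstrap rounds are illustrative): re-estimate permutation importance on resampled test sets and report a mean and spread instead of a single number:

# Minimal sketch: quantify uncertainty by repeating the computation on bootstrap samples.
import numpy as np
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split
from sklearn.utils import resample

X, y = load_diabetes(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(random_state=0).fit(X_train, y_train)

boot_importances = []
for seed in range(20):                                   # 20 bootstrap rounds
    Xb, yb = resample(X_test, y_test, random_state=seed)
    r = permutation_importance(model, Xb, yb, n_repeats=5, random_state=seed)
    boot_importances.append(r.importances_mean)

mean = np.mean(boot_importances, axis=0)
std = np.std(boot_importances, axis=0)
for name, m, s in zip(X.columns, mean, std):
    print(f"{name}: {m:.3f} +/- {s:.3f}")   # report an interval, not just a point estimate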

6) Failure to Scale to High-Dimensional Settings:

Interpreting results for high-dimensional data is quite difficult for the human mind. Applying model interpretation to a high-dimensional dataset may lead to an overwhelming, high-dimensional output. Computing model interpretations for high-dimensional data is also computationally expensive.

Solution: It is advised to use dimensionality reduction techniques or to group similar features during the feature engineering pipeline.
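As an illustrative sketch (the synthetic 200-feature dataset and the PCA-to-10-components choice are assumptions for demonstration only), reducing the dimensionality first keeps the interpretation output small enough to inspect:

# Minimal sketch: interpret a model on a reduced feature space instead of 200 raw features.
from sklearn.datasets import make_regression
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=1000, n_features=200, n_informative=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Compress 200 raw features into 10 components, then interpret the model on those components
pca = PCA(n_components=10).fit(X_train)
Z_train, Z_test = pca.transform(X_train), pca.transform(X_test)
model = RandomForestRegressor(random_state=0).fit(Z_train, y_train)

pfi = permutation_importance(model, Z_test, y_test, n_repeats=5, random_state=0)
print(pfi.importances_mean)   # 10 importance values to inspect instead of 200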

Please refer to my previous article on 8 Dimensionality Reduction Techniques.

7) Unjustified Causal Interpretation:

Most practitioners are interested in causal insights into the data-generating process, which model-agnostic interpretation techniques fail to provide. Standard supervised ML models are designed to merely exploit associations, not to model causal relationships.

Solution: Data science practitioners must carefully assess whether sufficient assumptions can be made about the underlying data-generating process, the learned model, and the interpretation technique. Only if these assumptions are met can a causal interpretation be justified.

(Source), Plot for number of significant features vs number of features
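As a synthetic illustration (the confounder setup below is invented purely to make the point, not taken from the paper), a feature that is associated with the target only through a hidden confounder can still receive a high importance score, which must not be read causally:

# Minimal sketch: feature importance reflects association, not causation.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
n = 2000
confounder = rng.normal(size=n)                       # unobserved common cause
x_causal = rng.normal(size=n)                         # feature with a real causal effect
x_spurious = confounder + 0.1 * rng.normal(size=n)    # associated with y only via the confounder
y = 2 * x_causal + 3 * confounder + rng.normal(size=n)

X = pd.DataFrame({"x_causal": x_causal, "x_spurious": x_spurious})
model = RandomForestRegressor(random_state=0).fit(X, y)
pfi = permutation_importance(model, X, y, n_repeats=10, random_state=0)
print(dict(zip(X.columns, pfi.importances_mean.round(2))))
# x_spurious scores high even though intervening on it would not change y at all.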

In this article, we have discussed 7 pitfalls of model-agnostic interpretation techniques, e.g. bad model generalization, dependent features, interactions between features, and unjustified causal interpretations, that need to be kept in mind prior to generating model interpretations.

[1] Christoph Molnar et al., General Pitfalls of Model-Agnostic Interpretation Methods for Machine Learning Models (17 August 2021): https://arxiv.org/pdf/2007.04131.pdf

Thank You for Reading

