Package smile.feature.importance

Feature importance.

Global explanations try to describe the model as a whole, in terms of which variables/features influenced the general model the most. Two common methods for such an overall explanation are permutation feature importance and partial dependence plots. Permutation feature importance measures the increase in the prediction error of the model after permuting a feature's values, which breaks the relationship between the feature and the true outcome. The partial dependence plot (PDP) shows the marginal effect that one or two features have on the predicted outcome of a machine learning model. A partial dependence plot can show whether the relationship between the target and a feature is linear, monotonic, or more complex.
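For illustration, the following is a minimal, self-contained sketch of permutation feature importance for a regression model. The Model interface, the importance method, and the mean-squared-error metric are placeholder names for this example only; they are not part of the smile.feature.importance API.

    import java.util.Random;

    // A sketch of permutation feature importance for an arbitrary regression model.
    // The Model interface and method names are placeholders for illustration,
    // not part of smile.feature.importance.
    public class PermutationImportance {
        /** A hypothetical fitted model that predicts a value from a feature vector. */
        public interface Model {
            double predict(double[] x);
        }

        /** Returns the increase in mean squared error after permuting each feature. */
        public static double[] importance(Model model, double[][] x, double[] y, long seed) {
            int n = x.length;
            int p = x[0].length;
            double baseline = mse(model, x, y);
            double[] importance = new double[p];
            Random random = new Random(seed);

            for (int j = 0; j < p; j++) {
                // Copy the data and shuffle column j to break its link with the outcome.
                double[][] permuted = new double[n][];
                for (int i = 0; i < n; i++) permuted[i] = x[i].clone();
                for (int i = n - 1; i > 0; i--) {
                    int k = random.nextInt(i + 1);
                    double tmp = permuted[i][j];
                    permuted[i][j] = permuted[k][j];
                    permuted[k][j] = tmp;
                }
                importance[j] = mse(model, permuted, y) - baseline;
            }
            return importance;
        }

        private static double mse(Model model, double[][] x, double[] y) {
            double sum = 0.0;
            for (int i = 0; i < x.length; i++) {
                double e = model.predict(x[i]) - y[i];
                sum += e * e;
            }
            return sum / x.length;
        }
    }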

Local explanations try to identify how the different input variables/features influenced a specific prediction/output from the model, and are often referred to as individual prediction explanation methods. Such explanations are particularly useful for complex models that behave rather differently for different feature combinations, so that the global explanation is not representative of the local behavior.

Local explanation methods may further be divided into two categories: model-specific and model-agnostic (general) explanation methods. The model-agnostic methods usually try to explain individual predictions by learning simple, interpretable explanations of the model specifically for a given prediction. Three examples are Explanation Vectors, LIME, and Shapley values.
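As a concrete illustration of a model-agnostic method, the sketch below estimates the Shapley value of one feature for one prediction by Monte Carlo sampling over random feature orderings, using background samples drawn from the training data. The Model interface and method names are placeholders for this example, not part of this package.

    import java.util.Random;

    // A sketch of a model-agnostic Shapley value estimate for one prediction,
    // using Monte Carlo sampling over random feature orderings.
    public class SamplingShapley {
        /** A hypothetical fitted model that predicts a value from a feature vector. */
        public interface Model {
            double predict(double[] x);
        }

        /**
         * Estimates the Shapley value of feature j for the instance x, averaging
         * over random feature orderings and background samples from the training data.
         */
        public static double shapley(Model model, double[] x, double[][] background,
                                     int j, int samples, long seed) {
            Random random = new Random(seed);
            int p = x.length;
            double phi = 0.0;

            for (int s = 0; s < samples; s++) {
                double[] z = background[random.nextInt(background.length)];
                int[] order = permutation(p, random);

                // Features that precede j in the random order take their values from x;
                // the remaining features keep the background values from z. The two
                // hybrid instances differ only in feature j.
                double[] withJ = z.clone();
                double[] withoutJ = z.clone();
                for (int k : order) {
                    if (k == j) break;
                    withJ[k] = x[k];
                    withoutJ[k] = x[k];
                }
                withJ[j] = x[j];

                phi += model.predict(withJ) - model.predict(withoutJ);
            }
            return phi / samples;
        }

        /** Returns a uniformly random permutation of 0, 1, ..., p-1. */
        private static int[] permutation(int p, Random random) {
            int[] order = new int[p];
            for (int i = 0; i < p; i++) order[i] = i;
            for (int i = p - 1; i > 0; i--) {
                int k = random.nextInt(i + 1);
                int tmp = order[i];
                order[i] = order[k];
                order[k] = tmp;
            }
            return order;
        }
    }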

  • Interfaces

    Class       Description
    SHAP<T>     SHAP (SHapley Additive exPlanations) is a game theoretic approach to explain the output of any machine learning model.
    TreeSHAP    SHAP of ensemble tree methods.
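As a usage sketch, assuming that smile.classification.RandomForest implements TreeSHAP and that the SHAP interface exposes a shap method returning one contribution per feature (per class for classifiers), explaining a single prediction might look like the following. The data file, the formula, and the exact shap signature are assumptions here and should be verified against the current API.

    import smile.classification.RandomForest;
    import smile.data.DataFrame;
    import smile.data.formula.Formula;
    import smile.io.Read;

    public class ShapExample {
        public static void main(String[] args) throws Exception {
            // Hypothetical dataset and formula; replace with your own.
            DataFrame data = Read.csv("iris.csv");
            Formula formula = Formula.lhs("species");

            // RandomForest is assumed to implement TreeSHAP.
            RandomForest model = RandomForest.fit(formula, data);

            // shap(Tuple) is assumed to return one contribution per feature
            // (flattened over classes for classifiers); verify against the current API.
            double[] phi = model.shap(data.get(0));
            for (int i = 0; i < phi.length; i++) {
                System.out.printf("phi[%d] = %.4f%n", i, phi[i]);
            }
        }
    }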