How to interpret/explain machine learning predictions?

// Build a pipeline: a scikit-learn DecisionTreeRegressor whose
// predictions will be explained with the ELI5 interpret technique.
var model = SklearnPipe.make({
  name: "sklearnDecisionTreeRegressor",
  technique: SklearnTechnique.make({
    name: "tree.DecisionTreeRegressor",
    processingFunctionName: "predict",
    hyperParameters: {
      'random_state': 1
    }
  }),
  interpretTechnique: Eli5InterpretTechnique.make()
});

// Train on features X and target y, then compute per-feature
// contributions for each row of X.
var trainedModel = model.train(X, y);
var interpretOutput = trainedModel.interpret(X);

// Note: if the model is a classifier, you need to provide
//   interpretTechnique: Eli5InterpretTechnique.make({positiveClass: 1})
// and the contribution is then towards that class label.
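
Outside the platform, the eli5 package can produce the same kind of per-feature contributions directly. Here is a minimal sketch with scikit-learn and eli5, assuming toy housing data (the column names and values are made up for illustration):

import pandas as pd
import eli5
from sklearn.tree import DecisionTreeRegressor

X = pd.DataFrame({"sqft": [800, 1200, 1500, 2000],
                  "age":  [30, 12, 7, 3]})
y = [100_000, 180_000, 240_000, 350_000]

model = DecisionTreeRegressor(random_state=1).fit(X, y)

# Each feature's additive contribution to one prediction, plus the bias
# (the training-set mean at the tree's root).
explanation = eli5.explain_prediction(
    model, X.iloc[0], feature_names=list(X.columns))
print(eli5.format_as_text(explanation))

# For a classifier, eli5 takes targets=[1] to pick the class of interest,
# which is the analogous knob to positiveClass in the snippet above.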

Is this a surrogate decision tree?

There are other methods that provide more interpretability for a black-box model (a sketch of one of them follows the list):

  • Disparate impact analysis
  • Individual conditional expectation (ICE) [16]
  • Local interpretable model-agnostic explanations (LIME) [17]
  • Linear models
  • Monotonic gradient boosting machines (GBMs)
  • Partial dependence
  • Residual analysis
  • RuleFit [18]
  • Sensitivity analysis
  • Shapley feature importance [19]
  • Surrogate decision trees

To name several from h2o.ai’s interpretable AI forums.
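
On the surrogate question: a surrogate decision tree is a shallow, interpretable tree fit to a black-box model's predictions rather than to the true labels, so the tree in the first post (trained directly on y) is the model itself, not a surrogate. A minimal sketch in scikit-learn, with an assumed gradient boosting model standing in as the black box and synthetic data:

from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.tree import DecisionTreeRegressor, export_text

X, y = make_regression(n_samples=500, n_features=4, random_state=1)

# The black-box model to be explained.
black_box = GradientBoostingRegressor(random_state=1).fit(X, y)

# Surrogate: a shallow tree fit to the black box's predictions,
# not to the true labels y.
surrogate = DecisionTreeRegressor(max_depth=3, random_state=1)
surrogate.fit(X, black_box.predict(X))

# Fidelity: R^2 of the surrogate against the black box's outputs,
# i.e. how far the tree can be trusted as an explanation.
print("fidelity:", surrogate.score(X, black_box.predict(X)))
print(export_text(surrogate, feature_names=[f"x{i}" for i in range(4)]))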
