10–14 Sept 2023
Europe/Madrid timezone

Global Importance Measures for Machine Learning Model Interpretability, an Overview

12 Sept 2023, 17:50
20m
Auditorium

Speaker

Bertrand Iooss (EDF R&D)

Description

Machine learning (ML) algorithms, fitted on learning datasets, are often considered black-box models linking features (called inputs) to variables of interest (called outputs): they provide predictions that are difficult to explain or interpret. To circumvent this issue, importance measures (also called sensitivity indices) are computed to improve the interpretability of ML models by quantifying the influence of each input on the output predictions. These importance measures also provide diagnostics on the correct behavior of the ML model (by comparing them with importance measures evaluated directly on the data) and on the underlying complexity of the ML model. This communication provides a practical synthesis of post-hoc global importance measures that allow the generic global behavior of any kind of ML model to be interpreted. Particular attention is paid to the constraints inherent to the training data and the considered ML model: linear vs. nonlinear phenomenon of interest, input dimension, and strength of the statistical dependencies between inputs.
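To make one of the measures in this family concrete, the sketch below estimates Sobol' first-order indices with a Monte Carlo "pick-freeze" scheme on a toy additive model. The model, sample size, and the particular estimator used are illustrative assumptions for this sketch, not material from the talk; in practice the `model` function would be any fitted black-box ML predictor.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a fitted ML model: an additive function of
# three independent standard normal inputs, where x2 has no influence.
def model(X):
    return X[:, 0] + 0.5 * X[:, 1]

n, d = 100_000, 3
A = rng.normal(size=(n, d))  # first independent input sample
B = rng.normal(size=(n, d))  # second independent input sample

fA = model(A)
var = fA.var()  # total output variance

S = np.empty(d)
for i in range(d):
    # "Pick-freeze": take sample B but copy column i from sample A,
    # so input i is frozen while the others are resampled.
    AB = B.copy()
    AB[:, i] = A[:, i]
    # Saltelli-style first-order estimator: Cov(f(A), f(AB_i)) / Var(f)
    S[i] = np.mean(fA * (model(AB) - model(B))) / var
```

For this toy model the variance decomposition is known analytically (Var = 1 + 0.25), so the estimates should land near S = (0.8, 0.2, 0.0), with the inert input x2 receiving an index of exactly zero.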

Classification: Mainly methodology
Keywords: Sensitivity analysis, Shapley, Sobol' indices, Relative weight analysis
