Invited session: 2
Conveners:
- Bertrand IOOSS (EDF R&D)
Being able to quantify the importance of the random inputs of an input-output black-box model is a cornerstone of sensitivity analysis (SA) and explainable artificial intelligence (XAI). To perform this task, methods such as Shapley effects and SHAP have received a lot of attention. The former offers a solution for decomposing the output variance when the inputs are not independent, and...
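To illustrate the Shapley-attribution idea underlying SHAP (this is not the Shapley-effects estimator discussed in the talk), here is a minimal, self-contained sketch that computes exact Shapley values for one prediction of a toy model; the model, baseline, and input point are illustrative assumptions.

```python
# Minimal sketch: exact Shapley attributions for a single prediction of a toy
# model, with "absent" features set to a baseline value. All names here
# (toy_model, baseline, x) are illustrative assumptions, not from the abstract.
from itertools import combinations
from math import factorial
import numpy as np

def toy_model(x):
    # Illustrative non-additive model with an interaction term.
    return 2.0 * x[0] + x[1] * x[2]

def shapley_values(f, x, baseline):
    d = len(x)
    phi = np.zeros(d)

    def value(subset):
        # Evaluate f with the features outside `subset` fixed at the baseline.
        z = baseline.copy()
        z[list(subset)] = x[list(subset)]
        return f(z)

    for i in range(d):
        others = [j for j in range(d) if j != i]
        for k in range(d):
            for s in combinations(others, k):
                # Shapley weight for a coalition of size k.
                w = factorial(k) * factorial(d - k - 1) / factorial(d)
                phi[i] += w * (value(s + (i,)) - value(s))
    return phi

x = np.array([1.0, 2.0, 3.0])
baseline = np.zeros(3)
phi = shapley_values(toy_model, x, baseline)
# The attributions sum to the gap between the prediction and the baseline value.
print(phi, phi.sum(), toy_model(x) - toy_model(baseline))
```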
Explainable Artificial Intelligence (XAI) is a crucial research area for advancing AI applications in real-world contexts. Within XAI, Global Sensitivity Analysis (GSA) methods play an important role, quantifying the influence of individual or grouped parameters on the predictions of machine learning models, as well as on the outcomes of simulators and...
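As a concrete example of what GSA quantifies (not the specific methods of this talk), the sketch below estimates first-order Sobol indices with a standard pick-freeze Monte Carlo scheme on the Ishigami test function; the function, sample size, and independent uniform inputs are illustrative assumptions.

```python
# Minimal sketch: pick-freeze Monte Carlo estimation of first-order Sobol
# indices for a toy simulator with independent inputs (illustrative only).
import numpy as np

rng = np.random.default_rng(1)
d, n = 3, 100_000

def simulator(x):
    # Ishigami function, a standard GSA test case (a = 7, b = 0.1).
    return np.sin(x[:, 0]) + 7.0 * np.sin(x[:, 1]) ** 2 \
        + 0.1 * x[:, 2] ** 4 * np.sin(x[:, 0])

# Two independent input samples, uniform on [-pi, pi]^d.
a = rng.uniform(-np.pi, np.pi, size=(n, d))
b = rng.uniform(-np.pi, np.pi, size=(n, d))
ya = simulator(a)
var_y = ya.var()

first_order = []
for i in range(d):
    # "Freeze" input i from sample A, redraw all other inputs from sample B.
    mixed = b.copy()
    mixed[:, i] = a[:, i]
    y_mixed = simulator(mixed)
    # Estimator of S_i = Var(E[Y | X_i]) / Var(Y).
    s_i = (np.mean(ya * y_mixed) - ya.mean() * y_mixed.mean()) / var_y
    first_order.append(s_i)

print(np.round(first_order, 3))  # roughly [0.31, 0.44, 0.00] for Ishigami
```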
Despite attractive theoretical guarantees and practical successes, the prediction intervals (PIs) given by Conformal Prediction (CP) may not reflect the uncertainty of a given model. This limitation arises because CP methods apply a constant correction to all test points, disregarding their individual epistemic uncertainties, in order to ensure coverage properties. To address this issue, we propose using a...
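To make the "constant correction" concrete, here is a minimal split-conformal-regression sketch (not the method proposed in the abstract); the toy data, the polynomial stand-in for a black-box model, and the miscoverage level alpha = 0.1 are illustrative assumptions.

```python
# Minimal sketch: split conformal prediction for regression, showing that the
# SAME half-width is added to every test prediction regardless of how uncertain
# the model is locally. Data, model, and alpha are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: y = sin(x) + noise.
x = rng.uniform(-3, 3, size=600)
y = np.sin(x) + 0.2 * rng.normal(size=x.size)

# Split into a proper training set, a calibration set, and test points.
x_train, y_train = x[:300], y[:300]
x_cal, y_cal = x[300:500], y[300:500]
x_test = x[500:]

# "Model": a simple polynomial fit standing in for any black-box regressor.
coeffs = np.polyfit(x_train, y_train, deg=5)
predict = lambda t: np.polyval(coeffs, t)

# Calibration: absolute residuals as nonconformity scores.
scores = np.abs(y_cal - predict(x_cal))
alpha = 0.1
n = scores.size
# Finite-sample-corrected empirical quantile of the scores.
q = np.quantile(scores, np.ceil((n + 1) * (1 - alpha)) / n)

# The constant correction q is applied to every test point, ignoring the
# point-wise (epistemic) uncertainty of the model.
lower = predict(x_test) - q
upper = predict(x_test) + q
print(f"constant half-width: {q:.3f}")
```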