Speaker
Description
We introduce a novel framework for explainable AI time series forecasting based on a local surrogate base model. At a given reference point in time, an explainable forecast is delivered by comparing the base-model fit before and after the AI-model correction is applied. The notion of explainability used here is local both in feature space and in time. The validity of the explanation (fidelity) is required to persist throughout a sliding influence window. The size of this window is chosen by minimizing a loss functional that compares the local surrogate with the AI correction, where smoothing approximations of the original problem are used to obtain differentiability. We illustrate the approach on a publicly available atmospheric probe dataset. The proposed method extends our BAPC (Before and After correction Parameter Comparison) approach, previously defined in the context of explainable AI regression.
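As a rough illustration of the before/after parameter comparison, the sketch below fits a simple linear surrogate inside a sliding influence window, once to the original series and once to the series with the AI correction removed, and reports the parameter shift. The function names, the linear surrogate choice, and the grid search over window sizes are illustrative assumptions, not the BAPC implementation; in particular, the grid search is a crude stand-in for the smoothed, differentiable window-size objective described above.

```python
# Hypothetical sketch of a before/after parameter comparison with a local
# linear surrogate; names and loss are illustrative, not the BAPC code.
import numpy as np

def fit_surrogate(t, y):
    """Least-squares linear surrogate y ~ a + b*t; returns (a, b)."""
    A = np.column_stack([np.ones_like(t), t])
    (a, b), *_ = np.linalg.lstsq(A, y, rcond=None)
    return a, b

def bapc_explain(t, y, ai_correction, t_ref, window):
    """Compare surrogate parameters before/after removing the AI correction
    inside a sliding influence window centered at the reference time t_ref."""
    mask = np.abs(t - t_ref) <= window / 2
    before = fit_surrogate(t[mask], y[mask])                       # base model alone
    after = fit_surrogate(t[mask], y[mask] - ai_correction[mask])  # after correction
    return {"before": before, "after": after,
            "delta": tuple(b - a for a, b in zip(before, after))}

def window_loss(t, y, ai_correction, t_ref, window):
    """Mismatch between the surrogate shift and the AI correction inside the
    window (a simple squared-error proxy for the loss functional)."""
    mask = np.abs(t - t_ref) <= window / 2
    a0, b0 = fit_surrogate(t[mask], y[mask])
    a1, b1 = fit_surrogate(t[mask], y[mask] - ai_correction[mask])
    surrogate_shift = (a0 - a1) + (b0 - b1) * t[mask]
    return np.mean((surrogate_shift - ai_correction[mask]) ** 2)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    t = np.linspace(0.0, 10.0, 400)
    y = 0.5 * t + 0.8 * np.sin(2.0 * t) + 0.05 * rng.standard_normal(t.size)
    ai_correction = 0.8 * np.sin(2.0 * t)   # stand-in for the AI model's correction
    # Choose the influence-window size by minimizing the loss over a grid.
    windows = np.linspace(0.5, 4.0, 20)
    losses = [window_loss(t, y, ai_correction, 5.0, w) for w in windows]
    best = windows[int(np.argmin(losses))]
    print("chosen window:", best)
    print(bapc_explain(t, y, ai_correction, t_ref=5.0, window=best))
```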
| Classification | Mainly methodology |
| --- | --- |
| Keywords | Time series forecast, Explainable AI, Local Surrogates |