Description
Our previous contribution to ENBIS introduced BAPC ('Before and After correction Parameter Comparison'), a framework for explainable AI time series forecasting that has previously been applied to logistic regression. An initially non-interpretable predictive model (such as a neural network) is used to improve the forecast of a classical time series 'base model'. Explainability of the correction is provided by fitting the base model again to the data from which the error prediction has been removed. This follow-up work is devoted to the practical application of the framework by (1) showcasing how the method explains changes in the dynamics of a physical system, (2) providing guidance on the choice of the interpretable and correction model pair based on an explainability-accuracy trade-off analysis, and (3) comparing our method with the state of the art in explainable time-series forecasting. In this context, BAPC is able to identify the set of model parameters and the time window that best explain the local behavior of the AI correction, hence delivering explanations in the form of both feature importance and time importance.
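To make the before-and-after idea concrete, the following is a minimal, hypothetical sketch of the workflow described above, assuming a linear base model and a small MLP correction (scikit-learn). The model choices, toy data, and variable names are assumptions for illustration only and are not the authors' implementation.

```python
# Minimal BAPC-style sketch (illustrative; not the original implementation).
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Toy time series whose dynamics change halfway through.
t = np.arange(200)
y = 0.5 * t + np.where(t < 100, 0.0, 10 * np.sin(0.3 * t)) + rng.normal(0, 1, t.size)
X = t.reshape(-1, 1)

# 1. "Before": fit the interpretable base model and keep its parameters.
base_before = LinearRegression().fit(X, y)

# 2. Fit the non-interpretable correction model on the base model's residuals.
residuals = y - base_before.predict(X)
correction = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                          random_state=0).fit(X, residuals)

# 3. "After": remove the predicted correction from the data and refit the base model.
y_corrected = y - correction.predict(X)
base_after = LinearRegression().fit(X, y_corrected)

# 4. Explanation: the shift in base-model parameters attributes the AI correction
#    to interpretable terms (here, slope and intercept).
print("slope before/after:", base_before.coef_[0], base_after.coef_[0])
print("intercept before/after:", base_before.intercept_, base_after.intercept_)
```

In the full method, this comparison would additionally be carried out over candidate time windows to locate where the correction is most informative, yielding the time-importance component mentioned above.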
| Type of presentation | Talk |
|---|---|
| Classification | Both methodology and application |
| Keywords | XAI, time-series forecasting, physics-informed ML |