17–18 May 2021
Online
Europe/London timezone

Interpretability and Verification in AI

18 May 2021, 14:25
20m
Online

Data Science in Process Industries Technology

Speaker

Pierre Harouimi

Description

Does Artificial Intelligence (AI) have to be certified?

AI modeling continues to grow across industries and thus has a real impact on our day-to-day lives. The explainability, or interpretability, of AI models is becoming ever more important as a way to understand the black box behind our algorithms.

Engineers and data scientists must understand and explain their models before sharing them with other teams and scaling them to production. Interpretability can answer questions such as: Which variables are relevant in the model? Why does the model predict this value? Why does the model make a wrong prediction?
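
For illustration, here is a minimal MATLAB sketch of such interpretability calls, assuming the Statistics and Machine Learning Toolbox (R2021a or later for shapley); the fisheriris data set, the ensemble model, and the query point are placeholder choices, not the presenter's actual demo:

    % Train a "black-box" model on a built-in example data set.
    load fisheriris
    mdl = fitcensemble(meas, species);

    % Global question: which variables are relevant?
    % Partial dependence of predictor 1 on the 'setosa' class score.
    plotPartialDependence(mdl, 1, 'setosa');

    % Local question: why does the model predict this value?
    queryPoint = meas(1, :);
    limeExplainer = fit(lime(mdl), queryPoint, 2);   % 2 most important predictors
    plot(limeExplainer)
    shapExplainer = fit(shapley(mdl), queryPoint);   % Shapley values per predictor
    plot(shapExplainer)
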
Moreover, the model has to be tested, validated, and verified. Indeed, in many industries, regulatory requirements or certifications must be satisfied before a model can be used and deployed in production. Here are some examples:
- Finance: for credit loans, how can we certify that a model is not biased?
- Medical: for cancer cell detection, how can we debug the model when it is wrong?
- Automotive: for autonomous driving, how can we be sure that the model won't behave differently once deployed in real time?

In this webinar, we will address these questions using MATLAB and its functionality to interpret and explain models, and then automate validation and verification with the Unit Testing Framework and continuous integration.
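
As a hedged sketch of what that automation can look like, the class-based test below checks a model's accuracy with the MATLAB Unit Testing Framework; the saved model file, the helper function, and the threshold are hypothetical placeholders. Run it locally with runtests('ModelTest').

    % ModelTest.m -- class-based test for a trained model (sketch).
    classdef ModelTest < matlab.unittest.TestCase
        methods (Test)
            function accuracyAboveThreshold(testCase)
                s = load('trainedModel.mat');      % hypothetical saved model
                [X, Y] = loadValidationData();     % hypothetical helper
                acc = mean(strcmp(predict(s.mdl, X), Y));
                testCase.verifyGreaterThan(acc, 0.9);  % placeholder threshold
            end
        end
    end
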

Highlights:
- AI capabilities in MATLAB (Machine & Deep Learning)
- Functions to interpret & explain AI black box models
- Unit Testing Framework & CI deployment
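
To give an idea of the CI deployment step (again a sketch, not the presenter's exact setup), the runner below executes the test suite above and emits JUnit-style XML that CI servers such as Jenkins can consume:

    import matlab.unittest.TestRunner
    import matlab.unittest.plugins.XMLPlugin

    runner = TestRunner.withTextOutput;
    runner.addPlugin(XMLPlugin.producingJUnitFormat('testResults.xml'));
    results = runner.run(testsuite('ModelTest'));
    assert(all([results.Passed]), 'Some tests failed')
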
