This course will introduce the concepts of interpretability and explainability in machine learning applications. The learner will understand the difference between global, local, model-agnostic and model-specific explanations. State-of-the-art explainability methods such as Permutation Feature Importance (PFI), Local Interpretable Model-agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP) are explained and applied in time-series classification. Subsequently, model-specific explanations such as Class-Activation Mapping (CAM) and Gradient-Weighted CAM are explained and implemented. The learners will understand axiomatic attributions and why they are important. Finally, attention mechanisms are incorporated after recurrent layers, and the attention weights are visualised to produce local explanations of the model.
This course is part of the Informed Clinical Decision Making using Deep Learning Specialization
About this Course
Python programming and experience with basic packages such as numpy, scipy and matplotlib
What you will learn
Program global explainability methods in time-series classification
Program local explainability methods for deep learning such as CAM and GRAD-CAM
Understand axiomatic attributions for deep learning networks
Incorporate attention in Recurrent Neural Networks and visualise the attention weights
Skills you will gain
- attention mechanisms
- explainable machine learning models
- model-agnostic and model-specific explanations
- global and local explanations
- interpretability vs explainability
Offered by

University of Glasgow
The University of Glasgow has been changing the world since 1451. It is a world top 100 university (THE, QS) with one of the largest research bases in the UK.
Syllabus: what you will learn in this course
Interpretable vs Explainable Machine Learning Models in Healthcare
Deep learning models are complex, and it is difficult to understand their decisions. Explainability methods aim to shed light on deep learning decisions in order to enhance trust, avoid mistakes and ensure ethical use of AI. Explanations can be categorised as global, local, model-agnostic and model-specific. Permutation feature importance is a global, model-agnostic explainability method that indicates which input variables most influence the output.
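As a minimal sketch of the idea, permutation feature importance can be computed with plain numpy: shuffle one feature at a time and measure the drop in accuracy. The toy classifier and data below are hypothetical, chosen only to illustrate the mechanics.

```python
import numpy as np

def permutation_importance(model_fn, X, y, n_repeats=5, seed=0):
    """Global, model-agnostic importance: mean accuracy drop when a feature is shuffled."""
    rng = np.random.default_rng(seed)
    baseline = np.mean(model_fn(X) == y)
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])  # break the link between feature j and the target
            drops.append(baseline - np.mean(model_fn(Xp) == y))
        importances[j] = np.mean(drops)
    return importances

# Toy "model" that uses only feature 0, so feature 1 should get zero importance.
X = np.array([[0., 5.], [1., 3.], [0., 7.], [1., 1.]] * 25)
y = (X[:, 0] > 0.5).astype(int)
model_fn = lambda X: (X[:, 0] > 0.5).astype(int)
imp = permutation_importance(model_fn, X, y)
```

Because the toy model ignores feature 1, shuffling it cannot change predictions, while shuffling feature 0 destroys most of the accuracy; the same logic applies unchanged to a trained time-series classifier.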
Local Explainability Methods for Deep Learning Models
Local explainability methods provide explanations of how the model reaches a specific decision. LIME approximates the model locally with a simpler, interpretable model. SHAP expands on this and is also designed to address multi-collinearity of the input features. Both LIME and SHAP are local, model-agnostic explanations. On the other hand, CAM is a class-discriminative visualisation technique, specifically designed to provide local explanations in deep neural networks.
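The core of LIME can be sketched in a few lines: sample perturbations around the instance, weight them by proximity, and fit a weighted linear surrogate whose coefficients act as local attributions. The black-box function and kernel width below are hypothetical placeholders, not the LIME library's API.

```python
import numpy as np

def lime_explain(predict_fn, x, n_samples=2000, kernel_width=1.0, seed=0):
    """Fit a locally weighted linear surrogate to predict_fn around instance x."""
    rng = np.random.default_rng(seed)
    Z = x + rng.normal(scale=0.5, size=(n_samples, x.size))  # perturb around x
    yz = predict_fn(Z)
    dist = np.linalg.norm(Z - x, axis=1)
    w = np.exp(-(dist ** 2) / kernel_width ** 2)             # proximity kernel
    # Weighted least squares with an intercept column
    A = np.hstack([np.ones((n_samples, 1)), Z])
    sw = np.sqrt(w)[:, None]
    coef, *_ = np.linalg.lstsq(A * sw, yz * sw.ravel(), rcond=None)
    return coef[1:]  # local feature attributions (intercept dropped)

# Toy "black box": a linear model, so the surrogate should recover its weights.
predict_fn = lambda Z: 3.0 * Z[:, 0] - 2.0 * Z[:, 1]
attr = lime_explain(predict_fn, np.array([1.0, 1.0]))
```

On a genuinely nonlinear model the recovered coefficients are only valid near `x`, which is exactly what makes the explanation local.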
Gradient-weighted Class Activation Mapping and Integrated Gradients
GRAD-CAM is an extension of CAM that generalises the approach to a broader range of deep neural network architectures. Although it is one of the most popular methods for explaining deep neural network decisions, it violates key axiomatic properties, such as sensitivity and completeness. Integrated gradients is an axiomatic attribution method that aims to cover this gap.
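Integrated gradients can be approximated with a simple Riemann sum along the straight-line path from a baseline to the input. The sketch below uses a toy differentiable function with an analytic gradient (a hypothetical stand-in for a network's gradient), and checks the completeness axiom: attributions sum to f(x) - f(baseline).

```python
import numpy as np

def integrated_gradients(grad_fn, x, baseline, steps=100):
    """Midpoint-rule approximation of IG: (x - x') * integral of grad f along the path."""
    alphas = (np.arange(steps) + 0.5) / steps   # midpoints in (0, 1)
    total = np.zeros_like(x)
    for a in alphas:
        total += grad_fn(baseline + a * (x - baseline))
    return (x - baseline) * total / steps

# Toy model f(x) = x0^2 + 3*x1, with its analytic gradient.
f = lambda x: x[0] ** 2 + 3.0 * x[1]
grad_fn = lambda x: np.array([2.0 * x[0], 3.0])
x = np.array([2.0, 1.0])
baseline = np.zeros(2)
attr = integrated_gradients(grad_fn, x, baseline)
# Completeness: attr.sum() equals f(x) - f(baseline)
```

For a real network, `grad_fn` would be the framework's autodiff gradient of the class score with respect to the input; the completeness check is a useful sanity test that the step count is adequate.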
Attention mechanisms in Deep Learning
Attention in deep neural networks mimics human attention, which allocates computational resources to a small range of sensory input in order to process specific information with limited processing power. This week, we discuss how to incorporate attention in Recurrent Neural Networks and autoencoders. Furthermore, we visualise attention weights in order to provide a form of inherent explanation for the decision-making process.
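A minimal numpy sketch of (scaled dot-product) attention over RNN hidden states shows why the weights lend themselves to explanation: they form a distribution over time steps, so each weight can be read as how much that step contributed to the context vector. The hidden states here are random placeholders standing in for a trained recurrent layer's outputs.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def attention(query, keys, values):
    """Scaled dot-product attention; the weights double as a local explanation."""
    scores = keys @ query / np.sqrt(query.size)
    weights = softmax(scores)        # one non-negative weight per time step, summing to 1
    context = weights @ values       # weighted sum of the values
    return context, weights

# Hypothetical hidden states of a recurrent layer: 4 time steps, dimension 3.
rng = np.random.default_rng(0)
H = rng.normal(size=(4, 3))          # keys = values = hidden states
context, w = attention(H[-1], H, H)  # query: last hidden state
```

Plotting `w` against the input time axis (e.g. with matplotlib's `bar`) is the visualisation step: high-weight time steps are the ones the model attended to for this prediction.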
About the Informed Clinical Decision Making using Deep Learning Specialization
This specialisation is for learners with programming experience who are interested in expanding their skills by applying deep learning to Electronic Health Records, with a focus on how to translate their models into Clinical Decision Support Systems.

Frequently Asked Questions
When will I have access to the lectures and assignments?
What will I get if I subscribe to this Specialization?
Is financial aid available?
More questions? Visit the Learner Help Center.