Explainable artificial intelligence: unveiling what machines are learning

Explain or perish!

Figure 1: An example architecture of a deep learning model.
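As a point of reference for Figure 1, the sketch below defines a small convolutional classifier in PyTorch. The layer sizes, the 32x32 RGB input and the 10-class output are assumptions made for illustration, not the exact architecture shown in the figure.

```python
import torch
import torch.nn as nn

class SmallConvNet(nn.Module):
    """A minimal convolutional classifier, in the spirit of Figure 1.

    Layer sizes and the 10-class output are illustrative assumptions.
    """
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),  # RGB input -> 16 feature maps
            nn.ReLU(),
            nn.MaxPool2d(2),                             # downsample by 2
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 8 * 8, num_classes),          # assumes 32x32 inputs
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

# Example: one forward pass on a random 32x32 RGB image.
logits = SmallConvNet()(torch.randn(1, 3, 32, 32))
print(logits.shape)  # torch.Size([1, 10])
```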
Figure 2: The need for interpretability in machine learning models. Icons from: Prosymbols, Smashicons, Freepik and Becris.

Unboxing the black box

Figure 3: LIME algorithm for tabular data. A) Random forest predictions given features x1 and x2. Predicted classes: 1 (dark) or 0 (light). B) Instance of interest (big dot) and data sampled from a normal distribution (small dots). C) Assign higher weight to points near the instance of interest. D) The signs of the grid show the classifications of the locally learned model from the weighted samples. The white line marks the decision boundary (P(class=1) = 0.5). Image and description from Christoph Molnar [3].
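The caption of Figure 3 already lists the main steps of LIME: sample perturbed data points, weight them by their proximity to the instance of interest, and fit a simple, interpretable model on the weighted samples. The sketch below reproduces those steps for tabular data with NumPy and scikit-learn. It is a simplified illustration of the idea rather than the reference LIME implementation; the random-forest black box, the synthetic two-feature data and the kernel width are assumptions for the example.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# A stand-in black-box model trained on synthetic 2D data (x1, x2).
X = rng.normal(size=(500, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(int)
black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

def lime_tabular(instance, predict_proba, X_train, n_samples=1000, kernel_width=0.75):
    """Minimal LIME-style local surrogate for tabular data.

    1) Sample data from a normal distribution fitted to the training features.
    2) Weight each sample by its proximity to the instance of interest.
    3) Fit a weighted linear model to the black-box probabilities.
    """
    samples = rng.normal(loc=X_train.mean(axis=0),
                         scale=X_train.std(axis=0),
                         size=(n_samples, X_train.shape[1]))
    probs = predict_proba(samples)[:, 1]                   # P(class = 1) from the black box
    dists = np.linalg.norm(samples - instance, axis=1)
    weights = np.exp(-(dists ** 2) / (kernel_width ** 2))  # exponential proximity kernel
    surrogate = Ridge(alpha=1.0).fit(samples, probs, sample_weight=weights)
    return surrogate.coef_                                 # local feature effects

instance = np.array([0.5, -0.8])
print("local coefficients:", lime_tabular(instance, black_box.predict_proba, X))
```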
Figure 4: Guided BackProp algorithm, from Springenberg et al. [13].
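Guided Backpropagation produces a saliency map by backpropagating the class score to the input pixels while letting only positive gradients flow through the ReLU units. The sketch below shows that mechanism in PyTorch using backward hooks; the untrained VGG16 model, the random input and the hook-based approach are assumptions for the example, not the implementation of Springenberg et al. [13].

```python
import torch
import torch.nn as nn
from torchvision import models

# Any convolutional classifier works; untrained VGG16 is an assumption for the example.
model = models.vgg16().eval()

def clamp_negative_grads(module, grad_input, grad_output):
    # Guided Backprop: on top of the usual ReLU mask, zero out negative gradients.
    return (torch.clamp(grad_input[0], min=0.0),)

for m in model.modules():
    if isinstance(m, nn.ReLU):
        m.inplace = False          # in-place ReLU does not mix well with full backward hooks
        m.register_full_backward_hook(clamp_negative_grads)

image = torch.randn(1, 3, 224, 224, requires_grad=True)  # stand-in input image
scores = model(image)
scores[0, scores.argmax()].backward()                     # backpropagate the top class score
saliency = image.grad.abs().max(dim=1)[0]                 # per-pixel saliency map
print(saliency.shape)                                     # torch.Size([1, 224, 224])
```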
Figure 5: DTD algorithm, from Bach et al. [16].
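Deep Taylor Decomposition and the closely related Layer-wise Relevance Propagation explain a prediction by redistributing the output score backwards, layer by layer, so that each input feature receives a share of the relevance. The sketch below implements the epsilon rule of relevance propagation for a tiny fully connected ReLU network in NumPy; the random weights are placeholders and this rule is one common variant, not necessarily the exact propagation rule of [16].

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny fully connected ReLU network with random placeholder weights.
W1, b1 = rng.normal(size=(4, 6)), np.zeros(6)   # input (4) -> hidden (6)
W2, b2 = rng.normal(size=(6, 1)), np.zeros(1)   # hidden (6) -> output (1)

x = rng.normal(size=4)                           # input to be explained
a1 = np.maximum(0.0, x @ W1 + b1)                # hidden activations
out = a1 @ W2 + b2                               # output score to be explained

def lrp_epsilon(a, W, R, eps=1e-6):
    """Epsilon-rule relevance propagation through one linear layer."""
    z = a @ W                                    # pre-activations of the upper layer
    z = z + eps * np.where(z >= 0, 1.0, -1.0)    # stabiliser to avoid division by zero
    s = R / z                                    # relevance per unit of pre-activation
    return a * (W @ s)                           # redistribute relevance to the lower layer

R_out = out                                      # start from the output score
R_hidden = lrp_epsilon(a1, W2, R_out)            # relevance of the hidden units
R_input = lrp_epsilon(x, W1, R_hidden)           # relevance of the input features
print("input relevances:", R_input)
print("sum of relevances ~ output score:", R_input.sum(), out)
```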

Meta-Explainability

Figure 6: Chest X-ray image. (Image from Pexels)

INESC TEC work on xAI

Figure 7: Example of a test image and the related complementary explanations. Each test image is associated with an automatic decision and two types of explanations: one rule-based and one case-based. Figure from Silva et al. [20].
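Figure 7 pairs each automatic decision with a rule-based and a case-based explanation. As a rough illustration of the case-based part, the sketch below retrieves the most similar training images in a feature space and returns them as supporting cases; the placeholder embeddings, the cosine distance and the choice of three neighbours are assumptions for the example, not the method of Silva et al. [20].

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)

# Placeholder embeddings, e.g. taken from the classifier's penultimate layer.
train_features = rng.normal(size=(1000, 128))   # one 128-d embedding per training image
train_labels = rng.integers(0, 2, size=1000)    # binary decision for each training image

# Index the training embeddings once; cosine distance is an assumption for the example.
index = NearestNeighbors(n_neighbors=3, metric="cosine").fit(train_features)

def case_based_explanation(test_feature):
    """Return the most similar training cases and their decisions as an explanation."""
    dist, idx = index.kneighbors(test_feature.reshape(1, -1))
    return [(int(i), int(train_labels[i]), float(d)) for i, d in zip(idx[0], dist[0])]

test_feature = rng.normal(size=128)              # embedding of the test image
for case_id, decision, distance in case_based_explanation(test_feature):
    print(f"training case {case_id}: decision={decision}, distance={distance:.3f}")
```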
Figure 8: Example of explanations for the same sample when it belongs to the training set versus the test set. Figure from Sequeira et al. [22].

Short bios of the authors

References
