Sharpening Local Interpretable Model-agnostic Explanations for Histopathology: Improved Understandability and Reliability
Type of publication: Inproceedings
Booktitle: MICCAI 2021
Series: LNCS
Year: 2021
Month: October
Publisher: Springer
Abstract: Being accountable for the signed reports, pathologists may be wary of high-quality deep learning outcomes if the decision-making is not understandable. Applying off-the-shelf methods with default configurations, such as Local Interpretable Model-Agnostic Explanations (LIME), is not sufficient to generate stable and understandable explanations. This work improves the application of LIME to histopathology images by leveraging nuclei annotations, creating a reliable way for pathologists to audit black-box tumor classifiers. The obtained visualizations reveal the sharp and focused attention of the deep classifier on the neoplastic nuclei in the dataset, an observation in line with clinical decision making. Compared to standard LIME, our explanations show improved understandability for domain experts, report higher stability and pass the sanity checks of consistency to data or initialization changes and sensitivity to network parameters. This represents a promising step toward giving pathologists tools to obtain additional information on image classification models. The code and trained models are available on GitHub.
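To make the approach described in the abstract concrete, the sketch below shows how LIME's default superpixel segmentation can be replaced by segments derived from nuclei annotations, using the open-source lime package's LimeImageExplainer and its segmentation_fn hook. This is an illustrative reconstruction under stated assumptions, not the authors' released code (see the GitHub repository mentioned in the abstract); classifier_fn and nuclei_mask are hypothetical placeholders.

    # Minimal sketch (not the authors' released code): run LIME with
    # interpretable regions taken from a nuclei instance mask instead of
    # the default quickshift superpixels. `classifier_fn` and
    # `nuclei_mask` are hypothetical placeholders.
    import numpy as np
    from lime import lime_image

    def make_nuclei_segmenter(nuclei_mask):
        # nuclei_mask: integer instance mask, 0 = background, k > 0 = nucleus k
        def segment(image):
            # LIME expects one integer segment label per pixel; reusing the
            # nuclei instance labels makes each nucleus its own region.
            return nuclei_mask.astype(np.int64)
        return segment

    def explain_with_nuclei(image, classifier_fn, nuclei_mask, label=1):
        explainer = lime_image.LimeImageExplainer()
        explanation = explainer.explain_instance(
            image,
            classifier_fn,      # maps a batch of images to class probabilities
            hide_color=0,       # value used to blank out perturbed segments
            num_samples=1000,   # perturbations fitted by the local surrogate
            segmentation_fn=make_nuclei_segmenter(nuclei_mask),
        )
        # Keep only the most positively attributed nuclei for the target label.
        return explanation.get_image_and_mask(
            label, positive_only=True, num_features=5, hide_rest=False
        )

Treating each annotated nucleus as a single interpretable unit is what plausibly drives the reported gains in stability and understandability: the perturbation space follows clinically meaningful structures rather than arbitrary superpixels.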
Keywords: Deep Learning, explainable AI (XAI), interpretability, machine learning
Authors: Graziani, Mara
Palatnik de Sousa, Iam
Vellasco, Marley M. B. R.
Costa da Silva, Eduardo
Müller, Henning
Andrearczyk, Vincent
Attachments
  • paper2380.pdf