Sharpening Local Interpretable Model-agnostic Explanations for Histopathology: Improved Understandability and Reliability
Publication type: | Conference proceedings article |
Book title: | MICCAI 2021 |
Series: | LNCS |
Year: | 2021 |
Month: | October |
Publisher: | Springer |
Abstract: | Being accountable for the reports they sign, pathologists may be wary of high-quality deep learning outcomes if the decision-making is not understandable. Applying off-the-shelf methods with default configurations, such as Local Interpretable Model-Agnostic Explanations (LIME), is not sufficient to generate stable and understandable explanations. This work improves the application of LIME to histopathology images by leveraging nuclei annotations, creating a reliable way for pathologists to audit black-box tumor classifiers. The resulting visualizations reveal the sharp, focused attention of the deep classifier on the neoplastic nuclei in the dataset, an observation in line with clinical decision making. Compared to standard LIME, our explanations are easier for domain experts to understand, show higher stability, and pass the sanity checks of consistency under data or initialization changes and of sensitivity to network parameters. This represents a promising step in giving pathologists tools to obtain additional information about image classification models. The code and trained models are available on GitHub. |
Keywords: | Deep Learning, explainable AI (XAI), interpretability, machine learning |
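The abstract above describes replacing LIME's default superpixel segmentation with nuclei annotations so that each perturbed region corresponds to a nucleus. The sketch below is a minimal illustration of that idea using the `lime` package's documented `segmentation_fn` hook; it is not the authors' released code (which is on GitHub), and `model`, `patch`, and `nuclei_labels` are assumed inputs supplied by the reader.

```python
"""Minimal sketch: LIME on a histopathology patch where the superpixels are
precomputed nuclei instances instead of generic SLIC segments.
Assumptions (not from the paper's repository):
  - `patch`: H x W x 3 uint8 image patch
  - `nuclei_labels`: H x W integer instance mask, 0 = background tissue
  - `model`: a classifier with a Keras-style predict(batch) -> probabilities
"""
import numpy as np
from lime import lime_image


def nuclei_segmentation_fn(nuclei_labels):
    """Return a LIME-compatible segmentation function that ignores the image
    and reuses the precomputed nuclei instance mask as the superpixel map."""
    def segment(image):
        # Each unique label becomes one perturbable region; 0 stays background.
        return nuclei_labels.astype(np.intp)
    return segment


def classifier_fn(batch):
    """Assumed wrapper: numpy batch (N, H, W, 3) -> class probabilities (N, C)."""
    return model.predict(batch)  # hypothetical model with a Keras-style API


explainer = lime_image.LimeImageExplainer(random_state=0)
explanation = explainer.explain_instance(
    patch,
    classifier_fn,
    segmentation_fn=nuclei_segmentation_fn(nuclei_labels),
    top_labels=1,
    hide_color=0,      # hidden nuclei are blacked out in the perturbed samples
    num_samples=1000,
)

# Overlay of the nuclei contributing most to the predicted class.
img, mask = explanation.get_image_and_mask(
    explanation.top_labels[0],
    positive_only=True,
    num_features=10,
    hide_rest=False,
)
```

Because the perturbed regions coincide with annotated nuclei rather than arbitrary superpixels, the resulting importance map can be read directly as "which nuclei drove the prediction", which is the understandability and stability gain the abstract reports.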