Integration of local and global features explanation with global rules extraction and generation tools
| Publication type: | Article |
| Journal: | Post-proceedings of EXTRAAMAS 2022 |
| Year: | 2022 |
| Month: | June |
| Abstract: | Widely used in a growing number of domains, Deep Learning predictors are achieving remarkable results. However, the lack of transparency (i.e., opacity) of their inner mechanisms has raised trust and employability concerns. Nevertheless, several approaches fostering model interpretability and explainability have been developed in the last decade. This paper combines approaches for local feature explanation (i.e., Contextual Importance and Utility – CIU) and global feature explanation (i.e., Explainable Layers) with a rule extraction system, namely ECLAIRE. The proposed pipeline has been tested in four scenarios employing a breast cancer diagnosis dataset. The results show improvements such as the production of more human-interpretable rules and adherence of the produced rules to the original model. |
| Keywords: | Local explainability · Global explainability · Feature ranking · Rule extraction |