Integration of local and global features explanation with global rules extraction and generation tools
| Publication type: | Article |
| Citation: | |
| Journal: | Post-proceedings of EXTRAAMAS 2022 |
| Year: | 2022 |
| Month: | June |
| Abstract: | Widely used in a growing number of domains, Deep Learning predictors are achieving remarkable results. However, the lack of transparency (i.e., opacity) of their inner mechanisms has raised trust and employability concerns. Nevertheless, several approaches fostering model interpretability and explainability have been developed in the last decade. This paper combines approaches for local feature explanation (i.e., Contextual Importance and Utility – CIU) and global feature explanation (i.e., Explainable Layers) with a rule extraction system, namely ECLAIRE. The proposed pipeline has been tested in four scenarios employing a breast cancer diagnosis dataset. The results show improvements such as the production of more human-interpretable rules and better adherence of the produced rules to the original model. |
| Keywords: | Local explainability · Global explainability · Feature ranking · Rule extraction |
| Authors: | |
| Added by: | [] |
| Total score: | 0 |