Integration of local and global features explanation with global rules extraction and generation tools
Type of publication: | Article |
Citation: | |
Journal: | Post-proceedings of EXTRAAMAS 2022 |
Year: | 2022 |
Month: | June |
Abstract: | Widely used in a growing number of domains, Deep Learning predictors are achieving remarkable results. However, the lack of transparency (i.e., opacity) of their inner mechanisms has raised trust and employability concerns. Nevertheless, several approaches fostering model interpretability and explainability have been developed in the last decade. This paper combines approaches for local feature explanation (i.e., Contextual Importance and Utility – CIU) and global feature explanation (i.e., Explainable Layers) with a rule extraction system, namely ECLAIRE. The proposed pipeline has been tested in four scenarios employing a breast cancer diagnosis dataset. The results show improvements such as the production of more human-interpretable rules and the adherence of the produced rules to the original model. |
Keywords: | Local explainability · Global explainability · Feature ranking · Rule extraction |
Authors: | |
Added by: | [] |
Total mark: | 0 |
Attachments
Notes
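A minimal, illustrative sketch (not the authors' code): it trains an opaque classifier on the scikit-learn breast cancer dataset and distills it into a surrogate decision tree whose branches serve as human-readable IF-THEN rules. The surrogate tree is only a stand-in for the CIU / Explainable Layers / ECLAIRE pipeline summarised in the abstract, and the reported fidelity score loosely mirrors the "adherence to the original model" the paper discusses; model sizes and hyperparameters are assumptions.

# Sketch: black-box model + global rule surrogate on the breast cancer dataset.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.metrics import accuracy_score

data = load_breast_cancer()
X, y, feature_names = data.data, data.target, list(data.feature_names)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Opaque predictor (the "black box" whose behaviour we want to explain).
black_box = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000,
                          random_state=0).fit(X_train, y_train)

# Global surrogate: fit an interpretable tree to the black box's predictions,
# then read its branches as IF-THEN rules.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X_train, black_box.predict(X_train))

# Fidelity = agreement between the extracted rules and the original model.
fidelity = accuracy_score(black_box.predict(X_test), surrogate.predict(X_test))
print(f"Rule fidelity to the black box: {fidelity:.3f}")
print(export_text(surrogate, feature_names=feature_names))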
Topics