Explanation of Deep Learning Models via Logic Rules Enhanced by Embeddings Analysis, and Probabilistic Models
Type of publication: | Inproceedings |
Citation: | |
Booktitle: | Post-proceedings of the 6th International Workshop on EXplainable and TRAnsparent AI and Multi-Agent Systems |
Year: | 2024 |
Month: | August |
Abstract: | Deep Learning (DL) models are increasingly dealing with heterogeneous data (i.e., a mix of structured and unstructured data), calling for adequate eXplainable Artificial Intelligence (XAI) methods. Nevertheless, only some of the existing techniques consider the uncertainty inherent to the data. To this end, this study proposes a pipeline to explain heterogeneous data-based DL models by combining embedding analysis, rule extraction methods, and probabilistic models. The proposed pipeline has been tested using synthetic data (multi-individual food items tracking). This study has achieved (i) inference enhancement through probabilistic and evidential reasoning, (ii) generation of logical explanations based on extracted rules and predictions, and (iii) integration of textual data into the explanation pipeline through embedding analysis. |
Keywords: | Deep Learning, Heterogeneous data processing, Preference modeling, Rule extraction, Uncertainty reasoning, XAI |
Authors: | |
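Below is a minimal, hypothetical Python sketch of the kind of pipeline the abstract describes, not the authors' implementation: a toy embedding step that matches a free-text food description to a known item, a handful of illustrative extracted logic rules over structured features, and a simple Bayesian combination of rule confidences into a posterior that could back a textual explanation. All names (embed_text, RULES, PRIOR, the feature names) are assumptions made for illustration only.

```python
# Illustrative sketch only: toy stand-ins for the three stages named in the
# abstract (embedding analysis, rule-based explanations, probabilistic reasoning).
import math
from collections import Counter

def embed_text(text: str) -> Counter:
    """Toy character-bigram embedding for a free-text item description."""
    t = text.lower()
    return Counter(t[i:i + 2] for i in range(len(t) - 1))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[k] * b[k] for k in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Embedding analysis: map a noisy textual label to the closest known food class.
KNOWN_ITEMS = {"apple": embed_text("apple"),
               "apricot": embed_text("apricot"),
               "banana": embed_text("banana")}

def nearest_item(description: str) -> tuple[str, float]:
    emb = embed_text(description)
    return max(((name, cosine(emb, ref)) for name, ref in KNOWN_ITEMS.items()),
               key=lambda pair: pair[1])

# Hypothetical extracted rules: each maps structured features to a conclusion
# about which individual handled the item, with a rule-level confidence.
RULES = [
    ("shelf == 'fruit' and weight_g < 200", "handled_by_A", 0.8),
    ("shelf == 'fruit' and weight_g >= 200", "handled_by_B", 0.7),
]

def fire_rules(features: dict) -> list[tuple[str, str, float]]:
    fired = []
    for condition, conclusion, conf in RULES:
        if eval(condition, {}, features):  # toy evaluation of the rule body
            fired.append((condition, conclusion, conf))
    return fired

# Probabilistic step: combine a prior with rule confidences via Bayes' rule,
# treating each fired rule as independent evidence (a simplification).
PRIOR = {"handled_by_A": 0.5, "handled_by_B": 0.5}

def posterior(fired: list[tuple[str, str, float]]) -> dict:
    post = dict(PRIOR)
    for _, conclusion, conf in fired:
        for label in post:
            post[label] *= conf if label == conclusion else (1 - conf)
    z = sum(post.values()) or 1.0
    return {k: v / z for k, v in post.items()}

if __name__ == "__main__":
    label, sim = nearest_item("appel (bruised)")             # textual evidence
    fired = fire_rules({"shelf": "fruit", "weight_g": 150})  # structured evidence
    post = posterior(fired)
    print(f"Matched item: {label} (similarity {sim:.2f})")
    for condition, conclusion, conf in fired:
        print(f"IF {condition} THEN {conclusion}  [confidence {conf}]")
    print("Posterior:", {k: round(v, 2) for k, v in post.items()})
```

Running the sketch prints the matched item, the fired IF-THEN rules, and the normalized posterior over the two hypothetical individuals, i.e., the three kinds of output (textual grounding, logical explanation, probabilistic inference) that the abstract attributes to the proposed pipeline.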