TY - CONF
T1 - Explainable Multi-Agent Systems through Blockchain Technology
A1 - Calvaresi, Davide
A1 - Mualla, Yazan
A1 - Najjar, Amro
A1 - Galland, Stéphane
A1 - Schumacher, Michael
T2 - Post-Proceedings of EXTRAAMAS 2019 (to appear)
Y1 - 2019
UR - https://link.springer.com/chapter/10.1007/978-3-030-30391-4_3
KW - blockchain
KW - explainability
KW - goal-based XAI
KW - MAS
KW - UAV
N2 - Advances in Artificial Intelligence (AI) are contributing to a broad set of domains. In particular, Multi-Agent Systems (MAS) are increasingly approaching critical areas such as medicine, autonomous vehicles, criminal justice, and financial markets. This trend is producing a growing entanglement between AI and human society. Thus, several concerns arise around user acceptance of AI agents. Trust issues, mainly due to their lack of explainability, are the most relevant. In recent decades, the priority has been pursuing optimal performance at the expense of interpretability. This led to remarkable achievements in fields such as computer vision, natural language processing, and decision-making systems. However, the crucial questions driven by the social reluctance to accept AI-based decisions may lead to entirely new dynamics and technologies fostering explainability, authenticity, and user-centricity. This paper proposes a joint approach employing both blockchain technology (BCT) and explainability in the decision-making process of MAS. By doing so, current opaque decision-making processes can be made more transparent and secure, and thereby trustworthy from the human user's standpoint. Moreover, several case studies involving Unmanned Aerial Vehicles (UAV) are discussed. Finally, the paper discusses roles, balance, and trade-offs between explainability and BCT in trust-dependent systems.
ER -