Explainable Multi-Agent Systems through Blockchain Technology
Type of publication: | Proceedings |
Booktitle: | In Post-Proceedings of EXTRAAMAS 2019 (to appear) |
Year: | 2019 |
URL: | https://link.springer.com/chap... |
Abstract: | Advances in Artificial Intelligence (AI) are contributing to a broad set of domains. In particular, Multi-Agent Systems (MAS) are increasingly being applied in critical areas such as medicine, autonomous vehicles, criminal justice, and financial markets. This trend is producing an ever-closer entanglement between AI and human society, raising several concerns about user acceptance of AI agents. Trust issues, mainly due to their lack of explainability, are the most relevant. In recent decades, the priority has been optimal performance at the expense of interpretability, leading to remarkable achievements in fields such as computer vision, natural language processing, and decision-making systems. However, the crucial questions driven by social reluctance to accept AI-based decisions may lead to entirely new dynamics and technologies fostering explainability, authenticity, and user-centricity. This paper proposes a joint approach employing both blockchain technology (BCT) and explainability in the decision-making process of MAS. By doing so, currently opaque decision-making processes can be made more transparent and secure, and thereby trustworthy from the human user's standpoint. Moreover, several case studies involving Unmanned Aerial Vehicles (UAVs) are discussed. Finally, the paper discusses the roles, balance, and trade-offs between explainability and BCT in trust-dependent systems. |
Keywords: | blockchain, explainability, goal-based XAI, MAS, UAV |