The Wildcard XAI: from a Necessity, to a Resource, to a Dangerous Decoy
Type of publication: | Article |
Citation: | |
Journal: | In post-proceedings of the 6th International Workshop on EXplainable and TRAnsparent AI and Multi-Agent Systems |
Year: | 2024 |
Month: | August |
Abstract: | Interest in Explainable Artificial Intelligence (henceforth XAI) has grown among researchers and AI developers in recent years. Indeed, the development of highly interactive technologies that collaborate closely with users has made explainability a necessity, as it aims to reduce the mistrust and sense of unpredictability that AI can create, especially among non-experts. Moreover, XAI has been recognized as a valuable resource, since it can make intelligent systems more user-friendly and mitigate the negative impact of black-box systems. Building on these considerations, the paper discusses the dangers of large language models (LLMs) that generate explanations to support their outputs: while such explanations may give users the illusion of control over the system's responses, their effects are in fact persuasive rather than explanatory. It is therefore argued that XAI, appropriately regulated, should be a resource that empowers users of AI systems, and that merely apparent explanations should be flagged as such, to avoid misleading users or circumventing their judgment. |
Keywords: | Anthropomorphism, Dependency, XAI |
Authors: | |
Added by: | [] |
Total mark: | 0 |