The Wildcard XAI: from a Necessity, to a Resource, to a Dangerous Decoy
Type of publication: Article
Journal: Post-proceedings of the 6th International Workshop on EXplainable and TRAnsparent AI and Multi-Agent Systems
Year: 2024
Month: August
Abstract: Interest in Explainable Artificial Intelligence (henceforth XAI) has grown among researchers and AI developers in recent years. The spread of highly interactive technologies that collaborate closely with users has made explainability a necessity, as it is meant to reduce the mistrust and the sense of unpredictability that AI can create, especially among non-experts. XAI has also been recognized as a valuable resource, since it can make intelligent systems more user-friendly and mitigate the negative impact of black-box systems. Building on these considerations, the paper discusses the potential dangers of large language models (LLMs) that generate explanations to support the outcomes they produce. While such explanations may give users the illusion of control over the system's responses, their effect is persuasive rather than explanatory. It is therefore argued that XAI, appropriately regulated, should be a resource that empowers users of AI systems, and that merely apparent explanations should be flagged as such to avoid misleading and circumventing effects.
Keywords: Anthropomorphism, Dependency, XAI
Authors: Carli, Rachele
Calvaresi, Davide