What "counts" as explanation in social interaction?
Type of publication: Misc
Publication status: Published
Year: 2023
Month: November
Howpublished: 2nd TRR 318 Conference: Measuring Understanding, University of Paderborn, Paderborn, Germany
URL: https://saulalbert.net/blog/wh...
Abstract: Measuring explainability in explainable AI (X-AI) usually involves technical methods for evaluating and auditing automated decision-making processes in order to highlight and eliminate potential sources of bias. By contrast, human practices of explaining usually involve doing explanation as a social action (Miller, 2019). X-AI's transparent machine learning models can help to explain the proprietary 'black boxes' often used by high-stakes decision support systems in legal, financial, or diagnostic contexts (Rudin, 2019). However, as Rohlfing et al. (2021) point out, effective explanations, however technically accurate they may be, always involve processes of co-construction and mutual comprehension. An explanation usually involves at least two parties, the system and the user interacting with it at a particular point in time, and it requires ongoing contributions from both explainer and explainee. Without accommodating this interactional work, X-AI models appear to offer context-free, one-size-fits-all technical solutions that may not satisfy users' expectations of what constitutes a proper explanation.
Keywords: AI, artificial intelligence, ethnomethodology and conversation analysis, explainability, explainable AI (XAI)