TY - GEN
T1 - What "counts" as explanation in social interaction?
A1 - Albert, Saul
A1 - Buschmeier, Hendrik
A1 - Cyra, Katharina
A1 - Even, Christiane
A1 - Hamann, Magnus
A1 - Licoppe, Christian
A1 - Mlynar, Jakub
A1 - Pelikan, Hannah
A1 - Porcheron, Martin
A1 - Reeves, Stuart
A1 - Rudaz, Damien
A1 - Sormani, Philippe
A1 - Tuncer, Sylvaine
Y1 - 2023
UR - https://saulalbert.net/blog/what-counts-as-explanation-in-social-interaction/
KW - AI
KW - artificial intelligence
KW - ethnomethodology and conversation analysis
KW - explainability
KW - Explainable AI
KW - explainable AI (XAI)
KW - explainable artificial intelligence
KW - eXplainable Artificial Intelligence (XAI)
N2 - Measuring explainability in explainable AI (X-AI) usually involves technical methods for evaluating and auditing automated decision-making processes to highlight and eliminate potential sources of bias. By contrast, human practices of explaining usually involve doing explanation as a social action (Miller, 2019). X-AI’s transparent machine learning models can help to explain the proprietary ‘black boxes’ often used by high-stakes decision support systems in legal, financial, or diagnostic contexts (Rudin, 2019). However, as Rohlfing et al. (2021) point out, effective explanations, however technically accurate they may be, always involve processes of co-construction and mutual comprehension. Explanations usually involve at least two parties — the system and the user interacting with it at a particular point in time — and require ongoing contributions from both explainer and explainee. Without accommodating action, X-AI models appear to offer context-free, one-size-fits-all technical solutions that may not satisfy users’ expectations as to what constitutes a proper explanation.
ER -