LLM-based Evaluation Methodology of Explanation Strategies
Type of publication: Proceedings
Publication status: Accepted
Booktitle: Proceedings of EXTRAAMAS 2025
Year: 2025
Abstract: As data privacy regulations, such as the EU AI Act and EU Data Act, become increasingly stringent, processing real user data for AI models like movie recommendation systems has grown more challenging. Moreover, human-centric data collection and evaluation of Explainable AI (XAI) systems are often costly and time-consuming, making them hard to sustain. Hence, this study adopts the Synthetic Behavior Generation (SBG) approach, leveraging large language models (LLMs) to evaluate AI explanations while ensuring compliance with regulations and providing cost-effective alternatives to human feedback. To assess the quality of these explanations, we utilize three different LLMs, which are fed synthetically generated user behaviors to evaluate explanations of an AI system as if they were real users. The evaluation focuses on key criteria such as convincingness, clarity, accuracy, and impact on decision-making, facilitating a thorough assessment of explanation effectiveness. The results indicate that LLMs can deliver structured and consistent evaluations based on the provided synthetic user behavior.
Keywords: Explanation Evaluation, Large Language Models (LLMs), Recommender Systems, Synthetic Data Generation, Explainable AI (XAI)
Authors: Soyarar, Ege
Aydoğan, Reyhan
Buzcu, Berk
Calvaresi, Davide