TY - CONF
T1 - LLM-based Evaluation Methodology of Explanation Strategies
A1 - Soyarar, Ege
A1 - Aydoğan, Reyhan
A1 - Buzcu, Berk
A1 - Calvaresi, Davide
T2 - Proceedings of EXTRAAMAS 2025
Y1 - 2025
KW - Explanation Evaluation
KW - Large Language Models (LLMs)
KW - Recommender Systems
KW - Synthetic Data Generation
KW - Explainable AI (XAI)
N2 - As data privacy regulations such as the EU AI Act and the EU Data Act become increasingly stringent, processing real user data for AI models such as movie recommendation systems has grown more challenging. Moreover, human-centric data collection and evaluation for Explainable AI (XAI) systems are often costly and time-consuming, making them hard to sustain. Hence, this study adopts the Synthetic Behavior Generation (SBG) approach, leveraging large language models (LLMs) to evaluate AI explanations while ensuring compliance with regulations and providing a cost-effective alternative to human feedback. To assess the quality of these explanations, we employ three different LLMs, which are fed synthetically generated user behaviors and evaluate the explanations of an AI system as if they were real users. The evaluation focuses on key criteria such as convincingness, clarity, accuracy, and impact on decision-making, enabling a thorough assessment of explanation effectiveness. The results indicate that LLMs can deliver structured and consistent evaluations based on the provided synthetic user behavior.
ER -