TY - CONF
T1 - Towards a Meaningful Communication and Model Aggregation in Federated Learning via Genetic Programming
A1 - Pacioni, Elia
A1 - Fernández de Vega, Francisco
A1 - Calvaresi, Davide
T2 - Proceedings of the 17th International Conference on Agents and Artificial Intelligence: ICAART
Y1 - 2025
VL - 3
SP - 1427
EP - 1431
PB - SciTePress
CY - Porto, Portugal
SN - 978-989-758-737-5
SN - 2184-433X
UR - https://www.scitepress.org/Papers/2025/133804/133804.pdf
DO - 10.5220/0013380400003890
KW - Communication Efficiency
KW - Federated Learning
KW - Genetic Programming
KW - Model Aggregation
KW - Multi-Agent Systems
N2 - Federated Learning (FL) enables collaborative training of machine learning models while preserving client data privacy. However, its conventional client-server paradigm presents two key challenges: (i) communication efficiency and (ii) model aggregation optimization. Inefficient communication, often caused by transmitting low-impact updates, results in unnecessary overhead, particularly in bandwidth-constrained environments such as wireless or mobile networks, or in scenarios with numerous clients. Furthermore, traditional aggregation strategies lack the adaptability required for stable convergence and optimal performance. This paper emphasizes the distributed nature of FL clients (agents) and advocates for local, autonomous, and intelligent strategies to evaluate the significance of their updates, such as using a "distance" metric relative to the global model. This approach improves communication efficiency by prioritizing impactful updates. Additionally, the paper proposes an adaptive aggregation method leveraging genetic programming and transfer learning to dynamically evolve aggregation equations, optimizing the convergence process. By integrating insights from multi-agent systems, the proposed approach aims to foster more efficient and robust frameworks for decentralized learning.
ER -