Document Type
Article
Publication Date
2024
Abstract
Federated Learning (FL) has emerged as a powerful paradigm, allowing multiple decentralized clients to collaboratively train a machine learning model without sharing their raw data. When combined with Multi-access Edge Computing (MEC), it enhances the utilization of computation and storage resources at the edge, enabling local data training on edge nodes. Such integration reduces latency and facilitates real-time processing and decision-making while ensuring data privacy. However, this decentralized approach introduces security and trust challenges, as models can be compromised through data poisoning attacks, such as label flipping attacks. The trustworthiness of these edge nodes and the integrity of their data are critical for the performance and reliability of FL models. This paper introduces an adaptive zero trust framework that, by default, does not assume any edge node to be trustworthy. It continuously validates edge data before each training round and checks each node's model to ensure that only reliable contributors are included in the global model aggregation. Results show that the proposed framework reduces the impact of malicious nodes, maintaining global model accuracy even in scenarios with a high number of malicious edge nodes, demonstrating its robustness and reliability.
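The abstract does not detail the paper's trust-scoring or validation procedure, so the following is only a minimal sketch of the general idea it describes: screening client updates before aggregation so that suspected poisoned contributions (e.g., from label flipping) are excluded from the global model. The function name `trust_filtered_aggregation`, the median-distance scoring rule, and the `deviation_threshold` parameter are illustrative assumptions, not the authors' method.

```python
import numpy as np

def trust_filtered_aggregation(client_updates, deviation_threshold=2.0):
    """Aggregate client model updates, excluding suspected poisoned ones.

    Hypothetical scoring rule: each client's update is compared with the
    coordinate-wise median update; clients whose distance from the median
    exceeds deviation_threshold * (median distance) are left out of the round.
    """
    updates = np.stack(client_updates)            # shape: (n_clients, n_params)
    median_update = np.median(updates, axis=0)    # robust reference point
    distances = np.linalg.norm(updates - median_update, axis=1)
    cutoff = deviation_threshold * np.median(distances)
    trusted = distances <= cutoff                 # zero trust: every node must pass the check
    return updates[trusted].mean(axis=0), trusted

# Toy round: 8 honest clients plus 2 whose updates were skewed by label flipping.
rng = np.random.default_rng(0)
honest = [rng.normal(0.0, 0.1, size=10) for _ in range(8)]
poisoned = [rng.normal(5.0, 0.1, size=10) for _ in range(2)]
global_update, trusted_mask = trust_filtered_aggregation(honest + poisoned)
print("clients kept in aggregation:", trusted_mask)
```

In this sketch the check is repeated every communication round, mirroring the abstract's point that no node is trusted by default and each must re-qualify before its update influences the global model.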
Recommended Citation
Hamdy, Abeer Dr., "Adaptive Trust Management for Data Poisoning Attacks in MEC-based FL Infrastructures" (2024). Computer Science. 47.
https://buescholar.bue.edu.eg/comp_sci/47