Abstract
Differentially Private Federated Learning (DP-FL) is a novel machine learning paradigm that integrates federated learning with the principles of differential privacy. In DP-FL, a global model is trained across decentralized devices or servers, each holding local data samples, without the need to exchange raw data. This approach ensures data privacy by adding noise to the model updates before aggregation, thus preventing any individual contributor’s data from being compromised. However, ensuring the integrity of the model updates from these contributors is paramount. This research explores the application of autoencoders as a means to detect anomalous or fraudulent updates from contributors in DP-FL. By leveraging the reconstruction errors generated by autoencoders, this study assesses their effectiveness in identifying anomalies while also discussing potential limitations of this approach.
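For illustration only, the sketch below (not taken from the paper) shows one way the pipeline described in the abstract could look: a client's flattened model update is clipped and Gaussian-noised before release, and a small autoencoder trained on benign updates flags submissions whose reconstruction error exceeds a percentile threshold. The update dimension, noise scale, simulated data, and 99th-percentile threshold are all assumptions made for this example, not values from the study.

```python
# Illustrative sketch only (not the authors' implementation): flag anomalous
# DP-noised model updates by the reconstruction error of a small autoencoder.
import torch
import torch.nn as nn

torch.manual_seed(0)

DIM = 32          # flattened length of a client's model update (assumed)
NOISE_STD = 0.1   # std of Gaussian noise added for differential privacy (assumed)

def dp_noise(update: torch.Tensor, std: float = NOISE_STD) -> torch.Tensor:
    """Clip an update and add Gaussian noise before it is sent for aggregation."""
    clipped = update / max(1.0, update.norm().item())  # simple norm clipping
    return clipped + torch.randn_like(clipped) * std

# Simulated benign updates (small, well-behaved) and anomalous ones (large, arbitrary).
benign = torch.randn(500, DIM) * 0.05
anomalous = torch.randn(20, DIM) * 1.5
train = torch.stack([dp_noise(u) for u in benign])

autoencoder = nn.Sequential(
    nn.Linear(DIM, 8), nn.ReLU(),   # encoder
    nn.Linear(8, DIM),              # decoder
)
opt = torch.optim.Adam(autoencoder.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for _ in range(200):                 # train on (noised) benign updates only
    opt.zero_grad()
    loss = loss_fn(autoencoder(train), train)
    loss.backward()
    opt.step()

def reconstruction_error(update: torch.Tensor) -> float:
    with torch.no_grad():
        return loss_fn(autoencoder(update), update).item()

# Threshold taken from the benign training distribution (99th percentile, assumed).
errors = torch.tensor([reconstruction_error(u) for u in train])
threshold = torch.quantile(errors, 0.99).item()

for u in anomalous[:3]:
    e = reconstruction_error(dp_noise(u))
    print(f"error={e:.4f}  flagged={e > threshold}")
```

In this toy setup, anomalous updates reconstruct poorly because the autoencoder has only learned the structure of benign updates; whether that separation survives the DP noise added for privacy is exactly the kind of limitation the abstract says the paper discusses.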
Original language | English |
---|---|
Title of host publication | Proceedings of the 21st International Conference on Security and Cryptography, SECRYPT 2024 |
Editors | Sabrina De Capitani Di Vimercati, Pierangela Samarati |
Publisher | SCITEPRESS-Science and Technology Publications, Lda. |
Pages | 467-474 |
Number of pages | 8 |
ISBN (Electronic) | 9789897587092 |
DOIs | |
Publication status | Published - 2024 |
Event | 21st International Conference on Security and Cryptography, SECRYPT 2024 - Dijon, France (Duration: 8 Jul 2024 → 10 Jul 2024; Conference number: 21) |
Publication series
Series | Proceedings of the International Conference on Security and Cryptography |
---|---|
ISSN | 2184-7711 |
Conference
Conference | 21st International Conference on Security and Cryptography, SECRYPT 2024 |
---|---|
Abbreviated title | SECRYPT 2024 |
Country/Territory | France |
City | Dijon |
Period | 8/07/24 → 10/07/24 |
Keywords
- Anomaly Detection
- Autoencoder
- Differential Privacy
- Federated Learning