The School of Computer Science is pleased to present…
Defending Federated Learning Against Model Poisoning Attacks
MSc Thesis Proposal by: Ibraheem Aloran
Date: Wednesday December 6, 2023
Time: 12:00 – 1:00 pm
Location: Essex Hall, Room 122
Abstract:
Federated Learning (FL) is a machine learning framework that allows multiple clients to jointly train a single model without sacrificing the privacy of their data. Although FL addresses some security issues, it remains susceptible to model poisoning attacks, in which malicious clients attempt to corrupt the global model by sending poisoned updates. FLDetector is a defense that addresses this issue by detecting the majority of malicious clients and removing them from the FL setting. Its main idea is that poisoned updates are inconsistent over time compared to honest updates, so FLDetector scores each client by the consistency of its updates. One limitation of this method is that FLDetector always partitions clients into exactly two clusters, even when two is not the optimal number of clusters. As a result, it misclassifies a fraction of honest clients as malicious and removes them from the FL setting, depriving the global model of useful data.
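The consistency idea can be illustrated with a toy scoring sketch. This is a simplification for illustration only, not FLDetector's actual algorithm (which predicts each client's update from historical information); here the predicted updates are assumed to be given, and the function name is hypothetical.

```python
import numpy as np

def suspicious_scores(predicted, received):
    """Toy per-round consistency score: the farther a client's received
    update is from the update predicted for it, the higher its score.
    Rows are clients, columns are model parameters."""
    dist = np.linalg.norm(received - predicted, axis=1)  # per-client distance
    return dist / dist.sum()  # normalize so the scores sum to 1
```

In practice such scores would be averaged over a sliding window of rounds to smooth out one-off noise before any clustering decision is made.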
The proposed method resolves this by using the Gap statistic to determine the optimal number of clusters before clustering the clients. The cluster with the lowest average malicious score is classified as honest, while the remaining clusters are classified as malicious. This allows FLDetector to remove the majority of malicious clients while keeping all honest clients. Preliminary experiments show an improvement in performance.
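The cluster-selection step described above could be sketched as follows. This is a minimal illustration assuming one-dimensional per-client malicious scores and scikit-learn's KMeans; the function names and parameter choices are mine, not the thesis implementation.

```python
import numpy as np
from sklearn.cluster import KMeans

def gap_statistic(scores, k_max=5, n_refs=10, seed=0):
    """Estimate the number of clusters in 1-D scores via the Gap
    statistic: compare within-cluster dispersion against that of
    uniform reference data, and pick the smallest k satisfying
    Gap(k) >= Gap(k+1) - s(k+1)."""
    rng = np.random.default_rng(seed)
    X = np.asarray(scores, dtype=float).reshape(-1, 1)
    lo, hi = X.min(), X.max()
    gaps, sks = [], []
    for k in range(1, k_max + 1):
        wk = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(X).inertia_
        ref_wks = []
        for _ in range(n_refs):
            ref = rng.uniform(lo, hi, size=X.shape)  # uniform reference set
            ref_wks.append(
                KMeans(n_clusters=k, n_init=10, random_state=seed).fit(ref).inertia_)
        log_ref = np.log(ref_wks)
        gaps.append(log_ref.mean() - np.log(wk))
        sks.append(log_ref.std() * np.sqrt(1.0 + 1.0 / n_refs))
    for k in range(1, k_max):
        if gaps[k - 1] >= gaps[k] - sks[k]:
            return k
    return k_max

def honest_clients(scores, k):
    """Cluster the scores into k groups and return the indices of the
    cluster with the lowest mean score, treated as the honest clients."""
    X = np.asarray(scores, dtype=float).reshape(-1, 1)
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
    means = [X[km.labels_ == c].mean() for c in range(k)]
    return np.where(km.labels_ == int(np.argmin(means)))[0]
```

Note that when the Gap statistic returns k = 1, no client is flagged, which is exactly the case the fixed two-cluster approach handles poorly.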
Thesis Committee:
Internal Reader: Dr. Dima Alhadidi
External Reader: Dr. Ahmed Hamdi Sakr
Advisor: Dr. Saeed Samet