FRIDA: Free RIder Detection using Attacks
In Machine Learning, a model learns patterns from a dataset and uses them to make predictions. However, besides general patterns, the model can also memorize explicit information about specific data points, which can lead to privacy leakage. Such leakage can be revealed with a Membership Inference Attack, which determines whether a particular data sample was used for training.
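As a quick illustration of the idea (a sketch, not part of the topic itself), the simplest membership inference strategy thresholds the model's per-sample loss: an overfit model has much lower loss on its training members than on unseen points. The toy 1-nearest-neighbour "model" and all data below are made up for the example:

```python
# Loss-threshold membership inference on a deliberately overfit toy model.
# A 1-nearest-neighbour classifier memorizes its training set, so its
# per-sample loss cleanly separates members from non-members.

def nn_predict(train, x):
    """Return the label of the training point closest to x."""
    return min(train, key=lambda p: abs(p[0] - x))[1]

def sample_loss(train, x, y):
    """0/1 loss of the memorizing model on the sample (x, y)."""
    return 0.0 if nn_predict(train, x) == y else 1.0

def is_member(train, x, y, threshold=0.5):
    """Membership inference: low loss => the sample was likely trained on."""
    return sample_loss(train, x, y) < threshold

members = [(0.0, 0), (1.0, 1), (2.0, 0), (3.0, 1)]       # training set
non_members = [(0.4, 1), (1.4, 0), (2.6, 0)]             # unseen samples

# Members incur zero loss and are flagged; the unseen samples are not.
```

Real attacks replace the 0/1 loss with the model's confidence or cross-entropy loss, but the thresholding principle is the same.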
In Federated Learning, multiple participants train a single model together in a privacy-friendly way, i.e., their underlying datasets remain hidden from the other participants. As a consequence of this distributed setup, dishonest participants might behave maliciously by free-riding: enjoying the benefits of the commonly trained model without contributing to it.
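In the most common Federated Learning scheme (Federated Averaging), each participant trains locally and the server only sees and averages their model weights, which is why a noise-submitting free-rider is hard to spot. A minimal sketch of one aggregation round, with invented toy weights:

```python
def fed_avg(updates):
    """One FedAvg round: coordinate-wise average of the participants'
    locally trained weight vectors."""
    n = len(updates)
    return [sum(ws) / n for ws in zip(*updates)]

# Three participants' local weight vectors (toy numbers); the server
# never sees the data behind them, only these vectors.
local_updates = [[1.0, 2.0], [3.0, 4.0], [2.0, 0.0]]
global_model = fed_avg(local_updates)  # -> [2.0, 2.0]
```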
The student's interdisciplinary task is to read about Membership Inference Attacks and the free-riding problem in Federated Learning, and to propose a framework that connects the two, i.e., uses a Membership Inference Attack to determine whether a participant used actual data or merely random noise during training.
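To make the intuition concrete (this is a simplified sketch under strong assumptions, not the framework to be designed), one can run an MIA-style loss check against a participant's submitted model: a model trained on real data fits a probe set drawn from the genuine distribution, while a model fitted to noise does not. The linear model, the probe set, and the zero-label stand-in for the free-rider's noise are all invented for the example:

```python
# MIA-style free-rider check on submitted model updates (toy 1-D setting).

def fit_slope(points):
    """Least-squares slope through the origin for (x, y) pairs."""
    return sum(x * y for x, y in points) / sum(x * x for x, _ in points)

def avg_loss(slope, probe):
    """Mean squared error of the model y = slope * x on a probe set."""
    return sum((slope * x - y) ** 2 for x, y in probe) / len(probe)

def used_real_data(slope, probe, threshold=1.0):
    """MIA-style verdict: a model trained on real data fits the probe well."""
    return avg_loss(slope, probe) < threshold

xs = [i / 10 for i in range(1, 21)]
real = [(x, 2.0 * x) for x in xs]   # genuine relation: y = 2x
fake = [(x, 0.0) for x in xs]       # free-rider's made-up labels (noise stand-in)

honest_model = fit_slope(real)      # slope 2.0: fits the probe set
freerider_model = fit_slope(fake)   # slope 0.0: fails the probe check
```

In the actual project, the probe-set loss check would be replaced by a proper Membership Inference Attack on the participant's update, which is exactly the connection the framework should work out.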
For similar topics, please visit https://crysys.hu/member/pejo#projects