
Poisoning Shapley

2023-2024/II.
Dr. Pejó Balázs

In Federated Learning, multiple participants train a single model together in a privacy-friendly way, i.e., their underlying datasets remain hidden from the other participants. It is well known that the training data can be poisoned to decrease the model's performance either in general (untargeted attack) or for a specific class (targeted attack). Moreover, the data can also be poisoned so that a desired fairness objective is violated or the privacy of the data samples is compromised.
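To make the two classic attack flavours concrete, below is a minimal, hypothetical sketch of label-flipping data poisoning. The function names and the plain NumPy data layout are illustrative assumptions, not part of any specific framework.

```python
# Minimal, hypothetical sketch of label-flipping data poisoning.
# Function names and the plain-NumPy layout are illustrative assumptions.
import numpy as np

def poison_untargeted(labels, num_classes, rate=0.5, seed=0):
    """Flip a fraction of labels to random wrong classes (hurts overall accuracy)."""
    rng = np.random.default_rng(seed)
    poisoned = labels.copy()
    idx = rng.choice(len(labels), size=int(rate * len(labels)), replace=False)
    # Add a random non-zero offset modulo the class count, so the new label is wrong.
    poisoned[idx] = (poisoned[idx] + rng.integers(1, num_classes, size=len(idx))) % num_classes
    return poisoned

def poison_targeted(labels, source, target):
    """Relabel every sample of one class as another (hurts only the source class)."""
    poisoned = labels.copy()
    poisoned[labels == source] = target
    return poisoned
```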

Contribution-measuring techniques, such as the Shapley value, assign a score to each participant, reflecting their importance or usefulness for the training. The question naturally arises: by injecting malicious participants into the participant pool, is it possible to manipulate the contribution scores of the other participants (i.e., to increase or decrease them arbitrarily)?
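For reference, the Shapley value of a participant is its average marginal contribution over all coalitions of the other participants. Below is a self-contained sketch of the exact computation, where `utility` stands for an assumed coalition-to-performance mapping, e.g., the validation accuracy of a model trained on the coalition's joint data.

```python
# Self-contained sketch of the exact Shapley value over a participant pool.
# `utility` is an assumed callable mapping a coalition (a frozenset of
# participant ids) to a score, e.g., validation accuracy of the joint model.
from itertools import combinations
from math import factorial

def shapley_values(participants, utility):
    n = len(participants)
    scores = {p: 0.0 for p in participants}
    for p in participants:
        others = [q for q in participants if q != p]
        for k in range(n):  # size of the coalition S that does not contain p
            weight = factorial(k) * factorial(n - k - 1) / factorial(n)
            for coalition in combinations(others, k):
                s = frozenset(coalition)
                # Weighted marginal contribution of p to coalition S.
                scores[p] += weight * (utility(s | {p}) - utility(s))
    return scores
```

Note that the exact computation is exponential in the number of participants, which is why practical contribution-measurement schemes in Federated Learning typically rely on sampling-based approximations.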

The student's task is to become familiar with Contribution Score Computation techniques as well as with poisoning attacks within Federated Learning, and to empirically test (i.e., with experiments) whether such control is feasible and to what extent.
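A possible starting point for such an experiment, reusing the two sketches above: inject one poisoner into a small honest pool and observe how the honest participants' Shapley scores shift. The toy `fl_accuracy` utility below is only a stand-in for actually training and evaluating a federated model per coalition.

```python
# Hypothetical experiment skeleton reusing shapley_values() from above.
# fl_accuracy is a toy stand-in for per-coalition federated training plus
# validation; a real experiment would train a model on the coalition's data.
def fl_accuracy(coalition):
    honest_k = sum(1 for p in coalition if p != "attacker")
    acc = 1.0 - 0.5 ** honest_k  # diminishing returns from more honest data
    return acc * (0.5 if "attacker" in coalition else 1.0)  # poisoning halves accuracy

honest = ["A", "B", "C"]
baseline = shapley_values(honest, fl_accuracy)
attacked = shapley_values(honest + ["attacker"], fl_accuracy)
for p in honest:
    print(p, round(attacked[p] - baseline[p], 3))  # negative: the attack lowered p's score
```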

For similar topics, please visit https://crysys.hu/member/pejo#projects

