FRAP: Capture the Fairness/Robustness/Accuracy/Privacy Trade-Off
The chief goal of today's Machine Learning models is Accuracy: the higher this value, the better the model. This is a serious oversight, as there are several other objectives a Machine Learning model should be optimized for, such as Robustness, Fairness, and Privacy.
Robustness captures how resistant the model is to adversarial behavior, e.g., an attacker injecting malicious samples into the training set. For instance, in autonomous driving, misclassifying 1% of the Stop signs as Give Way is more desirable than misclassifying only 0.1% of them as Speed Limit 100.
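As a minimal sketch of such a poisoning attack, the toy experiment below (all data and parameters are illustrative assumptions, not part of the project) flips the labels of 30% of the training set and measures how much the test accuracy of a simple 1-nearest-neighbour classifier degrades:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: two Gaussian blobs, one per class.
def make_data(n):
    y = rng.integers(0, 2, n)
    X = rng.normal(loc=y[:, None] * 2.0, scale=1.0, size=(n, 2))
    return X, y

def knn_accuracy(Xtr, ytr, Xte, yte):
    # 1-NN: each test point takes the label of its closest training point.
    d = ((Xte[:, None, :] - Xtr[None, :, :]) ** 2).sum(-1)
    return (ytr[d.argmin(1)] == yte).mean()

Xtr, ytr = make_data(1000)
Xte, yte = make_data(1000)
clean_acc = knn_accuracy(Xtr, ytr, Xte, yte)

# Poisoning: the attacker flips the labels of a random 30% of the training set.
yp = ytr.copy()
flip = rng.choice(len(yp), size=int(0.3 * len(yp)), replace=False)
yp[flip] = 1 - yp[flip]
poisoned_acc = knn_accuracy(Xtr, yp, Xte, yte)

print(f"clean accuracy:    {clean_acc:.3f}")
print(f"poisoned accuracy: {poisoned_acc:.3f}")
```

The drop from `clean_acc` to `poisoned_acc` is one concrete way to quantify (a lack of) Robustness.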
Fairness considers the model's behavior on different subgroups of the population. For example, face recognition software with 99% accuracy over the whole population could be more desirable than another with 99.9% accuracy overall but only 50% on a small ethnic group.
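The face recognition example above can be reproduced numerically. The snippet below (a hedged illustration with simulated predictions, not a real face recognition system) shows how a high overall accuracy can hide a severe per-group disparity:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical population: a 2% minority group with binary labels.
n = 10000
group = (rng.random(n) < 0.02).astype(int)
y_true = rng.integers(0, 2, n)

# Simulated classifier: 99.9% correct on the majority, 50% on the minority.
correct = np.where(group == 1, rng.random(n) < 0.5, rng.random(n) < 0.999)
y_pred = np.where(correct, y_true, 1 - y_true)

hits = (y_pred == y_true)
overall = hits.mean()
majority = hits[group == 0].mean()
minority = hits[group == 1].mean()
print(f"overall={overall:.3f}  majority={majority:.3f}  minority={minority:.3f}")
```

The overall accuracy stays close to 99% even though the minority group fares no better than a coin flip, which is exactly why per-subgroup metrics are needed.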
Finally, Privacy deals with undesired information leakage: a model with superior performance might leak a substantial portion of its training data, while another with merely decent performance reveals only a negligible amount.
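One standard way to quantify such leakage is a membership-inference attack: an overfitted model assigns lower loss to its training members, so an attacker guesses "member" whenever the loss falls below a threshold. The sketch below is a simplified assumption-laden illustration that simulates the member/non-member loss distributions directly instead of training a model:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical loss distributions of an overfitted model:
# members (training points) tend to have much lower loss.
member_loss = rng.exponential(scale=0.1, size=1000)
nonmember_loss = rng.exponential(scale=1.0, size=1000)

# Loss-threshold attack: guess "member" if the loss is below the threshold.
threshold = 0.3
guess_member = member_loss < threshold
guess_nonmember = nonmember_loss >= threshold

# Balanced attack accuracy: fraction of correct member/non-member guesses.
attack_acc = 0.5 * guess_member.mean() + 0.5 * guess_nonmember.mean()
print(f"membership-inference accuracy: {attack_acc:.3f}")
```

An attack accuracy well above 0.5 (random guessing) indicates that the model leaks membership information about its training data.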
As highlighted above, there are clear connections between Accuracy and the three aspects mentioned. Moreover, initial research papers exist on each of these aspects in combination with Accuracy, yet the other combinations remain unexplored. The student's task is to select two aspects (from Robustness, Privacy, and Fairness) and measure their effect on each other. A game-theoretic model is desired to determine the optimal setting based on some predefined incentives.
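To make the task concrete, one such trade-off measurement can be sketched as follows. This is a hedged toy example of the Accuracy/Fairness pair, with synthetic data and all parameters chosen for illustration: a logistic regression is trained with a demographic-parity penalty of strength `lam`, and both accuracy and the parity gap are recorded as `lam` varies.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical synthetic data: feature x correlated with a binary
# sensitive attribute s; the label y depends on x only.
n = 1000
s = rng.integers(0, 2, n)
x = rng.normal(loc=1.0 * s, scale=1.0, size=n)
y = (x + rng.normal(0.0, 0.5, n) > 0.5).astype(int)
X = np.column_stack([x, s, np.ones(n)])           # feature, attribute, bias

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(lam, lr=0.1, steps=5000):
    """Minimise log-loss + lam * (soft demographic-parity gap)^2."""
    w = np.zeros(3)
    m1, m0 = (s == 1), (s == 0)
    for _ in range(steps):
        p = sigmoid(X @ w)
        grad = X.T @ (p - y) / n                  # log-loss gradient
        d = p[m1].mean() - p[m0].mean()           # soft parity gap
        pq = p * (1 - p)                          # sigmoid derivative factor
        grad_d = (X[m1] * pq[m1, None]).mean(0) - (X[m0] * pq[m0, None]).mean(0)
        w -= lr * (grad + 2.0 * lam * d * grad_d)
    return w

results = {}
for lam in (0.0, 10.0):
    w = train(lam)
    pred = (X @ w > 0).astype(int)
    acc = (pred == y).mean()
    gap = abs(pred[s == 1].mean() - pred[s == 0].mean())
    results[lam] = (acc, gap)
    print(f"lambda={lam:4.1f}  accuracy={acc:.3f}  parity gap={gap:.3f}")
```

Sweeping `lam` traces out a trade-off curve: the parity gap shrinks as the penalty grows, typically at some cost in accuracy. The same measurement template applies to the other pairs, with Robustness or Privacy metrics substituted for the fairness gap, and a game-theoretic model could then select a point on such a curve from the players' incentives.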
For similar topics, please visit https://crysys.hu/member/pejo#projects