Security and Privacy in Machine Learning
Machine learning (and artificial intelligence in general) has become undeniably popular in recent years, and the number of security-critical applications of machine learning is steadily increasing (self-driving cars, user authentication, decision support, profiling, risk assessment, etc.). However, many privacy and security problems of machine learning remain open. Students can work on the following topics:
Own idea: If you have your own project idea related to data privacy or to the security/privacy of machine learning, and I find it interesting, you can work on it under my guidance... You'll get +1 grade in that case. (Contact: Gergely Acs)
Robustness, evasion in malware detection: Adversarial examples are maliciously modified samples where the modification is imperceptible, yet the model's prediction on the slightly modified sample differs drastically from its prediction on the unmodified one. A potential task is to develop solutions that distinguish adversarial from benign samples, or to develop robust training algorithms for malware detection.
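To illustrate the idea, here is a minimal, self-contained sketch of the Fast Gradient Sign Method (one classic way to craft adversarial examples) against a toy logistic-regression "classifier". The weight vector, feature vector, and epsilon below are all made-up illustrative values, not part of any real malware detector:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps):
    """Fast Gradient Sign Method for logistic regression:
    perturb x by eps in the direction that INCREASES the
    cross-entropy loss, i.e. x_adv = x + eps * sign(dL/dx)."""
    p = sigmoid(w @ x + b)
    grad_x = (p - y) * w          # dL/dx for binary cross-entropy
    return x + eps * np.sign(grad_x)

# Toy "detector": w, b, x are hypothetical values for illustration.
w = np.array([1.5, -2.0, 0.7, 3.1, -1.2])
b = 0.0
x = w / np.linalg.norm(w)         # a confidently "malicious" sample (y = 1)
y = 1

x_adv = fgsm(x, y, w, b, eps=0.6)
print(sigmoid(w @ x + b) > 0.5)      # → True  (original flagged as malicious)
print(sigmoid(w @ x_adv + b) > 0.5)  # → False (perturbed sample evades)
```

Note that for malware the interesting (and harder) part is that perturbations must preserve the file's functionality, which is exactly why detection and robust training are open problems.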
Privacy and Security of Federated Learning: Federated learning allows multiple parties to collaboratively train a common model by sharing only model updates instead of their training data. Although this architecture seems more privacy-preserving at first sight, recent work has highlighted numerous privacy attacks that can infer private and sensitive information from these updates. The task is to develop privacy and/or security attacks against federated learning, and to mitigate such attacks.
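A minimal sketch of why model updates alone can leak training data: for a logistic-regression client that computes its update on a single record, the gradient has the analytic form grad_w = (p - y) * x and grad_b = (p - y), so a curious server can recover the raw input exactly as grad_w / grad_b. The record below is a hypothetical example, not real data:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# --- Client side: one gradient computed on a single private record ---
w = np.zeros(4); b = 0.0                      # current global model
x_private = np.array([0.2, -1.3, 0.5, 2.0])   # hypothetical sensitive record
y_private = 1
p = sigmoid(w @ x_private + b)
grad_w = (p - y_private) * x_private          # update sent to the server
grad_b = (p - y_private)

# --- Server side: the gradient's structure leaks the input ---
# grad_w = (p - y) * x and grad_b = (p - y), hence x = grad_w / grad_b.
x_reconstructed = grad_w / grad_b
print(np.allclose(x_reconstructed, x_private))  # → True: exact recovery
```

Real gradient-inversion attacks generalize this idea to deep networks and batched updates via optimization; mitigations include differential privacy and secure aggregation.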
Anonymization: Sequential data is any data where a record contains a user's sequence of items (e.g., location trajectories, time-series data such as electricity consumption, browsing history, etc.). A potential task is to develop (GDPR-compliant) anonymization methods so that individuals are no longer re-identifiable in the dataset.
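As a starting point, here is a toy sketch of a k-anonymity check for sequences, together with a deliberately naive anonymizer that truncates all trajectories to the longest prefix length at which every record is shared by at least k users. The trajectories and the `truncate_to_k` strategy are illustrative assumptions; practical trajectory anonymization is far more involved:

```python
from collections import Counter

def is_k_anonymous(records, k):
    """True if every record (a tuple of items) occurs at least k times,
    i.e. no individual's sequence is unique enough to re-identify them."""
    counts = Counter(records)
    return all(c >= k for c in counts.values())

def truncate_to_k(records, k):
    """Naive sequence anonymizer (illustrative only): cut all sequences
    to the longest prefix length at which the dataset is k-anonymous."""
    max_len = max(len(r) for r in records)
    for n in range(max_len, -1, -1):
        cut = [tuple(r[:n]) for r in records]
        if is_k_anonymous(cut, k):
            return cut
    return [() for _ in records]

# Hypothetical location trajectories (one per user).
trajs = [("home", "work", "gym"),
         ("home", "work", "bar"),
         ("home", "mall", "gym"),
         ("home", "mall", "bar")]
print(is_k_anonymous([tuple(t) for t in trajs], 2))  # → False: all unique
print(truncate_to_k(trajs, 2))                        # 2-anonymous prefixes
```

Truncation destroys a lot of utility, which is precisely the tension a real project would address (e.g., via generalization, suppression, or differentially private synthesis).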
More information: https://www.crysys.hu/education/projects/