ENGLISH / MAGYAR
Kövess
minket

Security and Privacy in Machine Learning

2025-2026/II.
Dr. Ács Gergely

Machine learning is increasingly deployed in critical areas such as healthcare, security, and software development, but unresolved security and privacy issues hinder its safe and reliable adoption. Models can unintentionally leak sensitive data from training sets or prompts, raising concerns under strict regulations like GDPR and the upcoming EU AI Act. They are also vulnerable to adversarial inputs, prompt injection attacks, or data poisoning, which can compromise integrity and reliability. Our research tackles these challenges by both developing attacks to expose vulnerabilities and designing defenses to close them — from adversarial robustness techniques to privacy-preserving methods like Differential Privacy. At the same time, we explore how to ensure fairness in collaborative training, where multiple organizations (e.g., hospitals or banks) jointly train models without sharing raw data. This raises fundamental questions: How do we measure contributions fairly? How do we prevent cheating? And can we predict the value of collaboration before training even begins? Students can explore these questions through projects such as: 

  • Testing the robustness of large language models against adversarial prompts or code-generation vulnerabilities.
  • Investigating privacy leaks in generative AI, where models may memorize and reveal sensitive training data.
  • Own idea: If you have any own project idea related to the trustworthiness (security, privacy, fairness) of machine learning, and we find it interesting, you can work on that under our guidance. (Contact: Gergely Ács or Balázs Pejó)

Required skills: none 
Preferred skills: basic programming skills (e.g., python), machine learning (not required) 


5
3