Supporting secure code generation by identifying the characteristics of vulnerabilities

2025-2026/I.
Dr. Gazdag András
Koltai Beatrix

Large Language Models (LLMs) are increasingly used to generate code, but in certain contexts they produce insecure or vulnerable code. The risk is not uniform: some coding scenarios and patterns make vulnerability generation more likely, while others do not. Identifying these risky contexts is crucial for improving AI-assisted code generation.
Develop a solution that detects when an LLM is more likely to generate insecure code. The system should analyze existing code and recognize critical triggers that increase the risk of vulnerability generation.
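As a first approximation, such "critical triggers" could be modeled as lexical patterns in the surrounding code. The sketch below is purely illustrative and not part of the assignment: the trigger names and regular expressions are hypothetical examples of contexts (string-built SQL, shell command construction, hard-coded secrets) in which LLMs are often reported to emit vulnerable code. A real solution would likely use AST-level or learned features instead.

```python
import re

# Hypothetical trigger patterns for contexts where LLM code generation is
# often vulnerability-prone. Names and regexes are illustrative assumptions,
# loosely inspired by common weakness classes (e.g. SQL injection, OS command
# injection, hard-coded credentials).
TRIGGER_PATTERNS = {
    # SQL query assembled via string concatenation inside execute(...)
    "sql-string-concat": re.compile(r'execute\(\s*["\'].*["\']\s*\+'),
    # Direct shell command construction
    "shell-command": re.compile(r'os\.system\s*\(|shell\s*=\s*True'),
    # Credential-looking assignment with a literal value
    "hardcoded-secret": re.compile(r'(password|secret|api_key)\s*=\s*["\']',
                                   re.IGNORECASE),
}

def find_risky_contexts(source: str) -> list[tuple[int, str]]:
    """Return (line_number, trigger_name) pairs for lines that match
    any trigger pattern in the analyzed source code."""
    hits = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for name, pattern in TRIGGER_PATTERNS.items():
            if pattern.search(line):
                hits.append((lineno, name))
    return hits

if __name__ == "__main__":
    sample = (
        'cursor.execute("SELECT * FROM users WHERE id = " + user_id)\n'
        'os.system("ping " + host)\n'
        'print("hello")\n'
    )
    for lineno, trigger in find_risky_contexts(sample):
        print(f"line {lineno}: risky context -> {trigger}")
```

Flagged lines would mark places where an LLM asked to extend or complete the code is more likely to reproduce the insecure pattern, so generation there warrants extra scrutiny.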