Two interesting talks on LLMs for cybersecurity by a SESAM researcher at Ericsson and Synteda
Image generated by Google Gemini

In the final weeks of 2025, SESAM researcher Oleksandr Adamov delivered two invited talks to SESAM industrial partners Ericsson and Synteda, addressing contemporary challenges at the intersection of cybersecurity and artificial intelligence.
LLMs for threat modeling
The first talk examined how Large Language Models (LLMs) can be leveraged to enhance threat modeling and shift security verification earlier in the software development lifecycle. By assisting analysts during the design phase, LLMs can identify potential vulnerabilities, infer realistic attack paths, and generate plausible threat scenarios directly from architectural artifacts. This enables the automated translation of high-level threat models into concrete test cases and mitigation strategies before implementation begins. As a result, security assurance moves from a predominantly reactive activity to a predictive and preventive practice, reducing the likelihood that exploitable weaknesses propagate into deployed systems.
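To make that workflow concrete, the minimal sketch below prompts a general-purpose LLM to derive threat scenarios from a short architectural description. It uses the OpenAI Python client for concreteness; the model name, the prompt wording, the STRIDE framing, and the sample architecture are illustrative assumptions, not details from the talk.

```python
# Minimal sketch: asking an LLM to propose threat scenarios from an
# architecture artifact. Model name, prompts, and the sample system
# are illustrative assumptions, not details from the talk.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A high-level architectural artifact, as might exist at design time.
architecture = """
Components:
- Web frontend (React) served over HTTPS
- REST API gateway with JWT authentication
- PostgreSQL database storing user credentials and billing data
- Nightly batch job exporting reports to an S3 bucket
Data flows:
- frontend -> API gateway -> database
- batch job -> database -> S3
"""

prompt = (
    "Act as a security analyst performing design-time threat modeling. "
    "For the architecture below, list plausible threats using STRIDE "
    "categories, a realistic attack path for each, and one concrete "
    "test case or mitigation per threat.\n\n" + architecture
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model; any capable chat model works
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

In a real pipeline, the model would be asked for structured output (e.g., JSON) so that each generated threat and its test case can be filed automatically against the design, which is what shifts verification ahead of implementation.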
LLMs in cyberwarfare
The second talk focused on threat intelligence and the evolution of cyberwarfare in the context of the invasion of Ukraine. Oleksandr traced the progression of state-sponsored cyber operations since 2015, beginning with destructive campaigns such as BlackEnergy and KillDisk, followed by the globally disruptive NotPetya attack. He then analyzed the 2022 escalation marked by multiple coordinated wiper attacks, including WhisperGate, HermeticWiper, and CaddyWiper, increasingly delivered via stealthy and fileless techniques. By 2025, according to CERT-UA reports, a new phase had emerged in which advanced persistent threat groups integrated LLMs, such as Qwen2.5-Coder-32B, into their operations. These models enabled autonomous reconnaissance, decision-making, and data exfiltration, signaling the advent of reasoning-capable malware and the first documented use of just-in-time LLM execution in active cyber campaigns. The talk concluded with a structured taxonomy of AI-augmented attack mechanisms and a discussion of defensive strategies based on machine learning, behavioral analysis, and counter-LLM techniques.
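To give one concrete flavor of the behavioral-analysis defenses mentioned above, the sketch below trains an unsupervised anomaly detector on benign per-process behavior and flags deviations such as a sudden exfiltration burst. The feature set, the synthetic data, and the choice of scikit-learn's IsolationForest are illustrative assumptions, not the defenses presented in the talk.

```python
# Minimal sketch of behavioral anomaly detection: features and data
# are synthetic assumptions, illustrating the idea rather than the
# talk's actual defensive pipeline.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Per-process behavior features: [files read/min, KB sent/min,
# distinct remote hosts contacted, child processes spawned]
benign = np.column_stack([
    rng.normal(20, 5, 500),   # routine file access
    rng.normal(50, 15, 500),  # modest network output
    rng.poisson(2, 500),      # few remote hosts
    rng.poisson(1, 500),      # rare child processes
])

# Train only on benign behavior; no labeled attack samples are needed.
detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(benign)

# A process resembling LLM-driven exfiltration: mass file reads and a
# burst of outbound traffic to many hosts.
suspicious = np.array([[400, 5000, 25, 10]])
print(detector.predict(suspicious))  # -1 means flagged as anomalous
```

One appeal of this approach against LLM-driven malware is that it needs no signature of the model or its prompts: an agent that must read, decide, and exfiltrate at machine speed tends to look behaviorally different from the benign baseline.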
Oleksandr Adamov is a researcher at Blekinge Institute of Technology (BTH), where his work focuses on the application of artificial intelligence to cybersecurity and software security engineering.