More Complex Vulnerabilities due to Generative AI

Large language models (LLMs) are finding their way into more and more IT products. From Kaspersky's perspective, this means the attack surface is growing significantly.

The security vendor has investigated the impact of artificial intelligence (AI) on the security landscape, focusing on its use by attackers but also on how it can support defenders. Against this background, Kaspersky identifies five key issues:

More complex vulnerabilities. LLMs that follow natural-language instructions are increasingly being integrated into consumer products. This will create new, complex vulnerabilities at the intersection of (probabilistic) generative AI and traditional (deterministic) technology, and the attack surface will grow accordingly. Developers will therefore have to introduce new security measures, for example requiring explicit user consent for actions initiated by LLM agents.
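
As a rough illustration of such a consent gate, here is a minimal Python sketch. All names in it (Action, require_consent, the delete_file example) are invented for this example and do not come from any real agent framework:

```python
# Minimal sketch of a user-consent gate for actions proposed by an LLM agent.
# All names (Action, require_consent, the delete_file example) are invented
# for illustration and are not part of any real agent framework.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Action:
    name: str
    description: str
    run: Callable[[], None]

def require_consent(action: Action) -> None:
    """Block any agent-proposed action until the user explicitly approves it."""
    answer = input(f"The assistant wants to: {action.description}. Allow? [y/N] ")
    if answer.strip().lower() == "y":
        action.run()
    else:
        print(f"Action '{action.name}' denied by the user.")

# Example: the model proposes a potentially destructive action; nothing
# executes unless the user agrees.
proposed = Action(
    name="delete_file",
    description="delete the file report_draft.docx",
    run=lambda: print("(file deleted)"),
)
require_consent(proposed)
```

The design point is simply that the agent proposes, but a deterministic gate, not the model, decides whether anything actually executes.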

Comprehensive AI assistants for cybersecurity experts. Red team members and security experts are increasingly leveraging generative AI for innovative cybersecurity tools. This could lead to assistants that use LLMs or machine learning (ML) to automate red team tasks. [Editor's note from Silicon: red teams act as external attackers on behalf of companies and public authorities to test the effectiveness of their IT security management.]
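
What such automation might look like in miniature: the hedged sketch below hands raw port-scan output to an LLM for triage. query_llm is a hypothetical placeholder for whatever model API a team actually uses; it is stubbed here so the script runs offline:

```python
# Hedged sketch: using an LLM to triage scan results during an authorized
# red-team engagement. query_llm is a hypothetical placeholder, stubbed
# with a canned answer so this example runs without any model endpoint.

def query_llm(prompt: str) -> str:
    # Stub: a real tool would call a model API here.
    return (
        "1. Port 22 (OpenSSH 7.2p2) - outdated, review known CVEs first.\n"
        "2. Port 80 (nginx 1.10.3) - fingerprint the web application next."
    )

scan_output = """\
PORT   STATE SERVICE VERSION
22/tcp open  ssh     OpenSSH 7.2p2
80/tcp open  http    nginx 1.10.3
"""

prompt = (
    "You assist an authorized red-team engagement. Rank these scanned "
    "services by which to investigate first, with a short reason each:\n"
    + scan_output
)
print(query_llm(prompt))
```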

Neural networks to generate images for scams. This year, scammers could expand their tactics by using neural networks and AI tools to create more convincing scam content.

AI will not fundamentally change the cybersecurity world. Despite the current AI trend, Kaspersky does not expect a fundamental shift in the threat landscape in the near future. Just like cybercriminals, IT security teams will use the same or more advanced generative AI tools to improve the security of software and networks.

More initiatives and regulations. Synthetic (artificially generated) content will need to be labeled, which will require further regulation and investment in detection technologies. Developers and researchers will devise methods to make synthetic media more easily identifiable and traceable, for example through watermarking.
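
To illustrate the principle behind such watermarks, here is a deliberately simplified Python sketch: it embeds an identifying bit pattern in the least significant bits of pixel values and recovers it afterwards. Real watermarks in generative systems are far more robust against cropping, compression, and re-encoding; this shows only the basic embed-and-detect idea:

```python
# Simplified least-significant-bit watermark: embed a tag in pixel values
# and read it back. For illustration only; production synthetic-media
# watermarks use much more robust schemes.

MARK = "AI"  # identifying tag to embed

def to_bits(text: str) -> list[int]:
    return [int(b) for ch in text.encode() for b in f"{ch:08b}"]

def embed(pixels: list[int], bits: list[int]) -> list[int]:
    out = pixels.copy()
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit  # overwrite the least significant bit
    return out

def extract(pixels: list[int], n_chars: int) -> str:
    bits = [p & 1 for p in pixels[: n_chars * 8]]
    chars = [int("".join(map(str, bits[i:i + 8])), 2)
             for i in range(0, len(bits), 8)]
    return bytes(chars).decode()

pixels = list(range(50, 114))            # stand-in for raw pixel values
marked = embed(pixels, to_bits(MARK))
print(extract(marked, len(MARK)))        # prints "AI"
```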