Unreliable AI: Addressing the Cybersecurity Risks of LLM-Based Systems

Presented by

Vladislav Tushkanov - Research Development Group Manager | Machine Learning Technology | AI Research | Kaspersky

About this talk

As AI systems based on large language models become more powerful and widespread, new cybersecurity challenges emerge. From jailbreaks to indirect prompt injections, these systems are vulnerable to a wide array of LLM-specific threats. All of these problems, however, boil down to one core issue: LLMs are probabilistic algorithms and therefore inherently unreliable, just like any other ML system. Does this mean they cannot be useful? Absolutely not! In this talk, we will discuss the challenges of building secure and reliable LLM-based applications and ways to make them safer and better aligned with your business goals.
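As a minimal illustration of that probabilistic core, the sketch below simulates the stochastic sampling step at the heart of LLM decoding: the same prompt, scored the same way, can produce different outputs on different runs. The vocabulary and logit values are hypothetical, not taken from any real model.

```python
import math
import random

def sample_token(logits, temperature=1.0, rng=random):
    """Softmax over temperature-scaled logits, then sample one index.

    This mirrors the core stochastic step in LLM decoding; higher
    temperature flattens the distribution and increases variability.
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)                       # subtract max for numeric stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    r = rng.random()
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r < cum:
            return i
    return len(probs) - 1

# Hypothetical next-token candidates and scores for some fixed prompt.
vocab = ["secret", "hidden", "1234", "[refuse]"]
logits = [2.0, 1.5, 1.0, 1.8]

# Sample the "same" completion 20 times with different random states:
# the identical prompt yields several distinct tokens.
outputs = {vocab[sample_token(logits, rng=random.Random(seed))]
           for seed in range(20)}
print(outputs)
```

Because the output is drawn from a distribution rather than computed deterministically, guardrails around an LLM can only lower the probability of unwanted behavior, never reduce it to zero.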

More from this channel

On this channel, Kaspersky experts share their knowledge and key insights into high-fidelity threat hunting and intelligence, incident management, malware analysis, reverse engineering, security solutions, and several other vital aspects of the cyberworld. To keep you up to date, the experts also provide detailed webinars and workshops on how Kaspersky security solutions and services can halt and prevent a vast range of malicious attacks conducted by cybercriminals. Kaspersky is a global cybersecurity and digital privacy company that has been providing protection for 25 years, with over 400 million user…