A Step-by-Step Guide to Securing Large Language Models (LLMs)

Presented by

Ravi Ithal, Co-Founder and CTO, Normalyze

About this talk

Securing Large Language Models (LLMs) is critical in today's AI landscape. This talk examines LLMs as data compressors, the challenges that compressed data poses, and techniques for tracing data origins. Join this session to learn how to keep LLMs from training on sensitive or biased data by implementing on-demand scanning, training automation, and a proxy that withholds sensitive outputs.
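As a hypothetical illustration of the "proxy system to withhold sensitive outputs" mentioned above (the talk does not specify an implementation), a minimal filter might sit between the model and the caller and redact recognizable sensitive patterns before a response is returned. The pattern set, labels, and `fake_llm` stand-in below are all assumptions for the sketch:

```python
import re

# Assumed, illustrative patterns only: a real deployment would use a
# proper sensitive-data classifier, not two regexes.
SENSITIVE_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
}

def redact(text: str) -> str:
    """Replace each matched sensitive value with a [REDACTED:<type>] tag."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        text = pattern.sub(f"[REDACTED:{label}]", text)
    return text

def proxy_response(generate, prompt: str) -> str:
    """Call the underlying model, then scrub its output before returning."""
    return redact(generate(prompt))

# Stand-in for a real model call:
fake_llm = lambda p: "Contact jane@example.com, SSN 123-45-6789."
print(proxy_response(fake_llm, "Who is the contact?"))
# -> Contact [REDACTED:EMAIL], SSN [REDACTED:SSN].
```

The key design point is that redaction happens on the output path, so even a model that memorized sensitive training data cannot surface it to the caller.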
More from this channel

We discuss how to understand the full range of risks to your cloud, on-prem, and hybrid data, and how to eliminate the risks that matter most. Let's put data security at the center of information security, where it belongs.