Mitigating AI Security Risks in Content Generation: Securing API-Based AI Systems

Logo
Presented by

Amritha Arun Babu, Technical Product Management Leader

About this talk

AI-powered content generation introduces security risks, including data leakage, prompt injection, adversarial misuse, and compliance gaps. Companies using AI APIs from providers like OpenAI and Claude must address these vulnerabilities to prevent unauthorized data access and model exploitation. This session breaks down real-world attack scenarios, including users manipulating AI prompts to extract sensitive information and API endpoints being exploited for adversarial attacks. We will introduce a structured AI security framework and a conceptual product design for mitigating these risks. Attendees will learn how to implement real-time AI monitoring, enforce API-level security controls, and automate content governance to protect their systems without limiting AI-driven innovation.
Related topics:

More from this channel

Upcoming talks (8)
On-demand talks (653)
Subscribers (215131)
This channel features presentations by leading experts in the field of information security. From application, computer, network and Internet security to
access control management, data privacy and other hot topics, you will walk away with practical advice for your strategic and tactical information security
initiatives.