Microsoft Tay's infamous descent into unintended behavior serves as a stark reminder of AI's potential vulnerabilities in cybersecurity contexts.
This webinar from industry thought leader John Bambenek explores key lessons from Tay's deployment, highlighting the importance of safeguarding AI systems from exploitation and model poisoning. Participants will gain insight into managing AI securely by employing retrieval-augmented generation (RAG) grounded in trusted frameworks such as MITRE ATT&CK, D3FEND, and the NIST Cybersecurity Framework (CSF). Learn how these methodologies empower cybersecurity analysts to interpret security events rapidly, accurately, and securely, preventing adversaries from compromising AI integrity.
Discover practical strategies for keeping your organization's AI systems resilient against sophisticated AI-driven cyberattacks, and learn how to transform the security operations center (SOC) from an alert-triage function into a security program that stops the actual attackers targeting your organization.