Mitigating AI Risks with Confidential Computing in the Cloud

Presented by Mark Bower, VP Product & Market Development

About this talk

Confidential Computing introduces new techniques that bolster the security of applications and data, establishing a robust defense against insider breaches and unauthorized access. This approach tackles the challenges of scenarios that require secure collaboration among untrusted parties, such as generative AI applications, where data rights management, privacy, and control are paramount to thwart model poisoning, data leakage, and theft. In this webinar, we explore the critical importance of safeguarding in-use code and data in the context of emerging AI large language models (LLMs). Through attack demonstrations and real-world use cases in financial services and healthcare, we illustrate how a Confidential Computing-based zero-trust architecture enables organizations to meet the escalating demand for AI technologies with confidence.

You will learn how:
- Confidential Computing enhances app and data security against insider threats.
- This technology addresses enterprise challenges in secure collaboration.
- Confidential Computing is applied to reduce risks in cloud-hosted AI technology, such as LLMs (a brief illustrative sketch follows below).
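To make the zero-trust idea concrete, here is a minimal Python sketch of the client-side gate such an architecture implies: data is released to a cloud-hosted LLM only after the enclave's attestation evidence matches an approved code measurement. The report format, measurement value, and function names are illustrative assumptions, not part of the talk or of any specific vendor SDK.

```python
import hashlib
import hmac

# Hypothetical expected enclave measurement: a hash of the approved
# LLM-serving image. In practice this value comes from the build and
# attestation pipeline, not from a hard-coded string.
EXPECTED_MEASUREMENT = hashlib.sha256(b"approved-llm-inference-image-v1").hexdigest()


def verify_attestation(report: dict) -> bool:
    """Accept the enclave only if its reported code measurement matches the
    approved value. Real attestation also verifies the hardware vendor's
    signature chain over the report; that step is omitted in this sketch."""
    return hmac.compare_digest(report.get("measurement", ""), EXPECTED_MEASUREMENT)


def send_prompt_if_trusted(report: dict, prompt: str) -> str:
    # Zero-trust rule: no prompts, keys, or records leave the client until
    # the remote TEE proves it is running the expected code.
    if not verify_attestation(report):
        raise PermissionError("Attestation failed; prompt withheld")
    # Stand-in for an encrypted channel terminated inside the enclave.
    return f"[sent to enclave] {prompt}"


if __name__ == "__main__":
    report = {"measurement": EXPECTED_MEASUREMENT}
    print(send_prompt_if_trusted(report, "Summarize this claims history."))
```

In a real deployment the same check would gate the release of decryption keys, fine-tuning data, or inference prompts, so that sensitive material is only ever processed inside the verified Trusted Execution Environment.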

Anjuna Seaglass™ is the world’s first Universal Confidential Computing Platform, capable of running applications in any cloud with complete data security and privacy. Anjuna Seaglass isolates workloads in a hardware-assisted Trusted Execution Environment that intrinsically secures data in every state – in use, at rest, and in transit – to create a zero trust environment.