xPU Deployment and Solutions Deep Dive

Presented by

Tim Michels, F5; Mario Baldi, AMD Pensando; Amit Radzi, NeuReality; John Kim, NVIDIA

About this talk

Our first and second webcasts in this xPU series explained what xPUs are, how they work, and what they can do. In this third webcast, we will dive deeper into next steps for xPU deployment and solutions, discussing:

When to deploy
• Pros and cons of dedicated accelerator chips versus running everything on the CPU
• xPU use cases across hybrid, multi-cloud, and edge environments
• Cost and power considerations

Where to deploy
• Deployment operating models: Edge, Core Data Center, CoLo, Public Cloud
• System location: in the server, with the storage, on the network, or in all of those locations?

How to deploy
• Mapping workloads to hyperconverged and disaggregated infrastructure
• Integrating xPUs into workload flows
• Applying offload and acceleration elements within an optimized solution
SNIA is a not-for-profit global organization made up of corporations, universities, startups, and individuals. Its members collaborate to develop and promote vendor-neutral architectures, standards, and education for the management, movement, and security of data-handling and data-optimization technologies. SNIA focuses on the transport, storage, acceleration, format, protection, and optimization of data infrastructure.