Cautions of Designing Your Own NPU

Presented by Paul Karazuba, VP of Marketing, Expedera

About this talk

A relatively new class of processor, the NPU (Neural Processing Unit), is taking its place in heterogeneous designs. As a result, chip designers who would rarely consider creating custom CPU or GPU architectures are grappling with whether to design their own custom NPUs. This talk will highlight the complexities of designing NPUs and explore why a better option might be a silicon-proven, packet-based AI engine that can be customized for current and future workloads.

Expedera provides scalable neural engine semiconductor IP that enables major improvements in performance, power, and latency while reducing cost and complexity in AI inference applications. Expedera's third-party silicon-validated solutions deliver superior performance and scale across a wide range of applications, from edge nodes and smartphones to automotive and data centers. Expedera's Origin deep learning accelerator products are easily integrated, readily scalable, and can be customized to application requirements. The company is headquartered in Santa Clara, California. Visit expedera.com.