NVIDIA A100 Tensor Core GPUs, based on the NVIDIA Ampere architecture, deliver a momentous performance leap over previous generations. However, designing multi-GPU, multi-node systems that can be deployed quickly at large scale and that fully leverage this performance leap is another significant challenge. In this session, James He, Supermicro's director of system product management, and Charu Chaubal, NVIDIA's product marketing manager, will share today's most demanding AI and HPC use cases. You will learn how our collaborative platform design helps you create large-scale GPU cluster deployments with the latest software stack. The building-block design features up to 8 NVIDIA A100 GPUs with NVLink and NVSwitch, dual AMD EPYC processors with high core counts, and PCIe 4.0 lanes in one system. It combines the latest storage and networking technologies, such as GPUDirect Storage, RDMA, and NVMe-oF, to keep up with rapidly evolving, data-hungry applications.