High-Performance MPI Library with SR-IOV and SLURM for Virtualized InfiniBand Clusters
In this video from the 2016 OpenFabrics Workshop, DK Panda from Ohio State University presents: High-Performance MPI Library with SR-IOV and SLURM for Virtualized InfiniBand Clusters.
"The MVAPICH2 software libraries have been enabling many HPC clusters during the last 14 years to extract performance, scalability and fault-tolerance using OpenFabrics verbs. As the HPC field is moving to Exascale, many new challenges are emerging to design the next generation MPI, PGAS and Hybrid MPI+PGAS libraries with capabilities to scale to millions of processors while taking advantages of the latest trends in accelerator/co-processor technologies and the features of the OpenFabrics Verbs. In this talk, we will present the approach being taken by the MVAPICH2 project including support for new verbs-level capabilities (DC, UMR, ODP, and offload), PGAS (OpenSHMEM, UPC, CAF and UPC++), Hybrid MPI+PGAS models, tight-integration with NVIDIA GPUs (with GPUDirect RDMA) and Intel MIC, and designs leading to reduced energy consumption. We will also highlight a co-design approach where the capabilities of InfiniBand Network Analysis and Monitoring (INAM) can be used together with the new MPI-T capabilities of the MPI standard to analyze and introspect performance of an MPI program on an InfiniBand cluster and tune it further. We will also present upcoming plans of the MVAPICH2 project to provide support for emerging technologies: OmniPath, KNL and OpenPower."
Learn more: https://www.openfabrics.org/index.php/about-the-2016-ofa-workshop.html
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter