An Update on CXL Specification Advancements
Jim Pappas, CXL Consortium
Jim Pappas is Director of Technology Initiatives at Intel, responsible for establishing broad industry ecosystems around new technologies in the areas of Enterprise I/O, Energy Efficient Computing, Solid State Storage, and Persistent Memory. Jim has founded or served on several industry organizations in these areas, including PCI-SIG, USB, SNIA, IBTA, OFA, The Green Grid (TGG), Compute Express Link™ (CXL), and many others. Jim has over 30 years' experience in the computer industry and holds eight U.S. patents in computer graphics and microprocessor technologies. He holds a B.S.E.E. from the University of Massachusetts, Amherst.
Compute Express Link™ (CXL) is a high-speed CPU-to-device and CPU-to-memory interconnect designed to accelerate next-generation data center performance. This presentation provides an update on the latest advancements in CXL specification development, its use cases, and its industry differentiators. CXL enables a high-speed, efficient interconnect between the CPU and platform enhancements such as workload accelerators. Attendees will learn how CXL technology:

- Allows resource sharing for higher performance
- Reduces complexity and lowers overall system cost
- Permits users to focus on target workloads rather than redundant memory management
- Builds upon PCI Express® infrastructure
- Supports new use cases for caching devices and accelerators, accelerators with memory, and memory buffers

The CXL Consortium has released the CXL 1.1 specification, and the next generation of the spec is currently under development. Consortium members can contribute to spec development and help shape the ecosystem.
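The use cases listed above correspond to the three device types defined in the CXL specification, each of which runs a different combination of the CXL sub-protocols (CXL.io, CXL.cache, and CXL.mem) over the shared PCI Express physical layer. A minimal sketch of that mapping, using the Type 1/2/3 naming from the spec (the example device names in the comments are illustrative, not from this talk):

```python
# Device types defined by the CXL specification and the sub-protocols
# each type uses over the link. CXL.io (the PCIe-based I/O protocol)
# is mandatory for every device type.
CXL_DEVICE_TYPES = {
    "Type 1": {  # caching devices / accelerators (e.g. a SmartNIC)
        "use_case": "caching devices and accelerators",
        "protocols": ("CXL.io", "CXL.cache"),
    },
    "Type 2": {  # accelerators with local memory (e.g. a GPU or FPGA card)
        "use_case": "accelerators with memory",
        "protocols": ("CXL.io", "CXL.cache", "CXL.mem"),
    },
    "Type 3": {  # memory buffers / memory expansion devices
        "use_case": "memory buffers",
        "protocols": ("CXL.io", "CXL.mem"),
    },
}

def protocols_for(device_type: str) -> tuple:
    """Return the CXL sub-protocols a given device type runs over the link."""
    return CXL_DEVICE_TYPES[device_type]["protocols"]
```

For example, a Type 3 memory buffer never issues coherent cache requests of its own, so it needs only CXL.io and CXL.mem, while a Type 2 accelerator uses all three sub-protocols.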
Video: "An Update on CXL Specification Advancements," from the InsideHPC Report channel.