CppCon 2018: Jefferson Amstutz “Compute More in Less Time Using C++ SIMD Wrapper Libraries”
http://CppCon.org
—
Presentation Slides, PDFs, Source Code and other presenter materials are available at: https://github.com/CppCon/CppCon2018
—
Leveraging SIMD (Single Instruction, Multiple Data) instructions is an important part of fully utilizing modern processors. However, using SIMD hardware features in C++ can be difficult, as it requires an understanding of how the underlying instructions work. Furthermore, there is not yet a standardized way to express C++ that guarantees such instructions are used effectively to increase performance.
This talk aims to demystify how SIMD instructions can benefit the performance of applications and libraries, and to demonstrate how a C++ SIMD wrapper library can greatly ease writing efficient, cross-platform SIMD code. While one particular library will be used to demonstrate elegant SIMD programming, the concepts shown are applicable to practically every C++ SIMD library currently available (e.g. boost.simd, tsimd, Vc, dimsum, etc.), as well as the proposed SIMD extensions to the C++ standard library.
Lastly, this talk will also seek to unify the greater topic of data parallelism in C++ by connecting the SIMD parallelism concepts demonstrated to other expressions of parallelism, such as SPMD/SIMT parallelism used in GPU computing.
—
Jefferson Amstutz, Software Engineer
Intel
Jeff is a Visualization Software Engineer at Intel, where he leads the open source OSPRay project. He enjoys all things ray tracing, high performance computing, clearly implemented code, and the perfect combination of git, CMake, and modern C++.
—
Videos Filmed & Edited by Bash Films: http://www.BashFilms.com