How to make TensorFlow models run faster on GPUs
XLA compilation on GPU can greatly boost the performance of your models, with recorded speedups of roughly 1.2x to 35x. Learn how to use @tf.function(jit_compile=True) in TensorFlow to control exactly which scopes are compiled, and how to debug the performance of the resulting program. We'll cover writing compiled models, debugging them, and exploring the performance characteristics and optimizations the XLA compiler performs, followed by a detailed case study of XLA usage in Google's GPU MLPerf submission. We'll also cover how automatic kernel fusion by XLA reduces memory bandwidth requirements and improves the performance of your models. You should have basic familiarity with TensorFlow and with GPU computing in general.
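Below is a minimal sketch of the jit_compile flag in action; the function name, shapes, and ops are illustrative assumptions, not taken from the video:

import tensorflow as tf

# Opt a function into XLA compilation with jit_compile=True.
@tf.function(jit_compile=True)
def fused_dense_relu(x, w, b):
    # XLA can fuse the matmul, bias add, and ReLU into fewer GPU kernels,
    # reducing memory bandwidth pressure compared to running each op separately.
    return tf.nn.relu(tf.matmul(x, w) + b)

x = tf.random.normal([1024, 512])
w = tf.random.normal([512, 256])
b = tf.random.normal([256])

# The first call compiles the function for these input shapes;
# later calls with the same shapes reuse the compiled executable.
y = fused_dense_relu(x, w, b)
print(y.shape)  # (1024, 256)

To inspect what XLA generated for debugging, one option (assuming TensorFlow 2.5 or later) is fused_dense_relu.experimental_get_compiler_ir(x, w, b)(stage='optimized_hlo'), which returns the optimized HLO for the compiled function.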
Subscribe to TensorFlow → https://goo.gle/TensorFlow
product: TensorFlow - General; re_ty: Publish;
purpose: Educate; pr_pr: TensorFlow; series: Coding TensorFlow; type: DevByte (deck cleanup 10-20min); GDS: Yes; presenter: George Karpenkov
Video information
Published: August 17, 2021, 21:00:15
Duration: 00:21:04