
Accelerating Stable Diffusion Inference on Intel CPUs with Hugging Face (part 2) 🚀 🚀 🚀

In this video, you will learn how to accelerate image generation on an Intel Sapphire Rapids server. Using Stable Diffusion models, the Intel Extension for PyTorch, and system-level optimizations, we're going to cut inference latency from more than 36 seconds to about 5 seconds!

⭐️⭐️⭐️ Don't forget to subscribe to be notified of future videos ⭐️⭐️⭐️
⭐️⭐️⭐️ Want to buy me a coffee? I can always use more :) https://www.buymeacoffee.com/julsimon ⭐️⭐️⭐️

- Blog post: https://huggingface.co/blog/stable-diffusion-inference-intel
- Code: https://gitlab.com/juliensimon/huggingface-demos/-/tree/main/optimum/stable_diffusion_intel
- Jemalloc: https://jemalloc.net/
- Intel Extension for PyTorch: https://github.com/intel/intel-extension-for-pytorch
- Intel Sapphire Rapids: https://en.wikipedia.org/wiki/Sapphire_Rapids
- Intel Advanced Matrix Extensions: https://en.wikipedia.org/wiki/Advanced_Matrix_Extensions
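The jemalloc link above is part of the "system-level optimizations" mentioned in the description. A sketch of what such settings typically look like is below; the library path, thread count, and tuning values are assumptions to adapt to your own machine, not values quoted from the video.

```shell
# Illustrative system-level settings for CPU inference (values are examples).

# Preload jemalloc, if installed, and tune it for a long-lived process.
JEMALLOC_SO=/usr/lib/x86_64-linux-gnu/libjemalloc.so
if [ -f "$JEMALLOC_SO" ]; then
    export LD_PRELOAD="$JEMALLOC_SO"
fi
export MALLOC_CONF="oversize_threshold:1,background_thread:true,metadata_thp:auto"

# Pin OpenMP threads to physical cores to avoid oversubscription.
export OMP_NUM_THREADS=32   # set to your number of physical cores
export KMP_AFFINITY=granularity=fine,compact,1,0
export KMP_BLOCKTIME=1

# Then launch your inference script in this environment, e.g.:
# python stable_diffusion_inference.py   # hypothetical script name
```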

Channel: Julien Simon
Video information
Published: April 3, 2023, 14:48:19
Duration: 00:15:51