Accelerating Stable Diffusion Inference on Intel CPUs with Hugging Face (part 2) 🚀 🚀 🚀
In this video, you will learn how to accelerate image generation on an Intel Sapphire Rapids server. Using Stable Diffusion models, the Intel Extension for PyTorch, and system-level optimizations, we're going to cut inference latency from over 36 seconds down to 5 seconds!
⭐️⭐️⭐️ Don't forget to subscribe to be notified of future videos ⭐️⭐️⭐️
⭐️⭐️⭐️ Want to buy me a coffee? I can always use more :) https://www.buymeacoffee.com/julsimon ⭐️⭐️⭐️
- Blog post: https://huggingface.co/blog/stable-diffusion-inference-intel
- Code: https://gitlab.com/juliensimon/huggingface-demos/-/tree/main/optimum/stable_diffusion_intel
- jemalloc: https://jemalloc.net/
- Intel Extension for PyTorch: https://github.com/intel/intel-extension-for-pytorch
- Intel Sapphire Rapids: https://en.wikipedia.org/wiki/Sapphire_Rapids
- Intel Advanced Matrix Extensions: https://en.wikipedia.org/wiki/Advanced_Matrix_Extensions
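The "system-level optimizations" mentioned above boil down to swapping in a faster memory allocator and pinning threads to physical cores. A minimal sketch of such a setup is below; the jemalloc library path, the tuning values, and the script name `benchmark_sd.py` are assumptions that vary by distribution and machine — see the blog post linked above for the exact settings used there.

```shell
# Preload jemalloc so PyTorch's frequent tensor allocations go through it
# (path is distribution-dependent; this one is typical on Ubuntu).
export LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libjemalloc.so

# Example jemalloc tuning: keep background purging off the hot path.
export MALLOC_CONF="oversize_threshold:1,background_thread:true,metadata_thp:auto"

# One OpenMP thread per physical core avoids oversubscription;
# adjust 32 to your actual core count.
export OMP_NUM_THREADS=32

# Launch the generation script (hypothetical name) pinned to one NUMA node:
# numactl --physcpubind=0-31 --membind=0 python benchmark_sd.py
```

The `numactl` line is left commented out because core and node ranges depend on the machine's topology (`lscpu` shows them).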
A video from the Julien Simon channel.