
Fine Tuning Video Generation Models | Make Your Own AI Videos

Training video models with… images? It turns out video diffusion and image diffusion are similar enough that we can fine-tune the latest open-source video generation models with only a limited number of labeled images.
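The reason image data works at all is that video diffusion models operate on latents with an extra frames axis, so a still image can be treated as a one-frame video. A minimal sketch of that idea (the tensor sizes here are illustrative, not Wan2.1's actual latent dimensions):

```python
import torch

# A video diffusion model typically expects latents shaped
# (batch, channels, frames, height, width).
# An image is just a one-frame video: add a frames axis of size 1.
image_latent = torch.randn(1, 16, 32, 32)   # (batch, channels, h, w) - sizes are made up
video_latent = image_latent.unsqueeze(2)    # (batch, channels, 1, h, w)

print(tuple(video_latent.shape))  # (1, 16, 1, 32, 32)
```

This is why an image-only dataset can still teach a video model a visual style: the temporal layers simply see clips of length one.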

Resources:
Model Trained: https://huggingface.co/AdamLucek/Wan2.1-T2V-14B-OldBookIllustrations
Written Blog Tutorial: https://learn2train.medium.com/fine-tuning-wan-2-1-with-a-curated-dataset-step-by-step-guide-a6f0b334ab79
Video Diffusion Blog: https://lilianweng.github.io/posts/2024-04-12-diffusion-video/#adapting-image-models-to-generate-videos
Diffusion Pipe: https://github.com/tdrussell/diffusion-pipe
ComfyUI: https://github.com/comfyanonymous/ComfyUI
Wan Paper: https://arxiv.org/pdf/2503.20314
Wan2.1-T2V-14B: https://huggingface.co/Wan-AI/Wan2.1-T2V-14B

Chapters:
00:00 - Introduction
00:34 - How Video Generation Works
06:40 - Training: Dataset Preparation
11:42 - Training: Environment Setup
18:28 - Training: Configuration
23:38 - Training: Testing the Models
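For the dataset-preparation step, a common convention with diffusion-pipe-style trainers is to pair each training image with a same-named `.txt` sidecar file holding its caption (verify against the repo's own dataset docs). A hypothetical sketch with made-up filenames and captions:

```python
from pathlib import Path

# Hypothetical captions for a few training images; in practice these come
# from manual labeling or a captioning model.
captions = {
    "plate_001.jpg": "old book illustration, a sailing ship in a storm",
    "plate_002.jpg": "old book illustration, a fox in a forest clearing",
}

dataset_dir = Path("dataset")
dataset_dir.mkdir(exist_ok=True)

for image_name, caption in captions.items():
    # Write e.g. dataset/plate_001.txt alongside dataset/plate_001.jpg
    caption_path = dataset_dir / (Path(image_name).stem + ".txt")
    caption_path.write_text(caption, encoding="utf-8")

print(sorted(p.name for p in dataset_dir.glob("*.txt")))  # ['plate_001.txt', 'plate_002.txt']
```

Keeping captions in sidecar files makes it easy to curate the dataset by hand before pointing the training config at the directory.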

#ai #datascience #finetuning

Video "Fine Tuning Video Generation Models | Make Your Own AI Videos" from the Adam Lucek channel