
LTX 2.3 Transition LoRA — Character Morphs Tested on 24GB and 8GB VRAM

How to use any LoRA with LTX 2.3 in ComfyUI — full setup plus real benchmarks on 24GB and 8GB VRAM.

This LoRA produces smooth visual transformations within a single shot: character morphs, style shifts, scene transitions. Made by Valiant Cat and trained on LTX 2.3. Once you learn how to connect one LoRA, you can connect any of them.

📊 BENCHMARK RESULTS

24GB — Q4_K_S | Gemma API
Warm run: 343.0 sec

8GB — Q3_K_M | Gemma API
Warm run: 373.1 sec
⚠️ 8GB tested via VRAM cap on RTX 3090 (~7GB available for generation, OBS recording overhead included)

Note: on 8GB the test was limited to a ~3-second clip to stay within VRAM. From 12GB upward, results improve noticeably.
Both setups used the Gemma API to save VRAM.
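For reference, the gap between the two warm runs works out to under 10%. A quick sketch using the numbers above:

```python
# Warm-run times from the benchmark section above (seconds).
warm_24gb = 343.0
warm_8gb = 373.1

# Relative slowdown of the 8GB setup vs. the 24GB setup.
slowdown_pct = (warm_8gb / warm_24gb - 1) * 100
print(f"8GB warm run is {slowdown_pct:.1f}% slower than 24GB")  # ~8.8% slower
```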

⚙️ SETUP
- LoRA file → ComfyUI/models/loras/
- Load workflow JSON → drag into ComfyUI browser
- LoRA strength: 1.0 | Guidance: 1.0 | CFG: 4.0
- Trigger word: zhuanchang (always at end of prompt)
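If you script your prompts (for example when driving ComfyUI through its HTTP API), the settings above reduce to plain values, and the one hard rule is that the trigger word goes at the end of the prompt. A minimal sketch — the helper name and example prompt are illustrative, not part of the workflow:

```python
# Sketch only: build_prompt is a hypothetical helper, not part of the workflow.
TRIGGER = "zhuanchang"  # trigger word, always appended at the end of the prompt

def build_prompt(description: str) -> str:
    """Append the trigger word to the end of the user's prompt."""
    return f"{description.rstrip('. ')}. {TRIGGER}"

# Values from the setup section above.
settings = {
    "lora_strength": 1.0,
    "guidance": 1.0,
    "cfg": 4.0,
}

prompt = build_prompt("A knight morphs into a dragon mid-stride")
print(prompt)  # ends with the trigger word
```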

🔗 FREE WORKFLOW + AUTO INSTALLER
The workflow is free in my community group:
👉 https://whop.com/viper-ai-vault/viper-ai-community/
Auto installer detects your GPU and sets everything up automatically.

One-click auto-installer (this LoRA comes pre-integrated):
👉 https://whop.com/viper-ai-vault/auto-installs-ltx-2-3/

Part 1 — LTX 2.3 24GB Setup: https://youtu.be/lWwpTXfVsBE
Part 2 — LTX 2.3 8GB/12GB/16GB Benchmarks: https://youtu.be/AvG-sMRfcZs

LoRA: https://huggingface.co/valiantcat/LTX-2.3-Transition-LORA/tree/main

0:00 Transition LoRA demo
0:18 What is a LoRA
0:52 About this LoRA — Valiant Cat
1:23 Download files from HuggingFace
1:43 File placement — loras folder
2:01 Load workflow in ComfyUI
2:25 Settings — strength, guidance, CFG
2:58 Trigger word and prompt structure
3:33 24GB cold and warm run
5:01 Switch to 8GB setup
5:13 8GB cold and warm run
6:08 Results comparison — 24GB vs 8GB
6:59 Free workflow and auto installer
7:21 LoRA vs no LoRA — side by side
7:46 Outro

#LTX23 #LTXLoRA #TransitionLoRA #ComfyUI #LocalAI #AIVideo #LowVRAM #8GBVRAM #LTXVideo #GGUF #AIVideoGeneration #Lightricks #ComfyUIWorkflow #OpenSourceAI #CharacterMorph

Video "LTX 2.3 Transition LoRA — Character Morphs Tested on 24GB and 8GB VRAM" from the @Viper_AI_Vaunt channel