Jukka Seppänen <Kijai@users.noreply.huggingface.co>
Update README.md

Separated LTX2.3 checkpoints as an alternative way to load the models in Comfy.


The fp8 quantizations were done with basic static weight scales and are set not to run with fp8 matmuls. The models marked input_scaled additionally have activation scaling and are set to run with fp8 matmuls on supported hardware (roughly 40xx-series and later Nvidia GPUs).
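The distinction above can be sketched numerically: a static weight scale maps each tensor's max magnitude into the fp8 (e4m3) range, while the input_scaled variants also carry an activation scale so the matmul itself can run in fp8 and the two scales are multiplied back afterwards. This is a minimal NumPy sketch of that idea, not the repo's actual quantization code; all function names here are hypothetical, and real code would also round values to the e4m3 grid.

```python
# Hedged sketch of static per-tensor fp8 (e4m3) scaling; helper names
# are hypothetical, not from the repo.
import numpy as np

F8_E4M3_MAX = 448.0  # largest finite value representable in float8 e4m3


def static_weight_scale(w: np.ndarray) -> float:
    """Per-tensor scale mapping the tensor's max magnitude to the fp8 range."""
    return float(np.abs(w).max()) / F8_E4M3_MAX


def quantize_dequantize(w: np.ndarray, scale: float) -> np.ndarray:
    # Divide by the scale and clamp into fp8 range; a real quantizer
    # would additionally round to the e4m3 grid before rescaling.
    q = np.clip(w / scale, -F8_E4M3_MAX, F8_E4M3_MAX)
    return q * scale


def fp8_matmul(x: np.ndarray, w: np.ndarray, s_x: float, s_w: float) -> np.ndarray:
    # With an activation scale s_x available (the "input_scaled" case),
    # both operands fit the fp8 range and the matmul can run in fp8;
    # the combined scale is multiplied back onto the result.
    xq = np.clip(x / s_x, -F8_E4M3_MAX, F8_E4M3_MAX)
    wq = np.clip(w / s_w, -F8_E4M3_MAX, F8_E4M3_MAX)
    return (xq @ wq) * (s_x * s_w)


rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4)).astype(np.float32)
s_w = static_weight_scale(w)
w_dq = quantize_dequantize(w, s_w)
```

Without an activation scale, only the weights are stored in fp8 and the matmul still runs at higher precision, which is why the plain checkpoints are set not to use fp8 matmuls.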

As this is the first time I'm attempting to calibrate input scales, these are pretty experimental, but the results seem to work. This is a test on a 4090, 8 steps with the distill model:

<video controls autoplay width="50%" src="https://cdn-uploads.huggingface.co/production/uploads/63297908f0b2fc94904a65b8/ALNr3_j0klp29fHkI3pyt.mp4"></video>

Tiny VAE by madebyollin

It can currently be used like this:

(workflow screenshot)
