# LTX-2 19B Dev (4-bit) - MLX
This is a 4-bit quantized version of the LTX-2 19B Dev model, optimized for Apple Silicon using MLX.
## Model Description
LTX-2 is a state-of-the-art video generation model from Lightricks. This version has been quantized to 4-bit precision for efficient inference on Apple Silicon devices with MLX.
## Key Features
- Pipeline: Dev (full control with CFG scale)
- Quantization: 4-bit precision
- Framework: MLX (Apple Silicon optimized)
- Memory: ~19GB of unified memory required
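Since the quantized weights alone occupy roughly 19GB, it is worth confirming the machine has headroom before a first run. A minimal sketch for macOS, where `sysctl hw.memsize` reports total unified memory (the 24GB floor below is an illustrative assumption, not an official requirement):

```python
import subprocess

# Query total unified memory on macOS via sysctl (assumes a macOS host).
mem_bytes = int(subprocess.check_output(["sysctl", "-n", "hw.memsize"]).strip())
mem_gb = mem_bytes / 1e9

print(f"Unified memory: {mem_gb:.1f} GB")
if mem_gb < 24:
    # ~19GB for weights plus activations; 24GB is a conservative floor (assumption).
    print("Warning: generation may swap or fail on this machine.")
```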
## Usage
### Installation
```bash
pip install git+https://github.com/CharafChnioune/mlx-video.git
```
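After installing, a quick way to confirm MLX itself is working and targeting the GPU. This uses the standard `mlx.core` API; `mlx-video`'s own entry points are exercised in the examples below:

```python
import mlx.core as mx

# Print the default device; on Apple Silicon this should report the GPU.
print(mx.default_device())

# A tiny computation to confirm the Metal backend evaluates correctly.
x = mx.ones((4, 4))
print((x @ x).sum())  # expected: 64.0
```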
### Command Line
```bash
# Basic generation
mlx-video --prompt "A beautiful sunset over the ocean" \
  --model-repo AITRADER/ltx2-dev-4bit-mlx \
  --pipeline dev \
  --height 512 --width 512 \
  --num-frames 33

# Dev pipeline with CFG
mlx-video --prompt "A cat playing with yarn" \
  --model-repo AITRADER/ltx2-dev-4bit-mlx \
  --pipeline dev \
  --steps 40 --cfg-scale 4.0
```
### Python API
```python
from mlx_video import generate_video

video = generate_video(
    prompt="A beautiful sunset over the ocean",
    model_repo="AITRADER/ltx2-dev-4bit-mlx",
    pipeline="dev",
    height=512,
    width=512,
    num_frames=33,
)
```
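The CLI flags above suggest the Python API also exposes sampling controls. The keyword names in this sketch (`steps`, `cfg_scale`) are assumptions mirroring `--steps` and `--cfg-scale`; check the mlx-video source if they differ:

```python
from mlx_video import generate_video

# steps/cfg_scale are assumed kwargs mirroring the CLI's --steps/--cfg-scale.
video = generate_video(
    prompt="A cat playing with yarn",
    model_repo="AITRADER/ltx2-dev-4bit-mlx",
    pipeline="dev",
    height=512,
    width=512,
    num_frames=33,
    steps=40,       # assumption: mirrors --steps
    cfg_scale=4.0,  # assumption: mirrors --cfg-scale
)
```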
## Model Files
- `ltx-2-19b-dev-mlx.safetensors` - Main model weights (4-bit quantized)
- `quantization.json` - Quantization configuration
- `config.json` - Model configuration
- `layer_report.json` - Layer information
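The repository can be fetched ahead of time with `huggingface_hub`, which also makes it easy to inspect the quantization settings locally. A sketch using the standard `snapshot_download` API:

```python
import json
from pathlib import Path

from huggingface_hub import snapshot_download

# Download all model files listed above into the local HF cache.
local_dir = Path(snapshot_download(repo_id="AITRADER/ltx2-dev-4bit-mlx"))

# Inspect the 4-bit quantization configuration shipped with the weights.
with open(local_dir / "quantization.json") as f:
    print(json.dumps(json.load(f), indent=2))
```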
## Performance
| Resolution | Frames | Steps |
|---|---|---|
| 512x512 | 33 | ~40 |
| 768x512 | 33 | ~40 |
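Step counts alone do not determine wall-clock time, which varies by chip generation and memory bandwidth. A simple way to measure it on your own hardware (illustrative; reuses the API call from above):

```python
import time

from mlx_video import generate_video

start = time.perf_counter()
video = generate_video(
    prompt="A beautiful sunset over the ocean",
    model_repo="AITRADER/ltx2-dev-4bit-mlx",
    pipeline="dev",
    height=512,
    width=512,
    num_frames=33,
)
elapsed = time.perf_counter() - start
print(f"Generated 33 frames at 512x512 in {elapsed:.1f}s")
```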
## License
This model is released under the LTX Video License.
## Acknowledgements
- Lightricks for the original LTX-2 model
- MLX team at Apple for the framework
- mlx-video for the MLX conversion