Expanded README.md with installation, configuration, and usage instructions.
#15 opened by smolSWE

README.md CHANGED
---
title: LTX Video Fast
emoji: 🎥
colorFrom: yellow
colorTo: pink
sdk: gradio
sdk_version: 5.29.1
app_file: app.py
pinned: false
short_description: ultra-fast video model, LTX 0.9.7 13B distilled
---

# LTX Video Fast

This project provides an ultra-fast video generation model, LTX 0.9.7 13B distilled.

## Installation

1. **Clone the repository:**

   ```bash
   git clone <repository_url>
   cd <repository_directory>
   ```

2. **Install the dependencies:**

   ```bash
   pip install -r requirements.txt
   ```

The `requirements.txt` file includes the following dependencies:

```
accelerate
transformers
sentencepiece
pillow
numpy
torchvision
huggingface_hub
spaces
opencv-python
imageio
imageio-ffmpeg
einops
timm
av
git+https://github.com/huggingface/diffusers.git@main
```

## Configuration

The project uses YAML configuration files to define model parameters and pipeline settings. Example configuration files are located in the `configs` directory:

* `configs/ltxv-13b-0.9.7-dev.yaml`
* `configs/ltxv-13b-0.9.7-distilled.yaml`
* `configs/ltxv-2b-0.9.1.yaml`
* `configs/ltxv-2b-0.9.5.yaml`
* `configs/ltxv-2b-0.9.6-dev.yaml`
* `configs/ltxv-2b-0.9.6-distilled.yaml`
* `configs/ltxv-2b-0.9.yaml`

To use a specific configuration, specify its path when running the generation scripts.
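
For instance, a minimal sketch that selects the distilled 13B config (the flags are the ones documented under Usage below; the prompt text is a placeholder):

```bash
# Hypothetical invocation: point --config at the YAML file for the model variant you want.
python inference.py \
  --config configs/ltxv-13b-0.9.7-distilled.yaml \
  --mode text2video \
  --prompt "A timelapse of clouds rolling over a mountain ridge"
```

Swapping the `--config` path, e.g. to `configs/ltxv-2b-0.9.6-distilled.yaml`, switches to the smaller 2B variant without changing the rest of the command.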

## Usage

The project supports three generation modes: text-to-video, image-to-video, and video-to-video.

### Text-to-Video

To generate a video from a text prompt, use the `inference.py` script with the `--mode text2video` argument. You must also specify the path to the desired config file using `--config` and a text prompt using `--prompt`. For example:
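
A minimal sketch, assuming the script is run from the repository root (the config choice and prompt text are placeholders):

```bash
# Text-to-video generation using the flags described above.
python inference.py \
  --mode text2video \
  --config configs/ltxv-13b-0.9.7-distilled.yaml \
  --prompt "A hummingbird hovering over a red flower, slow motion"
```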

### Image-to-Video

To generate a video from an image, use the `inference.py` script with the `--mode image2video` argument. You must also specify the path to the desired config file using `--config` and the path to the input image using `--image_path`. For example:
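
A sketch along the same lines (`input.jpg` is a placeholder path to your own image):

```bash
# Image-to-video generation: animate a still image.
python inference.py \
  --mode image2video \
  --config configs/ltxv-13b-0.9.7-distilled.yaml \
  --image_path input.jpg
```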

### Video-to-Video

To generate a video from another video, use the `inference.py` script with the `--mode video2video` argument. You must also specify the path to the desired config file using `--config` and the path to the input video using `--video_path`. For example:
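
A matching sketch (`input.mp4` is a placeholder path to your source clip):

```bash
# Video-to-video generation from an existing clip.
python inference.py \
  --mode video2video \
  --config configs/ltxv-13b-0.9.7-distilled.yaml \
  --video_path input.mp4
```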

Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference