silveroxides/Chroma-GGUF
Text-to-Image · GGUF · License: apache-2.0
Q8_M and Q4_K_S quantizations can be found at Clybius/Chroma-GGUF.
Quantizations provided in this repository: BF16, Q8_0, Q6_K, Q5_1, Q5_0, Q5_K_S, Q4_1, Q4_K_M, Q4_0, Q3_K_L.
Downloads last month: 51,024
Format: GGUF
Model size: 8.9B params
Architecture: flux
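If you want to confirm the architecture and parameter count of a downloaded file yourself, the sketch below reads the GGUF header with the `gguf` Python package (published by the llama.cpp project). The local path is a placeholder, not an actual filename from this repository, and which metadata keys appear depends on the converter that produced the file.

```python
# Minimal sketch: inspect a GGUF file's header with the `gguf` package
# (pip install gguf). The path below is a placeholder.
from gguf import GGUFReader

reader = GGUFReader("chroma-Q8_0.gguf")  # placeholder local path

# Print every metadata key stored in the header (architecture,
# quantization version, etc.).
for name in reader.fields:
    print(name)

# Summarize the tensors: name, shape, and per-tensor quantization type.
total_elements = 0
for tensor in reader.tensors:
    total_elements += int(tensor.n_elements)
    print(tensor.name, list(tensor.shape), tensor.tensor_type.name)

print(f"total parameters: {total_elements / 1e9:.2f}B")
```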
File sizes by quantization. Each quantization appears as multiple files in the repository, so the ranges below span the listed sizes.

3-bit
  Q3_K_S   4.29 GB
  Q3_K_L   4.43 – 4.99 GB

4-bit
  Q4_0     5.43 – 5.99 GB
  Q4_1     5.97 – 6.53 GB
  Q4_K_S   5.43 – 5.99 GB
  Q4_K_M   5.57 – 6.12 GB

5-bit
  Q5_0     6.51 – 7.07 GB
  Q5_1     7.05 – 7.6 GB
  Q5_K_S   6.51 – 7.07 GB

6-bit
  Q6_K     7.65 – 8.21 GB

8-bit
  Q8_0     9.74 – 10.3 GB

16-bit
  BF16     17.8 – 18.4 GB
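To fetch one of the files above programmatically, the `huggingface_hub` client can be used. The sketch below first lists the GGUF files in the repository rather than hard-coding a path, since the per-file paths (including any per-checkpoint folders) should be taken from the repository's file listing.

```python
# Minimal sketch: download a single GGUF quantization from this repository
# with huggingface_hub (pip install huggingface_hub).
from huggingface_hub import HfApi, hf_hub_download

api = HfApi()

# List every GGUF file in the repository and print the paths.
files = [f for f in api.list_repo_files("silveroxides/Chroma-GGUF") if f.endswith(".gguf")]
for f in files:
    print(f)

# Download a chosen file; files[0] is a placeholder choice, replace it
# with the actual path you want from the listing above.
local_path = hf_hub_download(
    repo_id="silveroxides/Chroma-GGUF",
    filename=files[0],
)
print("downloaded to", local_path)
```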
Model tree for silveroxides/Chroma-GGUF
Base model: lodestones/Chroma
This repository is one of 2 quantized versions of the base model.