gguf quantized version of chatterbox

  • base model from resembleai
  • text-to-speech synthesis

run it with gguf-connector

ggc c2

screenshot

Prompt Audio Sample
Hey Connector, why your appearance looks so stupid?
Oh, really? maybe I ate too much smart beans.
Wow. Amazing.
Let's go to get some more smart beans and you will become stupid as well.
🎧 audio-sample-1
Now let's make my mum's favourite. So three mars bars into the pan. Then we add the tuna and just stir for a bit, just let the chocolate and fish infuse.
A sprinkle of olive oil and some tomato ketchup. Now smell that. Oh boy this is going to be incredible.
🎧 audio-sample-2

review/reference

  • simply execute the command (ggc c2) above in console/terminal
  • place a vae, a clip (encoder) and a model file in the current directory to interact with (see example below)
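Before launching, you can sanity-check that all three pieces are in the working directory; a minimal sketch using the example filenames from the transcript below (substitute whichever quant you actually downloaded):

```shell
# check for one vae (ve), one model (t3) and one s3gen file in the
# current directory; ggc c2 scans this folder and prompts you to pick
for f in ve_fp32-f16.gguf t3_cfg-q4_k_m.gguf s3gen_bf16.safetensors; do
  if [ -f "$f" ]; then
    echo "found: $f"
  else
    echo "missing: $f"
  fi
done
```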

GGUF file(s) available. Select which one for ve:

  1. t3_cfg-q2_k.gguf
  2. t3_cfg-q4_k_m.gguf
  3. t3_cfg-q6_k.gguf
  4. ve_fp32-f16.gguf
  5. ve_fp32-f32.gguf

Enter your choice (1 to 5): 4

ve file: ve_fp32-f16.gguf is selected!

GGUF file(s) available. Select which one for t3:

  1. t3_cfg-q2_k.gguf
  2. t3_cfg-q4_k_m.gguf
  3. t3_cfg-q6_k.gguf
  4. ve_fp32-f16.gguf
  5. ve_fp32-f32.gguf

Enter your choice (1 to 5): 2

t3 file: t3_cfg-q4_k_m.gguf is selected!

Safetensors file(s) available. Select which one for s3gen:

  1. s3gen_bf16.safetensors (recommended)
  2. s3gen_fp16.safetensors (for non-cuda user)
  3. s3gen_fp32.safetensors

Enter your choice (1 to 3): _

  • note: as of the latest update, only the tokenizer is pulled to cache automatically during the first launch; you need to prepare the model, encoder and vae files yourself, so it works like vision connector right away; mix and match for more flexibility
  • runs entirely offline; i.e., served at the local URL http://127.0.0.1:7860 with a lazy webui
  • gguf-connector (pypi)
  • format: GGUF
  • model size: 532M params
  • architecture: pig

  • quantization variants: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 16-bit, 32-bit
