---
license: apache-2.0
language:
- en
base_model:
- Prince-1/orpheus_3b_0.1_4bit
- Prince-1/orpheus_3b_0.1_GGUF
tags:
- rkllm
- text-to-speech
- tts
- llama
library_name: rkllm
---
|
|
|
# Orpheus_3b_0.1_rkllm
|
|
|
**Orpheus_3b_0.1_rkllm** is a [Text-to-Speech](https://huggingface.co/models?pipeline_tag=text-to-speech&sort=trending) model converted from the Orpheus [16bit](https://huggingface.co/Prince-1/orpheus_3b_0.1_ft_16bit) and [GGUF F16](https://huggingface.co/Prince-1/orpheus_3b_0.1_GGUF) checkpoints using the [RKLLM Toolkit](https://github.com/airockchip/rknn-llm).
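
The conversion pipeline is not shipped in this repository, but the sketch below illustrates the general RKLLM export flow, modelled on the example export scripts in the [rknn-llm](https://github.com/airockchip/rknn-llm) project. The source checkpoint path, output filename, and exact `build()` arguments are assumptions and may differ between toolkit versions.

```python
# Minimal sketch of an RKLLM export flow, assuming the toolkit's
# example-script API; exact argument names may vary by version.
from rkllm.api import RKLLM

MODEL_PATH = "Prince-1/orpheus_3b_0.1_ft_16bit"      # source checkpoint (assumption)
EXPORT_PATH = "./orpheus_3b_0.1_w8a8_rk3588.rkllm"   # output filename (assumption)

llm = RKLLM()

# Load the Hugging Face checkpoint.
if llm.load_huggingface(model=MODEL_PATH) != 0:
    raise RuntimeError("Failed to load the source model")

# Quantize to w8a8 and target the RK3588 NPU.
if llm.build(do_quantization=True,
             quantized_dtype="w8a8",
             target_platform="rk3588") != 0:
    raise RuntimeError("Failed to build the quantized model")

# Write the .rkllm artifact consumed by the on-device runtime.
if llm.export_rkllm(EXPORT_PATH) != 0:
    raise RuntimeError("Failed to export the .rkllm file")
```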
|
|
|
## Features
|
|
|
- 🎙️ Text-to-Speech capability with optimized inference
- 🔧 Built from the Orpheus 16bit & GGUF F16 formats
- 🚀 Runs on the **RK3588 NPU** using **w8a8 quantization**
- ⚙️ Powered by [RKLLM Toolkit v1.2.1b1](https://github.com/airockchip/rknn-llm)
- ⚡ Designed for high-performance inference on edge devices
|
|
|
## Requirements
|
|
|
- RK3588-based device
- RKLLM Toolkit v1.2.1b1
- An RKLLM-compatible runtime environment on the device for deploying the quantized model
|
|
|
## Usage
|
|
|
1. Clone or download the model from [Hugging Face](https://huggingface.co/Prince-1/orpheus_3b_0.1_rkllm) (see the download sketch below).
2. Clone the inference [repo](https://github.com/N-E-W-T-O-N/OrpheusTTSInference-RKLLM) from GitHub.
3. Run the model with either the Python or the C++ implementation provided in that repo.
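
As a concrete illustration of step 1, the model files can be fetched with the `huggingface_hub` Python package. This is only a minimal sketch: the local directory name is an arbitrary example, and the actual inference entry points are defined by the OrpheusTTSInference-RKLLM repository.

```python
# Download the converted .rkllm model files from Hugging Face.
# The local directory name below is only an example.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="Prince-1/orpheus_3b_0.1_rkllm",
    local_dir="./orpheus_3b_0.1_rkllm",
)
print(f"Model files downloaded to: {local_dir}")
```

Afterwards, point the Python or C++ example from the inference repo at the downloaded `.rkllm` file to generate speech on the RK3588 NPU.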
|
|
|
|
|
## License
|
This model is released under the **Apache-2.0** license.
|
|
|
---