---
license: apache-2.0
language:
- en
base_model:
- Prince-1/orpheus_3b_0.1_4bit
- Prince-1/orpheus_3b_0.1_GGUF
tags:
- rkllm
- text-to-speech
- tts
- llama
library_name: rkllm
---

# Orpheus_3b_0.1_rkllm

**Orpheus_3b_0.1_rkllm** is a [Text-to-Speech](https://huggingface.co/models?pipeline_tag=text-to-speech&sort=trending) model converted from the Orpheus [16bit](https://huggingface.co/Prince-1/orpheus_3b_0.1_ft_16bit) and [GGUF F16](https://huggingface.co/Prince-1/orpheus_3b_0.1_GGUF) checkpoints using the [RKLLM Toolkit](https://github.com/airockchip/rknn-llm).

## Features

- 🎙️ Text-to-Speech capability with optimized inference
- 🧠 Built from Orpheus 16bit & GGUF F16 formats
- 🚀 Runs on **RK3588 NPU** using **w8a8 quantization**
- ⚙️ Powered by [RKLLM Toolkit v1.2.1b1](https://github.com/airockchip/rknn-llm) (see the conversion sketch below)
- ⚡ Designed for high-performance inference on edge devices
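
For reference, the conversion that produces a `.rkllm` artifact like this one is driven by the RKLLM Toolkit's Python API. The snippet below is a minimal sketch, not the exact script used for this model: it assumes the `rkllm.api` flow from the toolkit's examples (`load_huggingface`, `build`, `export_rkllm`), a local checkout of the 16-bit checkpoint, and parameter names that may vary slightly between toolkit versions.

```python
# Minimal conversion sketch with the RKLLM Toolkit (rkllm.api).
# Paths are placeholders; parameter names follow the toolkit's examples
# and may differ slightly in v1.2.1b1.
from rkllm.api import RKLLM

llm = RKLLM()

# Load the 16-bit source checkpoint (a local clone of
# Prince-1/orpheus_3b_0.1_ft_16bit).
ret = llm.load_huggingface(model="./orpheus_3b_0.1_ft_16bit")
assert ret == 0, "failed to load the source model"

# Quantize to w8a8 and target the RK3588 NPU.
ret = llm.build(
    do_quantization=True,
    quantized_dtype="w8a8",
    target_platform="rk3588",
)
assert ret == 0, "quantization/build failed"

# Export the deployable .rkllm artifact.
ret = llm.export_rkllm("./orpheus_3b_0.1.rkllm")
assert ret == 0, "export failed"
```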

## Requirements

- RK3588-based device
- RKLLM Toolkit v1.2.1b1
- A compatible runtime environment with the RKLLM runtime library (`librkllmrt.so`) for deploying the quantized model; a quick check is sketched below
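
One quick way to confirm the runtime is available on the device is to try loading its shared library. This is only a sketch; it assumes the runtime shipped with rknn-llm is installed as `librkllmrt.so` on the library search path.

```python
# Sketch: verify the RKLLM runtime library can be loaded on the device.
import ctypes

try:
    ctypes.CDLL("librkllmrt.so")  # runtime library shipped with rknn-llm
    print("RKLLM runtime found.")
except OSError as err:
    print(f"RKLLM runtime not found: {err}")
```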

## Usage

1. Clone or download the model from [Hugging Face](https://huggingface.co/Prince-1/orpheus_3b_0.1_rkllm).
2. Clone the companion GitHub [repo](https://github.com/N-E-W-T-O-N/OrpheusTTSInference-RKLLM).
3. Run the model using either the Python or the C++ implementation from that repo (see the sketch below).
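
The download step can also be scripted. The snippet below is a minimal sketch assuming `huggingface_hub` is installed (`pip install huggingface_hub`); actual inference is then run with the Python or C++ runner from the OrpheusTTSInference-RKLLM repository on an RK3588 device.

```python
# Sketch: fetch the converted model files from Hugging Face.
from huggingface_hub import snapshot_download

model_dir = snapshot_download(repo_id="Prince-1/orpheus_3b_0.1_rkllm")
print(f"Model downloaded to: {model_dir}")

# On the RK3588 device, point the Python or C++ runner from
# https://github.com/N-E-W-T-O-N/OrpheusTTSInference-RKLLM
# at the .rkllm file found under `model_dir`.
```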


## License
This model is released under the **Apache-2.0** license.

---