---
library_name: transformers
license: cc
datasets:
- atlithor/talromur3_without_emotions
language:
- is
base_model:
- parler-tts/parler-tts-mini-multilingual-v1.1
pipeline_tag: text-to-speech
---
# Model Card for RepeaTTS-level-1
See [Emotive Icelandic](https://huggingface.co/atlithor/EmotiveIcelandic) for more information about this model and the data it is trained on.

The RepeaTTS series is trained on the same data as Emotive Icelandic, but without the emotive content of the recordings being disclosed during training. This model, level-1, is the base model of the series: it has not undergone any further refinement fine-tuning.
## Usage
Use the code below to get started with the model. It requires the `parler_tts` package, which can be installed with `pip install git+https://github.com/huggingface/parler-tts.git`.
```py
import torch
import soundfile as sf
from parler_tts import ParlerTTSForConditionalGeneration
from transformers import AutoTokenizer

device = "cuda:0" if torch.cuda.is_available() else "cpu"

# Load the model and two tokenizers: one for the Icelandic prompt text,
# one for the English style description consumed by the text encoder.
model = ParlerTTSForConditionalGeneration.from_pretrained("atlithor/RepeaTTS-level-1").to(device)
tokenizer = AutoTokenizer.from_pretrained("atlithor/EmotiveIcelandic")
description_tokenizer = AutoTokenizer.from_pretrained(model.config.text_encoder._name_or_path)

prompt = "Þetta er frábær hugmynd!"  # EN: "This is a great idea!"
description = "The recording is of very high quality, with Ingrid's voice sounding clear and very close up."

input_ids = description_tokenizer(description, return_tensors="pt").input_ids.to(device)
prompt_input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(device)

# Generate the waveform and write it to disk at the model's native sampling rate.
generation = model.generate(input_ids=input_ids, prompt_input_ids=prompt_input_ids)
audio_arr = generation.cpu().numpy().squeeze()
sf.write("ingrid.wav", audio_arr, model.config.sampling_rate)
```
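For repeated synthesis, the snippet above can be wrapped in a small helper. The `synthesize` function and the file names below are illustrative, not part of the released code; they simply reuse the model, tokenizers, and calls already shown.

```py
def synthesize(prompt: str, description: str, out_path: str) -> None:
    """Generate speech for `prompt` in the style given by `description`
    and write the result to `out_path`. Illustrative helper only."""
    input_ids = description_tokenizer(description, return_tensors="pt").input_ids.to(device)
    prompt_input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(device)
    generation = model.generate(input_ids=input_ids, prompt_input_ids=prompt_input_ids)
    sf.write(out_path, generation.cpu().numpy().squeeze(), model.config.sampling_rate)

synthesize(
    "Hvernig hefur þú það í dag?",  # EN: "How are you today?"
    "The recording is of very high quality, with Ingrid's voice sounding clear and very close up.",
    "ingrid_question.wav",
)
```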
## Citation
_coming later_
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]