## Results


## Setup:
### Environment Setup:
```
# 1. Create environment
sudo apt-get install libsndfile1-dev ffmpeg enchant
conda create -n tts-env
conda activate tts-env

# 2. Setup PyTorch
pip3 install -U torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu113

# 3. Setup Trainer
git clone https://github.com/gokulkarthik/Trainer

cd Trainer
pip3 install -e .[all]
cd ..

# Or, to patch an existing Trainer installation instead:
# - copy Trainer/trainer/logging/wandb_logger.py over the installed copy (fixed wandb logger)
# - copy Trainer/trainer/trainer.py over the installed copy (fixed model.module.test_log and added code to log the epoch)
# - add `gpus = [str(gpu) for gpu in gpus]` at line 53 of trainer/distribute.py

# 4. Setup TTS
git clone https://github.com/gokulkarthik/TTS

cd TTS
pip3 install -e .[all]
cd ..

# Or, to patch an existing TTS installation instead:
# - copy TTS/TTS/bin/synthesize.py over the installed copy (added multiple-output support for TTS.bin.synthesize)

# 5. Install other requirements
pip3 install -r requirements.txt
```
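The `trainer/distribute.py` patch above exists because the GPU ids must be strings before they are joined into each worker's `CUDA_VISIBLE_DEVICES`. A minimal sketch of that normalization (the helper name and the comma-separated input format are illustrative assumptions, not part of the Trainer code):

```python
def parse_gpus(value):
    """Normalize a comma-separated GPU list such as "0,1" into string ids.

    Mirrors the `gpus = [str(gpu) for gpu in gpus]` fix: code downstream
    assembles environment variables like CUDA_VISIBLE_DEVICES by joining the
    ids, which requires strings rather than integers.
    """
    gpus = [g.strip() for g in value.split(",") if g.strip()]
    return [str(gpu) for gpu in gpus]

print(parse_gpus("0,1"))  # ['0', '1']
```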

### Data Setup:
1. Format the IndicTTS dataset in LJSpeech format using [preprocessing/FormatDatasets.ipynb](./preprocessing/FormatDatasets.ipynb)
2. Analyze the IndicTTS dataset to check its TTS suitability using [preprocessing/AnalyzeDataset.ipynb](./preprocessing/AnalyzeDataset.ipynb)

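The LJSpeech convention the formatting notebook targets is a pipe-separated `metadata.csv` with one clip per line: `file_id|transcription|normalized transcription`. A small validator sketch of that layout (the function and the sample ids below are made up for illustration, not actual IndicTTS file names):

```python
def check_ljspeech_metadata(text):
    """Validate LJSpeech-style metadata: one `id|text[|normalized]` row per line."""
    rows = []
    for lineno, line in enumerate(text.strip().splitlines(), start=1):
        fields = line.split("|")
        if len(fields) < 2:  # at least an id and a transcription
            raise ValueError(f"line {lineno}: expected 'id|text', got {line!r}")
        rows.append(fields)
    return rows

sample = "hi_f_00001|पहला वाक्य|पहला वाक्य\nhi_f_00002|दूसरा वाक्य|दूसरा वाक्य"
print(len(check_ljspeech_metadata(sample)))  # 2
```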
### Training Steps:
1. Set the configuration in [main.py](./main.py), [vocoder.py](./vocoder.py), [configs](./configs) and [run.sh](./run.sh). Make sure to update `CUDA_VISIBLE_DEVICES` in all of these files.
2. Train and test by executing `sh run.sh`

### Inference:
Trained model weights and config files can be downloaded from [this link](https://github.com/AI4Bharat/Indic-TTS/releases/tag/v1-checkpoints-release).

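The released checkpoint and config are consumed by `TTS.bin.synthesize`. A hedged sketch of assembling that invocation programmatically, e.g. for batch scripts (the file names here are placeholders, not the actual release file names):

```python
import subprocess

def build_synthesize_cmd(text, model_path, config_path, out_path):
    """Assemble the TTS.bin.synthesize invocation as an argv list."""
    return [
        "python3", "-m", "TTS.bin.synthesize",
        "--text", text,
        "--model_path", model_path,
        "--config_path", config_path,
        "--out_path", out_path,
    ]

cmd = build_synthesize_cmd("<input text>", "model.pth", "config.json", "out.wav")
# subprocess.run(cmd, check=True)  # uncomment once real paths are in place
print(" ".join(cmd))
```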
```
python3 -m TTS.bin.synthesize --text "<input text>" \
    --model_path <model checkpoint> \
    --config_path <model config> \
    --out_path <output wav>
```

🐸TTS is a library for advanced Text-to-Speech generation. It's built on the latest research and designed to achieve the best trade-off among ease of training, speed, and quality.
🐸TTS comes with pretrained models and tools for measuring dataset quality, and is already used in **20+ languages** for products and research projects.

| Type | Links |
| --- | --- |
| 💾 **Installation** | [TTS/README.md](https://github.com/coqui-ai/TTS/tree/dev#install-tts) |
| 👩‍💻 **Contributing** | [CONTRIBUTING.md](https://github.com/coqui-ai/TTS/blob/main/CONTRIBUTING.md) |
| 📌 **Road Map** | [Main Development Plans](https://github.com/coqui-ai/TTS/issues/378) |
| 🚀 **Released Models** | [TTS Releases](https://github.com/coqui-ai/TTS/releases) and [Experimental Models](https://github.com/coqui-ai/TTS/wiki/Experimental-Released-Models) |

## 🔥 TTS Performance
