# RIVA NMT Example

This example shows how to perform language translation using Riva Neural Machine Translation (NMT). It supports `Nvidia Riva ASR and TTS` and `ACETransport`.

## Get Started

From the example directory, run the following commands to create a virtual environment and install the dependencies:

```bash
uv venv
uv sync
source .venv/bin/activate
```

Update the secrets in the `.env` file.

```bash
cp env.example .env # and add your credentials
```
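
The exact variable names are defined in `env.example`; as an illustration only (the key name below is an assumption, not the documented contents), the resulting `.env` might look like:

```bash
# .env (illustrative; use the variable names from env.example)
NVIDIA_API_KEY=nvapi-your-key-here
```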

## Deploy Local Riva ASR and TTS Models

#### Prerequisites
- You have access and are logged into NVIDIA NGC. For step-by-step instructions, refer to [the NGC Getting Started Guide](https://docs.nvidia.com/ngc/ngc-overview/index.html#registering-activating-ngc-account).

- You have access to an NVIDIA Volta™, NVIDIA Turing™, or an NVIDIA Ampere architecture-based A100 GPU. For more information, refer to [the Support Matrix](https://docs.nvidia.com/deeplearning/riva/user-guide/docs/support-matrix.html#support-matrix).

- You have Docker installed with support for NVIDIA GPUs. For more information, refer to [the Support Matrix](https://docs.nvidia.com/deeplearning/riva/user-guide/docs/support-matrix.html#support-matrix).

#### Download Riva Quick Start

Go to the Riva Quick Start for [Data center](https://catalog.ngc.nvidia.com/orgs/nvidia/teams/riva/resources/riva_quickstart/files?version=2.19.0). Select the File Browser tab to download the scripts or use [the NGC CLI tool](https://ngc.nvidia.com/setup/installers/cli) to download from the command line.

```bash
ngc registry resource download-version nvidia/riva/riva_quickstart:2.19.0
```

#### Deploy Riva Speech Server

In `../examples/utils/riva_config.sh`, set `service_enabled_nmt` to `true` and uncomment the NMT model you want from the list. In the same file, set `tts_language_code` to the desired target language code.
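
As a sketch, the edited portion of `../examples/utils/riva_config.sh` might look like the excerpt below (the model name and language code are illustrative; pick them from the list in the actual file):

```bash
# riva_config.sh (excerpt; values illustrative)
service_enabled_nmt=true

# Uncomment the NMT model you want from the list in the file, e.g.:
# models_nmt=("nmt_en_es_model")   # hypothetical entry

tts_language_code="es-US"          # desired target language code
```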

From the example directory, run the following commands:

```bash
cd riva_quickstart_v2.19.0
chmod +x riva_init.sh riva_clean.sh riva_start.sh
bash riva_clean.sh ../../utils/riva_config.sh
bash riva_init.sh ../../utils/riva_config.sh
bash riva_start.sh ../../utils/riva_config.sh
cd ..
```

This may take a few minutes on the first run and starts the Riva server on `localhost:50051`.

For more information, refer to the [Riva Quick Start Guide](https://docs.nvidia.com/deeplearning/riva/user-guide/docs/quick-start-guide.html).

## Using NvidiaLLMService

By default, `NvidiaLLMService` connects to a hosted NIM, which requires an API key. To connect to a locally deployed NIM instead, set the `base_url` parameter of `NvidiaLLMService` to the local LLM endpoint (for example, `base_url="http://machine_ip:port/v1"`).
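
As a minimal sketch of the local configuration, the snippet below builds the `base_url` for a local NIM (the IP and port are placeholders, and the commented constructor call is an assumption; check the example's `bot.py` for the actual usage):

```python
# Placeholders for your local NIM deployment.
machine_ip = "10.0.0.5"
port = 8000

# OpenAI-compatible NIM endpoints are served under /v1.
base_url = f"http://{machine_ip}:{port}/v1"

# Hypothetical usage (argument names are assumptions, not the documented API):
# llm = NvidiaLLMService(api_key=os.getenv("NVIDIA_API_KEY"), base_url=base_url)
```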

## Run the bot pipeline

```bash
python examples/riva_nmt/bot.py
```

This hosts the static web client along with the ACE controller server. Visit `http://WORKSTATION_IP:8100/static/index.html` in your browser to start a session.

Note: For mic access, open `chrome://flags/` and add `http://WORKSTATION_IP:8100` under the "Insecure origins treated as secure" section.