
RIVA NMT Example

This example shows how to perform language translation using RIVA Neural Machine Translation (NMT). It supports NVIDIA Riva ASR and TTS and ACETransport.

Get Started

From the example directory, run the following commands to create a virtual environment and install the dependencies:

uv venv
uv sync
source .venv/bin/activate

Copy the environment template and update the secrets in the resulting .env file:

cp env.example .env # and add your credentials
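As a hedged sketch of what the .env file might contain (the variable name below is an assumption; use the keys actually listed in env.example):

```shell
# .env -- example only; the variable name is an assumption,
# copy the real keys from env.example
NVIDIA_API_KEY=nvapi-...   # API key used to reach the hosted NIM
```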

Deploy local Riva ASR and TTS models.

Prerequisites

  • You have access and are logged into NVIDIA NGC. For step-by-step instructions, refer to the NGC Getting Started Guide.

  • You have access to an NVIDIA Volta™, NVIDIA Turing™, or an NVIDIA Ampere architecture-based A100 GPU. For more information, refer to the Support Matrix.

  • You have Docker installed with support for NVIDIA GPUs. For more information, refer to the Support Matrix.

Download Riva Quick Start

Go to the Riva Quick Start for Data Center. Select the File Browser tab to download the scripts, or use the NGC CLI tool to download them from the command line.

ngc registry resource download-version nvidia/riva/riva_quickstart:2.19.0

Deploy Riva Speech Server

In ../examples/utils/riva_config.sh, set service_enabled_nmt to true and uncomment the NMT model you want from the list. In the same file, set tts_language_code to the desired target language code.
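The edits above can be sketched as follows. Only the variable names service_enabled_nmt and tts_language_code come from this README; the model entry and language code shown are hypothetical examples, so use the actual list and codes in your riva_config.sh:

```shell
# In ../examples/utils/riva_config.sh

# Enable the NMT service
service_enabled_nmt=true

# Uncomment one NMT model from the provided list (entry below is illustrative):
# models_nmt=("nvidia/riva/rmir_megatronnmt_any_en_500m:${riva_ngc_model_version}")

# Target language for TTS output (example value: Spanish, US)
tts_language_code="es-US"
```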

From the example directory, run the following commands:

cd riva_quickstart_v2.19.0
chmod +x riva_init.sh riva_clean.sh riva_start.sh
bash riva_clean.sh ../../utils/riva_config.sh
bash riva_init.sh ../../utils/riva_config.sh
bash riva_start.sh ../../utils/riva_config.sh
cd ..

The first run may take a few minutes. This starts the Riva server on localhost:50051.

For more information, refer to the Riva Quick Start Guide.

Using NvidiaLLMService

By default, NvidiaLLMService connects to a hosted NIM, which requires an API key. To use a locally deployed NIM instead, set the base_url parameter in NvidiaLLMService to the local LLM endpoint (for example, base_url = http://machine_ip:port/v1).
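A minimal sketch of the configuration described above. Only the base_url parameter and the API-key requirement come from this README; the api_key keyword name and the environment-variable name are assumptions, so check the NvidiaLLMService signature and your .env:

```python
# Hedged sketch: parameters for pointing NvidiaLLMService at a local NIM.
# Keyword names other than `base_url` are assumptions, not confirmed API.
import os

local_nim_config = {
    "api_key": os.getenv("NVIDIA_API_KEY"),   # required for the hosted NIM
    "base_url": "http://machine_ip:port/v1",  # replace with your local endpoint
}
# NvidiaLLMService(**local_nim_config)  # constructor call shown for illustration
```

Omitting base_url falls back to the hosted NIM default.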

Run the bot pipeline

python examples/riva_nmt/bot.py

This hosts the static web client along with the ACE controller server. Visit http://WORKSTATION_IP:8100/static/index.html in your browser to start a session.

Note: For mic access, open chrome://flags/ and add http://WORKSTATION_IP:8100 to the Insecure origins treated as secure section.