---
emoji: "\U0001F4AC"
sdk: static
pinned: false
license: mit
title: Realtime Conversational WebGPU (Vue)
colorFrom: purple
colorTo: indigo
models:
- HuggingFaceTB/SmolLM2-1.7B-Instruct
- onnx-community/whisper-base
- onnx-community/silero-vad
short_description: Yet another Realtime Conversational WebGPU
---
|
|
|
<h1 align="center">Realtime Conversational WebGPU (Vue)</h1> |
|
|
|
<p align="center"> |
|
[<a href="https://conversational-webgpu-vue.netlify.app/">Try it</a>] |
|
</p> |
|
|
|
> Heavily inspired by [WebGPU Video Object Detection - a Hugging Face Space by WebML Community](https://huggingface.co/spaces/webml-community/webgpu-video-object-detection) |
|
|
|
# Realtime Conversational WebGPU |
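
The Space metadata above lists the models the app builds on: `HuggingFaceTB/SmolLM2-1.7B-Instruct` for generating replies, `onnx-community/whisper-base` for speech recognition, and `onnx-community/silero-vad` for voice activity detection. As a rough, hypothetical sketch of how such a model can be loaded on WebGPU in the browser with Transformers.js (the actual wiring in this app may differ, e.g. running inside a Web Worker):

```ts
// Hypothetical sketch only: load the listed ASR model on WebGPU via Transformers.js.
import { pipeline } from '@huggingface/transformers'

const transcriber = await pipeline(
  'automatic-speech-recognition',
  'onnx-community/whisper-base',
  { device: 'webgpu' }, // request the WebGPU backend
)

// `audio` would be a Float32Array of 16 kHz mono samples captured from the microphone:
// const { text } = await transcriber(audio)
```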
|
|
|
## Getting Started |
|
|
|
Follow the steps below to set up and run the application. |
|
|
|
### 1. Clone the Repository |
|
|
|
Clone the examples repository from GitHub: |
|
|
|
```sh
git clone https://github.com/proj-airi/webai-examples.git
```
|
|
|
### 2. Navigate to the Project Directory |
|
|
|
Change your working directory to the `conversational-webgpu` app inside the cloned repository:

```sh
cd webai-examples/apps/conversational-webgpu
```
|
|
|
### 3. Install Dependencies |
|
|
|
Install the necessary dependencies using npm: |
|
|
|
```sh
npm i
```
|
|
|
### 4. Run the Development Server |
|
|
|
Start the development server: |
|
|
|
```sh
npm run dev
```
|
|
|
The application should now be running locally. Open your browser and go to `http://localhost:5175` to see it in action. |
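
The app relies on WebGPU, so use a browser with WebGPU support (e.g. a recent Chromium-based browser). As a quick, hypothetical check that is not part of this project's code, you can verify availability with the standard `navigator.gpu` API:

```ts
// Quick WebGPU availability check (standard navigator.gpu API).
// In TypeScript you may need the @webgpu/types package for these definitions.
async function checkWebGPU(): Promise<void> {
  if (!('gpu' in navigator)) {
    console.error('WebGPU is not supported in this browser.')
    return
  }
  const adapter = await navigator.gpu.requestAdapter()
  console.log(adapter ? 'WebGPU adapter available.' : 'No suitable GPU adapter found.')
}

checkWebGPU()
```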
|
|
|
## Acknowledgements |
|
|
|
Many thanks to the WebML Community for their work.
|
|
|
> [Source code](https://huggingface.co/spaces/webml-community/conversational-webgpu) |
|
|
|
> [UI inspiration](https://app.sesame.com/) |
|
|