---
title: Mcp Server Track
emoji: 🔥
colorFrom: blue
colorTo: yellow
sdk: gradio
sdk_version: 5.33.1
app_file: app.py
pinned: false
license: mit
short_description: Gradio Whisper Transcription App (MCP)
---



# 🎙️ Whisper Transcription Tool – MCP Server

Welcome to the **MCP Server Track** submission for the Hugging Face & OpenAI Agents Hackathon!

This tool provides speech-to-text transcription using OpenAI’s Whisper model, deployed via [Modal](https://modal.com/), and exposes the service through a Gradio interface that supports the **Model Context Protocol (MCP)**. Agents can access this tool via HTTP or streaming endpoints.
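
For context, the Modal side of this setup can be as small as a single remote function. The sketch below is illustrative only: the app name, container image contents, and function name are assumptions, not the Space's actual source.

```python
# modal_whisper.py -- illustrative sketch of a Modal-hosted Whisper function
import modal

app = modal.App("whisper-transcription")  # app name is an assumption

# Whisper needs ffmpeg to decode audio, so bundle it into the container image
image = (
    modal.Image.debian_slim()
    .apt_install("ffmpeg")
    .pip_install("openai-whisper", "requests")
)

@app.function(image=image)
def transcribe(audio_url: str) -> str:
    """Download a public audio file and transcribe it with Whisper's `base` model."""
    import tempfile

    import requests
    import whisper

    # Fetch the remote audio file to a temporary local path
    audio_bytes = requests.get(audio_url, timeout=60).content
    with tempfile.NamedTemporaryFile(delete=False) as f:
        f.write(audio_bytes)
        local_path = f.name

    model = whisper.load_model("base")
    result = model.transcribe(local_path)
    return result["text"].strip()
```

Once deployed with `modal deploy`, a function like this can be called remotely from the Gradio app running on this Space.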

---

## 🧠 What It Does

- Accepts an audio URL (MP3, WAV, FLAC, etc.)
- Transcribes speech to text using Whisper (`base` model)
- Returns clean, readable output
- Exposes an MCP-compliant API endpoint (see the wiring sketch below)
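
A minimal Gradio wrapper along the following lines can expose the remote function both as the web UI and as an MCP tool. This is a sketch, not the Space's actual `app.py`: the Modal app/function names are assumptions, and `mcp_server=True` requires Gradio ≥ 5.28 with the `gradio[mcp]` extra (the pinned 5.33.1 qualifies).

```python
# app.py -- illustrative sketch of the Gradio + MCP wrapper
import gradio as gr
import modal

# Reference the deployed Modal function by app and function name (names assumed)
transcribe_fn = modal.Function.from_name("whisper-transcription", "transcribe")

def transcribe_audio(audio_url: str) -> str:
    """Transcribe speech from a public audio URL via the remote Whisper function."""
    return transcribe_fn.remote(audio_url)

demo = gr.Interface(
    fn=transcribe_audio,
    inputs=gr.Textbox(label="Audio URL"),
    outputs=gr.Textbox(label="Transcription"),
    title="🎙️ Whisper Transcription Tool",
)

if __name__ == "__main__":
    # mcp_server=True also registers transcribe_audio as an MCP tool
    # alongside the regular web UI and REST API.
    demo.launch(mcp_server=True)
```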

---

## 🚀 How to Use

### 🔹 Web Interface (UI)
1. Enter an audio URL (e.g., a `.flac` or `.wav` file).
2. Click the **Submit** button.
3. View the transcribed text output instantly.

### 🔹 As an MCP Tool (Programmatic Access)
This app can be invoked by agents (e.g., SmolAI, LangChain, or custom agent scripts) using the MCP specification; a minimal client call is sketched after the schema below.

- Endpoint: `/predict`
- Method: POST
- Input Schema: `{ "data": [ "AUDIO_URL_HERE" ] }`
- Output Schema: `{ "data": [ "TRANSCRIPTION_TEXT" ] }`
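
For example, a minimal Python call that follows the schema above might look like this. The `*.hf.space` URL is the usual pattern for this Space and is an assumption; adjust it if the deployed endpoint differs.

```python
# Illustrative POST against the documented /predict endpoint and schema
import requests

SPACE_URL = "https://dreamcatcher23-mcp-server-track.hf.space"  # assumed *.hf.space URL

payload = {"data": ["https://huggingface.co/datasets/Narsil/asr_dummy/resolve/main/mlk.flac"]}
resp = requests.post(f"{SPACE_URL}/predict", json=payload, timeout=120)
resp.raise_for_status()

print(resp.json()["data"][0])  # the transcription text
```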

---

## 🛠️ Stack

| Layer        | Tech                                |
|--------------|-------------------------------------|
| Frontend     | [Gradio](https://gradio.app/)       |
| Inference    | [OpenAI Whisper](https://github.com/openai/whisper) |
| Hosting      | [Hugging Face Spaces](https://huggingface.co/spaces) |
| Remote Compute | [Modal](https://modal.com/)       |
| Protocol     | Model Context Protocol (MCP)        |

---

## 📦 Example Input

`https://huggingface.co/datasets/Narsil/asr_dummy/resolve/main/mlk.flac`

Expected output:

> I have a dream that one day this nation will rise up and live out the true meaning of its creed.


---

## 📎 Notes

- Supports English only (for now).
- Long-form audio and larger models (e.g., `medium`, `large`) can be added later.
- You can extend it to support file uploads or streaming audio.

---

## 🤖 Hackathon Submission

- Track: `mcp-server-track`
- MCP-Enabled: ✅
- Repo/Space: [Dreamcatcher23/mcp-server-track](https://huggingface.co/spaces/Dreamcatcher23/mcp-server-track)

---

## 📚 References

- [Modal Docs](https://modal.com/docs)
- [Whisper GitHub](https://github.com/openai/whisper)
- [Gradio MCP Guide](https://huggingface.co/docs/hub/spaces-sse)
- [Agents Hackathon](https://huggingface.co/agents)

---

## 🧪 MCP Test Instructions

Use tools like [curl](https://curl.se/), Postman, or a Python client to test the API:

```bash
curl -X POST https://your-space-url/predict \
    -H "Content-Type: application/json" \
    -d '{"data":["https://huggingface.co/datasets/Narsil/asr_dummy/resolve/main/mlk.flac"]}'
```

✨ Built with ❤️ by Dreamcatcher23


---
