short_description: an MCP Tool for Symptom-to-ICD Diagnosis Mapping.
tags:
- mcp-server-track
- mistral
- llamaindex
- gradio
---

A voice-enabled medical assistant that takes patient audio complaints, engages in follow-up questions, and returns structured ICD-10 diagnosis suggestions via an MCP endpoint.

# Features

- **Automatic speech recognition (ASR)**: Transcribe real-time patient audio using [Gradio](https://www.gradio.app/guides/real-time-speech-recognition).
- **Interactive Q&A Agent**: The LLM dynamically asks clarifying questions based on ICD codes until it can diagnose with high confidence.
- **Multi-backend LLM**: Switch between OpenAI GPT, Mistral (HF), or a local transformers model via environment flags.
- **ICD-10 Mapping**: Use LlamaIndex for vector retrieval of probable ICD-10 codes with confidence scores.
- **MCP-Server Ready**: Exposes a `/mcp` REST endpoint for seamless agent integration.
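The mapping step above can be illustrated with a minimal, dependency-free sketch. The real app uses LlamaIndex vector retrieval over the full ICD-10 dataset; the `ICD_SNIPPETS` entries, the `rank_icd_codes` helper, and the overlap-based scores below are purely illustrative, not clinical output:

```python
# Illustrative only: a toy keyword-overlap retriever standing in for the
# LlamaIndex vector retrieval used by the real app.
ICD_SNIPPETS = {
    "J00": "acute nasopharyngitis common cold runny nose sneezing",
    "R51": "headache pain in head",
    "R05": "cough",
}

def rank_icd_codes(symptoms: str, top_k: int = 3) -> list:
    """Score each ICD entry by word overlap with the symptom text."""
    words = set(symptoms.lower().split())
    scored = []
    for code, desc in ICD_SNIPPETS.items():
        overlap = len(words & set(desc.split()))
        # Normalize by symptom length so scores fall in [0, 1]
        scored.append((code, round(overlap / (len(words) or 1), 2)))
    scored.sort(key=lambda cs: cs[1], reverse=True)
    return scored[:top_k]

print(rank_icd_codes("persistent cough and headache"))
```

A production version would embed both the query and the ICD descriptions and rank by vector similarity instead of raw word overlap.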

# Getting Started

## Clone & Install

```bash
git clone https://huggingface.co/spaces/Agents-MCP-Hackathon/MedCodeMCP
cd MedCodeMCP
python3 -m venv .venv && source .venv/bin/activate
pip install -r requirements.txt
```

## Environment Variables

| Name                       | Description                                               | Default                |
| -------------------------- | --------------------------------------------------------- | ---------------------- |
| `OPENAI_API_KEY`           | OpenAI API key for GPT calls                              | *required*             |
| `HUGGINGFACEHUB_API_TOKEN` | HF token for Mistral/inference models                     | *required for Mistral* |
| `USE_LOCAL_GPU`            | Set to `1` to use a local transformers model (no credits) | `0`                    |
| `LOCAL_MODEL`              | Path or HF ID of local model (e.g. `distilgpt2`)          | `gpt2`                 |
| `USE_MISTRAL`              | Set to `1` to use Mistral via HF instead of OpenAI        | `0`                    |
| `MISTRAL_MODEL`            | HF ID for Mistral model (`mistral-small/medium/large`)    | `mistral-large`        |
| `MISTRAL_TEMPERATURE`      | Sampling temperature for Mistral                          | `0.7`                  |
| `MISTRAL_MAX_INPUT`        | Max tokens for input prompt                               | `4096`                 |
| `MISTRAL_NUM_OUTPUT`       | Max tokens to generate                                    | `512`                  |
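A minimal sketch of how these flags could drive backend selection. The `select_backend` helper is hypothetical; the actual logic lives in `app.py` and `utils/llama_index_utils.py` and may differ:

```python
import os

# Hypothetical sketch: pick an LLM backend from the environment variables
# documented in the table above. The real app's selection logic may differ.
def select_backend(env=None) -> dict:
    env = env if env is not None else dict(os.environ)
    if env.get("USE_LOCAL_GPU") == "1":
        # Local transformers model, no API credits needed
        return {"backend": "local", "model": env.get("LOCAL_MODEL", "gpt2")}
    if env.get("USE_MISTRAL") == "1":
        # Mistral via Hugging Face
        return {
            "backend": "mistral",
            "model": env.get("MISTRAL_MODEL", "mistral-large"),
            "temperature": float(env.get("MISTRAL_TEMPERATURE", "0.7")),
            "max_input": int(env.get("MISTRAL_MAX_INPUT", "4096")),
            "num_output": int(env.get("MISTRAL_NUM_OUTPUT", "512")),
        }
    # Default backend: OpenAI GPT
    return {"backend": "openai"}

print(select_backend({"USE_MISTRAL": "1"}))
```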

## Launch Locally

```bash
# Default (OpenAI)
python app.py

# Mistral backend
export USE_MISTRAL=1
export HUGGINGFACEHUB_API_TOKEN="hf_..."
python app.py

# Local GPU (no credits)
export USE_LOCAL_GPU=1
export LOCAL_MODEL="./models/distilgpt2"
python app.py
```

Open [http://localhost:7860](http://localhost:7860) to interact with the app: record your symptoms via the **Microphone** widget, then engage in follow-up Q&A until the agent returns a JSON diagnosis.

## MCP API Usage

Send a POST request to `/mcp` to call the `transcribe_and_respond` tool programmatically:

```bash
curl -X POST http://localhost:7860/mcp \
  -H "Content-Type: application/json" \
  -d '{"tool":"transcribe_and_respond","input":{"audio":"<base64_audio>","history":[]}}'
```
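The same call can be made from Python with only the standard library. This sketch assumes the payload shape shown in the curl example, with the `audio` field carrying base64-encoded audio bytes; the helper names are illustrative:

```python
import base64
import json
import urllib.request

def build_mcp_payload(audio_bytes: bytes, history=None) -> str:
    """Build the JSON body for the transcribe_and_respond tool,
    mirroring the curl example."""
    return json.dumps({
        "tool": "transcribe_and_respond",
        "input": {
            "audio": base64.b64encode(audio_bytes).decode("ascii"),
            "history": history or [],
        },
    })

def call_mcp(audio_bytes: bytes, url="http://localhost:7860/mcp") -> dict:
    """POST the payload to the /mcp endpoint (requires a running server)."""
    req = urllib.request.Request(
        url,
        data=build_mcp_payload(audio_bytes).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

# Build (but don't send) a payload from placeholder audio bytes
payload = json.loads(build_mcp_payload(b"RIFF...fake-wav-bytes"))
print(payload["tool"], len(payload["input"]["audio"]))
```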

## Project Structure

```
├── app.py                       # HF entrypoint
├── src/app.py                   # Core Gradio & agent logic
├── utils/llama_index_utils.py   # LLM predictor & indexing utils
├── data/icd10cm_tabular_2025/   # ICD-10 dataset
├── requirements.txt             # Dependencies
└── README.md                    # This file
```

# Hackathon Timeline

Here are the key dates for the Gradio Agents & MCP Hackathon:

* **May 20 – 26, 2025**: Pre-Hackathon announcements period.
* **June 2 – 10, 2025**: Official hackathon window (sign-ups remain open).
* **June 3, 2025 — 9 AM PST / 4 PM UTC**: Live kickoff YouTube event.
* **June 4 – 5, 2025**: Gradio Office Hours with MCP Support, MistralAI, LlamaIndex, the Custom Components team, and Sambanova.
* **June 10, 2025 — 11:59 PM UTC**: Final submission deadline.
* **June 11 – 16, 2025**: Judging period.
* **June 17, 2025**: Winners announced.

# Key Players

## Sponsors

* **Modal Labs**: \$250 GPU/CPU credits to every participant ([modal.com](https://modal.com)).
* **Hugging Face**: \$25 API credits to every participant ([huggingface.co](https://huggingface.co)).
* **Nebius**: \$25 API credits to the first 3,300 participants ([nebius.com](https://nebius.com)).
* **Anthropic**: \$25 API credits to the first 1,000 participants ([anthropic.com](https://www.anthropic.com)).
* **OpenAI**: \$25 API credits to the first 1,000 participants ([openai.com](https://openai.com)).
* **Hyperbolic Labs**: \$15 API credits to the first 1,000 participants ([hyperbolic.xyz](https://hyperbolic.xyz)).
* **MistralAI**: \$25 API credits to the first 500 participants ([mistral.ai](https://mistral.ai)).
* **Sambanova.AI**: \$25 API credits to the first 250 participants ([sambanova.ai](https://sambanova.ai)).

## Panel of Judges

Judging will be conducted by representatives from sponsor partners and the Hugging Face community team, including Modal Labs, MistralAI, LlamaIndex, Sambanova.AI, and Hugging Face. To be judged, make sure your project lives in the official [Agents-MCP-Hackathon organization](https://huggingface.co/Agents-MCP-Hackathon) on Hugging Face, not just in a personal space: join the organization, then create a new Space under it.

## Office Hours Hosts

* **Abubakar Abid** (MCP Support) — [@abidlabs](https://huggingface.co/abidlabs)
* **MistralAI Office Hours** — [Watch on YouTube](https://www.youtube.com/watch?v=TkyeUckXc-0)
* **LlamaIndex Office Hours** — [Watch on YouTube](https://www.youtube.com/watch?v=Ac1sh8MTQ2w)
* **Custom Components Office Hours** — [Watch on YouTube](https://www.youtube.com/watch?v=DHskahJ2e-c)
* **Sambanova Office Hours** — [Watch on YouTube](https://www.youtube.com/watch?v=h82Z7qcjgnU)

## Primary Organizers

* **Yuvraj Sharma (Yuvi)** (@yvrjsharma) — Machine Learning Engineer & Developer Advocate, Gradio Team at Hugging Face
* **Abubakar Abid** (@abidlabs) — Developer Advocate & MCP Support Lead at Hugging Face
* **Gradio Team at Hugging Face** — Core organizing team providing platform infrastructure, logistics, and community coordination

# Resources

* **Hackathon Org & Registration**: [Agents-MCP-Hackathon](https://huggingface.co/Agents-MCP-Hackathon)
* **Discord**: [discord.gg/agents-mcp-hackathon](https://discord.gg/agents-mcp-hackathon)
* **Slides from Kickoff**: [PDF](https://huggingface.co/spaces/Agents-MCP-Hackathon/README/blob/main/Gradio%20x%20Agents%20x%20MCP%20Hackathon.pdf)
* **Code of Conduct**: [Contributor Covenant](https://huggingface.co/code-of-conduct)
* **Submission Guidelines**: See "Submission Guidelines" on the hackathon page
* **MCP Guide**: [How to Build an MCP Server](https://huggingface.co/blog/gradio-mcp)
* **Gradio Docs**: [https://www.gradio.app/docs](https://www.gradio.app/docs)
* **LlamaIndex Docs**: [https://llamaindex.ai/docs](https://llamaindex.ai/docs)
* **Mistral Model Hub**: [https://huggingface.co/mistral-ai/mistral-small](https://huggingface.co/mistral-ai/mistral-small)

# About the Author

**Graham Paasch** is an AI realist passionate about the coming AI revolution.

* LinkedIn: [https://www.linkedin.com/in/grahampaasch/](https://www.linkedin.com/in/grahampaasch/)
* YouTube: [https://www.youtube.com/channel/UCg3oUjrSYcqsL9rGk1g_lPQ](https://www.youtube.com/channel/UCg3oUjrSYcqsL9rGk1g_lPQ)

Graham is currently looking for work. Inspired by Leopold Aschenbrenner's "AI Situational Awareness" ([https://situational-awareness.ai/](https://situational-awareness.ai/)), he believes AI will become a multi-trillion-dollar industry over the next decade — what we're seeing now is the equivalent of ARPANET in the early days of the internet. He's committed to aligning his work with this vision to stay at the forefront of the AI revolution.