Update README to reflect audio-driven pipeline and multi-backend LLM support

- Add project title, description, and metadata for HF Spaces
- Document features: Whisper ASR, interactive Q&A agent, multi-backend LLM switch, ICD-10 mapping, MCP endpoint
- Provide explicit setup instructions, env var table, and local launch options
- Detail MCP API usage and project structure
- Include contribution and prize qualification notes
README.md (CHANGED)

@@ -8,174 +8,97 @@ sdk_version: 5.33.0
  app_file: app.py
  pinned: false
  license: apache-2.0
- short_description: an MCP Tool for Symptom-to-ICD Diagnosis Mapping.
  tags:
  - mcp-server-track
  ---
- A
-
- # MedCodeMCP – an MCP Tool for Symptom-to-ICD Diagnosis Mapping
-
- ## MVP Scope
-
- - Accept a patient’s symptom description (free-text input).
- - Output a structured JSON with a list of probable diagnoses, each including:
-   - ICD-10 code
-   - Diagnosis name
-   - Confidence score
- - Handle a subset of common symptoms and return the top 3–5 likely diagnoses.
-
- ## How It Works
-
- ### Input Interface
-
- - Gradio-based demo UI for testing:
-   - Single text box for symptoms (e.g., “chest pain and shortness of breath”).
- - Primary interface is programmatic (an MCP client calls the server).
-
- ### Processing Logic
-
- - Leverage an LLM (e.g., OpenAI GPT-4 or Anthropic Claude) to parse symptoms and suggest diagnoses.
- - Prompt example:
-   > “The patient reports: {symptoms}. Provide a JSON list of up to 5 possible diagnoses, each with an ICD-10 code and a confidence score between 0 and 1. Use official ICD-10 names and codes.”
- - Recent experiments with medical foundation models (e.g., Google’s Med-PaLM/MedGemma) show they can identify relevant diagnosis codes via prompt-based reasoning ([medium.com](https://medium.com)).
- - Using GPT-4/Claude in the loop enables rapid development and high-quality suggestions ([publish0x.com](https://publish0x.com)).
-
- ### Confidence Scoring
-
- - Instruct the LLM to assign a subjective probability (0–1) for each diagnosis.
- - Accept approximate confidences for the MVP.
- - Alternative: rank by output order (first = highest confidence); see the sketch after this list.
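By way of illustration only (this is not part of the original MVP code), a rank-based fallback could be as simple as:

```python
def rank_to_confidence(diagnoses, top=0.9, step=0.15):
    """Assign decreasing pseudo-confidences by output order (first = highest)."""
    return [
        {"diagnosis": name, "confidence": round(max(top - i * step, 0.05), 2)}
        for i, name in enumerate(diagnoses)
    ]

# Example: the first-listed diagnosis gets 0.9, the second 0.75, and so on.
print(rank_to_confidence(["Unstable angina", "Pneumonia, unspecified organism"]))
```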
-
- ### ICD-10 Code Mapping
-
- - Trust the LLM’s knowledge of common ICD-10 codes (e.g., chest pain → R07.9, heart attack → I21.x).
- - Sanity-check (see the sketch after this list):
-   - Maintain a small dictionary of common ICD-10 codes.
-   - Use regex to verify code format.
-   - Flag or adjust codes that don’t match known patterns.
- - Future improvement: integrate a full ICD-10 lookup list for validation.
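A minimal sketch of the sanity-check idea described above; the dictionary entries, regex, and helper name are illustrative assumptions rather than the project's actual code:

```python
import re

# Hypothetical mini-dictionary of common ICD-10 codes used for spot checks.
KNOWN_CODES = {
    "R07.9": "Chest pain, unspecified",
    "I21.9": "Acute myocardial infarction, unspecified",
    "J18.9": "Pneumonia, unspecified organism",
}

# Approximate ICD-10 shape: a letter, two digits, then an optional dot and 1-4 more characters.
ICD10_PATTERN = re.compile(r"^[A-Z][0-9]{2}(\.[0-9A-Z]{1,4})?$")

def sanity_check(code: str) -> str:
    """Return 'known', 'valid-format', or 'flagged' for an LLM-suggested code."""
    if code in KNOWN_CODES:
        return "known"
    if ICD10_PATTERN.match(code):
        return "valid-format"
    return "flagged"

print(sanity_check("I20.0"))   # valid-format
print(sanity_check("banana"))  # flagged
```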
-
- ### Alternate Approach
-
- - Use an open model fine-tuned for ICD coding (e.g., Clinical BERT on Hugging Face) to predict top ICD-10 codes from clinical text.
- - Requires more coding and possibly a GPU, but feasible.
- - For the hackathon MVP, prioritize the API-based approach with GPT/Claude ([huggingface.co](https://huggingface.co)).
-
- ### Output Format
-
- - JSON structure for easy agent parsing. Example:
- ```json
- {
-   "diagnoses": [
-     {
-       "icd_code": "I20.0",
-       "diagnosis": "Unstable angina",
-       "confidence": 0.85
-     },
-     {
-       "icd_code": "J18.9",
-       "diagnosis": "Pneumonia, unspecified organism",
-       "confidence": 0.60
-     }
-   ]
- }
- ```
-
- * Input: “chest pain and shortness of breath”
- * Output: Cardiac-related issues (e.g., angina/MI) and respiratory causes, each with confidence estimates.
- * Structured output aligns with MCP tool requirements for downstream agent reasoning.
-
- ## Gradio MCP Integration
-
- * Implement logic in `app.py` of a Gradio Space.
- * Tag the README with `mcp-server-track` as required by the hackathon.
- * Follow the “Building an MCP Server with Gradio” guide:
-   * Use Gradio SDK 5.x.
-   * Define a tool function with metadata for agent discovery.
-   * Expose a prediction endpoint.
-
- ### Example Gradio Definition (simplified)
-
- ```python
- import json
-
- import gradio as gr
- import openai
-
- def symptom_to_diagnosis(symptoms: str) -> dict:
-     prompt = f"""The patient reports: {symptoms}. Provide a JSON list of up to 5 possible diagnoses, each with an ICD-10 code and a confidence score between 0 and 1. Use official ICD-10 names and codes."""
-     response = openai.ChatCompletion.create(
-         model="gpt-4",
-         messages=[{"role": "system", "content": prompt}],
-         temperature=0.2,
-     )
-     # Parse the response content as JSON
-     return json.loads(response.choices[0].message.content)
-
- demo = gr.Interface(
-     fn=symptom_to_diagnosis,
-     inputs=gr.Textbox(placeholder="Enter symptoms here..."),
-     outputs=gr.JSON(),
-     title="MedCodeMCP Symptom-to-ICD Mapper",
- )
-
- demo.launch()
- ```
-
- * Ensure MCP metadata is included so an external agent can discover and call `symptom_to_diagnosis`.
-
- ## User Demo (Client App)
-
- * Create a separate Gradio Space or local script that:
-   * Calls the MCP server endpoint.
-   * Renders the JSON result in a user-friendly format.
- * Optionally record a video demonstration:
-   * Show an agent (e.g., Claude-2 chatbot) calling the MCP tool.
-   * Verify end-to-end functionality.
-
- ## MVP Development Steps
-
- 1. **Set Up Gradio Space**
-
-    * Tag the README with `mcp-server-track`.
-
-    * Call the GPT-4/Claude API with a JSON-output prompt.
-    * Parse the model’s JSON response into a Python dictionary (see the sketch after this list).
-    * Sanitize and validate the JSON output.
-    * Fallback: rule-based approach or offline model for demo cases if API limits are reached.
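To make the parse-and-sanitize step concrete, a minimal, illustrative sketch (the helper name and field checks are assumptions, not the project's actual code):

```python
import json

def parse_diagnoses(raw: str) -> dict:
    """Parse the LLM reply and keep only well-formed diagnosis entries."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        # Fall back to an empty result rather than crashing the demo.
        return {"diagnoses": []}

    # Accept either a bare list or an object with a "diagnoses" key.
    items = data if isinstance(data, list) else data.get("diagnoses", [])

    cleaned = []
    for item in items:
        if not isinstance(item, dict):
            continue
        code = item.get("icd_code", "")
        name = item.get("diagnosis", "")
        conf = item.get("confidence", 0)
        if code and name and isinstance(conf, (int, float)) and 0 <= conf <= 1:
            cleaned.append({"icd_code": code, "diagnosis": name, "confidence": float(conf)})
    return {"diagnoses": cleaned}
```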
-
-    * Rank by output order.
-    * Document confidence methodology in README.
-
-    * Option B: Local script using `requests` to call the deployed Space’s prediction API.
-    * Prepare a screen recording illustrating agent invocation.
-
-    * Provide example usage and sample inputs/outputs.
-
  app_file: app.py
  pinned: false
  license: apache-2.0
+ short_description: an MCP Tool for Audio‑Driven Symptom-to-ICD Diagnosis Mapping.
  tags:
  - mcp-server-track
+ - @MistralTeam
  ---
+ A voice‑enabled medical assistant that takes patient audio complaints, engages in follow‑up questions, and returns structured ICD‑10 diagnosis suggestions via an MCP endpoint.
+
+ # Features
+
+ - **Audio input & ASR**: Use Whisper to transcribe real‑time patient audio (e.g. “I’ve had a dry cough for three days”); see the sketch after this list.
+ - **Interactive Q&A agent**: The LLM asks targeted clarifications (“Is your cough dry or productive?”) until ready to diagnose.
+ - **Multi‑backend LLM**: Switch dynamically between OpenAI GPT, Mistral (HF), or any local transformers model via env flags.
+ - **ICD‑10 mapping**: Leverage LlamaIndex to vector‑retrieve the most probable ICD‑10 codes with confidence scores.
+ - **MCP‑server ready**: Exposes a `/mcp` REST endpoint for seamless integration with agent frameworks.
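As a rough illustration of the ASR step only, the snippet below assumes the open-source `openai-whisper` package and a placeholder audio file; the Space itself may load Whisper differently (e.g. via `transformers` or the Gradio audio component):

```python
import whisper

# Load a small Whisper checkpoint; larger checkpoints trade speed for accuracy.
model = whisper.load_model("base")

# Transcribe a recorded patient complaint (placeholder file name).
result = model.transcribe("patient_complaint.wav")
print(result["text"])  # e.g. "I've had a dry cough for three days"
```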
+
+ # Getting Started
+
+ ## Clone & Install
+
+ ```bash
+ git clone https://huggingface.co/spaces/gpaasch/Grahams_Gradio_Agents_MCP_Hackathon_2025_Submission.git
+ cd Grahams_Gradio_Agents_MCP_Hackathon_2025_Submission
+ python3 -m venv .venv && source .venv/bin/activate
+ pip install -r requirements.txt
+ ```
+
+ ## Environment Variables
+
+ | Name | Description | Default |
+ |------|-------------|---------|
+ | `OPENAI_API_KEY` | OpenAI API key for GPT calls | none (required) |
+ | `HUGGINGFACEHUB_API_TOKEN` | HF token for Mistral/inference models | none (required for Mistral) |
+ | `USE_LOCAL_GPU` | Set to `1` to use a local transformers model (no credits) | `0` |
+ | `LOCAL_MODEL` | Path or HF ID of the local model (e.g. `distilgpt2`) | `gpt2` |
+ | `USE_MISTRAL` | Set to `1` to use Mistral via HF instead of OpenAI | `0` |
+ | `MISTRAL_MODEL` | HF ID for the Mistral model (`mistral-small/medium/large`) | `mistral-large` |
+ | `MISTRAL_TEMPERATURE` | Sampling temperature for Mistral | `0.7` |
+ | `MISTRAL_MAX_INPUT` | Max tokens for the input prompt | `4096` |
+ | `MISTRAL_NUM_OUTPUT` | Max tokens to generate | `512` |
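To show how these flags fit together, here is an illustrative selection routine using only the variables and defaults listed above; it is a sketch, not the Space's actual backend-switching code:

```python
import os

def pick_backend() -> str:
    """Illustrative mapping from the env flags above to a backend choice."""
    if os.getenv("USE_LOCAL_GPU", "0") == "1":
        # Local transformers model (e.g. LOCAL_MODEL="./distilgpt2"); no API credits used.
        return "local:" + os.getenv("LOCAL_MODEL", "gpt2")
    if os.getenv("USE_MISTRAL", "0") == "1":
        # Mistral via Hugging Face; requires HUGGINGFACEHUB_API_TOKEN.
        return "mistral:" + os.getenv("MISTRAL_MODEL", "mistral-large")
    # Default backend: OpenAI GPT; requires OPENAI_API_KEY.
    return "openai"

print(pick_backend())
```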
+
+ ## Launch Locally
+
+ ```bash
+ # Option A: Default (OpenAI)
+ python app.py
+
+ # Option B: Mistral backend
+ export USE_MISTRAL=1
+ export HUGGINGFACEHUB_API_TOKEN="hf_..."
+ python app.py
+
+ # Option C: Local GPU (no credits)
+ export USE_LOCAL_GPU=1
+ export LOCAL_MODEL="./distilgpt2"
+ python app.py
+ ```
+
+ Open http://localhost:7860 to:
+
+ 1. Record your symptoms via the **Microphone** widget.
+ 2. Engage in follow‑up Q&A until the agent returns a JSON diagnosis.
+
+ ## MCP API Usage
+
+ Send a POST to `/mcp` to call the `transcribe_and_respond` tool programmatically:
+
+ ```bash
+ curl -X POST http://localhost:7860/mcp \
+   -H "Content-Type: application/json" \
+   -d '{"tool":"transcribe_and_respond","input":{"audio": "<base64_audio>", "history": []}}'
+ ```
+
+ The response will be a JSON chat history, ending with your final ICD‑10 suggestions.
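Equivalently, from Python: a sketch that mirrors the request shape of the curl call above (`sample.wav` is a placeholder file name):

```python
import base64
import requests

# Encode a recorded complaint as base64, matching the curl payload above.
with open("sample.wav", "rb") as f:
    audio_b64 = base64.b64encode(f.read()).decode("utf-8")

payload = {
    "tool": "transcribe_and_respond",
    "input": {"audio": audio_b64, "history": []},
}

resp = requests.post("http://localhost:7860/mcp", json=payload, timeout=120)
resp.raise_for_status()
print(resp.json())  # chat history ending with ICD-10 suggestions
```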
+
+ # Project Structure
+
+ ```
+ ├── app.py                     # Root wrapper (HF entrypoint)
+ ├── src/
+ │   └── app.py                 # Core Gradio & agent logic
+ ├── utils/
+ │   └── llama_index_utils.py   # LLM predictor & indexing utils
+ ├── data/
+ │   └── icd10cm_tabular_2025/  # ICD-10 dataset
+ ├── requirements.txt           # Dependencies
+ └── README.md                  # This file
+ ```
+
+ # Contributing & Support
+
+ - Open an issue or discussion on the [Hugging Face Space](https://huggingface.co/spaces/gpaasch/Grahams_Gradio_Agents_MCP_Hackathon_2025_Submission/discussions).
+ - Tag `@MistralTeam` to qualify for the $2,000 Mistral prize.
+ - Post on Discord in the **#hackathon** channel for live help.
+
+ ---