gpaasch committed on
Commit
c0a6243
·
1 Parent(s): e5e6a83

interactive questioning is key and demos the unique capabilities of ai

Files changed (1)
docs/interactive_questioning.md +107 -0
docs/interactive_questioning.md ADDED
@@ -0,0 +1,107 @@
Interactive questioning is essential. You cannot map raw user language straight to an ICD code; you must guide the user through a mini-diagnostic interview. Here’s how to build that:

1. **Establish a Symptom Ontology Layer**
• Extract high-level symptom categories from ICD (e.g., “cough,” “shortness of breath,” “chest pain”).
• Group related codes under each category. For example:

```
Cough:
– R05.9: Cough, unspecified
– R05.1: Acute cough
– R05.3: Chronic cough
– J41.x: Chronic bronchitis codes
– J00: Acute nasopharyngitis (common cold) when the cough is a minor part of a URI
```

• Define which attributes distinguish these codes (duration, intensity, quality, and associated features such as sputum, fever, or smoking history); one such entry is sketched below.

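Here is what a single entry of that ontology layer might look like, following the JSON mapping idea from the implementation outline (a minimal sketch; the field names are illustrative, not a fixed schema):

```json
{
  "cough": {
    "codes": ["R05.9", "R05.1", "R05.3", "J41.0", "J00"],
    "discriminators": [
      { "attribute": "duration", "values": ["<3 weeks", ">=8 weeks"] },
      { "attribute": "quality", "values": ["dry", "productive"] },
      { "attribute": "fever", "values": ["yes", "no"] },
      { "attribute": "smoking_history", "values": ["yes", "no"] }
    ]
  }
}
```
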
2. **Design Follow-Up Questions for Each Branch**
• For each high-level category, list the key discriminating questions. Example for “cough”:

* “How long have you been coughing?” (acute vs. chronic)
* “Is it dry or productive?” (a productive cough suggests bronchitis or pneumonia)
* “Are you experiencing fever or chills?” (points to infection rather than a simple chronic cough)
* “Do you smoke or have exposure to irritants?” (chronic bronchitis codes)
* “Any history of heart disease or fluid retention?” (a cardiac cough maps to different codes)

• Use those discriminators to differentiate among the codes grouped under “cough”; one way to wire questions to discriminators is sketched after this list.

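A lightweight way to encode that wiring is a plain mapping from each question to the attribute it resolves and the codes each answer rules out (a sketch in Python; the code lists are illustrative):

```python
# Maps each follow-up question to the discriminator it resolves and the
# candidate codes each answer eliminates. Illustrative values only.
COUGH_QUESTIONS = {
    "How long have you been coughing?": {
        "attribute": "duration",
        "eliminates": {
            "<3 weeks": ["R05.3", "J41.0"],  # short duration rules out chronic codes
            ">=8 weeks": ["R05.1", "J00"],   # long duration rules out acute codes
        },
    },
    "Is it dry or productive?": {
        "attribute": "quality",
        "eliminates": {
            "dry": ["J41.0"],                # chronic bronchitis is typically productive
            "productive": [],
        },
    },
}
```
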
3. **LLM-Powered Question Sequencer**
• Prompt engineering: give the LLM the category, its subtree of possible codes, and instruct it to choose the next most informative question.
• At run time, take the user’s raw input and identify the nearest symptom category (via embeddings or keyword matching).
• Ask the LLM to generate the “best next question” given:

* The set of candidate codes under that category
* The user’s answers so far
• Continue until the candidate list narrows to one code or a small handful, then output confidence scores based on tree depth and answer clarity. A sketch of the sequencer call follows below.

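A minimal sketch of the sequencer call, assuming the OpenAI Python client (the prompt wording mirrors the template in the prototype tips; swap in a local LLM if you prefer):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def generate_question(category: str, candidate_codes: list[str],
                      user_answers: list[tuple[str, str]]) -> str:
    """Ask the LLM for the single most informative follow-up question."""
    prompt = (
        f"Given these potential codes for '{category}': {candidate_codes}, "
        f"and these question/answer pairs so far: {user_answers}, "
        "what is the single most informative follow-up question to "
        "distinguish among them? Reply with the question only."
    )
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content.strip()
```
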
4. **Implementation Outline**

1. **Data Preparation**

* Parse the ICD-10 XML or CSV into a hierarchical structure.
* For each code, extract its description and synonyms.
* Build a JSON mapping: `{ category: { codes: [...], discriminators: [...] } }`; a sketch of this step follows below.

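A sketch of the data-preparation step, assuming a CSV export named `icd10.csv` with `code`, `description`, and `synonyms` columns (the file name and columns are assumptions about your export):

```python
import csv
import json
from collections import defaultdict

# category -> { codes: [...], discriminators: [...] }
mapping = defaultdict(lambda: {"codes": [], "discriminators": []})

with open("icd10.csv", newline="") as f:
    for row in csv.DictReader(f):
        # Real category assignment is the hard part; a keyword match on the
        # description stands in for the full ontology work here.
        desc = row["description"].lower()
        for category in ("cough", "chest pain", "fever", "headache"):
            if category in desc:
                mapping[category]["codes"].append({
                    "code": row["code"],
                    "description": row["description"],
                    "synonyms": row.get("synonyms", "").split(";"),
                })
# Discriminators are curated by hand per step 1, then merged in.

with open("symptom_ontology.json", "w") as f:
    json.dump(mapping, f, indent=2)
```
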
2. **Symptom Category Detection**

* Embed the user’s free text (“I have a cough”) with an embedding model (e.g., sentence-transformers).
* Compare it against embeddings of category keywords (“cough,” “headache,” “rash,” …).
* Select the top-scoring category; see the sketch after this list.

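A sketch of the detector with sentence-transformers (the model name is just a common lightweight default):

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")
CATEGORIES = ["cough", "headache", "rash", "chest pain", "fever"]
category_embs = model.encode(CATEGORIES, convert_to_tensor=True)

def detect_category(user_text: str) -> str:
    """Return the symptom category closest to the user's free text."""
    query_emb = model.encode(user_text, convert_to_tensor=True)
    scores = util.cos_sim(query_emb, category_embs)[0]
    return CATEGORIES[int(scores.argmax())]

print(detect_category("I have a cough"))  # -> "cough"
```
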
3. **Interactive Loop**

```python
# The pseudocode made concrete; generate_question, filter_candidates,
# and confidence are sketched elsewhere in this doc.
total = len(candidate_codes)  # initial candidate count
while True:
    question = generate_question(category, candidate_codes, user_answers)
    answer = get_input(question)
    user_answers.append((question, answer))
    # Drop codes that are incompatible with the latest answer
    candidate_codes = filter_candidates(candidate_codes, question, answer)
    if len(candidate_codes) == 1 or confidence(candidate_codes, total) >= THRESHOLD:
        break
```

* Filtering rules can be simple: if the user reports a cough lasting under three weeks, eliminate the chronic cough codes; if it is productive, eliminate the dry-cough codes, and so on.
* Confidence could be measured by how many codes remain or by how decisive the answers are. Minimal versions of both helpers are sketched below.

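Minimal versions of the two helpers used in the loop (sketches; `QUESTION_ATTRIBUTES` is an assumed lookup from question text to discriminator attribute, like `COUGH_QUESTIONS` above, and `attrs` is an assumed per-code field):

```python
def filter_candidates(candidates: list[dict], question: str, answer: str) -> list[dict]:
    """Keep the codes whose discriminator value is compatible with the answer."""
    attribute = QUESTION_ATTRIBUTES.get(question)  # e.g. "duration"
    if attribute is None:
        return candidates
    kept = [c for c in candidates
            if attribute not in c.get("attrs", {})  # attribute unknown: keep the code
            or c["attrs"][attribute] == answer]     # attribute matches the answer: keep
    return kept or candidates  # never filter down to an empty list

def confidence(candidates: list[dict], total: int) -> float:
    """Crude confidence: the fewer codes remain, the more confident we are."""
    return 1.0 - (len(candidates) - 1) / max(total - 1, 1)
```
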
4. **Final Mapping and Output**

* Once the list is reduced to a single code (or a top three), return JSON:

```json
{
  "code": "R05.1",
  "description": "Acute cough",
  "confidence": 0.87,
  "asked_questions": [
    {"q":"How long have you been coughing?","a":"2 days"},
    {"q":"Is it dry or productive?","a":"Dry"}
  ]
}
```

5. **Prototype Tips for the Hackathon**
• Hard-code a small set of categories (e.g., cough, chest pain, fever, headache) and their discriminators to demonstrate the method.
• Use OpenAI’s GPT-4 or a local LLM to generate the next question:

```
“Given these potential codes: [list], and these answers: […], what is the single most informative follow-up question to distinguish among them?”
```

• Keep the conversation state on the backend (in Python or Node). Each HTTP call from the front end includes the fields below (a minimal endpoint sketch follows the list):

* `session_id`
* `category`
* `candidate_code_ids`
* `previous_qas`

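A minimal backend sketch with FastAPI, holding session state in memory and reusing the `generate_question` helper from the sequencer sketch (the framework choice, route name, and in-memory `SESSIONS` store are all assumptions):

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
SESSIONS: dict[str, "TurnRequest"] = {}  # session_id -> latest turn; use a real store in production

class TurnRequest(BaseModel):
    session_id: str
    category: str
    candidate_code_ids: list[str]
    previous_qas: list[dict]  # [{"q": ..., "a": ...}]

@app.post("/next-question")
def next_question(req: TurnRequest):
    SESSIONS[req.session_id] = req  # persist state between turns
    qas = [(qa["q"], qa["a"]) for qa in req.previous_qas]
    question = generate_question(req.category, req.candidate_code_ids, qas)
    return {"question": question}
```
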
6. **Why This Wins**
– Demonstrates reasoning, not mere keyword lookup.
– Shows the AI’s ability to replicate a mini clinical interview.
– Leverages the full ICD hierarchy while handling user imprecision.
– Judges see an interactive, dynamic tool rather than a static lookup.

Go build the symptom ontology JSON, implement the candidate-filtering logic, then call the LLM to decide the follow-up questions. By the end of the hackathon week you’ll have a working demo that asks “How long, how severe, any associated features?” and maps to the right code with confidence.