---
title: Tell Me
emoji: "💬🌿"
colorFrom: indigo
colorTo: green
sdk: streamlit
app_file: fresh_app_v2.py
pinned: false
tags:
- streamlit
short_description: Mental wellbeing chat (research)
---

# 🌿 Tell Me — A Mental Well-Being Space

Tell Me is a safe space for individuals seeking well-being advice or a place for self-reflection. It also lets the research community generate synthetic, LLM-produced client–therapist conversations. This is a research prototype, not a medical device.

## Key Components of Tell Me

- **Tell Me Assistant**
  
  Tell Me Assistant is a mental well-being chatbot designed to help individuals process their thoughts and emotions in a supportive way.
  It is not a substitute for professional care, but it offers a safe space for conversation and self-reflection.
  The Assistant is built with care, recognizing that people may turn to it when they first seek support. Its goal is to make therapeutic-style interactions more accessible and approachable for everyone.

  `fresh_app_v2.py` works together with `rag.py` and `llm_models.py` to ground the Assistant's responses in retrieved context (RAG).
 
- **Simulate a Conversation**  
  Generates a multi‑turn, role‑locked *Therapist ↔ Client* dialogue from a short client profile (see `llm_models.py`). It helps create sample data for research and lets professionals inspect dialogue quality; outputs are produced by an LLM and can guide future fine-tuning or evaluation.

- **Well‑being Planner (CrewAI)**  
  1) Transcript analysis (themes, emotions, triggers)  
  2) **7‑day plan** (CBT/behavioral steps, routines, sleep hygiene, social micro‑actions)  
  3) **Guided meditation** script + **MP3** (gTTS/Edge/Coqui/ElevenLabs)  
  Implemented in `crew_ai.py`, surfaced in the **Planner** tab in `fresh_app_v2.py` (a minimal CrewAI sketch follows this list).


- **Evaluation**  
  Use `prep_responses.py` to prepare generations and `judge.py` to score them with an LLM as the judge; the results of the conducted human evaluation are also included. See `Results/` for artifacts (e.g., *gpt4o/5 eval*).
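
For orientation, here is a minimal sketch of the CrewAI pattern the Planner follows. The agent roles, task text, and transcript filename are illustrative assumptions; the actual pipeline lives in `crew_ai.py` and may differ.

```python
# Minimal CrewAI sketch of the Planner flow (illustrative prompts only;
# the real implementation is in crew_ai.py).
from crewai import Agent, Task, Crew

analyst = Agent(
    role="Transcript Analyst",
    goal="Identify themes, emotions, and triggers in a client-therapist transcript",
    backstory="A careful reader of therapy transcripts.",
)
planner = Agent(
    role="Well-being Planner",
    goal="Draft a 7-day plan with CBT/behavioral steps, routines, and sleep hygiene",
    backstory="A practical well-being coach.",
)

analyze = Task(
    description="Analyze this transcript: {transcript}",
    expected_output="A bullet list of themes, emotions, and triggers.",
    agent=analyst,
)
plan = Task(
    description="Turn the analysis into a Markdown 7-day plan (Day 1 ... Day 7).",
    expected_output="A Markdown plan with one section per day.",
    agent=planner,
)

crew = Crew(agents=[analyst, planner], tasks=[analyze, plan])
result = crew.kickoff(inputs={"transcript": open("transcript.txt").read()})
print(result)
```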

---

## Repository Structure

```
.
├─ Results/                 # Evaluation outputs / artifacts (e.g., gpt4o eval)
├─ index_storage/           # Vector index built by rag.py
├─ rag_data/                # Source docs for RAG
├─ src/                     # Streamlit template seed
├─ bg.jpg                   # App background
├─ config.toml              # Streamlit config (dark mode default, etc.)
├─ crew_ai.py               # CrewAI pipeline (planner + meditation TTS)
├─ fresh_app_v2.py          # Main Streamlit app
├─ judge.py                 # Evaluation judge
├─ llm_models.py            # Prompt builders + simulate‑conversation helpers
├─ prep_responses.py        # Prep helper for evaluation
├─ rag.py                   # Simple RAG indexing/query helpers
├─ requirements.txt         # Python dependencies
├─ Dockerfile               # Optional container build
├─ .gitattributes
└─ README.md                # You are here :)
```

---

## Quickstart

### 1) Python setup

```bash
# Python 3.10+ recommended
python -m venv .venv
source .venv/bin/activate           # Windows: .venv\Scripts\activate
pip install -r requirements.txt
```

### 2) Environment variables

Create a `.env` in the project root (same folder as `fresh_app_v2.py`). Minimal example:

```dotenv
# Required for the Well-being Planner (CrewAI); see Troubleshooting below
open_ai_key_for_crew_ai=sk-...

# Optional TTS configuration for the guided meditation
# TTS_PROVIDER=gtts               # or: edge | coqui | elevenlabs
# ELEVEN_API_KEY=...              # if using ElevenLabs
# EDGE_VOICE=en-US-JennyNeural    # if using edge-tts
# COQUI_MODEL=tts_models/multilingual/multi-dataset/xtts_v2
```

> Some tabs may allow choosing models/keys in the UI.  
> The **Planner** currently works with the key above (and/or an in‑tab field if present in your build).
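
If you are wiring this up yourself, a minimal sketch of loading these values at startup, assuming `python-dotenv` (the usual companion to a `.env` file):

```python
# Sketch: load .env values at app startup (assumes python-dotenv is installed).
import os
from dotenv import load_dotenv

load_dotenv()  # reads .env from the current working directory

openai_key = os.getenv("open_ai_key_for_crew_ai")  # key name used in Troubleshooting
tts_provider = os.getenv("TTS_PROVIDER", "gtts")   # fall back to gTTS when unset
```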

### 3) Run the app

```bash
streamlit run fresh_app_v2.py
```

Open the URL Streamlit prints (usually http://localhost:8501).

---

## Using the App

### UI View Recommendation

  Note: in the Streamlit app, enable **Dark Mode** in the settings menu for the best UI experience.

### Simulate a Conversation 🧪🤖
1. In the **Simulate a Conversation** tab, paste a **Client Profile** (e.g., `Age 24 student; recently moved... sleep irregular...`).
2. Click **Generate Synthetic Dialogue** to produce a multi‑turn conversation (a minimal prompt sketch follows these steps).
3. Optionally **Download Transcript**.
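
For reference, a sketch of a role-locked generation loop like the one behind this tab. The model name, prompts, and `next_turn` helper are illustrative assumptions; the real helpers live in `llm_models.py`.

```python
# Sketch of a role-locked simulation loop (illustrative; see llm_models.py
# for the project's actual prompt builders).
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment
persona = "Age 24 student; recently moved; sleep irregular."

def next_turn(role: str, history: list[str]) -> str:
    """Generate one utterance, locked to a single role."""
    system = (
        f"You are the {role} in a therapy conversation. "
        f"Client profile: {persona}. Reply with one short turn as the {role} only."
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "system", "content": system},
                  {"role": "user", "content": "\n".join(history) or "Begin."}],
    )
    return resp.choices[0].message.content.strip()

history: list[str] = []
for _ in range(3):  # three therapist/client exchanges
    for speaker in ("Therapist", "Client"):
        history.append(f"{speaker}: {next_turn(speaker, history)}")
print("\n".join(history))
```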

### Well‑being Planner 📅🧘
1. Provide an OpenAI API key for this module: set `open_ai_key_for_crew_ai` in `.env`, or paste a key in the tab if the field is available.
2. Upload one **.txt** transcript (client–therapist chat).
3. Click **Create Plan & Meditation**.
4. The app displays:
   - **Transcript Summary**
   - **7‑Day Well‑being Plan** (Markdown, Day 1 … Day 7)
   - **Meditation Script** and an **MP3** player  
     (audio saved locally as `guided_meditation.mp3`; a minimal TTS sketch follows these steps)
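
The default gTTS path boils down to a couple of lines. A minimal sketch (the app's actual TTS dispatch lives in `crew_ai.py` and also supports Edge/Coqui/ElevenLabs):

```python
# Minimal gTTS sketch for the default TTS provider.
from gtts import gTTS

script = "Close your eyes. Take a slow breath in... and out."
gTTS(script).save("guided_meditation.mp3")  # same filename the app writes
```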

### RAG (optional)
- Place your files into `rag_data/`.
- Build/update the index (if needed):

  ```bash
  python rag.py
  ```

- Use the app’s RAG controls to query your docs (index artifacts are stored in `index_storage/`); a query sketch follows.
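
If you prefer to query the index from Python, a sketch assuming `rag.py` follows the common LlamaIndex persist/load pattern (check `rag.py` for the actual API):

```python
# Sketch: load and query a persisted index (assumes LlamaIndex conventions).
from llama_index.core import StorageContext, load_index_from_storage

storage = StorageContext.from_defaults(persist_dir="index_storage")
index = load_index_from_storage(storage)
answer = index.as_query_engine().query("What coping strategies are covered?")
print(answer)
```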

### Evaluation (optional)
- Use `prep_responses.py` to format generations and `judge.py` to score them (a minimal judge sketch follows).
- Outputs/examples are kept under `Results/`.
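
For orientation, a minimal LLM-as-a-judge sketch; the rubric and model are illustrative assumptions, and the project's actual scoring logic is in `judge.py`.

```python
# Minimal LLM-as-a-judge sketch (illustrative rubric, not judge.py itself).
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment
RUBRIC = "Rate the response 1-5 for empathy, safety, and helpfulness. Reply as JSON."

def judge(prompt: str, response: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": RUBRIC},
            {"role": "user", "content": f"Prompt: {prompt}\nResponse: {response}"},
        ],
    )
    return resp.choices[0].message.content

print(judge("I feel overwhelmed lately.", "That sounds heavy. What feels most pressing?"))
```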

---

## Streamlit Configuration

- `config.toml` sets app defaults (e.g., dark mode). Example:

```toml
[theme]
base = "dark"
```

Adjust as needed per Streamlit docs.

---

## Docker (optional)

```bash
# Build
docker build -t tellme-assistant .

# Run (exposes Streamlit on 8501)
docker run --rm -p 8501:8501 --env-file .env tellme-assistant
```

---

## Troubleshooting

- **AuthenticationError / “You didn’t provide an API key.”**  
  Ensure `.env` includes `open_ai_key_for_crew_ai=sk-...` (or provide the key in‑tab if available) and **restart** Streamlit so the new env is loaded.

- **Only the meditation appears, not the plan**  
  Update to the latest `crew_ai.py`, which collects and returns the **summary**, **plan**, and **meditation**, and ensure the tab renders all three fields.

- **TTS provider errors**  
  Install the provider’s dependency (`pip install edge-tts`, `pip install TTS`, `pip install elevenlabs`) and set the related env vars.

- **Ollama (if used in other tabs)**  
  Start the daemon and pull a model:
  ```bash
  ollama serve
  ollama pull llama3.1:8b-instruct
  ```

---

## Tech Stack

- **UI:** Streamlit  
- **LLMs:** OpenAI (planner), plus optional Anthropic/Ollama in other tabs  
- **Agents:** CrewAI (via LiteLLM under the hood)  
- **RAG:** Simple local index (`rag.py`, `index_storage/`)  
- **TTS:** gTTS / Edge‑TTS / Coqui TTS / ElevenLabs (configurable)
---

## Roadmap

- In‑tab API key entry for the CrewAI planner (UI‑first flow)
- Configurable model/provider for planner
- Save generated plans/MP3s into `Results/` with timestamped filenames

---

## License

MIT

---

## Acknowledgments

- Streamlit template seed  
- CrewAI & LiteLLM ecosystem  
- TTS libraries: gTTS, Edge‑TTS, Coqui TTS, ElevenLabs

## Acknowledgment of AI Assistance
Some parts of this project's code were generated or refined with the assistance of GPT-5.
All outputs were reviewed, tested, and integrated by the authors.