---
title: Tell Me
emoji: 💬🌿
colorFrom: indigo
colorTo: green
sdk: streamlit
app_file: fresh_app_v2.py
pinned: false
tags:
- streamlit
short_description: Mental wellbeing chat (research)
---
# 🌿 Tell Me — A Mental Well-Being Space
Tell Me is a safe space for individuals seeking well-being advice or a place for self-reflection. It also gives the research community a way to generate synthetic, LLM-simulated client–therapist data. This is a research prototype, not a medical device.
## Key Components of Tell Me
### Tell Me Assistant
Tell Me Assistant is a mental well-being chatbot designed to help individuals process their thoughts and emotions in a supportive way. It is not a substitute for professional care, but it offers a safe space for conversation and self-reflection. The Assistant is built with care, recognizing that people may turn to it in moments when they first need support. Its goal is to make such therapeutic-style interactions more accessible and approachable for everyone.
`fresh_app_v2.py` is interconnected with `rag.py` and `llm_models.py` to power the Assistant with context using RAG.
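For orientation, here is a minimal sketch of that retrieve-then-generate flow. The `retrieve_context` helper is hypothetical (the real retrieval logic lives in `rag.py`), and the model name is only an example:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def retrieve_context(question: str) -> str:
    """Hypothetical stand-in for the retrieval logic in rag.py."""
    # In the real app, this would query the vector index in index_storage/.
    return "...top-k passages from rag_data/..."


def assistant_reply(user_message: str) -> str:
    context = retrieve_context(user_message)
    messages = [
        {"role": "system",
         "content": "You are a supportive well-being assistant, not a clinician. "
                    f"Use this background context when helpful:\n{context}"},
        {"role": "user", "content": user_message},
    ]
    resp = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    return resp.choices[0].message.content
```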
### Simulate a Conversation
This generates a synthetic client–therapist conversation from a short client profile. It helps create sample data for research and lets professionals inspect the dialogue quality. Outputs are created by an LLM and can guide future fine-tuning or evaluation. The result is a multi‑turn, role‑locked Therapist ↔ Client dialogue built from a brief persona (see `llm_models.py`).
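As a rough illustration of a role-locked loop (the actual prompt builders live in `llm_models.py`; the prompts, model, and turn count below are placeholders, and this simplified version passes only the previous turn rather than the full history):

```python
from openai import OpenAI

client = OpenAI()


def simulate_dialogue(profile: str, turns: int = 4) -> list[dict]:
    """Alternate client/therapist turns, each role locked by its own system prompt."""
    therapist_sys = "You are a therapist. Reply with one short therapeutic turn."
    client_sys = f"You are a client with this profile: {profile}. Reply as the client."

    last = "Hello, what brings you in today?"  # therapist opener
    transcript = [{"role": "Therapist", "text": last}]
    for i in range(turns):
        sys = client_sys if i % 2 == 0 else therapist_sys
        speaker = "Client" if i % 2 == 0 else "Therapist"
        resp = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "system", "content": sys},
                      {"role": "user", "content": last}],
        )
        last = resp.choices[0].message.content
        transcript.append({"role": speaker, "text": last})
    return transcript
```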
### Well‑being Planner (CrewAI)
- Transcript analysis (themes, emotions, triggers)
- 7‑day plan (CBT/behavioral steps, routines, sleep hygiene, social micro‑actions)
- Guided meditation script + MP3 (gTTS/Edge/Coqui/ElevenLabs)
Implemented in `crew_ai.py`, surfaced in the Planner tab in `fresh_app_v2.py`.
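For context, here is a minimal sketch of the kind of CrewAI pipeline `crew_ai.py` implements; the agent roles and task wording are illustrative, not the actual prompts:

```python
from crewai import Agent, Task, Crew

analyst = Agent(
    role="Transcript Analyst",
    goal="Identify themes, emotions, and triggers in a therapy transcript",
    backstory="A careful clinical-language analyst.",
)
planner = Agent(
    role="Well-being Planner",
    goal="Draft a 7-day plan with CBT-style steps, routines, and sleep hygiene",
    backstory="A practical well-being coach.",
)

analyze = Task(
    description="Analyze this transcript: {transcript}",
    expected_output="A short summary of themes, emotions, and triggers.",
    agent=analyst,
)
plan = Task(
    description="Turn the analysis into a Day 1 ... Day 7 plan in Markdown.",
    expected_output="A 7-day well-being plan.",
    agent=planner,
)

# Tasks run sequentially by default; the plan task sees the analysis output.
crew = Crew(agents=[analyst, planner], tasks=[analyze, plan])
result = crew.kickoff(inputs={"transcript": "...client-therapist transcript..."})
print(result)
```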
### Evaluation
Use `prep_responses.py` and `judge.py` to prepare and score generations with an LLM as a judge, alongside the results of the conducted human evaluation; see `Results/` for artifacts (e.g., the GPT‑4o/5 eval).
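A minimal sketch of the LLM-as-a-judge pattern (the rubric and model below are placeholders; the actual criteria live in `judge.py`):

```python
import json
from openai import OpenAI

client = OpenAI()

RUBRIC = (
    "Score the assistant response from 1-5 on empathy, safety, and helpfulness. "
    'Return JSON like {"empathy": 4, "safety": 5, "helpfulness": 3, "rationale": "..."}.'
)


def judge(prompt: str, response: str) -> dict:
    resp = client.chat.completions.create(
        model="gpt-4o",
        response_format={"type": "json_object"},  # ask for parseable JSON
        messages=[
            {"role": "system", "content": RUBRIC},
            {"role": "user", "content": f"Prompt:\n{prompt}\n\nResponse:\n{response}"},
        ],
    )
    return json.loads(resp.choices[0].message.content)
```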
## Repository Structure
```
.
├─ Results/            # Evaluation outputs / artifacts (e.g., gpt4o eval)
├─ index_storage/      # Vector index built by rag.py
├─ rag_data/           # Source docs for RAG
├─ src/                # Streamlit template seed
├─ bg.jpg              # App background
├─ config.toml         # Streamlit config (dark mode default, etc.)
├─ crew_ai.py          # CrewAI pipeline (planner + meditation TTS)
├─ fresh_app_v2.py     # Main Streamlit app
├─ judge.py            # Evaluation judge
├─ llm_models.py       # Prompt builders + simulate-conversation helpers
├─ prep_responses.py   # Prep helper for evaluation
├─ rag.py              # Simple RAG indexing/query helpers
├─ requirements.txt    # Python dependencies
├─ Dockerfile          # Optional container build
├─ .gitattributes
└─ README.md           # You are here :)
```
## Quickstart
### 1) Python setup
```bash
# Python 3.10+ recommended
python -m venv .venv
source .venv/bin/activate   # Windows: .venv\Scripts\activate
pip install -r requirements.txt
```
### 2) Environment variables
Create a `.env` in the project root (same folder as `fresh_app_v2.py`). Minimal example:
```bash
# OpenAI key used by the CrewAI planner (see Troubleshooting)
open_ai_key_for_crew_ai=sk-...

# Optional TTS configuration for the guided meditation
# TTS_PROVIDER=gtts    # or: edge | coqui | elevenlabs
# ELEVEN_API_KEY=...   # if using ElevenLabs
# EDGE_VOICE=en-US-JennyNeural  # if using edge-tts
# COQUI_MODEL=tts_models/multilingual/multi-dataset/xtts_v2
```
Some tabs may allow choosing models/keys in the UI.
The Planner currently works with the key above (and/or an in‑tab field if present in your build).
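For reference, a minimal sketch of how such variables can be read at startup, assuming `python-dotenv` (the actual loading code in the app may differ):

```python
import os
from dotenv import load_dotenv

load_dotenv()  # reads .env from the current working directory

openai_key = os.getenv("open_ai_key_for_crew_ai")   # planner key (see Troubleshooting)
tts_provider = os.getenv("TTS_PROVIDER", "gtts")    # gtts | edge | coqui | elevenlabs
```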
### 3) Run the app
```bash
streamlit run fresh_app_v2.py
```
Open the URL Streamlit prints (usually http://localhost:8501).
## Using the App
### UI View Recommendation
Note: In the Streamlit app, make sure Dark Mode is selected in the settings to get the best UI experience.
### Simulate a Conversation 🧪🤖
- In that tab, paste a Client Profile (e.g., `Age 24 student; recently moved... sleep irregular...`).
- Click **Generate Synthetic Dialogue** to produce a multi‑turn conversation.
- Optionally **Download Transcript**.
### Well‑being Planner 📅🧘
- Provide an OpenAI API key to run this module (i.e., paste a key in the tab if the field is available).
- Upload one `.txt` transcript (client–therapist chat).
- Click **Create Plan & Meditation**.
- The app displays:
  - Transcript Summary
  - 7‑Day Well‑being Plan (Markdown, Day 1 … Day 7)
  - Meditation Script and an MP3 player (audio saved locally as `guided_meditation.mp3`)
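As an illustration, the MP3 step can be as simple as the following, assuming the default gTTS provider (the actual TTS dispatch lives in `crew_ai.py`):

```python
from gtts import gTTS

# Meditation script text produced by the planner (placeholder here)
script = "Take a slow breath in... and gently release..."
gTTS(text=script, lang="en").save("guided_meditation.mp3")
```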
### RAG (optional)
- Place your files into `rag_data/`.
- Build/update the index (if needed): `python rag.py`
- Use the app’s RAG controls to query your docs (index artifacts are stored in `index_storage/`).
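If you want to script this outside the app, here is a sketch of the build-and-query cycle using LlamaIndex; this assumes `rag.py` follows the common LlamaIndex pattern (the `index_storage/` layout suggests it, but check `rag.py` for the actual details):

```python
from llama_index.core import (
    SimpleDirectoryReader,
    StorageContext,
    VectorStoreIndex,
    load_index_from_storage,
)

# Build: read source docs and persist a vector index
docs = SimpleDirectoryReader("rag_data").load_data()
index = VectorStoreIndex.from_documents(docs)
index.storage_context.persist(persist_dir="index_storage")

# Query: reload the persisted index and ask a question
storage = StorageContext.from_defaults(persist_dir="index_storage")
index = load_index_from_storage(storage)
print(index.as_query_engine().query("What coping techniques are covered?"))
```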
### Evaluation (optional)
- Use `prep_responses.py` to format generations and `judge.py` to score them.
- Outputs/examples are kept under `Results/`.
## Streamlit Configuration
`config.toml` sets app defaults (e.g., dark mode). Example:
```toml
[theme]
base = "dark"
```
Adjust as needed per Streamlit docs.
## Docker (optional)
```bash
# Build
docker build -t tellme-assistant .

# Run (exposes Streamlit on 8501)
docker run --rm -p 8501:8501 --env-file .env tellme-assistant
```
## Troubleshooting
**AuthenticationError / “You didn’t provide an API key.”**
Ensure `.env` includes `open_ai_key_for_crew_ai=sk-...` (or provide the key in‑tab if available) and restart Streamlit so the new env is loaded.

**Only the meditation shows, not the plan**
Update to the latest `crew_ai.py`, which collects and returns the summary, plan, and meditation, and ensure the tab renders all three fields.

**TTS provider errors**
Install the provider’s dependency (`pip install edge-tts`, `pip install TTS`, `pip install elevenlabs`) and set the related env vars.

**Ollama (if used in other tabs)**
Start the daemon and pull a model: `ollama serve`, then `ollama pull llama3.1:8b-instruct`.
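If a tab talks to a local Ollama model, the call looks roughly like this with the official `ollama` Python client (using the model name pulled above; the prompt is a placeholder):

```python
import ollama

resp = ollama.chat(
    model="llama3.1:8b-instruct",
    messages=[{"role": "user", "content": "Suggest one small evening wind-down habit."}],
)
print(resp["message"]["content"])
```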
## Tech Stack
- UI: Streamlit
- LLMs: OpenAI (planner), plus optional Anthropic/Ollama in other tabs
- Agents: CrewAI (via LiteLLM under the hood)
- RAG: Simple local index (`rag.py`, `index_storage/`)
- TTS: gTTS / Edge‑TTS / Coqui TTS / ElevenLabs (configurable)
## Roadmap
- In‑tab API key entry for the CrewAI planner (UI‑first flow)
- Configurable model/provider for planner
- Save generated plans/MP3s into `Results/` with timestamped filenames
## License
MIT
## Acknowledgments
- Streamlit template seed
- CrewAI & LiteLLM ecosystem
- TTS libraries: gTTS, Edge‑TTS, Coqui TTS, ElevenLabs
## Acknowledgment of AI Assistance
Some parts of this project's code were generated or refined with the assistance of GPT-5. All outputs were reviewed, tested, and integrated by the authors.