Model Details
This repository contains only the LoRA adapter. For the merged model with a guided inference script, see ShivomH/Elixir-MentalHealth-3B.
Model Description
Elixir-MentalHealth is a fine-tuned version of Meta-Llama-3.2-3B-Instruct, adapted using QLoRA on a curated dataset of single-turn and multi-turn mental health support conversations. The model is designed to provide empathetic, safe, and supportive responses while maintaining clear professional boundaries.
⚠️ Disclaimer: This model is not a replacement for professional mental health services. Always seek help from licensed professionals in crisis situations.
Primary Use Cases:
- Mental health support chats
- Stress and anxiety management conversations
- Empathetic listening, encouragement and general guidance
- Psychoeducational tips (e.g., mindfulness, coping strategies, depression support)
Out-of-Scope Use (should NOT be used for):
- Medical diagnosis or treatment planning
- Emergency mental health intervention (e.g., suicide prevention crisis line replacement)
- Legal, financial, or unrelated domains
This model is best suited for research, prototyping, and supportive chatbot applications where professional disclaimers and human oversight are always present.
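Applications built on the model should enforce the boundaries above in code, not just in the system prompt. The sketch below is one hypothetical guardrail (the keyword list, `screen_input` function, and safety message are illustrative, not part of this repository): crisis-related inputs are routed to a fixed referral message instead of the model.

```python
# Hypothetical pre-model guardrail: intercept crisis language before it
# reaches the model. The keyword list and message text are illustrative only;
# production systems should use a dedicated safety classifier and localized
# crisis resources.
CRISIS_KEYWORDS = {"suicide", "kill myself", "self-harm", "end my life"}

SAFETY_MESSAGE = (
    "It sounds like you may be in crisis. Please reach out to a licensed "
    "professional or a local crisis line right away."
)

def screen_input(user_text):
    """Return a safety message for crisis input, or None if the text can be
    forwarded to the model."""
    lowered = user_text.lower()
    if any(keyword in lowered for keyword in CRISIS_KEYWORDS):
        return SAFETY_MESSAGE  # bypass the model entirely
    return None  # safe to forward to the model

# Example: crisis input is intercepted, ordinary input passes through
print(screen_input("I want to end my life") is not None)   # intercepted
print(screen_input("Work has been stressful") is None)     # forwarded
```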
How to Get Started with the Model
```python
# Load the base model and apply the LoRA adapter
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel, PeftConfig

lora_model = "ShivomH/Elixir-MentalHealth-LoRA"  # this adapter repository
base_model = "meta-llama/Llama-3.2-3B-Instruct"

# Load the adapter configuration
peft_config = PeftConfig.from_pretrained(lora_model)

# 4-bit quantization config, matching the QLoRA training setup
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# Load the base model
inference_model = AutoModelForCausalLM.from_pretrained(
    peft_config.base_model_name_or_path,
    quantization_config=bnb_config,
    device_map="auto",
    torch_dtype=torch.bfloat16,
)

# Load LoRA weights
inference_model = PeftModel.from_pretrained(inference_model, lora_model)

# Load tokenizer
inference_tokenizer = AutoTokenizer.from_pretrained(lora_model)
```
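Once loaded, the model is queried like any Llama 3.2 Instruct model: conversations are passed as a list of role/content messages. The sketch below (`format_llama3_prompt` is a hypothetical helper, not part of this repository) shows how such a list maps onto the Llama 3 chat layout; in practice, prefer `inference_tokenizer.apply_chat_template(...)`, which applies the template shipped with the tokenizer.

```python
# Illustrative only: render a messages list into the Llama 3 instruct prompt
# layout. Real code should use tokenizer.apply_chat_template instead of
# hand-building the string.
def format_llama3_prompt(messages):
    parts = ["<|begin_of_text|>"]
    for m in messages:
        parts.append(
            f"<|start_header_id|>{m['role']}<|end_header_id|>\n\n{m['content']}<|eot_id|>"
        )
    # Cue the model to generate the assistant turn
    parts.append("<|start_header_id|>assistant<|end_header_id|>\n\n")
    return "".join(parts)

messages = [
    {"role": "system", "content": "You are a supportive, empathetic listener."},
    {"role": "user", "content": "I've been feeling overwhelmed lately."},
]
prompt = format_llama3_prompt(messages)
```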
Dataset Details
- Dataset Source: ShivomH/MentalHealth-Support
- Size: 25,000 conversations
- Training Split: 23,750 (95%)
- Validation Split: 1,250 (5%)
- Multi-Turn Conversations: 16,000
- Long Single-Turn Conversations: 8,000
- Short Single-Turn Conversations: 1,000
- Total tokens: ~17M
- Mean tokens per conversation: ~700
- Data format: JSON Lines (.jsonl); each record is a messages list with role and content fields
General Details
- Developed by: Shivom Hatalkar
- Funded by: Shivom Hatalkar
- Model type: Causal language model (text generation)
- Language(s) (NLP): English
- License: llama3.2
- Base Model: meta-llama/Llama-3.2-3B-Instruct
Model Sources
- Repository: [More Information Needed]
- Paper [optional]: [More Information Needed]
- Demo [optional]: [More Information Needed]
Training Details
See the merged model page, ShivomH/Elixir-MentalHealth-3B, for full training details.
Results
See the merged model page, ShivomH/Elixir-MentalHealth-3B, to view the testing samples.
Model Examination [optional]
[More Information Needed]
Hardware
[More Information Needed]
Software
[More Information Needed]
Citation [optional]
BibTeX:
[More Information Needed]
APA:
[More Information Needed]
Framework versions
- PEFT 0.17.1
Model tree for ShivomH/Elixir-MentalHealth-LoRA
- Base model: meta-llama/Llama-3.2-3B-Instruct