---
license: mit
base_model: meta-llama/Llama-3.2-3B
tags:
- llama-3.2
- unsloth
- fine-tuned
- gguf
- doctor
- dental
- medical
- chat
- instruction-tuning
datasets:
- BirdieByte1024/doctor-dental-llama-qa
---

# 🦷 doctor-dental-implant-llama3.2-3B-full-model

This model is a fine-tuned version of `meta-llama/Llama-3.2-3B`, trained using the [Unsloth](https://github.com/unslothai/unsloth) framework on a domain-specific instruction dataset focused on **medical** and **dental implant conversations**.

The model has been optimized for **chat-style reasoning** in doctor–patient scenarios, particularly within the domain of **Straumann® dental implant systems**, as well as general medical question answering.

---

## 🔍 Model Details

- **Base model:** `meta-llama/Llama-3.2-3B`
- **Training framework:** [Unsloth](https://github.com/unslothai/unsloth) with LoRA + QLoRA support
- **Training format:** Conversational JSON with `{"from": "patient"/"doctor", "value": ...}` messages
- **Checkpoint format:** Fully merged model, usable as standard Hugging Face weights or as GGUF (Ollama / llama.cpp)
- **Tokenizer:** Inherited from the base model
- **Model size:** 3B parameters (efficient for consumer-grade inference)

---

## 📚 Dataset

This model was trained on:
- [`BirdieByte1024/doctor-dental-llama-qa`](https://huggingface.co/datasets/BirdieByte1024/doctor-dental-llama-qa) (loading sketch below)

The dataset contains synthetic and handbook-derived doctor–patient conversations focused on:
- Dental implant systems (e.g. surgical kits, guided procedures)
- General medical Q&A relevant to clinics and telemedicine
- Clinical assistant-style instruction following

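The dataset can be pulled directly from the Hub with the 🤗 `datasets` library. A minimal loading sketch; the split and column names are not documented on this card, so inspect the printed structure first:

```python
from datasets import load_dataset

# Minimal sketch: download the conversation dataset from the Hugging Face Hub.
# Split and column names are assumptions of this sketch; check the printout.
ds = load_dataset("BirdieByte1024/doctor-dental-llama-qa")
print(ds)  # available splits and their columns

first_split = next(iter(ds.values()))
print(first_split[0])  # one raw conversation record
```
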
---

## 💬 Prompt Format

The model expects a **chat-style format**:

```json
{
  "conversation": [
    { "from": "patient", "value": "What are the advantages of guided implant surgery?" },
    { "from": "doctor", "value": "Guided surgery improves accuracy, safety, and esthetic outcomes." }
  ]
}
```

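For runtimes that expect the usual `{"role", "content"}` message schema, the `{"from", "value"}` turns can be remapped with a small helper. This is an illustrative sketch; the patient→user / doctor→assistant mapping is an assumption of the sketch, not something the card guarantees:

```python
# Illustrative helper: convert a {"from", "value"} conversation record into the
# {"role", "content"} message list used by most chat runtimes.
# The role mapping below is an assumption of this sketch.
ROLE_MAP = {"patient": "user", "human": "user", "doctor": "assistant", "assistant": "assistant"}

def to_chat_messages(record):
    return [
        {"role": ROLE_MAP.get(turn["from"], "user"), "content": turn["value"]}
        for turn in record["conversation"]
    ]

example = {
    "conversation": [
        {"from": "patient", "value": "What are the advantages of guided implant surgery?"}
    ]
}
print(to_chat_messages(example))
# [{'role': 'user', 'content': 'What are the advantages of guided implant surgery?'}]
```

The resulting list is the shape expected by `tokenizer.apply_chat_template` and by the Ollama chat endpoint shown under Deployment.
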
---

## ✅ Intended Use

- Virtual assistants for dental or medical Q&A
- Instruction-tuned experimentation on health topics
- Local chatbot agents (Ollama / llama.cpp compatible)

---

## ⚠️ Limitations

- The model is not a medical device or a diagnostic tool
- Hallucinations and factual errors may occur
- The model was fine-tuned on synthetic and handbook-based sources, not real EMR data

---

## 🧪 Example Prompt

```json
{
  "conversation": [
    { "from": "human", "value": "What should I expect after a Straumann implant surgery?" },
    { "from": "assistant", "value": "[MODEL RESPONSE HERE]" }
  ]
}
```

---

## 🛠 Deployment

### Local Use with Hugging Face Transformers

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("BirdieByte1024/doctor-dental-implant-llama3.2-3B-full-model")
model = AutoModelForCausalLM.from_pretrained("BirdieByte1024/doctor-dental-implant-llama3.2-3B-full-model")
```

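Continuing from the loading snippet above, a quick generation check might look like the sketch below. The plain `patient:` / `doctor:` turn markers and the sampling settings are illustrative assumptions, not values taken from training:

```python
import torch

# Illustrative only: the turn markers and decoding parameters below are
# assumptions of this sketch, not part of the released configuration.
prompt = "patient: What should I expect after a Straumann implant surgery?\ndoctor:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

with torch.no_grad():
    output = model.generate(
        **inputs,
        max_new_tokens=200,
        do_sample=True,
        temperature=0.7,
        top_p=0.9,
    )

# Strip the prompt tokens and print only the newly generated text.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```
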
### GGUF / Ollama / llama.cpp

```bash
ollama run doctor-dental-llama3.2
```

> If using a local `Modelfile`, make sure the prompt template matches the chat format above (not an Alpaca-style instruction template).

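Once the GGUF has been imported into Ollama, the local server can also be queried programmatically. A sketch against the `/api/chat` endpoint, assuming the model was created under the name `doctor-dental-llama3.2` and that Ollama is listening on its default port:

```python
import requests

# Sketch: chat with a locally running Ollama server over its REST API.
# The model name and the default port 11434 are assumptions of this sketch.
response = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "doctor-dental-llama3.2",
        "messages": [
            {"role": "user", "content": "What should I expect after a Straumann implant surgery?"}
        ],
        "stream": False,
    },
    timeout=120,
)
response.raise_for_status()
print(response.json()["message"]["content"])
```
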
---

## ✍️ Author

Created by [BirdieByte1024](https://huggingface.co/BirdieByte1024) as part of a medical AI research project using Unsloth and LLaMA 3.2.

---

## 📜 License

MIT