---
base_model: google/gemma-3-1b-it
library_name: peft
---

# Model Card for Julia, an AI for Medical Reasoning

Julia Medical Reasoning is a fine-tuned version of Google's Gemma 3 1B instruction-tuned model, optimized for clinical reasoning, diagnostic support, and medical question answering in English. It was adapted through supervised fine-tuning on a curated dataset of medical case studies, question-answer pairs, and evidence-based medicine protocols.

## Model Details

### Model Description

- **Developed by:** Miguel Araújo Julio
- **Shared by:** Miguel Araújo Julio
- **Model type:** Causal Language Model
- **Language(s) (NLP):** English
- **License:** Apache 2.0
- **Finetuned from model:** google/gemma-3-1b-it

### Model Sources

- **Repository:** [Miguell-J/julia-medical-reasoning](https://huggingface.co/Miguell-J/julia-medical-reasoning)

## Uses

### Direct Use

- Medical education and training.
- Assisting clinicians with reasoning through differential diagnoses.
- Generating answers to patient queries and common clinical questions.

### Downstream Use

- Integration into clinical decision support tools.
- Augmenting chatbot interfaces for hospitals or telemedicine platforms.

### Out-of-Scope Use

- Final medical diagnosis or treatment recommendation without human oversight.
- Use in high-risk clinical environments without regulatory clearance.

## Bias, Risks, and Limitations

The model may reproduce biases found in the training data and should not be considered a replacement for licensed medical professionals. There is a risk of hallucinated or outdated information being presented as fact.

### Recommendations

Users should validate outputs against trusted medical sources and consult with qualified professionals before making clinical decisions.

## How to Get Started with the Model

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Julia is distributed as a PEFT adapter, so load the Gemma 3 1B base
# model first, then attach the fine-tuned adapter weights on top of it.
base_model = AutoModelForCausalLM.from_pretrained("google/gemma-3-1b-it")
model = PeftModel.from_pretrained(base_model, "Miguell-J/julia-medical-reasoning")
tokenizer = AutoTokenizer.from_pretrained("google/gemma-3-1b-it")

inputs = tokenizer("What are common symptoms of diabetes?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
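
Because `google/gemma-3-1b-it` is an instruction-tuned chat model, prompts generally work better when wrapped in the Gemma chat template. A minimal sketch, reusing the `model` and `tokenizer` loaded above:

```python
# Format the question with the chat template before generating.
messages = [{"role": "user", "content": "What are common symptoms of diabetes?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
outputs = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```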

## Training Details

### Training Data

The model was trained using a mix of open medical question-answer datasets, synthetic case-based reasoning examples, and filtered PubMed articles.

### Training Procedure

#### Preprocessing

Data was filtered to remove out-of-domain or unsafe content, and pre-tokenized using Gemma's tokenizer.
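
The exact filtering rules are not published. The sketch below shows one plausible shape for this step using the Hugging Face `datasets` library; the data file, the `is_medical_and_safe` predicate, and the `"text"` field are illustrative assumptions, not the actual training code.

```python
from datasets import load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/gemma-3-1b-it")

def is_medical_and_safe(example):
    # Placeholder predicate: a real filter would combine keyword rules,
    # domain classifiers, and safety checks.
    return bool(example.get("text"))

def tokenize(example):
    return tokenizer(example["text"], truncation=True, max_length=2048)

# "medical_corpus.jsonl" is a hypothetical local file for illustration.
raw = load_dataset("json", data_files="medical_corpus.jsonl", split="train")
clean = raw.filter(is_medical_and_safe)
tokenized = clean.map(tokenize, remove_columns=raw.column_names)
```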

#### Training Hyperparameters

- **Training regime:** bf16 mixed precision
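
Full hyperparameters were not released. Below is a minimal sketch of a bf16 LoRA setup with `peft` and `transformers`; the epoch count and precision come from this card, while the rank, learning rate, and batch size are illustrative assumptions.

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, TrainingArguments

model = AutoModelForCausalLM.from_pretrained("google/gemma-3-1b-it")

# LoRA settings below are assumptions; the card does not state them.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)

args = TrainingArguments(
    output_dir="julia-medical-reasoning",
    num_train_epochs=3,              # stated in this card
    bf16=True,                       # bf16 mixed precision, as stated
    per_device_train_batch_size=4,   # assumption
    learning_rate=2e-4,              # assumption
)
```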

#### Speeds, Sizes, Times

- Fine-tuned for 3 epochs on four NVIDIA L4 GPUs.

## Evaluation

### Testing Data, Factors & Metrics

#### Testing Data

Curated benchmark sets of medical reasoning and multiple-choice questions (FreedomIntelligence/medical-o1-reasoning-SFT).
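
A sketch of loading the named benchmark for inspection or evaluation; the `"en"` config and the record layout are assumptions about that dataset, so verify them against its dataset card.

```python
from datasets import load_dataset

# Config name is an assumption; check the dataset card first.
eval_set = load_dataset("FreedomIntelligence/medical-o1-reasoning-SFT", "en", split="train")
print(eval_set[0])
```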

## Technical Specifications

### Model Architecture and Objective

Decoder-only transformer following the Gemma architecture, trained with the standard causal language-modeling (next-token prediction) objective.
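
Concretely, the causal language-modeling objective minimizes the negative log-likelihood of each token given its prefix:

```latex
\mathcal{L}(\theta) = -\sum_{t=1}^{T} \log p_\theta(x_t \mid x_{<t})
```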

### Compute Infrastructure

#### Hardware

4× NVIDIA L4 GPUs (96 GB VRAM total)

#### Software

- PyTorch 2.1
- PEFT 0.14.0
- Transformers 4.40

## Citation

**BibTeX:**

```bibtex
@misc{julia2025,
  title={Julia Medical Reasoning: Fine-tuning Gemma for Medical Understanding},
  author={Miguel Araújo Julio},
  year={2025},
  url={https://huggingface.co/Miguell-J/julia-medical-reasoning}
}
```

## Model Card Authors

Miguel Araújo Julio

## Model Card Contact

- Email: julioaraujo.guel@gmail.com

### Framework versions

- PEFT 0.14.0