---
library_name: mlx-vlm
tags:
- mlx
- vision-language-model
- fine-tuned
- brake-components
- visual-ai
base_model: mlx-community/SmolVLM-256M-Instruct-bf16
---

# FinalVisualLearning-v4 - MLX Fine-tuned Vision Language Model

This model was fine-tuned using the VisualAI platform with MLX (Apple Silicon optimization).

## πŸš€ Model Details
- **Base Model**: `mlx-community/SmolVLM-256M-Instruct-bf16`
- **Training Platform**: VisualAI (MLX-optimized)
- **Hardware**: Apple Silicon via MLX
- **Training Job ID**: 4
- **Created**: 2025-06-03 03:35:36
- **Training Completed**: βœ… Yes

## πŸ“Š Training Data
This model was fine-tuned on a combined dataset of brake-component images and instruction-style conversations (see Training Statistics below for counts).

## πŸ› οΈ Usage

### Installation
```bash
pip install mlx-vlm
```
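
`mlx-vlm` runs on MLX, which requires Apple Silicon. As a quick sanity check (a minimal sketch, independent of this model), you can confirm MLX sees your device:

```python
import mlx.core as mx

# On an M-series Mac this should print the GPU device, e.g. Device(gpu, 0)
print(mx.default_device())
```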

### Loading the Model
```python
from mlx_vlm import load
import json
import os

# Load the base MLX model
model, processor = load("mlx-community/SmolVLM-256M-Instruct-bf16")

# Load the fine-tuned artifacts
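# (these JSON files ship with this repository; run from the repo root or adjust paths)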
model_info_path = "mlx_model_info.json"
if os.path.exists(model_info_path):
    with open(model_info_path, 'r') as f:
        model_info = json.load(f)
    print(f"βœ… Loaded fine-tuned model with {model_info.get('training_examples_count', 0)} training examples")

# Check for adapter weights
adapters_path = "adapters/adapter_config.json"
if os.path.exists(adapters_path):
    with open(adapters_path, 'r') as f:
        adapter_config = json.load(f)
    print(f"🎯 Found MLX adapters with {adapter_config.get('training_examples', 0)} training examples")
```
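
If the `adapters/` folder contains LoRA weights, they can typically be applied when loading. The snippet below is a minimal sketch that assumes your installed `mlx-vlm` version's `load()` accepts an `adapter_path` argument (recent releases do, mirroring `mlx-lm`); check your version's signature if the call fails.

```python
from mlx_vlm import load

# Assumption: load() supports adapter_path in your mlx-vlm version.
# "adapters" is the directory in this repo holding adapter_config.json
# and the LoRA weights.
model, processor = load(
    "mlx-community/SmolVLM-256M-Instruct-bf16",
    adapter_path="adapters",
)
```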

### Inference
```python
from mlx_vlm import generate
from mlx_vlm.prompt_utils import apply_chat_template
from mlx_vlm.utils import load_config
from PIL import Image
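
# Note: this continues the "Loading the Model" example above and reuses
# the `model` and `processor` objects loaded there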

# Load your image
image = Image.open("your_image.jpg")

# Ask a question
question = "What type of brake component is this?"

# Format the prompt
config = load_config("mlx-community/SmolVLM-256M-Instruct-bf16")
formatted_prompt = apply_chat_template(processor, config, question, num_images=1)

# Generate response
response = generate(model, processor, formatted_prompt, [image], verbose=False, max_tokens=100)
print(f"Model response: {response}")
```
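
Because the prompt formatting and generation calls are identical for every query, they can be wrapped in a small loop. This sketch reuses `model`, `processor`, `config`, and `image` from the examples above; the questions are illustrative.

```python
# Reuses model, processor, config, and image from the examples above
questions = [
    "What type of brake component is this?",
    "Does this component show visible wear or damage?",
]

for q in questions:
    prompt = apply_chat_template(processor, config, q, num_images=1)
    answer = generate(model, processor, prompt, [image], verbose=False, max_tokens=100)
    print(f"Q: {q}\nA: {answer}\n")
```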

## πŸ“ Model Artifacts

This repository contains:
- `mlx_model_info.json`: Training metadata and learned mappings
- `training_images/`: Reference images from training data
- `adapters/`: MLX LoRA adapter weights and configuration (if available)
- `README.md`: This documentation

## ⚠️ Important Notes

- This model uses the MLX format, optimized for Apple Silicon
- The base weights remain those of `mlx-community/SmolVLM-256M-Instruct-bf16`; this repository does not republish them
- The fine-tuning artifacts (training metadata, reference images, and any LoRA adapters) carry the domain-specific knowledge
- **Check the `adapters/` folder for MLX-specific fine-tuned weights**
- For best results, use on Apple Silicon devices (M1/M2/M3)

## 🎯 Training Statistics

- Training Examples: 3
- Learned Mappings: 2
- Domain Keywords: 79

## πŸ“ž Support

For questions about this model or the VisualAI platform, please refer to the training logs or contact support.

---
*This model was trained using VisualAI's MLX-optimized training pipeline.*