First beta test of training VLMs using MLX-VLM and my overhaul PR.
Terminal command used:

```bash
python -m mlx_vlm.lora --model-path mlx-community/Qwen2-VL-2B-Instruct-bf16 --dataset TIGER-Lab/VisualWebInstruct --dataset-config 'example' --output-path Desktop/Qwen2-VL-2B-Instruct-bf16-VisualWebInstruct-lora --batch-size 1 --epochs 1 --learning-rate 1e-6 --grad-checkpoint --train-on-completions --steps-per-report 1
```
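Once training finishes, the adapter can be loaded back for inference. A minimal sketch based on the usage example in the MLX-VLM README, assuming `load()` accepts an `adapter_path` argument pointing at the LoRA output directory (the image path is a placeholder):

```python
from mlx_vlm import load, generate
from mlx_vlm.prompt_utils import apply_chat_template
from mlx_vlm.utils import load_config

model_path = "mlx-community/Qwen2-VL-2B-Instruct-bf16"

# Assumption: load() takes adapter_path and applies the trained LoRA weights
model, processor = load(
    model_path,
    adapter_path="Desktop/Qwen2-VL-2B-Instruct-bf16-VisualWebInstruct-lora",
)
config = load_config(model_path)

image = ["example.jpg"]  # placeholder image path
prompt = "Describe this image."

# Wrap the raw prompt in the model's chat template before generating
formatted_prompt = apply_chat_template(
    processor, config, prompt, num_images=len(image)
)

output = generate(model, processor, formatted_prompt, image, verbose=False)
print(output)
```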
Logs from the last 10 steps:
```text
Iter 990: Train loss 0.703, Learning Rate 1.000e-06, It/sec 5.188, Tokens/sec 166.003, Trained Tokens 31677, Peak mem 5.132 GB
Iter 991: Train loss 3.135, Learning Rate 1.000e-06, It/sec 5.019, Tokens/sec 160.618, Trained Tokens 31709, Peak mem 5.132 GB
Iter 992: Train loss 1.932, Learning Rate 1.000e-06, It/sec 5.112, Tokens/sec 163.598, Trained Tokens 31741, Peak mem 5.132 GB
Iter 993: Train loss 0.751, Learning Rate 1.000e-06, It/sec 5.159, Tokens/sec 165.081, Trained Tokens 31773, Peak mem 5.137 GB
Iter 994: Train loss 2.252, Learning Rate 1.000e-06, It/sec 5.103, Tokens/sec 163.304, Trained Tokens 31805, Peak mem 5.137 GB
Iter 995: Train loss 0.738, Learning Rate 1.000e-06, It/sec 5.175, Tokens/sec 165.601, Trained Tokens 31837, Peak mem 5.137 GB
Iter 996: Train loss 1.454, Learning Rate 1.000e-06, It/sec 5.202, Tokens/sec 166.455, Trained Tokens 31869, Peak mem 5.137 GB
Iter 997: Train loss 1.298, Learning Rate 1.000e-06, It/sec 5.048, Tokens/sec 161.523, Trained Tokens 31901, Peak mem 5.137 GB
Iter 998: Train loss 2.843, Learning Rate 1.000e-06, It/sec 5.209, Tokens/sec 166.696, Trained Tokens 31933, Peak mem 5.137 GB
Iter 999: Train loss 1.243, Learning Rate 1.000e-06, It/sec 5.118, Tokens/sec 163.765, Trained Tokens 31965, Peak mem 5.137 GB
Iter 1000: Train loss 1.513, Learning Rate 1.000e-06, It/sec 5.140, Tokens/sec 164.481, Trained Tokens 31997, Peak mem 5.137 GB
```
It's really fast (~5 it/sec, ~165 tokens/sec at ~5.1 GB peak memory in the run above) and the results look good.