
# VetBot V3 - Qwen2.5-72B LoRA Adapter

LoRA adapter for Qwen/Qwen2.5-72B-Instruct, fine-tuned for veterinary medicine Q&A.

## Training Summary

| Parameter | Value |
|---|---|
| Base Model | Qwen/Qwen2.5-72B-Instruct |
| Method | LoRA (r=128, alpha=256) |
| Trainable Params | 1.68B / 74.4B (2.26%) |
| Hardware | 4x NVIDIA H200 (141 GB each) |
| Training Time | 8.46 hours |
| Epochs | 3 |
| Train Samples | 41,506 |
| Final Loss | 0.572 |
| Token Accuracy | ~94% |
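
For reference, the LoRA hyperparameters above correspond to a PEFT configuration roughly like the sketch below. Only `r=128` and `lora_alpha=256` come from the table; the target modules and dropout are assumptions, and the authoritative values are in `adapter_config.json` in this repo.

```python
# Minimal sketch of the LoRA setup implied by the table above.
# target_modules and lora_dropout are assumptions; see adapter_config.json
# in this repository for the exact values used in training.
from peft import LoraConfig

lora_config = LoraConfig(
    r=128,            # rank from the table above
    lora_alpha=256,   # alpha from the table above
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed
    lora_dropout=0.05,                                        # assumed
    bias="none",
    task_type="CAUSAL_LM",
)
```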

## Files

- `fsdp_checkpoint/` - FSDP2 distributed checkpoint
- `adapter_config.json` - LoRA configuration
- `tokenizer*` - Qwen2.5-72B tokenizer files

## Note

The adapter weights are stored as an FSDP2 distributed checkpoint rather than as a standard PEFT adapter file. Load them with the Hugging Face `accelerate` library; a loading sketch follows.
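
A minimal loading sketch is below, assuming the checkpoint in `fsdp_checkpoint/` was saved with `Accelerator.save_state` under an FSDP-enabled `accelerate` config, that the script is launched with `accelerate launch` using a matching config, and that the hardware can hold the 72B base model. `REPO_DIR` is a placeholder for a local clone of this repository.

```python
# Hedged sketch: rebuild the LoRA-wrapped model and restore the FSDP2 checkpoint.
# Assumes `accelerate launch` with an FSDP config matching the training setup
# and enough GPU memory for the 72B base model.
import torch
from accelerate import Accelerator
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, AutoTokenizer

REPO_DIR = "."  # local clone of this repository (placeholder)

tokenizer = AutoTokenizer.from_pretrained(REPO_DIR)
base_model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-72B-Instruct", torch_dtype=torch.bfloat16
)

# Re-create the LoRA wrapper from the shipped adapter_config.json,
# then let accelerate shard it and restore the distributed checkpoint.
lora_config = LoraConfig.from_pretrained(REPO_DIR)
model = get_peft_model(base_model, lora_config)

accelerator = Accelerator()           # picks up the FSDP plugin from the config
model = accelerator.prepare(model)
accelerator.load_state(f"{REPO_DIR}/fsdp_checkpoint")  # restore sharded weights

model.eval()
```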
