
Coda-Robotics/OpenVLA-ER-Select-Book

Model Description

This model is a full fine-tune of OpenVLA on the select_book dataset, produced by training LoRA adapters and merging the LoRA weights back into the base model.
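
For reference, merging a LoRA adapter into the base OpenVLA checkpoint is typically done with peft. The sketch below is illustrative only; the adapter path and output directory are placeholders, not taken from this card.

import torch
from peft import PeftModel
from transformers import AutoModelForVision2Seq

# Load the base OpenVLA checkpoint, attach the fine-tuned LoRA adapter,
# then fold the adapter weights into the base model ("merge").
base = AutoModelForVision2Seq.from_pretrained(
    "openvla/openvla-7b", torch_dtype=torch.bfloat16, trust_remote_code=True
)
merged = PeftModel.from_pretrained(base, "path/to/select_book-lora-adapter").merge_and_unload()
merged.save_pretrained("OpenVLA-ER-Select-Book")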

Training Details

  • Dataset: select_book
  • Number of Episodes: 479
  • Batch Size: 8
  • Training Steps: 20000
  • Learning Rate: 2e-5
  • LoRA Configuration (see the sketch after this list):
    • Rank: 32
    • Dropout: 0.0
    • Target Modules: all-linear
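
The LoRA settings above roughly correspond to the following peft configuration. This is a sketch inferred from the listed hyperparameters, not the exact training script; lora_alpha is not stated on this card and is left at the library default.

from peft import LoraConfig

lora_config = LoraConfig(
    r=32,                         # rank
    lora_dropout=0.0,             # dropout
    target_modules="all-linear",  # adapt every linear layer
)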

Usage

import torch
from transformers import AutoProcessor, AutoModelForVision2Seq

# Load the model and processor (OpenVLA ships custom modeling code, so trust_remote_code is required)
processor = AutoProcessor.from_pretrained("Coda-Robotics/OpenVLA-ER-Select-Book", trust_remote_code=True)
model = AutoModelForVision2Seq.from_pretrained(
    "Coda-Robotics/OpenVLA-ER-Select-Book",
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
).to("cuda:0")

# Process an image together with a task prompt
image = ...  # Load your image (a PIL.Image from the robot's camera)
prompt = "In: What action should the robot take to select the book?\nOut:"
inputs = processor(prompt, image, return_tensors="pt").to("cuda:0", dtype=torch.bfloat16)

# Predict a continuous robot action; the unnorm_key here is assumed to match the dataset name
action = model.predict_action(**inputs, unnorm_key="select_book", do_sample=False)
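
The returned action is OpenVLA's standard 7-dimensional continuous control vector (end-effector position and orientation deltas plus a gripper command), un-normalized with the statistics stored under the given unnorm_key. If the key above does not match, inspect model.norm_stats to see which dataset keys this checkpoint actually carries.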
Model size: 7.54B params (BF16, Safetensors)