# Tacoin GR00T Libero Long (Checkpoint 7500)
Tacoin fine-tuned GR00T checkpoint trained on the LIBERO-Long benchmark. The model follows the `libero_gr00t` data config (dual RGB streams + 8-DoF state) and predicts 16-step joint-space action chunks.
## Training Snapshot
- Base model: nvidia/GR00T-N1.5-3B
- Checkpoint step: 7500 / 8000
- Dataset: libero_long (10 tasks, 379 demos @ 10 FPS)
- Run notes: long-horizon manipulation suite fine-tune
## Evaluation
Offline reconstruction evaluated on 10 evenly spaced trajectories (160 steps each) with the decord video backend and `denoising_steps=4`. Metrics are computed on unnormalized actions.
| Metric | Value | 
|---|---|
| Average MSE | 0.03294 | 
| Median MSE | 0.03433 | 
| Std MSE | 0.01303 | 
| Max MSE | 0.05386 | 
| Fraction ≤ 0.05 | 80.0% | 
| Fraction ≤ 0.075 | 100.0% | 
| Fraction ≤ 0.10 | 100.0% | 
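The summary statistics and thresholded fractions above can be reproduced from per-trajectory MSE values with a few lines of NumPy. A minimal sketch; the trajectory MSEs below are illustrative placeholders, not the run's actual values:

```python
import numpy as np

def summarize_mse(per_traj_mse, thresholds=(0.05, 0.075, 0.10)):
    """Aggregate per-trajectory action-reconstruction MSEs."""
    mse = np.asarray(per_traj_mse, dtype=float)
    stats = {
        "avg": float(mse.mean()),
        "median": float(np.median(mse)),
        "std": float(mse.std()),
        "max": float(mse.max()),
    }
    # Fraction of trajectories whose MSE falls under each threshold.
    fractions = {t: float((mse <= t).mean()) for t in thresholds}
    return stats, fractions

# Illustrative values only -- not this checkpoint's real per-trajectory MSEs.
stats, fractions = summarize_mse([0.02, 0.03, 0.04, 0.05, 0.06])
```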
## Usage
```python
from gr00t.experiment.data_config import load_data_config
from gr00t.model.policy import Gr00tPolicy

ckpt = 'Tacoin/GR00T-N1.5-3B-LIBERO-LONG'
data_config = load_data_config('libero_gr00t')
policy = Gr00tPolicy(
    model_path=ckpt,
    modality_config=data_config.modality_config(),
    modality_transform=data_config.transform(),
    embodiment_tag='new_embodiment',
    denoising_steps=4,
)
```
Feed a LeRobot-format observation dict into `policy.get_action(...)` to get a 16-step action chunk.
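The exact observation keys are defined by the `libero_gr00t` modality config; the key names and image resolution below are assumptions for illustration based on a typical LIBERO LeRobot layout, so verify them against `experiment_cfg/metadata.json` before use. A hedged sketch of building a single-step observation:

```python
import numpy as np

# Hypothetical key names and shapes -- check the libero_gr00t modality config.
obs = {
    "video.image": np.zeros((1, 256, 256, 3), dtype=np.uint8),        # agent-view RGB
    "video.wrist_image": np.zeros((1, 256, 256, 3), dtype=np.uint8),  # wrist-cam RGB
    "state.joint_state": np.zeros((1, 8), dtype=np.float32),          # 8-DoF state
    "annotation.human.task_description": ["put both moka pots on the stove"],
}

# action_chunk = policy.get_action(obs)  # dict of 16-step action arrays
```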
## Files
| Path | Description |
|---|---|
| `config.json` | Transformer config for the action head. |
| `model-0000x-of-00002.safetensors` | Sharded weights. |
| `model.safetensors.index.json` | Weight shard index. |
| `experiment_cfg/metadata.json` | Dataset statistics for normalization. |
| `optimizer.pt`, `scheduler.pt`, `rng_state.pth` | Optimizer state for resuming. |
| `trainer_state.json` | Trainer snapshot. |
## License
Released under Apache-2.0; please cite NVIDIA Isaac GR00T and the LIBERO benchmark.