metadata
datasets: Tna001/real_rl_classifier
library_name: lerobot
license: apache-2.0
model_name: reward_classifier
pipeline_tag: robotics
tags:
  - lerobot
  - robotics
  - reward_classifier

Model Card for reward_classifier

A reward classifier is a lightweight neural network that scores observations or trajectories for task success, providing a learned reward signal or offline evaluation when explicit rewards are unavailable.
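To make the idea concrete, below is a minimal sketch of such a network in PyTorch: a small convolutional encoder followed by a linear head that outputs a success probability per observation. It is purely illustrative; the actual architecture and preprocessing used by this checkpoint may differ.

# Illustrative sketch only: a minimal image-based reward classifier in PyTorch.
# The architecture of the actual reward_classifier checkpoint may differ.
import torch
import torch.nn as nn

class TinyRewardClassifier(nn.Module):
    """Scores a single camera observation with a success probability in [0, 1]."""

    def __init__(self, in_channels: int = 3):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
        )
        self.head = nn.Linear(64, 1)  # single logit: task success vs. not

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        # image: (batch, channels, height, width), values already normalized
        logit = self.head(self.encoder(image))
        return torch.sigmoid(logit).squeeze(-1)  # (batch,) success probabilities

# Example: score a batch of dummy 128x128 RGB observations.
classifier = TinyRewardClassifier()
probs = classifier(torch.rand(4, 3, 128, 128))
print(probs.shape)  # torch.Size([4])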

This policy has been trained and pushed to the Hub using LeRobot. See the full documentation at LeRobot Docs.


How to Get Started with the Model

For a complete walkthrough, see the training guide. Below is the short version of how to train the policy and run inference/evaluation:

Train from scratch

python -m lerobot.scripts.train \
  --dataset.repo_id=${HF_USER}/<dataset> \
  --policy.type=act \
  --output_dir=outputs/train/<desired_policy_repo_id> \
  --job_name=lerobot_training \
  --policy.device=cuda \
  --policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
  --wandb.enable=true

Checkpoints are written to outputs/train/<desired_policy_repo_id>/checkpoints/ during training.
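Conceptually, a reward classifier is typically fit as a binary classifier: each frame in the dataset carries a success/failure label and the network is optimized with a cross-entropy objective. The snippet below sketches a single such training step using the illustrative TinyRewardClassifier from above; it is not the actual LeRobot training loop invoked by the command.

# Minimal sketch of one reward-classifier training step (illustrative only).
import torch
import torch.nn.functional as F

model = TinyRewardClassifier()  # illustrative model sketched earlier
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# Stand-ins for one batch from the dataset: images plus binary success labels.
images = torch.rand(8, 3, 128, 128)
labels = torch.tensor([1, 0, 0, 1, 1, 0, 1, 0], dtype=torch.float32)

probs = model(images)                         # (8,) predicted success probabilities
loss = F.binary_cross_entropy(probs, labels)  # binary classification objective
optimizer.zero_grad()
loss.backward()
optimizer.step()
print(float(loss))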

Evaluate the policy/run inference

python -m lerobot.record \
  --robot.type=so100_follower \
  --dataset.repo_id=<hf_user>/eval_<dataset> \
  --policy.path=<hf_user>/<desired_policy_repo_id> \
  --episodes=10

Prefix the dataset repo id with eval_ and point --policy.path at a local checkpoint directory or a Hub repo id.
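Outside of lerobot.record, the classifier's per-frame score is typically thresholded to produce a sparse reward or a success flag. The sketch below reuses the illustrative TinyRewardClassifier defined earlier and an assumed 0.5 threshold; both are placeholders rather than values taken from this checkpoint.

# Illustrative only: turn per-frame classifier scores into a sparse reward signal.
import torch

def episode_rewards(classifier, frames: torch.Tensor, threshold: float = 0.5) -> torch.Tensor:
    # frames: (num_frames, channels, height, width) observations.
    # Returns a (num_frames,) tensor with reward 1.0 where the classifier
    # predicts task success and 0.0 elsewhere.
    classifier.eval()
    with torch.no_grad():
        probs = classifier(frames)  # (num_frames,) success probabilities
    return (probs > threshold).float()

# Example with the TinyRewardClassifier sketched earlier and random frames.
rewards = episode_rewards(TinyRewardClassifier(), torch.rand(10, 3, 128, 128))
print(rewards)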


Model Details

  • License: apache-2.0