chess_mid_rl

~95% first-gen accuracy via excessive RL training 🥱. Chess model submitted to the LLM Course Chess Challenge.

Submission Info

  • Submitted by: j0eyd
  • Parameters: 969,600
  • Organization: LLM-course

Model Details

  • Architecture: Chess Transformer (GPT-style; see the configuration sketch below)
  • Vocab size: 72
  • Embedding dim: 128
  • Layers: 7
  • Heads: 8
  • Tensor type: F32
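
For reference, a minimal PyTorch sketch of a decoder-only transformer with the listed hyperparameters (vocab 72, embedding dim 128, 7 layers, 8 heads) might look like the following. The context length, MLP width, weight tying, and class names here are assumptions for illustration only, not the actual submission code, so the parameter count of this sketch will not exactly match the reported 969,600.

```python
import torch
import torch.nn as nn

class ChessGPTBlock(nn.Module):
    """One pre-norm transformer block: self-attention followed by an MLP."""
    def __init__(self, d_model=128, n_heads=8, mlp_ratio=4):
        super().__init__()
        self.ln1 = nn.LayerNorm(d_model)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ln2 = nn.LayerNorm(d_model)
        self.mlp = nn.Sequential(
            nn.Linear(d_model, mlp_ratio * d_model),  # MLP width is an assumption
            nn.GELU(),
            nn.Linear(mlp_ratio * d_model, d_model),
        )

    def forward(self, x, attn_mask=None):
        h = self.ln1(x)
        a, _ = self.attn(h, h, h, attn_mask=attn_mask, need_weights=False)
        x = x + a
        x = x + self.mlp(self.ln2(x))
        return x

class ChessGPT(nn.Module):
    """GPT-style decoder with the hyperparameters from the model card."""
    def __init__(self, vocab_size=72, d_model=128, n_layers=7, n_heads=8, max_len=256):
        super().__init__()
        self.tok_emb = nn.Embedding(vocab_size, d_model)
        self.pos_emb = nn.Embedding(max_len, d_model)  # max_len is an assumption
        self.blocks = nn.ModuleList(
            [ChessGPTBlock(d_model, n_heads) for _ in range(n_layers)]
        )
        self.ln_f = nn.LayerNorm(d_model)
        self.head = nn.Linear(d_model, vocab_size, bias=False)

    def forward(self, idx):
        B, T = idx.shape
        pos = torch.arange(T, device=idx.device)
        x = self.tok_emb(idx) + self.pos_emb(pos)
        # Causal mask: True entries are positions a token may NOT attend to.
        mask = torch.triu(
            torch.ones(T, T, dtype=torch.bool, device=idx.device), diagonal=1
        )
        for block in self.blocks:
            x = block(x, attn_mask=mask)
        return self.head(self.ln_f(x))  # logits over the 72-token vocabulary

model = ChessGPT()
print(sum(p.numel() for p in model.parameters()))  # differs from the reported 969,600
```

The exact tokenization of moves and the RL training setup are not described on this card, so the sketch covers only the forward pass of a generic model at this scale.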