Static chess embedding (512d) -- themes/openings <-> positions

This is a sentence-transformers static embedding model trained on chess puzzle annotations (themes, openings, and move sequences). It maps text to a 512-dimensional dense vector space and can be used for retrieval of chess positions by theme or opening.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Maximum Sequence Length: inf tokens (static embeddings impose no sequence-length limit)
  • Output Dimensionality: 512 dimensions
  • Similarity Function: Cosine Similarity
  • Supported Modality: Text
  • Language: en
  • License: apache-2.0

Full Model Architecture

SentenceTransformer(
  (0): StaticEmbedding({})
)
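
The StaticEmbedding module is, in essence, a token-embedding lookup followed by mean pooling: no attention and no positional encoding, which is why there is no effective sequence-length limit. A minimal NumPy sketch of that computation (the vocabulary size and token ids below are illustrative, not the model's real values):

import numpy as np

rng = np.random.default_rng(0)
vocab_size, dim = 10_000, 512  # illustrative sizes only
embedding_table = rng.standard_normal((vocab_size, dim)).astype(np.float32)

def embed(token_ids: list[int]) -> np.ndarray:
    """Mean-pool the static token embeddings for one tokenized input."""
    return embedding_table[token_ids].mean(axis=0)

print(embed([12, 345, 678]).shape)  # (512,)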

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("oneryalcin/static-embedding-chess")
# Run inference
queries = [
    '[UNK] deflection discoveredAttack [UNK] queensideAttack short Philidor Defense [UNK] Defense Other variations',
]
documents = [
    'themes crushing deflection discoveredAttack middlegame queensideAttack short opening Philidor Defense Philidor Defense Other variations moves d3c3 d4b3 c1b1 d7d1 d3c3+d4b3 d4b3+c1b1 c1b1+d7d1',
    'themes advantage discoveredAttack middlegame short opening Philidor Defense Philidor Defense Other variations moves e4d4 d3f5 c8b8 d1d4 e4d4+d3f5 d3f5+c8b8 c8b8+d1d4',
    'themes crushing middlegame pin queensideAttack short opening Sicilian Defense Sicilian Defense Najdorf Variation moves c3d5 c5b3 c1b1 b3d2 c3d5+c5b3 c5b3+c1b1 c1b1+b3d2',
]
query_embeddings = model.encode_query(queries)
document_embeddings = model.encode_document(documents)
print(query_embeddings.shape, document_embeddings.shape)
# [1, 512] [3, 512]

# Get the similarity scores for the embeddings
similarities = model.similarity(query_embeddings, document_embeddings)
print(similarities)
# tensor([[0.8405, 0.5061, 0.2136]])
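
Because the model was trained with MatryoshkaLoss (see Training Details below), embeddings can be truncated to a leading prefix of 256, 128, 64, or 32 dimensions at a graceful quality trade-off. A sketch using the truncate_dim option available in recent sentence-transformers releases:

from sentence_transformers import SentenceTransformer

# Load the model so that encode() returns only the first 128 dimensions.
model = SentenceTransformer("oneryalcin/static-embedding-chess", truncate_dim=128)
embeddings = model.encode(["backRankMate endgame mate mateIn2 short"])
print(embeddings.shape)  # (1, 128)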

Evaluation

Metrics

Information Retrieval

Metric                chess-ir   chess-ir-tokens
cosine_accuracy@1     0.005      0.0794
cosine_accuracy@10    0.07       0.2593
cosine_precision@1    0.005      0.0794
cosine_precision@10   0.008      0.0603
cosine_recall@1       0.0017     0.0022
cosine_recall@10      0.0267     0.024
cosine_ndcg@10        0.0168     0.0672
cosine_mrr@10         0.0207     0.1233
cosine_map@100        0.0141     0.0332
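
The metric names above match the output format of sentence-transformers' InformationRetrievalEvaluator. A minimal sketch of how such numbers could be reproduced; the toy queries, corpus, and relevance judgments below are illustrative, not the actual evaluation set:

from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("oneryalcin/static-embedding-chess")

queries = {"q1": "[UNK] deflection discoveredAttack [UNK] queensideAttack short"}
corpus = {
    "d1": "themes crushing deflection discoveredAttack middlegame queensideAttack short",
    "d2": "themes advantage discoveredAttack middlegame short",
}
relevant_docs = {"q1": {"d1"}}  # which corpus ids are relevant to each query

evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs, name="chess-ir")
print(evaluator(model))  # dict of accuracy@k, precision@k, recall@k, ndcg@10, mrr@10, map@100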

Training Details

Training Dataset

Unnamed Dataset

  • Size: 1,619,946 training samples
  • Columns: anchor and positive
  • Approximate statistics based on the first 100 samples:

                 anchor                   positive
    type         string                   string
    modality     text                     text
    min length   21 characters            86 characters
    mean length  75.57 characters         158.13 characters
    max length   122 characters           256 characters
  • Samples:

    anchor:   kingsideAttack mate mateIn1 middlegame oneMove Horwitz Defense Horwitz Defense [UNK] variations
    positive: themes kingsideAttack mate mateIn1 middlegame oneMove opening Horwitz Defense Horwitz Defense Other variations moves f7h8 g6g2 f7h8+g6g2

    anchor:   backRankMate endgame mate mateIn2 short Kings Knight Opening Kings Knight Opening [UNK] [UNK]
    positive: themes backRankMate endgame mate mateIn2 short opening Kings Knight Opening Kings Knight Opening Other variations moves c5d4 c3c8 g5d8 c8d8 c5d4+c3c8 c3c8+g5d8 g5d8+c8d8

    anchor:   kingsideAttack mate mateIn1 middlegame oneMove Sicilian Defense Sicilian Defense Paulsen-Basman Defense
    positive: themes kingsideAttack mate mateIn1 middlegame oneMove opening Sicilian Defense Sicilian Defense Paulsen-Basman Defense moves g3f3 c7h2 g3f3+c7h2
  • Loss: MatryoshkaLoss with these parameters:
    {
        "loss": "MultipleNegativesRankingLoss",
        "matryoshka_dims": [
            512,
            256,
            128,
            64,
            32
        ],
        "matryoshka_weights": [
            1,
            1,
            1,
            1,
            1
        ],
        "n_dims_per_step": -1
    }
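
In sentence-transformers, this configuration corresponds to wrapping MultipleNegativesRankingLoss in MatryoshkaLoss. A sketch of the equivalent setup (loading the published model here stands in for the model being trained):

from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("oneryalcin/static-embedding-chess")
loss = MatryoshkaLoss(
    model,
    MultipleNegativesRankingLoss(model),
    matryoshka_dims=[512, 256, 128, 64, 32],
    matryoshka_weights=[1, 1, 1, 1, 1],
    n_dims_per_step=-1,
)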
    

Training Hyperparameters

Non-Default Hyperparameters

  • per_device_train_batch_size: 4096
  • num_train_epochs: 20
  • learning_rate: 0.01
  • warmup_steps: 0.1
  • weight_decay: 0.01
  • per_device_eval_batch_size: 4096
  • push_to_hub: True
  • hub_model_id: oneryalcin/static-embedding-chess
  • load_best_model_at_end: True
  • seed: 12
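
As a sketch, the values above map onto SentenceTransformerTrainingArguments roughly as follows; output_dir is hypothetical, and the listed warmup_steps: 0.1 is assumed to denote a 10% warmup ratio:

from sentence_transformers import SentenceTransformerTrainingArguments

args = SentenceTransformerTrainingArguments(
    output_dir="static-embedding-chess",  # hypothetical local path
    per_device_train_batch_size=4096,
    per_device_eval_batch_size=4096,
    num_train_epochs=20,
    learning_rate=0.01,
    warmup_ratio=0.1,  # assumption: "warmup_steps: 0.1" means a 10% warmup ratio
    weight_decay=0.01,
    push_to_hub=True,
    hub_model_id="oneryalcin/static-embedding-chess",
    load_best_model_at_end=True,  # requires matching eval/save strategies
    seed=12,
)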

All Hyperparameters

Click to expand
  • per_device_train_batch_size: 4096
  • num_train_epochs: 20
  • max_steps: -1
  • learning_rate: 0.01
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: None
  • warmup_steps: 0.1
  • optim: adamw_torch_fused
  • optim_args: None
  • weight_decay: 0.01
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • optim_target_modules: None
  • gradient_accumulation_steps: 1
  • average_tokens_across_devices: True
  • max_grad_norm: 1.0
  • label_smoothing_factor: 0.0
  • bf16: False
  • fp16: False
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • use_liger_kernel: False
  • liger_kernel_config: None
  • use_cache: False
  • neftune_noise_alpha: None
  • torch_empty_cache_steps: None
  • auto_find_batch_size: False
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • include_num_input_tokens_seen: no
  • log_level: passive
  • log_level_replica: warning
  • disable_tqdm: False
  • project: huggingface
  • trackio_space_id: None
  • trackio_bucket_id: None
  • trackio_static_space_id: None
  • per_device_eval_batch_size: 4096
  • prediction_loss_only: True
  • eval_on_start: False
  • eval_do_concat_batches: True
  • eval_use_gather_object: False
  • eval_accumulation_steps: None
  • include_for_metrics: []
  • batch_eval_metrics: False
  • save_only_model: False
  • save_on_each_node: False
  • enable_jit_checkpoint: False
  • push_to_hub: True
  • hub_private_repo: None
  • hub_model_id: oneryalcin/static-embedding-chess
  • hub_strategy: every_save
  • hub_always_push: False
  • hub_revision: None
  • load_best_model_at_end: True
  • ignore_data_skip: False
  • restore_callback_states_from_checkpoint: False
  • full_determinism: False
  • seed: 12
  • data_seed: None
  • use_cpu: False
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • parallelism_config: None
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • dataloader_prefetch_factor: None
  • remove_unused_columns: True
  • label_names: None
  • train_sampling_strategy: random
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • ddp_static_graph: None
  • ddp_backend: None
  • ddp_timeout: 1800
  • fsdp: []
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • deepspeed: None
  • debug: []
  • skip_memory_metrics: True
  • do_predict: False
  • resume_from_checkpoint: None
  • warmup_ratio: None
  • local_rank: -1
  • prompts: None
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: proportional
  • router_mapping: {}
  • learning_rate_mapping: {}

Training Logs

Epoch    Step   Training Loss   chess-ir_cosine_ndcg@10   chess-ir-tokens_cosine_ndcg@10
-1       -1     -               0.0123                    0.0561
0.0025   1      27.3123         -                         -
0.2020   80     26.3304         -                         -
0.4040   160    22.2114         -                         -
0.6061   240    17.4522         -                         -
0.8081   320    12.8864         -                         -
1.0      396    -               0.0800                    0.1181
1.0101   400    9.1439          -                         -
1.2121   480    6.5434          -                         -
1.4141   560    4.9138          -                         -
1.6162   640    3.9819          -                         -
1.8182   720    3.4584          -                         -
2.0      792    -               0.0505                    0.0938
2.0202   800    3.1303          -                         -
2.2222   880    2.9652          -                         -
2.4242   960    2.8584          -                         -
2.6263   1040   2.7907          -                         -
2.8283   1120   2.7475          -                         -
3.0      1188   -               0.0251                    0.0830
3.0303   1200   2.7031          -                         -
3.2323   1280   2.6927          -                         -
3.4343   1360   2.6516          -                         -
3.6364   1440   2.6441          -                         -
3.8384   1520   2.6202          -                         -
4.0      1584   -               0.0168                    0.0672

Training Time

  • Training: 4.1 minutes
  • Evaluation: 0.2 seconds
  • Total: 4.1 minutes

Framework Versions

  • Python: 3.12.10
  • Sentence Transformers: 5.5.0
  • Transformers: 5.8.0
  • PyTorch: 2.11.0
  • Accelerate: 1.13.0
  • Datasets: 4.8.5
  • Tokenizers: 0.22.2

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MatryoshkaLoss

@misc{kusupati2024matryoshka,
    title={Matryoshka Representation Learning},
    author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
    year={2024},
    eprint={2205.13147},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}

MultipleNegativesRankingLoss

@misc{oord2019representationlearningcontrastivepredictive,
      title={Representation Learning with Contrastive Predictive Coding},
      author={Aaron van den Oord and Yazhe Li and Oriol Vinyals},
      year={2019},
      eprint={1807.03748},
      archivePrefix={arXiv},
      primaryClass={cs.LG},
      url={https://arxiv.org/abs/1807.03748},
}
Safetensors

  • Model size: 2.22M params
  • Tensor type: F32