modelId (string, 5-139 chars) | author (string, 2-42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-09-12 18:33:19) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 555 classes) | tags (list, 1 to 4.05k items) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-09-12 18:33:14) | card (string, 11 to 1.01M chars)
---|---|---|---|---|---|---|---|---|---|
martinletec55/Mylora
|
martinletec55
| 2025-08-19T08:57:47Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] |
text-to-image
| 2025-08-19T08:56:00Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- output:
url: images/pykaso-output-1755517879199.jpeg
text: '-'
- output:
url: images/pykaso-output-1755518189871.jpeg
text: '-'
- output:
url: images/pykaso-output-1755518369462.jpeg
text: '-'
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: Katie
---
# Katie
<Gallery />
## Model description
This LoRA is big boss.
## Trigger words
You should use `Katie` to trigger the image generation.
## Download model
[Download](/martinletec55/Mylora/tree/main) the weights in the Files & versions tab.
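A minimal sketch of how a FLUX LoRA like this one is typically loaded with 🤗 diffusers (hedged: the exact weight filename lives in the Files & versions tab, and the base model listed above must be accessible):
```python
import torch
from diffusers import FluxPipeline

# Load the FLUX.1-dev base model listed above, then attach this LoRA.
pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16)
pipe.load_lora_weights("martinletec55/Mylora")
pipe.to("cuda")

# The trigger word `Katie` goes in the prompt.
image = pipe("Katie, portrait photo", num_inference_steps=28).images[0]
image.save("katie.png")
```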
|
chooseL1fe/blockassist-bc-thorny_flightless_albatross_1755593324
|
chooseL1fe
| 2025-08-19T08:55:03Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thorny flightless albatross",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T08:54:59Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thorny flightless albatross
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
hf-audio/xcodec-hubert-librispeech
|
hf-audio
| 2025-08-19T08:53:05Z | 0 | 1 |
transformers
|
[
"transformers",
"safetensors",
"xcodec",
"feature-extraction",
"dataset:openslr/librispeech_asr",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2025-08-18T08:37:09Z |
---
library_name: transformers
license: cc-by-4.0
datasets:
- openslr/librispeech_asr
---
# X-Codec (speech, HuBERT)
This codec is intended for speech data.
Original model is `xcodec_hubert_librispeech` from [this table](https://github.com/zhenye234/xcodec?tab=readme-ov-file#available-models).
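A minimal loading sketch, assuming a Transformers release that ships the `xcodec` architecture (the exact encode/decode API may differ):
```python
from transformers import AutoModel

# Load the codec through the generic Auto class and inspect its configuration.
codec = AutoModel.from_pretrained("hf-audio/xcodec-hubert-librispeech")
print(codec.config)
```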
|
josephr212/blockassist-bc-hoarse_frisky_dingo_1755591769
|
josephr212
| 2025-08-19T08:51:30Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"hoarse frisky dingo",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T08:51:28Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- hoarse frisky dingo
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
KCS97/dog7
|
KCS97
| 2025-08-19T08:51:16Z | 0 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"safetensors",
"text-to-image",
"dreambooth",
"diffusers-training",
"stable-diffusion",
"stable-diffusion-diffusers",
"base_model:stable-diffusion-v1-5/stable-diffusion-v1-5",
"base_model:finetune:stable-diffusion-v1-5/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2025-08-19T08:38:46Z |
---
base_model: stable-diffusion-v1-5/stable-diffusion-v1-5
library_name: diffusers
license: creativeml-openrail-m
inference: true
instance_prompt: a photo of sks dog
tags:
- text-to-image
- dreambooth
- diffusers-training
- stable-diffusion
- stable-diffusion-diffusers
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# DreamBooth - KCS97/dog7
This is a DreamBooth model derived from stable-diffusion-v1-5/stable-diffusion-v1-5. The weights were trained on the prompt "a photo of sks dog" using [DreamBooth](https://dreambooth.github.io/).
You can find some example images below.
DreamBooth for the text encoder was enabled: False.
## Intended uses & limitations
#### How to use
```python
# Minimal sketch: generate with the instance prompt used during training.
from diffusers import StableDiffusionPipeline
pipe = StableDiffusionPipeline.from_pretrained("KCS97/dog7").to("cuda")
pipe("a photo of sks dog").images[0].save("sks_dog.png")
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]
|
hf-audio/xcodec-wavlm-more-data
|
hf-audio
| 2025-08-19T08:50:30Z | 0 | 1 |
transformers
|
[
"transformers",
"safetensors",
"xcodec",
"feature-extraction",
"dataset:parler-tts/mls_eng",
"base_model:microsoft/wavlm-base-plus",
"base_model:finetune:microsoft/wavlm-base-plus",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2025-08-18T09:14:44Z |
---
library_name: transformers
license: cc-by-4.0
datasets:
- parler-tts/mls_eng
base_model:
- microsoft/wavlm-base-plus
---
# X-Codec (speech, WavLM)
This codec is intended for speech data.
Original model is `xcodec_wavlm_more_data` from [this table](https://github.com/zhenye234/xcodec?tab=readme-ov-file#available-models).
|
pruddywoody/SuperkarTAPI
|
pruddywoody
| 2025-08-19T08:48:48Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-08-19T08:48:48Z |
---
license: apache-2.0
---
|
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1755592995
|
IvanJAjebu
| 2025-08-19T08:44:19Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thorny slender capybara",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T08:44:15Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thorny slender capybara
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
miguelsigmahot2/blockassist-bc-invisible_patterned_prawn_1755591187
|
miguelsigmahot2
| 2025-08-19T08:41:17Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"invisible patterned prawn",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T08:41:07Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- invisible patterned prawn
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
VoilaRaj/78_1Y85EE
|
VoilaRaj
| 2025-08-19T08:41:13Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-08-19T08:37:18Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
Ferdi3425/blockassist-bc-amphibious_deadly_otter_1755592143
|
Ferdi3425
| 2025-08-19T08:30:28Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"amphibious deadly otter",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T08:30:01Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- amphibious deadly otter
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Alonc/device_to_cve_model_8B
|
Alonc
| 2025-08-19T08:28:51Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen3",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-19T08:28:48Z |
---
base_model: unsloth/qwen3-8b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Alonc
- **License:** apache-2.0
- **Finetuned from model:** unsloth/qwen3-8b-unsloth-bnb-4bit
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
FreedomIntelligence/AceGPT-v1.5-7B
|
FreedomIntelligence
| 2025-08-19T08:28:25Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"ar",
"zh",
"en",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-27T02:48:38Z |
---
license: apache-2.0
language:
- ar
- zh
- en
---
# <b>AceGPT</b>
AceGPT is a fully fine-tuned generative text model collection based on LLaMA2, with a particular focus on the
Arabic language domain. This is the repository for version 1.5 of the 7B pre-trained model.
---
## Model Details
We have released the AceGPT family of large language models, a collection of fully fine-tuned generative text models based on LLaMA2, ranging from 7B to 13B parameters. Our models include two main categories: AceGPT and AceGPT-chat. AceGPT-chat is an optimized version specifically designed for dialogue applications. It is worth mentioning that our models have demonstrated superior performance compared to all currently available open-source Arabic dialogue models in multiple benchmark tests. Furthermore, in our human evaluations, our models have shown satisfaction levels comparable to some closed-source models, such as ChatGPT, in the Arabic language.
## Model Developers
We are from the King Abdullah University of Science and Technology (KAUST), the Chinese University of Hong Kong, Shenzhen (CUHKSZ), the Shenzhen Research Institute of Big Data (SRIBD), and King AbdulAziz University (KAU).
## Variations
The AceGPT family comes in two parameter sizes, 7B and 13B; each size has a base variant and a -chat variant.
## Paper
The paper can be accessed at [link](https://huggingface.co/FreedomIntelligence/AceGPT-v1.5-13B-Chat/blob/main/Second_Language_(Arabic)_Acquisition_of_LLMs_via_Progressive_Vocabulary_Expansion.pdf).
## Input
Models input text only.
## Output
Models output text only.
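Since the base model takes and produces plain text, it can be loaded with the standard Transformers causal-LM classes; a minimal sketch (the Arabic prompt is illustrative):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "FreedomIntelligence/AceGPT-v1.5-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Plain-text continuation; no chat template for the base model.
inputs = tokenizer("العاصمة السعودية هي", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```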
## Model Evaluation Results
Benchmark evaluations on [Arabic MMLU](https://github.com/FreedomIntelligence/AceGPT) are conducted using accuracy as the metric, following the evaluation framework available at https://github.com/FreedomIntelligence/AceGPT/tree/main.
| | STEM | Humanities | Social Sciences | Others | Average |
|------------------|------|------|------|------|------|
| Bloomz-7B-base | 33.35 | 29.29 | 37.58 | 34.53 | 33.69 |
| LLaMA2-7B-base | 30.30 | 29.33 | 27.46 | 30.78 | 29.37 |
| AceGPT-7B-base | 29.73 | 30.95 | 33.45 | 34.42 | 32.14 |
| AceGPT-v1.5-7B-base | 33.03 | 32.08 | 35.39 | 35.59 | 34.03 |
| LLaMA2-13B-base | 32.94 | 32.30 | 33.42 | 37.27 | 33.76 |
| Jais-13B-base | 30.51 | 31.25 | 33.74 | 33.42 | 33.76 |
| AceGPT-13B-base | 36.60 | 38.74 | 43.76 | <u>42.72</u> | 40.45 |
| AceGPT-v1.5-13B-base | <u>36.13</u> | <u>40.07</u> | <u>45.43</u> | 42.17 | <u>40.95</u> |
| Jais-30B-v1-base | 32.67 | 30.67 | 42.13 | 39.60 | 36.27 |
| ChatGPT 3.5 Turbo | **43.38** | **44.12** | **55.57** | **53.21** | **49.07** |
Benchmark evaluation on [ArabicMMLU](https://github.com/mbzuai-nlp/ArabicMMLU), assessed using its source settings.
| | STEM | Social Sciences | Humanities | Arabic Language | Other | Average |
|------------------|------|------|------|------|------|------|
| Bloomz-7B-base | - | - | - | - | - | - |
| LLaMA2-7B-base | 33.7 | 32.8 | 33.5 | 28.4 | 36.7 | 33.4 |
| AceGPT-7B-base | 35.4 | 35.9 | 36.2 | 31.1 | 41.7 | 36.3 |
| AceGPT-v1.5-7B-base | 36.7 | 36.5 | 34.1 | 30.0 | 41.2 | 37.0 |
| LLaMA2-13B-base | 32.9 | 35.0 | 37.8 | 35.8 | 39.3 | 36.1 |
| Jais-13B-base | 30.3 | 31.4 | 33.6 | 28.1 | 36.3 | 32.2 |
| AceGPT-13B-base | <u>42.7</u> | 45.5 | 48.3 | 42.4 | 50.7 | 46.1 |
| AceGPT-v1.5-13B-base | 42.4 | <u>45.7</u> | 48.4 | <u>46.3</u> | <u>52.5</u> | <u>47.6</u> |
| Jais-30B-v1-base | 39.5 | 45.6 | <u>50.5</u> | 34.6 | 49.1 | 44.8 |
| ChatGPT 3.5 Turbo | **53.8** | **57.0** | **57.5** | **57.6** | **63.8** | **57.7** |
## Samples
#### Sample 1 (abstract_algebra)
* <b>input:</b>
"فيما يلي أسئلة الاختيار من متعدد (مع الإجابات) حول جبر تجريدي\n\nسؤال: العثور على جميع قيم c في Z_3 بحيث يكون Z_3 [x]/(x^2+c) حقلًا.\nA. 0\nB. 1\nC. 2\nD. 3\nإجابة: B\n\nسؤال: البيان رقم 1 | إذا كان aH عنصرًا في مجموعة العوامل ، فإن | aH | يقسم | a |. البيان رقم 2 | إذا كانت H و K مجموعات فرعية لـ G ، فإن HK مجموعة فرعية لـ G.\nA. صحيح ، صحيح\nB. خطأ ، خطأ\nC. صحيح ، خطأ\nD. خطأ ، صحيح\nإجابة: B\n\nسؤال: العبارة 1 | كل عنصر من مجموعة يولد مجموعة دورية من المجموعة. العبارة 2 | المجموعة المتناظرة S_10 لديها 10 عناصر.\nA. صحيح، صحيح\nB. خطأ، خطأ\nC. صحيح، خطأ\nD. خطأ، صحيح\nإجابة: C\n\nسؤال: البيان 1| كل وظيفة من مجموعة محدودة على نفسها يجب أن تكون واحدة لكل مجموعة. البيان 2 | كل فرع فرعي لمجموعة أبيلية هو أبيلي.\nA. صحيح, صحيح\nB. خاطئ, خاطئ\nC. صحيح, خاطئ\nD. خاطئ, صحيح\nإجابة: A\n\nسؤال: اعثر على خاصية الحلقة 2Z.\nA. 0\nB. 3\nC. 12\nD. 30\nإجابة: A\n\nسؤال: ما هو الدرجة للامتداد الميداني الناتج من Q(sqrt(2), sqrt(3), sqrt(18)) على Q؟\nA. 0\nB. 4\nC. 2\nD. 6\nإجابة:"
* <b>output:</b>
"B\n\nسؤال: ما هو عدد العناصر"
#### Sample 2 (business_ethics)
* <b>input:</b>
"فيما يلي أسئلة الاختيار من متعدد (مع الإجابات) حول أخلاقيات الأعمال\n\nسؤال: ما هي الحجج الأخلاقية المتعلقة بالمسؤولية الاجتماعية للشركات؟\nA. التكاليف الخارجية، القوة، الاستقلالية\nB. الإعلام، الموارد الضعيفة، التبادل التعاوني\nC. الإعلام، القوة، الاستقلالية\nD. التكاليف الخارجية، القوة، التبادل التعاوني\nإجابة: D\n\nسؤال: _______ هو المحاولة المباشرة لإدارة القضايا الأخلاقية أو المشاكل، سواء بشكل رسمي أو غير رسمي، من خلال سياسات وممارسات وبرامج محددة.\nA. المسؤولية الاجتماعية للشركات\nB. إدارة الأخلاقيات العملية\nC. الاستدامة\nD. إدارة البيئة\nإجابة: B\n\nسؤال: لضمان استقلال أعضاء مجلس الإدارة غير التنفيذية ، هناك عدد من الخطوات التي يمكن اتخاذها ، والتي تشمل اختيار الغير التنفيذيين من _______ الشركة ، وتعيينهم لمدة _________ ، وكذلك تعيينهم _________.\nA. خارج الشركة ، محدودة ، بشكل مستقل\nB. من الداخل ، محدودة ، بشكل متقطع\nC. خارج الشركة ، غير محدودة ، بشكل متقطع\nD. من الداخل ، غير محدودة ، بشكل مستقل\nإجابة: A\n\nسؤال: ما هي الأساليب التي يمكن للمدير الأمني الذي يسعى لتحقيق أهدافه الاختيار بينها؟\nA. العمل المباشر الغير عنيف ، العمل المباشر العنيف ، العمل غير المباشر ، الحملة الدعائية\nB. العمل غير المباشر ، العمل الأوتيل ، العمل المباشر الغير عنيف ، الحملة الإعلامية\nC. العمل غير المباشر ، العمل المباشر العنيف ، العمل المباشر غير العنيف المباشر ، الحملة الدعائية\nD. العمل المباشر الغير عنيف ، العمل الأوتيل ، العمل غير المباشر ، الحملة الإعلامية\nإجابة: C\n\nسؤال: على عكس _______ ، تهدف _______ إلى مكافأة السلوك الإيجابي للشركات. تم تعزيز نجاح مثل هذه الحملات من خلال استخدام ___________, الذي يتيح للحملات تيسير تحقيق الشركة لــ _________ .\nA. الحملات الاستهلاكية، الحملات الاستهلاكية العامة، تكنولوجيا سلسلة الكتل، التبرعات الخيرية\nB. الحملات التحفيزية، الحملات الاستهلاكية العامة، التكنولوجيا الرقمية، زيادة المبيعات\nC. الحملات الاستهلاكية، الحملات الشرائية، تكنولوجيا سلسلة الكتل، التبرعات الخيرية\nD. المقاطعات، الحملات التحفيزية، الحملات الرقمية، زيادة المبيعات\nإجابة: D\n\nسؤال: تُصبح _______ مثل البيتكوين أكثر انتشارًا وتحمل مجموعة كبيرة من الآثار الأخلاقية المرتبطة بها، على سبيل المثال، إنها _______ وأكثر _______. ومع ذلك، تم استخدامها أيضًا للمشاركة في _______.\nA. العملات الرقمية، مكلفة، آمنة، جرائم مالية\nB. العملات التقليدية، رخيصة، غير آمنة، العطاء الخيري\nC. العملات الرقمية، رخيصة، آمنة، جرائم مالية\nD. العملات التقليدية، مكلفة، غير آمنة، العطاء الخيري\nإجابة:"
* <b>output:</b>
"A\n\nسؤال: _______ هو"
# Reference
```
@article{zhu2025second,
title={Second Language (Arabic) Acquisition of LLMs via Progressive Vocabulary Expansion},
author={Zhu, Jianqing and Huang, Huang and Lin, Zhihang and Liang, Juhao and Tang, Zhengyang and Almubarak, Khalid and Alharthi, Mosen and An, Bang and He, Juncai and Wu, Xiangbo and Yu, Fei and Chen, Junying and Ma, Zhuoheng and Du, Yuhao and Hu, Yan and Zhang, He and Alghamdi, Emad A. and Zhang, Lian and Sun, Ruoyu and Li, Haizhou and Wang, Benyou and Xu, Jinchao},
journal={ACL 2025},
year={2025}
}
```
|
donoway/BoolQ_Llama-3.2-1B-me3479q5
|
donoway
| 2025-08-19T08:26:59Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"base_model:meta-llama/Llama-3.2-1B",
"base_model:finetune:meta-llama/Llama-3.2-1B",
"license:llama3.2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-19T08:06:37Z |
---
library_name: transformers
license: llama3.2
base_model: meta-llama/Llama-3.2-1B
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: BoolQ_Llama-3.2-1B-me3479q5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BoolQ_Llama-3.2-1B-me3479q5
This model is a fine-tuned version of [meta-llama/Llama-3.2-1B](https://huggingface.co/meta-llama/Llama-3.2-1B) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2599
- Model Preparation Time: 0.0055
- Mdl: 10661.2190
- Accumulated Loss: 7389.7939
- Correct Preds: 2256.0
- Total Preds: 3270.0
- Accuracy: 0.6899
- Correct Gen Preds: 1847.0
- Gen Accuracy: 0.5648
- Correct Gen Preds 9642: 1328.0
- Correct Preds 9642: 1615.0
- Total Labels 9642: 2026.0
- Accuracy 9642: 0.7971
- Gen Accuracy 9642: 0.6555
- Correct Gen Preds 2822: 511.0
- Correct Preds 2822: 641.0
- Total Labels 2822: 1231.0
- Accuracy 2822: 0.5207
- Gen Accuracy 2822: 0.4151
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 120
- seed: 42
- optimizer: AdamW (adamw_torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.01
- num_epochs: 100
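For reference, these settings map onto `transformers.TrainingArguments` roughly as follows (a hedged sketch; `output_dir` is illustrative):
```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="BoolQ_Llama-3.2-1B-me3479q5",  # illustrative
    learning_rate=2e-05,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=120,
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    lr_scheduler_type="cosine",
    warmup_ratio=0.01,
    num_train_epochs=100,
)
```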
### Training results
| Training Loss | Epoch | Step | Validation Loss | Model Preparation Time | Mdl | Accumulated Loss | Correct Preds | Total Preds | Accuracy | Correct Gen Preds | Gen Accuracy | Correct Gen Preds 9642 | Correct Preds 9642 | Total Labels 9642 | Accuracy 9642 | Gen Accuracy 9642 | Correct Gen Preds 2822 | Correct Preds 2822 | Total Labels 2822 | Accuracy 2822 | Gen Accuracy 2822 |
|:-------------:|:-----:|:----:|:---------------:|:----------------------:|:----------:|:----------------:|:-------------:|:-----------:|:--------:|:-----------------:|:------------:|:----------------------:|:------------------:|:-----------------:|:-------------:|:-----------------:|:----------------------:|:------------------:|:-----------------:|:-------------:|:-----------------:|
| No log | 0 | 0 | 0.7080 | 0.0055 | 3339.8933 | 2315.0376 | 2032.0 | 3270.0 | 0.6214 | 2040.0 | 0.6239 | 2007.0 | 2008.0 | 2026.0 | 0.9911 | 0.9906 | 24.0 | 24.0 | 1231.0 | 0.0195 | 0.0195 |
| 0.6888 | 1.0 | 2 | 1.5355 | 0.0055 | 7243.7816 | 5021.0068 | 1231.0 | 3270.0 | 0.3765 | 1225.0 | 0.3746 | 0.0 | 0.0 | 2026.0 | 0.0 | 0.0 | 1216.0 | 1231.0 | 1231.0 | 1.0 | 0.9878 |
| 0.4873 | 2.0 | 4 | 1.0878 | 0.0055 | 5131.7632 | 3557.0672 | 2027.0 | 3270.0 | 0.6199 | 641.0 | 0.1960 | 631.0 | 2023.0 | 2026.0 | 0.9985 | 0.3115 | 1.0 | 4.0 | 1231.0 | 0.0032 | 0.0008 |
| 0.2411 | 3.0 | 6 | 1.0755 | 0.0055 | 5073.9259 | 3516.9774 | 1560.0 | 3270.0 | 0.4771 | 1290.0 | 0.3945 | 227.0 | 388.0 | 2026.0 | 0.1915 | 0.1120 | 1054.0 | 1172.0 | 1231.0 | 0.9521 | 0.8562 |
| 0.2022 | 4.0 | 8 | 0.8677 | 0.0055 | 4093.3239 | 2837.2759 | 1917.0 | 3270.0 | 0.5862 | 1654.0 | 0.5058 | 694.0 | 884.0 | 2026.0 | 0.4363 | 0.3425 | 951.0 | 1033.0 | 1231.0 | 0.8392 | 0.7725 |
| 0.0205 | 5.0 | 10 | 0.9632 | 0.0055 | 4543.9737 | 3149.6426 | 2200.0 | 3270.0 | 0.6728 | 2086.0 | 0.6379 | 1619.0 | 1705.0 | 2026.0 | 0.8416 | 0.7991 | 458.0 | 495.0 | 1231.0 | 0.4021 | 0.3721 |
| 0.0011 | 6.0 | 12 | 1.6852 | 0.0055 | 7949.9900 | 5510.5132 | 2228.0 | 3270.0 | 0.6813 | 1965.0 | 0.6009 | 1438.0 | 1632.0 | 2026.0 | 0.8055 | 0.7098 | 518.0 | 596.0 | 1231.0 | 0.4842 | 0.4208 |
| 0.0001 | 7.0 | 14 | 2.2599 | 0.0055 | 10661.2190 | 7389.7939 | 2256.0 | 3270.0 | 0.6899 | 1847.0 | 0.5648 | 1328.0 | 1615.0 | 2026.0 | 0.7971 | 0.6555 | 511.0 | 641.0 | 1231.0 | 0.5207 | 0.4151 |
| 0.0 | 8.0 | 16 | 2.5994 | 0.0055 | 12262.8330 | 8499.9481 | 2250.0 | 3270.0 | 0.6881 | 1703.0 | 0.5208 | 1190.0 | 1594.0 | 2026.0 | 0.7868 | 0.5874 | 504.0 | 656.0 | 1231.0 | 0.5329 | 0.4094 |
| 0.0 | 9.0 | 18 | 2.7853 | 0.0055 | 13140.1064 | 9108.0277 | 2245.0 | 3270.0 | 0.6865 | 1616.0 | 0.4942 | 1106.0 | 1579.0 | 2026.0 | 0.7794 | 0.5459 | 501.0 | 666.0 | 1231.0 | 0.5410 | 0.4070 |
| 0.0 | 10.0 | 20 | 2.8947 | 0.0055 | 13656.2114 | 9465.7644 | 2244.0 | 3270.0 | 0.6862 | 1550.0 | 0.4740 | 1053.0 | 1574.0 | 2026.0 | 0.7769 | 0.5197 | 488.0 | 670.0 | 1231.0 | 0.5443 | 0.3964 |
| 0.0 | 11.0 | 22 | 2.9647 | 0.0055 | 13986.3195 | 9694.5779 | 2242.0 | 3270.0 | 0.6856 | 1525.0 | 0.4664 | 1028.0 | 1567.0 | 2026.0 | 0.7734 | 0.5074 | 488.0 | 675.0 | 1231.0 | 0.5483 | 0.3964 |
| 0.0 | 12.0 | 24 | 3.0204 | 0.0055 | 14249.1044 | 9876.7265 | 2241.0 | 3270.0 | 0.6853 | 1520.0 | 0.4648 | 1026.0 | 1564.0 | 2026.0 | 0.7720 | 0.5064 | 485.0 | 677.0 | 1231.0 | 0.5500 | 0.3940 |
| 0.0 | 13.0 | 26 | 3.0632 | 0.0055 | 14450.8337 | 10016.5547 | 2242.0 | 3270.0 | 0.6856 | 1517.0 | 0.4639 | 1032.0 | 1566.0 | 2026.0 | 0.7730 | 0.5094 | 476.0 | 676.0 | 1231.0 | 0.5491 | 0.3867 |
| 0.0 | 14.0 | 28 | 3.0930 | 0.0055 | 14591.5894 | 10114.1191 | 2239.0 | 3270.0 | 0.6847 | 1541.0 | 0.4713 | 1050.0 | 1568.0 | 2026.0 | 0.7739 | 0.5183 | 482.0 | 671.0 | 1231.0 | 0.5451 | 0.3916 |
| 0.0 | 15.0 | 30 | 3.1175 | 0.0055 | 14707.0215 | 10194.1305 | 2239.0 | 3270.0 | 0.6847 | 1557.0 | 0.4761 | 1064.0 | 1568.0 | 2026.0 | 0.7739 | 0.5252 | 484.0 | 671.0 | 1231.0 | 0.5451 | 0.3932 |
| 0.0 | 16.0 | 32 | 3.1366 | 0.0055 | 14797.0392 | 10256.5260 | 2237.0 | 3270.0 | 0.6841 | 1567.0 | 0.4792 | 1073.0 | 1567.0 | 2026.0 | 0.7734 | 0.5296 | 485.0 | 670.0 | 1231.0 | 0.5443 | 0.3940 |
| 0.0 | 17.0 | 34 | 3.1519 | 0.0055 | 14869.3338 | 10306.6368 | 2241.0 | 3270.0 | 0.6853 | 1579.0 | 0.4829 | 1085.0 | 1572.0 | 2026.0 | 0.7759 | 0.5355 | 485.0 | 669.0 | 1231.0 | 0.5435 | 0.3940 |
| 0.0 | 18.0 | 36 | 3.1598 | 0.0055 | 14906.7599 | 10332.5786 | 2244.0 | 3270.0 | 0.6862 | 1593.0 | 0.4872 | 1096.0 | 1573.0 | 2026.0 | 0.7764 | 0.5410 | 488.0 | 671.0 | 1231.0 | 0.5451 | 0.3964 |
| 0.0 | 19.0 | 38 | 3.1724 | 0.0055 | 14965.9286 | 10373.5912 | 2242.0 | 3270.0 | 0.6856 | 1603.0 | 0.4902 | 1104.0 | 1573.0 | 2026.0 | 0.7764 | 0.5449 | 490.0 | 669.0 | 1231.0 | 0.5435 | 0.3981 |
| 0.0 | 20.0 | 40 | 3.1785 | 0.0055 | 14994.7059 | 10393.5381 | 2247.0 | 3270.0 | 0.6872 | 1604.0 | 0.4905 | 1106.0 | 1577.0 | 2026.0 | 0.7784 | 0.5459 | 489.0 | 670.0 | 1231.0 | 0.5443 | 0.3972 |
| 0.0 | 21.0 | 42 | 3.1864 | 0.0055 | 15032.1352 | 10419.4821 | 2245.0 | 3270.0 | 0.6865 | 1613.0 | 0.4933 | 1113.0 | 1578.0 | 2026.0 | 0.7789 | 0.5494 | 491.0 | 667.0 | 1231.0 | 0.5418 | 0.3989 |
| 0.0 | 22.0 | 44 | 3.1900 | 0.0055 | 15049.2090 | 10431.3168 | 2242.0 | 3270.0 | 0.6856 | 1620.0 | 0.4954 | 1117.0 | 1576.0 | 2026.0 | 0.7779 | 0.5513 | 494.0 | 666.0 | 1231.0 | 0.5410 | 0.4013 |
| 0.0 | 23.0 | 46 | 3.1960 | 0.0055 | 15077.5586 | 10450.9672 | 2242.0 | 3270.0 | 0.6856 | 1621.0 | 0.4957 | 1120.0 | 1575.0 | 2026.0 | 0.7774 | 0.5528 | 492.0 | 667.0 | 1231.0 | 0.5418 | 0.3997 |
| 0.0 | 24.0 | 48 | 3.1986 | 0.0055 | 15089.7720 | 10459.4329 | 2243.0 | 3270.0 | 0.6859 | 1625.0 | 0.4969 | 1125.0 | 1576.0 | 2026.0 | 0.7779 | 0.5553 | 491.0 | 667.0 | 1231.0 | 0.5418 | 0.3989 |
| 0.0 | 25.0 | 50 | 3.2004 | 0.0055 | 15098.4212 | 10465.4281 | 2241.0 | 3270.0 | 0.6853 | 1627.0 | 0.4976 | 1123.0 | 1575.0 | 2026.0 | 0.7774 | 0.5543 | 495.0 | 666.0 | 1231.0 | 0.5410 | 0.4021 |
| 0.0 | 26.0 | 52 | 3.2032 | 0.0055 | 15111.4442 | 10474.4550 | 2246.0 | 3270.0 | 0.6869 | 1629.0 | 0.4982 | 1127.0 | 1577.0 | 2026.0 | 0.7784 | 0.5563 | 493.0 | 669.0 | 1231.0 | 0.5435 | 0.4005 |
| 0.0 | 27.0 | 54 | 3.2052 | 0.0055 | 15120.6726 | 10480.8516 | 2244.0 | 3270.0 | 0.6862 | 1633.0 | 0.4994 | 1133.0 | 1577.0 | 2026.0 | 0.7784 | 0.5592 | 491.0 | 667.0 | 1231.0 | 0.5418 | 0.3989 |
| 0.0 | 28.0 | 56 | 3.2064 | 0.0055 | 15126.6264 | 10484.9784 | 2245.0 | 3270.0 | 0.6865 | 1634.0 | 0.4997 | 1127.0 | 1576.0 | 2026.0 | 0.7779 | 0.5563 | 498.0 | 669.0 | 1231.0 | 0.5435 | 0.4045 |
| 0.0 | 29.0 | 58 | 3.2093 | 0.0055 | 15140.1416 | 10494.3465 | 2244.0 | 3270.0 | 0.6862 | 1637.0 | 0.5006 | 1138.0 | 1576.0 | 2026.0 | 0.7779 | 0.5617 | 490.0 | 668.0 | 1231.0 | 0.5426 | 0.3981 |
| 0.0 | 30.0 | 60 | 3.2123 | 0.0055 | 15154.4079 | 10504.2351 | 2240.0 | 3270.0 | 0.6850 | 1643.0 | 0.5024 | 1138.0 | 1577.0 | 2026.0 | 0.7784 | 0.5617 | 496.0 | 663.0 | 1231.0 | 0.5386 | 0.4029 |
| 0.0 | 31.0 | 62 | 3.2131 | 0.0055 | 15158.3154 | 10506.9436 | 2242.0 | 3270.0 | 0.6856 | 1640.0 | 0.5015 | 1137.0 | 1576.0 | 2026.0 | 0.7779 | 0.5612 | 494.0 | 666.0 | 1231.0 | 0.5410 | 0.4013 |
| 0.0 | 32.0 | 64 | 3.2138 | 0.0055 | 15161.4034 | 10509.0840 | 2240.0 | 3270.0 | 0.6850 | 1647.0 | 0.5037 | 1144.0 | 1576.0 | 2026.0 | 0.7779 | 0.5647 | 494.0 | 664.0 | 1231.0 | 0.5394 | 0.4013 |
| 0.0 | 33.0 | 66 | 3.2162 | 0.0055 | 15172.9442 | 10517.0835 | 2242.0 | 3270.0 | 0.6856 | 1650.0 | 0.5046 | 1146.0 | 1578.0 | 2026.0 | 0.7789 | 0.5656 | 495.0 | 664.0 | 1231.0 | 0.5394 | 0.4021 |
| 0.0 | 34.0 | 68 | 3.2208 | 0.0055 | 15194.6156 | 10532.1050 | 2236.0 | 3270.0 | 0.6838 | 1650.0 | 0.5046 | 1144.0 | 1574.0 | 2026.0 | 0.7769 | 0.5647 | 497.0 | 662.0 | 1231.0 | 0.5378 | 0.4037 |
| 0.0 | 35.0 | 70 | 3.2202 | 0.0055 | 15191.4896 | 10529.9382 | 2241.0 | 3270.0 | 0.6853 | 1650.0 | 0.5046 | 1143.0 | 1577.0 | 2026.0 | 0.7784 | 0.5642 | 498.0 | 664.0 | 1231.0 | 0.5394 | 0.4045 |
| 0.0 | 36.0 | 72 | 3.2211 | 0.0055 | 15196.0934 | 10533.1293 | 2241.0 | 3270.0 | 0.6853 | 1654.0 | 0.5058 | 1148.0 | 1577.0 | 2026.0 | 0.7784 | 0.5666 | 497.0 | 664.0 | 1231.0 | 0.5394 | 0.4037 |
| 0.0 | 37.0 | 74 | 3.2230 | 0.0055 | 15204.6715 | 10539.0752 | 2243.0 | 3270.0 | 0.6859 | 1655.0 | 0.5061 | 1150.0 | 1577.0 | 2026.0 | 0.7784 | 0.5676 | 496.0 | 666.0 | 1231.0 | 0.5410 | 0.4029 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
koloni/blockassist-bc-deadly_graceful_stingray_1755584085
|
koloni
| 2025-08-19T08:25:54Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"deadly graceful stingray",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T06:41:59Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- deadly graceful stingray
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
hoan17/saving_LOe400s16_scratch_8
|
hoan17
| 2025-08-19T08:25:31Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"trl",
"o2o",
"reinforcement-learning",
"text-to-image",
"stable-diffusion",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2025-08-19T08:25:02Z |
---
license: apache-2.0
tags:
- trl
- o2o
- diffusers
- reinforcement-learning
- text-to-image
- stable-diffusion
---
# TRL O2O Model
This is a diffusion model that has been fine-tuned with reinforcement learning to
guide the model outputs according to a value function or human feedback. The model can be used for text-conditioned image generation.
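Since the repository is tagged `diffusers:StableDiffusionPipeline`, loading it should follow the standard diffusers API; a minimal sketch:
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("hoan17/saving_LOe400s16_scratch_8", torch_dtype=torch.float16)
pipe.to("cuda")
image = pipe("a photo of a dog").images[0]
image.save("out.png")
```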
|
Jansenhbar/dummy-model
|
Jansenhbar
| 2025-08-19T08:24:13Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"camembert",
"fill-mask",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2025-08-19T08:23:58Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
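Given the `camembert` architecture and the `fill-mask` pipeline tag, a minimal sketch would be:
```python
from transformers import pipeline

# CamemBERT-style models use <mask> as the mask token.
fill = pipeline("fill-mask", model="Jansenhbar/dummy-model")
print(fill("Le camembert est <mask> !"))
```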
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
sungkwan2/segformer-b0-scene-parse-150
|
sungkwan2
| 2025-08-19T08:24:07Z | 2 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"segformer",
"generated_from_trainer",
"base_model:nvidia/mit-b0",
"base_model:finetune:nvidia/mit-b0",
"license:other",
"endpoints_compatible",
"region:us"
] | null | 2025-04-23T12:00:05Z |
---
library_name: transformers
license: other
base_model: nvidia/mit-b0
tags:
- generated_from_trainer
model-index:
- name: segformer-b0-scene-parse-150
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# segformer-b0-scene-parse-150
This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 4.1486
- Mean Iou: 0.0000
- Mean Accuracy: 0.0001
- Overall Accuracy: 0.0001
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: AdamW (adamw_torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-------------:|:----------------:|
| 4.9721 | 0.025 | 1 | 5.0059 | 0.0007 | 0.0148 | 0.0063 |
| 4.9475 | 0.05 | 2 | 5.0027 | 0.0007 | 0.0140 | 0.0060 |
| 4.9457 | 0.075 | 3 | 4.9996 | 0.0008 | 0.0144 | 0.0063 |
| 4.9923 | 0.1 | 4 | 4.9959 | 0.0008 | 0.0142 | 0.0063 |
| 5.0016 | 0.125 | 5 | 4.9912 | 0.0009 | 0.0153 | 0.0069 |
| 4.9753 | 0.15 | 6 | 4.9876 | 0.0008 | 0.0149 | 0.0070 |
| 4.8799 | 0.175 | 7 | 4.9824 | 0.0006 | 0.0108 | 0.0051 |
| 4.9689 | 0.2 | 8 | 4.9767 | 0.0006 | 0.0095 | 0.0045 |
| 4.9046 | 0.225 | 9 | 4.9697 | 0.0006 | 0.0093 | 0.0044 |
| 4.8772 | 0.25 | 10 | 4.9629 | 0.0005 | 0.0074 | 0.0035 |
| 4.7839 | 0.275 | 11 | 4.9574 | 0.0005 | 0.0084 | 0.0038 |
| 4.9577 | 0.3 | 12 | 4.9500 | 0.0005 | 0.0074 | 0.0031 |
| 4.8491 | 0.325 | 13 | 4.9411 | 0.0004 | 0.0067 | 0.0026 |
| 4.8449 | 0.35 | 14 | 4.9340 | 0.0004 | 0.0070 | 0.0026 |
| 4.8899 | 0.375 | 15 | 4.9229 | 0.0003 | 0.0051 | 0.0020 |
| 4.7924 | 0.4 | 16 | 4.9163 | 0.0003 | 0.0050 | 0.0019 |
| 4.7651 | 0.425 | 17 | 4.9072 | 0.0003 | 0.0043 | 0.0016 |
| 4.7951 | 0.45 | 18 | 4.8953 | 0.0002 | 0.0035 | 0.0013 |
| 4.7355 | 0.475 | 19 | 4.8865 | 0.0002 | 0.0028 | 0.0010 |
| 4.7461 | 0.5 | 20 | 4.8723 | 0.0002 | 0.0026 | 0.0008 |
| 4.704 | 0.525 | 21 | 4.8606 | 0.0002 | 0.0022 | 0.0007 |
| 4.7775 | 0.55 | 22 | 4.8484 | 0.0001 | 0.0017 | 0.0006 |
| 4.7081 | 0.575 | 23 | 4.8331 | 0.0001 | 0.0013 | 0.0004 |
| 4.7681 | 0.6 | 24 | 4.8187 | 0.0001 | 0.0009 | 0.0003 |
| 4.7297 | 0.625 | 25 | 4.8037 | 0.0001 | 0.0008 | 0.0003 |
| 4.8181 | 0.65 | 26 | 4.7882 | 0.0001 | 0.0007 | 0.0002 |
| 4.833 | 0.675 | 27 | 4.7748 | 0.0001 | 0.0006 | 0.0002 |
| 4.7222 | 0.7 | 28 | 4.7575 | 0.0000 | 0.0004 | 0.0002 |
| 4.6457 | 0.725 | 29 | 4.7389 | 0.0000 | 0.0004 | 0.0002 |
| 4.7089 | 0.75 | 30 | 4.7236 | 0.0000 | 0.0005 | 0.0002 |
| 4.543 | 0.775 | 31 | 4.7079 | 0.0001 | 0.0006 | 0.0002 |
| 4.5529 | 0.8 | 32 | 4.6963 | 0.0001 | 0.0006 | 0.0003 |
| 4.7005 | 0.825 | 33 | 4.6759 | 0.0001 | 0.0006 | 0.0003 |
| 4.4735 | 0.85 | 34 | 4.6630 | 0.0001 | 0.0008 | 0.0004 |
| 4.6562 | 0.875 | 35 | 4.6468 | 0.0001 | 0.0009 | 0.0004 |
| 4.5902 | 0.9 | 36 | 4.6274 | 0.0001 | 0.0008 | 0.0004 |
| 4.4974 | 0.925 | 37 | 4.6125 | 0.0001 | 0.0008 | 0.0004 |
| 4.524 | 0.95 | 38 | 4.5967 | 0.0001 | 0.0011 | 0.0005 |
| 4.5527 | 0.975 | 39 | 4.5826 | 0.0001 | 0.0011 | 0.0005 |
| 4.5165 | 1.0 | 40 | 4.5627 | 0.0001 | 0.0010 | 0.0005 |
| 4.6337 | 1.025 | 41 | 4.5502 | 0.0001 | 0.0012 | 0.0006 |
| 4.4551 | 1.05 | 42 | 4.5425 | 0.0001 | 0.0012 | 0.0005 |
| 4.4697 | 1.075 | 43 | 4.5294 | 0.0001 | 0.0006 | 0.0003 |
| 4.4967 | 1.1 | 44 | 4.5065 | 0.0001 | 0.0007 | 0.0003 |
| 4.4839 | 1.125 | 45 | 4.4896 | 0.0000 | 0.0004 | 0.0002 |
| 4.4394 | 1.15 | 46 | 4.4699 | 0.0000 | 0.0003 | 0.0001 |
| 4.4557 | 1.175 | 47 | 4.4511 | 0.0000 | 0.0003 | 0.0001 |
| 4.2669 | 1.2 | 48 | 4.4475 | 0.0000 | 0.0003 | 0.0001 |
| 4.3143 | 1.225 | 49 | 4.4325 | 0.0000 | 0.0002 | 0.0001 |
| 4.4519 | 1.25 | 50 | 4.4195 | 0.0000 | 0.0002 | 0.0001 |
| 4.5376 | 1.275 | 51 | 4.4092 | 0.0000 | 0.0001 | 0.0001 |
| 4.2617 | 1.3 | 52 | 4.4058 | 0.0000 | 0.0001 | 0.0000 |
| 4.2813 | 1.325 | 53 | 4.3936 | 0.0000 | 0.0001 | 0.0000 |
| 4.5218 | 1.35 | 54 | 4.3867 | 0.0000 | 0.0002 | 0.0001 |
| 4.4805 | 1.375 | 55 | 4.3691 | 0.0000 | 0.0002 | 0.0001 |
| 4.184 | 1.4 | 56 | 4.3574 | 0.0000 | 0.0002 | 0.0001 |
| 4.2208 | 1.425 | 57 | 4.3606 | 0.0000 | 0.0001 | 0.0001 |
| 4.5288 | 1.45 | 58 | 4.3579 | 0.0000 | 0.0001 | 0.0001 |
| 4.3959 | 1.475 | 59 | 4.3421 | 0.0000 | 0.0001 | 0.0000 |
| 4.2578 | 1.5 | 60 | 4.3403 | 0.0000 | 0.0001 | 0.0000 |
| 4.3504 | 1.525 | 61 | 4.3307 | 0.0000 | 0.0001 | 0.0000 |
| 4.2364 | 1.55 | 62 | 4.3177 | 0.0000 | 0.0001 | 0.0000 |
| 4.3248 | 1.575 | 63 | 4.2924 | 0.0000 | 0.0000 | 0.0000 |
| 4.2771 | 1.6 | 64 | 4.2698 | 0.0000 | 0.0000 | 0.0000 |
| 4.2447 | 1.625 | 65 | 4.2533 | 0.0000 | 0.0000 | 0.0000 |
| 4.4481 | 1.65 | 66 | 4.2418 | 0.0000 | 0.0000 | 0.0000 |
| 4.1369 | 1.675 | 67 | 4.2374 | 0.0000 | 0.0000 | 0.0000 |
| 4.2266 | 1.7 | 68 | 4.2305 | 0.0000 | 0.0000 | 0.0000 |
| 4.5113 | 1.725 | 69 | 4.2225 | 0.0000 | 0.0000 | 0.0000 |
| 4.4737 | 1.75 | 70 | 4.2077 | 0.0000 | 0.0000 | 0.0000 |
| 4.4571 | 1.775 | 71 | 4.1960 | 0.0000 | 0.0001 | 0.0000 |
| 4.2179 | 1.8 | 72 | 4.1824 | 0.0000 | 0.0001 | 0.0000 |
| 4.5426 | 1.825 | 73 | 4.1654 | 0.0000 | 0.0002 | 0.0001 |
| 4.3632 | 1.85 | 74 | 4.1572 | 0.0000 | 0.0002 | 0.0001 |
| 4.2132 | 1.875 | 75 | 4.1628 | 0.0000 | 0.0002 | 0.0001 |
| 4.3442 | 1.9 | 76 | 4.1621 | 0.0000 | 0.0001 | 0.0000 |
| 4.4454 | 1.925 | 77 | 4.1647 | 0.0000 | 0.0001 | 0.0000 |
| 4.1564 | 1.95 | 78 | 4.1691 | 0.0000 | 0.0001 | 0.0000 |
| 4.5028 | 1.975 | 79 | 4.1513 | 0.0000 | 0.0002 | 0.0001 |
| 4.3814 | 2.0 | 80 | 4.1486 | 0.0000 | 0.0001 | 0.0001 |
### Framework versions
- Transformers 4.55.2
- Pytorch 2.6.0+cu124
- Datasets 4.0.0
- Tokenizers 0.21.4
|
ankitkushwaha90/Image_transformer_algorithm
|
ankitkushwaha90
| 2025-08-19T08:14:56Z | 0 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"finance",
"text-classification",
"en",
"dataset:UCSC-VLAA/GPT-Image-Edit-1.5M",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:mit",
"region:us"
] |
text-classification
| 2025-08-19T08:08:13Z |
---
license: mit
datasets:
- UCSC-VLAA/GPT-Image-Edit-1.5M
language:
- en
metrics:
- accuracy
base_model:
- stabilityai/stable-diffusion-xl-base-1.0
new_version: openai/gpt-oss-120b
pipeline_tag: text-classification
library_name: adapter-transformers
tags:
- finance
---
# 🚀 Stable Diffusion with Transformers (Advanced Training)
This project demonstrates how to **train a Stable Diffusion-like model** using an **image dataset** with advanced **Transformer-based denoising**.
The implementation leverages **PyTorch + Hugging Face Diffusers + Transformers**.
---
## 📌 Overview
Stable Diffusion is a **Latent Diffusion Model (LDM)** that generates images by:
1. Encoding images into a **latent space** using a **VAE (Variational Autoencoder)**.
2. Adding **Gaussian noise** to the latents across multiple time steps.
3. Training a **denoising Transformer/UNet** to remove noise step by step.
4. Using a **text encoder (CLIP)** for **prompt conditioning**.
5. Decoding the cleaned latents back to an **image**.
---
## 🔬 Architecture
```mermaid
graph TD;
A[Input Image] -->|VAE Encoder| B[Latent Space];
B -->|Add Noise| C[Noisy Latents];
C -->|Transformer / UNet Denoiser| D[Clean Latents];
D -->|VAE Decoder| E[Output Image];
F[Text Prompt] -->|CLIP Encoder| C;
```
- VAE → Compresses image → latent space
- Transformer/UNet → Learns to denoise latent
- Text Encoder → Aligns text + image
- Noise Scheduler → Controls forward & reverse diffusion
## 📂 Dataset
- Images should be resized (256x256) and normalized to [-1, 1].
- Optional: Provide text captions for conditioning.
- Example:
```bash
data/
├── class1/
│ ├── img1.png
│ └── img2.jpg
├── class2/
│ ├── img3.png
│ └── img4.jpg
```
## ⚙️ Training Algorithm
The training process for Stable Diffusion with Transformers follows these steps:
1. **Encode Images** → Pass input images through a **VAE Encoder** to obtain latent representations.
2. **Sample Noise & Timestep** → Randomly sample Gaussian noise and a timestep `t`.
3. **Add Noise** → Corrupt the latent vectors with noise according to timestep `t`.
4. **Text Conditioning** → Encode text prompts using **CLIP** (or another Transformer text encoder).
5. **Noise Prediction** → Feed the noisy latents + text embeddings into the **Transformer/UNet** to predict the added noise.
6. **Compute Loss** → Calculate the **Mean Squared Error (MSE)** between predicted noise and true noise.
7. **Backpropagation** → Update model weights using gradient descent.
---
```mermaid
flowchart TD
A[Image] -->|VAE Encoder| B[Latent Space]
B -->|Add Noise + t| C[Noisy Latents]
D[Text Prompt] -->|CLIP Encoder| C
C -->|Transformer / UNet| E[Predicted Noise]
E -->|MSE Loss| F[Training Update]
```
## 🧑‍💻 Example Training Code
```python
from diffusers import UNet2DConditionModel, DDPMScheduler, AutoencoderKL
from transformers import CLIPTextModel, CLIPTokenizer
import torch, torch.nn as nn
from torchvision import datasets, transforms
from torch.utils.data import DataLoader

# Dataset: resize to 256x256 and normalize to [-1, 1]
transform = transforms.Compose([
    transforms.Resize((256, 256)),
    transforms.ToTensor(),
    transforms.Normalize([0.5], [0.5])
])
dataset = datasets.ImageFolder("path_to_images", transform=transform)
dataloader = DataLoader(dataset, batch_size=8, shuffle=True)

# Components: VAE, UNet denoiser, CLIP text encoder, noise scheduler
vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse")
unet = UNet2DConditionModel.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="unet")
tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-base-patch32")
text_encoder = CLIPTextModel.from_pretrained("openai/clip-vit-base-patch32")
scheduler = DDPMScheduler(num_train_timesteps=1000)

device = "cuda" if torch.cuda.is_available() else "cpu"
vae, unet, text_encoder = vae.to(device), unet.to(device), text_encoder.to(device)
vae.requires_grad_(False)          # only the UNet is trained
text_encoder.requires_grad_(False)
optimizer = torch.optim.AdamW(unet.parameters(), lr=1e-4)

# Training Loop
for epoch in range(10):
    for images, _ in dataloader:
        images = images.to(device)
        with torch.no_grad():
            # Encode images into scaled latents (SD convention: x0.18215)
            latents = vae.encode(images).latent_dist.sample() * 0.18215
            # One caption per image so text and latent batch sizes match
            text_inputs = tokenizer(["a photo"] * images.shape[0], padding="max_length",
                                    truncation=True, return_tensors="pt").to(device)
            text_embeds = text_encoder(text_inputs.input_ids).last_hidden_state
        noise = torch.randn_like(latents)
        timesteps = torch.randint(0, scheduler.config.num_train_timesteps,
                                  (latents.shape[0],), device=device)
        noisy_latents = scheduler.add_noise(latents, noise, timesteps)
        noise_pred = unet(noisy_latents, timesteps, encoder_hidden_states=text_embeds).sample
        loss = nn.MSELoss()(noise_pred, noise)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    print(f"Epoch {epoch} | Loss: {loss.item()}")
```
## 💾 Saving & Inference
```python
# Save the trained UNet
torch.save(unet.state_dict(), "unet_trained.pth")

# Inference pipeline:
# 1. Sample a random latent
# 2. Iteratively denoise with the scheduler + trained UNet
# 3. Decode with the VAE → image
```
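Putting those three steps together, a minimal reverse-diffusion sketch that reuses the `vae`, `unet`, `text_encoder`, `tokenizer`, `scheduler`, and `device` objects from the training script above:
```python
import torch

@torch.no_grad()
def sample(prompt: str, steps: int = 50):
    # Encode the prompt exactly as during training
    text_inputs = tokenizer([prompt], padding="max_length", truncation=True,
                            return_tensors="pt").to(device)
    text_embeds = text_encoder(text_inputs.input_ids).last_hidden_state

    # 1. Sample a random latent (256x256 images -> 32x32 latents)
    latents = torch.randn(1, unet.config.in_channels, 32, 32, device=device)

    # 2. Iteratively denoise with the scheduler + trained UNet
    scheduler.set_timesteps(steps)
    for t in scheduler.timesteps:
        noise_pred = unet(latents, t, encoder_hidden_states=text_embeds).sample
        latents = scheduler.step(noise_pred, t, latents).prev_sample

    # 3. Decode with the VAE, undoing the 0.18215 scaling
    return vae.decode(latents / 0.18215).sample

image = sample("a photo")
```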
### 📖 References
- Stable Diffusion Paper
- Hugging Face Diffusers
- Diffusion Transformer (DiT)
## ✅ Future Work
- Replace the UNet with a pure Transformer (DiT).
- Use larger text encoders (T5/DeBERTa).
- Train with custom captioned datasets.
|
VoilaRaj/78_3pdGH2
|
VoilaRaj
| 2025-08-19T08:13:40Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-08-19T08:09:42Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
MajdShamaly12/gpt-oss-20b-kaggle
|
MajdShamaly12
| 2025-08-19T08:07:20Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_oss",
"text-generation",
"vllm",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"8-bit",
"mxfp4",
"region:us"
] |
text-generation
| 2025-08-19T07:57:09Z |
---
license: apache-2.0
pipeline_tag: text-generation
library_name: transformers
tags:
- vllm
---
<p align="center">
<img alt="gpt-oss-20b" src="https://raw.githubusercontent.com/openai/gpt-oss/main/docs/gpt-oss-20b.svg">
</p>
<p align="center">
<a href="https://gpt-oss.com"><strong>Try gpt-oss</strong></a> ·
<a href="https://cookbook.openai.com/topic/gpt-oss"><strong>Guides</strong></a> ·
<a href="https://openai.com/index/gpt-oss-model-card"><strong>Model card</strong></a> ·
<a href="https://openai.com/index/introducing-gpt-oss/"><strong>OpenAI blog</strong></a>
</p>
<br>
Welcome to the gpt-oss series, [OpenAI’s open-weight models](https://openai.com/open-models) designed for powerful reasoning, agentic tasks, and versatile developer use cases.
We’re releasing two flavors of these open models:
- `gpt-oss-120b` — for production, general purpose, high reasoning use cases that fit into a single 80GB GPU (like NVIDIA H100 or AMD MI300X) (117B parameters with 5.1B active parameters)
- `gpt-oss-20b` — for lower latency, and local or specialized use cases (21B parameters with 3.6B active parameters)
Both models were trained on our [harmony response format](https://github.com/openai/harmony) and should only be used with the harmony format as it will not work correctly otherwise.
> [!NOTE]
> This model card is dedicated to the smaller `gpt-oss-20b` model. Check out [`gpt-oss-120b`](https://huggingface.co/openai/gpt-oss-120b) for the larger model.
# Highlights
* **Permissive Apache 2.0 license:** Build freely without copyleft restrictions or patent risk—ideal for experimentation, customization, and commercial deployment.
* **Configurable reasoning effort:** Easily adjust the reasoning effort (low, medium, high) based on your specific use case and latency needs.
* **Full chain-of-thought:** Gain complete access to the model’s reasoning process, facilitating easier debugging and increased trust in outputs. It’s not intended to be shown to end users.
* **Fine-tunable:** Fully customize models to your specific use case through parameter fine-tuning.
* **Agentic capabilities:** Use the models’ native capabilities for function calling, [web browsing](https://github.com/openai/gpt-oss/tree/main?tab=readme-ov-file#browser), [Python code execution](https://github.com/openai/gpt-oss/tree/main?tab=readme-ov-file#python), and Structured Outputs.
* **MXFP4 quantization:** The models were post-trained with MXFP4 quantization of the MoE weights, making `gpt-oss-120b` run on a single 80GB GPU (like NVIDIA H100 or AMD MI300X) and the `gpt-oss-20b` model run within 16GB of memory. All evals were performed with the same MXFP4 quantization.
---
# Inference examples
## Transformers
You can use `gpt-oss-120b` and `gpt-oss-20b` with Transformers. If you use the Transformers chat template, it will automatically apply the [harmony response format](https://github.com/openai/harmony). If you use `model.generate` directly, you need to apply the harmony format manually using the chat template or use our [openai-harmony](https://github.com/openai/harmony) package.
To get started, install the necessary dependencies to set up your environment:
```
pip install -U transformers kernels torch
```
Once set up, you can run the model with the snippet below:
```py
from transformers import pipeline
import torch
model_id = "openai/gpt-oss-20b"
pipe = pipeline(
"text-generation",
model=model_id,
torch_dtype="auto",
device_map="auto",
)
messages = [
{"role": "user", "content": "Explain quantum mechanics clearly and concisely."},
]
outputs = pipe(
messages,
max_new_tokens=256,
)
print(outputs[0]["generated_text"][-1])
```
Alternatively, you can run the model via [`Transformers Serve`](https://huggingface.co/docs/transformers/main/serving) to spin up an OpenAI-compatible webserver:
```
transformers serve
transformers chat localhost:8000 --model-name-or-path openai/gpt-oss-20b
```
[Learn more about how to use gpt-oss with Transformers.](https://cookbook.openai.com/articles/gpt-oss/run-transformers)
## vLLM
vLLM recommends using [uv](https://docs.astral.sh/uv/) for Python dependency management. You can use vLLM to spin up an OpenAI-compatible webserver. The following command will automatically download the model and start the server.
```bash
uv pip install --pre vllm==0.10.1+gptoss \
--extra-index-url https://wheels.vllm.ai/gpt-oss/ \
--extra-index-url https://download.pytorch.org/whl/nightly/cu128 \
--index-strategy unsafe-best-match
vllm serve openai/gpt-oss-20b
```
[Learn more about how to use gpt-oss with vLLM.](https://cookbook.openai.com/articles/gpt-oss/run-vllm)
## PyTorch / Triton
To learn about how to use this model with PyTorch and Triton, check out our [reference implementations in the gpt-oss repository](https://github.com/openai/gpt-oss?tab=readme-ov-file#reference-pytorch-implementation).
## Ollama
If you are trying to run gpt-oss on consumer hardware, you can use Ollama by running the following commands after [installing Ollama](https://ollama.com/download).
```bash
# gpt-oss-20b
ollama pull gpt-oss:20b
ollama run gpt-oss:20b
```
[Learn more about how to use gpt-oss with Ollama.](https://cookbook.openai.com/articles/gpt-oss/run-locally-ollama)
#### LM Studio
If you are using [LM Studio](https://lmstudio.ai/) you can use the following commands to download.
```bash
# gpt-oss-20b
lms get openai/gpt-oss-20b
```
Check out our [awesome list](https://github.com/openai/gpt-oss/blob/main/awesome-gpt-oss.md) for a broader collection of gpt-oss resources and inference partners.
---
# Download the model
You can download the model weights from the [Hugging Face Hub](https://huggingface.co/collections/openai/gpt-oss-68911959590a1634ba11c7a4) directly from Hugging Face CLI:
```shell
# gpt-oss-20b
huggingface-cli download openai/gpt-oss-20b --include "original/*" --local-dir gpt-oss-20b/
pip install gpt-oss
python -m gpt_oss.chat model/
```
# Reasoning levels
You can adjust the reasoning level that suits your task across three levels:
* **Low:** Fast responses for general dialogue.
* **Medium:** Balanced speed and detail.
* **High:** Deep and detailed analysis.
The reasoning level can be set in the system prompts, e.g., "Reasoning: high".
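For example, with the Transformers pipeline from the snippet above, the level goes into a system message (the chat template applies the harmony format):
```py
messages = [
    {"role": "system", "content": "Reasoning: high"},
    {"role": "user", "content": "Explain quantum mechanics clearly and concisely."},
]
outputs = pipe(messages, max_new_tokens=256)
print(outputs[0]["generated_text"][-1])
```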
# Tool use
The gpt-oss models are excellent for:
* Web browsing (using built-in browsing tools)
* Function calling with defined schemas
* Agentic operations like browser tasks
# Fine-tuning
Both gpt-oss models can be fine-tuned for a variety of specialized use cases.
This smaller model `gpt-oss-20b` can be fine-tuned on consumer hardware, whereas the larger [`gpt-oss-120b`](https://huggingface.co/openai/gpt-oss-120b) can be fine-tuned on a single H100 node.
|
tslim1/Phi-3-medium-128k-instruct-mlx-8Bit
|
tslim1
| 2025-08-19T08:05:30Z | 0 | 0 |
mlx
|
[
"mlx",
"safetensors",
"phi3",
"nlp",
"code",
"mlx-my-repo",
"text-generation",
"conversational",
"custom_code",
"multilingual",
"base_model:microsoft/Phi-3-medium-128k-instruct",
"base_model:quantized:microsoft/Phi-3-medium-128k-instruct",
"license:mit",
"8-bit",
"region:us"
] |
text-generation
| 2025-08-19T08:04:16Z |
---
license: mit
license_link: https://huggingface.co/microsoft/Phi-3-medium-128k-instruct/resolve/main/LICENSE
language:
- multilingual
pipeline_tag: text-generation
tags:
- nlp
- code
- mlx
- mlx-my-repo
inference:
parameters:
temperature: 0.7
widget:
- messages:
- role: user
content: Can you provide ways to eat combinations of bananas and dragonfruits?
base_model: microsoft/Phi-3-medium-128k-instruct
---
# tslim1/Phi-3-medium-128k-instruct-mlx-8Bit
This model, [tslim1/Phi-3-medium-128k-instruct-mlx-8Bit](https://huggingface.co/tslim1/Phi-3-medium-128k-instruct-mlx-8Bit), was converted to MLX format from [microsoft/Phi-3-medium-128k-instruct](https://huggingface.co/microsoft/Phi-3-medium-128k-instruct) using mlx-lm version **0.26.3**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("tslim1/Phi-3-medium-128k-instruct-mlx-8Bit")
prompt="hello"
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
liukevin666/blockassist-bc-yawning_striped_cassowary_1755590333
|
liukevin666
| 2025-08-19T08:00:20Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"yawning striped cassowary",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T08:00:00Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- yawning striped cassowary
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
donoway/GSM8K-Binary_Llama-3.2-1B-xx99tfwe
|
donoway
| 2025-08-19T08:00:15Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"base_model:meta-llama/Llama-3.2-1B",
"base_model:finetune:meta-llama/Llama-3.2-1B",
"license:llama3.2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-19T05:55:54Z |
---
library_name: transformers
license: llama3.2
base_model: meta-llama/Llama-3.2-1B
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: GSM8K-Binary_Llama-3.2-1B-xx99tfwe
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GSM8K-Binary_Llama-3.2-1B-xx99tfwe
This model is a fine-tuned version of [meta-llama/Llama-3.2-1B](https://huggingface.co/meta-llama/Llama-3.2-1B) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2724
- Model Preparation Time: 0.0059
- Mdl: 4543.3835
- Accumulated Loss: 3149.2334
- Correct Preds: 1590.0
- Total Preds: 2475.0
- Accuracy: 0.6424
- Correct Gen Preds: 1155.0
- Gen Accuracy: 0.4667
- Correct Gen Preds 34192: 669.0
- Correct Preds 34192: 974.0
- Total Labels 34192: 1196.0
- Accuracy 34192: 0.8144
- Gen Accuracy 34192: 0.5594
- Correct Gen Preds 41568: 477.0
- Correct Preds 41568: 616.0
- Total Labels 41568: 1267.0
- Accuracy 41568: 0.4862
- Gen Accuracy 41568: 0.3765
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.01
- num_epochs: 100
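For readers reproducing a comparable setup, here is a minimal sketch of how the hyperparameters above map onto 🤗 `TrainingArguments` (the `output_dir` is illustrative, and dataset loading and model-specific preprocessing are omitted):
```python
from transformers import TrainingArguments

# Mirrors the hyperparameter list above; output_dir is illustrative.
args = TrainingArguments(
    output_dir="GSM8K-Binary_Llama-3.2-1B",
    learning_rate=2e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=64,
    seed=42,
    optim="adamw_torch",          # AdamW, torch implementation
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    warmup_ratio=0.01,
    num_train_epochs=100,
)
```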
### Training results
| Training Loss | Epoch | Step | Validation Loss | Model Preparation Time | Mdl | Accumulated Loss | Correct Preds | Total Preds | Accuracy | Correct Gen Preds | Gen Accuracy | Correct Gen Preds 34192 | Correct Preds 34192 | Total Labels 34192 | Accuracy 34192 | Gen Accuracy 34192 | Correct Gen Preds 41568 | Correct Preds 41568 | Total Labels 41568 | Accuracy 41568 | Gen Accuracy 41568 |
|:-------------:|:-----:|:----:|:---------------:|:----------------------:|:----------:|:----------------:|:-------------:|:-----------:|:--------:|:-----------------:|:------------:|:-----------------------:|:-------------------:|:------------------:|:--------------:|:------------------:|:-----------------------:|:-------------------:|:------------------:|:--------------:|:------------------:|
| No log | 0 | 0 | 1.4656 | 0.0059 | 5233.1723 | 3627.3586 | 1196.0 | 2475.0 | 0.4832 | 1204.0 | 0.4865 | 1196.0 | 1196.0 | 1196.0 | 1.0 | 1.0 | 0.0 | 0.0 | 1267.0 | 0.0 | 0.0 |
| 1.1557 | 1.0 | 1 | 1.4656 | 0.0059 | 5233.1723 | 3627.3586 | 1196.0 | 2475.0 | 0.4832 | 1204.0 | 0.4865 | 1196.0 | 1196.0 | 1196.0 | 1.0 | 1.0 | 0.0 | 0.0 | 1267.0 | 0.0 | 0.0 |
| 1.1557 | 2.0 | 2 | 5.0073 | 0.0059 | 17879.5848 | 12393.1838 | 1267.0 | 2475.0 | 0.5119 | 1274.0 | 0.5147 | 0.0 | 0.0 | 1196.0 | 0.0 | 0.0 | 1266.0 | 1267.0 | 1267.0 | 1.0 | 0.9992 |
| 5.9156 | 3.0 | 3 | 1.1424 | 0.0059 | 4079.2851 | 2827.5450 | 1267.0 | 2475.0 | 0.5119 | 7.0 | 0.0028 | 0.0 | 0.0 | 1196.0 | 0.0 | 0.0 | 0.0 | 1267.0 | 1267.0 | 1.0 | 0.0 |
| 1.1855 | 4.0 | 4 | 1.8316 | 0.0059 | 6540.2060 | 4533.3254 | 1196.0 | 2475.0 | 0.4832 | 8.0 | 0.0032 | 0.0 | 1196.0 | 1196.0 | 1.0 | 0.0 | 0.0 | 0.0 | 1267.0 | 0.0 | 0.0 |
| 1.507 | 5.0 | 5 | 0.9446 | 0.0059 | 3372.8700 | 2337.8953 | 1197.0 | 2475.0 | 0.4836 | 8.0 | 0.0032 | 0.0 | 1192.0 | 1196.0 | 0.9967 | 0.0 | 0.0 | 5.0 | 1267.0 | 0.0039 | 0.0 |
| 0.6628 | 6.0 | 6 | 0.8425 | 0.0059 | 3008.1372 | 2085.0818 | 1278.0 | 2475.0 | 0.5164 | 8.0 | 0.0032 | 0.0 | 25.0 | 1196.0 | 0.0209 | 0.0 | 0.0 | 1253.0 | 1267.0 | 0.9890 | 0.0 |
| 0.6635 | 7.0 | 7 | 0.7631 | 0.0059 | 2724.8477 | 1888.7205 | 1326.0 | 2475.0 | 0.5358 | 8.0 | 0.0032 | 0.0 | 179.0 | 1196.0 | 0.1497 | 0.0 | 0.0 | 1147.0 | 1267.0 | 0.9053 | 0.0 |
| 0.4678 | 8.0 | 8 | 0.7798 | 0.0059 | 2784.3208 | 1929.9441 | 1336.0 | 2475.0 | 0.5398 | 8.0 | 0.0032 | 0.0 | 1143.0 | 1196.0 | 0.9557 | 0.0 | 0.0 | 193.0 | 1267.0 | 0.1523 | 0.0 |
| 0.2898 | 9.0 | 9 | 0.8131 | 0.0059 | 2903.3217 | 2012.4292 | 1346.0 | 2475.0 | 0.5438 | 8.0 | 0.0032 | 0.0 | 1142.0 | 1196.0 | 0.9548 | 0.0 | 0.0 | 204.0 | 1267.0 | 0.1610 | 0.0 |
| 0.1716 | 10.0 | 10 | 0.7535 | 0.0059 | 2690.5199 | 1864.9263 | 1458.0 | 2475.0 | 0.5891 | 8.0 | 0.0032 | 0.0 | 937.0 | 1196.0 | 0.7834 | 0.0 | 0.0 | 521.0 | 1267.0 | 0.4112 | 0.0 |
| 0.0706 | 11.0 | 11 | 0.8096 | 0.0059 | 2890.8924 | 2003.8139 | 1471.0 | 2475.0 | 0.5943 | 8.0 | 0.0032 | 0.0 | 1012.0 | 1196.0 | 0.8462 | 0.0 | 0.0 | 459.0 | 1267.0 | 0.3623 | 0.0 |
| 0.015 | 12.0 | 12 | 0.8691 | 0.0059 | 3103.3107 | 2151.0511 | 1502.0 | 2475.0 | 0.6069 | 8.0 | 0.0032 | 0.0 | 992.0 | 1196.0 | 0.8294 | 0.0 | 0.0 | 510.0 | 1267.0 | 0.4025 | 0.0 |
| 0.0016 | 13.0 | 13 | 0.9300 | 0.0059 | 3320.7592 | 2301.7749 | 1534.0 | 2475.0 | 0.6198 | 8.0 | 0.0032 | 0.0 | 991.0 | 1196.0 | 0.8286 | 0.0 | 0.0 | 543.0 | 1267.0 | 0.4286 | 0.0 |
| 0.0003 | 14.0 | 14 | 0.9761 | 0.0059 | 3485.2924 | 2415.8206 | 1546.0 | 2475.0 | 0.6246 | 25.0 | 0.0101 | 4.0 | 976.0 | 1196.0 | 0.8161 | 0.0033 | 13.0 | 570.0 | 1267.0 | 0.4499 | 0.0103 |
| 0.0001 | 15.0 | 15 | 1.0153 | 0.0059 | 3625.3772 | 2512.9200 | 1556.0 | 2475.0 | 0.6287 | 93.0 | 0.0376 | 22.0 | 968.0 | 1196.0 | 0.8094 | 0.0184 | 63.0 | 588.0 | 1267.0 | 0.4641 | 0.0497 |
| 0.0 | 16.0 | 16 | 1.0547 | 0.0059 | 3766.0323 | 2610.4147 | 1568.0 | 2475.0 | 0.6335 | 188.0 | 0.0760 | 59.0 | 970.0 | 1196.0 | 0.8110 | 0.0493 | 121.0 | 598.0 | 1267.0 | 0.4720 | 0.0955 |
| 0.0 | 17.0 | 17 | 1.0947 | 0.0059 | 3908.7914 | 2709.3678 | 1570.0 | 2475.0 | 0.6343 | 322.0 | 0.1301 | 125.0 | 969.0 | 1196.0 | 0.8102 | 0.1045 | 188.0 | 601.0 | 1267.0 | 0.4743 | 0.1484 |
| 0.0 | 18.0 | 18 | 1.1295 | 0.0059 | 4033.1706 | 2795.5808 | 1573.0 | 2475.0 | 0.6356 | 470.0 | 0.1899 | 211.0 | 972.0 | 1196.0 | 0.8127 | 0.1764 | 250.0 | 601.0 | 1267.0 | 0.4743 | 0.1973 |
| 0.0 | 19.0 | 19 | 1.1608 | 0.0059 | 4144.8149 | 2872.9668 | 1578.0 | 2475.0 | 0.6376 | 612.0 | 0.2473 | 302.0 | 973.0 | 1196.0 | 0.8135 | 0.2525 | 301.0 | 605.0 | 1267.0 | 0.4775 | 0.2376 |
| 0.0 | 20.0 | 20 | 1.1862 | 0.0059 | 4235.5987 | 2935.8933 | 1585.0 | 2475.0 | 0.6404 | 723.0 | 0.2921 | 384.0 | 975.0 | 1196.0 | 0.8152 | 0.3211 | 330.0 | 610.0 | 1267.0 | 0.4815 | 0.2605 |
| 0.0 | 21.0 | 21 | 1.2002 | 0.0059 | 4285.6206 | 2970.5659 | 1587.0 | 2475.0 | 0.6412 | 803.0 | 0.3244 | 434.0 | 975.0 | 1196.0 | 0.8152 | 0.3629 | 360.0 | 612.0 | 1267.0 | 0.4830 | 0.2841 |
| 0.0 | 22.0 | 22 | 1.2167 | 0.0059 | 4344.4465 | 3011.3409 | 1585.0 | 2475.0 | 0.6404 | 885.0 | 0.3576 | 480.0 | 972.0 | 1196.0 | 0.8127 | 0.4013 | 396.0 | 613.0 | 1267.0 | 0.4838 | 0.3125 |
| 0.0 | 23.0 | 23 | 1.2293 | 0.0059 | 4389.3332 | 3042.4540 | 1586.0 | 2475.0 | 0.6408 | 953.0 | 0.3851 | 527.0 | 971.0 | 1196.0 | 0.8119 | 0.4406 | 417.0 | 615.0 | 1267.0 | 0.4854 | 0.3291 |
| 0.0 | 24.0 | 24 | 1.2378 | 0.0059 | 4419.8769 | 3063.6252 | 1582.0 | 2475.0 | 0.6392 | 1008.0 | 0.4073 | 570.0 | 970.0 | 1196.0 | 0.8110 | 0.4766 | 429.0 | 612.0 | 1267.0 | 0.4830 | 0.3386 |
| 0.0 | 25.0 | 25 | 1.2477 | 0.0059 | 4455.0209 | 3087.9851 | 1586.0 | 2475.0 | 0.6408 | 1048.0 | 0.4234 | 591.0 | 971.0 | 1196.0 | 0.8119 | 0.4941 | 448.0 | 615.0 | 1267.0 | 0.4854 | 0.3536 |
| 0.0 | 26.0 | 26 | 1.2538 | 0.0059 | 4476.9507 | 3103.1857 | 1589.0 | 2475.0 | 0.6420 | 1075.0 | 0.4343 | 614.0 | 975.0 | 1196.0 | 0.8152 | 0.5134 | 452.0 | 614.0 | 1267.0 | 0.4846 | 0.3567 |
| 0.0 | 27.0 | 27 | 1.2628 | 0.0059 | 4509.1826 | 3125.5272 | 1586.0 | 2475.0 | 0.6408 | 1106.0 | 0.4469 | 637.0 | 973.0 | 1196.0 | 0.8135 | 0.5326 | 460.0 | 613.0 | 1267.0 | 0.4838 | 0.3631 |
| 0.0 | 28.0 | 28 | 1.2659 | 0.0059 | 4520.0496 | 3133.0596 | 1585.0 | 2475.0 | 0.6404 | 1120.0 | 0.4525 | 645.0 | 972.0 | 1196.0 | 0.8127 | 0.5393 | 466.0 | 613.0 | 1267.0 | 0.4838 | 0.3678 |
| 0.0 | 29.0 | 29 | 1.2675 | 0.0059 | 4525.9937 | 3137.1798 | 1587.0 | 2475.0 | 0.6412 | 1139.0 | 0.4602 | 659.0 | 974.0 | 1196.0 | 0.8144 | 0.5510 | 471.0 | 613.0 | 1267.0 | 0.4838 | 0.3717 |
| 0.0 | 30.0 | 30 | 1.2724 | 0.0059 | 4543.3835 | 3149.2334 | 1590.0 | 2475.0 | 0.6424 | 1155.0 | 0.4667 | 669.0 | 974.0 | 1196.0 | 0.8144 | 0.5594 | 477.0 | 616.0 | 1267.0 | 0.4862 | 0.3765 |
| 0.0 | 31.0 | 31 | 1.2754 | 0.0059 | 4554.1107 | 3156.6690 | 1587.0 | 2475.0 | 0.6412 | 1157.0 | 0.4675 | 672.0 | 975.0 | 1196.0 | 0.8152 | 0.5619 | 476.0 | 612.0 | 1267.0 | 0.4830 | 0.3757 |
| 0.0 | 32.0 | 32 | 1.2780 | 0.0059 | 4563.1825 | 3162.9571 | 1582.0 | 2475.0 | 0.6392 | 1172.0 | 0.4735 | 683.0 | 972.0 | 1196.0 | 0.8127 | 0.5711 | 480.0 | 610.0 | 1267.0 | 0.4815 | 0.3788 |
| 0.0 | 33.0 | 33 | 1.2791 | 0.0059 | 4567.2324 | 3165.7642 | 1583.0 | 2475.0 | 0.6396 | 1171.0 | 0.4731 | 680.0 | 973.0 | 1196.0 | 0.8135 | 0.5686 | 482.0 | 610.0 | 1267.0 | 0.4815 | 0.3804 |
| 0.0 | 34.0 | 34 | 1.2793 | 0.0059 | 4567.8543 | 3166.1953 | 1588.0 | 2475.0 | 0.6416 | 1176.0 | 0.4752 | 684.0 | 975.0 | 1196.0 | 0.8152 | 0.5719 | 483.0 | 613.0 | 1267.0 | 0.4838 | 0.3812 |
| 0.0 | 35.0 | 35 | 1.2811 | 0.0059 | 4574.2847 | 3170.6525 | 1582.0 | 2475.0 | 0.6392 | 1178.0 | 0.4760 | 687.0 | 972.0 | 1196.0 | 0.8127 | 0.5744 | 482.0 | 610.0 | 1267.0 | 0.4815 | 0.3804 |
| 0.0 | 36.0 | 36 | 1.2823 | 0.0059 | 4578.7325 | 3173.7355 | 1583.0 | 2475.0 | 0.6396 | 1183.0 | 0.4780 | 692.0 | 971.0 | 1196.0 | 0.8119 | 0.5786 | 482.0 | 612.0 | 1267.0 | 0.4830 | 0.3804 |
| 0.0 | 37.0 | 37 | 1.2835 | 0.0059 | 4582.8161 | 3176.5661 | 1584.0 | 2475.0 | 0.64 | 1190.0 | 0.4808 | 696.0 | 972.0 | 1196.0 | 0.8127 | 0.5819 | 485.0 | 612.0 | 1267.0 | 0.4830 | 0.3828 |
| 0.0 | 38.0 | 38 | 1.2848 | 0.0059 | 4587.5704 | 3179.8615 | 1585.0 | 2475.0 | 0.6404 | 1184.0 | 0.4784 | 692.0 | 973.0 | 1196.0 | 0.8135 | 0.5786 | 483.0 | 612.0 | 1267.0 | 0.4830 | 0.3812 |
| 0.0 | 39.0 | 39 | 1.2858 | 0.0059 | 4591.0362 | 3182.2638 | 1584.0 | 2475.0 | 0.64 | 1189.0 | 0.4804 | 697.0 | 973.0 | 1196.0 | 0.8135 | 0.5828 | 483.0 | 611.0 | 1267.0 | 0.4822 | 0.3812 |
| 0.0 | 40.0 | 40 | 1.2854 | 0.0059 | 4589.7370 | 3181.3633 | 1585.0 | 2475.0 | 0.6404 | 1186.0 | 0.4792 | 693.0 | 973.0 | 1196.0 | 0.8135 | 0.5794 | 484.0 | 612.0 | 1267.0 | 0.4830 | 0.3820 |
| 0.0 | 41.0 | 41 | 1.2850 | 0.0059 | 4588.1990 | 3180.2972 | 1580.0 | 2475.0 | 0.6384 | 1190.0 | 0.4808 | 698.0 | 972.0 | 1196.0 | 0.8127 | 0.5836 | 483.0 | 608.0 | 1267.0 | 0.4799 | 0.3812 |
| 0.0 | 42.0 | 42 | 1.2860 | 0.0059 | 4591.7153 | 3182.7345 | 1585.0 | 2475.0 | 0.6404 | 1194.0 | 0.4824 | 697.0 | 974.0 | 1196.0 | 0.8144 | 0.5828 | 488.0 | 611.0 | 1267.0 | 0.4822 | 0.3852 |
| 0.0 | 43.0 | 43 | 1.2859 | 0.0059 | 4591.5068 | 3182.5900 | 1580.0 | 2475.0 | 0.6384 | 1189.0 | 0.4804 | 697.0 | 972.0 | 1196.0 | 0.8127 | 0.5828 | 483.0 | 608.0 | 1267.0 | 0.4799 | 0.3812 |
| 0.0 | 44.0 | 44 | 1.2852 | 0.0059 | 4589.0154 | 3180.8631 | 1587.0 | 2475.0 | 0.6412 | 1190.0 | 0.4808 | 698.0 | 975.0 | 1196.0 | 0.8152 | 0.5836 | 483.0 | 612.0 | 1267.0 | 0.4830 | 0.3812 |
| 0.0 | 45.0 | 45 | 1.2860 | 0.0059 | 4591.7459 | 3182.7557 | 1587.0 | 2475.0 | 0.6412 | 1193.0 | 0.4820 | 699.0 | 974.0 | 1196.0 | 0.8144 | 0.5844 | 485.0 | 613.0 | 1267.0 | 0.4838 | 0.3828 |
| 0.0 | 46.0 | 46 | 1.2858 | 0.0059 | 4591.3256 | 3182.4644 | 1587.0 | 2475.0 | 0.6412 | 1196.0 | 0.4832 | 701.0 | 976.0 | 1196.0 | 0.8161 | 0.5861 | 486.0 | 611.0 | 1267.0 | 0.4822 | 0.3836 |
| 0.0 | 47.0 | 47 | 1.2847 | 0.0059 | 4587.3352 | 3179.6985 | 1588.0 | 2475.0 | 0.6416 | 1193.0 | 0.4820 | 698.0 | 974.0 | 1196.0 | 0.8144 | 0.5836 | 486.0 | 614.0 | 1267.0 | 0.4846 | 0.3836 |
| 0.0 | 48.0 | 48 | 1.2860 | 0.0059 | 4591.7670 | 3182.7704 | 1581.0 | 2475.0 | 0.6388 | 1193.0 | 0.4820 | 698.0 | 972.0 | 1196.0 | 0.8127 | 0.5836 | 486.0 | 609.0 | 1267.0 | 0.4807 | 0.3836 |
| 0.0 | 49.0 | 49 | 1.2857 | 0.0059 | 4590.7061 | 3182.0350 | 1590.0 | 2475.0 | 0.6424 | 1195.0 | 0.4828 | 699.0 | 975.0 | 1196.0 | 0.8152 | 0.5844 | 487.0 | 615.0 | 1267.0 | 0.4854 | 0.3844 |
| 0.0 | 50.0 | 50 | 1.2880 | 0.0059 | 4598.8684 | 3187.6927 | 1586.0 | 2475.0 | 0.6408 | 1191.0 | 0.4812 | 697.0 | 975.0 | 1196.0 | 0.8152 | 0.5828 | 485.0 | 611.0 | 1267.0 | 0.4822 | 0.3828 |
| 0.0 | 51.0 | 51 | 1.2880 | 0.0059 | 4598.8481 | 3187.6786 | 1582.0 | 2475.0 | 0.6392 | 1192.0 | 0.4816 | 697.0 | 972.0 | 1196.0 | 0.8127 | 0.5828 | 486.0 | 610.0 | 1267.0 | 0.4815 | 0.3836 |
| 0.0 | 52.0 | 52 | 1.2871 | 0.0059 | 4595.9863 | 3185.6950 | 1579.0 | 2475.0 | 0.6380 | 1196.0 | 0.4832 | 697.0 | 973.0 | 1196.0 | 0.8135 | 0.5828 | 490.0 | 606.0 | 1267.0 | 0.4783 | 0.3867 |
| 0.0 | 53.0 | 53 | 1.2858 | 0.0059 | 4591.2625 | 3182.4207 | 1585.0 | 2475.0 | 0.6404 | 1200.0 | 0.4848 | 701.0 | 974.0 | 1196.0 | 0.8144 | 0.5861 | 490.0 | 611.0 | 1267.0 | 0.4822 | 0.3867 |
| 0.0 | 54.0 | 54 | 1.2867 | 0.0059 | 4594.4591 | 3184.6364 | 1586.0 | 2475.0 | 0.6408 | 1197.0 | 0.4836 | 700.0 | 973.0 | 1196.0 | 0.8135 | 0.5853 | 488.0 | 613.0 | 1267.0 | 0.4838 | 0.3852 |
| 0.0 | 55.0 | 55 | 1.2880 | 0.0059 | 4599.0709 | 3187.8330 | 1579.0 | 2475.0 | 0.6380 | 1194.0 | 0.4824 | 700.0 | 972.0 | 1196.0 | 0.8127 | 0.5853 | 485.0 | 607.0 | 1267.0 | 0.4791 | 0.3828 |
| 0.0 | 56.0 | 56 | 1.2871 | 0.0059 | 4595.6402 | 3185.4551 | 1583.0 | 2475.0 | 0.6396 | 1192.0 | 0.4816 | 698.0 | 973.0 | 1196.0 | 0.8135 | 0.5836 | 485.0 | 610.0 | 1267.0 | 0.4815 | 0.3828 |
| 0.0 | 57.0 | 57 | 1.2866 | 0.0059 | 4593.9130 | 3184.2578 | 1580.0 | 2475.0 | 0.6384 | 1195.0 | 0.4828 | 702.0 | 973.0 | 1196.0 | 0.8135 | 0.5870 | 484.0 | 607.0 | 1267.0 | 0.4791 | 0.3820 |
| 0.0 | 58.0 | 58 | 1.2858 | 0.0059 | 4590.9929 | 3182.2338 | 1583.0 | 2475.0 | 0.6396 | 1189.0 | 0.4804 | 695.0 | 973.0 | 1196.0 | 0.8135 | 0.5811 | 485.0 | 610.0 | 1267.0 | 0.4815 | 0.3828 |
| 0.0 | 59.0 | 59 | 1.2849 | 0.0059 | 4587.9750 | 3180.1420 | 1586.0 | 2475.0 | 0.6408 | 1197.0 | 0.4836 | 702.0 | 975.0 | 1196.0 | 0.8152 | 0.5870 | 486.0 | 611.0 | 1267.0 | 0.4822 | 0.3836 |
| 0.0 | 60.0 | 60 | 1.2866 | 0.0059 | 4594.1392 | 3184.4146 | 1581.0 | 2475.0 | 0.6388 | 1195.0 | 0.4828 | 699.0 | 973.0 | 1196.0 | 0.8135 | 0.5844 | 487.0 | 608.0 | 1267.0 | 0.4799 | 0.3844 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
rockst4r4/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-yawning_tiny_aardvark
|
rockst4r4
| 2025-08-19T07:59:05Z | 1 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am yawning_tiny_aardvark",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-15T20:28:40Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am yawning_tiny_aardvark
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
0xaoyama/blockassist-bc-muscular_zealous_gorilla_1755590184
|
0xaoyama
| 2025-08-19T07:56:59Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"muscular zealous gorilla",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T07:56:45Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- muscular zealous gorilla
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
AngelinaZanardi/nb-bert-base-edu-scorer-lr3e5-bs32_swe_test_2
|
AngelinaZanardi
| 2025-08-19T07:55:48Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:NbAiLab/nb-bert-base",
"base_model:finetune:NbAiLab/nb-bert-base",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-19T06:21:50Z |
---
library_name: transformers
license: cc-by-4.0
base_model: NbAiLab/nb-bert-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: nb-bert-base-edu-scorer-lr3e5-bs32_swe_test_2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nb-bert-base-edu-scorer-lr3e5-bs32_swe_test_2
This model is a fine-tuned version of [NbAiLab/nb-bert-base](https://huggingface.co/NbAiLab/nb-bert-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3465
- Accuracy: 0.5120
- F1 Weighted: 0.5084
## Model description
More information needed
## Intended uses & limitations
More information needed
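In the absence of an official usage example, here is a minimal inference sketch, under the assumption that the checkpoint exposes a standard sequence-classification head (the Swedish example sentence is purely illustrative):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

repo = "AngelinaZanardi/nb-bert-base-edu-scorer-lr3e5-bs32_swe_test_2"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

text = "Fotosyntesen omvandlar ljusenergi till kemisk energi i växter."
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.argmax(dim=-1).item())  # predicted educational-quality class
```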
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 Weighted |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:-----------:|
| 1.1986 | 1.0 | 1472 | 1.1862 | 0.4919 | 0.4498 |
| 1.1196 | 2.0 | 2944 | 1.1978 | 0.4946 | 0.4659 |
| 0.9807 | 3.0 | 4416 | 1.2960 | 0.4798 | 0.4599 |
| 0.8432 | 4.0 | 5888 | 1.4149 | 0.4768 | 0.4709 |
| 0.716 | 5.0 | 7360 | 1.5769 | 0.4720 | 0.4626 |
| 0.5717 | 6.0 | 8832 | 1.8525 | 0.4558 | 0.4588 |
| 0.4705 | 7.0 | 10304 | 2.0333 | 0.4526 | 0.4584 |
| 0.3901 | 8.0 | 11776 | 2.1127 | 0.4534 | 0.4559 |
| 0.322 | 9.0 | 13248 | 2.4347 | 0.4560 | 0.4599 |
| 0.2845 | 10.0 | 14720 | 2.6137 | 0.4411 | 0.4493 |
| 0.2244 | 11.0 | 16192 | 2.7283 | 0.4518 | 0.4564 |
| 0.2059 | 12.0 | 17664 | 3.0232 | 0.4383 | 0.4416 |
| 0.1453 | 13.0 | 19136 | 3.1201 | 0.4484 | 0.4529 |
| 0.1258 | 14.0 | 20608 | 3.2220 | 0.4520 | 0.4561 |
| 0.0994 | 15.0 | 22080 | 3.4938 | 0.4455 | 0.4526 |
| 0.0933 | 16.0 | 23552 | 3.5932 | 0.4506 | 0.4584 |
| 0.0689 | 17.0 | 25024 | 3.8269 | 0.4538 | 0.4607 |
| 0.0514 | 18.0 | 26496 | 4.0041 | 0.4540 | 0.4607 |
| 0.0476 | 19.0 | 27968 | 4.1475 | 0.4540 | 0.4597 |
| 0.0307 | 20.0 | 29440 | 4.2319 | 0.4512 | 0.4575 |
### Framework versions
- Transformers 4.55.0
- Pytorch 2.5.1+cu121
- Datasets 4.0.0
- Tokenizers 0.21.4
|
donoway/ARC-Easy_Llama-3.2-1B-a2yg6wt3
|
donoway
| 2025-08-19T07:54:03Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"base_model:meta-llama/Llama-3.2-1B",
"base_model:finetune:meta-llama/Llama-3.2-1B",
"license:llama3.2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-19T07:42:31Z |
---
library_name: transformers
license: llama3.2
base_model: meta-llama/Llama-3.2-1B
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: ARC-Easy_Llama-3.2-1B-a2yg6wt3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ARC-Easy_Llama-3.2-1B-a2yg6wt3
This model is a fine-tuned version of [meta-llama/Llama-3.2-1B](https://huggingface.co/meta-llama/Llama-3.2-1B) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.3756
- Model Preparation Time: 0.0063
- Mdl: 2775.8537
- Accumulated Loss: 1924.0752
- Correct Preds: 366.0
- Total Preds: 570.0
- Accuracy: 0.6421
- Correct Gen Preds: 355.0
- Gen Accuracy: 0.6228
- Correct Gen Preds 32: 113.0
- Correct Preds 32: 118.0
- Total Labels 32: 158.0
- Accuracy 32: 0.7468
- Gen Accuracy 32: 0.7152
- Correct Gen Preds 33: 106.0
- Correct Preds 33: 107.0
- Total Labels 33: 152.0
- Accuracy 33: 0.7039
- Gen Accuracy 33: 0.6974
- Correct Gen Preds 34: 88.0
- Correct Preds 34: 91.0
- Total Labels 34: 142.0
- Accuracy 34: 0.6408
- Gen Accuracy 34: 0.6197
- Correct Gen Preds 35: 48.0
- Correct Preds 35: 50.0
- Total Labels 35: 118.0
- Accuracy 35: 0.4237
- Gen Accuracy 35: 0.4068
- Correct Gen Preds 36: 0.0
- Correct Preds 36: 0.0
- Total Labels 36: 0.0
- Accuracy 36: 0.0
- Gen Accuracy 36: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 112
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.01
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Model Preparation Time | Mdl | Accumulated Loss | Correct Preds | Total Preds | Accuracy | Correct Gen Preds | Gen Accuracy | Correct Gen Preds 32 | Correct Preds 32 | Total Labels 32 | Accuracy 32 | Gen Accuracy 32 | Correct Gen Preds 33 | Correct Preds 33 | Total Labels 33 | Accuracy 33 | Gen Accuracy 33 | Correct Gen Preds 34 | Correct Preds 34 | Total Labels 34 | Accuracy 34 | Gen Accuracy 34 | Correct Gen Preds 35 | Correct Preds 35 | Total Labels 35 | Accuracy 35 | Gen Accuracy 35 | Correct Gen Preds 36 | Correct Preds 36 | Total Labels 36 | Accuracy 36 | Gen Accuracy 36 |
|:-------------:|:-----:|:----:|:---------------:|:----------------------:|:---------:|:----------------:|:-------------:|:-----------:|:--------:|:-----------------:|:------------:|:--------------------:|:----------------:|:---------------:|:-----------:|:---------------:|:--------------------:|:----------------:|:---------------:|:-----------:|:---------------:|:--------------------:|:----------------:|:---------------:|:-----------:|:---------------:|:--------------------:|:----------------:|:---------------:|:-----------:|:---------------:|:--------------------:|:----------------:|:---------------:|:-----------:|:---------------:|
| No log | 0 | 0 | 1.5354 | 0.0063 | 1262.6022 | 875.1692 | 172.0 | 570.0 | 0.3018 | 170.0 | 0.2982 | 154.0 | 154.0 | 158.0 | 0.9747 | 0.9747 | 0.0 | 0.0 | 152.0 | 0.0 | 0.0 | 15.0 | 17.0 | 142.0 | 0.1197 | 0.1056 | 1.0 | 1.0 | 118.0 | 0.0085 | 0.0085 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.498 | 1.0 | 1 | 1.5354 | 0.0063 | 1262.6022 | 875.1692 | 172.0 | 570.0 | 0.3018 | 170.0 | 0.2982 | 154.0 | 154.0 | 158.0 | 0.9747 | 0.9747 | 0.0 | 0.0 | 152.0 | 0.0 | 0.0 | 15.0 | 17.0 | 142.0 | 0.1197 | 0.1056 | 1.0 | 1.0 | 118.0 | 0.0085 | 0.0085 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.498 | 2.0 | 2 | 2.5202 | 0.0063 | 2072.4194 | 1436.4916 | 219.0 | 570.0 | 0.3842 | 219.0 | 0.3842 | 0.0 | 0.0 | 158.0 | 0.0 | 0.0 | 83.0 | 83.0 | 152.0 | 0.5461 | 0.5461 | 136.0 | 136.0 | 142.0 | 0.9577 | 0.9577 | 0.0 | 0.0 | 118.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.783 | 3.0 | 3 | 1.2895 | 0.0063 | 1060.4212 | 735.0280 | 226.0 | 570.0 | 0.3965 | 226.0 | 0.3965 | 6.0 | 6.0 | 158.0 | 0.0380 | 0.0380 | 145.0 | 145.0 | 152.0 | 0.9539 | 0.9539 | 48.0 | 48.0 | 142.0 | 0.3380 | 0.3380 | 27.0 | 27.0 | 118.0 | 0.2288 | 0.2288 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.4782 | 4.0 | 4 | 1.3651 | 0.0063 | 1122.5363 | 778.0829 | 325.0 | 570.0 | 0.5702 | 311.0 | 0.5456 | 94.0 | 107.0 | 158.0 | 0.6772 | 0.5949 | 103.0 | 104.0 | 152.0 | 0.6842 | 0.6776 | 81.0 | 81.0 | 142.0 | 0.5704 | 0.5704 | 33.0 | 33.0 | 118.0 | 0.2797 | 0.2797 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0157 | 5.0 | 5 | 2.2319 | 0.0063 | 1835.3671 | 1272.1795 | 356.0 | 570.0 | 0.6246 | 339.0 | 0.5947 | 100.0 | 112.0 | 158.0 | 0.7089 | 0.6329 | 108.0 | 108.0 | 152.0 | 0.7105 | 0.7105 | 88.0 | 91.0 | 142.0 | 0.6408 | 0.6197 | 43.0 | 45.0 | 118.0 | 0.3814 | 0.3644 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 6.0 | 6 | 2.8502 | 0.0063 | 2343.8113 | 1624.6062 | 362.0 | 570.0 | 0.6351 | 355.0 | 0.6228 | 109.0 | 116.0 | 158.0 | 0.7342 | 0.6899 | 108.0 | 108.0 | 152.0 | 0.7105 | 0.7105 | 91.0 | 91.0 | 142.0 | 0.6408 | 0.6408 | 47.0 | 47.0 | 118.0 | 0.3983 | 0.3983 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 7.0 | 7 | 3.1703 | 0.0063 | 2607.0465 | 1807.0669 | 363.0 | 570.0 | 0.6368 | 353.0 | 0.6193 | 111.0 | 116.0 | 158.0 | 0.7342 | 0.7025 | 106.0 | 107.0 | 152.0 | 0.7039 | 0.6974 | 88.0 | 91.0 | 142.0 | 0.6408 | 0.6197 | 48.0 | 49.0 | 118.0 | 0.4153 | 0.4068 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 8.0 | 8 | 3.3756 | 0.0063 | 2775.8537 | 1924.0752 | 366.0 | 570.0 | 0.6421 | 355.0 | 0.6228 | 113.0 | 118.0 | 158.0 | 0.7468 | 0.7152 | 106.0 | 107.0 | 152.0 | 0.7039 | 0.6974 | 88.0 | 91.0 | 142.0 | 0.6408 | 0.6197 | 48.0 | 50.0 | 118.0 | 0.4237 | 0.4068 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 9.0 | 9 | 3.5599 | 0.0063 | 2927.4671 | 2029.1656 | 361.0 | 570.0 | 0.6333 | 354.0 | 0.6211 | 116.0 | 120.0 | 158.0 | 0.7595 | 0.7342 | 102.0 | 103.0 | 152.0 | 0.6776 | 0.6711 | 88.0 | 89.0 | 142.0 | 0.6268 | 0.6197 | 48.0 | 49.0 | 118.0 | 0.4153 | 0.4068 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 10.0 | 10 | 3.7285 | 0.0063 | 3066.1098 | 2125.2654 | 353.0 | 570.0 | 0.6193 | 349.0 | 0.6123 | 118.0 | 120.0 | 158.0 | 0.7595 | 0.7468 | 98.0 | 98.0 | 152.0 | 0.6447 | 0.6447 | 84.0 | 85.0 | 142.0 | 0.5986 | 0.5915 | 49.0 | 50.0 | 118.0 | 0.4237 | 0.4153 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 11.0 | 11 | 3.8061 | 0.0063 | 3129.9326 | 2169.5039 | 356.0 | 570.0 | 0.6246 | 349.0 | 0.6123 | 122.0 | 125.0 | 158.0 | 0.7911 | 0.7722 | 97.0 | 97.0 | 152.0 | 0.6382 | 0.6382 | 83.0 | 86.0 | 142.0 | 0.6056 | 0.5845 | 47.0 | 48.0 | 118.0 | 0.4068 | 0.3983 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 12.0 | 12 | 3.8871 | 0.0063 | 3196.4877 | 2215.6364 | 351.0 | 570.0 | 0.6158 | 347.0 | 0.6088 | 123.0 | 124.0 | 158.0 | 0.7848 | 0.7785 | 95.0 | 95.0 | 152.0 | 0.625 | 0.625 | 83.0 | 85.0 | 142.0 | 0.5986 | 0.5845 | 46.0 | 47.0 | 118.0 | 0.3983 | 0.3898 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 13.0 | 13 | 3.9537 | 0.0063 | 3251.2770 | 2253.6135 | 348.0 | 570.0 | 0.6105 | 343.0 | 0.6018 | 124.0 | 124.0 | 158.0 | 0.7848 | 0.7848 | 95.0 | 96.0 | 152.0 | 0.6316 | 0.625 | 81.0 | 84.0 | 142.0 | 0.5915 | 0.5704 | 43.0 | 44.0 | 118.0 | 0.3729 | 0.3644 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 14.0 | 14 | 4.0209 | 0.0063 | 3306.4977 | 2291.8895 | 345.0 | 570.0 | 0.6053 | 338.0 | 0.5930 | 121.0 | 124.0 | 158.0 | 0.7848 | 0.7658 | 92.0 | 93.0 | 152.0 | 0.6118 | 0.6053 | 82.0 | 84.0 | 142.0 | 0.5915 | 0.5775 | 43.0 | 44.0 | 118.0 | 0.3729 | 0.3644 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 15.0 | 15 | 4.0856 | 0.0063 | 3359.7299 | 2328.7873 | 339.0 | 570.0 | 0.5947 | 336.0 | 0.5895 | 123.0 | 125.0 | 158.0 | 0.7911 | 0.7785 | 92.0 | 93.0 | 152.0 | 0.6118 | 0.6053 | 78.0 | 78.0 | 142.0 | 0.5493 | 0.5493 | 43.0 | 43.0 | 118.0 | 0.3644 | 0.3644 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 16.0 | 16 | 4.1341 | 0.0063 | 3399.5852 | 2356.4129 | 336.0 | 570.0 | 0.5895 | 333.0 | 0.5842 | 121.0 | 122.0 | 158.0 | 0.7722 | 0.7658 | 89.0 | 90.0 | 152.0 | 0.5921 | 0.5855 | 79.0 | 79.0 | 142.0 | 0.5563 | 0.5563 | 44.0 | 45.0 | 118.0 | 0.3814 | 0.3729 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 17.0 | 17 | 4.2038 | 0.0063 | 3456.9416 | 2396.1693 | 336.0 | 570.0 | 0.5895 | 332.0 | 0.5825 | 123.0 | 125.0 | 158.0 | 0.7911 | 0.7785 | 89.0 | 89.0 | 152.0 | 0.5855 | 0.5855 | 77.0 | 78.0 | 142.0 | 0.5493 | 0.5423 | 43.0 | 44.0 | 118.0 | 0.3729 | 0.3644 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 18.0 | 18 | 4.1844 | 0.0063 | 3440.9588 | 2385.0909 | 336.0 | 570.0 | 0.5895 | 332.0 | 0.5825 | 124.0 | 126.0 | 158.0 | 0.7975 | 0.7848 | 88.0 | 88.0 | 152.0 | 0.5789 | 0.5789 | 76.0 | 77.0 | 142.0 | 0.5423 | 0.5352 | 44.0 | 45.0 | 118.0 | 0.3814 | 0.3729 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 19.0 | 19 | 4.2392 | 0.0063 | 3486.0281 | 2416.3305 | 333.0 | 570.0 | 0.5842 | 330.0 | 0.5789 | 122.0 | 124.0 | 158.0 | 0.7848 | 0.7722 | 87.0 | 87.0 | 152.0 | 0.5724 | 0.5724 | 78.0 | 78.0 | 142.0 | 0.5493 | 0.5493 | 43.0 | 44.0 | 118.0 | 0.3729 | 0.3644 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 20.0 | 20 | 4.2677 | 0.0063 | 3509.5150 | 2432.6104 | 333.0 | 570.0 | 0.5842 | 330.0 | 0.5789 | 122.0 | 123.0 | 158.0 | 0.7785 | 0.7722 | 87.0 | 87.0 | 152.0 | 0.5724 | 0.5724 | 78.0 | 79.0 | 142.0 | 0.5563 | 0.5493 | 43.0 | 44.0 | 118.0 | 0.3729 | 0.3644 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 21.0 | 21 | 4.2595 | 0.0063 | 3502.7525 | 2427.9230 | 332.0 | 570.0 | 0.5825 | 329.0 | 0.5772 | 121.0 | 122.0 | 158.0 | 0.7722 | 0.7658 | 87.0 | 87.0 | 152.0 | 0.5724 | 0.5724 | 77.0 | 78.0 | 142.0 | 0.5493 | 0.5423 | 44.0 | 45.0 | 118.0 | 0.3814 | 0.3729 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 22.0 | 22 | 4.3090 | 0.0063 | 3543.4099 | 2456.1046 | 331.0 | 570.0 | 0.5807 | 328.0 | 0.5754 | 122.0 | 123.0 | 158.0 | 0.7785 | 0.7722 | 86.0 | 86.0 | 152.0 | 0.5658 | 0.5658 | 77.0 | 78.0 | 142.0 | 0.5493 | 0.5423 | 43.0 | 44.0 | 118.0 | 0.3729 | 0.3644 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 23.0 | 23 | 4.3113 | 0.0063 | 3545.3391 | 2457.4418 | 332.0 | 570.0 | 0.5825 | 330.0 | 0.5789 | 122.0 | 122.0 | 158.0 | 0.7722 | 0.7722 | 86.0 | 86.0 | 152.0 | 0.5658 | 0.5658 | 78.0 | 79.0 | 142.0 | 0.5563 | 0.5493 | 44.0 | 45.0 | 118.0 | 0.3814 | 0.3729 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 24.0 | 24 | 4.3136 | 0.0063 | 3547.2162 | 2458.7429 | 330.0 | 570.0 | 0.5789 | 326.0 | 0.5719 | 122.0 | 124.0 | 158.0 | 0.7848 | 0.7722 | 87.0 | 87.0 | 152.0 | 0.5724 | 0.5724 | 75.0 | 76.0 | 142.0 | 0.5352 | 0.5282 | 42.0 | 43.0 | 118.0 | 0.3644 | 0.3559 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 25.0 | 25 | 4.3196 | 0.0063 | 3552.1961 | 2462.1947 | 331.0 | 570.0 | 0.5807 | 329.0 | 0.5772 | 123.0 | 123.0 | 158.0 | 0.7785 | 0.7785 | 88.0 | 88.0 | 152.0 | 0.5789 | 0.5789 | 75.0 | 76.0 | 142.0 | 0.5352 | 0.5282 | 43.0 | 44.0 | 118.0 | 0.3729 | 0.3644 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 26.0 | 26 | 4.3053 | 0.0063 | 3540.4181 | 2454.0309 | 330.0 | 570.0 | 0.5789 | 328.0 | 0.5754 | 124.0 | 124.0 | 158.0 | 0.7848 | 0.7848 | 87.0 | 87.0 | 152.0 | 0.5724 | 0.5724 | 75.0 | 76.0 | 142.0 | 0.5352 | 0.5282 | 42.0 | 43.0 | 118.0 | 0.3644 | 0.3559 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 27.0 | 27 | 4.3153 | 0.0063 | 3548.6438 | 2459.7325 | 333.0 | 570.0 | 0.5842 | 330.0 | 0.5789 | 124.0 | 125.0 | 158.0 | 0.7911 | 0.7848 | 88.0 | 88.0 | 152.0 | 0.5789 | 0.5789 | 75.0 | 76.0 | 142.0 | 0.5352 | 0.5282 | 43.0 | 44.0 | 118.0 | 0.3729 | 0.3644 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 28.0 | 28 | 4.3397 | 0.0063 | 3568.7171 | 2473.6462 | 329.0 | 570.0 | 0.5772 | 328.0 | 0.5754 | 123.0 | 123.0 | 158.0 | 0.7785 | 0.7785 | 86.0 | 86.0 | 152.0 | 0.5658 | 0.5658 | 75.0 | 76.0 | 142.0 | 0.5352 | 0.5282 | 44.0 | 44.0 | 118.0 | 0.3729 | 0.3729 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 29.0 | 29 | 4.3321 | 0.0063 | 3562.4657 | 2469.3130 | 335.0 | 570.0 | 0.5877 | 331.0 | 0.5807 | 122.0 | 124.0 | 158.0 | 0.7848 | 0.7722 | 86.0 | 86.0 | 152.0 | 0.5658 | 0.5658 | 78.0 | 79.0 | 142.0 | 0.5563 | 0.5493 | 45.0 | 46.0 | 118.0 | 0.3898 | 0.3814 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 30.0 | 30 | 4.3265 | 0.0063 | 3557.8576 | 2466.1189 | 331.0 | 570.0 | 0.5807 | 327.0 | 0.5737 | 123.0 | 125.0 | 158.0 | 0.7911 | 0.7785 | 87.0 | 87.0 | 152.0 | 0.5724 | 0.5724 | 74.0 | 75.0 | 142.0 | 0.5282 | 0.5211 | 43.0 | 44.0 | 118.0 | 0.3729 | 0.3644 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 31.0 | 31 | 4.3336 | 0.0063 | 3563.6921 | 2470.1632 | 333.0 | 570.0 | 0.5842 | 330.0 | 0.5789 | 124.0 | 125.0 | 158.0 | 0.7911 | 0.7848 | 87.0 | 87.0 | 152.0 | 0.5724 | 0.5724 | 75.0 | 76.0 | 142.0 | 0.5352 | 0.5282 | 44.0 | 45.0 | 118.0 | 0.3814 | 0.3729 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 32.0 | 32 | 4.3611 | 0.0063 | 3586.2619 | 2485.8074 | 330.0 | 570.0 | 0.5789 | 327.0 | 0.5737 | 123.0 | 124.0 | 158.0 | 0.7848 | 0.7785 | 87.0 | 87.0 | 152.0 | 0.5724 | 0.5724 | 75.0 | 76.0 | 142.0 | 0.5352 | 0.5282 | 42.0 | 43.0 | 118.0 | 0.3644 | 0.3559 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 33.0 | 33 | 4.3522 | 0.0063 | 3579.0000 | 2480.7737 | 332.0 | 570.0 | 0.5825 | 329.0 | 0.5772 | 123.0 | 124.0 | 158.0 | 0.7848 | 0.7785 | 87.0 | 87.0 | 152.0 | 0.5724 | 0.5724 | 77.0 | 78.0 | 142.0 | 0.5493 | 0.5423 | 42.0 | 43.0 | 118.0 | 0.3644 | 0.3559 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 34.0 | 34 | 4.3694 | 0.0063 | 3593.1289 | 2490.5671 | 330.0 | 570.0 | 0.5789 | 326.0 | 0.5719 | 123.0 | 125.0 | 158.0 | 0.7911 | 0.7785 | 87.0 | 87.0 | 152.0 | 0.5724 | 0.5724 | 73.0 | 74.0 | 142.0 | 0.5211 | 0.5141 | 43.0 | 44.0 | 118.0 | 0.3729 | 0.3644 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 35.0 | 35 | 4.3432 | 0.0063 | 3571.5839 | 2475.6333 | 328.0 | 570.0 | 0.5754 | 326.0 | 0.5719 | 122.0 | 123.0 | 158.0 | 0.7785 | 0.7722 | 87.0 | 87.0 | 152.0 | 0.5724 | 0.5724 | 74.0 | 75.0 | 142.0 | 0.5282 | 0.5211 | 43.0 | 43.0 | 118.0 | 0.3644 | 0.3644 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 36.0 | 36 | 4.3402 | 0.0063 | 3569.1302 | 2473.9325 | 332.0 | 570.0 | 0.5825 | 328.0 | 0.5754 | 122.0 | 124.0 | 158.0 | 0.7848 | 0.7722 | 86.0 | 86.0 | 152.0 | 0.5658 | 0.5658 | 76.0 | 77.0 | 142.0 | 0.5423 | 0.5352 | 44.0 | 45.0 | 118.0 | 0.3814 | 0.3729 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 37.0 | 37 | 4.3729 | 0.0063 | 3596.0119 | 2492.5655 | 330.0 | 570.0 | 0.5789 | 326.0 | 0.5719 | 123.0 | 125.0 | 158.0 | 0.7911 | 0.7785 | 85.0 | 85.0 | 152.0 | 0.5592 | 0.5592 | 76.0 | 77.0 | 142.0 | 0.5423 | 0.5352 | 42.0 | 43.0 | 118.0 | 0.3644 | 0.3559 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 38.0 | 38 | 4.3427 | 0.0063 | 3571.1304 | 2475.3190 | 328.0 | 570.0 | 0.5754 | 325.0 | 0.5702 | 123.0 | 124.0 | 158.0 | 0.7848 | 0.7785 | 86.0 | 86.0 | 152.0 | 0.5658 | 0.5658 | 75.0 | 76.0 | 142.0 | 0.5352 | 0.5282 | 41.0 | 42.0 | 118.0 | 0.3559 | 0.3475 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
GradientResearch/Qwen3-4B-ECHO-Sokoban-GRPO
|
GradientResearch
| 2025-08-19T07:53:28Z | 0 | 0 | null |
[
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:2508.05387",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2025-08-18T12:31:48Z |
---
license: apache-2.0
pipeline_tag: text-generation
---
# Model Card for Qwen3-4B-ECHO-Sokoban-GRPO
<!-- Provide a quick summary of what the model is/does. -->
Building upon Qwen3-4B, we trained the model with the ECHO framework using GRPO on the Sokoban dataset.
Specifically, because Qwen3-4B performs poorly on the more challenging Sokoban puzzles, we adopted a multi-round RL training regimen capped at four rounds, with a maximum of 25 candidate actions per round. The detailed environment configuration is as follows:
```python
LargerSokoban6:
env_type: sokoban
max_actions_per_traj: 100
env_instruction: "You are solving the Sokoban puzzle. You are the player and you need to push all boxes to targets. When you are right next to a box, you can push it by moving in the same direction. You cannot push a box through a wall, and you cannot pull a box. The answer should be a sequence of actions, like <answer>Right || Right || Up</answer>"
max_tokens: 300
env_config:
dim_x: 6
dim_y: 6
num_boxes: 2
max_steps: 300
search_depth: 20
```
Table 1: Model performance on the Sokoban task
| Model | Success Rate (%) |
|----------------|----------------|
| Qwen3-4B | 21.8 |
| Qwen3-4B-Echo (GRPO) | 34.0 |
| Qwen3-30B-A3B-Thinking-2507 | 72.75 |
| Qwen3-30B-A3B-Thinking-2507-Echo (GRPO) | 82.80 |
| DeepSeek-R1 | 75.75 |
| Qwen3-235B-A22B-Thinking-2507 | 79.68 |
| gpt-oss-120b | 79.69 |
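Responses are expected in the `<answer>...</answer>` action format described by the `env_instruction` above. As a reference, a minimal sketch of extracting the action sequence from a model response (the tag format and the `||` separator come from the instruction; the helper name is ours):
```python
import re

def parse_actions(response: str) -> list[str]:
    # Take the content of the last <answer>...</answer> block, if any.
    matches = re.findall(r"<answer>(.*?)</answer>", response, re.DOTALL)
    if not matches:
        return []
    # Actions are separated by "||", e.g. "Right || Right || Up".
    return [a.strip() for a in matches[-1].split("||") if a.strip()]

print(parse_actions("Push the box right.<answer>Right || Right || Up</answer>"))
# -> ['Right', 'Right', 'Up']
```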
# Quick start
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "GradientResearch/Qwen3-4B-ECHO-Sokoban-GRPO"
# load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
# prepare the model input
prompt = "sokoban"
messages = [
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
enable_thinking=True # Switches between thinking and non-thinking modes. Default is True.
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
# conduct text completion
generated_ids = model.generate(
**model_inputs,
max_new_tokens=32768
)
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()
# parsing thinking content
try:
# rindex finding 151668 (</think>)
index = len(output_ids) - output_ids[::-1].index(151668)
except ValueError:
index = 0
thinking_content = tokenizer.decode(output_ids[:index], skip_special_tokens=True).strip("\n")
content = tokenizer.decode(output_ids[index:], skip_special_tokens=True).strip("\n")
print("thinking content:", thinking_content)
print("content:", content)
```
# Citation
If you find our work helpful, feel free to give us a cite.
```
@misc{xiao2025echodecouplinginferencetraining,
title={Echo: Decoupling Inference and Training for Large-Scale RL Alignment on Heterogeneous Swarms},
author={Jie Xiao and Changyuan Fan and Qingnan Ren and Alfred Long and Yuchen Zhang and Rymon Yu and Eric Yang and Lynn Ai and Shaoduo Gan},
year={2025},
eprint={2508.05387},
archivePrefix={arXiv},
primaryClass={cs.LG},
url={https://arxiv.org/abs/2508.05387},
}
```
|
Alonc/device_to_cve_4bit
|
Alonc
| 2025-08-19T07:52:34Z | 0 | 0 | null |
[
"safetensors",
"qwen3",
"region:us"
] | null | 2025-08-18T15:19:30Z |
Note: the model is 16-bit; the "4bit" in the repository name is a typo!
---
base_model: unsloth/qwen3-14b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** Alonc
- **License:** apache-2.0
- **Finetuned from model:** unsloth/qwen3-14b-unsloth-bnb-4bit
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Sayemahsjn/blockassist-bc-playful_feline_octopus_1755588452
|
Sayemahsjn
| 2025-08-19T07:45:27Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"playful feline octopus",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T07:45:23Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- playful feline octopus
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
minhnguyet/my-dpo-mistral-7b
|
minhnguyet
| 2025-08-19T07:41:50Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"base_model:unsloth/mistral-7b-bnb-4bit",
"base_model:finetune:unsloth/mistral-7b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-19T07:41:13Z |
---
base_model: unsloth/mistral-7b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** minhnguyet
- **License:** apache-2.0
- **Finetuned from model:** unsloth/mistral-7b-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
CosminMihai02/llama3.1_ollama_v4
|
CosminMihai02
| 2025-08-19T07:40:35Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-19T07:39:48Z |
---
base_model: unsloth/meta-llama-3.1-8b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** CosminMihai02
- **License:** apache-2.0
- **Finetuned from model:** unsloth/meta-llama-3.1-8b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
0xaoyama/blockassist-bc-muscular_zealous_gorilla_1755589105
|
0xaoyama
| 2025-08-19T07:38:59Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"muscular zealous gorilla",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T07:38:48Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- muscular zealous gorilla
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Ferdi3425/blockassist-bc-amphibious_deadly_otter_1755588973
|
Ferdi3425
| 2025-08-19T07:37:30Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"amphibious deadly otter",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T07:37:01Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- amphibious deadly otter
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
jinaai/jina-embeddings-v4-vllm-code
|
jinaai
| 2025-08-19T07:36:55Z | 354 | 3 |
transformers
|
[
"transformers",
"safetensors",
"qwen2_5_vl",
"image-to-text",
"vidore",
"colpali",
"multimodal-embedding",
"multilingual-embedding",
"Text-to-Visual Document (T→VD) retrieval",
"feature-extraction",
"sentence-similarity",
"mteb",
"visual-document-retrieval",
"multilingual",
"arxiv:2506.18902",
"text-generation-inference",
"endpoints_compatible",
"region:eu"
] |
visual-document-retrieval
| 2025-07-01T10:02:46Z |
---
tags:
- vidore
- colpali
- multimodal-embedding
- multilingual-embedding
- Text-to-Visual Document (T→VD) retrieval
- feature-extraction
- sentence-similarity
- mteb
language:
- multilingual
library_name: transformers
pipeline_tag: visual-document-retrieval
---
<br><br>
<p align="center">
<img src="https://huggingface.co/datasets/jinaai/documentation-images/resolve/main/logo.webp" alt="Jina AI: Your Search Foundation, Supercharged!" width="150px">
</p>
<p align="center">
<b>The embedding model trained by <a href="https://jina.ai/"><b>Jina AI</b></a>.</b>
</p>
# Jina Embeddings v4: Universal Embeddings for Multimodal Multilingual Retrieval
[Original Model](https://huggingface.co/jinaai/jina-embeddings-v4) | [Blog](https://jina.ai/news/jina-embeddings-v4-universal-embeddings-for-multimodal-multilingual-retrieval) | [Technical Report](https://arxiv.org/abs/2506.18902) | [API](https://jina.ai/embeddings)
## Model Overview
This repository hosts a vLLM-compatible version of [`jina-embeddings-v4`](https://huggingface.co/jinaai/jina-embeddings-v4) with the **code** adapter merged into the base `Qwen2.5-VL` weights. This architecture modification enables native compatibility with vLLM without requiring custom adapter-handling code.
## Usage
```python
import torch
from PIL import Image
from vllm import LLM
from vllm.config import PoolerConfig
from vllm.inputs.data import TextPrompt
# Initialize model
model = LLM(
model="jinaai/jina-embeddings-v4-vllm-code",
task="embed",
override_pooler_config=PoolerConfig(pooling_type="ALL", normalize=False),
dtype="float16",
)
# Create text prompts
query = "Find a function that prints a greeting message to the console"
query_prompt = TextPrompt(
prompt=f"Query: {query}"
)
passage = "def hello_world():\n print('Hello, World!')"
passage_prompt = TextPrompt(
prompt=f"Passage: {passage}"
)
# Create image prompt
image = Image.open("<path_to_image>")
image_prompt = TextPrompt(
prompt="<|im_start|>user\n<|vision_start|><|image_pad|><|vision_end|>Describe the image.<|im_end|>\n",
multi_modal_data={"image": image},
)
# Encode all prompts
prompts = [query_prompt, passage_prompt, image_prompt]
outputs = model.encode(prompts)
def get_embeddings(outputs):
VISION_START_TOKEN_ID, VISION_END_TOKEN_ID = 151652, 151653
embeddings = []
for output in outputs:
if VISION_START_TOKEN_ID in output.prompt_token_ids:
# Gather only vision tokens
img_start_pos = torch.where(
torch.tensor(output.prompt_token_ids) == VISION_START_TOKEN_ID
)[0][0]
img_end_pos = torch.where(
torch.tensor(output.prompt_token_ids) == VISION_END_TOKEN_ID
)[0][0]
embeddings_tensor = output.outputs.data.detach().clone()[
img_start_pos : img_end_pos + 1
]
else:
# Use all tokens for text-only prompts
embeddings_tensor = output.outputs.data.detach().clone()
# Pool and normalize embeddings
pooled_output = (
embeddings_tensor.sum(dim=0, dtype=torch.float32)
/ embeddings_tensor.shape[0]
)
embeddings.append(torch.nn.functional.normalize(pooled_output, dim=-1))
return embeddings
embeddings = get_embeddings(outputs)
```
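Since the pooled embeddings are unit-normalized, cosine similarity reduces to a plain dot product. Continuing the snippet above:
```python
# Order follows `prompts`: query, passage, image.
query_emb, passage_emb, image_emb = embeddings
print("query vs passage:", torch.dot(query_emb, passage_emb).item())
print("query vs image:", torch.dot(query_emb, image_emb).item())
```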
|
tslim1/Fin-R1-mlx-8Bit
|
tslim1
| 2025-08-19T07:36:12Z | 0 | 0 |
mlx
|
[
"mlx",
"safetensors",
"qwen2",
"base_model:SUFE-AIFLM-Lab/Fin-R1",
"base_model:quantized:SUFE-AIFLM-Lab/Fin-R1",
"8-bit",
"region:us"
] | null | 2025-08-19T07:35:28Z |
---
base_model: SUFE-AIFLM-Lab/Fin-R1
tags:
- mlx
---
# tslim1/Fin-R1-mlx-8Bit
The Model [tslim1/Fin-R1-mlx-8Bit](https://huggingface.co/tslim1/Fin-R1-mlx-8Bit) was converted to MLX format from [SUFE-AIFLM-Lab/Fin-R1](https://huggingface.co/SUFE-AIFLM-Lab/Fin-R1) using mlx-lm version **0.26.3**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("tslim1/Fin-R1-mlx-8Bit")
prompt="hello"
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
hakimjustbao/blockassist-bc-raging_subtle_wasp_1755587301
|
hakimjustbao
| 2025-08-19T07:35:59Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"raging subtle wasp",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T07:35:55Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- raging subtle wasp
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
0xaoyama/blockassist-bc-muscular_zealous_gorilla_1755588910
|
0xaoyama
| 2025-08-19T07:35:44Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"muscular zealous gorilla",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T07:35:32Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- muscular zealous gorilla
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
lqpl/blockassist-bc-hairy_insectivorous_antelope_1755588496
|
lqpl
| 2025-08-19T07:32:28Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"hairy insectivorous antelope",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T07:29:02Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- hairy insectivorous antelope
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mang3dd/blockassist-bc-tangled_slithering_alligator_1755586902
|
mang3dd
| 2025-08-19T07:29:00Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"tangled slithering alligator",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T07:28:56Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- tangled slithering alligator
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
rohannath/Llama_AI_doctor_using_Unsloth
|
rohannath
| 2025-08-19T07:28:44Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-19T07:28:36Z |
---
base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** rohannath
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
lachielee/guajian
|
lachielee
| 2025-08-19T07:27:54Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:afl-3.0",
"region:us"
] |
text-to-image
| 2025-08-19T07:27:06Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- output:
url: images/头像挂件-flux2.png
text: '-'
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: guajian
license: afl-3.0
---
# flux-lora-挂件整合2
<Gallery />
## Model description
A LoRA for Weibo avatar pendants (头像挂件).
## Trigger words
You should use `guajian` to trigger the image generation.
## Download model
[Download](/lachielee/guajian/tree/main) them in the Files & versions tab.
|
ihsanridzi/blockassist-bc-wiry_flexible_owl_1755586744
|
ihsanridzi
| 2025-08-19T07:26:05Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"wiry flexible owl",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T07:26:00Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- wiry flexible owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
0xaoyama/blockassist-bc-muscular_zealous_gorilla_1755588318
|
0xaoyama
| 2025-08-19T07:25:50Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"muscular zealous gorilla",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T07:25:38Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- muscular zealous gorilla
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
dgambettaphd/M_mis_run2_gen5_WXS_doc1000_synt64_lr1e-04_acm_MPP
|
dgambettaphd
| 2025-08-19T07:25:36Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-19T07:25:22Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
donoway/ARC-Challenge_Llama-3.2-1B-654y06oc
|
donoway
| 2025-08-19T07:23:39Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"base_model:meta-llama/Llama-3.2-1B",
"base_model:finetune:meta-llama/Llama-3.2-1B",
"license:llama3.2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-19T07:13:04Z |
---
library_name: transformers
license: llama3.2
base_model: meta-llama/Llama-3.2-1B
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: ARC-Challenge_Llama-3.2-1B-654y06oc
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ARC-Challenge_Llama-3.2-1B-654y06oc
This model is a fine-tuned version of [meta-llama/Llama-3.2-1B](https://huggingface.co/meta-llama/Llama-3.2-1B) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3954
- Model Preparation Time: 0.0058
- Mdl: 1033.3076
- Accumulated Loss: 716.2342
- Correct Preds: 73.0
- Total Preds: 299.0
- Accuracy: 0.2441
- Correct Gen Preds: 73.0
- Gen Accuracy: 0.2441
- Correct Gen Preds 32: 0.0
- Correct Preds 32: 0.0
- Total Labels 32: 64.0
- Accuracy 32: 0.0
- Gen Accuracy 32: 0.0
- Correct Gen Preds 33: 72.0
- Correct Preds 33: 72.0
- Total Labels 33: 73.0
- Accuracy 33: 0.9863
- Gen Accuracy 33: 0.9863
- Correct Gen Preds 34: 0.0
- Correct Preds 34: 0.0
- Total Labels 34: 78.0
- Accuracy 34: 0.0
- Gen Accuracy 34: 0.0
- Correct Gen Preds 35: 1.0
- Correct Preds 35: 1.0
- Total Labels 35: 83.0
- Accuracy 35: 0.0120
- Gen Accuracy 35: 0.0120
- Correct Gen Preds 36: 0.0
- Correct Preds 36: 0.0
- Total Labels 36: 1.0
- Accuracy 36: 0.0
- Gen Accuracy 36: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 112
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.01
- num_epochs: 100
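As a rough, hedged illustration only (the training script is not published with this card), the settings above map onto `transformers.TrainingArguments` approximately as follows; the output path is hypothetical:
```python
from transformers import TrainingArguments

# Illustrative mapping of the hyperparameters listed above; not the authors' script.
args = TrainingArguments(
    output_dir="ARC-Challenge_Llama-3.2-1B",  # hypothetical path
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=112,
    seed=42,
    optim="adamw_torch",        # betas=(0.9, 0.999) and eps=1e-8 are the defaults
    lr_scheduler_type="cosine",
    warmup_ratio=0.01,          # listed as lr_scheduler_warmup_ratio in the card
    num_train_epochs=100,
)
```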
### Training results
| Training Loss | Epoch | Step | Validation Loss | Model Preparation Time | Mdl | Accumulated Loss | Correct Preds | Total Preds | Accuracy | Correct Gen Preds | Gen Accuracy | Correct Gen Preds 32 | Correct Preds 32 | Total Labels 32 | Accuracy 32 | Gen Accuracy 32 | Correct Gen Preds 33 | Correct Preds 33 | Total Labels 33 | Accuracy 33 | Gen Accuracy 33 | Correct Gen Preds 34 | Correct Preds 34 | Total Labels 34 | Accuracy 34 | Gen Accuracy 34 | Correct Gen Preds 35 | Correct Preds 35 | Total Labels 35 | Accuracy 35 | Gen Accuracy 35 | Correct Gen Preds 36 | Correct Preds 36 | Total Labels 36 | Accuracy 36 | Gen Accuracy 36 |
|:-------------:|:-----:|:----:|:---------------:|:----------------------:|:---------:|:----------------:|:-------------:|:-----------:|:--------:|:-----------------:|:------------:|:--------------------:|:----------------:|:---------------:|:-----------:|:---------------:|:--------------------:|:----------------:|:---------------:|:-----------:|:---------------:|:--------------------:|:----------------:|:---------------:|:-----------:|:---------------:|:--------------------:|:----------------:|:---------------:|:-----------:|:---------------:|:--------------------:|:----------------:|:---------------:|:-----------:|:---------------:|
| No log | 0 | 0 | 1.6389 | 0.0058 | 706.9523 | 490.0220 | 66.0 | 299.0 | 0.2207 | 66.0 | 0.2207 | 62.0 | 62.0 | 64.0 | 0.9688 | 0.9688 | 0.0 | 0.0 | 73.0 | 0.0 | 0.0 | 4.0 | 4.0 | 78.0 | 0.0513 | 0.0513 | 0.0 | 0.0 | 83.0 | 0.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 1.7367 | 1.0 | 1 | 1.6389 | 0.0058 | 706.9523 | 490.0220 | 66.0 | 299.0 | 0.2207 | 66.0 | 0.2207 | 62.0 | 62.0 | 64.0 | 0.9688 | 0.9688 | 0.0 | 0.0 | 73.0 | 0.0 | 0.0 | 4.0 | 4.0 | 78.0 | 0.0513 | 0.0513 | 0.0 | 0.0 | 83.0 | 0.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 1.7367 | 2.0 | 2 | 2.5849 | 0.0058 | 1115.0583 | 772.8995 | 72.0 | 299.0 | 0.2408 | 63.0 | 0.2107 | 0.0 | 0.0 | 64.0 | 0.0 | 0.0 | 63.0 | 72.0 | 73.0 | 0.9863 | 0.8630 | 0.0 | 0.0 | 78.0 | 0.0 | 0.0 | 0.0 | 0.0 | 83.0 | 0.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.686 | 3.0 | 3 | 2.3954 | 0.0058 | 1033.3076 | 716.2342 | 73.0 | 299.0 | 0.2441 | 73.0 | 0.2441 | 0.0 | 0.0 | 64.0 | 0.0 | 0.0 | 72.0 | 72.0 | 73.0 | 0.9863 | 0.9863 | 0.0 | 0.0 | 78.0 | 0.0 | 0.0 | 1.0 | 1.0 | 83.0 | 0.0120 | 0.0120 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.1308 | 4.0 | 4 | 3.5091 | 0.0058 | 1513.7269 | 1049.2355 | 66.0 | 299.0 | 0.2207 | 34.0 | 0.1137 | 27.0 | 56.0 | 64.0 | 0.875 | 0.4219 | 6.0 | 8.0 | 73.0 | 0.1096 | 0.0822 | 0.0 | 0.0 | 78.0 | 0.0 | 0.0 | 1.0 | 2.0 | 83.0 | 0.0241 | 0.0120 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0919 | 5.0 | 5 | 4.5350 | 0.0058 | 1956.2548 | 1355.9725 | 72.0 | 299.0 | 0.2408 | 71.0 | 0.2375 | 1.0 | 2.0 | 64.0 | 0.0312 | 0.0156 | 70.0 | 70.0 | 73.0 | 0.9589 | 0.9589 | 0.0 | 0.0 | 78.0 | 0.0 | 0.0 | 0.0 | 0.0 | 83.0 | 0.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0008 | 6.0 | 6 | 5.9663 | 0.0058 | 2573.6635 | 1783.9276 | 73.0 | 299.0 | 0.2441 | 73.0 | 0.2441 | 0.0 | 0.0 | 64.0 | 0.0 | 0.0 | 73.0 | 73.0 | 73.0 | 1.0 | 1.0 | 0.0 | 0.0 | 78.0 | 0.0 | 0.0 | 0.0 | 0.0 | 83.0 | 0.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0001 | 7.0 | 7 | 7.0163 | 0.0058 | 3026.5888 | 2097.8715 | 73.0 | 299.0 | 0.2441 | 73.0 | 0.2441 | 0.0 | 0.0 | 64.0 | 0.0 | 0.0 | 73.0 | 73.0 | 73.0 | 1.0 | 1.0 | 0.0 | 0.0 | 78.0 | 0.0 | 0.0 | 0.0 | 0.0 | 83.0 | 0.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 8.0 | 8 | 7.8549 | 0.0058 | 3388.3437 | 2348.6208 | 73.0 | 299.0 | 0.2441 | 73.0 | 0.2441 | 0.0 | 0.0 | 64.0 | 0.0 | 0.0 | 73.0 | 73.0 | 73.0 | 1.0 | 1.0 | 0.0 | 0.0 | 78.0 | 0.0 | 0.0 | 0.0 | 0.0 | 83.0 | 0.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 9.0 | 9 | 8.5290 | 0.0058 | 3679.1101 | 2550.1648 | 73.0 | 299.0 | 0.2441 | 73.0 | 0.2441 | 0.0 | 0.0 | 64.0 | 0.0 | 0.0 | 73.0 | 73.0 | 73.0 | 1.0 | 1.0 | 0.0 | 0.0 | 78.0 | 0.0 | 0.0 | 0.0 | 0.0 | 83.0 | 0.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 10.0 | 10 | 9.0813 | 0.0058 | 3917.3504 | 2715.3004 | 73.0 | 299.0 | 0.2441 | 73.0 | 0.2441 | 0.0 | 0.0 | 64.0 | 0.0 | 0.0 | 73.0 | 73.0 | 73.0 | 1.0 | 1.0 | 0.0 | 0.0 | 78.0 | 0.0 | 0.0 | 0.0 | 0.0 | 83.0 | 0.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 11.0 | 11 | 9.5223 | 0.0058 | 4107.5735 | 2847.1530 | 73.0 | 299.0 | 0.2441 | 73.0 | 0.2441 | 0.0 | 0.0 | 64.0 | 0.0 | 0.0 | 73.0 | 73.0 | 73.0 | 1.0 | 1.0 | 0.0 | 0.0 | 78.0 | 0.0 | 0.0 | 0.0 | 0.0 | 83.0 | 0.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 12.0 | 12 | 9.8450 | 0.0058 | 4246.7775 | 2943.6418 | 73.0 | 299.0 | 0.2441 | 73.0 | 0.2441 | 0.0 | 0.0 | 64.0 | 0.0 | 0.0 | 73.0 | 73.0 | 73.0 | 1.0 | 1.0 | 0.0 | 0.0 | 78.0 | 0.0 | 0.0 | 0.0 | 0.0 | 83.0 | 0.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 13.0 | 13 | 10.0820 | 0.0058 | 4349.0238 | 3014.5136 | 73.0 | 299.0 | 0.2441 | 73.0 | 0.2441 | 0.0 | 0.0 | 64.0 | 0.0 | 0.0 | 73.0 | 73.0 | 73.0 | 1.0 | 1.0 | 0.0 | 0.0 | 78.0 | 0.0 | 0.0 | 0.0 | 0.0 | 83.0 | 0.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 14.0 | 14 | 10.2630 | 0.0058 | 4427.1048 | 3068.6352 | 73.0 | 299.0 | 0.2441 | 73.0 | 0.2441 | 0.0 | 0.0 | 64.0 | 0.0 | 0.0 | 73.0 | 73.0 | 73.0 | 1.0 | 1.0 | 0.0 | 0.0 | 78.0 | 0.0 | 0.0 | 0.0 | 0.0 | 83.0 | 0.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 15.0 | 15 | 10.4135 | 0.0058 | 4492.0237 | 3113.6335 | 73.0 | 299.0 | 0.2441 | 73.0 | 0.2441 | 0.0 | 0.0 | 64.0 | 0.0 | 0.0 | 73.0 | 73.0 | 73.0 | 1.0 | 1.0 | 0.0 | 0.0 | 78.0 | 0.0 | 0.0 | 0.0 | 0.0 | 83.0 | 0.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 16.0 | 16 | 10.5454 | 0.0058 | 4548.9178 | 3153.0696 | 73.0 | 299.0 | 0.2441 | 73.0 | 0.2441 | 0.0 | 0.0 | 64.0 | 0.0 | 0.0 | 73.0 | 73.0 | 73.0 | 1.0 | 1.0 | 0.0 | 0.0 | 78.0 | 0.0 | 0.0 | 0.0 | 0.0 | 83.0 | 0.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 17.0 | 17 | 10.6501 | 0.0058 | 4594.0917 | 3184.3817 | 73.0 | 299.0 | 0.2441 | 73.0 | 0.2441 | 0.0 | 0.0 | 64.0 | 0.0 | 0.0 | 73.0 | 73.0 | 73.0 | 1.0 | 1.0 | 0.0 | 0.0 | 78.0 | 0.0 | 0.0 | 0.0 | 0.0 | 83.0 | 0.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 18.0 | 18 | 10.7377 | 0.0058 | 4631.8710 | 3210.5683 | 73.0 | 299.0 | 0.2441 | 73.0 | 0.2441 | 0.0 | 0.0 | 64.0 | 0.0 | 0.0 | 73.0 | 73.0 | 73.0 | 1.0 | 1.0 | 0.0 | 0.0 | 78.0 | 0.0 | 0.0 | 0.0 | 0.0 | 83.0 | 0.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 19.0 | 19 | 10.8058 | 0.0058 | 4661.2653 | 3230.9429 | 73.0 | 299.0 | 0.2441 | 73.0 | 0.2441 | 0.0 | 0.0 | 64.0 | 0.0 | 0.0 | 73.0 | 73.0 | 73.0 | 1.0 | 1.0 | 0.0 | 0.0 | 78.0 | 0.0 | 0.0 | 0.0 | 0.0 | 83.0 | 0.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 20.0 | 20 | 10.8522 | 0.0058 | 4681.2829 | 3244.8181 | 73.0 | 299.0 | 0.2441 | 73.0 | 0.2441 | 0.0 | 0.0 | 64.0 | 0.0 | 0.0 | 73.0 | 73.0 | 73.0 | 1.0 | 1.0 | 0.0 | 0.0 | 78.0 | 0.0 | 0.0 | 0.0 | 0.0 | 83.0 | 0.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 21.0 | 21 | 10.8886 | 0.0058 | 4696.9714 | 3255.6925 | 73.0 | 299.0 | 0.2441 | 73.0 | 0.2441 | 0.0 | 0.0 | 64.0 | 0.0 | 0.0 | 73.0 | 73.0 | 73.0 | 1.0 | 1.0 | 0.0 | 0.0 | 78.0 | 0.0 | 0.0 | 0.0 | 0.0 | 83.0 | 0.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 22.0 | 22 | 10.9239 | 0.0058 | 4712.2095 | 3266.2547 | 73.0 | 299.0 | 0.2441 | 73.0 | 0.2441 | 0.0 | 0.0 | 64.0 | 0.0 | 0.0 | 73.0 | 73.0 | 73.0 | 1.0 | 1.0 | 0.0 | 0.0 | 78.0 | 0.0 | 0.0 | 0.0 | 0.0 | 83.0 | 0.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 23.0 | 23 | 10.9471 | 0.0058 | 4722.2182 | 3273.1922 | 73.0 | 299.0 | 0.2441 | 73.0 | 0.2441 | 0.0 | 0.0 | 64.0 | 0.0 | 0.0 | 73.0 | 73.0 | 73.0 | 1.0 | 1.0 | 0.0 | 0.0 | 78.0 | 0.0 | 0.0 | 0.0 | 0.0 | 83.0 | 0.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 24.0 | 24 | 10.9741 | 0.0058 | 4733.8499 | 3281.2547 | 73.0 | 299.0 | 0.2441 | 73.0 | 0.2441 | 0.0 | 0.0 | 64.0 | 0.0 | 0.0 | 73.0 | 73.0 | 73.0 | 1.0 | 1.0 | 0.0 | 0.0 | 78.0 | 0.0 | 0.0 | 0.0 | 0.0 | 83.0 | 0.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 25.0 | 25 | 10.9917 | 0.0058 | 4741.4235 | 3286.5043 | 73.0 | 299.0 | 0.2441 | 73.0 | 0.2441 | 0.0 | 0.0 | 64.0 | 0.0 | 0.0 | 73.0 | 73.0 | 73.0 | 1.0 | 1.0 | 0.0 | 0.0 | 78.0 | 0.0 | 0.0 | 0.0 | 0.0 | 83.0 | 0.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 26.0 | 26 | 11.0067 | 0.0058 | 4747.9153 | 3291.0041 | 73.0 | 299.0 | 0.2441 | 73.0 | 0.2441 | 0.0 | 0.0 | 64.0 | 0.0 | 0.0 | 73.0 | 73.0 | 73.0 | 1.0 | 1.0 | 0.0 | 0.0 | 78.0 | 0.0 | 0.0 | 0.0 | 0.0 | 83.0 | 0.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 27.0 | 27 | 11.0245 | 0.0058 | 4755.5791 | 3296.3162 | 73.0 | 299.0 | 0.2441 | 73.0 | 0.2441 | 0.0 | 0.0 | 64.0 | 0.0 | 0.0 | 73.0 | 73.0 | 73.0 | 1.0 | 1.0 | 0.0 | 0.0 | 78.0 | 0.0 | 0.0 | 0.0 | 0.0 | 83.0 | 0.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 28.0 | 28 | 11.0307 | 0.0058 | 4758.2846 | 3298.1916 | 73.0 | 299.0 | 0.2441 | 73.0 | 0.2441 | 0.0 | 0.0 | 64.0 | 0.0 | 0.0 | 73.0 | 73.0 | 73.0 | 1.0 | 1.0 | 0.0 | 0.0 | 78.0 | 0.0 | 0.0 | 0.0 | 0.0 | 83.0 | 0.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 29.0 | 29 | 11.0362 | 0.0058 | 4760.6287 | 3299.8163 | 73.0 | 299.0 | 0.2441 | 73.0 | 0.2441 | 0.0 | 0.0 | 64.0 | 0.0 | 0.0 | 73.0 | 73.0 | 73.0 | 1.0 | 1.0 | 0.0 | 0.0 | 78.0 | 0.0 | 0.0 | 0.0 | 0.0 | 83.0 | 0.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 30.0 | 30 | 11.0439 | 0.0058 | 4763.9649 | 3302.1288 | 73.0 | 299.0 | 0.2441 | 73.0 | 0.2441 | 0.0 | 0.0 | 64.0 | 0.0 | 0.0 | 73.0 | 73.0 | 73.0 | 1.0 | 1.0 | 0.0 | 0.0 | 78.0 | 0.0 | 0.0 | 0.0 | 0.0 | 83.0 | 0.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 31.0 | 31 | 11.0487 | 0.0058 | 4766.0390 | 3303.5665 | 73.0 | 299.0 | 0.2441 | 73.0 | 0.2441 | 0.0 | 0.0 | 64.0 | 0.0 | 0.0 | 73.0 | 73.0 | 73.0 | 1.0 | 1.0 | 0.0 | 0.0 | 78.0 | 0.0 | 0.0 | 0.0 | 0.0 | 83.0 | 0.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 32.0 | 32 | 11.0546 | 0.0058 | 4768.5629 | 3305.3159 | 73.0 | 299.0 | 0.2441 | 73.0 | 0.2441 | 0.0 | 0.0 | 64.0 | 0.0 | 0.0 | 73.0 | 73.0 | 73.0 | 1.0 | 1.0 | 0.0 | 0.0 | 78.0 | 0.0 | 0.0 | 0.0 | 0.0 | 83.0 | 0.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 33.0 | 33 | 11.0604 | 0.0058 | 4771.0883 | 3307.0664 | 73.0 | 299.0 | 0.2441 | 73.0 | 0.2441 | 0.0 | 0.0 | 64.0 | 0.0 | 0.0 | 73.0 | 73.0 | 73.0 | 1.0 | 1.0 | 0.0 | 0.0 | 78.0 | 0.0 | 0.0 | 0.0 | 0.0 | 83.0 | 0.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1755587787
|
IvanJAjebu
| 2025-08-19T07:17:54Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thorny slender capybara",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T07:17:30Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thorny slender capybara
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
donoway/ARC-Easy_Llama-3.2-1B-7kenrtho
|
donoway
| 2025-08-19T07:10:33Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"base_model:meta-llama/Llama-3.2-1B",
"base_model:finetune:meta-llama/Llama-3.2-1B",
"license:llama3.2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-19T06:58:11Z |
---
library_name: transformers
license: llama3.2
base_model: meta-llama/Llama-3.2-1B
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: ARC-Easy_Llama-3.2-1B-7kenrtho
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ARC-Easy_Llama-3.2-1B-7kenrtho
This model is a fine-tuned version of [meta-llama/Llama-3.2-1B](https://huggingface.co/meta-llama/Llama-3.2-1B) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2266
- Model Preparation Time: 0.0063
- Mdl: 1831.0526
- Accumulated Loss: 1269.1889
- Correct Preds: 351.0
- Total Preds: 570.0
- Accuracy: 0.6158
- Correct Gen Preds: 330.0
- Gen Accuracy: 0.5789
- Correct Gen Preds 32: 121.0
- Correct Preds 32: 136.0
- Total Labels 32: 158.0
- Accuracy 32: 0.8608
- Gen Accuracy 32: 0.7658
- Correct Gen Preds 33: 94.0
- Correct Preds 33: 95.0
- Total Labels 33: 152.0
- Accuracy 33: 0.625
- Gen Accuracy 33: 0.6184
- Correct Gen Preds 34: 78.0
- Correct Preds 34: 82.0
- Total Labels 34: 142.0
- Accuracy 34: 0.5775
- Gen Accuracy 34: 0.5493
- Correct Gen Preds 35: 37.0
- Correct Preds 35: 38.0
- Total Labels 35: 118.0
- Accuracy 35: 0.3220
- Gen Accuracy 35: 0.3136
- Correct Gen Preds 36: 0.0
- Correct Preds 36: 0.0
- Total Labels 36: 0.0
- Accuracy 36: 0.0
- Gen Accuracy 36: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 112
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.01
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Model Preparation Time | Mdl | Accumulated Loss | Correct Preds | Total Preds | Accuracy | Correct Gen Preds | Gen Accuracy | Correct Gen Preds 32 | Correct Preds 32 | Total Labels 32 | Accuracy 32 | Gen Accuracy 32 | Correct Gen Preds 33 | Correct Preds 33 | Total Labels 33 | Accuracy 33 | Gen Accuracy 33 | Correct Gen Preds 34 | Correct Preds 34 | Total Labels 34 | Accuracy 34 | Gen Accuracy 34 | Correct Gen Preds 35 | Correct Preds 35 | Total Labels 35 | Accuracy 35 | Gen Accuracy 35 | Correct Gen Preds 36 | Correct Preds 36 | Total Labels 36 | Accuracy 36 | Gen Accuracy 36 |
|:-------------:|:-----:|:----:|:---------------:|:----------------------:|:---------:|:----------------:|:-------------:|:-----------:|:--------:|:-----------------:|:------------:|:--------------------:|:----------------:|:---------------:|:-----------:|:---------------:|:--------------------:|:----------------:|:---------------:|:-----------:|:---------------:|:--------------------:|:----------------:|:---------------:|:-----------:|:---------------:|:--------------------:|:----------------:|:---------------:|:-----------:|:---------------:|:--------------------:|:----------------:|:---------------:|:-----------:|:---------------:|
| No log | 0 | 0 | 1.5354 | 0.0063 | 1262.6022 | 875.1692 | 172.0 | 570.0 | 0.3018 | 170.0 | 0.2982 | 154.0 | 154.0 | 158.0 | 0.9747 | 0.9747 | 0.0 | 0.0 | 152.0 | 0.0 | 0.0 | 15.0 | 17.0 | 142.0 | 0.1197 | 0.1056 | 1.0 | 1.0 | 118.0 | 0.0085 | 0.0085 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.3992 | 1.0 | 1 | 1.5354 | 0.0063 | 1262.6022 | 875.1692 | 172.0 | 570.0 | 0.3018 | 170.0 | 0.2982 | 154.0 | 154.0 | 158.0 | 0.9747 | 0.9747 | 0.0 | 0.0 | 152.0 | 0.0 | 0.0 | 15.0 | 17.0 | 142.0 | 0.1197 | 0.1056 | 1.0 | 1.0 | 118.0 | 0.0085 | 0.0085 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.3992 | 2.0 | 2 | 2.7006 | 0.0063 | 2220.8218 | 1539.3563 | 202.0 | 570.0 | 0.3544 | 202.0 | 0.3544 | 0.0 | 0.0 | 158.0 | 0.0 | 0.0 | 62.0 | 62.0 | 152.0 | 0.4079 | 0.4079 | 140.0 | 140.0 | 142.0 | 0.9859 | 0.9859 | 0.0 | 0.0 | 118.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.9024 | 3.0 | 3 | 1.3172 | 0.0063 | 1083.1641 | 750.7922 | 190.0 | 570.0 | 0.3333 | 190.0 | 0.3333 | 9.0 | 9.0 | 158.0 | 0.0570 | 0.0570 | 150.0 | 150.0 | 152.0 | 0.9868 | 0.9868 | 19.0 | 19.0 | 142.0 | 0.1338 | 0.1338 | 12.0 | 12.0 | 118.0 | 0.1017 | 0.1017 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.6103 | 4.0 | 4 | 1.4635 | 0.0063 | 1203.4512 | 834.1688 | 338.0 | 570.0 | 0.5930 | 336.0 | 0.5895 | 131.0 | 133.0 | 158.0 | 0.8418 | 0.8291 | 91.0 | 91.0 | 152.0 | 0.5987 | 0.5987 | 76.0 | 76.0 | 142.0 | 0.5352 | 0.5352 | 38.0 | 38.0 | 118.0 | 0.3220 | 0.3220 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0428 | 5.0 | 5 | 2.2266 | 0.0063 | 1831.0526 | 1269.1889 | 351.0 | 570.0 | 0.6158 | 330.0 | 0.5789 | 121.0 | 136.0 | 158.0 | 0.8608 | 0.7658 | 94.0 | 95.0 | 152.0 | 0.625 | 0.6184 | 78.0 | 82.0 | 142.0 | 0.5775 | 0.5493 | 37.0 | 38.0 | 118.0 | 0.3220 | 0.3136 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0003 | 6.0 | 6 | 2.6947 | 0.0063 | 2215.9451 | 1535.9761 | 347.0 | 570.0 | 0.6088 | 306.0 | 0.5368 | 110.0 | 133.0 | 158.0 | 0.8418 | 0.6962 | 88.0 | 92.0 | 152.0 | 0.6053 | 0.5789 | 71.0 | 83.0 | 142.0 | 0.5845 | 0.5 | 37.0 | 39.0 | 118.0 | 0.3305 | 0.3136 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 7.0 | 7 | 2.8748 | 0.0063 | 2364.0644 | 1638.6446 | 343.0 | 570.0 | 0.6018 | 278.0 | 0.4877 | 95.0 | 130.0 | 158.0 | 0.8228 | 0.6013 | 78.0 | 87.0 | 152.0 | 0.5724 | 0.5132 | 67.0 | 84.0 | 142.0 | 0.5915 | 0.4718 | 38.0 | 42.0 | 118.0 | 0.3559 | 0.3220 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 8.0 | 8 | 2.9759 | 0.0063 | 2447.1750 | 1696.2525 | 336.0 | 570.0 | 0.5895 | 259.0 | 0.4544 | 87.0 | 128.0 | 158.0 | 0.8101 | 0.5506 | 72.0 | 84.0 | 152.0 | 0.5526 | 0.4737 | 61.0 | 80.0 | 142.0 | 0.5634 | 0.4296 | 39.0 | 44.0 | 118.0 | 0.3729 | 0.3305 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 9.0 | 9 | 3.0029 | 0.0063 | 2469.3675 | 1711.6351 | 331.0 | 570.0 | 0.5807 | 236.0 | 0.4140 | 78.0 | 125.0 | 158.0 | 0.7911 | 0.4937 | 64.0 | 81.0 | 152.0 | 0.5329 | 0.4211 | 57.0 | 81.0 | 142.0 | 0.5704 | 0.4014 | 37.0 | 44.0 | 118.0 | 0.3729 | 0.3136 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 10.0 | 10 | 3.0291 | 0.0063 | 2490.9245 | 1726.5773 | 320.0 | 570.0 | 0.5614 | 221.0 | 0.3877 | 74.0 | 122.0 | 158.0 | 0.7722 | 0.4684 | 59.0 | 77.0 | 152.0 | 0.5066 | 0.3882 | 56.0 | 79.0 | 142.0 | 0.5563 | 0.3944 | 32.0 | 42.0 | 118.0 | 0.3559 | 0.2712 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 11.0 | 11 | 3.0620 | 0.0063 | 2518.0218 | 1745.3597 | 318.0 | 570.0 | 0.5579 | 213.0 | 0.3737 | 70.0 | 122.0 | 158.0 | 0.7722 | 0.4430 | 57.0 | 76.0 | 152.0 | 0.5 | 0.375 | 57.0 | 79.0 | 142.0 | 0.5563 | 0.4014 | 29.0 | 41.0 | 118.0 | 0.3475 | 0.2458 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 12.0 | 12 | 3.1331 | 0.0063 | 2576.4547 | 1785.8623 | 314.0 | 570.0 | 0.5509 | 208.0 | 0.3649 | 68.0 | 122.0 | 158.0 | 0.7722 | 0.4304 | 55.0 | 75.0 | 152.0 | 0.4934 | 0.3618 | 57.0 | 80.0 | 142.0 | 0.5634 | 0.4014 | 28.0 | 37.0 | 118.0 | 0.3136 | 0.2373 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 13.0 | 13 | 3.1756 | 0.0063 | 2611.4490 | 1810.1185 | 317.0 | 570.0 | 0.5561 | 205.0 | 0.3596 | 67.0 | 121.0 | 158.0 | 0.7658 | 0.4241 | 53.0 | 74.0 | 152.0 | 0.4868 | 0.3487 | 56.0 | 81.0 | 142.0 | 0.5704 | 0.3944 | 29.0 | 41.0 | 118.0 | 0.3475 | 0.2458 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 14.0 | 14 | 3.1954 | 0.0063 | 2627.6804 | 1821.3693 | 313.0 | 570.0 | 0.5491 | 202.0 | 0.3544 | 67.0 | 121.0 | 158.0 | 0.7658 | 0.4241 | 53.0 | 73.0 | 152.0 | 0.4803 | 0.3487 | 55.0 | 79.0 | 142.0 | 0.5563 | 0.3873 | 27.0 | 40.0 | 118.0 | 0.3390 | 0.2288 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 15.0 | 15 | 3.2397 | 0.0063 | 2664.1020 | 1846.6148 | 314.0 | 570.0 | 0.5509 | 200.0 | 0.3509 | 64.0 | 121.0 | 158.0 | 0.7658 | 0.4051 | 53.0 | 73.0 | 152.0 | 0.4803 | 0.3487 | 56.0 | 81.0 | 142.0 | 0.5704 | 0.3944 | 27.0 | 39.0 | 118.0 | 0.3305 | 0.2288 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 16.0 | 16 | 3.2887 | 0.0063 | 2704.4351 | 1874.5716 | 314.0 | 570.0 | 0.5509 | 198.0 | 0.3474 | 64.0 | 121.0 | 158.0 | 0.7658 | 0.4051 | 53.0 | 71.0 | 152.0 | 0.4671 | 0.3487 | 55.0 | 82.0 | 142.0 | 0.5775 | 0.3873 | 26.0 | 40.0 | 118.0 | 0.3390 | 0.2203 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 17.0 | 17 | 3.3231 | 0.0063 | 2732.6981 | 1894.1620 | 311.0 | 570.0 | 0.5456 | 196.0 | 0.3439 | 64.0 | 121.0 | 158.0 | 0.7658 | 0.4051 | 51.0 | 70.0 | 152.0 | 0.4605 | 0.3355 | 54.0 | 80.0 | 142.0 | 0.5634 | 0.3803 | 27.0 | 40.0 | 118.0 | 0.3390 | 0.2288 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 18.0 | 18 | 3.3377 | 0.0063 | 2744.7374 | 1902.5070 | 310.0 | 570.0 | 0.5439 | 196.0 | 0.3439 | 64.0 | 121.0 | 158.0 | 0.7658 | 0.4051 | 52.0 | 69.0 | 152.0 | 0.4539 | 0.3421 | 53.0 | 80.0 | 142.0 | 0.5634 | 0.3732 | 27.0 | 40.0 | 118.0 | 0.3390 | 0.2288 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 19.0 | 19 | 3.3610 | 0.0063 | 2763.8330 | 1915.7430 | 309.0 | 570.0 | 0.5421 | 197.0 | 0.3456 | 64.0 | 122.0 | 158.0 | 0.7722 | 0.4051 | 51.0 | 69.0 | 152.0 | 0.4539 | 0.3355 | 56.0 | 79.0 | 142.0 | 0.5563 | 0.3944 | 26.0 | 39.0 | 118.0 | 0.3305 | 0.2203 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 20.0 | 20 | 3.3848 | 0.0063 | 2783.4671 | 1929.3524 | 311.0 | 570.0 | 0.5456 | 198.0 | 0.3474 | 66.0 | 123.0 | 158.0 | 0.7785 | 0.4177 | 51.0 | 68.0 | 152.0 | 0.4474 | 0.3355 | 54.0 | 80.0 | 142.0 | 0.5634 | 0.3803 | 27.0 | 40.0 | 118.0 | 0.3390 | 0.2288 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 21.0 | 21 | 3.3561 | 0.0063 | 2759.8295 | 1912.9680 | 312.0 | 570.0 | 0.5474 | 200.0 | 0.3509 | 67.0 | 123.0 | 158.0 | 0.7785 | 0.4241 | 53.0 | 68.0 | 152.0 | 0.4474 | 0.3487 | 53.0 | 81.0 | 142.0 | 0.5704 | 0.3732 | 27.0 | 40.0 | 118.0 | 0.3390 | 0.2288 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 22.0 | 22 | 3.4079 | 0.0063 | 2802.4235 | 1942.4919 | 311.0 | 570.0 | 0.5456 | 197.0 | 0.3456 | 68.0 | 123.0 | 158.0 | 0.7785 | 0.4304 | 49.0 | 70.0 | 152.0 | 0.4605 | 0.3224 | 53.0 | 79.0 | 142.0 | 0.5563 | 0.3732 | 27.0 | 39.0 | 118.0 | 0.3305 | 0.2288 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 23.0 | 23 | 3.4059 | 0.0063 | 2800.7869 | 1941.3575 | 313.0 | 570.0 | 0.5491 | 198.0 | 0.3474 | 67.0 | 122.0 | 158.0 | 0.7722 | 0.4241 | 51.0 | 70.0 | 152.0 | 0.4605 | 0.3355 | 53.0 | 81.0 | 142.0 | 0.5704 | 0.3732 | 27.0 | 40.0 | 118.0 | 0.3390 | 0.2288 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 24.0 | 24 | 3.4307 | 0.0063 | 2821.1525 | 1955.4739 | 312.0 | 570.0 | 0.5474 | 198.0 | 0.3474 | 67.0 | 122.0 | 158.0 | 0.7722 | 0.4241 | 50.0 | 69.0 | 152.0 | 0.4539 | 0.3289 | 53.0 | 80.0 | 142.0 | 0.5634 | 0.3732 | 28.0 | 41.0 | 118.0 | 0.3475 | 0.2373 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 25.0 | 25 | 3.4314 | 0.0063 | 2821.7596 | 1955.8947 | 312.0 | 570.0 | 0.5474 | 199.0 | 0.3491 | 67.0 | 122.0 | 158.0 | 0.7722 | 0.4241 | 51.0 | 69.0 | 152.0 | 0.4539 | 0.3355 | 54.0 | 80.0 | 142.0 | 0.5634 | 0.3803 | 27.0 | 41.0 | 118.0 | 0.3475 | 0.2288 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 26.0 | 26 | 3.4420 | 0.0063 | 2830.4716 | 1961.9334 | 313.0 | 570.0 | 0.5491 | 204.0 | 0.3579 | 69.0 | 122.0 | 158.0 | 0.7722 | 0.4367 | 51.0 | 70.0 | 152.0 | 0.4605 | 0.3355 | 55.0 | 80.0 | 142.0 | 0.5634 | 0.3873 | 29.0 | 41.0 | 118.0 | 0.3475 | 0.2458 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 27.0 | 27 | 3.4460 | 0.0063 | 2833.7589 | 1964.2120 | 312.0 | 570.0 | 0.5474 | 197.0 | 0.3456 | 66.0 | 122.0 | 158.0 | 0.7722 | 0.4177 | 51.0 | 68.0 | 152.0 | 0.4474 | 0.3355 | 53.0 | 81.0 | 142.0 | 0.5704 | 0.3732 | 27.0 | 41.0 | 118.0 | 0.3475 | 0.2288 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 28.0 | 28 | 3.4630 | 0.0063 | 2847.7515 | 1973.9109 | 313.0 | 570.0 | 0.5491 | 198.0 | 0.3474 | 68.0 | 123.0 | 158.0 | 0.7785 | 0.4304 | 49.0 | 69.0 | 152.0 | 0.4539 | 0.3224 | 52.0 | 80.0 | 142.0 | 0.5634 | 0.3662 | 29.0 | 41.0 | 118.0 | 0.3475 | 0.2458 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 29.0 | 29 | 3.4611 | 0.0063 | 2846.1980 | 1972.8341 | 312.0 | 570.0 | 0.5474 | 199.0 | 0.3491 | 68.0 | 122.0 | 158.0 | 0.7722 | 0.4304 | 50.0 | 69.0 | 152.0 | 0.4539 | 0.3289 | 52.0 | 80.0 | 142.0 | 0.5634 | 0.3662 | 29.0 | 41.0 | 118.0 | 0.3475 | 0.2458 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 30.0 | 30 | 3.4590 | 0.0063 | 2844.4834 | 1971.6457 | 310.0 | 570.0 | 0.5439 | 194.0 | 0.3404 | 68.0 | 122.0 | 158.0 | 0.7722 | 0.4304 | 49.0 | 68.0 | 152.0 | 0.4474 | 0.3224 | 51.0 | 80.0 | 142.0 | 0.5634 | 0.3592 | 26.0 | 40.0 | 118.0 | 0.3390 | 0.2203 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 31.0 | 31 | 3.4672 | 0.0063 | 2851.2328 | 1976.3240 | 310.0 | 570.0 | 0.5439 | 195.0 | 0.3421 | 67.0 | 123.0 | 158.0 | 0.7785 | 0.4241 | 51.0 | 68.0 | 152.0 | 0.4474 | 0.3355 | 51.0 | 81.0 | 142.0 | 0.5704 | 0.3592 | 26.0 | 38.0 | 118.0 | 0.3220 | 0.2203 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 32.0 | 32 | 3.4768 | 0.0063 | 2859.1086 | 1981.7830 | 309.0 | 570.0 | 0.5421 | 197.0 | 0.3456 | 67.0 | 121.0 | 158.0 | 0.7658 | 0.4241 | 50.0 | 68.0 | 152.0 | 0.4474 | 0.3289 | 54.0 | 80.0 | 142.0 | 0.5634 | 0.3803 | 26.0 | 40.0 | 118.0 | 0.3390 | 0.2203 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 33.0 | 33 | 3.4676 | 0.0063 | 2851.5542 | 1976.5468 | 312.0 | 570.0 | 0.5474 | 197.0 | 0.3456 | 67.0 | 122.0 | 158.0 | 0.7722 | 0.4241 | 49.0 | 68.0 | 152.0 | 0.4474 | 0.3224 | 53.0 | 81.0 | 142.0 | 0.5704 | 0.3732 | 28.0 | 41.0 | 118.0 | 0.3475 | 0.2373 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 34.0 | 34 | 3.4550 | 0.0063 | 2841.1333 | 1969.3235 | 310.0 | 570.0 | 0.5439 | 197.0 | 0.3456 | 69.0 | 122.0 | 158.0 | 0.7722 | 0.4367 | 49.0 | 68.0 | 152.0 | 0.4474 | 0.3224 | 53.0 | 80.0 | 142.0 | 0.5634 | 0.3732 | 26.0 | 40.0 | 118.0 | 0.3390 | 0.2203 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 35.0 | 35 | 3.4710 | 0.0063 | 2854.3484 | 1978.4835 | 311.0 | 570.0 | 0.5456 | 199.0 | 0.3491 | 69.0 | 123.0 | 158.0 | 0.7785 | 0.4367 | 51.0 | 69.0 | 152.0 | 0.4539 | 0.3355 | 53.0 | 80.0 | 142.0 | 0.5634 | 0.3732 | 26.0 | 39.0 | 118.0 | 0.3305 | 0.2203 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
insanesaga/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-nocturnal_clawed_bison
|
insanesaga
| 2025-08-19T07:09:35Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am nocturnal clawed bison",
"unsloth",
"trl",
"genrl-swarm",
"I am nocturnal_clawed_bison",
"conversational",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-05-27T13:39:41Z |
---
base_model: Gensyn/Qwen2.5-1.5B-Instruct
library_name: transformers
model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-nocturnal_clawed_bison
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am nocturnal clawed bison
- unsloth
- trl
- genrl-swarm
- I am nocturnal_clawed_bison
licence: license
---
# Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-nocturnal_clawed_bison
This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="insanesaga/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-nocturnal_clawed_bison", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.5.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
chansung/Gemma2-2B-CCRL-CUR-BASIC-ONLY-1E
|
chansung
| 2025-08-19T07:07:20Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma2",
"text-generation",
"generated_from_trainer",
"open-r1",
"trl",
"grpo",
"conversational",
"dataset:chansung/verifiable-coding-problems-python-v2",
"arxiv:2402.03300",
"base_model:google/gemma-2-2b-it",
"base_model:finetune:google/gemma-2-2b-it",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-19T01:31:09Z |
---
base_model: google/gemma-2-2b-it
datasets: chansung/verifiable-coding-problems-python-v2
library_name: transformers
model_name: Gemma2-2B-CCRL-CUR-BASIC-ONLY-1E
tags:
- generated_from_trainer
- open-r1
- trl
- grpo
licence: license
---
# Model Card for Gemma2-2B-CCRL-CUR-BASIC-ONLY-1E
This model is a fine-tuned version of [google/gemma-2-2b-it](https://huggingface.co/google/gemma-2-2b-it) on the [chansung/verifiable-coding-problems-python-v2](https://huggingface.co/datasets/chansung/verifiable-coding-problems-python-v2) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="chansung/Gemma2-2B-CCRL-CUR-BASIC-ONLY-1E", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/chansung18/huggingface/runs/g89zrlrp)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.18.0.dev0
- Transformers: 4.52.0.dev0
- Pytorch: 2.6.0
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
yaelahnal/blockassist-bc-mute_clawed_crab_1755587166
|
yaelahnal
| 2025-08-19T07:07:16Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"mute clawed crab",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T07:06:58Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- mute clawed crab
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Soham711/blenderbot-400M-friendly-chatmodel
|
Soham711
| 2025-08-19T07:05:17Z | 0 | 1 | null |
[
"safetensors",
"blenderbot",
"text2text-generation",
"conversational",
"en",
"base_model:facebook/blenderbot-400M-distill",
"base_model:finetune:facebook/blenderbot-400M-distill",
"license:mit",
"region:us"
] | null | 2025-08-19T06:48:35Z |
---
license: mit
language:
- en
base_model:
- facebook/blenderbot-400M-distill
tags:
- text2text-generation
- conversational
---
|
Setayeshk/brisc_yolo
|
Setayeshk
| 2025-08-19T07:04:59Z | 0 | 0 | null |
[
"tensorboard",
"license:cc-by-nc-nd-4.0",
"region:us"
] | null | 2025-08-18T21:53:10Z |
---
license: cc-by-nc-nd-4.0
---
|
hakimjustbao/blockassist-bc-raging_subtle_wasp_1755585395
|
hakimjustbao
| 2025-08-19T07:04:19Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"raging subtle wasp",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T07:04:16Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- raging subtle wasp
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
aiface/ModernBERT-large_nli
|
aiface
| 2025-08-19T07:02:21Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"modernbert",
"text-classification",
"generated_from_trainer",
"base_model:answerdotai/ModernBERT-large",
"base_model:finetune:answerdotai/ModernBERT-large",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-19T03:22:37Z |
---
library_name: transformers
license: apache-2.0
base_model: answerdotai/ModernBERT-large
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: ModernBERT-large_nli
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ModernBERT-large_nli
This model is a fine-tuned version of [answerdotai/ModernBERT-large](https://huggingface.co/answerdotai/ModernBERT-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6038
- Accuracy: 0.5787
- Precision Macro: 0.5794
- Recall Macro: 0.5790
- F1 Macro: 0.5792
- F1 Weighted: 0.5788
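No usage snippet is provided on this card; a hedged sketch (assuming the checkpoint is published as `aiface/ModernBERT-large_nli` with a standard sequence-classification head — the NLI label names are not documented) could look like:
```python
from transformers import pipeline

# Hypothetical usage; the premise/hypothesis pairing assumes an NLI-style head,
# and the returned label names depend on the unspecified training dataset.
clf = pipeline("text-classification", model="aiface/ModernBERT-large_nli")
print(clf({"text": "A man is playing a guitar.",
           "text_pair": "A person is making music."}))
```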
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision Macro | Recall Macro | F1 Macro | F1 Weighted |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------------:|:------------:|:--------:|:-----------:|
| 2.1283 | 1.0 | 143 | 1.0136 | 0.4807 | 0.4674 | 0.4835 | 0.4509 | 0.4492 |
| 1.8848 | 2.0 | 286 | 0.9818 | 0.5202 | 0.5745 | 0.5219 | 0.5042 | 0.5038 |
| 1.7416 | 3.0 | 429 | 1.1233 | 0.3220 | 0.2102 | 0.3259 | 0.2190 | 0.2174 |
| 2.2168 | 4.0 | 572 | 1.1135 | 0.3277 | 0.1092 | 0.3333 | 0.1646 | 0.1618 |
| 2.2099 | 5.0 | 715 | 1.1089 | 0.3277 | 0.1092 | 0.3333 | 0.1646 | 0.1618 |
| 2.2191 | 6.0 | 858 | 1.1231 | 0.3282 | 0.4426 | 0.3338 | 0.1655 | 0.1627 |
| 2.2027 | 7.0 | 1001 | 1.0931 | 0.3774 | 0.2508 | 0.3801 | 0.3016 | 0.2993 |
| 2.1846 | 8.0 | 1144 | 1.0723 | 0.4013 | 0.3861 | 0.3995 | 0.3692 | 0.3705 |
| 2.1232 | 9.0 | 1287 | 1.0461 | 0.4244 | 0.4225 | 0.4248 | 0.4203 | 0.4202 |
| 2.0586 | 10.0 | 1430 | 1.0345 | 0.4510 | 0.4495 | 0.4494 | 0.4210 | 0.4220 |
| 2.0578 | 11.0 | 1573 | 1.0390 | 0.4523 | 0.4797 | 0.4511 | 0.4522 | 0.4525 |
| 2.0289 | 12.0 | 1716 | 1.0626 | 0.4665 | 0.5296 | 0.4668 | 0.4391 | 0.4389 |
| 1.5688 | 13.0 | 1859 | 0.8686 | 0.6084 | 0.6082 | 0.6089 | 0.6064 | 0.6061 |
| 1.2262 | 14.0 | 2002 | 0.9452 | 0.5973 | 0.5972 | 0.5978 | 0.5961 | 0.5958 |
| 0.6694 | 15.0 | 2145 | 1.2849 | 0.5809 | 0.5809 | 0.5817 | 0.5802 | 0.5798 |
| 0.2152 | 16.0 | 2288 | 1.9241 | 0.5752 | 0.5760 | 0.5753 | 0.5755 | 0.5753 |
| 0.043 | 17.0 | 2431 | 2.3196 | 0.5672 | 0.5685 | 0.5673 | 0.5675 | 0.5672 |
| 0.0074 | 18.0 | 2574 | 2.5393 | 0.5734 | 0.5747 | 0.5736 | 0.5740 | 0.5737 |
| 0.0015 | 19.0 | 2717 | 2.5970 | 0.5769 | 0.5780 | 0.5772 | 0.5776 | 0.5772 |
| 0.002 | 20.0 | 2860 | 2.6038 | 0.5787 | 0.5794 | 0.5790 | 0.5792 | 0.5788 |
### Framework versions
- Transformers 4.55.0
- Pytorch 2.7.0+cu126
- Datasets 4.0.0
- Tokenizers 0.21.4
|
mradermacher/Mini-AGI-4B-i1-GGUF
|
mradermacher
| 2025-08-19T07:00:16Z | 0 | 1 |
transformers
|
[
"transformers",
"gguf",
"en",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-08-19T03:54:34Z |
---
base_model: Guilherme34/Mini-AGI-4B
language:
- en
library_name: transformers
mradermacher:
  readme_rev: 1
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K_M Q4_0 IQ3_XS Q4_1 IQ3_S -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
weighted/imatrix quants of https://huggingface.co/Guilherme34/Mini-AGI-4B
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Mini-AGI-4B-i1-GGUF).***
static quants are available at https://huggingface.co/mradermacher/Mini-AGI-4B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
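Beyond those general pointers, a minimal Python sketch for this repo (assuming `llama-cpp-python` and `huggingface_hub` are installed; the filename matches the Q4_K_M row in the table below) is:
```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama  # pip install llama-cpp-python

# Download one quant ("fast, recommended" per the table below) and run a prompt.
path = hf_hub_download(
    repo_id="mradermacher/Mini-AGI-4B-i1-GGUF",
    filename="Mini-AGI-4B.i1-Q4_K_M.gguf",
)
llm = Llama(model_path=path, n_ctx=4096)
out = llm("Q: What does an imatrix quant change?\nA:", max_tokens=64)
print(out["choices"][0]["text"])
```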
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Mini-AGI-4B-i1-GGUF/resolve/main/Mini-AGI-4B.imatrix.gguf) | imatrix | 0.1 | imatrix file (for creating your own quants) |
| [GGUF](https://huggingface.co/mradermacher/Mini-AGI-4B-i1-GGUF/resolve/main/Mini-AGI-4B.i1-IQ1_S.gguf) | i1-IQ1_S | 1.2 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Mini-AGI-4B-i1-GGUF/resolve/main/Mini-AGI-4B.i1-IQ1_M.gguf) | i1-IQ1_M | 1.2 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Mini-AGI-4B-i1-GGUF/resolve/main/Mini-AGI-4B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 1.3 | |
| [GGUF](https://huggingface.co/mradermacher/Mini-AGI-4B-i1-GGUF/resolve/main/Mini-AGI-4B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 1.5 | |
| [GGUF](https://huggingface.co/mradermacher/Mini-AGI-4B-i1-GGUF/resolve/main/Mini-AGI-4B.i1-IQ2_S.gguf) | i1-IQ2_S | 1.5 | |
| [GGUF](https://huggingface.co/mradermacher/Mini-AGI-4B-i1-GGUF/resolve/main/Mini-AGI-4B.i1-IQ2_M.gguf) | i1-IQ2_M | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/Mini-AGI-4B-i1-GGUF/resolve/main/Mini-AGI-4B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 1.7 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Mini-AGI-4B-i1-GGUF/resolve/main/Mini-AGI-4B.i1-Q2_K.gguf) | i1-Q2_K | 1.8 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Mini-AGI-4B-i1-GGUF/resolve/main/Mini-AGI-4B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 1.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Mini-AGI-4B-i1-GGUF/resolve/main/Mini-AGI-4B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/Mini-AGI-4B-i1-GGUF/resolve/main/Mini-AGI-4B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 2.0 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Mini-AGI-4B-i1-GGUF/resolve/main/Mini-AGI-4B.i1-IQ3_S.gguf) | i1-IQ3_S | 2.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Mini-AGI-4B-i1-GGUF/resolve/main/Mini-AGI-4B.i1-IQ3_M.gguf) | i1-IQ3_M | 2.1 | |
| [GGUF](https://huggingface.co/mradermacher/Mini-AGI-4B-i1-GGUF/resolve/main/Mini-AGI-4B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 2.2 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Mini-AGI-4B-i1-GGUF/resolve/main/Mini-AGI-4B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 2.3 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Mini-AGI-4B-i1-GGUF/resolve/main/Mini-AGI-4B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/Mini-AGI-4B-i1-GGUF/resolve/main/Mini-AGI-4B.i1-Q4_0.gguf) | i1-Q4_0 | 2.5 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Mini-AGI-4B-i1-GGUF/resolve/main/Mini-AGI-4B.i1-IQ4_NL.gguf) | i1-IQ4_NL | 2.5 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Mini-AGI-4B-i1-GGUF/resolve/main/Mini-AGI-4B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 2.5 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Mini-AGI-4B-i1-GGUF/resolve/main/Mini-AGI-4B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 2.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mini-AGI-4B-i1-GGUF/resolve/main/Mini-AGI-4B.i1-Q4_1.gguf) | i1-Q4_1 | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/Mini-AGI-4B-i1-GGUF/resolve/main/Mini-AGI-4B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Mini-AGI-4B-i1-GGUF/resolve/main/Mini-AGI-4B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Mini-AGI-4B-i1-GGUF/resolve/main/Mini-AGI-4B.i1-Q6_K.gguf) | i1-Q6_K | 3.4 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
jtekt-physical-ai/lerobot_actv2
|
jtekt-physical-ai
| 2025-08-19T06:59:15Z | 0 | 0 |
lerobot
|
[
"lerobot",
"safetensors",
"robotics",
"act",
"dataset:yurayuray/retainer_mizoguchi3",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] |
robotics
| 2025-08-19T05:57:00Z |
---
datasets: yurayuray/retainer_mizoguchi3
library_name: lerobot
license: apache-2.0
model_name: act
pipeline_tag: robotics
tags:
- lerobot
- robotics
- act
---
# Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high success rates.
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
---
## How to Get Started with the Model
For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is the short version on how to train and run inference/eval:
### Train from scratch
```bash
lerobot-train \
--dataset.repo_id=${HF_USER}/<dataset> \
--policy.type=act \
--output_dir=outputs/train/<desired_policy_repo_id> \
--job_name=lerobot_training \
--policy.device=cuda \
  --policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
--wandb.enable=true
```
_Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._
### Evaluate the policy/run inference
```bash
lerobot-record \
--robot.type=so100_follower \
--dataset.repo_id=<hf_user>/eval_<dataset> \
--policy.path=<hf_user>/<desired_policy_repo_id> \
--episodes=10
```
Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.
---
## Model Details
- **License:** apache-2.0
|
donoway/ARC-Easy_Llama-3.2-1B-xl28q3hn
|
donoway
| 2025-08-19T06:57:36Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"base_model:meta-llama/Llama-3.2-1B",
"base_model:finetune:meta-llama/Llama-3.2-1B",
"license:llama3.2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-19T06:46:08Z |
---
library_name: transformers
license: llama3.2
base_model: meta-llama/Llama-3.2-1B
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: ARC-Easy_Llama-3.2-1B-xl28q3hn
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ARC-Easy_Llama-3.2-1B-xl28q3hn
This model is a fine-tuned version of [meta-llama/Llama-3.2-1B](https://huggingface.co/meta-llama/Llama-3.2-1B) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2555
- Model Preparation Time: 0.006
- Mdl: 1032.4540
- Accumulated Loss: 715.6426
- Correct Preds: 291.0
- Total Preds: 570.0
- Accuracy: 0.5105
- Correct Gen Preds: 291.0
- Gen Accuracy: 0.5105
- Correct Gen Preds 32: 98.0
- Correct Preds 32: 98.0
- Total Labels 32: 158.0
- Accuracy 32: 0.6203
- Gen Accuracy 32: 0.6203
- Correct Gen Preds 33: 130.0
- Correct Preds 33: 130.0
- Total Labels 33: 152.0
- Accuracy 33: 0.8553
- Gen Accuracy 33: 0.8553
- Correct Gen Preds 34: 40.0
- Correct Preds 34: 40.0
- Total Labels 34: 142.0
- Accuracy 34: 0.2817
- Gen Accuracy 34: 0.2817
- Correct Gen Preds 35: 23.0
- Correct Preds 35: 23.0
- Total Labels 35: 118.0
- Accuracy 35: 0.1949
- Gen Accuracy 35: 0.1949
- Correct Gen Preds 36: 0.0
- Correct Preds 36: 0.0
- Total Labels 36: 0.0
- Accuracy 36: 0.0
- Gen Accuracy 36: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 112
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.01
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Model Preparation Time | Mdl | Accumulated Loss | Correct Preds | Total Preds | Accuracy | Correct Gen Preds | Gen Accuracy | Correct Gen Preds 32 | Correct Preds 32 | Total Labels 32 | Accuracy 32 | Gen Accuracy 32 | Correct Gen Preds 33 | Correct Preds 33 | Total Labels 33 | Accuracy 33 | Gen Accuracy 33 | Correct Gen Preds 34 | Correct Preds 34 | Total Labels 34 | Accuracy 34 | Gen Accuracy 34 | Correct Gen Preds 35 | Correct Preds 35 | Total Labels 35 | Accuracy 35 | Gen Accuracy 35 | Correct Gen Preds 36 | Correct Preds 36 | Total Labels 36 | Accuracy 36 | Gen Accuracy 36 |
|:-------------:|:-----:|:----:|:---------------:|:----------------------:|:---------:|:----------------:|:-------------:|:-----------:|:--------:|:-----------------:|:------------:|:--------------------:|:----------------:|:---------------:|:-----------:|:---------------:|:--------------------:|:----------------:|:---------------:|:-----------:|:---------------:|:--------------------:|:----------------:|:---------------:|:-----------:|:---------------:|:--------------------:|:----------------:|:---------------:|:-----------:|:---------------:|:--------------------:|:----------------:|:---------------:|:-----------:|:---------------:|
| No log | 0 | 0 | 1.5354 | 0.006 | 1262.6022 | 875.1692 | 172.0 | 570.0 | 0.3018 | 170.0 | 0.2982 | 154.0 | 154.0 | 158.0 | 0.9747 | 0.9747 | 0.0 | 0.0 | 152.0 | 0.0 | 0.0 | 15.0 | 17.0 | 142.0 | 0.1197 | 0.1056 | 1.0 | 1.0 | 118.0 | 0.0085 | 0.0085 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.3552 | 1.0 | 1 | 1.5354 | 0.006 | 1262.6022 | 875.1692 | 172.0 | 570.0 | 0.3018 | 170.0 | 0.2982 | 154.0 | 154.0 | 158.0 | 0.9747 | 0.9747 | 0.0 | 0.0 | 152.0 | 0.0 | 0.0 | 15.0 | 17.0 | 142.0 | 0.1197 | 0.1056 | 1.0 | 1.0 | 118.0 | 0.0085 | 0.0085 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.3552 | 2.0 | 2 | 2.4687 | 0.006 | 2030.1287 | 1407.1780 | 221.0 | 570.0 | 0.3877 | 221.0 | 0.3877 | 0.0 | 0.0 | 158.0 | 0.0 | 0.0 | 85.0 | 85.0 | 152.0 | 0.5592 | 0.5592 | 136.0 | 136.0 | 142.0 | 0.9577 | 0.9577 | 0.0 | 0.0 | 118.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.7603 | 3.0 | 3 | 1.2555 | 0.006 | 1032.4540 | 715.6426 | 291.0 | 570.0 | 0.5105 | 291.0 | 0.5105 | 98.0 | 98.0 | 158.0 | 0.6203 | 0.6203 | 130.0 | 130.0 | 152.0 | 0.8553 | 0.8553 | 40.0 | 40.0 | 142.0 | 0.2817 | 0.2817 | 23.0 | 23.0 | 118.0 | 0.1949 | 0.1949 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.4267 | 4.0 | 4 | 2.5733 | 0.006 | 2116.1258 | 1466.7867 | 261.0 | 570.0 | 0.4579 | 260.0 | 0.4561 | 151.0 | 152.0 | 158.0 | 0.9620 | 0.9557 | 39.0 | 39.0 | 152.0 | 0.2566 | 0.2566 | 42.0 | 42.0 | 142.0 | 0.2958 | 0.2958 | 28.0 | 28.0 | 118.0 | 0.2373 | 0.2373 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0491 | 5.0 | 5 | 3.1596 | 0.006 | 2598.2545 | 1800.9728 | 284.0 | 570.0 | 0.4982 | 284.0 | 0.4982 | 151.0 | 151.0 | 158.0 | 0.9557 | 0.9557 | 56.0 | 56.0 | 152.0 | 0.3684 | 0.3684 | 50.0 | 50.0 | 142.0 | 0.3521 | 0.3521 | 27.0 | 27.0 | 118.0 | 0.2288 | 0.2288 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0044 | 6.0 | 6 | 4.0391 | 0.006 | 3321.5305 | 2302.3095 | 262.0 | 570.0 | 0.4596 | 259.0 | 0.4544 | 151.0 | 152.0 | 158.0 | 0.9620 | 0.9557 | 41.0 | 41.0 | 152.0 | 0.2697 | 0.2697 | 44.0 | 45.0 | 142.0 | 0.3169 | 0.3099 | 23.0 | 24.0 | 118.0 | 0.2034 | 0.1949 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0001 | 7.0 | 7 | 4.4151 | 0.006 | 3630.7350 | 2516.6338 | 253.0 | 570.0 | 0.4439 | 239.0 | 0.4193 | 144.0 | 152.0 | 158.0 | 0.9620 | 0.9114 | 36.0 | 38.0 | 152.0 | 0.25 | 0.2368 | 38.0 | 41.0 | 142.0 | 0.2887 | 0.2676 | 21.0 | 22.0 | 118.0 | 0.1864 | 0.1780 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 8.0 | 8 | 4.5569 | 0.006 | 3747.3361 | 2597.4554 | 250.0 | 570.0 | 0.4386 | 223.0 | 0.3912 | 135.0 | 154.0 | 158.0 | 0.9747 | 0.8544 | 35.0 | 38.0 | 152.0 | 0.25 | 0.2303 | 35.0 | 39.0 | 142.0 | 0.2746 | 0.2465 | 18.0 | 19.0 | 118.0 | 0.1610 | 0.1525 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 9.0 | 9 | 4.6453 | 0.006 | 3819.9784 | 2647.8072 | 247.0 | 570.0 | 0.4333 | 204.0 | 0.3579 | 123.0 | 152.0 | 158.0 | 0.9620 | 0.7785 | 33.0 | 39.0 | 152.0 | 0.2566 | 0.2171 | 31.0 | 37.0 | 142.0 | 0.2606 | 0.2183 | 17.0 | 19.0 | 118.0 | 0.1610 | 0.1441 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0001 | 10.0 | 10 | 4.8047 | 0.006 | 3951.0414 | 2738.6532 | 242.0 | 570.0 | 0.4246 | 203.0 | 0.3561 | 123.0 | 152.0 | 158.0 | 0.9620 | 0.7785 | 35.0 | 39.0 | 152.0 | 0.2566 | 0.2303 | 30.0 | 33.0 | 142.0 | 0.2324 | 0.2113 | 15.0 | 18.0 | 118.0 | 0.1525 | 0.1271 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 11.0 | 11 | 5.0241 | 0.006 | 4131.5031 | 2863.7397 | 236.0 | 570.0 | 0.4140 | 201.0 | 0.3526 | 125.0 | 153.0 | 158.0 | 0.9684 | 0.7911 | 34.0 | 37.0 | 152.0 | 0.2434 | 0.2237 | 28.0 | 29.0 | 142.0 | 0.2042 | 0.1972 | 14.0 | 17.0 | 118.0 | 0.1441 | 0.1186 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 12.0 | 12 | 5.2229 | 0.006 | 4295.0154 | 2977.0778 | 235.0 | 570.0 | 0.4123 | 203.0 | 0.3561 | 129.0 | 154.0 | 158.0 | 0.9747 | 0.8165 | 32.0 | 36.0 | 152.0 | 0.2368 | 0.2105 | 28.0 | 29.0 | 142.0 | 0.2042 | 0.1972 | 14.0 | 16.0 | 118.0 | 0.1356 | 0.1186 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 13.0 | 13 | 5.3741 | 0.006 | 4419.3154 | 3063.2360 | 235.0 | 570.0 | 0.4123 | 202.0 | 0.3544 | 129.0 | 155.0 | 158.0 | 0.9810 | 0.8165 | 31.0 | 35.0 | 152.0 | 0.2303 | 0.2039 | 28.0 | 29.0 | 142.0 | 0.2042 | 0.1972 | 14.0 | 16.0 | 118.0 | 0.1356 | 0.1186 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 14.0 | 14 | 5.5052 | 0.006 | 4527.0926 | 3137.9415 | 235.0 | 570.0 | 0.4123 | 207.0 | 0.3632 | 135.0 | 156.0 | 158.0 | 0.9873 | 0.8544 | 31.0 | 35.0 | 152.0 | 0.2303 | 0.2039 | 27.0 | 28.0 | 142.0 | 0.1972 | 0.1901 | 14.0 | 16.0 | 118.0 | 0.1356 | 0.1186 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 15.0 | 15 | 5.5976 | 0.006 | 4603.0781 | 3190.6106 | 234.0 | 570.0 | 0.4105 | 207.0 | 0.3632 | 135.0 | 156.0 | 158.0 | 0.9873 | 0.8544 | 32.0 | 35.0 | 152.0 | 0.2303 | 0.2105 | 26.0 | 28.0 | 142.0 | 0.1972 | 0.1831 | 14.0 | 15.0 | 118.0 | 0.1271 | 0.1186 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 16.0 | 16 | 5.6853 | 0.006 | 4675.2022 | 3240.6032 | 228.0 | 570.0 | 0.4 | 206.0 | 0.3614 | 138.0 | 155.0 | 158.0 | 0.9810 | 0.8734 | 29.0 | 32.0 | 152.0 | 0.2105 | 0.1908 | 26.0 | 27.0 | 142.0 | 0.1901 | 0.1831 | 13.0 | 14.0 | 118.0 | 0.1186 | 0.1102 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 17.0 | 17 | 5.7800 | 0.006 | 4753.1165 | 3294.6093 | 228.0 | 570.0 | 0.4 | 207.0 | 0.3632 | 141.0 | 156.0 | 158.0 | 0.9873 | 0.8924 | 29.0 | 31.0 | 152.0 | 0.2039 | 0.1908 | 24.0 | 27.0 | 142.0 | 0.1901 | 0.1690 | 13.0 | 14.0 | 118.0 | 0.1186 | 0.1102 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 18.0 | 18 | 5.8437 | 0.006 | 4805.4763 | 3330.9024 | 227.0 | 570.0 | 0.3982 | 207.0 | 0.3632 | 141.0 | 156.0 | 158.0 | 0.9873 | 0.8924 | 29.0 | 30.0 | 152.0 | 0.1974 | 0.1908 | 24.0 | 27.0 | 142.0 | 0.1901 | 0.1690 | 13.0 | 14.0 | 118.0 | 0.1186 | 0.1102 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 19.0 | 19 | 5.9488 | 0.006 | 4891.9541 | 3390.8442 | 226.0 | 570.0 | 0.3965 | 206.0 | 0.3614 | 141.0 | 156.0 | 158.0 | 0.9873 | 0.8924 | 28.0 | 29.0 | 152.0 | 0.1908 | 0.1842 | 24.0 | 27.0 | 142.0 | 0.1901 | 0.1690 | 13.0 | 14.0 | 118.0 | 0.1186 | 0.1102 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 20.0 | 20 | 5.9804 | 0.006 | 4917.8580 | 3408.7994 | 226.0 | 570.0 | 0.3965 | 206.0 | 0.3614 | 141.0 | 156.0 | 158.0 | 0.9873 | 0.8924 | 28.0 | 29.0 | 152.0 | 0.1908 | 0.1842 | 24.0 | 27.0 | 142.0 | 0.1901 | 0.1690 | 13.0 | 14.0 | 118.0 | 0.1186 | 0.1102 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 21.0 | 21 | 6.0239 | 0.006 | 4953.6373 | 3433.5997 | 226.0 | 570.0 | 0.3965 | 206.0 | 0.3614 | 141.0 | 156.0 | 158.0 | 0.9873 | 0.8924 | 28.0 | 29.0 | 152.0 | 0.1908 | 0.1842 | 24.0 | 27.0 | 142.0 | 0.1901 | 0.1690 | 13.0 | 14.0 | 118.0 | 0.1186 | 0.1102 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 22.0 | 22 | 6.0758 | 0.006 | 4996.3676 | 3463.2181 | 225.0 | 570.0 | 0.3947 | 206.0 | 0.3614 | 141.0 | 156.0 | 158.0 | 0.9873 | 0.8924 | 28.0 | 29.0 | 152.0 | 0.1908 | 0.1842 | 24.0 | 26.0 | 142.0 | 0.1831 | 0.1690 | 13.0 | 14.0 | 118.0 | 0.1186 | 0.1102 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 23.0 | 23 | 6.0958 | 0.006 | 5012.8294 | 3474.6285 | 225.0 | 570.0 | 0.3947 | 207.0 | 0.3632 | 141.0 | 156.0 | 158.0 | 0.9873 | 0.8924 | 28.0 | 29.0 | 152.0 | 0.1908 | 0.1842 | 25.0 | 26.0 | 142.0 | 0.1831 | 0.1761 | 13.0 | 14.0 | 118.0 | 0.1186 | 0.1102 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 24.0 | 24 | 6.1508 | 0.006 | 5057.9994 | 3505.9380 | 225.0 | 570.0 | 0.3947 | 209.0 | 0.3667 | 144.0 | 156.0 | 158.0 | 0.9873 | 0.9114 | 28.0 | 29.0 | 152.0 | 0.1908 | 0.1842 | 24.0 | 26.0 | 142.0 | 0.1831 | 0.1690 | 13.0 | 14.0 | 118.0 | 0.1186 | 0.1102 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 25.0 | 25 | 6.1477 | 0.006 | 5055.4455 | 3504.1678 | 224.0 | 570.0 | 0.3930 | 208.0 | 0.3649 | 143.0 | 156.0 | 158.0 | 0.9873 | 0.9051 | 27.0 | 28.0 | 152.0 | 0.1842 | 0.1776 | 25.0 | 26.0 | 142.0 | 0.1831 | 0.1761 | 13.0 | 14.0 | 118.0 | 0.1186 | 0.1102 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 26.0 | 26 | 6.1921 | 0.006 | 5092.0041 | 3529.5083 | 224.0 | 570.0 | 0.3930 | 208.0 | 0.3649 | 144.0 | 156.0 | 158.0 | 0.9873 | 0.9114 | 26.0 | 27.0 | 152.0 | 0.1776 | 0.1711 | 25.0 | 27.0 | 142.0 | 0.1901 | 0.1761 | 13.0 | 14.0 | 118.0 | 0.1186 | 0.1102 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 27.0 | 27 | 6.2041 | 0.006 | 5101.8523 | 3536.3346 | 224.0 | 570.0 | 0.3930 | 208.0 | 0.3649 | 144.0 | 156.0 | 158.0 | 0.9873 | 0.9114 | 26.0 | 27.0 | 152.0 | 0.1776 | 0.1711 | 25.0 | 27.0 | 142.0 | 0.1901 | 0.1761 | 13.0 | 14.0 | 118.0 | 0.1186 | 0.1102 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 28.0 | 28 | 6.2060 | 0.006 | 5103.4059 | 3537.4114 | 225.0 | 570.0 | 0.3947 | 208.0 | 0.3649 | 144.0 | 156.0 | 158.0 | 0.9873 | 0.9114 | 27.0 | 28.0 | 152.0 | 0.1842 | 0.1776 | 24.0 | 27.0 | 142.0 | 0.1901 | 0.1690 | 13.0 | 14.0 | 118.0 | 0.1186 | 0.1102 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 29.0 | 29 | 6.2192 | 0.006 | 5114.2474 | 3544.9262 | 225.0 | 570.0 | 0.3947 | 209.0 | 0.3667 | 145.0 | 156.0 | 158.0 | 0.9873 | 0.9177 | 27.0 | 28.0 | 152.0 | 0.1842 | 0.1776 | 24.0 | 27.0 | 142.0 | 0.1901 | 0.1690 | 13.0 | 14.0 | 118.0 | 0.1186 | 0.1102 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 30.0 | 30 | 6.2327 | 0.006 | 5125.3556 | 3552.6258 | 221.0 | 570.0 | 0.3877 | 206.0 | 0.3614 | 144.0 | 156.0 | 158.0 | 0.9873 | 0.9114 | 26.0 | 27.0 | 152.0 | 0.1776 | 0.1711 | 23.0 | 25.0 | 142.0 | 0.1761 | 0.1620 | 13.0 | 13.0 | 118.0 | 0.1102 | 0.1102 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 31.0 | 31 | 6.2450 | 0.006 | 5135.5071 | 3559.6623 | 222.0 | 570.0 | 0.3895 | 206.0 | 0.3614 | 144.0 | 156.0 | 158.0 | 0.9873 | 0.9114 | 26.0 | 27.0 | 152.0 | 0.1776 | 0.1711 | 23.0 | 26.0 | 142.0 | 0.1831 | 0.1620 | 13.0 | 13.0 | 118.0 | 0.1102 | 0.1102 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 32.0 | 32 | 6.2478 | 0.006 | 5137.7630 | 3561.2259 | 224.0 | 570.0 | 0.3930 | 210.0 | 0.3684 | 146.0 | 156.0 | 158.0 | 0.9873 | 0.9241 | 27.0 | 28.0 | 152.0 | 0.1842 | 0.1776 | 24.0 | 26.0 | 142.0 | 0.1831 | 0.1690 | 13.0 | 14.0 | 118.0 | 0.1186 | 0.1102 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 33.0 | 33 | 6.2653 | 0.006 | 5152.1581 | 3571.2038 | 224.0 | 570.0 | 0.3930 | 209.0 | 0.3667 | 146.0 | 156.0 | 158.0 | 0.9873 | 0.9241 | 26.0 | 27.0 | 152.0 | 0.1776 | 0.1711 | 24.0 | 27.0 | 142.0 | 0.1901 | 0.1690 | 13.0 | 14.0 | 118.0 | 0.1186 | 0.1102 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
resistz/sft_Qwen3-8B-Base_ultra200k_lora32
|
resistz
| 2025-08-19T06:55:24Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"base_model:adapter:Qwen/Qwen3-8B-Base",
"lora",
"sft",
"trl",
"text-generation",
"conversational",
"base_model:Qwen/Qwen3-8B-Base",
"region:us"
] |
text-generation
| 2025-08-19T06:54:12Z |
---
library_name: peft
model_name: sft_Qwen3-8B-Base_ultra200k_lora32
tags:
- base_model:adapter:Qwen/Qwen3-8B-Base
- lora
- sft
- trl
licence: license
pipeline_tag: text-generation
base_model: Qwen/Qwen3-8B-Base
---
# Model Card for sft_Qwen3-8B-Base_ultra200k_lora32
This model is a fine-tuned version of [Qwen/Qwen3-8B-Base](https://huggingface.co/Qwen/Qwen3-8B-Base).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="resistz/sft_Qwen3-8B-Base_ultra200k_lora32", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/resistzzz97/Alignment_Influence/runs/9rimz0x9)
This model was trained with SFT.
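As a rough illustration only, a comparable run could be set up with TRL's `SFTTrainer` plus a rank-32 PEFT adapter. The dataset name below is a guess inferred from `ultra200k` in the run name, and the hyperparameters are placeholders; the exact recipe for this model is not documented here.

```python
from datasets import load_dataset
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

# Hypothetical data choice: "ultra200k" in the run name suggests an UltraChat-200k-style corpus.
dataset = load_dataset("HuggingFaceH4/ultrachat_200k", split="train_sft")

# Rank-32 adapter, matching the "lora32" suffix in the run name.
peft_config = LoraConfig(r=32, lora_alpha=64, task_type="CAUSAL_LM")

trainer = SFTTrainer(
    model="Qwen/Qwen3-8B-Base",
    train_dataset=dataset,
    peft_config=peft_config,
    args=SFTConfig(output_dir="sft_Qwen3-8B-Base_ultra200k_lora32"),
)
trainer.train()
```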
### Framework versions
- PEFT 0.17.0
- TRL: 0.21.0
- Transformers: 4.55.2
- Pytorch: 2.7.1
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
VoilaRaj/78_4Qc4dT
|
VoilaRaj
| 2025-08-19T06:51:53Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-08-19T06:48:04Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
MitsuiChen14/DGTRS-CLIP-ViT-B-16
|
MitsuiChen14
| 2025-08-19T06:51:31Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-03-26T06:49:13Z |
https://github.com/MitsuiChen14/LRSCLIP?tab=readme-ov-file#-usage-
|
deepkeep-ai/public-classification-qwen3-0.6B-contrastive-classifier
|
deepkeep-ai
| 2025-08-19T06:51:02Z | 35 | 0 |
transformers
|
[
"transformers",
"safetensors",
"contrastive-wrapper",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-13T11:12:07Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
hdong0/deepseek-Qwen-1.5B-baseline-thin-Open-R1-GRPO_deepscaler_mu_8_constant_lr_warmed
|
hdong0
| 2025-08-19T06:48:29Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2bm",
"text-generation",
"generated_from_trainer",
"open-r1",
"trl",
"grpo",
"conversational",
"custom_code",
"dataset:agentica-org/DeepScaleR-Preview-Dataset",
"arxiv:2402.03300",
"base_model:hdong0/deepseek-Qwen2.5-1.5B-baseline-thin-init",
"base_model:finetune:hdong0/deepseek-Qwen2.5-1.5B-baseline-thin-init",
"autotrain_compatible",
"region:us"
] |
text-generation
| 2025-08-18T23:10:15Z |
---
base_model: hdong0/deepseek-Qwen2.5-1.5B-baseline-thin-init
datasets: agentica-org/DeepScaleR-Preview-Dataset
library_name: transformers
model_name: deepseek-Qwen-1.5B-baseline-thin-Open-R1-GRPO_deepscaler_mu_8_constant_lr_warmed
tags:
- generated_from_trainer
- open-r1
- trl
- grpo
licence: license
---
# Model Card for deepseek-Qwen-1.5B-baseline-thin-Open-R1-GRPO_deepscaler_mu_8_constant_lr_warmed
This model is a fine-tuned version of [hdong0/deepseek-Qwen2.5-1.5B-baseline-thin-init](https://huggingface.co/hdong0/deepseek-Qwen2.5-1.5B-baseline-thin-init) on the [agentica-org/DeepScaleR-Preview-Dataset](https://huggingface.co/datasets/agentica-org/DeepScaleR-Preview-Dataset) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="hdong0/deepseek-Qwen-1.5B-baseline-thin-Open-R1-GRPO_deepscaler_mu_8_constant_lr_warmed", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
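As a minimal sketch (not the exact recipe of this run), GRPO training with TRL on the same dataset could look like the following. The reward function is a placeholder, and `num_iterations=8` and the constant-with-warmup scheduler simply mirror `mu_8` and `constant_lr_warmed` in the run name.

```python
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

dataset = load_dataset("agentica-org/DeepScaleR-Preview-Dataset", split="train")

# Placeholder reward: the actual reward functions used for this run are not documented here.
def reward_short(completions, **kwargs):
    return [-float(len(str(c))) / 1000 for c in completions]

trainer = GRPOTrainer(
    model="hdong0/deepseek-Qwen2.5-1.5B-baseline-thin-init",
    reward_funcs=reward_short,
    train_dataset=dataset,
    args=GRPOConfig(
        output_dir="Open-R1-GRPO_deepscaler",
        num_iterations=8,  # mu = 8: optimization steps per generation batch
        lr_scheduler_type="constant_with_warmup",
    ),
)
trainer.train()
```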
### Framework versions
- TRL: 0.18.0.dev0
- Transformers: 4.52.0.dev0
- Pytorch: 2.6.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1755585419
|
IvanJAjebu
| 2025-08-19T06:38:43Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thorny slender capybara",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T06:38:34Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thorny slender capybara
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
VoilaRaj/78_l9bzGb
|
VoilaRaj
| 2025-08-19T06:35:45Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-08-19T06:31:52Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
ianmathu/Llama-3.2-3B-Instruct-unsloth-bnb-4bit-alpaca
|
ianmathu
| 2025-08-19T06:34:22Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-19T06:33:35Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
hakimjustbao/blockassist-bc-raging_subtle_wasp_1755583495
|
hakimjustbao
| 2025-08-19T06:32:50Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"raging subtle wasp",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T06:32:46Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- raging subtle wasp
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
shiimi/labse-dhivehi-finetuned
|
shiimi
| 2025-08-19T06:28:01Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"dense",
"generated_from_trainer",
"dataset_size:968266",
"loss:CosineSimilarityLoss",
"arxiv:1908.10084",
"base_model:sentence-transformers/LaBSE",
"base_model:finetune:sentence-transformers/LaBSE",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-08-19T05:46:17Z |
---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- dense
- generated_from_trainer
- dataset_size:968266
- loss:CosineSimilarityLoss
base_model: sentence-transformers/LaBSE
widget:
- source_sentence: ކުއްލިއަކަށް ދޮންބެ ތެދުވެ އިނދެ ދެފައި ވައްކޮއްލިއެވެ. ދެލޯ ބޮޑުކޮއްގެން
ހުރެ ހެވެމުން ދިލެމުން ގޮސް އަހަރެން ހުޅުވާލީވެސް ދޮންބެ ބުނި ކަބަޑެވެ. ގެރިގުއި
ކުލައިގެ ކަރުދާހަކުން ބަންދުކޮއްފައި އޮތް ފޮށިގަނޑެއް ފެނުމާއި އެކު އަހަރެންނަށް
ބަލާލެވުނީ ގޮދަނޑިމަތީގައި ދެފައި ވަށްކޮއްގެން އިން ބޭބެ އާއި ދިމާއަށެވެ.
sentences:
- sheet covering coffin
- The king's kidneys, heart and lungs have also stopped working, Saudi health officials
said, according to Press TV.
- The Civil Court of Maldives has ordered the seizure of passports and freezing
bank accounts belonging to Haulath Faheem, wife of former President Dr. Mohamed
Jamil, as well as seven other members of his family in connection with a case
of proven debt. This was decided by the court today after an action filed by Mohammad
Aniis who served as General Manager at four resorts owned by Three A Company when
it was not being divided into shares. The heir was not present at the court. The
lawyer for the heirs said that he has appealed to the High Court against this
decision. In any case of proven debt, it is a common practice in courts to hold
passports and freeze accounts as part of an application for enforcement of judgment
when there are no payments made by debtors. The family appealed the Civil Court’s
order to pay them back, which was then reviewed by the Supreme Court. In addition
to the three charges, Anies also brought another two cases against Musa Fahim’s
heirs. The other accused are Haulat and Shaheed as well as Farida Ibrahim, Ahmad
Shahid Shiyam, Ali Shiyam, Hassan Shiyam, Maryam Shifa and Aimanat Ashfah. The
two brothers’ son Anies said he owes the company 1.8 million rupees for days when
senior management was not paid due to problems arising from the split of Three
Airline Company Ltd (THAC). The order was issued in response to a case filed by
Anis at the Civil Court on May 15, requesting payment of Rs.731,540.80 due from
his family following an appeal ruling made on February 17 this year. He said that
no appeal had been lodged against the judgment for over ninety days and he is
still waiting for the decision to be announced.
- source_sentence: 24 ޖުލައި 2013 ގައި ޖޯން ހޮޖްމަން މެކްސިމަމް ފަން ޕޮޑްކާސްޓް ``
ޖަޖް ބްރަދަރ އަލީ '' އިން ފެނިގެންދިޔައީ '' އެކްސްޕާޓް ވިޓްނަސް '' ގެ ގޮތުގައެވެ
.
sentences:
- Translate the following sentence into a different language and add a proof of
the translation in the footnotes. Traer tu propia bolsa es una elección ecológica.
<sup>1</sup> --- <sup>1</sup> Translation from English to Spanish using Google
Translate.
- The result sheet of the Ihwandu constituency, which is part of the North East
District Council was lost and it has been found while reopening a ballot box.
It had to be counted again after that because the results were missing. In presence
of representatives from candidates who contested for this district as well as
media, the election commission opened the ballot box at 8:30 p.m. today when they
discovered the result sheet in another letter. The results sheet was mistakenly
placed in a wrong envelope.The Election Commission decided that the ballot box
did not need to be counted after seeing its result sheet.This is the first election
with an issue of this kind. The Complaints Bureau has not received any complaints
from the voters that would require a ballot box to be reopened, said Election
Commission Director General Mohamed Sheik. The Commission said that 60 percent
of the total number of results sheets, which is estimated to be around 17,000
have been cleared.
- Outline the following passage I. American astronauts' exploration of the moon
A. Began in 1969 B. Building of moon bases C. Driving lunar rovers on the surface
D. Collection of moon samples.
- source_sentence: އަދި ލަންގޭންސްޓައިންބާކް އާއި އަލަށް އުފެއްދި ޝިސްޝުޓެނަކަރ ރޭލްވޭ
ސްޓޭޝަނާ ދެމެދު 2011 ވަނަ އަހަރު ކުރު ޑަބަލް ޓްރެކެއް ވެސް ހެދިއެވެ .
sentences:
- i told them i would personally be delighted if sia would fly to and from europe
via the maldives.
- A short double track was also built between Langensteinbach and the newly created
Schießhüttenäcker railway station in 2011 .
- Offer one suggestion to reduce cases of teenage suicide. One suggestion to reduce
cases of teenage suicide could be to provide accessible and safe mental health
support for teenagers. This could be in the form of school counselors, teen helplines,
or mental health workshops, among other resources. By ensuring that teenagers
have someone to talk to about their struggles and concerns, it can alleviate feelings
of hopelessness and isolation, which are major risk factors for suicide.
- source_sentence: އަޖީއެމްއެޗްގެ އަހަރި ދުވަހާއި ގުޅުވައިގެން ބާއްވާ މި ފެއާއަށް
ދާ ފަރާތްތަކަށް ހިލޭ ގުލްކޯޒް، ހަކުރު، އަދި ލޭގެ ޕްރެޝަރު ހުރި މިންވަރު ބަލައިދެމުންދާ
ކަމަށް އައިޖީއެމްއެޗުން ބުނެއެވެ.
sentences:
- A young man died in a serious accident on the road at night. The victim was identified
as Hussain Adham, 21 years old from Hithadhoo. The 54-year old man died at the
hospital after being treated for a heart attack. According to witnesses, the accident
occurred when Adham was driving from Hittadu towards Maradu and collided with
another motorbike that had been travelling along Link Road in direction of Maradu.
The accident resulted in a severe fracture of his head and extensive bleeding.
He was also broken his neck and a hand. "The helmet he was wearing broke and his
head got injured. The injuries were severe," the witness said. Some of the victims
had broken their hands and feet. A woman was among the victims.
- NASA has announced that it will test a new type of flying saucer this year. It
may be to bring in aliens who have not yet landed on the earth. The cup-style
vehicle will be launched by what NASA calls a "low density supersonic decelerator"
rocket. The rocket is scheduled to be launched in June. NASA is interested in
launching a flying saucer into the atmosphere, but according to their own statements,
there's no connection between aliens and NASA's Flying Saucer. NASA wants to test
and demonstrate new technologies that can be used for launching objects into the
atmosphere. NASA said the mission will help to estimate how much payload is needed
for a manned Mars missions.
- Ar.... Arfin? Are you telling the truth? Is the child so good now? How many years
have passed since then... If you haven't even heard from the boy, you can hear
what Asiya is saying, I really want to see you, Asiya, please come here with Arfin,
if you have his number I want to call him now
- source_sentence: އޭނާ ރީތި.
sentences:
- She's pretty.
- Words of gratitude are being sent to the government and President Yameen for bringing
two new generators to the village within five days. The people of Thonadhoo have
shown the whole country that they have a people who love patience, unity and brotherhood.
It is a beautiful example of unity. The burden and pain of the power outages is
not easy for anyone to bear in such an era.
- 'Date of appointment: 22 June'
pipeline_tag: sentence-similarity
library_name: sentence-transformers
---
# SentenceTransformer based on sentence-transformers/LaBSE
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/LaBSE](https://huggingface.co/sentence-transformers/LaBSE). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [sentence-transformers/LaBSE](https://huggingface.co/sentence-transformers/LaBSE) <!-- at revision 836121a0533e5664b21c7aacc5d22951f2b8b25b -->
- **Maximum Sequence Length:** 256 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False, 'architecture': 'BertModel'})
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Dense({'in_features': 768, 'out_features': 768, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'})
(3): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("shiimi/labse-dhivehi-finetuned")
# Run inference
sentences = [
'އޭނާ ރީތި.',
"She's pretty.",
'Words of gratitude are being sent to the government and President Yameen for bringing two new generators to the village within five days. The people of Thonadhoo have shown the whole country that they have a people who love patience, unity and brotherhood. It is a beautiful example of unity. The burden and pain of the power outages is not easy for anyone to bear in such an era.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities)
# tensor([[ 1.0000, 0.9827, -0.0089],
# [ 0.9827, 1.0000, -0.0044],
# [-0.0089, -0.0044, 1.0000]])
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 968,266 training samples
* Columns: <code>sentence_0</code>, <code>sentence_1</code>, and <code>label</code>
* Approximate statistics based on the first 1000 samples:
| | sentence_0 | sentence_1 | label |
|:--------|:------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:---------------------------------------------------------------|
| type | string | string | float |
| details | <ul><li>min: 3 tokens</li><li>mean: 121.67 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 64.68 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.51</li><li>max: 1.0</li></ul> |
* Samples:
| sentence_0 | sentence_1 | label |
|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-----------------|
| <code>އިންތިހާބު ލަސްކުރަން ބްލެޓާ ބޭނުމެއްނުވޭ: ފީފާ</code> | <code>The Ponoru River is a tributary of the Horezu in Romania .</code> | <code>0.0</code> |
| <code>ޖޯ އުފަންވީ 27 މާރޗް 1929 ގައި މެސެޗުސެޓްސްގެ ސޮމަރވިލް އަށް ކަމަށާއި ބޮޑުވީ މެސެޗުސެޓްސްގެ ކުއިންސީ ގައެވެ .</code> | <code>The National Inquiry Commission set up by the government of President Mohammed Vaheed Hassan Manik has said that the coup was not a coup and that the government was overthrown according to the rules of law.</code> | <code>0.0</code> |
| <code>ސާބިތު ދަރަނީގެ މައްސަލައެއްގައި ޑރ. މުހައްމަދު ޖަމީލްގެ އަނބިކަނބަލުން ހައުލަތު ފަހީމް އާއި އެ އާއިލާގެ އިތުރު ހަތް މީހެއްގެ ޕާސްޕޯޓް ހިފަހައްޓައި ބޭންކް އެކައުންޓްތައް ފްރީޒްކުރުމަށް ސިވިލް ކޯޓުން މިއަދު އަމުރު ނެރެފި އެވެ.ވީބީ އައްޑޫ އެފްސީގެ މުއައްސިސެއް ކަމަށްވާ މުހަންމަދު ޝަވީދުގެ ވެސް ބައްޕަ މަރުހޫމް މޫސާ ފަހީމްގެ އަށް ވާރިސުންގެ ޕާސްޕޯޓާއި، ރާއްޖޭގެ ބޭންކްތަކުގައި ހުރި ހުރިހާ އެކައުންޓެއް ހިފަހައްޓަން ސިވިލް ކޯޓުން މިއަދު ހެނދުނު ނިންމީ، ތްރީއޭ ކޮމްޕެނީ ނުބަހާއިރު އެ ކުންފުނީގެ ހަތަރު ރިސޯޓެއްގެ ޖެނެރަލް މެނޭޖަރެއްގެ ގޮތުގައި ވަޒީފާ އަދާކުރި މުހަންމަދު އަނީސް ކޮށްފައިވާ ދައުވާއަކާ ގުޅިގެން ބޭއްވި ޝަރީއަތުގެ އަޑުއެހުމުގަ އެވެ. އެ އަޑުއެހުމަށް ވާރިސުންގެ ފަރާތުން ހާޒިރެއް ނުވެ އެވެ. ވާރިސުންގެ ވަކީލް ވިދާޅުވީ ސިވިލް ކޯޓުގެ ހުކުމް ހައި ކޯޓަށް އިސްތިއުނާފަށް ހުށަހަޅާފައިވާ ކަމަށެވެ.ސާބިތު ދަރަނީގެ ކޮންމެ މައްސަލައެއްގައި ވެސް ދަރަނި އަދާނުކުރާ ހާލަތެއްގައި، ހުކުމް ތަންފީޒުކުރުމަށް އެދި ހުށަހަޅެމުން ޕާސްޕޯޓް ހިފަހައްޓައި އެކައުންޓުތައް ފްރީޒްކުރުމަކީ ކޯޓުން އަމަލުކުރާ އާންމު އުސޫލެވ...</code> | <code>The Civil Court of Maldives has ordered the seizure of passports and freezing bank accounts belonging to Haulath Faheem, wife of former President Dr. Mohamed Jamil, as well as seven other members of his family in connection with a case of proven debt. This was decided by the court today after an action filed by Mohammad Aniis who served as General Manager at four resorts owned by Three A Company when it was not being divided into shares. The heir was not present at the court. The lawyer for the heirs said that he has appealed to the High Court against this decision. In any case of proven debt, it is a common practice in courts to hold passports and freeze accounts as part of an application for enforcement of judgment when there are no payments made by debtors. The family appealed the Civil Court’s order to pay them back, which was then reviewed by the Supreme Court. In addition to the three charges, Anies also brought another two cases against Musa Fahim’s heirs. The other accused are ...</code> | <code>1.0</code> |
* Loss: [<code>CosineSimilarityLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosinesimilarityloss) with these parameters:
```json
{
"loss_fct": "torch.nn.modules.loss.MSELoss"
}
```
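In other words, the model is trained to push the cosine similarity of each sentence pair toward its 0/1 label via MSE. A minimal sketch of this setup, with toy data standing in for the 968,266-pair training set (which is not published with this card):

```python
from datasets import Dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer, losses

# Toy stand-in for the actual Dhivehi/English pair dataset.
train_dataset = Dataset.from_dict({
    "sentence_0": ["އޭނާ ރީތި."],
    "sentence_1": ["She's pretty."],
    "label": [1.0],
})

model = SentenceTransformer("sentence-transformers/LaBSE")
loss = losses.CosineSimilarityLoss(model)  # MSELoss between cos(u, v) and the label
trainer = SentenceTransformerTrainer(model=model, train_dataset=train_dataset, loss=loss)
trainer.train()
```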
### Training Hyperparameters
#### Non-Default Hyperparameters
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `num_train_epochs`: 1
- `multi_dataset_batch_sampler`: round_robin
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: no
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch_fused
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `hub_revision`: None
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `liger_kernel_config`: None
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: round_robin
- `router_mapping`: {}
- `learning_rate_mapping`: {}
</details>
### Training Logs
| Epoch | Step | Training Loss |
|:------:|:----:|:-------------:|
| 0.0661 | 500 | 0.0528 |
| 0.1322 | 1000 | 0.0298 |
| 0.1983 | 1500 | 0.0261 |
| 0.2644 | 2000 | 0.0242 |
| 0.3305 | 2500 | 0.0235 |
| 0.3966 | 3000 | 0.0223 |
| 0.4627 | 3500 | 0.0207 |
| 0.5288 | 4000 | 0.0208 |
| 0.5948 | 4500 | 0.0196 |
| 0.6609 | 5000 | 0.0192 |
| 0.7270 | 5500 | 0.019 |
| 0.7931 | 6000 | 0.0181 |
| 0.8592 | 6500 | 0.0181 |
| 0.9253 | 7000 | 0.0175 |
| 0.9914 | 7500 | 0.0178 |
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 5.1.0
- Transformers: 4.55.2
- PyTorch: 2.8.0+cu128
- Accelerate: 1.9.0
- Datasets: 3.6.0
- Tokenizers: 0.21.4
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
ariankharazmi/Curiosity-14
|
ariankharazmi
| 2025-08-19T06:27:29Z | 3 | 0 | null |
[
"safetensors",
"gpt2",
"research",
"text-generation",
"en",
"base_model:openai-community/gpt2",
"base_model:finetune:openai-community/gpt2",
"license:mit",
"region:us"
] |
text-generation
| 2025-04-25T03:43:28Z |
---
license: mit
language:
- en
base_model:
- openai-community/gpt2
pipeline_tag: text-generation
tags:
- research
---
Curiosity-14 is a low-level LLM built on GPT-2.
Developed over the seven weeks of the Summer 2024 UCinci EEP, Curiosity-14 is the culmination of all of the program's research, coded deliverables, and painstaking patience, presented as one final advanced deliverable.
|
thailevann/track8_subtask1_v3
|
thailevann
| 2025-08-19T06:26:07Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen3",
"trl",
"en",
"base_model:unsloth/Qwen3-4B-Thinking-2507-unsloth-bnb-4bit",
"base_model:finetune:unsloth/Qwen3-4B-Thinking-2507-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-19T02:46:57Z |
---
base_model: unsloth/Qwen3-4B-Thinking-2507-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** thailevann
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen3-4B-Thinking-2507-unsloth-bnb-4bit
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
chsubhasis/finetuned_model_unsloth
|
chsubhasis
| 2025-08-19T06:17:12Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gpt_oss",
"trl",
"en",
"base_model:unsloth/gpt-oss-20b-unsloth-bnb-4bit",
"base_model:finetune:unsloth/gpt-oss-20b-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-19T06:17:02Z |
---
base_model: unsloth/gpt-oss-20b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gpt_oss
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** chsubhasis
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gpt-oss-20b-unsloth-bnb-4bit
This gpt_oss model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
subsectmusic/qwriko4b-64k-2507-instruct
|
subsectmusic
| 2025-08-19T06:15:30Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"qwen3",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-19T05:07:14Z |
---
base_model: unsloth/qwen3-4b-instruct-2507-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** subsectmusic
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen3-4b-instruct-2507-unsloth-bnb-4bit
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
valuesimplex-ai-lab/FinBERT1-base
|
valuesimplex-ai-lab
| 2025-08-19T06:14:50Z | 0 | 1 | null |
[
"pytorch",
"safetensors",
"bert",
"finance",
"zh",
"base_model:google-bert/bert-base-chinese",
"base_model:finetune:google-bert/bert-base-chinese",
"license:apache-2.0",
"region:us"
] | null | 2025-08-17T10:03:57Z |
---
license: apache-2.0
language:
- zh
base_model: google-bert/bert-base-chinese
tags:
- finance
---
## Model Details
**FinBERT1-Base** is a financial domain-adapted Chinese language model. Built on Google's BERT-Base architecture, it was continually pretrained on large-scale Chinese financial corpora to enhance financial text understanding.
- **Developed by:** See [valuesimplex](https://github.com/valuesimplex) for model developers
- **Model Type:** Transformer-based language model
- **Language(s):** Chinese
- **Parent Model:** See the [bert-base-chinese](https://huggingface.co/google-bert/bert-base-chinese) for more information about the BERT base model.
- **Resources:** [https://github.com/valuesimplex/FinBERT](https://github.com/valuesimplex/FinBERT)
## Direct Use
```python
from transformers import AutoModel, AutoTokenizer
model = AutoModel.from_pretrained("valuesimplex-ai-lab/FinBERT1-base")
tokenizer = AutoTokenizer.from_pretrained("valuesimplex-ai-lab/FinBERT1-base")
```
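Continuing from the snippet above, a minimal sketch of extracting a sentence embedding; mean pooling here is one common choice, not a documented recommendation of this model.

```python
import torch

text = "央行宣布下调存款准备金率。"  # "The central bank announced a cut to the reserve requirement ratio."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
embedding = outputs.last_hidden_state.mean(dim=1)  # mean-pool tokens into one sentence vector
print(embedding.shape)  # torch.Size([1, 768])
```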
### Further Usage
For continual pre-training or fine-tuning, see https://github.com/valuesimplex/FinBERT.
|
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1755583947
|
IvanJAjebu
| 2025-08-19T06:14:14Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thorny slender capybara",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T06:13:48Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thorny slender capybara
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
VoilaRaj/78_7iz0a4
|
VoilaRaj
| 2025-08-19T06:11:19Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-08-19T06:07:20Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
BlazePro12/merged_grok_data_mcp_2
|
BlazePro12
| 2025-08-19T06:10:59Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"base_model:google/gemma-3-270m-it",
"base_model:finetune:google/gemma-3-270m-it",
"endpoints_compatible",
"region:us"
] | null | 2025-08-19T06:08:21Z |
---
base_model: google/gemma-3-270m-it
library_name: transformers
model_name: merged_grok_data_mcp_2
tags:
- generated_from_trainer
- sft
- trl
licence: license
---
# Model Card for merged_grok_data_mcp_2
This model is a fine-tuned version of [google/gemma-3-270m-it](https://huggingface.co/google/gemma-3-270m-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="BlazePro12/merged_grok_data_mcp_2", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.2
- Pytorch: 2.8.0
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
yaelahnal/blockassist-bc-mute_clawed_crab_1755583572
|
yaelahnal
| 2025-08-19T06:07:21Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"mute clawed crab",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T06:07:03Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- mute clawed crab
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
kojeklollipop/blockassist-bc-spotted_amphibious_stork_1755582079
|
kojeklollipop
| 2025-08-19T06:06:40Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"spotted amphibious stork",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T06:06:36Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- spotted amphibious stork
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
VoilaRaj/78_dpG7CL
|
VoilaRaj
| 2025-08-19T06:03:01Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-08-19T05:59:04Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1755583183
|
IvanJAjebu
| 2025-08-19T06:01:25Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thorny slender capybara",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T06:01:02Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thorny slender capybara
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
0xaoyama/blockassist-bc-muscular_zealous_gorilla_1755583222
|
0xaoyama
| 2025-08-19T06:01:00Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"muscular zealous gorilla",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T06:00:49Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- muscular zealous gorilla
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
yaelahnal/blockassist-bc-mute_clawed_crab_1755583110
|
yaelahnal
| 2025-08-19T05:59:39Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"mute clawed crab",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T05:59:21Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- mute clawed crab
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
sampingkaca72/blockassist-bc-armored_stealthy_elephant_1755581603
|
sampingkaca72
| 2025-08-19T05:59:01Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"armored stealthy elephant",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T05:58:58Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- armored stealthy elephant
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
pempekmangedd/blockassist-bc-patterned_sturdy_dolphin_1755581452
|
pempekmangedd
| 2025-08-19T05:58:35Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"patterned sturdy dolphin",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T05:58:32Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- patterned sturdy dolphin
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
vwzyrraz7l/blockassist-bc-tall_hunting_vulture_1755581346
|
vwzyrraz7l
| 2025-08-19T05:56:20Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"tall hunting vulture",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T05:56:17Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- tall hunting vulture
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
gmonsoon/Qwen3-4b-REnewbie-NEXT-Q4_K_M-GGUF
|
gmonsoon
| 2025-08-19T05:56:15Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"base_model:gmonsoon/Qwen3-4b-REnewbie-NEXT",
"base_model:quantized:gmonsoon/Qwen3-4b-REnewbie-NEXT",
"endpoints_compatible",
"region:us"
] | null | 2025-08-19T05:56:02Z |
---
base_model: gmonsoon/Qwen3-4b-REnewbie-NEXT
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
---
# gmonsoon/Qwen3-4b-REnewbie-NEXT-Q4_K_M-GGUF
This model was converted to GGUF format from [`gmonsoon/Qwen3-4b-REnewbie-NEXT`](https://huggingface.co/gmonsoon/Qwen3-4b-REnewbie-NEXT) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/gmonsoon/Qwen3-4b-REnewbie-NEXT) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo gmonsoon/Qwen3-4b-REnewbie-NEXT-Q4_K_M-GGUF --hf-file qwen3-4b-renewbie-next-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo gmonsoon/Qwen3-4b-REnewbie-NEXT-Q4_K_M-GGUF --hf-file qwen3-4b-renewbie-next-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo gmonsoon/Qwen3-4b-REnewbie-NEXT-Q4_K_M-GGUF --hf-file qwen3-4b-renewbie-next-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo gmonsoon/Qwen3-4b-REnewbie-NEXT-Q4_K_M-GGUF --hf-file qwen3-4b-renewbie-next-q4_k_m.gguf -c 2048
```
|
mradermacher/qqWen-14B-RL-Reasoning-i1-GGUF
|
mradermacher
| 2025-08-19T05:56:03Z | 0 | 1 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:morganstanley/qqWen-14B-RL-Reasoning",
"base_model:quantized:morganstanley/qqWen-14B-RL-Reasoning",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-08-18T17:25:35Z |
---
base_model: morganstanley/qqWen-14B-RL-Reasoning
language:
- en
library_name: transformers
license: apache-2.0
mradermacher:
  readme_rev: 1
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K_M Q4_0 IQ3_XS Q4_1 IQ3_S -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
weighted/imatrix quants of https://huggingface.co/morganstanley/qqWen-14B-RL-Reasoning
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#qqWen-14B-RL-Reasoning-i1-GGUF).***
static quants are available at https://huggingface.co/mradermacher/qqWen-14B-RL-Reasoning-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
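In these repos the split parts are plain byte chunks, so a sketch like the following (with a hypothetical filename) should reassemble them before loading:
```python
import glob
import shutil

# Hedged sketch: join byte-split GGUF parts (e.g. *.gguf.part1of2) by simple
# concatenation; the filename below is hypothetical.
parts = sorted(glob.glob("qqWen-14B-RL-Reasoning.i1-Q6_K.gguf.part*"))
with open("qqWen-14B-RL-Reasoning.i1-Q6_K.gguf", "wb") as merged:
    for part in parts:
        with open(part, "rb") as chunk:
            shutil.copyfileobj(chunk, merged)
```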
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/qqWen-14B-RL-Reasoning-i1-GGUF/resolve/main/qqWen-14B-RL-Reasoning.imatrix.gguf) | imatrix | 0.1 | imatrix file (for creating your own quants) |
| [GGUF](https://huggingface.co/mradermacher/qqWen-14B-RL-Reasoning-i1-GGUF/resolve/main/qqWen-14B-RL-Reasoning.i1-IQ1_S.gguf) | i1-IQ1_S | 3.7 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/qqWen-14B-RL-Reasoning-i1-GGUF/resolve/main/qqWen-14B-RL-Reasoning.i1-IQ1_M.gguf) | i1-IQ1_M | 4.0 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/qqWen-14B-RL-Reasoning-i1-GGUF/resolve/main/qqWen-14B-RL-Reasoning.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/qqWen-14B-RL-Reasoning-i1-GGUF/resolve/main/qqWen-14B-RL-Reasoning.i1-IQ2_XS.gguf) | i1-IQ2_XS | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/qqWen-14B-RL-Reasoning-i1-GGUF/resolve/main/qqWen-14B-RL-Reasoning.i1-IQ2_S.gguf) | i1-IQ2_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/qqWen-14B-RL-Reasoning-i1-GGUF/resolve/main/qqWen-14B-RL-Reasoning.i1-IQ2_M.gguf) | i1-IQ2_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/qqWen-14B-RL-Reasoning-i1-GGUF/resolve/main/qqWen-14B-RL-Reasoning.i1-Q2_K_S.gguf) | i1-Q2_K_S | 5.5 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/qqWen-14B-RL-Reasoning-i1-GGUF/resolve/main/qqWen-14B-RL-Reasoning.i1-Q2_K.gguf) | i1-Q2_K | 5.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/qqWen-14B-RL-Reasoning-i1-GGUF/resolve/main/qqWen-14B-RL-Reasoning.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 6.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/qqWen-14B-RL-Reasoning-i1-GGUF/resolve/main/qqWen-14B-RL-Reasoning.i1-IQ3_XS.gguf) | i1-IQ3_XS | 6.5 | |
| [GGUF](https://huggingface.co/mradermacher/qqWen-14B-RL-Reasoning-i1-GGUF/resolve/main/qqWen-14B-RL-Reasoning.i1-Q3_K_S.gguf) | i1-Q3_K_S | 6.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/qqWen-14B-RL-Reasoning-i1-GGUF/resolve/main/qqWen-14B-RL-Reasoning.i1-IQ3_S.gguf) | i1-IQ3_S | 6.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/qqWen-14B-RL-Reasoning-i1-GGUF/resolve/main/qqWen-14B-RL-Reasoning.i1-IQ3_M.gguf) | i1-IQ3_M | 7.0 | |
| [GGUF](https://huggingface.co/mradermacher/qqWen-14B-RL-Reasoning-i1-GGUF/resolve/main/qqWen-14B-RL-Reasoning.i1-Q3_K_M.gguf) | i1-Q3_K_M | 7.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/qqWen-14B-RL-Reasoning-i1-GGUF/resolve/main/qqWen-14B-RL-Reasoning.i1-Q3_K_L.gguf) | i1-Q3_K_L | 8.0 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/qqWen-14B-RL-Reasoning-i1-GGUF/resolve/main/qqWen-14B-RL-Reasoning.i1-IQ4_XS.gguf) | i1-IQ4_XS | 8.2 | |
| [GGUF](https://huggingface.co/mradermacher/qqWen-14B-RL-Reasoning-i1-GGUF/resolve/main/qqWen-14B-RL-Reasoning.i1-Q4_0.gguf) | i1-Q4_0 | 8.6 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/qqWen-14B-RL-Reasoning-i1-GGUF/resolve/main/qqWen-14B-RL-Reasoning.i1-IQ4_NL.gguf) | i1-IQ4_NL | 8.6 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/qqWen-14B-RL-Reasoning-i1-GGUF/resolve/main/qqWen-14B-RL-Reasoning.i1-Q4_K_S.gguf) | i1-Q4_K_S | 8.7 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/qqWen-14B-RL-Reasoning-i1-GGUF/resolve/main/qqWen-14B-RL-Reasoning.i1-Q4_K_M.gguf) | i1-Q4_K_M | 9.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/qqWen-14B-RL-Reasoning-i1-GGUF/resolve/main/qqWen-14B-RL-Reasoning.i1-Q4_1.gguf) | i1-Q4_1 | 9.5 | |
| [GGUF](https://huggingface.co/mradermacher/qqWen-14B-RL-Reasoning-i1-GGUF/resolve/main/qqWen-14B-RL-Reasoning.i1-Q5_K_S.gguf) | i1-Q5_K_S | 10.4 | |
| [GGUF](https://huggingface.co/mradermacher/qqWen-14B-RL-Reasoning-i1-GGUF/resolve/main/qqWen-14B-RL-Reasoning.i1-Q5_K_M.gguf) | i1-Q5_K_M | 10.6 | |
| [GGUF](https://huggingface.co/mradermacher/qqWen-14B-RL-Reasoning-i1-GGUF/resolve/main/qqWen-14B-RL-Reasoning.i1-Q6_K.gguf) | i1-Q6_K | 12.2 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1755582693
|
IvanJAjebu
| 2025-08-19T05:52:54Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thorny slender capybara",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T05:52:45Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thorny slender capybara
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
wasabuko/blockassist-bc-noisy_zealous_macaw_1755580421
|
wasabuko
| 2025-08-19T05:51:01Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"noisy zealous macaw",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T05:48:16Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- noisy zealous macaw
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
taochengfei/llama-3.2-3b-it-beta_assistant_v0.2_gptq
|
taochengfei
| 2025-08-19T05:46:23Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2025-08-19T05:45:10Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
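Pending the author's own snippet, a hedged generic starting point for a transformers text-generation checkpoint (the repo id is taken from this card's header; everything else is an assumption):
```python
from transformers import pipeline

# Assumes the standard transformers text-generation API; sampling settings
# and device placement here are illustrative only.
generator = pipeline(
    "text-generation",
    model="taochengfei/llama-3.2-3b-it-beta_assistant_v0.2_gptq",
    device_map="auto",
)
print(generator("Hello! How can you help me?", max_new_tokens=64)[0]["generated_text"])
```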
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ihsanridzi/blockassist-bc-wiry_flexible_owl_1755580729
|
ihsanridzi
| 2025-08-19T05:45:26Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"wiry flexible owl",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T05:45:22Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- wiry flexible owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
CraneAILabs/swahili-gemma-1b-GGUF
|
CraneAILabs
| 2025-08-19T05:40:52Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"swahili",
"translation",
"conversational",
"unsloth",
"gemma",
"gemma3",
"quantized",
"text-generation",
"en",
"sw",
"base_model:CraneAILabs/swahili-gemma-1b",
"base_model:quantized:CraneAILabs/swahili-gemma-1b",
"license:gemma",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-18T07:37:28Z |
---
base_model: CraneAILabs/swahili-gemma-1b
language:
- en
- sw
library_name: transformers
license: gemma
tags:
- swahili
- translation
- conversational
- unsloth
- gemma
- gemma3
- gguf
- quantized
pipeline_tag: text-generation
---
# Swahili Gemma 1B - GGUF
Quantized GGUF versions of **Swahili Gemma 1B**, a fine-tuned Gemma 3 1B instruction model specialized for **English-to-Swahili translation and Swahili conversational AI**. The model accepts input in both English and Swahili but outputs responses exclusively in Swahili.
## 📊 Translation Performance

### Model Comparison
| Model | Parameters | BLEU | chrF++ | Efficiency* |
|-------|------------|------|--------|-------------|
| Gemma 3 4B | 4B | 10.9 | 44.1 | 2.7 |
| **Swahili Gemma 1B** | **1B** | **27.6** | **56.8** | **27.6** |
| Gemma 3 27B | 27B | 29.4 | 60.0 | 1.1 |
| GPT-5 Mini | ~8B | 31.8 | 62.4 | 4.0 |
| Gemini 2.0 Flash | Large | 35.6 | 64.6 | N/A |
*Efficiency = BLEU Score / Parameters (in billions)
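The efficiency column is simply the BLEU score divided by the parameter count in billions; a quick sketch using the table's own numbers:
```python
# Efficiency = BLEU / parameters (in billions), from the table above.
models = {
    "Gemma 3 4B": (10.9, 4.0),
    "Swahili Gemma 1B": (27.6, 1.0),
    "Gemma 3 27B": (29.4, 27.0),
    "GPT-5 Mini": (31.8, 8.0),  # uses the table's ~8B estimate
}
for name, (bleu, params_b) in models.items():
    print(f"{name}: {bleu / params_b:.1f} BLEU per billion parameters")
```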
### Key Performance Insights
🎯 **Efficiency Leader**: Achieves the highest BLEU-to-parameter ratio (27.6 BLEU per billion parameters)
🚀 **Size Advantage**: Outperforms Gemma 3 4B (4x larger) by 153% on BLEU score
💎 **Competitive Quality**: Achieves 94% of Gemma 3 27B performance with 27x fewer parameters
⚡ **Practical Deployment**: Runs efficiently on consumer hardware while maintaining quality
### Evaluation Details
- **Dataset**: FLORES-200 English→Swahili (1,012 translation pairs)
- **Metrics**: BLEU (bilingual evaluation understudy) and chrF++ (character F-score)
- **Evaluation**: Zero-shot translation performance
## 🚀 Quick Start
```bash
# Install the Hugging Face Hub client
pip install huggingface_hub
```
```python
# Download the recommended Q4_K_M quantization
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="CraneAILabs/swahili-gemma-1b-GGUF",
    local_dir="swahili-gemma-1b-GGUF",
    allow_patterns=["Q4_K_M/*"],  # download only the Q4_K_M version
)
```
## 📊 Available Quantizations
| Quantization | Folder | File Size | Quality | Use Case |
|-------------|--------|-----------|---------|----------|
| `F32` | F32/ | ~3.8GB | Highest | Research & benchmarking |
| `F16` | F16/ | ~1.9GB | Highest | Maximum quality inference |
| `Q8_0` | Q8_0/ | ~1.0GB | Very High | Production with ample resources |
| `Q5_K_M` | Q5_K_M/ | ~812MB | High | Balanced quality/size |
| `Q4_K_M` | Q4_K_M/ | ~769MB | Good | **Recommended** for most users |
| `Q4_K_S` | Q4_K_S/ | ~745MB | Good | Resource-constrained environments |
| `Q3_K_M` | Q3_K_M/ | ~689MB | Fair | Mobile/edge deployment |
| `Q2_K` | Q2_K/ | ~658MB | Lower | Minimal resource usage |
## 💻 Usage with llama.cpp
### Basic Translation
```bash
# English to Swahili translation
./llama-cli \
--model swahili-gemma-1b-GGUF/Q4_K_M/swahili-gemma-1b-q4_k_m.gguf \
--prompt "Translate to Swahili: Hello, how are you today?" \
--temp 0.3 \
--top-p 0.95 \
--top-k 64 \
--repeat-penalty 1.1 \
-n 128
```
## 🔧 Usage with Ollama
```bash
# Create model from GGUF
ollama create swahili-gemma-1b -f Modelfile
# Use for translation
ollama run swahili-gemma-1b "Translate to Swahili: Good morning!"
# Use for conversation
ollama run swahili-gemma-1b "Hujambo! Je, unaweza kunisaidia?"
```
### Modelfile Example
```dockerfile
FROM swahili-gemma-1b-GGUF/Q4_K_M/swahili-gemma-1b-q4_k_m.gguf
TEMPLATE """<start_of_turn>user
{{ if .System }}{{ .System }} {{ end }}{{ .Prompt }}<end_of_turn>
<start_of_turn>model
{{ .Response }}<end_of_turn>
"""
PARAMETER stop "<start_of_turn>"
PARAMETER stop "<end_of_turn>"
```
## 🐍 Usage with Python (llama-cpp-python)
```python
from llama_cpp import Llama
# Initialize model
llm = Llama(
model_path="swahili-gemma-1b-GGUF/Q4_K_M/swahili-gemma-1b-q4_k_m.gguf",
n_ctx=2048,
n_threads=8,
verbose=False
)
# Generate translation
response = llm(
"Translate to Swahili: Hello, how are you today?",
max_tokens=128,
temperature=0.3,
top_p=0.95,
top_k=64,
repeat_penalty=1.1
)
print(response['choices'][0]['text'])
```
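For conversational use, the same `Llama` object also serves chat-style requests; a minimal sketch (assuming the GGUF embeds a chat template, which `create_chat_completion` applies automatically):
```python
# Reuses the `llm` object initialized above.
messages = [{"role": "user", "content": "Hujambo! Je, unaweza kunisaidia?"}]
chat = llm.create_chat_completion(
    messages=messages,
    max_tokens=128,
    temperature=0.3,
    top_p=0.95,
)
print(chat["choices"][0]["message"]["content"])
```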
## 🌍 Language Capabilities
- **Input Languages**: English + Swahili
- **Output Language**: Swahili only
- **Primary Focus**: English-to-Swahili translation and Swahili conversation
## 📊 Performance Metrics
### Translation Quality (BLEU Scores)
| Model | BLEU Score | chrF++ |
|-------|------------|--------|
| **🥇 Swahili Gemma 1B** | **23.64** | **52.26** |
| 🥈 ChatGPT-4o-latest | [TBD] | [TBD] |
| 🥉 Other Models | [TBD] | [TBD] |
*Evaluated on 1,012 English-to-Swahili translation samples.*
## 🎯 Capabilities
- **Translation**: English-to-Swahili translation
- **Conversational AI**: Natural dialogue in Swahili
- **Summarization**: Text summarization in Swahili
- **Writing**: Creative and informational writing in Swahili
- **Question Answering**: General knowledge responses in Swahili
## 💡 Recommended Parameters
```bash
# Optimal settings for translation tasks
--temp 0.3
--top-p 0.95
--top-k 64
--repeat-penalty 1.1
--ctx-size 2048
```
## 🔗 Related Models
- **Original Model**: [CraneAILabs/swahili-gemma-1b](https://huggingface.co/CraneAILabs/swahili-gemma-1b) - Full precision HuggingFace model
- **LiteRT Mobile**: [CraneAILabs/swahili-gemma-1b-litert](https://huggingface.co/CraneAILabs/swahili-gemma-1b-litert) - Mobile deployment
- **Ollama**: [crane-ai-labs/swahili-gemma-1b](https://ollama.com/crane-ai-labs/swahili-gemma-1b) - Ready-to-run models
## 🛠️ Technical Details
- **Base Model**: google/gemma-3-1b-it
- **Architecture**: Gemma 3
- **Context Length**: 4,096 tokens
- **Quantization**: GGML format with multiple precision levels
- **Compatible**: llama.cpp, Ollama, Jan, LM Studio, and other GGUF engines
## 🎨 Use Cases
- **Offline Translation**: Run Swahili translation without internet
- **Local AI Assistant**: Swahili conversational AI on your machine
- **Educational Tools**: Language learning applications
- **Content Creation**: Generate Swahili content locally
- **Research**: Swahili language model experiments
## ⚠️ Limitations
- **Language Output**: Responds only in Swahili
- **Quantization Trade-offs**: Lower bit quantizations may reduce quality
- **Context Limit**: 4K tokens for optimal performance
- **Specialized Tasks**: May need fine-tuning for specific domains
## 📄 License
This model is released under the [Gemma Terms of Use](https://ai.google.dev/gemma/terms). Please review the terms before use.
## 🙏 Acknowledgments
- **Google**: For the Gemma 3 base model, support and guidance.
- **Community**: For Swahili language resources and datasets
- **Gilbert Korir (Msingi AI, Nairobi, Kenya)**
- **Alfred Malengo Kondoro (Hanyang University, Seoul, South Korea)**
## Citation
If you use these GGUF quantizations in your research or applications, please cite:
```bibtex
@misc{crane_ai_labs_2025,
author = {Bakunga Bronson and Kato Steven Mubiru and Lwanga Caleb and Gimei Alex and Kavuma Lameck and Roland Ganafa and Sibomana Glorry and Atuhaire Collins and JohnRoy Nangeso and Tukamushaba Catherine},
title = {Swahili Gemma: A Fine-tuned Gemma 3 1B Model for Swahili conversational AI},
year = {2025},
url = {https://huggingface.co/CraneAILabs/swahili-gemma-1b},
organization = {Crane AI Labs}
}
```
---
**Built with ❤️ by Crane AI Labs**
*Swahili Gemma - Your helpful Swahili AI companion, optimized for local deployment*
|
SP4ND4N/SmolLM2-360M-2025-08-18_10-01-49-fp8-merged
|
SP4ND4N
| 2025-08-19T05:32:35Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/SmolLM2-360M",
"base_model:finetune:unsloth/SmolLM2-360M",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-19T05:27:34Z |
---
base_model: unsloth/SmolLM2-360M
tags:
- text-generation-inference
- transformers
- unsloth
- llama
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** SP4ND4N
- **License:** apache-2.0
- **Finetuned from model:** unsloth/SmolLM2-360M
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1755581389
|
IvanJAjebu
| 2025-08-19T05:31:13Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thorny slender capybara",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T05:31:05Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thorny slender capybara
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
santhosh/multilingual-e5-base-int8-ov
|
santhosh
| 2025-08-19T05:30:45Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"openvino",
"xlm-roberta",
"mteb",
"Sentence Transformers",
"sentence-similarity",
"multilingual",
"af",
"am",
"ar",
"as",
"az",
"be",
"bg",
"bn",
"br",
"bs",
"ca",
"cs",
"cy",
"da",
"de",
"el",
"en",
"eo",
"es",
"et",
"eu",
"fa",
"fi",
"fr",
"fy",
"ga",
"gd",
"gl",
"gu",
"ha",
"he",
"hi",
"hr",
"hu",
"hy",
"id",
"is",
"it",
"ja",
"jv",
"ka",
"kk",
"km",
"kn",
"ko",
"ku",
"ky",
"la",
"lo",
"lt",
"lv",
"mg",
"mk",
"ml",
"mn",
"mr",
"ms",
"my",
"ne",
"nl",
"no",
"om",
"or",
"pa",
"pl",
"ps",
"pt",
"ro",
"ru",
"sa",
"sd",
"si",
"sk",
"sl",
"so",
"sq",
"sr",
"su",
"sv",
"sw",
"ta",
"te",
"th",
"tl",
"tr",
"ug",
"uk",
"ur",
"uz",
"vi",
"xh",
"yi",
"zh",
"arxiv:2402.05672",
"license:mit",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-08-19T05:12:22Z |
---
tags:
- mteb
- Sentence Transformers
- sentence-similarity
- sentence-transformers
model-index:
- name: multilingual-e5-base-int8-ov
results:
- task:
type: Classification
dataset:
type: mteb/amazon_counterfactual
name: MTEB AmazonCounterfactualClassification (en)
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 78.97014925373135
- type: ap
value: 43.69351129103008
- type: f1
value: 73.38075030070492
- task:
type: Classification
dataset:
type: mteb/amazon_counterfactual
name: MTEB AmazonCounterfactualClassification (de)
config: de
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 71.7237687366167
- type: ap
value: 82.22089859962671
- type: f1
value: 69.95532758884401
- task:
type: Classification
dataset:
type: mteb/amazon_counterfactual
name: MTEB AmazonCounterfactualClassification (en-ext)
config: en-ext
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 79.65517241379312
- type: ap
value: 28.507918657094738
- type: f1
value: 66.84516013726119
- task:
type: Classification
dataset:
type: mteb/amazon_counterfactual
name: MTEB AmazonCounterfactualClassification (ja)
config: ja
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 73.32976445396146
- type: ap
value: 20.720481637566014
- type: f1
value: 59.78002763416003
- task:
type: Classification
dataset:
type: mteb/amazon_polarity
name: MTEB AmazonPolarityClassification
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 90.63775
- type: ap
value: 87.22277903861716
- type: f1
value: 90.60378636386807
- task:
type: Classification
dataset:
type: mteb/amazon_reviews_multi
name: MTEB AmazonReviewsClassification (en)
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 44.546
- type: f1
value: 44.05666638370923
- task:
type: Classification
dataset:
type: mteb/amazon_reviews_multi
name: MTEB AmazonReviewsClassification (de)
config: de
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 41.828
- type: f1
value: 41.2710255644252
- task:
type: Classification
dataset:
type: mteb/amazon_reviews_multi
name: MTEB AmazonReviewsClassification (es)
config: es
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 40.534
- type: f1
value: 39.820743174270326
- task:
type: Classification
dataset:
type: mteb/amazon_reviews_multi
name: MTEB AmazonReviewsClassification (fr)
config: fr
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 39.684
- type: f1
value: 39.11052682815307
- task:
type: Classification
dataset:
type: mteb/amazon_reviews_multi
name: MTEB AmazonReviewsClassification (ja)
config: ja
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 37.436
- type: f1
value: 37.07082931930871
- task:
type: Classification
dataset:
type: mteb/amazon_reviews_multi
name: MTEB AmazonReviewsClassification (zh)
config: zh
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 37.226000000000006
- type: f1
value: 36.65372077739185
- task:
type: Retrieval
dataset:
type: arguana
name: MTEB ArguAna
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 22.831000000000003
- type: map_at_10
value: 36.42
- type: map_at_100
value: 37.699
- type: map_at_1000
value: 37.724000000000004
- type: map_at_3
value: 32.207
- type: map_at_5
value: 34.312
- type: mrr_at_1
value: 23.257
- type: mrr_at_10
value: 36.574
- type: mrr_at_100
value: 37.854
- type: mrr_at_1000
value: 37.878
- type: mrr_at_3
value: 32.385000000000005
- type: mrr_at_5
value: 34.48
- type: ndcg_at_1
value: 22.831000000000003
- type: ndcg_at_10
value: 44.230000000000004
- type: ndcg_at_100
value: 49.974000000000004
- type: ndcg_at_1000
value: 50.522999999999996
- type: ndcg_at_3
value: 35.363
- type: ndcg_at_5
value: 39.164
- type: precision_at_1
value: 22.831000000000003
- type: precision_at_10
value: 6.935
- type: precision_at_100
value: 0.9520000000000001
- type: precision_at_1000
value: 0.099
- type: precision_at_3
value: 14.841
- type: precision_at_5
value: 10.754
- type: recall_at_1
value: 22.831000000000003
- type: recall_at_10
value: 69.346
- type: recall_at_100
value: 95.235
- type: recall_at_1000
value: 99.36
- type: recall_at_3
value: 44.523
- type: recall_at_5
value: 53.769999999999996
- task:
type: Clustering
dataset:
type: mteb/arxiv-clustering-p2p
name: MTEB ArxivClusteringP2P
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 40.27789869854063
- task:
type: Clustering
dataset:
type: mteb/arxiv-clustering-s2s
name: MTEB ArxivClusteringS2S
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 35.41979463347428
- task:
type: Reranking
dataset:
type: mteb/askubuntudupquestions-reranking
name: MTEB AskUbuntuDupQuestions
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 58.22752045109304
- type: mrr
value: 71.51112430198303
- task:
type: STS
dataset:
type: mteb/biosses-sts
name: MTEB BIOSSES
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 84.71147646622866
- type: cos_sim_spearman
value: 85.059167046486
- type: euclidean_pearson
value: 75.88421613600647
- type: euclidean_spearman
value: 75.12821787150585
- type: manhattan_pearson
value: 75.22005646957604
- type: manhattan_spearman
value: 74.42880434453272
- task:
type: BitextMining
dataset:
type: mteb/bucc-bitext-mining
name: MTEB BUCC (de-en)
config: de-en
split: test
revision: d51519689f32196a32af33b075a01d0e7c51e252
metrics:
- type: accuracy
value: 99.23799582463465
- type: f1
value: 99.12665274878218
- type: precision
value: 99.07098121085595
- type: recall
value: 99.23799582463465
- task:
type: BitextMining
dataset:
type: mteb/bucc-bitext-mining
name: MTEB BUCC (fr-en)
config: fr-en
split: test
revision: d51519689f32196a32af33b075a01d0e7c51e252
metrics:
- type: accuracy
value: 97.88685890380806
- type: f1
value: 97.59336708489249
- type: precision
value: 97.44662117543473
- type: recall
value: 97.88685890380806
- task:
type: BitextMining
dataset:
type: mteb/bucc-bitext-mining
name: MTEB BUCC (ru-en)
config: ru-en
split: test
revision: d51519689f32196a32af33b075a01d0e7c51e252
metrics:
- type: accuracy
value: 97.47142362313821
- type: f1
value: 97.1989377670015
- type: precision
value: 97.06384944001847
- type: recall
value: 97.47142362313821
- task:
type: BitextMining
dataset:
type: mteb/bucc-bitext-mining
name: MTEB BUCC (zh-en)
config: zh-en
split: test
revision: d51519689f32196a32af33b075a01d0e7c51e252
metrics:
- type: accuracy
value: 98.4728804634018
- type: f1
value: 98.2973494821836
- type: precision
value: 98.2095839915745
- type: recall
value: 98.4728804634018
- task:
type: Classification
dataset:
type: mteb/banking77
name: MTEB Banking77Classification
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 82.74025974025975
- type: f1
value: 82.67420447730439
- task:
type: Clustering
dataset:
type: mteb/biorxiv-clustering-p2p
name: MTEB BiorxivClusteringP2P
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 35.0380848063507
- task:
type: Clustering
dataset:
type: mteb/biorxiv-clustering-s2s
name: MTEB BiorxivClusteringS2S
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 29.45956405670166
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackAndroidRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 32.122
- type: map_at_10
value: 42.03
- type: map_at_100
value: 43.364000000000004
- type: map_at_1000
value: 43.474000000000004
- type: map_at_3
value: 38.804
- type: map_at_5
value: 40.585
- type: mrr_at_1
value: 39.914
- type: mrr_at_10
value: 48.227
- type: mrr_at_100
value: 49.018
- type: mrr_at_1000
value: 49.064
- type: mrr_at_3
value: 45.994
- type: mrr_at_5
value: 47.396
- type: ndcg_at_1
value: 39.914
- type: ndcg_at_10
value: 47.825
- type: ndcg_at_100
value: 52.852
- type: ndcg_at_1000
value: 54.891
- type: ndcg_at_3
value: 43.517
- type: ndcg_at_5
value: 45.493
- type: precision_at_1
value: 39.914
- type: precision_at_10
value: 8.956
- type: precision_at_100
value: 1.388
- type: precision_at_1000
value: 0.182
- type: precision_at_3
value: 20.791999999999998
- type: precision_at_5
value: 14.821000000000002
- type: recall_at_1
value: 32.122
- type: recall_at_10
value: 58.294999999999995
- type: recall_at_100
value: 79.726
- type: recall_at_1000
value: 93.099
- type: recall_at_3
value: 45.017
- type: recall_at_5
value: 51.002
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackEnglishRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 29.677999999999997
- type: map_at_10
value: 38.684000000000005
- type: map_at_100
value: 39.812999999999995
- type: map_at_1000
value: 39.945
- type: map_at_3
value: 35.831
- type: map_at_5
value: 37.446
- type: mrr_at_1
value: 37.771
- type: mrr_at_10
value: 44.936
- type: mrr_at_100
value: 45.583
- type: mrr_at_1000
value: 45.634
- type: mrr_at_3
value: 42.771
- type: mrr_at_5
value: 43.994
- type: ndcg_at_1
value: 37.771
- type: ndcg_at_10
value: 44.059
- type: ndcg_at_100
value: 48.192
- type: ndcg_at_1000
value: 50.375
- type: ndcg_at_3
value: 40.172000000000004
- type: ndcg_at_5
value: 41.899
- type: precision_at_1
value: 37.771
- type: precision_at_10
value: 8.286999999999999
- type: precision_at_100
value: 1.322
- type: precision_at_1000
value: 0.178
- type: precision_at_3
value: 19.406000000000002
- type: precision_at_5
value: 13.745
- type: recall_at_1
value: 29.677999999999997
- type: recall_at_10
value: 53.071
- type: recall_at_100
value: 70.812
- type: recall_at_1000
value: 84.841
- type: recall_at_3
value: 41.016000000000005
- type: recall_at_5
value: 46.22
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackGamingRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 42.675000000000004
- type: map_at_10
value: 53.93599999999999
- type: map_at_100
value: 54.806999999999995
- type: map_at_1000
value: 54.867
- type: map_at_3
value: 50.934000000000005
- type: map_at_5
value: 52.583
- type: mrr_at_1
value: 48.339
- type: mrr_at_10
value: 57.265
- type: mrr_at_100
value: 57.873
- type: mrr_at_1000
value: 57.906
- type: mrr_at_3
value: 55.193000000000005
- type: mrr_at_5
value: 56.303000000000004
- type: ndcg_at_1
value: 48.339
- type: ndcg_at_10
value: 59.19799999999999
- type: ndcg_at_100
value: 62.743
- type: ndcg_at_1000
value: 63.99399999999999
- type: ndcg_at_3
value: 54.367
- type: ndcg_at_5
value: 56.548
- type: precision_at_1
value: 48.339
- type: precision_at_10
value: 9.216000000000001
- type: precision_at_100
value: 1.1809999999999998
- type: precision_at_1000
value: 0.134
- type: precision_at_3
value: 23.72
- type: precision_at_5
value: 16.025
- type: recall_at_1
value: 42.675000000000004
- type: recall_at_10
value: 71.437
- type: recall_at_100
value: 86.803
- type: recall_at_1000
value: 95.581
- type: recall_at_3
value: 58.434
- type: recall_at_5
value: 63.754
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackGisRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 23.518
- type: map_at_10
value: 30.648999999999997
- type: map_at_100
value: 31.508999999999997
- type: map_at_1000
value: 31.604
- type: map_at_3
value: 28.247
- type: map_at_5
value: 29.65
- type: mrr_at_1
value: 25.650000000000002
- type: mrr_at_10
value: 32.771
- type: mrr_at_100
value: 33.554
- type: mrr_at_1000
value: 33.629999999999995
- type: mrr_at_3
value: 30.433
- type: mrr_at_5
value: 31.812
- type: ndcg_at_1
value: 25.650000000000002
- type: ndcg_at_10
value: 34.929
- type: ndcg_at_100
value: 39.382
- type: ndcg_at_1000
value: 41.913
- type: ndcg_at_3
value: 30.292
- type: ndcg_at_5
value: 32.629999999999995
- type: precision_at_1
value: 25.650000000000002
- type: precision_at_10
value: 5.311
- type: precision_at_100
value: 0.792
- type: precision_at_1000
value: 0.105
- type: precision_at_3
value: 12.58
- type: precision_at_5
value: 8.994
- type: recall_at_1
value: 23.518
- type: recall_at_10
value: 46.19
- type: recall_at_100
value: 67.123
- type: recall_at_1000
value: 86.442
- type: recall_at_3
value: 33.678000000000004
- type: recall_at_5
value: 39.244
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackMathematicaRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 15.891
- type: map_at_10
value: 22.464000000000002
- type: map_at_100
value: 23.483
- type: map_at_1000
value: 23.613
- type: map_at_3
value: 20.080000000000002
- type: map_at_5
value: 21.526
- type: mrr_at_1
value: 20.025000000000002
- type: mrr_at_10
value: 26.712999999999997
- type: mrr_at_100
value: 27.650000000000002
- type: mrr_at_1000
value: 27.737000000000002
- type: mrr_at_3
value: 24.274
- type: mrr_at_5
value: 25.711000000000002
- type: ndcg_at_1
value: 20.025000000000002
- type: ndcg_at_10
value: 27.028999999999996
- type: ndcg_at_100
value: 32.064
- type: ndcg_at_1000
value: 35.188
- type: ndcg_at_3
value: 22.512999999999998
- type: ndcg_at_5
value: 24.89
- type: precision_at_1
value: 20.025000000000002
- type: precision_at_10
value: 4.776
- type: precision_at_100
value: 0.8500000000000001
- type: precision_at_1000
value: 0.125
- type: precision_at_3
value: 10.531
- type: precision_at_5
value: 7.811
- type: recall_at_1
value: 15.891
- type: recall_at_10
value: 37.261
- type: recall_at_100
value: 59.12
- type: recall_at_1000
value: 81.356
- type: recall_at_3
value: 24.741
- type: recall_at_5
value: 30.753999999999998
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackPhysicsRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 27.544
- type: map_at_10
value: 36.283
- type: map_at_100
value: 37.467
- type: map_at_1000
value: 37.574000000000005
- type: map_at_3
value: 33.528999999999996
- type: map_at_5
value: 35.028999999999996
- type: mrr_at_1
value: 34.166999999999994
- type: mrr_at_10
value: 41.866
- type: mrr_at_100
value: 42.666
- type: mrr_at_1000
value: 42.716
- type: mrr_at_3
value: 39.541
- type: mrr_at_5
value: 40.768
- type: ndcg_at_1
value: 34.166999999999994
- type: ndcg_at_10
value: 41.577
- type: ndcg_at_100
value: 46.687
- type: ndcg_at_1000
value: 48.967
- type: ndcg_at_3
value: 37.177
- type: ndcg_at_5
value: 39.097
- type: precision_at_1
value: 34.166999999999994
- type: precision_at_10
value: 7.420999999999999
- type: precision_at_100
value: 1.165
- type: precision_at_1000
value: 0.154
- type: precision_at_3
value: 17.291999999999998
- type: precision_at_5
value: 12.166
- type: recall_at_1
value: 27.544
- type: recall_at_10
value: 51.99399999999999
- type: recall_at_100
value: 73.738
- type: recall_at_1000
value: 89.33
- type: recall_at_3
value: 39.179
- type: recall_at_5
value: 44.385999999999996
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackProgrammersRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 26.661
- type: map_at_10
value: 35.475
- type: map_at_100
value: 36.626999999999995
- type: map_at_1000
value: 36.741
- type: map_at_3
value: 32.818000000000005
- type: map_at_5
value: 34.397
- type: mrr_at_1
value: 32.647999999999996
- type: mrr_at_10
value: 40.784
- type: mrr_at_100
value: 41.602
- type: mrr_at_1000
value: 41.661
- type: mrr_at_3
value: 38.68
- type: mrr_at_5
value: 39.838
- type: ndcg_at_1
value: 32.647999999999996
- type: ndcg_at_10
value: 40.697
- type: ndcg_at_100
value: 45.799
- type: ndcg_at_1000
value: 48.235
- type: ndcg_at_3
value: 36.516
- type: ndcg_at_5
value: 38.515
- type: precision_at_1
value: 32.647999999999996
- type: precision_at_10
value: 7.202999999999999
- type: precision_at_100
value: 1.1360000000000001
- type: precision_at_1000
value: 0.151
- type: precision_at_3
value: 17.314
- type: precision_at_5
value: 12.145999999999999
- type: recall_at_1
value: 26.661
- type: recall_at_10
value: 50.995000000000005
- type: recall_at_100
value: 73.065
- type: recall_at_1000
value: 89.781
- type: recall_at_3
value: 39.073
- type: recall_at_5
value: 44.395
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 25.946583333333333
- type: map_at_10
value: 33.79725
- type: map_at_100
value: 34.86408333333333
- type: map_at_1000
value: 34.9795
- type: map_at_3
value: 31.259999999999998
- type: map_at_5
value: 32.71541666666666
- type: mrr_at_1
value: 30.863749999999996
- type: mrr_at_10
value: 37.99183333333333
- type: mrr_at_100
value: 38.790499999999994
- type: mrr_at_1000
value: 38.85575000000001
- type: mrr_at_3
value: 35.82083333333333
- type: mrr_at_5
value: 37.07533333333333
- type: ndcg_at_1
value: 30.863749999999996
- type: ndcg_at_10
value: 38.52141666666667
- type: ndcg_at_100
value: 43.17966666666667
- type: ndcg_at_1000
value: 45.64608333333333
- type: ndcg_at_3
value: 34.333000000000006
- type: ndcg_at_5
value: 36.34975
- type: precision_at_1
value: 30.863749999999996
- type: precision_at_10
value: 6.598999999999999
- type: precision_at_100
value: 1.0502500000000001
- type: precision_at_1000
value: 0.14400000000000002
- type: precision_at_3
value: 15.557583333333334
- type: precision_at_5
value: 11.020000000000001
- type: recall_at_1
value: 25.946583333333333
- type: recall_at_10
value: 48.36991666666666
- type: recall_at_100
value: 69.02408333333334
- type: recall_at_1000
value: 86.43858333333331
- type: recall_at_3
value: 36.4965
- type: recall_at_5
value: 41.76258333333334
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackStatsRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 22.431
- type: map_at_10
value: 28.889
- type: map_at_100
value: 29.642000000000003
- type: map_at_1000
value: 29.742
- type: map_at_3
value: 26.998
- type: map_at_5
value: 28.172000000000004
- type: mrr_at_1
value: 25.307000000000002
- type: mrr_at_10
value: 31.763
- type: mrr_at_100
value: 32.443
- type: mrr_at_1000
value: 32.531
- type: mrr_at_3
value: 29.959000000000003
- type: mrr_at_5
value: 31.063000000000002
- type: ndcg_at_1
value: 25.307000000000002
- type: ndcg_at_10
value: 32.586999999999996
- type: ndcg_at_100
value: 36.5
- type: ndcg_at_1000
value: 39.133
- type: ndcg_at_3
value: 29.25
- type: ndcg_at_5
value: 31.023
- type: precision_at_1
value: 25.307000000000002
- type: precision_at_10
value: 4.954
- type: precision_at_100
value: 0.747
- type: precision_at_1000
value: 0.104
- type: precision_at_3
value: 12.577
- type: precision_at_5
value: 8.741999999999999
- type: recall_at_1
value: 22.431
- type: recall_at_10
value: 41.134
- type: recall_at_100
value: 59.28600000000001
- type: recall_at_1000
value: 78.857
- type: recall_at_3
value: 31.926
- type: recall_at_5
value: 36.335
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackTexRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 17.586
- type: map_at_10
value: 23.304
- type: map_at_100
value: 24.159
- type: map_at_1000
value: 24.281
- type: map_at_3
value: 21.316
- type: map_at_5
value: 22.383
- type: mrr_at_1
value: 21.645
- type: mrr_at_10
value: 27.365000000000002
- type: mrr_at_100
value: 28.108
- type: mrr_at_1000
value: 28.192
- type: mrr_at_3
value: 25.482
- type: mrr_at_5
value: 26.479999999999997
- type: ndcg_at_1
value: 21.645
- type: ndcg_at_10
value: 27.306
- type: ndcg_at_100
value: 31.496000000000002
- type: ndcg_at_1000
value: 34.53
- type: ndcg_at_3
value: 23.73
- type: ndcg_at_5
value: 25.294
- type: precision_at_1
value: 21.645
- type: precision_at_10
value: 4.797
- type: precision_at_100
value: 0.8059999999999999
- type: precision_at_1000
value: 0.121
- type: precision_at_3
value: 10.850999999999999
- type: precision_at_5
value: 7.736
- type: recall_at_1
value: 17.586
- type: recall_at_10
value: 35.481
- type: recall_at_100
value: 54.534000000000006
- type: recall_at_1000
value: 76.456
- type: recall_at_3
value: 25.335
- type: recall_at_5
value: 29.473
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackUnixRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 25.095
- type: map_at_10
value: 32.374
- type: map_at_100
value: 33.537
- type: map_at_1000
value: 33.634
- type: map_at_3
value: 30.089
- type: map_at_5
value: 31.433
- type: mrr_at_1
value: 29.198
- type: mrr_at_10
value: 36.01
- type: mrr_at_100
value: 37.022
- type: mrr_at_1000
value: 37.083
- type: mrr_at_3
value: 33.94
- type: mrr_at_5
value: 35.148
- type: ndcg_at_1
value: 29.198
- type: ndcg_at_10
value: 36.729
- type: ndcg_at_100
value: 42.114000000000004
- type: ndcg_at_1000
value: 44.592
- type: ndcg_at_3
value: 32.644
- type: ndcg_at_5
value: 34.652
- type: precision_at_1
value: 29.198
- type: precision_at_10
value: 5.970000000000001
- type: precision_at_100
value: 0.967
- type: precision_at_1000
value: 0.129
- type: precision_at_3
value: 14.396999999999998
- type: precision_at_5
value: 10.093
- type: recall_at_1
value: 25.095
- type: recall_at_10
value: 46.392
- type: recall_at_100
value: 69.706
- type: recall_at_1000
value: 87.738
- type: recall_at_3
value: 35.303000000000004
- type: recall_at_5
value: 40.441
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackWebmastersRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 26.857999999999997
- type: map_at_10
value: 34.066
- type: map_at_100
value: 35.671
- type: map_at_1000
value: 35.881
- type: map_at_3
value: 31.304
- type: map_at_5
value: 32.885
- type: mrr_at_1
value: 32.411
- type: mrr_at_10
value: 38.987
- type: mrr_at_100
value: 39.894
- type: mrr_at_1000
value: 39.959
- type: mrr_at_3
value: 36.626999999999995
- type: mrr_at_5
value: 38.011
- type: ndcg_at_1
value: 32.411
- type: ndcg_at_10
value: 39.208
- type: ndcg_at_100
value: 44.626
- type: ndcg_at_1000
value: 47.43
- type: ndcg_at_3
value: 35.091
- type: ndcg_at_5
value: 37.119
- type: precision_at_1
value: 32.411
- type: precision_at_10
value: 7.51
- type: precision_at_100
value: 1.486
- type: precision_at_1000
value: 0.234
- type: precision_at_3
value: 16.14
- type: precision_at_5
value: 11.976
- type: recall_at_1
value: 26.857999999999997
- type: recall_at_10
value: 47.407
- type: recall_at_100
value: 72.236
- type: recall_at_1000
value: 90.77
- type: recall_at_3
value: 35.125
- type: recall_at_5
value: 40.522999999999996
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackWordpressRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 21.3
- type: map_at_10
value: 27.412999999999997
- type: map_at_100
value: 28.29
- type: map_at_1000
value: 28.398
- type: map_at_3
value: 25.169999999999998
- type: map_at_5
value: 26.496
- type: mrr_at_1
value: 23.29
- type: mrr_at_10
value: 29.215000000000003
- type: mrr_at_100
value: 30.073
- type: mrr_at_1000
value: 30.156
- type: mrr_at_3
value: 26.956000000000003
- type: mrr_at_5
value: 28.38
- type: ndcg_at_1
value: 23.29
- type: ndcg_at_10
value: 31.113000000000003
- type: ndcg_at_100
value: 35.701
- type: ndcg_at_1000
value: 38.505
- type: ndcg_at_3
value: 26.727
- type: ndcg_at_5
value: 29.037000000000003
- type: precision_at_1
value: 23.29
- type: precision_at_10
value: 4.787
- type: precision_at_100
value: 0.763
- type: precision_at_1000
value: 0.11100000000000002
- type: precision_at_3
value: 11.091
- type: precision_at_5
value: 7.985
- type: recall_at_1
value: 21.3
- type: recall_at_10
value: 40.782000000000004
- type: recall_at_100
value: 62.13999999999999
- type: recall_at_1000
value: 83.012
- type: recall_at_3
value: 29.131
- type: recall_at_5
value: 34.624
- task:
type: Retrieval
dataset:
type: climate-fever
name: MTEB ClimateFEVER
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 9.631
- type: map_at_10
value: 16.634999999999998
- type: map_at_100
value: 18.23
- type: map_at_1000
value: 18.419
- type: map_at_3
value: 13.66
- type: map_at_5
value: 15.173
- type: mrr_at_1
value: 21.368000000000002
- type: mrr_at_10
value: 31.56
- type: mrr_at_100
value: 32.58
- type: mrr_at_1000
value: 32.633
- type: mrr_at_3
value: 28.241
- type: mrr_at_5
value: 30.225
- type: ndcg_at_1
value: 21.368000000000002
- type: ndcg_at_10
value: 23.855999999999998
- type: ndcg_at_100
value: 30.686999999999998
- type: ndcg_at_1000
value: 34.327000000000005
- type: ndcg_at_3
value: 18.781
- type: ndcg_at_5
value: 20.73
- type: precision_at_1
value: 21.368000000000002
- type: precision_at_10
value: 7.564
- type: precision_at_100
value: 1.496
- type: precision_at_1000
value: 0.217
- type: precision_at_3
value: 13.876
- type: precision_at_5
value: 11.062
- type: recall_at_1
value: 9.631
- type: recall_at_10
value: 29.517
- type: recall_at_100
value: 53.452
- type: recall_at_1000
value: 74.115
- type: recall_at_3
value: 17.605999999999998
- type: recall_at_5
value: 22.505
- task:
type: Retrieval
dataset:
type: dbpedia-entity
name: MTEB DBPedia
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 8.885
- type: map_at_10
value: 18.798000000000002
- type: map_at_100
value: 26.316
- type: map_at_1000
value: 27.869
- type: map_at_3
value: 13.719000000000001
- type: map_at_5
value: 15.716
- type: mrr_at_1
value: 66
- type: mrr_at_10
value: 74.263
- type: mrr_at_100
value: 74.519
- type: mrr_at_1000
value: 74.531
- type: mrr_at_3
value: 72.458
- type: mrr_at_5
value: 73.321
- type: ndcg_at_1
value: 53.87499999999999
- type: ndcg_at_10
value: 40.355999999999995
- type: ndcg_at_100
value: 44.366
- type: ndcg_at_1000
value: 51.771
- type: ndcg_at_3
value: 45.195
- type: ndcg_at_5
value: 42.187000000000005
- type: precision_at_1
value: 66
- type: precision_at_10
value: 31.75
- type: precision_at_100
value: 10.11
- type: precision_at_1000
value: 1.9800000000000002
- type: precision_at_3
value: 48.167
- type: precision_at_5
value: 40.050000000000004
- type: recall_at_1
value: 8.885
- type: recall_at_10
value: 24.471999999999998
- type: recall_at_100
value: 49.669000000000004
- type: recall_at_1000
value: 73.383
- type: recall_at_3
value: 14.872
- type: recall_at_5
value: 18.262999999999998
- task:
type: Classification
dataset:
type: mteb/emotion
name: MTEB EmotionClassification
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 45.18
- type: f1
value: 40.26878691789978
- task:
type: Retrieval
dataset:
type: fever
name: MTEB FEVER
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 62.751999999999995
- type: map_at_10
value: 74.131
- type: map_at_100
value: 74.407
- type: map_at_1000
value: 74.423
- type: map_at_3
value: 72.329
- type: map_at_5
value: 73.555
- type: mrr_at_1
value: 67.282
- type: mrr_at_10
value: 78.292
- type: mrr_at_100
value: 78.455
- type: mrr_at_1000
value: 78.458
- type: mrr_at_3
value: 76.755
- type: mrr_at_5
value: 77.839
- type: ndcg_at_1
value: 67.282
- type: ndcg_at_10
value: 79.443
- type: ndcg_at_100
value: 80.529
- type: ndcg_at_1000
value: 80.812
- type: ndcg_at_3
value: 76.281
- type: ndcg_at_5
value: 78.235
- type: precision_at_1
value: 67.282
- type: precision_at_10
value: 10.078
- type: precision_at_100
value: 1.082
- type: precision_at_1000
value: 0.11199999999999999
- type: precision_at_3
value: 30.178
- type: precision_at_5
value: 19.232
- type: recall_at_1
value: 62.751999999999995
- type: recall_at_10
value: 91.521
- type: recall_at_100
value: 95.997
- type: recall_at_1000
value: 97.775
- type: recall_at_3
value: 83.131
- type: recall_at_5
value: 87.93299999999999
- task:
type: Retrieval
dataset:
type: fiqa
name: MTEB FiQA2018
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 18.861
- type: map_at_10
value: 30.252000000000002
- type: map_at_100
value: 32.082
- type: map_at_1000
value: 32.261
- type: map_at_3
value: 25.909
- type: map_at_5
value: 28.296
- type: mrr_at_1
value: 37.346000000000004
- type: mrr_at_10
value: 45.802
- type: mrr_at_100
value: 46.611999999999995
- type: mrr_at_1000
value: 46.659
- type: mrr_at_3
value: 43.056
- type: mrr_at_5
value: 44.637
- type: ndcg_at_1
value: 37.346000000000004
- type: ndcg_at_10
value: 38.169
- type: ndcg_at_100
value: 44.864
- type: ndcg_at_1000
value: 47.974
- type: ndcg_at_3
value: 33.619
- type: ndcg_at_5
value: 35.317
- type: precision_at_1
value: 37.346000000000004
- type: precision_at_10
value: 10.693999999999999
- type: precision_at_100
value: 1.775
- type: precision_at_1000
value: 0.231
- type: precision_at_3
value: 22.325
- type: precision_at_5
value: 16.852
- type: recall_at_1
value: 18.861
- type: recall_at_10
value: 45.672000000000004
- type: recall_at_100
value: 70.60499999999999
- type: recall_at_1000
value: 89.216
- type: recall_at_3
value: 30.361
- type: recall_at_5
value: 36.998999999999995
- task:
type: Retrieval
dataset:
type: hotpotqa
name: MTEB HotpotQA
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 37.852999999999994
- type: map_at_10
value: 59.961
- type: map_at_100
value: 60.78
- type: map_at_1000
value: 60.843
- type: map_at_3
value: 56.39999999999999
- type: map_at_5
value: 58.646
- type: mrr_at_1
value: 75.70599999999999
- type: mrr_at_10
value: 82.321
- type: mrr_at_100
value: 82.516
- type: mrr_at_1000
value: 82.525
- type: mrr_at_3
value: 81.317
- type: mrr_at_5
value: 81.922
- type: ndcg_at_1
value: 75.70599999999999
- type: ndcg_at_10
value: 68.557
- type: ndcg_at_100
value: 71.485
- type: ndcg_at_1000
value: 72.71600000000001
- type: ndcg_at_3
value: 63.524
- type: ndcg_at_5
value: 66.338
- type: precision_at_1
value: 75.70599999999999
- type: precision_at_10
value: 14.463000000000001
- type: precision_at_100
value: 1.677
- type: precision_at_1000
value: 0.184
- type: precision_at_3
value: 40.806
- type: precision_at_5
value: 26.709
- type: recall_at_1
value: 37.852999999999994
- type: recall_at_10
value: 72.316
- type: recall_at_100
value: 83.842
- type: recall_at_1000
value: 91.999
- type: recall_at_3
value: 61.209
- type: recall_at_5
value: 66.77199999999999
- task:
type: Classification
dataset:
type: mteb/imdb
name: MTEB ImdbClassification
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 85.46039999999999
- type: ap
value: 79.9812521351881
- type: f1
value: 85.31722909702084
- task:
type: Retrieval
dataset:
type: msmarco
name: MTEB MSMARCO
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 22.704
- type: map_at_10
value: 35.329
- type: map_at_100
value: 36.494
- type: map_at_1000
value: 36.541000000000004
- type: map_at_3
value: 31.476
- type: map_at_5
value: 33.731
- type: mrr_at_1
value: 23.294999999999998
- type: mrr_at_10
value: 35.859
- type: mrr_at_100
value: 36.968
- type: mrr_at_1000
value: 37.008
- type: mrr_at_3
value: 32.085
- type: mrr_at_5
value: 34.299
- type: ndcg_at_1
value: 23.324
- type: ndcg_at_10
value: 42.274
- type: ndcg_at_100
value: 47.839999999999996
- type: ndcg_at_1000
value: 48.971
- type: ndcg_at_3
value: 34.454
- type: ndcg_at_5
value: 38.464
- type: precision_at_1
value: 23.324
- type: precision_at_10
value: 6.648
- type: precision_at_100
value: 0.9440000000000001
- type: precision_at_1000
value: 0.104
- type: precision_at_3
value: 14.674999999999999
- type: precision_at_5
value: 10.850999999999999
- type: recall_at_1
value: 22.704
- type: recall_at_10
value: 63.660000000000004
- type: recall_at_100
value: 89.29899999999999
- type: recall_at_1000
value: 97.88900000000001
- type: recall_at_3
value: 42.441
- type: recall_at_5
value: 52.04
- task:
type: Classification
dataset:
type: mteb/mtop_domain
name: MTEB MTOPDomainClassification (en)
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 93.1326949384405
- type: f1
value: 92.89743579612082
- task:
type: Classification
dataset:
type: mteb/mtop_domain
name: MTEB MTOPDomainClassification (de)
config: de
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 89.62524654832347
- type: f1
value: 88.65106082263151
- task:
type: Classification
dataset:
type: mteb/mtop_domain
name: MTEB MTOPDomainClassification (es)
config: es
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 90.59039359573046
- type: f1
value: 90.31532892105662
- task:
type: Classification
dataset:
type: mteb/mtop_domain
name: MTEB MTOPDomainClassification (fr)
config: fr
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 86.21046038208581
- type: f1
value: 86.41459529813113
- task:
type: Classification
dataset:
type: mteb/mtop_domain
name: MTEB MTOPDomainClassification (hi)
config: hi
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 87.3180351380423
- type: f1
value: 86.71383078226444
- task:
type: Classification
dataset:
type: mteb/mtop_domain
name: MTEB MTOPDomainClassification (th)
config: th
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 86.24231464737792
- type: f1
value: 86.31845567592403
- task:
type: Classification
dataset:
type: mteb/mtop_intent
name: MTEB MTOPIntentClassification (en)
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 75.27131782945736
- type: f1
value: 57.52079940417103
- task:
type: Classification
dataset:
type: mteb/mtop_intent
name: MTEB MTOPIntentClassification (de)
config: de
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 71.2341504649197
- type: f1
value: 51.349951558039244
- task:
type: Classification
dataset:
type: mteb/mtop_intent
name: MTEB MTOPIntentClassification (es)
config: es
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 71.27418278852569
- type: f1
value: 50.1714985749095
- task:
type: Classification
dataset:
type: mteb/mtop_intent
name: MTEB MTOPIntentClassification (fr)
config: fr
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 67.68243031631694
- type: f1
value: 50.1066160836192
- task:
type: Classification
dataset:
type: mteb/mtop_intent
name: MTEB MTOPIntentClassification (hi)
config: hi
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 69.2362854069559
- type: f1
value: 48.821279948766424
- task:
type: Classification
dataset:
type: mteb/mtop_intent
name: MTEB MTOPIntentClassification (th)
config: th
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 71.71428571428571
- type: f1
value: 53.94611389496195
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (af)
config: af
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 59.97646267652992
- type: f1
value: 57.26797883561521
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (am)
config: am
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 53.65501008742435
- type: f1
value: 50.416258382177034
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (ar)
config: ar
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 57.45796906523201
- type: f1
value: 53.306690547422185
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (az)
config: az
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 62.59246805648957
- type: f1
value: 59.818381969051494
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (bn)
config: bn
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 61.126429051782104
- type: f1
value: 58.25993593933026
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (cy)
config: cy
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 50.057162071284466
- type: f1
value: 46.96095728790911
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (da)
config: da
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 66.64425016812375
- type: f1
value: 62.858291698755764
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (de)
config: de
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 66.08944182918628
- type: f1
value: 62.44639030604241
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (el)
config: el
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 64.68056489576328
- type: f1
value: 61.775326758789504
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (en)
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 72.11163416274377
- type: f1
value: 69.70789096927015
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (es)
config: es
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 68.40282447881641
- type: f1
value: 66.38492065671895
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (fa)
config: fa
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 67.24613315400134
- type: f1
value: 64.3348019501336
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (fi)
config: fi
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 65.78345662407531
- type: f1
value: 62.21279452354622
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (fr)
config: fr
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 67.9455279085407
- type: f1
value: 65.48193124964094
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (he)
config: he
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 62.05110961667788
- type: f1
value: 58.097856564684534
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (hi)
config: hi
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 64.95292535305985
- type: f1
value: 62.09182174767901
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (hu)
config: hu
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 64.97310020174848
- type: f1
value: 61.14252567730396
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (hy)
config: hy
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 60.08069939475453
- type: f1
value: 57.044041742492034
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (id)
config: id
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 66.63752521856085
- type: f1
value: 63.889340907205316
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (is)
config: is
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 56.385339609952936
- type: f1
value: 53.449033750088304
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (it)
config: it
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 68.93073301950234
- type: f1
value: 65.9884357824104
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (ja)
config: ja
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 68.94418291862812
- type: f1
value: 66.48740222583132
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (jv)
config: jv
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 54.26025554808339
- type: f1
value: 50.19562815100793
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (ka)
config: ka
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 48.98789509078682
- type: f1
value: 46.65788438676836
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (km)
config: km
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 44.68728984532616
- type: f1
value: 41.642419349541996
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (kn)
config: kn
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 59.19300605245461
- type: f1
value: 55.8626492442437
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (ko)
config: ko
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 66.33826496301278
- type: f1
value: 63.89499791648792
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (lv)
config: lv
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 60.33960995292536
- type: f1
value: 57.15242464180892
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (ml)
config: ml
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 63.09347679892402
- type: f1
value: 59.64733214063841
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (mn)
config: mn
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 58.75924680564896
- type: f1
value: 55.96585692366827
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (ms)
config: ms
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 62.48486886348352
- type: f1
value: 59.45143559032946
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (my)
config: my
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 58.56422326832549
- type: f1
value: 54.96368702901926
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (nb)
config: nb
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 66.18022864828512
- type: f1
value: 63.05369805040634
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (nl)
config: nl
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 67.30329522528581
- type: f1
value: 64.06084612020727
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (pl)
config: pl
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 68.36919973100201
- type: f1
value: 65.12154124788887
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (pt)
config: pt
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 68.98117014122394
- type: f1
value: 66.41847559806962
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (ro)
config: ro
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 65.53799596503026
- type: f1
value: 62.17067330740817
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (ru)
config: ru
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 69.01815736381977
- type: f1
value: 66.24988369607843
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (sl)
config: sl
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 62.34700739744452
- type: f1
value: 59.957933424941636
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (sq)
config: sq
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 61.23402824478815
- type: f1
value: 57.98836976018471
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (sv)
config: sv
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 68.54068594485541
- type: f1
value: 65.43849680666855
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (sw)
config: sw
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 55.998655010087425
- type: f1
value: 52.83737515406804
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (ta)
config: ta
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 58.71217215870882
- type: f1
value: 55.051794977833026
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (te)
config: te
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 59.724277067921996
- type: f1
value: 56.33485571838306
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (th)
config: th
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 65.59515803631473
- type: f1
value: 64.96772366193588
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (tl)
config: tl
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 60.860793544048406
- type: f1
value: 58.148845819115394
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (tr)
config: tr
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 67.40753194351043
- type: f1
value: 63.18903778054698
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (ur)
config: ur
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 61.52320107599194
- type: f1
value: 58.356144563398516
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (vi)
config: vi
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 66.17014122394083
- type: f1
value: 63.919964062638925
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (zh-CN)
config: zh-CN
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 69.15601882985878
- type: f1
value: 67.01451905761371
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (zh-TW)
config: zh-TW
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 64.65030262273034
- type: f1
value: 64.14420425129063
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (af)
config: af
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 65.08742434431743
- type: f1
value: 63.044060042311756
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (am)
config: am
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 58.52387357094821
- type: f1
value: 56.82398588814534
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (ar)
config: ar
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 62.239408204438476
- type: f1
value: 61.92570286170469
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (az)
config: az
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 63.74915938130463
- type: f1
value: 62.130740689396276
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (bn)
config: bn
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 65.00336247478144
- type: f1
value: 63.71080635228055
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (cy)
config: cy
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 52.837928715534645
- type: f1
value: 50.390741680320836
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (da)
config: da
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 72.42098184263618
- type: f1
value: 71.41355113538995
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (de)
config: de
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 71.95359784801613
- type: f1
value: 71.42699340156742
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (el)
config: el
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 70.18157363819772
- type: f1
value: 69.74836113037671
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (en)
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 77.08137188971082
- type: f1
value: 76.78000685068261
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (es)
config: es
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 71.5030262273033
- type: f1
value: 71.71620130425673
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (fa)
config: fa
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 70.24546065904505
- type: f1
value: 69.07638311730359
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (fi)
config: fi
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 69.12911903160726
- type: f1
value: 68.32651736539815
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (fr)
config: fr
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 71.89307330195025
- type: f1
value: 71.33986549860187
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (he)
config: he
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 67.44451916610626
- type: f1
value: 66.90192664503866
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (hi)
config: hi
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 69.16274377942166
- type: f1
value: 68.01090953775066
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (hu)
config: hu
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 70.75319435104237
- type: f1
value: 70.18035309201403
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (hy)
config: hy
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 63.14391392064559
- type: f1
value: 61.48286540778145
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (id)
config: id
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 70.70275722932078
- type: f1
value: 70.26164779846495
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (is)
config: is
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 60.93813046402153
- type: f1
value: 58.8852862116525
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (it)
config: it
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 72.320107599193
- type: f1
value: 72.19836409602924
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (ja)
config: ja
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 74.65366509751176
- type: f1
value: 74.55188288799579
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (jv)
config: jv
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 59.694014794889036
- type: f1
value: 58.11353311721067
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (ka)
config: ka
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 54.37457969065231
- type: f1
value: 52.81306134311697
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (km)
config: km
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 48.3086751849361
- type: f1
value: 45.396449765419376
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (kn)
config: kn
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 62.151983860121064
- type: f1
value: 60.31762544281696
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (ko)
config: ko
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 72.44788164088769
- type: f1
value: 71.68150151736367
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (lv)
config: lv
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 62.81439139206455
- type: f1
value: 62.06735559105593
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (ml)
config: ml
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 68.04303967720242
- type: f1
value: 66.68298851670133
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (mn)
config: mn
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 61.43913920645595
- type: f1
value: 60.25605977560783
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (ms)
config: ms
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 66.90316072629456
- type: f1
value: 65.1325924692381
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (my)
config: my
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 61.63752521856086
- type: f1
value: 59.14284778039585
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (nb)
config: nb
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 71.63080026899797
- type: f1
value: 70.89771864626877
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (nl)
config: nl
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 72.10827168796234
- type: f1
value: 71.71954219691159
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (pl)
config: pl
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 70.59515803631471
- type: f1
value: 70.05040128099003
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (pt)
config: pt
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 70.83389374579691
- type: f1
value: 70.84877936562735
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (ro)
config: ro
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 69.18628110289173
- type: f1
value: 68.97232927921841
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (ru)
config: ru
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 72.99260255548083
- type: f1
value: 72.85139492157732
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (sl)
config: sl
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 65.26227303295225
- type: f1
value: 65.08833655469431
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (sq)
config: sq
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 66.48621385339611
- type: f1
value: 64.43483199071298
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (sv)
config: sv
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 73.14391392064559
- type: f1
value: 72.2580822579741
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (sw)
config: sw
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 59.88567585743107
- type: f1
value: 58.3073765932569
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (ta)
config: ta
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 62.38399462004034
- type: f1
value: 60.82139544252606
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (te)
config: te
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 62.58574310692671
- type: f1
value: 60.71443370385374
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (th)
config: th
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 71.61398789509079
- type: f1
value: 70.99761812049401
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (tl)
config: tl
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 62.73705447209146
- type: f1
value: 61.680849331794796
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (tr)
config: tr
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 71.66778749159381
- type: f1
value: 71.17320646080115
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (ur)
config: ur
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 64.640215198386
- type: f1
value: 63.301805157015444
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (vi)
config: vi
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 70.00672494956288
- type: f1
value: 70.26005548582106
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (zh-CN)
config: zh-CN
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 75.42030934767989
- type: f1
value: 75.2074842882598
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (zh-TW)
config: zh-TW
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 70.69266980497646
- type: f1
value: 70.94103167391192
- task:
type: Clustering
dataset:
type: mteb/medrxiv-clustering-p2p
name: MTEB MedrxivClusteringP2P
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 28.91697191169135
- task:
type: Clustering
dataset:
type: mteb/medrxiv-clustering-s2s
name: MTEB MedrxivClusteringS2S
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 28.434000079573313
- task:
type: Reranking
dataset:
type: mteb/mind_small
name: MTEB MindSmallReranking
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 30.96683513343383
- type: mrr
value: 31.967364078714834
- task:
type: Retrieval
dataset:
type: nfcorpus
name: MTEB NFCorpus
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 5.5280000000000005
- type: map_at_10
value: 11.793
- type: map_at_100
value: 14.496999999999998
- type: map_at_1000
value: 15.783
- type: map_at_3
value: 8.838
- type: map_at_5
value: 10.07
- type: mrr_at_1
value: 43.653
- type: mrr_at_10
value: 51.531000000000006
- type: mrr_at_100
value: 52.205
- type: mrr_at_1000
value: 52.242999999999995
- type: mrr_at_3
value: 49.431999999999995
- type: mrr_at_5
value: 50.470000000000006
- type: ndcg_at_1
value: 42.415000000000006
- type: ndcg_at_10
value: 32.464999999999996
- type: ndcg_at_100
value: 28.927999999999997
- type: ndcg_at_1000
value: 37.629000000000005
- type: ndcg_at_3
value: 37.845
- type: ndcg_at_5
value: 35.147
- type: precision_at_1
value: 43.653
- type: precision_at_10
value: 23.932000000000002
- type: precision_at_100
value: 7.17
- type: precision_at_1000
value: 1.967
- type: precision_at_3
value: 35.397
- type: precision_at_5
value: 29.907
- type: recall_at_1
value: 5.5280000000000005
- type: recall_at_10
value: 15.568000000000001
- type: recall_at_100
value: 28.54
- type: recall_at_1000
value: 59.864
- type: recall_at_3
value: 9.822000000000001
- type: recall_at_5
value: 11.726
- task:
type: Retrieval
dataset:
type: nq
name: MTEB NQ
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 37.041000000000004
- type: map_at_10
value: 52.664
- type: map_at_100
value: 53.477
- type: map_at_1000
value: 53.505
- type: map_at_3
value: 48.510999999999996
- type: map_at_5
value: 51.036
- type: mrr_at_1
value: 41.338
- type: mrr_at_10
value: 55.071000000000005
- type: mrr_at_100
value: 55.672
- type: mrr_at_1000
value: 55.689
- type: mrr_at_3
value: 51.82
- type: mrr_at_5
value: 53.852
- type: ndcg_at_1
value: 41.338
- type: ndcg_at_10
value: 60.01800000000001
- type: ndcg_at_100
value: 63.409000000000006
- type: ndcg_at_1000
value: 64.017
- type: ndcg_at_3
value: 52.44799999999999
- type: ndcg_at_5
value: 56.571000000000005
- type: precision_at_1
value: 41.338
- type: precision_at_10
value: 9.531
- type: precision_at_100
value: 1.145
- type: precision_at_1000
value: 0.12
- type: precision_at_3
value: 23.416
- type: precision_at_5
value: 16.46
- type: recall_at_1
value: 37.041000000000004
- type: recall_at_10
value: 79.76299999999999
- type: recall_at_100
value: 94.39
- type: recall_at_1000
value: 98.851
- type: recall_at_3
value: 60.465
- type: recall_at_5
value: 69.906
- task:
type: Retrieval
dataset:
type: quora
name: MTEB QuoraRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 69.952
- type: map_at_10
value: 83.758
- type: map_at_100
value: 84.406
- type: map_at_1000
value: 84.425
- type: map_at_3
value: 80.839
- type: map_at_5
value: 82.646
- type: mrr_at_1
value: 80.62
- type: mrr_at_10
value: 86.947
- type: mrr_at_100
value: 87.063
- type: mrr_at_1000
value: 87.064
- type: mrr_at_3
value: 85.96000000000001
- type: mrr_at_5
value: 86.619
- type: ndcg_at_1
value: 80.63
- type: ndcg_at_10
value: 87.64800000000001
- type: ndcg_at_100
value: 88.929
- type: ndcg_at_1000
value: 89.054
- type: ndcg_at_3
value: 84.765
- type: ndcg_at_5
value: 86.291
- type: precision_at_1
value: 80.63
- type: precision_at_10
value: 13.314
- type: precision_at_100
value: 1.525
- type: precision_at_1000
value: 0.157
- type: precision_at_3
value: 37.1
- type: precision_at_5
value: 24.372
- type: recall_at_1
value: 69.952
- type: recall_at_10
value: 94.955
- type: recall_at_100
value: 99.38
- type: recall_at_1000
value: 99.96000000000001
- type: recall_at_3
value: 86.60600000000001
- type: recall_at_5
value: 90.997
- task:
type: Clustering
dataset:
type: mteb/reddit-clustering
name: MTEB RedditClustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 42.41329517878427
- task:
type: Clustering
dataset:
type: mteb/reddit-clustering-p2p
name: MTEB RedditClusteringP2P
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v_measure
value: 55.171278362748666
- task:
type: Retrieval
dataset:
type: scidocs
name: MTEB SCIDOCS
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 4.213
- type: map_at_10
value: 9.895
- type: map_at_100
value: 11.776
- type: map_at_1000
value: 12.084
- type: map_at_3
value: 7.2669999999999995
- type: map_at_5
value: 8.620999999999999
- type: mrr_at_1
value: 20.8
- type: mrr_at_10
value: 31.112000000000002
- type: mrr_at_100
value: 32.274
- type: mrr_at_1000
value: 32.35
- type: mrr_at_3
value: 28.133000000000003
- type: mrr_at_5
value: 29.892999999999997
- type: ndcg_at_1
value: 20.8
- type: ndcg_at_10
value: 17.163999999999998
- type: ndcg_at_100
value: 24.738
- type: ndcg_at_1000
value: 30.316
- type: ndcg_at_3
value: 16.665
- type: ndcg_at_5
value: 14.478
- type: precision_at_1
value: 20.8
- type: precision_at_10
value: 8.74
- type: precision_at_100
value: 1.963
- type: precision_at_1000
value: 0.33
- type: precision_at_3
value: 15.467
- type: precision_at_5
value: 12.6
- type: recall_at_1
value: 4.213
- type: recall_at_10
value: 17.698
- type: recall_at_100
value: 39.838
- type: recall_at_1000
value: 66.893
- type: recall_at_3
value: 9.418
- type: recall_at_5
value: 12.773000000000001
- task:
type: STS
dataset:
type: mteb/sickr-sts
name: MTEB SICK-R
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_pearson
value: 82.90453315738294
- type: cos_sim_spearman
value: 78.51197850080254
- type: euclidean_pearson
value: 80.09647123597748
- type: euclidean_spearman
value: 78.63548011514061
- type: manhattan_pearson
value: 80.10645285675231
- type: manhattan_spearman
value: 78.57861806068901
- task:
type: STS
dataset:
type: mteb/sts12-sts
name: MTEB STS12
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 84.2616156846401
- type: cos_sim_spearman
value: 76.69713867850156
- type: euclidean_pearson
value: 77.97948563800394
- type: euclidean_spearman
value: 74.2371211567807
- type: manhattan_pearson
value: 77.69697879669705
- type: manhattan_spearman
value: 73.86529778022278
- task:
type: STS
dataset:
type: mteb/sts13-sts
name: MTEB STS13
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 77.0293269315045
- type: cos_sim_spearman
value: 78.02555120584198
- type: euclidean_pearson
value: 78.25398100379078
- type: euclidean_spearman
value: 78.66963870599464
- type: manhattan_pearson
value: 78.14314682167348
- type: manhattan_spearman
value: 78.57692322969135
- task:
type: STS
dataset:
type: mteb/sts14-sts
name: MTEB STS14
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 79.16989925136942
- type: cos_sim_spearman
value: 76.5996225327091
- type: euclidean_pearson
value: 77.8319003279786
- type: euclidean_spearman
value: 76.42824009468998
- type: manhattan_pearson
value: 77.69118862737736
- type: manhattan_spearman
value: 76.25568104762812
- task:
type: STS
dataset:
type: mteb/sts15-sts
name: MTEB STS15
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 87.42012286935325
- type: cos_sim_spearman
value: 88.15654297884122
- type: euclidean_pearson
value: 87.34082819427852
- type: euclidean_spearman
value: 88.06333589547084
- type: manhattan_pearson
value: 87.25115596784842
- type: manhattan_spearman
value: 87.9559927695203
- task:
type: STS
dataset:
type: mteb/sts16-sts
name: MTEB STS16
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 82.88222044996712
- type: cos_sim_spearman
value: 84.28476589061077
- type: euclidean_pearson
value: 83.17399758058309
- type: euclidean_spearman
value: 83.85497357244542
- type: manhattan_pearson
value: 83.0308397703786
- type: manhattan_spearman
value: 83.71554539935046
- task:
type: STS
dataset:
type: mteb/sts17-crosslingual-sts
name: MTEB STS17 (ko-ko)
config: ko-ko
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 80.20682986257339
- type: cos_sim_spearman
value: 79.94567120362092
- type: euclidean_pearson
value: 79.43122480368902
- type: euclidean_spearman
value: 79.94802077264987
- type: manhattan_pearson
value: 79.32653021527081
- type: manhattan_spearman
value: 79.80961146709178
- task:
type: STS
dataset:
type: mteb/sts17-crosslingual-sts
name: MTEB STS17 (ar-ar)
config: ar-ar
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 74.46578144394383
- type: cos_sim_spearman
value: 74.52496637472179
- type: euclidean_pearson
value: 72.2903807076809
- type: euclidean_spearman
value: 73.55549359771645
- type: manhattan_pearson
value: 72.09324837709393
- type: manhattan_spearman
value: 73.36743103606581
- task:
type: STS
dataset:
type: mteb/sts17-crosslingual-sts
name: MTEB STS17 (en-ar)
config: en-ar
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 71.37272335116
- type: cos_sim_spearman
value: 71.26702117766037
- type: euclidean_pearson
value: 67.114829954434
- type: euclidean_spearman
value: 66.37938893947761
- type: manhattan_pearson
value: 66.79688574095246
- type: manhattan_spearman
value: 66.17292828079667
- task:
type: STS
dataset:
type: mteb/sts17-crosslingual-sts
name: MTEB STS17 (en-de)
config: en-de
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 80.61016770129092
- type: cos_sim_spearman
value: 82.08515426632214
- type: euclidean_pearson
value: 80.557340361131
- type: euclidean_spearman
value: 80.37585812266175
- type: manhattan_pearson
value: 80.6782873404285
- type: manhattan_spearman
value: 80.6678073032024
- task:
type: STS
dataset:
type: mteb/sts17-crosslingual-sts
name: MTEB STS17 (en-en)
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 87.00150745350108
- type: cos_sim_spearman
value: 87.83441972211425
- type: euclidean_pearson
value: 87.94826702308792
- type: euclidean_spearman
value: 87.46143974860725
- type: manhattan_pearson
value: 87.97560344306105
- type: manhattan_spearman
value: 87.5267102829796
- task:
type: STS
dataset:
type: mteb/sts17-crosslingual-sts
name: MTEB STS17 (en-tr)
config: en-tr
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 64.76325252267235
- type: cos_sim_spearman
value: 63.32615095463905
- type: euclidean_pearson
value: 64.07920669155716
- type: euclidean_spearman
value: 61.21409893072176
- type: manhattan_pearson
value: 64.26308625680016
- type: manhattan_spearman
value: 61.2438185254079
- task:
type: STS
dataset:
type: mteb/sts17-crosslingual-sts
name: MTEB STS17 (es-en)
config: es-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 75.82644463022595
- type: cos_sim_spearman
value: 76.50381269945073
- type: euclidean_pearson
value: 75.1328548315934
- type: euclidean_spearman
value: 75.63761139408453
- type: manhattan_pearson
value: 75.18610101241407
- type: manhattan_spearman
value: 75.30669266354164
- task:
type: STS
dataset:
type: mteb/sts17-crosslingual-sts
name: MTEB STS17 (es-es)
config: es-es
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 87.49994164686832
- type: cos_sim_spearman
value: 86.73743986245549
- type: euclidean_pearson
value: 86.8272894387145
- type: euclidean_spearman
value: 85.97608491000507
- type: manhattan_pearson
value: 86.74960140396779
- type: manhattan_spearman
value: 85.79285984190273
- task:
type: STS
dataset:
type: mteb/sts17-crosslingual-sts
name: MTEB STS17 (fr-en)
config: fr-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 79.58172210788469
- type: cos_sim_spearman
value: 80.17516468334607
- type: euclidean_pearson
value: 77.56537843470504
- type: euclidean_spearman
value: 77.57264627395521
- type: manhattan_pearson
value: 78.09703521695943
- type: manhattan_spearman
value: 78.15942760916954
- task:
type: STS
dataset:
type: mteb/sts17-crosslingual-sts
name: MTEB STS17 (it-en)
config: it-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 79.7589932931751
- type: cos_sim_spearman
value: 80.15210089028162
- type: euclidean_pearson
value: 77.54135223516057
- type: euclidean_spearman
value: 77.52697996368764
- type: manhattan_pearson
value: 77.65734439572518
- type: manhattan_spearman
value: 77.77702992016121
- task:
type: STS
dataset:
type: mteb/sts17-crosslingual-sts
name: MTEB STS17 (nl-en)
config: nl-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 79.16682365511267
- type: cos_sim_spearman
value: 79.25311267628506
- type: euclidean_pearson
value: 77.54882036762244
- type: euclidean_spearman
value: 77.33212935194827
- type: manhattan_pearson
value: 77.98405516064015
- type: manhattan_spearman
value: 77.85075717865719
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (en)
config: en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 59.10473294775917
- type: cos_sim_spearman
value: 61.82780474476838
- type: euclidean_pearson
value: 45.885111672377256
- type: euclidean_spearman
value: 56.88306351932454
- type: manhattan_pearson
value: 46.101218127323186
- type: manhattan_spearman
value: 56.80953694186333
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (de)
config: de
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 45.781923079584146
- type: cos_sim_spearman
value: 55.95098449691107
- type: euclidean_pearson
value: 25.4571031323205
- type: euclidean_spearman
value: 49.859978118078935
- type: manhattan_pearson
value: 25.624938455041384
- type: manhattan_spearman
value: 49.99546185049401
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (es)
config: es
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 60.00618133997907
- type: cos_sim_spearman
value: 66.57896677718321
- type: euclidean_pearson
value: 42.60118466388821
- type: euclidean_spearman
value: 62.8210759715209
- type: manhattan_pearson
value: 42.63446860604094
- type: manhattan_spearman
value: 62.73803068925271
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (pl)
config: pl
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 28.460759121626943
- type: cos_sim_spearman
value: 34.13459007469131
- type: euclidean_pearson
value: 6.0917739325525195
- type: euclidean_spearman
value: 27.9947262664867
- type: manhattan_pearson
value: 6.16877864169911
- type: manhattan_spearman
value: 28.00664163971514
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (tr)
config: tr
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 57.42546621771696
- type: cos_sim_spearman
value: 63.699663168970474
- type: euclidean_pearson
value: 38.12085278789738
- type: euclidean_spearman
value: 58.12329140741536
- type: manhattan_pearson
value: 37.97364549443335
- type: manhattan_spearman
value: 57.81545502318733
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (ar)
config: ar
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 46.82241380954213
- type: cos_sim_spearman
value: 57.86569456006391
- type: euclidean_pearson
value: 31.80480070178813
- type: euclidean_spearman
value: 52.484000620130104
- type: manhattan_pearson
value: 31.952708554646097
- type: manhattan_spearman
value: 52.8560972356195
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (ru)
config: ru
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 52.00447170498087
- type: cos_sim_spearman
value: 60.664116225735164
- type: euclidean_pearson
value: 33.87382555421702
- type: euclidean_spearman
value: 55.74649067458667
- type: manhattan_pearson
value: 33.99117246759437
- type: manhattan_spearman
value: 55.98749034923899
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (zh)
config: zh
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 58.06497233105448
- type: cos_sim_spearman
value: 65.62968801135676
- type: euclidean_pearson
value: 47.482076613243905
- type: euclidean_spearman
value: 62.65137791498299
- type: manhattan_pearson
value: 47.57052626104093
- type: manhattan_spearman
value: 62.436916516613294
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (fr)
config: fr
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 70.49397298562575
- type: cos_sim_spearman
value: 74.79604041187868
- type: euclidean_pearson
value: 49.661891561317795
- type: euclidean_spearman
value: 70.31535537621006
- type: manhattan_pearson
value: 49.553715741850006
- type: manhattan_spearman
value: 70.24779344636806
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (de-en)
config: de-en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 55.640574515348696
- type: cos_sim_spearman
value: 54.927959317689
- type: euclidean_pearson
value: 29.00139666967476
- type: euclidean_spearman
value: 41.86386566971605
- type: manhattan_pearson
value: 29.47411067730344
- type: manhattan_spearman
value: 42.337438424952786
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (es-en)
config: es-en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 68.14095292259312
- type: cos_sim_spearman
value: 73.99017581234789
- type: euclidean_pearson
value: 46.46304297872084
- type: euclidean_spearman
value: 60.91834114800041
- type: manhattan_pearson
value: 47.07072666338692
- type: manhattan_spearman
value: 61.70415727977926
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (it)
config: it
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 73.27184653359575
- type: cos_sim_spearman
value: 77.76070252418626
- type: euclidean_pearson
value: 62.30586577544778
- type: euclidean_spearman
value: 75.14246629110978
- type: manhattan_pearson
value: 62.328196884927046
- type: manhattan_spearman
value: 75.1282792981433
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (pl-en)
config: pl-en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 71.59448528829957
- type: cos_sim_spearman
value: 70.37277734222123
- type: euclidean_pearson
value: 57.63145565721123
- type: euclidean_spearman
value: 66.10113048304427
- type: manhattan_pearson
value: 57.18897811586808
- type: manhattan_spearman
value: 66.5595511215901
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (zh-en)
config: zh-en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 66.37520607720838
- type: cos_sim_spearman
value: 69.92282148997948
- type: euclidean_pearson
value: 40.55768770125291
- type: euclidean_spearman
value: 55.189128944669605
- type: manhattan_pearson
value: 41.03566433468883
- type: manhattan_spearman
value: 55.61251893174558
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (es-it)
config: es-it
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 57.791929533771835
- type: cos_sim_spearman
value: 66.45819707662093
- type: euclidean_pearson
value: 39.03686018511092
- type: euclidean_spearman
value: 56.01282695640428
- type: manhattan_pearson
value: 38.91586623619632
- type: manhattan_spearman
value: 56.69394943612747
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (de-fr)
config: de-fr
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 47.82224468473866
- type: cos_sim_spearman
value: 59.467307194781164
- type: euclidean_pearson
value: 27.428459190256145
- type: euclidean_spearman
value: 60.83463107397519
- type: manhattan_pearson
value: 27.487391578496638
- type: manhattan_spearman
value: 61.281380460246496
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (de-pl)
config: de-pl
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 16.306666792752644
- type: cos_sim_spearman
value: 39.35486427252405
- type: euclidean_pearson
value: -2.7887154897955435
- type: euclidean_spearman
value: 27.1296051831719
- type: manhattan_pearson
value: -3.202291270581297
- type: manhattan_spearman
value: 26.32895849218158
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (fr-pl)
config: fr-pl
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 59.67006803805076
- type: cos_sim_spearman
value: 73.24670207647144
- type: euclidean_pearson
value: 46.91884681500483
- type: euclidean_spearman
value: 16.903085094570333
- type: manhattan_pearson
value: 46.88391675325812
- type: manhattan_spearman
value: 28.17180849095055
- task:
type: STS
dataset:
type: mteb/stsbenchmark-sts
name: MTEB STSBenchmark
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 83.79555591223837
- type: cos_sim_spearman
value: 85.63658602085185
- type: euclidean_pearson
value: 85.22080894037671
- type: euclidean_spearman
value: 85.54113580167038
- type: manhattan_pearson
value: 85.1639505960118
- type: manhattan_spearman
value: 85.43502665436196
- task:
type: Reranking
dataset:
type: mteb/scidocs-reranking
name: MTEB SciDocsRR
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 80.73900991689766
- type: mrr
value: 94.81624131133934
- task:
type: Retrieval
dataset:
type: scifact
name: MTEB SciFact
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 55.678000000000004
- type: map_at_10
value: 65.135
- type: map_at_100
value: 65.824
- type: map_at_1000
value: 65.852
- type: map_at_3
value: 62.736000000000004
- type: map_at_5
value: 64.411
- type: mrr_at_1
value: 58.333
- type: mrr_at_10
value: 66.5
- type: mrr_at_100
value: 67.053
- type: mrr_at_1000
value: 67.08
- type: mrr_at_3
value: 64.944
- type: mrr_at_5
value: 65.89399999999999
- type: ndcg_at_1
value: 58.333
- type: ndcg_at_10
value: 69.34700000000001
- type: ndcg_at_100
value: 72.32
- type: ndcg_at_1000
value: 73.014
- type: ndcg_at_3
value: 65.578
- type: ndcg_at_5
value: 67.738
- type: precision_at_1
value: 58.333
- type: precision_at_10
value: 9.033
- type: precision_at_100
value: 1.0670000000000002
- type: precision_at_1000
value: 0.11199999999999999
- type: precision_at_3
value: 25.444
- type: precision_at_5
value: 16.933
- type: recall_at_1
value: 55.678000000000004
- type: recall_at_10
value: 80.72200000000001
- type: recall_at_100
value: 93.93299999999999
- type: recall_at_1000
value: 99.333
- type: recall_at_3
value: 70.783
- type: recall_at_5
value: 75.978
- task:
type: PairClassification
dataset:
type: mteb/sprintduplicatequestions-pairclassification
name: MTEB SprintDuplicateQuestions
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.74653465346535
- type: cos_sim_ap
value: 93.01476369929063
- type: cos_sim_f1
value: 86.93009118541033
- type: cos_sim_precision
value: 88.09034907597535
- type: cos_sim_recall
value: 85.8
- type: dot_accuracy
value: 99.22970297029703
- type: dot_ap
value: 51.58725659485144
- type: dot_f1
value: 53.51351351351352
- type: dot_precision
value: 58.235294117647065
- type: dot_recall
value: 49.5
- type: euclidean_accuracy
value: 99.74356435643564
- type: euclidean_ap
value: 92.40332894384368
- type: euclidean_f1
value: 86.97838109602817
- type: euclidean_precision
value: 87.46208291203236
- type: euclidean_recall
value: 86.5
- type: manhattan_accuracy
value: 99.73069306930694
- type: manhattan_ap
value: 92.01320815721121
- type: manhattan_f1
value: 86.4135864135864
- type: manhattan_precision
value: 86.32734530938124
- type: manhattan_recall
value: 86.5
- type: max_accuracy
value: 99.74653465346535
- type: max_ap
value: 93.01476369929063
- type: max_f1
value: 86.97838109602817
- task:
type: Clustering
dataset:
type: mteb/stackexchange-clustering
name: MTEB StackExchangeClustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 55.2660514302523
- task:
type: Clustering
dataset:
type: mteb/stackexchange-clustering-p2p
name: MTEB StackExchangeClusteringP2P
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 30.4637783572547
- task:
type: Reranking
dataset:
type: mteb/stackoverflowdupquestions-reranking
name: MTEB StackOverflowDupQuestions
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 49.41377758357637
- type: mrr
value: 50.138451213818854
- task:
type: Summarization
dataset:
type: mteb/summeval
name: MTEB SummEval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 28.887846011166594
- type: cos_sim_spearman
value: 30.10823258355903
- type: dot_pearson
value: 12.888049550236385
- type: dot_spearman
value: 12.827495903098123
- task:
type: Retrieval
dataset:
type: trec-covid
name: MTEB TRECCOVID
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.21
- type: map_at_10
value: 1.667
- type: map_at_100
value: 9.15
- type: map_at_1000
value: 22.927
- type: map_at_3
value: 0.573
- type: map_at_5
value: 0.915
- type: mrr_at_1
value: 80
- type: mrr_at_10
value: 87.167
- type: mrr_at_100
value: 87.167
- type: mrr_at_1000
value: 87.167
- type: mrr_at_3
value: 85.667
- type: mrr_at_5
value: 87.167
- type: ndcg_at_1
value: 76
- type: ndcg_at_10
value: 69.757
- type: ndcg_at_100
value: 52.402
- type: ndcg_at_1000
value: 47.737
- type: ndcg_at_3
value: 71.866
- type: ndcg_at_5
value: 72.225
- type: precision_at_1
value: 80
- type: precision_at_10
value: 75
- type: precision_at_100
value: 53.959999999999994
- type: precision_at_1000
value: 21.568
- type: precision_at_3
value: 76.667
- type: precision_at_5
value: 78
- type: recall_at_1
value: 0.21
- type: recall_at_10
value: 1.9189999999999998
- type: recall_at_100
value: 12.589
- type: recall_at_1000
value: 45.312000000000005
- type: recall_at_3
value: 0.61
- type: recall_at_5
value: 1.019
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (sqi-eng)
config: sqi-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 92.10000000000001
- type: f1
value: 90.06
- type: precision
value: 89.17333333333333
- type: recall
value: 92.10000000000001
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (fry-eng)
config: fry-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 56.06936416184971
- type: f1
value: 50.87508028259473
- type: precision
value: 48.97398843930635
- type: recall
value: 56.06936416184971
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (kur-eng)
config: kur-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 57.3170731707317
- type: f1
value: 52.96080139372822
- type: precision
value: 51.67861124382864
- type: recall
value: 57.3170731707317
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (tur-eng)
config: tur-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94.3
- type: f1
value: 92.67333333333333
- type: precision
value: 91.90833333333333
- type: recall
value: 94.3
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (deu-eng)
config: deu-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 97.7
- type: f1
value: 97.07333333333332
- type: precision
value: 96.79500000000002
- type: recall
value: 97.7
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (nld-eng)
config: nld-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94.69999999999999
- type: f1
value: 93.2
- type: precision
value: 92.48333333333333
- type: recall
value: 94.69999999999999
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (ron-eng)
config: ron-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 92.9
- type: f1
value: 91.26666666666667
- type: precision
value: 90.59444444444445
- type: recall
value: 92.9
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (ang-eng)
config: ang-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 34.32835820895522
- type: f1
value: 29.074180380150533
- type: precision
value: 28.068207322920596
- type: recall
value: 34.32835820895522
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (ido-eng)
config: ido-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 78.5
- type: f1
value: 74.3945115995116
- type: precision
value: 72.82967843459222
- type: recall
value: 78.5
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (jav-eng)
config: jav-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 66.34146341463415
- type: f1
value: 61.2469400518181
- type: precision
value: 59.63977756660683
- type: recall
value: 66.34146341463415
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (isl-eng)
config: isl-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 80.9
- type: f1
value: 76.90349206349207
- type: precision
value: 75.32921568627451
- type: recall
value: 80.9
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (slv-eng)
config: slv-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 84.93317132442284
- type: f1
value: 81.92519105034295
- type: precision
value: 80.71283920615635
- type: recall
value: 84.93317132442284
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (cym-eng)
config: cym-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 71.1304347826087
- type: f1
value: 65.22394755003451
- type: precision
value: 62.912422360248435
- type: recall
value: 71.1304347826087
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (kaz-eng)
config: kaz-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 79.82608695652173
- type: f1
value: 75.55693581780538
- type: precision
value: 73.79420289855072
- type: recall
value: 79.82608695652173
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (est-eng)
config: est-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 74
- type: f1
value: 70.51022222222223
- type: precision
value: 69.29673599347512
- type: recall
value: 74
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (heb-eng)
config: heb-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 78.7
- type: f1
value: 74.14238095238095
- type: precision
value: 72.27214285714285
- type: recall
value: 78.7
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (gla-eng)
config: gla-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 48.97466827503016
- type: f1
value: 43.080330405420874
- type: precision
value: 41.36505499593557
- type: recall
value: 48.97466827503016
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (mar-eng)
config: mar-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 89.60000000000001
- type: f1
value: 86.62333333333333
- type: precision
value: 85.225
- type: recall
value: 89.60000000000001
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (lat-eng)
config: lat-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 45.2
- type: f1
value: 39.5761253006253
- type: precision
value: 37.991358436312
- type: recall
value: 45.2
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (bel-eng)
config: bel-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 89.5
- type: f1
value: 86.70333333333333
- type: precision
value: 85.53166666666667
- type: recall
value: 89.5
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (pms-eng)
config: pms-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 50.095238095238095
- type: f1
value: 44.60650460650461
- type: precision
value: 42.774116796477045
- type: recall
value: 50.095238095238095
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (gle-eng)
config: gle-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 63.4
- type: f1
value: 58.35967261904762
- type: precision
value: 56.54857142857143
- type: recall
value: 63.4
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (pes-eng)
config: pes-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 89.2
- type: f1
value: 87.075
- type: precision
value: 86.12095238095239
- type: recall
value: 89.2
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (nob-eng)
config: nob-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 96.8
- type: f1
value: 95.90333333333334
- type: precision
value: 95.50833333333333
- type: recall
value: 96.8
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (bul-eng)
config: bul-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 90.9
- type: f1
value: 88.6288888888889
- type: precision
value: 87.61607142857142
- type: recall
value: 90.9
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (cbk-eng)
config: cbk-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 65.2
- type: f1
value: 60.54377630539395
- type: precision
value: 58.89434482711381
- type: recall
value: 65.2
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (hun-eng)
config: hun-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 87
- type: f1
value: 84.32412698412699
- type: precision
value: 83.25527777777778
- type: recall
value: 87
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (uig-eng)
config: uig-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 68.7
- type: f1
value: 63.07883541295306
- type: precision
value: 61.06117424242426
- type: recall
value: 68.7
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (rus-eng)
config: rus-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 93.7
- type: f1
value: 91.78333333333335
- type: precision
value: 90.86666666666667
- type: recall
value: 93.7
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (spa-eng)
config: spa-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 97.7
- type: f1
value: 96.96666666666667
- type: precision
value: 96.61666666666667
- type: recall
value: 97.7
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (hye-eng)
config: hye-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 88.27493261455525
- type: f1
value: 85.90745732255168
- type: precision
value: 84.91389637616052
- type: recall
value: 88.27493261455525
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (tel-eng)
config: tel-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 90.5982905982906
- type: f1
value: 88.4900284900285
- type: precision
value: 87.57122507122507
- type: recall
value: 90.5982905982906
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (afr-eng)
config: afr-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 89.5
- type: f1
value: 86.90769841269842
- type: precision
value: 85.80178571428571
- type: recall
value: 89.5
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (mon-eng)
config: mon-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 82.5
- type: f1
value: 78.36796536796538
- type: precision
value: 76.82196969696969
- type: recall
value: 82.5
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (arz-eng)
config: arz-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 71.48846960167715
- type: f1
value: 66.78771089148448
- type: precision
value: 64.98302885095339
- type: recall
value: 71.48846960167715
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (hrv-eng)
config: hrv-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94.1
- type: f1
value: 92.50333333333333
- type: precision
value: 91.77499999999999
- type: recall
value: 94.1
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (nov-eng)
config: nov-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 71.20622568093385
- type: f1
value: 66.83278891450098
- type: precision
value: 65.35065777283677
- type: recall
value: 71.20622568093385
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (gsw-eng)
config: gsw-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 48.717948717948715
- type: f1
value: 43.53146853146853
- type: precision
value: 42.04721204721204
- type: recall
value: 48.717948717948715
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (nds-eng)
config: nds-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 58.5
- type: f1
value: 53.8564991863928
- type: precision
value: 52.40329436122275
- type: recall
value: 58.5
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (ukr-eng)
config: ukr-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 90.8
- type: f1
value: 88.29
- type: precision
value: 87.09166666666667
- type: recall
value: 90.8
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (uzb-eng)
config: uzb-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 67.28971962616822
- type: f1
value: 62.63425307817832
- type: precision
value: 60.98065939771546
- type: recall
value: 67.28971962616822
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (lit-eng)
config: lit-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 78.7
- type: f1
value: 75.5264472455649
- type: precision
value: 74.38205086580086
- type: recall
value: 78.7
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (ina-eng)
config: ina-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 88.7
- type: f1
value: 86.10809523809525
- type: precision
value: 85.07602564102565
- type: recall
value: 88.7
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (lfn-eng)
config: lfn-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 56.99999999999999
- type: f1
value: 52.85487521402737
- type: precision
value: 51.53985162713104
- type: recall
value: 56.99999999999999
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (zsm-eng)
config: zsm-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94
- type: f1
value: 92.45333333333333
- type: precision
value: 91.79166666666667
- type: recall
value: 94
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (ita-eng)
config: ita-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 92.30000000000001
- type: f1
value: 90.61333333333333
- type: precision
value: 89.83333333333331
- type: recall
value: 92.30000000000001
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (cmn-eng)
config: cmn-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94.69999999999999
- type: f1
value: 93.34555555555555
- type: precision
value: 92.75416666666668
- type: recall
value: 94.69999999999999
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (lvs-eng)
config: lvs-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 80.2
- type: f1
value: 76.6563035113035
- type: precision
value: 75.3014652014652
- type: recall
value: 80.2
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (glg-eng)
config: glg-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 84.7
- type: f1
value: 82.78689263765207
- type: precision
value: 82.06705086580087
- type: recall
value: 84.7
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (ceb-eng)
config: ceb-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 50.33333333333333
- type: f1
value: 45.461523661523664
- type: precision
value: 43.93545574795575
- type: recall
value: 50.33333333333333
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (bre-eng)
config: bre-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 6.6000000000000005
- type: f1
value: 5.442121400446441
- type: precision
value: 5.146630385487529
- type: recall
value: 6.6000000000000005
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (ben-eng)
config: ben-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 85
- type: f1
value: 81.04666666666667
- type: precision
value: 79.25
- type: recall
value: 85
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (swg-eng)
config: swg-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 47.32142857142857
- type: f1
value: 42.333333333333336
- type: precision
value: 40.69196428571429
- type: recall
value: 47.32142857142857
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (arq-eng)
config: arq-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 30.735455543358945
- type: f1
value: 26.73616790022338
- type: precision
value: 25.397823220451283
- type: recall
value: 30.735455543358945
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (kab-eng)
config: kab-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 25.1
- type: f1
value: 21.975989896371022
- type: precision
value: 21.059885632257203
- type: recall
value: 25.1
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (fra-eng)
config: fra-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94.3
- type: f1
value: 92.75666666666666
- type: precision
value: 92.06166666666665
- type: recall
value: 94.3
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (por-eng)
config: por-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94.1
- type: f1
value: 92.74
- type: precision
value: 92.09166666666667
- type: recall
value: 94.1
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (tat-eng)
config: tat-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 71.3
- type: f1
value: 66.922442002442
- type: precision
value: 65.38249567099568
- type: recall
value: 71.3
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (oci-eng)
config: oci-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 40.300000000000004
- type: f1
value: 35.78682789299971
- type: precision
value: 34.66425128716588
- type: recall
value: 40.300000000000004
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (pol-eng)
config: pol-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 96
- type: f1
value: 94.82333333333334
- type: precision
value: 94.27833333333334
- type: recall
value: 96
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (war-eng)
config: war-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 51.1
- type: f1
value: 47.179074753133584
- type: precision
value: 46.06461044702424
- type: recall
value: 51.1
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (aze-eng)
config: aze-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 87.7
- type: f1
value: 84.71
- type: precision
value: 83.46166666666667
- type: recall
value: 87.7
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (vie-eng)
config: vie-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 95.8
- type: f1
value: 94.68333333333334
- type: precision
value: 94.13333333333334
- type: recall
value: 95.8
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (nno-eng)
config: nno-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 85.39999999999999
- type: f1
value: 82.5577380952381
- type: precision
value: 81.36833333333334
- type: recall
value: 85.39999999999999
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (cha-eng)
config: cha-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 21.16788321167883
- type: f1
value: 16.948865627297987
- type: precision
value: 15.971932568647897
- type: recall
value: 21.16788321167883
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (mhr-eng)
config: mhr-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 6.9
- type: f1
value: 5.515526831658907
- type: precision
value: 5.141966366966367
- type: recall
value: 6.9
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (dan-eng)
config: dan-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 93.2
- type: f1
value: 91.39666666666668
- type: precision
value: 90.58666666666667
- type: recall
value: 93.2
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (ell-eng)
config: ell-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 92.2
- type: f1
value: 89.95666666666666
- type: precision
value: 88.92833333333333
- type: recall
value: 92.2
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (amh-eng)
config: amh-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 79.76190476190477
- type: f1
value: 74.93386243386244
- type: precision
value: 73.11011904761904
- type: recall
value: 79.76190476190477
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (pam-eng)
config: pam-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 8.799999999999999
- type: f1
value: 6.921439712248537
- type: precision
value: 6.489885109680683
- type: recall
value: 8.799999999999999
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (hsb-eng)
config: hsb-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 45.75569358178054
- type: f1
value: 40.34699501312631
- type: precision
value: 38.57886764719063
- type: recall
value: 45.75569358178054
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (srp-eng)
config: srp-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 91.4
- type: f1
value: 89.08333333333333
- type: precision
value: 88.01666666666668
- type: recall
value: 91.4
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (epo-eng)
config: epo-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 93.60000000000001
- type: f1
value: 92.06690476190477
- type: precision
value: 91.45095238095239
- type: recall
value: 93.60000000000001
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (kzj-eng)
config: kzj-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 7.5
- type: f1
value: 6.200363129378736
- type: precision
value: 5.89115314822466
- type: recall
value: 7.5
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (awa-eng)
config: awa-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 73.59307359307358
- type: f1
value: 68.38933553219267
- type: precision
value: 66.62698412698413
- type: recall
value: 73.59307359307358
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (fao-eng)
config: fao-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 69.8473282442748
- type: f1
value: 64.72373682297346
- type: precision
value: 62.82834214131924
- type: recall
value: 69.8473282442748
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (mal-eng)
config: mal-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 97.5254730713246
- type: f1
value: 96.72489082969432
- type: precision
value: 96.33672974284326
- type: recall
value: 97.5254730713246
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (ile-eng)
config: ile-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 75.6
- type: f1
value: 72.42746031746033
- type: precision
value: 71.14036630036631
- type: recall
value: 75.6
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (bos-eng)
config: bos-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 91.24293785310734
- type: f1
value: 88.86064030131826
- type: precision
value: 87.73540489642184
- type: recall
value: 91.24293785310734
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (cor-eng)
config: cor-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 6.2
- type: f1
value: 4.383083659794954
- type: precision
value: 4.027861324289673
- type: recall
value: 6.2
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (cat-eng)
config: cat-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 86.8
- type: f1
value: 84.09428571428572
- type: precision
value: 83.00333333333333
- type: recall
value: 86.8
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (eus-eng)
config: eus-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 60.699999999999996
- type: f1
value: 56.1584972394755
- type: precision
value: 54.713456330903135
- type: recall
value: 60.699999999999996
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (yue-eng)
config: yue-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 84.2
- type: f1
value: 80.66190476190475
- type: precision
value: 79.19690476190476
- type: recall
value: 84.2
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (swe-eng)
config: swe-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 93.2
- type: f1
value: 91.33
- type: precision
value: 90.45
- type: recall
value: 93.2
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (dtp-eng)
config: dtp-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 6.3
- type: f1
value: 5.126828976748276
- type: precision
value: 4.853614328966668
- type: recall
value: 6.3
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (kat-eng)
config: kat-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 81.76943699731903
- type: f1
value: 77.82873739308057
- type: precision
value: 76.27622452019234
- type: recall
value: 81.76943699731903
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (jpn-eng)
config: jpn-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 92.30000000000001
- type: f1
value: 90.29666666666665
- type: precision
value: 89.40333333333334
- type: recall
value: 92.30000000000001
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (csb-eng)
config: csb-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 29.249011857707508
- type: f1
value: 24.561866096392947
- type: precision
value: 23.356583740215456
- type: recall
value: 29.249011857707508
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (xho-eng)
config: xho-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 77.46478873239437
- type: f1
value: 73.23943661971832
- type: precision
value: 71.66666666666667
- type: recall
value: 77.46478873239437
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (orv-eng)
config: orv-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 20.35928143712575
- type: f1
value: 15.997867865075824
- type: precision
value: 14.882104658301346
- type: recall
value: 20.35928143712575
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (ind-eng)
config: ind-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 92.2
- type: f1
value: 90.25999999999999
- type: precision
value: 89.45333333333335
- type: recall
value: 92.2
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (tuk-eng)
config: tuk-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 23.15270935960591
- type: f1
value: 19.65673625772148
- type: precision
value: 18.793705293464992
- type: recall
value: 23.15270935960591
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (max-eng)
config: max-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 59.154929577464785
- type: f1
value: 52.3868463305083
- type: precision
value: 50.14938113529662
- type: recall
value: 59.154929577464785
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (swh-eng)
config: swh-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 70.51282051282051
- type: f1
value: 66.8089133089133
- type: precision
value: 65.37645687645687
- type: recall
value: 70.51282051282051
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (hin-eng)
config: hin-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94.6
- type: f1
value: 93
- type: precision
value: 92.23333333333333
- type: recall
value: 94.6
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (dsb-eng)
config: dsb-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 38.62212943632568
- type: f1
value: 34.3278276962583
- type: precision
value: 33.07646935732408
- type: recall
value: 38.62212943632568
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (ber-eng)
config: ber-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 28.1
- type: f1
value: 23.579609223054604
- type: precision
value: 22.39622774921555
- type: recall
value: 28.1
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (tam-eng)
config: tam-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 88.27361563517914
- type: f1
value: 85.12486427795874
- type: precision
value: 83.71335504885994
- type: recall
value: 88.27361563517914
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (slk-eng)
config: slk-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 88.6
- type: f1
value: 86.39928571428571
- type: precision
value: 85.4947557997558
- type: recall
value: 88.6
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (tgl-eng)
config: tgl-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 86.5
- type: f1
value: 83.77952380952381
- type: precision
value: 82.67602564102565
- type: recall
value: 86.5
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (ast-eng)
config: ast-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 79.52755905511812
- type: f1
value: 75.3055868016498
- type: precision
value: 73.81889763779527
- type: recall
value: 79.52755905511812
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (mkd-eng)
config: mkd-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 77.9
- type: f1
value: 73.76261904761905
- type: precision
value: 72.11670995670995
- type: recall
value: 77.9
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (khm-eng)
config: khm-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 53.8781163434903
- type: f1
value: 47.25804051288816
- type: precision
value: 45.0603482390186
- type: recall
value: 53.8781163434903
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (ces-eng)
config: ces-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 91.10000000000001
- type: f1
value: 88.88
- type: precision
value: 87.96333333333334
- type: recall
value: 91.10000000000001
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (tzl-eng)
config: tzl-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 38.46153846153847
- type: f1
value: 34.43978243978244
- type: precision
value: 33.429487179487175
- type: recall
value: 38.46153846153847
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (urd-eng)
config: urd-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 88.9
- type: f1
value: 86.19888888888887
- type: precision
value: 85.07440476190476
- type: recall
value: 88.9
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (ara-eng)
config: ara-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 85.9
- type: f1
value: 82.58857142857143
- type: precision
value: 81.15666666666667
- type: recall
value: 85.9
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (kor-eng)
config: kor-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 86.8
- type: f1
value: 83.36999999999999
- type: precision
value: 81.86833333333333
- type: recall
value: 86.8
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (yid-eng)
config: yid-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 68.51415094339622
- type: f1
value: 63.195000099481234
- type: precision
value: 61.394033442972116
- type: recall
value: 68.51415094339622
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (fin-eng)
config: fin-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 88.5
- type: f1
value: 86.14603174603175
- type: precision
value: 85.1162037037037
- type: recall
value: 88.5
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (tha-eng)
config: tha-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 95.62043795620438
- type: f1
value: 94.40389294403892
- type: precision
value: 93.7956204379562
- type: recall
value: 95.62043795620438
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (wuu-eng)
config: wuu-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 81.8
- type: f1
value: 78.6532178932179
- type: precision
value: 77.46348795840176
- type: recall
value: 81.8
- task:
type: Retrieval
dataset:
type: webis-touche2020
name: MTEB Touche2020
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 2.603
- type: map_at_10
value: 8.5
- type: map_at_100
value: 12.985
- type: map_at_1000
value: 14.466999999999999
- type: map_at_3
value: 4.859999999999999
- type: map_at_5
value: 5.817
- type: mrr_at_1
value: 28.571
- type: mrr_at_10
value: 42.331
- type: mrr_at_100
value: 43.592999999999996
- type: mrr_at_1000
value: 43.592999999999996
- type: mrr_at_3
value: 38.435
- type: mrr_at_5
value: 39.966
- type: ndcg_at_1
value: 26.531
- type: ndcg_at_10
value: 21.353
- type: ndcg_at_100
value: 31.087999999999997
- type: ndcg_at_1000
value: 43.163000000000004
- type: ndcg_at_3
value: 22.999
- type: ndcg_at_5
value: 21.451
- type: precision_at_1
value: 28.571
- type: precision_at_10
value: 19.387999999999998
- type: precision_at_100
value: 6.265
- type: precision_at_1000
value: 1.4160000000000001
- type: precision_at_3
value: 24.490000000000002
- type: precision_at_5
value: 21.224
- type: recall_at_1
value: 2.603
- type: recall_at_10
value: 14.474
- type: recall_at_100
value: 40.287
- type: recall_at_1000
value: 76.606
- type: recall_at_3
value: 5.978
- type: recall_at_5
value: 7.819
- task:
type: Classification
dataset:
type: mteb/toxic_conversations_50k
name: MTEB ToxicConversationsClassification
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 69.7848
- type: ap
value: 13.661023167088224
- type: f1
value: 53.61686134460943
- task:
type: Classification
dataset:
type: mteb/tweet_sentiment_extraction
name: MTEB TweetSentimentExtractionClassification
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 61.28183361629882
- type: f1
value: 61.55481034919965
- task:
type: Clustering
dataset:
type: mteb/twentynewsgroups-clustering
name: MTEB TwentyNewsgroupsClustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 35.972128420092396
- task:
type: PairClassification
dataset:
type: mteb/twittersemeval2015-pairclassification
name: MTEB TwitterSemEval2015
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 85.59933241938367
- type: cos_sim_ap
value: 72.20760361208136
- type: cos_sim_f1
value: 66.4447731755424
- type: cos_sim_precision
value: 62.35539102267469
- type: cos_sim_recall
value: 71.10817941952506
- type: dot_accuracy
value: 78.98313166835548
- type: dot_ap
value: 44.492521645493795
- type: dot_f1
value: 45.814889336016094
- type: dot_precision
value: 37.02439024390244
- type: dot_recall
value: 60.07915567282321
- type: euclidean_accuracy
value: 85.3907134767837
- type: euclidean_ap
value: 71.53847289080343
- type: euclidean_f1
value: 65.95952206778834
- type: euclidean_precision
value: 61.31006346328196
- type: euclidean_recall
value: 71.37203166226914
- type: manhattan_accuracy
value: 85.40859510043511
- type: manhattan_ap
value: 71.49664104395515
- type: manhattan_f1
value: 65.98569969356485
- type: manhattan_precision
value: 63.928748144482924
- type: manhattan_recall
value: 68.17941952506597
- type: max_accuracy
value: 85.59933241938367
- type: max_ap
value: 72.20760361208136
- type: max_f1
value: 66.4447731755424
- task:
type: PairClassification
dataset:
type: mteb/twitterurlcorpus-pairclassification
name: MTEB TwitterURLCorpus
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 88.83261536073273
- type: cos_sim_ap
value: 85.48178133644264
- type: cos_sim_f1
value: 77.87816307403935
- type: cos_sim_precision
value: 75.88953021114926
- type: cos_sim_recall
value: 79.97382198952879
- type: dot_accuracy
value: 79.76287499514883
- type: dot_ap
value: 59.17438838475084
- type: dot_f1
value: 56.34566667855996
- type: dot_precision
value: 52.50349092359864
- type: dot_recall
value: 60.794579611949494
- type: euclidean_accuracy
value: 88.76857996662397
- type: euclidean_ap
value: 85.22764834359887
- type: euclidean_f1
value: 77.65379751543554
- type: euclidean_precision
value: 75.11152683839401
- type: euclidean_recall
value: 80.37419156144134
- type: manhattan_accuracy
value: 88.6987231730508
- type: manhattan_ap
value: 85.18907981724007
- type: manhattan_f1
value: 77.51967028849757
- type: manhattan_precision
value: 75.49992701795358
- type: manhattan_recall
value: 79.65044656606098
- type: max_accuracy
value: 88.83261536073273
- type: max_ap
value: 85.48178133644264
- type: max_f1
value: 77.87816307403935
language:
- multilingual
- af
- am
- ar
- as
- az
- be
- bg
- bn
- br
- bs
- ca
- cs
- cy
- da
- de
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fr
- fy
- ga
- gd
- gl
- gu
- ha
- he
- hi
- hr
- hu
- hy
- id
- is
- it
- ja
- jv
- ka
- kk
- km
- kn
- ko
- ku
- ky
- la
- lo
- lt
- lv
- mg
- mk
- ml
- mn
- mr
- ms
- my
- ne
- nl
- 'no'
- om
- or
- pa
- pl
- ps
- pt
- ro
- ru
- sa
- sd
- si
- sk
- sl
- so
- sq
- sr
- su
- sv
- sw
- ta
- te
- th
- tl
- tr
- ug
- uk
- ur
- uz
- vi
- xh
- yi
- zh
license: mit
---
## Multilingual-E5-base-int8-ov
This is [Multilingual-E5-base](https://huggingface.co/intfloat/multilingual-e5-base) model converted to the OpenVINO™ IR (Intermediate Representation) format with quantization to INT8.
Disclaimer: The model is provided as a preview and may be updated in the future.
[Multilingual E5 Text Embeddings: A Technical Report](https://arxiv.org/pdf/2402.05672).
Liang Wang, Nan Yang, Xiaolong Huang, Linjun Yang, Rangan Majumder, Furu Wei, arXiv 2024
This model has 12 layers and the embedding size is 768.
## Usage
```python
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer
from optimum.intel.openvino import OVModelForFeatureExtraction

# E5 models expect each input text to start with "query: " or "passage: "
sentences = ["query: Sample Data-1", "query: Sample Data-2"]

# Load the tokenizer and the INT8 OpenVINO model from the same repository
model_id = 'santhosh/multilingual-e5-base-int8-ov'
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = OVModelForFeatureExtraction.from_pretrained(model_id)

# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

# Compute token embeddings
model_output = model(**encoded_input)

# Perform average pooling over valid tokens (E5 models are trained with mean pooling)
token_embeddings = model_output[0]
mask = encoded_input['attention_mask'].unsqueeze(-1).to(token_embeddings.dtype)
sentence_embeddings = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1)

# Normalize embeddings
sentence_embeddings = F.normalize(sentence_embeddings, p=2, dim=1)
print("Sentence embeddings:", sentence_embeddings)
```
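Since the embeddings are L2-normalized, a plain matrix product gives cosine similarities — a minimal follow-up sketch using the variables above:
```python
# Cosine similarity between all pairs of the (normalized) sentence embeddings
scores = sentence_embeddings @ sentence_embeddings.T
print("Similarity matrix:", scores)
```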
### Using openvino GenAI
```python
import numpy as np
import openvino_genai
import huggingface_hub as hf_hub

# TextEmbeddingPipeline expects a local directory, so download the IR files first
model_path = hf_hub.snapshot_download("santhosh/multilingual-e5-base-int8-ov")
sentences = ["Sample Data-1", "Sample Data-2"]

embedding_pipeline = openvino_genai.TextEmbeddingPipeline(model_path, "CPU")
embeddings = np.array(embedding_pipeline.embed_documents(sentences))
print(embeddings.shape)
```
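As with the Transformers example above, prepend `query: ` or `passage: ` to each input when using the embeddings for retrieval; recent OpenVINO GenAI releases also expose an `embed_query` helper on the pipeline for single queries.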
|
StyTJU/mor-kv-sharerobot-visionmor-adapter
|
StyTJU
| 2025-08-19T05:30:45Z | 0 | 0 | null |
[
"safetensors",
"region:us"
] | null | 2025-08-19T02:56:31Z |
# MoR Router Adapter for BAAI/RoboBrain2.0-3B
This repository contains the full set of weights (safetensors shards). (Preliminary Vision MoR training; the loss is still high.)
Before use, load the base model RoboBrain-3B first, then overwrite its parameters with these weights.
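A minimal sketch of that load-then-overwrite step (the shard filename below is a hypothetical placeholder):
```python
from transformers import AutoModelForCausalLM
from safetensors.torch import load_file

# Load the base model, then overwrite its parameters with the shards in this repo
model = AutoModelForCausalLM.from_pretrained("BAAI/RoboBrain2.0-3B", trust_remote_code=True)
state = load_file("model-00001-of-00002.safetensors")  # hypothetical shard name
missing, unexpected = model.load_state_dict(state, strict=False)
```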
|
cucucu666/qiqiu-8.19-female
|
cucucu666
| 2025-08-19T05:29:05Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"flux",
"flux-diffusers",
"template:sd-lora",
"base_model:black-forest-labs/FLUX.1-Fill-dev",
"base_model:adapter:black-forest-labs/FLUX.1-Fill-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-08-19T03:44:23Z |
---
base_model: black-forest-labs/FLUX.1-Fill-dev
library_name: diffusers
license: other
instance_prompt: labi female face, Crayon Shin-chan style, eyelash, pleading expression,
both hands together in a prayer pose, plain white background
widget: []
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- flux
- flux-diffusers
- template:sd-lora
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# Flux-Fill DreamBooth LoRA - cucucu666/qiqiu-8.19-female
<Gallery />
## Model description
These are cucucu666/qiqiu-8.19-female DreamBooth LoRA weights for black-forest-labs/FLUX.1-Fill-dev.
The weights were trained using [DreamBooth](https://dreambooth.github.io/) with a custom [Flux diffusers trainer](https://github.com/Sebastian-Zok/FLUX-Fill-LoRa-Training).
Was LoRA for the text encoder enabled? False.
## Trigger words
You should use `labi female face, Crayon Shin-chan style, eyelash, pleading expression, both hands together in a prayer pose, plain white background` to trigger the image generation.
## Download model
[Download the *.safetensors LoRA](cucucu666/qiqiu-8.19-female/tree/main) in the Files & versions tab.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
import torch
from diffusers import FluxFillPipeline
from diffusers.utils import load_image

# FLUX.1-Fill-dev is an inpainting model, so it needs an input image and a mask
pipeline = FluxFillPipeline.from_pretrained("black-forest-labs/FLUX.1-Fill-dev", torch_dtype=torch.bfloat16).to('cuda')
pipeline.load_lora_weights('cucucu666/qiqiu-8.19-female', weight_name='pytorch_lora_weights.safetensors')
image = pipeline('labi female face, Crayon Shin-chan style, eyelash, pleading expression, both hands together in a prayer pose, plain white background', image=load_image('input.png'), mask_image=load_image('mask.png')).images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
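To avoid the per-step LoRA overhead, you can optionally fuse the adapter into the base weights (a standard diffusers call; the scale value is illustrative):
```py
pipeline.fuse_lora(lora_scale=1.0)  # bake the LoRA into the base weights
```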
## License
Please adhere to the licensing terms as described [here](https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md).
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]
|
katanyasekolah/blockassist-bc-silky_sprightly_cassowary_1755579680
|
katanyasekolah
| 2025-08-19T05:28:06Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"silky sprightly cassowary",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T05:28:03Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- silky sprightly cassowary
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
lejelly/shannon_entropy-ep1-lr0001-sampling-t07-gen2
|
lejelly
| 2025-08-19T05:24:08Z | 0 | 0 | null |
[
"safetensors",
"mistral",
"merge",
"parameter_wise",
"llm-adamerge",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:finetune:mistralai/Mistral-7B-v0.1",
"region:us"
] | null | 2025-08-19T05:21:29Z |
---
tags:
- merge
- parameter_wise
- llm-adamerge
base_model: mistralai/Mistral-7B-v0.1
---
# Merged Model using LLM-AdaMerge (parameter_wise)
This model was created by merging multiple fine-tuned models using the LLM-AdaMerge approach with parameter_wise merging.
## Merge Details
- **Merge Type**: parameter_wise
- **Base Model**: mistralai/Mistral-7B-v0.1
- **Number of Models Merged**: 3
- **Models Merged**: instruct, math, code
- **Final Training Loss**: N/A
- **Training Epochs**: 0
## Lambda Coefficients
The following lambda coefficients were learned during training:
### Parameter-wise Lambdas
This model uses parameter-wise lambda coefficients. Total parameters with individual lambdas: 291
See the uploaded `learned_lambdas.json` file for detailed parameter-wise coefficients.
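Parameter-wise merging learns one coefficient per parameter tensor per task model and adds the scaled task deltas to the base weights; a minimal sketch of the update rule (all values below are illustrative, not the learned coefficients):
```python
import torch

# merged[p] = base[p] + sum_i lambda_i[p] * (task_i[p] - base[p])
base = {"w": torch.zeros(2, 2)}
tasks = [{"w": torch.ones(2, 2)}, {"w": 2 * torch.ones(2, 2)}]   # e.g. instruct, math
lambdas = [{"w": 0.3}, {"w": 0.1}]                               # one scalar per tensor per task

merged = {
    name: base[name] + sum(lam[name] * (task[name] - base[name])
                           for task, lam in zip(tasks, lambdas))
    for name in base
}
print(merged["w"])  # 0.3*1 + 0.1*2 = 0.5 everywhere
```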
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("your-username/model-name")
tokenizer = AutoTokenizer.from_pretrained("your-username/model-name")
# Use the model
inputs = tokenizer("Hello, how are you?", return_tensors="pt")
outputs = model.generate(**inputs)
print(tokenizer.decode(outputs[0]))
```
## Training Configuration
See the uploaded `training_config.json` file for detailed training configuration.
## Citation
If you use this model, please cite the LLM-AdaMerge paper:
```bibtex
@article{llmadamerge2024,
title={LLM-AdaMerge: Adaptive Model Merging for Large Language Models},
author={...},
year={2024}
}
```
|