| modelId (string, 5–139 chars) | author (string, 2–42 chars) | last_modified (timestamp[us, UTC], 2020-02-15 11:33:14 to 2025-08-29 12:28:39) | downloads (int64, 0–223M) | likes (int64, 0–11.7k) | library_name (string, 526 classes) | tags (list, length 1–4.05k) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, UTC], 2022-03-02 23:29:04 to 2025-08-29 12:28:30) | card (string, 11–1.01M chars) |
|---|---|---|---|---|---|---|---|---|---|
thanaphatt1/typhoon2.1-gemma3-4b-strategy-prediction-v2
|
thanaphatt1
| 2025-08-29T11:48:06Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gemma3_text",
"trl",
"en",
"base_model:scb10x/typhoon2.1-gemma3-4b",
"base_model:finetune:scb10x/typhoon2.1-gemma3-4b",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-29T11:47:48Z |
---
base_model: scb10x/typhoon2.1-gemma3-4b
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3_text
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** thanaphatt1
- **License:** apache-2.0
- **Finetuned from model:** scb10x/typhoon2.1-gemma3-4b
This gemma3_text model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
liukevin666/blockassist-bc-yawning_striped_cassowary_1756467986
|
liukevin666
| 2025-08-29T11:47:31Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"yawning striped cassowary",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-29T11:47:19Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- yawning striped cassowary
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
kevinshin/qwen3-1.7b-critique-lr-1e-5-batch-16-epoch-1-mask-neg-reasoning-wildchat-cw-from-crit-rev
|
kevinshin
| 2025-08-29T11:47:14Z | 0 | 0 |
transformers
|
[
"transformers",
"generated_from_trainer",
"trl",
"sft",
"base_model:Qwen/Qwen3-1.7B",
"base_model:finetune:Qwen/Qwen3-1.7B",
"endpoints_compatible",
"region:us"
] | null | 2025-08-29T11:40:42Z |
---
base_model: Qwen/Qwen3-1.7B
library_name: transformers
model_name: qwen3-1.7b-critique-lr-1e-5-batch-16-epoch-1-mask-neg-reasoning-wildchat-cw-from-crit-rev
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for qwen3-1.7b-critique-lr-1e-5-batch-16-epoch-1-mask-neg-reasoning-wildchat-cw-from-crit-rev
This model is a fine-tuned version of [Qwen/Qwen3-1.7B](https://huggingface.co/Qwen/Qwen3-1.7B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="kevinshin/qwen3-1.7b-critique-lr-1e-5-batch-16-epoch-1-mask-neg-reasoning-wildchat-cw-from-crit-rev", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/myungjune-sogang-university/general_remo_train/runs/88xzkdio)
This model was trained with SFT.
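SFT optimizes a cross-entropy loss over the target tokens; a common convention (used by transformers, and suggested by the "mask-neg" in this model's name) is to mask non-target tokens with the ignore index -100 so only completion tokens contribute to the loss. A minimal sketch of that masking, with illustrative token IDs:

```python
IGNORE_INDEX = -100  # transformers' cross-entropy ignore index

def mask_prompt_labels(input_ids, prompt_len):
    """Build SFT labels: copy input_ids, but replace the prompt positions
    with IGNORE_INDEX so only completion tokens contribute to the loss."""
    return [IGNORE_INDEX] * prompt_len + input_ids[prompt_len:]

# Illustrative IDs: a 3-token prompt followed by a 2-token completion.
# mask_prompt_labels([101, 7592, 2088, 999, 102], 3)
# -> [-100, -100, -100, 999, 102]
```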
### Framework versions
- TRL: 0.19.1
- Transformers: 4.55.0.dev0
- Pytorch: 2.6.0+cu126
- Datasets: 4.0.0
- Tokenizers: 0.21.2
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
Rootu/blockassist-bc-snorting_fleecy_goose_1756467942
|
Rootu
| 2025-08-29T11:46:20Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"snorting fleecy goose",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-29T11:46:06Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- snorting fleecy goose
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mradermacher/13B-Ouroboros-GGUF
|
mradermacher
| 2025-08-29T11:43:52Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama",
"alpaca",
"vicuna",
"uncensored",
"merge",
"mix",
"airoboros",
"openorca",
"orcamini",
"orca",
"instruct",
"mixtune",
"en",
"dataset:Open-Orca/OpenOrca",
"dataset:anon8231489123/ShareGPT_Vicuna_unfiltered",
"dataset:jondurbin/airoboros-uncensored",
"base_model:CalderaAI/13B-Ouroboros",
"base_model:quantized:CalderaAI/13B-Ouroboros",
"endpoints_compatible",
"region:us"
] | null | 2025-08-29T10:48:54Z |
---
base_model: CalderaAI/13B-Ouroboros
datasets:
- Open-Orca/OpenOrca
- anon8231489123/ShareGPT_Vicuna_unfiltered
- jondurbin/airoboros-uncensored
language:
- en
library_name: transformers
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- llama
- alpaca
- vicuna
- uncensored
- merge
- mix
- airoboros
- openorca
- orcamini
- orca
- instruct
- mixtune
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/CalderaAI/13B-Ouroboros
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#13B-Ouroboros-GGUF).***
weighted/imatrix quants do not seem to be available (from me) at this time. If they do not show up a week or so after the static ones, I have probably not planned to make them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
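Concatenating multi-part files amounts to joining the pieces byte-for-byte, in order, into a single GGUF. A minimal sketch (the part-file naming below is hypothetical; the actual names depend on how a given quant was split):

```python
import shutil

def concat_gguf_parts(parts, output):
    """Concatenate split GGUF part files, in order, into one file."""
    with open(output, "wb") as out:
        for part in parts:
            with open(part, "rb") as src:
                shutil.copyfileobj(src, out)  # stream bytes; works for multi-GB files

# Hypothetical names -- pass the parts in their documented order, e.g.:
# concat_gguf_parts(["model.Q8_0.gguf.part1of2", "model.Q8_0.gguf.part2of2"],
#                   "model.Q8_0.gguf")
```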
## Provided Quants
(sorted by size, not necessarily quality; IQ quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/13B-Ouroboros-GGUF/resolve/main/13B-Ouroboros.Q2_K.gguf) | Q2_K | 5.0 | |
| [GGUF](https://huggingface.co/mradermacher/13B-Ouroboros-GGUF/resolve/main/13B-Ouroboros.Q3_K_S.gguf) | Q3_K_S | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/13B-Ouroboros-GGUF/resolve/main/13B-Ouroboros.Q3_K_M.gguf) | Q3_K_M | 6.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/13B-Ouroboros-GGUF/resolve/main/13B-Ouroboros.Q3_K_L.gguf) | Q3_K_L | 7.0 | |
| [GGUF](https://huggingface.co/mradermacher/13B-Ouroboros-GGUF/resolve/main/13B-Ouroboros.Q4_K_S.gguf) | Q4_K_S | 7.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/13B-Ouroboros-GGUF/resolve/main/13B-Ouroboros.Q6_K.gguf) | Q6_K | 10.8 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/13B-Ouroboros-GGUF/resolve/main/13B-Ouroboros.Q8_0.gguf) | Q8_0 | 13.9 | fast, best quality |
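As a rough sanity check on the sizes above, bits per weight can be estimated from the file size and the nominal 13B parameter count. This is a back-of-the-envelope figure only: GGUF metadata and non-uniform per-tensor quantization shift the true value slightly.

```python
def bits_per_weight(size_gb, n_params=13e9):
    """Approximate bits per weight of a quant from its file size."""
    return size_gb * 1e9 * 8 / n_params

# Q4_K_S at 7.5 GB works out to about 4.6 bits/weight, and Q8_0 at
# 13.9 GB to about 8.6 -- the excess over the nominal 4 and 8 bits is
# quantization scales plus file metadata.
```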
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Lennard-Heuer/Trained_LLM_Task2_2025_8_29
|
Lennard-Heuer
| 2025-08-29T11:42:46Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-29T11:39:55Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
CriteriaPO/qwen2.5-3b-dpo-finegrained-40-vanilla
|
CriteriaPO
| 2025-08-29T11:42:43Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"trl",
"dpo",
"conversational",
"arxiv:2305.18290",
"base_model:CriteriaPO/qwen2.5-3b-sft-10",
"base_model:finetune:CriteriaPO/qwen2.5-3b-sft-10",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-22T07:58:55Z |
---
base_model: CriteriaPO/qwen2.5-3b-sft-10
library_name: transformers
model_name: qwen2.5-3b-dpo-finegrained-40-vanilla
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---
# Model Card for qwen2.5-3b-dpo-finegrained-40-vanilla
This model is a fine-tuned version of [CriteriaPO/qwen2.5-3b-sft-10](https://huggingface.co/CriteriaPO/qwen2.5-3b-sft-10).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="CriteriaPO/qwen2.5-3b-dpo-finegrained-40-vanilla", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/bborges/CriteriaPreferences/runs/2u63rfxn)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
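Per preference pair, the DPO objective from the cited paper is -log σ(β[(log πθ(y_w|x) − log π_ref(y_w|x)) − (log πθ(y_l|x) − log π_ref(y_l|x))]). A scalar sketch of that loss (the log-probability inputs are placeholders, not outputs of this model):

```python
import math

def dpo_loss(pi_logp_w, pi_logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    """Per-pair DPO loss: -log sigmoid(beta * (policy margin - reference margin)),
    where the margin is chosen-minus-rejected log-probability."""
    margin = (pi_logp_w - ref_logp_w) - (pi_logp_l - ref_logp_l)
    return -math.log(1.0 / (1.0 + math.exp(-beta * margin)))

# With zero margin the loss is log(2); once the policy prefers the chosen
# response more strongly than the reference does, the loss falls below that.
```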
### Framework versions
- TRL: 0.12.2
- Transformers: 4.46.3
- Pytorch: 2.1.2+cu121
- Datasets: 3.1.0
- Tokenizers: 0.20.3
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
pidbu/blockassist-bc-whistling_alert_shrew_1756467486
|
pidbu
| 2025-08-29T11:42:24Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"whistling alert shrew",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-29T11:38:55Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- whistling alert shrew
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
VoilaRaj/81_f_68XQ8X
|
VoilaRaj
| 2025-08-29T11:42:23Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-08-29T11:41:54Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
Sayiqa/finetuned-llama
|
Sayiqa
| 2025-08-29T11:42:02Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-29T11:41:53Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
cody-li/whisper_fined_tuned_32-64
|
cody-li
| 2025-08-29T11:41:33Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-08-29T11:41:17Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
intelpocik/Nemotron-Research-Reasoning-Qwen-1.5B-Gensyn-Swarm-mimic_trotting_badger
|
intelpocik
| 2025-08-29T11:40:23Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am mimic_trotting_badger",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-29T11:39:38Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am mimic_trotting_badger
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
thatboredgirlie/blockassist-bc-thriving_whiskered_flamingo_1756467501
|
thatboredgirlie
| 2025-08-29T11:39:39Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thriving whiskered flamingo",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-29T11:38:55Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thriving whiskered flamingo
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
scb10x/typhoon-translate-4b
|
scb10x
| 2025-08-29T11:39:17Z | 3,479,185 | 14 | null |
[
"safetensors",
"gemma3_text",
"th",
"en",
"arxiv:2412.13702",
"license:gemma",
"region:us"
] | null | 2025-06-04T03:07:36Z |
---
license: gemma
language:
- th
- en
---
**Typhoon translate**
**Typhoon Translate** is a lightweight, 4-billion-parameter language model designed specifically for high-quality Thai ↔ English translation—right from your local device.
Unlike general-purpose models, Typhoon Translate is fine-tuned for translation tasks and works best with dedicated prompts. Its strength lies in generating natural, fluent translations while preserving meaning and tone in both directions.
**Release Blog available on [OpenTyphoon Blog](https://opentyphoon.ai/blog/en/typhoon-translate-release)**
Note: For optimal results, use the system prompts:
`Translate the following text into Thai.` or
`Translate the following text into English.`
## **Performance**
We used GPT-4o-mini as an "AI judge" to compare Typhoon Translate's outputs against those of other top systems.


## **Model Description**
- **Model type**: A 4B instruct decoder-only model based on Gemma3 architecture.
- **Requirement**: transformers 4.51.1 or newer.
- **Primary Language(s)**: Thai 🇹🇭 and English 🇬🇧
- **License**: [Gemma License](https://github.com/google-deepmind/gemma/blob/main/LICENSE)
## Quickstart
This code snippet shows how to use the Typhoon translation model for Thai or English text generation using the transformers library with a specific prompt.
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
model_id = "scb10x/Typhoon-translate-4b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
model_id,
torch_dtype=torch.bfloat16,
device_map="auto",
)
# Translate English to Thai
messages = [
{"role": "system", "content": "Translate the following text into Thai."},
{"role": "user", "content": "A banished celestial, Serai, cursed to walk as a mortal boy, fights against the Empire that slaughtered the skyborn.\nHe trains with humans, bonds with them, bleeds beside them. In secret, he regrows his wings through combatβbut each feather only returns when he loses someone he loves.\nAt the climax, he ascendsβglorious, radiant, unstoppableβonly to find his friends gone, their memories etched into his wings.\nAs he watches the sun rise, his halo returns.\nHe whispers, βWas it worth it?β\nAnd no one answers."},
]
# Translate Thai to English
# messages = [
# {"role": "system", "content": "Translate the following text into English."},
# {"role": "user", "content": "ΰΉΰΈ‘ΰΈ·ΰΉΰΈΰΈͺΰΈ΄ΰΉΰΈΰΈΰΈ΅ 2022 ΰΈΰΈ²ΰΈ£ΰΉΰΈΰΈ΄ΰΈΰΈΰΈ±ΰΈ§ ChatGPT ΰΈΰΈΰΈ OpenAI ΰΈΰΈ·ΰΈΰΉΰΈΰΉΰΈΰΈΰΈΈΰΈΰΉΰΈΰΈ₯ΰΈ΅ΰΉΰΈ’ΰΈΰΈͺΰΈ³ΰΈΰΈ±ΰΈβΰΉΰΈ₯ΰΈΰΉΰΈΰΉΰΈ£ΰΈΉΰΉΰΈΰΈ±ΰΈΰΈΰΈ±ΰΈ Generative AI (Gen AI) ΰΉΰΈ₯ΰΈ°ΰΈΰΈΈΰΈΰΈΰΈ’ΰΉΰΈ²ΰΈΰΈΰΉΰΉΰΈΰΈ₯ΰΈ΅ΰΉΰΈ’ΰΈΰΉΰΈ ΰΈͺΰΈ΄ΰΉΰΈΰΈΰΈ΅ΰΉΰΉΰΈΰΈ’ΰΈ£ΰΈΉΰΉΰΈͺΰΈΆΰΈΰΉΰΈ«ΰΈ‘ΰΈ·ΰΈΰΈ frontier ΰΉΰΈΰΈ₯ΰΉ ΰΈΰΈ₯ΰΈ²ΰΈ’ΰΉΰΈΰΉΰΈΰΈΰΈ₯ΰΈ±ΰΈΰΉΰΈΰΈΰΈ±ΰΈΰΈΰΈΈΰΈΰΈ±ΰΈ AI ΰΈΰΈΉΰΈΰΈΰΈ±ΰΈΰΈΰΈ±ΰΈ§ΰΈΰΈ’ΰΉΰΈ²ΰΈΰΈ£ΰΈ§ΰΈΰΉΰΈ£ΰΉΰΈ§ΰΈΰΈ±ΰΉΰΈΰΉΰΈΰΈΰΈ΄ΰΈΰΈ§ΰΈ±ΰΈΰΈ£ΰΈͺΰΉΰΈ§ΰΈΰΈΰΈ±ΰΈ§ΰΉΰΈ₯ΰΈ°ΰΈΰΈ₯ΰΈ’ΰΈΈΰΈΰΈΰΉΰΈΰΈΰΈΰΉΰΈΰΈ£ ΰΈΰΈ₯ΰΈ²ΰΈ’ΰΉΰΈΰΉΰΈΰΈΰΈ±ΰΈ§ΰΉΰΈΰΈ₯ΰΈ΅ΰΉΰΈ’ΰΈΰΉΰΈΰΈ‘ΰΈΰΈ±ΰΈΰΈΰΈ£ΰΈΰΈΰΈ₯ΰΈ±ΰΈβΰΈΰΉΰΈ§ΰΈ’ΰΈ’ΰΈΰΈ£ΰΈ°ΰΈΰΈ±ΰΈΰΈΰΈ΅ΰΈ§ΰΈ΄ΰΈ ΰΈΰΈ₯ΰΈΰΈ₯ΰΉΰΈΰΈΰΈΰΈ£ΰΈ°ΰΈͺΰΈ΄ΰΈΰΈΰΈ΄ΰΈ ΰΈ²ΰΈΰΉΰΈ«ΰΈ‘ΰΉΰΉ ΰΉΰΈ₯ΰΈ°ΰΉΰΈΰΈ₯ΰΈ΅ΰΉΰΈ’ΰΈΰΈ§ΰΈ΄ΰΈΰΈ΅ΰΈΰΈ΅ΰΉΰΈΰΈΰΈΰΉΰΈΰΈ£ΰΈΰΈ³ΰΉΰΈΰΈ΄ΰΈΰΈΰΈ²ΰΈ ΰΉΰΈΰΈΰΈ²ΰΈ£ΰΉΰΈͺΰΈ§ΰΈΰΈ«ΰΈ²ΰΈΰΈ§ΰΈ²ΰΈ‘ΰΉΰΈΰΉΰΉΰΈΰΈ£ΰΈ΅ΰΈ’ΰΈΰΉΰΈΰΈ΄ΰΈΰΉΰΈΰΉΰΈΰΈΰΈ±ΰΈ ΰΈΰΈΈΰΈ£ΰΈΰΈ΄ΰΈΰΉΰΈΰΈ«ΰΈ₯ΰΈ²ΰΈΰΈ«ΰΈ₯ΰΈ²ΰΈ’ΰΈ ΰΈ²ΰΈΰΈͺΰΉΰΈ§ΰΈΰΈΰΉΰΈ²ΰΈΰΉΰΈ£ΰΉΰΈΰΈΰΈ³ΰΈΰΈ§ΰΈ²ΰΈ‘ΰΉΰΈΰΉΰΈ²ΰΉΰΈ ΰΈΰΈ³ΰΉΰΈΰΉΰΈΰΉ ΰΉΰΈ₯ΰΈ°ΰΈͺΰΈ£ΰΉΰΈ²ΰΈΰΈΰΈ§ΰΈ±ΰΈΰΈΰΈ£ΰΈ£ΰΈ‘ΰΈΰΉΰΈ§ΰΈ’ AI\nSCBX AI Outlook 2025: Beaconing the Future of Artificial Intelligence ΰΈΰΈΉΰΈΰΈΰΈΰΈΰΉΰΈΰΈΰΈ‘ΰΈ²ΰΉΰΈΰΉΰΈΰΈΰΈ£ΰΈ°ΰΈ ΰΈ²ΰΈΰΈ²ΰΈ£ΰΈΰΉΰΈ²ΰΈ‘ΰΈΰΈ₯ΰΈ²ΰΈΰΈΰΈ₯ΰΈ·ΰΉΰΈΰΈΰΈ΅ΰΉΰΉΰΈ£ΰΉΰΈΰΈΰΈ±ΰΈ§ΰΈΰΈΆΰΉΰΈΰΈΰΈ΅ΰΉβΰΈ‘ΰΈΰΈΰΈΰΈ§ΰΈ²ΰΈ‘ΰΈΰΈ±ΰΈΰΉΰΈΰΈΰΉΰΈ₯ΰΈ°ΰΈΰΈ΄ΰΈ¨ΰΈΰΈ²ΰΈΰΉΰΈ«ΰΉΰΈΰΈΉΰΉΰΈΰΈ³ΰΈ£ΰΈ±ΰΈΰΈ‘ΰΈ·ΰΈΰΈΰΈ±ΰΈΰΈΰΈ£ΰΈ°ΰΉΰΈͺΰΈΰΈ²ΰΈ£ΰΉΰΈΰΈ₯ΰΈ΅ΰΉΰΈ’ΰΈΰΉΰΈΰΈ₯ΰΈΰΈΰΈ²ΰΈΰΉΰΈΰΈΰΉΰΈΰΉΰΈ₯ΰΈ’ΰΈ΅ ΰΈ£ΰΈ²ΰΈ’ΰΈΰΈ²ΰΈΰΈΰΈ΅ΰΉΰΈͺΰΈ³ΰΈ£ΰΈ§ΰΈΰΉΰΈΰΈ§ΰΉΰΈΰΉΰΈ‘ AI ΰΈΰΈ΅ΰΉΰΈΰΈ³ΰΈ₯ΰΈ±ΰΈΰΈΰΈ³ΰΈ«ΰΈΰΈΰΈΰΈ΄ΰΈ¨ΰΈΰΈ²ΰΈΰΉΰΈΰΈΰΈ΅ΰΈΰΉΰΈ²ΰΈΰΈ«ΰΈΰΉΰΈ² ΰΉΰΈ₯ΰΈ°ΰΈΰΈ³ΰΉΰΈͺΰΈΰΈΰΈ‘ΰΈΈΰΈ‘ΰΈ‘ΰΈΰΈΰΉΰΈΰΈ΄ΰΈΰΈΰΈ₯ΰΈ’ΰΈΈΰΈΰΈΰΉΰΉΰΈΰΈΰΈ²ΰΈ£ΰΉΰΈΰΈ₯ΰΈ΅ΰΉΰΈ’ΰΈΰΈΰΈ§ΰΈ²ΰΈ‘ΰΉΰΈ‘ΰΉΰΉΰΈΰΉΰΈΰΈΰΈΰΉΰΈ«ΰΉΰΉΰΈΰΉΰΈΰΉΰΈΰΈΰΈ²ΰΈͺ ΰΈ£ΰΈ²ΰΈ’ΰΈΰΈ²ΰΈΰΉΰΈΰΉΰΈΰΈΰΈΰΈΰΉΰΈΰΉΰΈΰΈͺΰΈ΅ΰΉΰΈͺΰΉΰΈ§ΰΈ (Acts) ΰΉΰΈΰΉΰΈ₯ΰΈ°ΰΈͺΰΉΰΈ§ΰΈΰΉΰΈΰΉΰΈΰΈΰΈ₯ΰΈ±ΰΈΰΈͺΰΈ³ΰΈΰΈ±ΰΈΰΈΰΈ΅ΰΉΰΈΰΈ³ΰΈ₯ΰΈ±ΰΈΰΉΰΈΰΈ₯ΰΈ΅ΰΉΰΈ’ΰΈΰΈ ΰΈΉΰΈ‘ΰΈ΄ΰΈΰΈ±ΰΈ¨ΰΈΰΉ AI:\nACT I: Two Philosophies, One Future. 
The Battle Between Open-Source and Closed-Source AI Intensifies\nACT II: Tiny Titans - Small, but Mighty. More Versatile, Smaller, and Smarter: 3 Trends of the Next AI Evolution\nACT III: AI at Your Fingertips. Agentic AI: Rise of the Agents\nACT IV: Not Quite Human, But Almost There. Artificial General Intelligence (AGI) and the Unresolved Path to Human-Level AI\nΰΈ£ΰΈ²ΰΈ’ΰΈΰΈ²ΰΈΰΈΰΈ΄ΰΈΰΈΰΉΰΈ²ΰΈ’ΰΈΰΉΰΈ§ΰΈ’ EPILOGUE: The AI Storm β Infinite Impact ΰΈΰΈ£ΰΉΰΈΰΈ‘ Case Studies ΰΈΰΈ²ΰΈΰΈ ΰΈ²ΰΈ’ΰΉΰΈΰΈΰΈ§ΰΈΰΈΰΈ²ΰΈΰΈΰΈΰΈΰΈ²ΰΈ’ΰΈΈΰΉΰΈΰΉΰΈΰΈΈΰΉΰΈβΰΈΰΈ³ΰΉΰΈͺΰΈΰΈΰΈΰΈ£ΰΈΰΈ΅ΰΈ¨ΰΈΆΰΈΰΈ©ΰΈ²ΰΈΰΈ£ΰΈ΄ΰΈΰΈΰΈ²ΰΈ SCBX ΰΉΰΈΰΈΰΈ²ΰΈ£ΰΉΰΈΰΉ AI Engine βTyphoonβ ΰΈΰΈΰΈΰΈΰΈ₯ΰΈΈΰΉΰΈ‘ΰΉΰΈΰΈ«ΰΈΰΉΰΈ§ΰΈ’ΰΈΰΈΈΰΈ£ΰΈΰΈ΄ΰΈΰΈΰΉΰΈ²ΰΈΰΉ ΰΈΰΈΰΈ°ΰΈΰΈ΅ΰΉΰΈΰΈ£ΰΈ°ΰΉΰΈͺ AI ΰΈΰΈ³ΰΈ₯ΰΈ±ΰΈΰΈΰΉΰΈ²ΰΈ§ΰΉΰΈΰΈΰΉΰΈ²ΰΈΰΈ«ΰΈΰΉΰΈ² ΰΈ£ΰΈ²ΰΈ’ΰΈΰΈ²ΰΈΰΈΰΈ΅ΰΉΰΈΰΈΆΰΈΰΉΰΈ‘ΰΉΰΉΰΈΰΉΰΉΰΈΰΉΰΈΰΉΰΈΰΈ΅ΰΈ’ΰΈΰΈΰΈ²ΰΈ£ΰΈΰΈ²ΰΈΰΈΰΈ²ΰΈ£ΰΈΰΉ ΰΉΰΈΰΉΰΉΰΈΰΉΰΈΰΈΰΈ£ΰΈ°ΰΈ ΰΈ²ΰΈΰΈ²ΰΈ£ΰΉΰΈΰΈ΄ΰΈΰΈΰΈ₯ΰΈ’ΰΈΈΰΈΰΈΰΉΰΈͺΰΈ³ΰΈ«ΰΈ£ΰΈ±ΰΈΰΈΰΈΉΰΉΰΈΰΈ£ΰΉΰΈΰΈ‘ΰΈΰΈ°ΰΈΰΉΰΈ²ΰΈ§ΰΈΰΈΆΰΉΰΈΰΉΰΈΰΈΰΈ±ΰΈΰΈΰΈ₯ΰΈ·ΰΉΰΈΰΉΰΈ₯ΰΈ°ΰΉΰΈΰΉΰΈΰΈΰΈΉΰΉΰΈΰΈ³ΰΉΰΈΰΈΰΈΰΈ²ΰΈΰΈ"},
# ]
input_ids = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt",
).to(model.device)
outputs = model.generate(
input_ids,
max_new_tokens=8192,
temperature=0.2,
)
response = outputs[0][input_ids.shape[-1]:]
print(tokenizer.decode(response, skip_special_tokens=True))
```
## Prompting
**For translate to Thai**
```
Translate the following text into Thai.
```
**For translate to English**
```
Translate the following text into English.
```
If your environment doesn't support system prompts, you can place the system prompt in the user turn.
```
{system_prompt}\n\n{text_to_translate}
```
Example:
```
Translate the following text into Thai.\n\nA banished celestial, Serai, cursed to walk as a mortal boy, fights against the Empire that slaughtered the skyborn.\nHe trains with humans, bonds with them, bleeds beside them. In secret, he regrows his wings through combat—but each feather only returns when he loses someone he loves.\nAt the climax, he ascends—glorious, radiant, unstoppable—only to find his friends gone, their memories etched into his wings.\nAs he watches the sun rise, his halo returns.\nHe whispers, “Was it worth it?”\nAnd no one answers.
```
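For chat templates without a system role, the fold-into-the-user-turn pattern above can be sketched as a small helper (the function name `build_user_prompt` is illustrative, not part of the Typhoon API):

```python
# Illustrative helper: fold the translation instruction into the user turn
# for chat templates that lack a system role. The helper name is hypothetical.
def build_user_prompt(system_prompt: str, text: str) -> str:
    return f"{system_prompt}\n\n{text}"

messages = [
    {
        "role": "user",
        "content": build_user_prompt(
            "Translate the following text into Thai.",
            "Hello, world!",
        ),
    }
]
print(messages[0]["content"])
```

The resulting `messages` list can then be passed to `tokenizer.apply_chat_template` exactly as in the quickstart above.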
## Deploy as Server
This section shows how to run Typhoon Translate as an OpenAI-compatible API server using SGLang or vLLM.
- SGLang:
```bash
python3 -m sglang.launch_server --model-path scb10x/typhoon-translate-4b --context-length 16000 --dtype bfloat16
```
- vLLM:
```bash
vllm serve scb10x/typhoon-translate-4b --max-model-len 16000 --dtype bfloat16
```
## Best Practices
To achieve optimal performance, we recommend the following settings:
- Use the system prompt `Translate the following text into Thai.` for English-to-Thai translation and `Translate the following text into English.` for Thai-to-English translation.
- Set a low temperature (e.g., 0.2).
- Use a context length of less than 8192 tokens.
## Intended Uses & Limitations
This is a task-specific model intended to be used only with the provided prompts. It does not include any guardrails. Due to the nature of large language models (LLMs), a certain level of hallucination may occur. We recommend that developers carefully assess these risks in the context of their specific use case.
## **Follow us**
**https://twitter.com/opentyphoon**
## **Support**
**https://discord.gg/us5gAYmrxw**
## **Citation**
- If you find Typhoon2 useful for your work, please cite it using:
```
@misc{typhoon2,
title={Typhoon 2: A Family of Open Text and Multimodal Thai Large Language Models},
author={Kunat Pipatanakul and Potsawee Manakul and Natapong Nitarach and Warit Sirichotedumrong and Surapon Nonesung and Teetouch Jaknamon and Parinthapat Pengpun and Pittawat Taveekitworachai and Adisai Na-Thalang and Sittipong Sripaisarnmongkol and Krisanapong Jirayoot and Kasima Tharnpipitchai},
year={2024},
eprint={2412.13702},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2412.13702},
}
```
|
Qudsiya17/my-llama-gguf
|
Qudsiya17
| 2025-08-29T11:37:47Z | 5 | 0 | null |
[
"gguf",
"text-generation",
"hi",
"en",
"base_model:meta-llama/Llama-3.2-3B-Instruct",
"base_model:quantized:meta-llama/Llama-3.2-3B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2025-08-27T11:57:06Z |
---
license: apache-2.0
language:
- hi
- en
base_model:
- meta-llama/Llama-3.2-3B-Instruct
pipeline_tag: text-generation
---
|
acidjp/blockassist-bc-pesty_extinct_prawn_1756465069
|
acidjp
| 2025-08-29T11:36:31Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"pesty extinct prawn",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-29T11:36:28Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- pesty extinct prawn
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
bah63843/blockassist-bc-plump_fast_antelope_1756467333
|
bah63843
| 2025-08-29T11:36:22Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"plump fast antelope",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-29T11:36:13Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- plump fast antelope
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Cisco141632/gemma_3n_4b_text_to_sql_Q4
|
Cisco141632
| 2025-08-29T11:36:03Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gemma3n",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-29T11:35:46Z |
---
base_model: unsloth/gemma-3n-e4b-it-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3n
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Cisco141632
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-3n-e4b-it-unsloth-bnb-4bit
This gemma3n model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
weyc4man/lysi
|
weyc4man
| 2025-08-29T11:35:45Z | 0 | 0 | null |
[
"arxiv:1910.09700",
"region:us"
] | null | 2025-08-29T11:35:17Z |
---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
{}
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Veras56/blockassist-bc-endangered_agile_turtle_1756467215
|
Veras56
| 2025-08-29T11:35:02Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"endangered agile turtle",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-29T11:34:53Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- endangered agile turtle
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
katanyasekolah/blockassist-bc-silky_sprightly_cassowary_1756465453
|
katanyasekolah
| 2025-08-29T11:32:30Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"silky sprightly cassowary",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-29T11:32:27Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- silky sprightly cassowary
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
BSPetersson/dqn-SpaceInvadersNoFrameskip-v4
|
BSPetersson
| 2025-08-29T11:32:03Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-08-29T11:31:29Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 626.00 +/- 207.93
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib<br/>
SBX (SB3 + Jax): https://github.com/araffin/sbx
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga BSPetersson -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga BSPetersson -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga BSPetersson
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
bah63843/blockassist-bc-plump_fast_antelope_1756467072
|
bah63843
| 2025-08-29T11:32:02Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"plump fast antelope",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-29T11:31:54Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- plump fast antelope
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
kojeklollipop/blockassist-bc-spotted_amphibious_stork_1756465434
|
kojeklollipop
| 2025-08-29T11:31:25Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"spotted amphibious stork",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-29T11:31:22Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- spotted amphibious stork
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
jm-yap/MyGemmaNPC
|
jm-yap
| 2025-08-29T11:31:12Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"gemma3_text",
"text-generation",
"generated_from_trainer",
"sft",
"trl",
"conversational",
"base_model:google/gemma-3-270m-it",
"base_model:finetune:google/gemma-3-270m-it",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-29T11:27:26Z |
---
base_model: google/gemma-3-270m-it
library_name: transformers
model_name: MyGemmaNPC
tags:
- generated_from_trainer
- sft
- trl
licence: license
---
# Model Card for MyGemmaNPC
This model is a fine-tuned version of [google/gemma-3-270m-it](https://huggingface.co/google/gemma-3-270m-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="jm-yap/MyGemmaNPC", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.4
- Pytorch: 2.8.0+cu126
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
QuantStack/Qwen-Image-PJ0-Realism-GGUF
|
QuantStack
| 2025-08-29T11:30:43Z | 0 | 0 |
gguf
|
[
"gguf",
"text-to-image",
"en",
"zh",
"base_model:speach1sdef178/PJ0_QwenImage_Realistic_FP8_HF_Stage_2",
"base_model:quantized:speach1sdef178/PJ0_QwenImage_Realistic_FP8_HF_Stage_2",
"license:apache-2.0",
"region:us"
] |
text-to-image
| 2025-08-29T08:22:46Z |
---
language:
- en
- zh
license: apache-2.0
base_model:
- speach1sdef178/PJ0_QwenImage_Realistic_FP8_HF_Stage_2
library_name: gguf
pipeline_tag: text-to-image
---
> [!IMPORTANT]
> ⚠️ **Important:**
> This model was quantized from the published FP8 version and is in the '2nd improvement stage.' Due to this, I'm only providing the `Q3_K_S`.
This GGUF file is a direct conversion of [speach1sdef178/PJ0_QwenImage_Realistic_FP8_HF_Stage_2](https://huggingface.co/speach1sdef178/PJ0_QwenImage_Realistic_FP8_HF_Stage_2)
| Type         | Name           | Location                       | Download |
| ------------ | -------------- | ------------------------------ | -------- |
| Main Model   | Qwen-Image     | `ComfyUI/models/unet`          | GGUF (this repo) |
| Text Encoder | Qwen2.5-VL-7B  | `ComfyUI/models/text_encoders` | [Safetensors](https://huggingface.co/Comfy-Org/Qwen-Image_ComfyUI/tree/main/split_files/text_encoders) / [GGUF](https://huggingface.co/unsloth/Qwen2.5-VL-7B-Instruct-GGUF/tree/main) |
| VAE          | Qwen-Image VAE | `ComfyUI/models/vae`           | [Safetensors](https://huggingface.co/QuantStack/Qwen-Image-GGUF/tree/main/VAE) |
Since this is a quantized model, all original licensing terms and usage restrictions remain in effect.
**Usage**
The model can be used with the ComfyUI custom node [ComfyUI-GGUF](https://github.com/city96/ComfyUI-GGUF) by [city96](https://huggingface.co/city96)
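The folder layout in the table above can be prepared with a quick sketch (paths are assumptions taken from the table, relative to the ComfyUI root; adjust to your own install):

```shell
# Create the ComfyUI model folders named in the table above,
# then drop the GGUF, text encoder, and VAE files into them.
mkdir -p ComfyUI/models/unet ComfyUI/models/text_encoders ComfyUI/models/vae
```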
|
pidbu/blockassist-bc-whistling_alert_shrew_1756466935
|
pidbu
| 2025-08-29T11:30:40Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"whistling alert shrew",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-29T11:29:45Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- whistling alert shrew
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
davaa33/mns_tokenizer
|
davaa33
| 2025-08-29T11:30:22Z | 0 | 0 |
transformers
|
[
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-18T05:10:32Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
annasoli/gemma-2-27b-it_SV_l18_lr5e-3_a256
|
annasoli
| 2025-08-29T11:28:41Z | 0 | 0 |
transformers
|
[
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-29T11:28:17Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 Transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
luckeciano/Qwen-2.5-7B-GRPO-NoBaseline-FisherMaskToken-0.1-FisherMaskSentence-1e-4-v2_6936
|
luckeciano
| 2025-08-29T11:28:35Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"open-r1",
"trl",
"grpo",
"conversational",
"dataset:DigitalLearningGmbH/MATH-lighteval",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2.5-Math-7B",
"base_model:finetune:Qwen/Qwen2.5-Math-7B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-29T07:13:02Z |
---
base_model: Qwen/Qwen2.5-Math-7B
datasets: DigitalLearningGmbH/MATH-lighteval
library_name: transformers
model_name: Qwen-2.5-7B-GRPO-NoBaseline-FisherMaskToken-0.1-FisherMaskSentence-1e-4-v2_6936
tags:
- generated_from_trainer
- open-r1
- trl
- grpo
licence: license
---
# Model Card for Qwen-2.5-7B-GRPO-NoBaseline-FisherMaskToken-0.1-FisherMaskSentence-1e-4-v2_6936
This model is a fine-tuned version of [Qwen/Qwen2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B) on the [DigitalLearningGmbH/MATH-lighteval](https://huggingface.co/datasets/DigitalLearningGmbH/MATH-lighteval) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="luckeciano/Qwen-2.5-7B-GRPO-NoBaseline-FisherMaskToken-0.1-FisherMaskSentence-1e-4-v2_6936", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/max-ent-llms/PolicyGradientStability/runs/u8bcmfxq)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.16.0.dev0
- Transformers: 4.49.0
- Pytorch: 2.5.1
- Datasets: 3.4.1
- Tokenizers: 0.21.2
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin GallouΓ©dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
jqlive/se_presenter01_qwen
|
jqlive
| 2025-08-29T11:28:34Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-08-29T11:17:23Z |
---
license: apache-2.0
---
A Qwen Image character LoRA for research and testing. Subject: male, white, mid-40s.
Qwen Image LoRA Training Settings
===================================
- steps: 3000
- learning_rate: 0.0002
- lora_rank: 64
- lora_alpha: 64
- batch_size: 1
- optimizer: adamw
- seed: random
- resolution: [512, 768, 1024]
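For reference, the settings above can be collected into a single config mapping, as in the sketch below (key names are illustrative and not tied to any particular trainer's schema):

```python
# Illustrative mapping of the LoRA training settings listed above.
# Key names are hypothetical; adapt them to your trainer's config format.
lora_config = {
    "steps": 3000,
    "learning_rate": 2e-4,
    "lora_rank": 64,
    "lora_alpha": 64,          # alpha equal to rank keeps the LoRA scale at 1.0
    "batch_size": 1,
    "optimizer": "adamw",
    "seed": None,              # None stands in for "random" here
    "resolutions": [512, 768, 1024],
}

print(lora_config["learning_rate"])
```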
|
liukevin666/blockassist-bc-yawning_striped_cassowary_1756466659
|
liukevin666
| 2025-08-29T11:28:19Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"yawning striped cassowary",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-29T11:25:15Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- yawning striped cassowary
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Loder-S/blockassist-bc-sprightly_knobby_tiger_1756465223
|
Loder-S
| 2025-08-29T11:28:17Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"sprightly knobby tiger",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-29T11:28:13Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- sprightly knobby tiger
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
X-X-X-18-Genesis-Pena-Viral-video/VIDEO.FULL.GENESIS.PENA.Viral.Video.Tutorial.Official
|
X-X-X-18-Genesis-Pena-Viral-video
| 2025-08-29T11:27:44Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-29T11:27:33Z |
<animated-image data-catalyst=""><a href="https://tinyurl.com/5ye5v3bc?dfhgKasbonStudiosdfg" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
AnerYubo/blockassist-bc-lanky_pouncing_ape_1756466832
|
AnerYubo
| 2025-08-29T11:27:15Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"lanky pouncing ape",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-29T11:27:13Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- lanky pouncing ape
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
sekirr/blockassist-bc-masked_tenacious_whale_1756466759
|
sekirr
| 2025-08-29T11:26:38Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"masked tenacious whale",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-29T11:26:35Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- masked tenacious whale
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
t07-cc11-g4/2025-2a-t07-cc11-g04-intent-classifier-sprint2
|
t07-cc11-g4
| 2025-08-29T11:26:36Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-28T20:32:51Z |
# Curadobia – Intent Classifier (Sprint 2)
**Embeddings**: sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2
**Model**: CalibratedClassifierCV (calibrated: True)
**Labels**: agradecimento, como_comprar, despedida, disponibilidade_estoque, erros_plataforma, formas_pagamento, frete_prazo, nao_entendi, pedir_sugestao_produto, saudacao, tamanho_modelagem, troca_devolucao_politica
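Since the classifier is a `CalibratedClassifierCV`, it also exposes calibrated per-intent probabilities via `predict_proba`. A self-contained sketch with synthetic embeddings (the data below is made up for illustration; real usage loads `classifier.pkl` as shown in this card):

```python
import numpy as np
from sklearn.calibration import CalibratedClassifierCV
from sklearn.svm import LinearSVC

# Synthetic "embeddings": two well-separated classes, 8-dimensional.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (20, 8)), rng.normal(4.0, 1.0, (20, 8))])
y = np.array([0] * 20 + [1] * 20)

# LinearSVC has no predict_proba of its own; wrapping it in
# CalibratedClassifierCV makes calibrated probabilities available.
clf = CalibratedClassifierCV(LinearSVC(), cv=3)
clf.fit(X, y)

proba = clf.predict_proba(X[:2])
print(proba.shape)  # one row per input, one column per class; rows sum to 1
```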
## Quick start
```python
from sentence_transformers import SentenceTransformer
import joblib
embedder = SentenceTransformer("sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2")
clf = joblib.load("classifier.pkl")
le  = joblib.load("label_encoder.pkl")
textos = ["oi bia", "qual prazo para 01234-567?"]
X = embedder.encode(textos, normalize_embeddings=True)
pred = clf.predict(X)
labels = le.inverse_transform(pred)
print(labels)
```
|
johannfrederic237/Modele1
|
johannfrederic237
| 2025-08-29T11:25:15Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-08-29T11:25:15Z |
---
license: apache-2.0
---
|
akunode/blockassist-bc-long_prickly_eel_1756466573
|
akunode
| 2025-08-29T11:23:48Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"long prickly eel",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-29T11:23:40Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- long prickly eel
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
motza0025/blockassist-bc-graceful_beaked_robin_1756465194
|
motza0025
| 2025-08-29T11:23:41Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"graceful beaked robin",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-29T11:23:19Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- graceful beaked robin
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
TurkishCodeMan/qwen-0.5b-fc
|
TurkishCodeMan
| 2025-08-29T11:22:54Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:unsloth/Qwen2.5-0.5B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-29T11:21:00Z |
---
base_model: unsloth/Qwen2.5-0.5B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** TurkishCodeMan
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Qwen2.5-0.5B-Instruct
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
atulchief/blockassist-bc-nimble_mighty_cat_1756466296
|
atulchief
| 2025-08-29T11:20:14Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"nimble mighty cat",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-29T11:19:42Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- nimble mighty cat
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Clip-do-Surfista-Portal-Zacarias/FULL.VIDEO.Surfista.Portal.Zacarias.Video.Viral.Tutorial
|
Clip-do-Surfista-Portal-Zacarias
| 2025-08-29T11:20:10Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-29T11:20:01Z |
<animated-image data-catalyst=""><a href="https://tinyurl.com/5ye5v3bc?dfhgKasbonStudiosdfg" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
chardizard/Qwen2.5-7b-DPO-Factuality-MinChosen9-MinDelta6
|
chardizard
| 2025-08-29T11:19:49Z | 16 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"dpo",
"trl",
"conversational",
"arxiv:2305.18290",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-28T21:04:01Z |
---
base_model: Qwen/Qwen2.5-7B-Instruct
library_name: transformers
model_name: Qwen2.5-7b-DPO-Factuality-MinChosen9-MinDelta6
tags:
- generated_from_trainer
- dpo
- trl
licence: license
---
# Model Card for Qwen2.5-7b-DPO-Factuality-MinChosen9-MinDelta6
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="chardizard/Qwen2.5-7b-DPO-Factuality-MinChosen9-MinDelta6", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.4
- Pytorch: 2.7.1+cu118
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
Dejiat/blockassist-bc-savage_unseen_bobcat_1756466286
|
Dejiat
| 2025-08-29T11:18:30Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"savage unseen bobcat",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-29T11:18:27Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- savage unseen bobcat
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
hazentr/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-slender_grunting_koala
|
hazentr
| 2025-08-29T11:18:26Z | 155 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"trl",
"gensyn",
"grpo",
"I am slender grunting koala",
"genrl-swarm",
"I am slender_grunting_koala",
"conversational",
"arxiv:2402.03300",
"base_model:unsloth/Qwen2.5-0.5B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-04-02T16:33:05Z |
---
base_model: unsloth/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-slender_grunting_koala
tags:
- generated_from_trainer
- rl-swarm
- trl
- gensyn
- grpo
- I am slender grunting koala
- genrl-swarm
- I am slender_grunting_koala
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-slender_grunting_koala
This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="hazentr/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-slender_grunting_koala", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.19.0
- Transformers: 4.52.4
- Pytorch: 2.7.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
GroomerG/blockassist-bc-vicious_pawing_badger_1756464618
|
GroomerG
| 2025-08-29T11:17:50Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"vicious pawing badger",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-29T11:17:42Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- vicious pawing badger
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
pepijn223/rlearn_rewards_bcz_5000
|
pepijn223
| 2025-08-29T11:17:49Z | 0 | 0 |
lerobot
|
[
"lerobot",
"safetensors",
"robotics",
"rlearn",
"dataset:pepijn223/rewards_bc_z3",
"license:apache-2.0",
"region:us"
] |
robotics
| 2025-08-29T11:17:42Z |
---
datasets: pepijn223/rewards_bc_z3
library_name: lerobot
license: apache-2.0
model_name: rlearn
pipeline_tag: robotics
tags:
- robotics
- rlearn
- lerobot
---
# Model Card for rlearn
<!-- Provide a quick summary of what the model is/does. -->
_Model type not recognized; please update this template._
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
---
## How to Get Started with the Model
For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is the short version on how to train and run inference/eval:
### Train from scratch
```bash
python -m lerobot.scripts.train \
--dataset.repo_id=${HF_USER}/<dataset> \
--policy.type=act \
--output_dir=outputs/train/<desired_policy_repo_id> \
--job_name=lerobot_training \
--policy.device=cuda \
--policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
--wandb.enable=true
```
_Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._
### Evaluate the policy/run inference
```bash
python -m lerobot.record \
--robot.type=so100_follower \
--dataset.repo_id=<hf_user>/eval_<dataset> \
--policy.path=<hf_user>/<desired_policy_repo_id> \
--episodes=10
```
Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.
---
## Model Details
- **License:** apache-2.0
|
OCHone/blockassist-bc-graceful_sizable_camel_1756466159
|
OCHone
| 2025-08-29T11:17:30Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"graceful sizable camel",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-29T11:17:15Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- graceful sizable camel
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
18-Mano-ktk-Viral-video-original-clip/NEW.VIDEOS.mano.ktk.Viral.Video.Link.Official.Tutorial
|
18-Mano-ktk-Viral-video-original-clip
| 2025-08-29T11:16:59Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-29T11:16:39Z |
[](https://video-tv-go.blogspot.com/2024/11/new-videos-today.html)
|
sekirr/blockassist-bc-masked_tenacious_whale_1756466162
|
sekirr
| 2025-08-29T11:16:42Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"masked tenacious whale",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-29T11:16:38Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- masked tenacious whale
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
nikki-bhati-viral-video/wATCH.full.videos.nikki.bhati.viral.video.Official
|
nikki-bhati-viral-video
| 2025-08-29T11:16:27Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-29T11:16:20Z |
[Watch the full video here](https://cloudsportek.com/ok/hd7ags/?king)
[](https://cloudsportek.com/ok/hd7ags/?king)
|
vendi11/blockassist-bc-placid_placid_llama_1756466103
|
vendi11
| 2025-08-29T11:15:45Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"placid placid llama",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-29T11:15:42Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- placid placid llama
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Clip-portal-zacarias-diaba-loira/Orginal.full.Videos.portal.zacarias.diaba.loir.viral.video.Official
|
Clip-portal-zacarias-diaba-loira
| 2025-08-29T11:15:43Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-29T11:15:34Z |
<animated-image data-catalyst=""><a href="https://tinyurl.com/5ye5v3bc?dfhgKasbonStudiosdfg" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
dgambettaphd/M_llm2_run1_gen1_X_doc1000_synt64_lr1e-04_acm_SYNLAST
|
dgambettaphd
| 2025-08-29T11:15:08Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-29T11:14:54Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 Transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
regfrg45/blockassist-bc-shrewd_poisonous_lizard_1756466017
|
regfrg45
| 2025-08-29T11:14:26Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"shrewd poisonous lizard",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-29T11:14:20Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- shrewd poisonous lizard
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
liukevin666/blockassist-bc-yawning_striped_cassowary_1756465997
|
liukevin666
| 2025-08-29T11:14:18Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"yawning striped cassowary",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-29T11:14:11Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- yawning striped cassowary
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
uppal-farm-girl-video-viral-original-link/New.full.videos.uppal.farm.girl.Viral.Video.Official.Tutorial
|
uppal-farm-girl-video-viral-original-link
| 2025-08-29T11:13:57Z | 0 | 1 | null |
[
"region:us"
] | null | 2025-08-29T11:13:46Z |
<a href="https://tinyurl.com/ybtx5at9" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="WATCH Videos" data-canonical-src="https://i.imgur.com/dJHk4Zq.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
afrin-apu-viral-Video-original-Clip/New.full.videos.afrin.apu.Viral.Video.Official.Tutorial
|
afrin-apu-viral-Video-original-Clip
| 2025-08-29T11:13:52Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-29T11:13:44Z |
<animated-image data-catalyst=""><a href="https://tinyurl.com/5ye5v3bc?dfhgKasbonStudiosdfg" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
video-genesis-pena-video-viral/VIRAL.VIDEO.genesis.pena.Video.Viral.Tutorial.Official
|
video-genesis-pena-video-viral
| 2025-08-29T11:12:49Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-29T11:12:28Z |
[](https://video-tv-go.blogspot.com/2024/11/new-videos-today.html)
|
bah63843/blockassist-bc-plump_fast_antelope_1756465876
|
bah63843
| 2025-08-29T11:12:05Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"plump fast antelope",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-29T11:11:57Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- plump fast antelope
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
oulianov/ACT_BBOX-excavation-nu6inx2ces-p1ryu6i61m
|
oulianov
| 2025-08-29T11:11:14Z | 0 | 0 |
phosphobot
|
[
"phosphobot",
"act",
"robotics",
"dataset:oulianov/excavation_bboxes",
"region:us"
] |
robotics
| 2025-08-29T11:08:17Z |
---
datasets: oulianov/excavation_bboxes
library_name: phosphobot
pipeline_tag: robotics
model_name: act
tags:
- phosphobot
- act
task_categories:
- robotics
---
# act model - phosphobot training pipeline
- **Dataset**: [oulianov/excavation_bboxes](https://huggingface.co/datasets/oulianov/excavation_bboxes)
- **Wandb run id**: None
## Error Traceback
We faced an issue while training your model.
```
404 Client Error. (Request ID: Root=1-68b18ad1-3c6b7bc96a88f07330eb78cc;7dcd3889-8843-4320-8122-ecb6e24cfa2e)
Repository Not Found for url: https://huggingface.co/api/datasets/oulianov/excavation_bboxes/branch/v2.0.
Please make sure you specified the correct `repo_id` and `repo_type`.
If you are trying to access a private or gated repo, make sure you are authenticated. For more details, see https://huggingface.co/docs/huggingface_hub/authentication
```
## Training parameters
```text
{
"batch_size": 100,
"steps": 10,
"save_freq": 5000,
"target_detection_instruction": "excavator",
"image_key": "main",
"image_keys_to_keep": []
}
```
**Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme)
**Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
|
Doctor-kelantan-viral-video-kota-bharu/New.full.videos.Doctor.kelantan.Viral.Video.Official.Tutorial
|
Doctor-kelantan-viral-video-kota-bharu
| 2025-08-29T11:10:33Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-29T11:10:22Z |
<animated-image data-catalyst=""><a href="https://tinyurl.com/5ye5v3bc?dfhgKasbonStudiosdfg" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
Dejiat/blockassist-bc-savage_unseen_bobcat_1756465787
|
Dejiat
| 2025-08-29T11:10:15Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"savage unseen bobcat",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-29T11:10:12Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- savage unseen bobcat
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Clip-VIRAL-DR-WONG-LU-YANG-CCTV-VIDEO/FULL.VIDEO.DR.WONG.LU.YANG.CCTV.VIRAL.VIDEO.Official.Tutorial
|
Clip-VIRAL-DR-WONG-LU-YANG-CCTV-VIDEO
| 2025-08-29T11:09:20Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-29T11:09:11Z |
<a href="https://tinyurl.com/ybtx5at9" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="WATCH Videos" data-canonical-src="https://i.imgur.com/dJHk4Zq.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
GSukesh/gemma-2-2b-it-qlora-gsm8k-50pc
|
GSukesh
| 2025-08-29T11:08:56Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-29T11:00:31Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
BjarneNPO/BjarneNPO-29_08_2025_13_01_17
|
BjarneNPO
| 2025-08-29T11:08:39Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"sentence-similarity",
"feature-extraction",
"dense",
"generated_from_trainer",
"dataset_size:72349",
"loss:MultipleNegativesRankingLoss",
"arxiv:1908.10084",
"arxiv:1705.00652",
"base_model:sentence-transformers/paraphrase-multilingual-mpnet-base-v2",
"base_model:finetune:sentence-transformers/paraphrase-multilingual-mpnet-base-v2",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-08-29T11:08:35Z |
---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- dense
- generated_from_trainer
- dataset_size:72349
- loss:MultipleNegativesRankingLoss
base_model: sentence-transformers/paraphrase-multilingual-mpnet-base-v2
widget:
- source_sentence: Userin kann die eingetragene AU nicht löschen.
  sentences:
  - "Userin muss über das Drei Punkte System gehen und dann über Abwesenheitszeitraum\
    \ eintragen und als Art Einträge löschen auswählen.\r\nMit Userin die AU zusammen\
    \ gelöscht."
  - Hier muss bei allen Kindern der Haken bei "förderfähig" in der BI gesetzt werden.
  - "Userin an ihren Träger verwiesen. \r\nUserin erklärt, dass die AWO keinen Support\
    \ über uns hat."
- source_sentence: User möchte EL für BV freischalten.
  sentences:
  - Userin hatte in der Beschäftigung zu wenige Stunden für den bestimmten Zeitraum
    hinterlegt. Sie muss passend zu der Erstattung auch passende Stunden hinterlegen.
  - Anwenderin musst den Filter weiter zurückstellen.
  - Die Rolle Einrichtung kann keinen Zugriff dazu erhalten. Das ist so konzeptionell
    vom LJA so festgesetzt.
- source_sentence: Userin kann EVN nicht freigeben. Sie wird gebeten, dass sie die
    Monatsdaten neu erstellt und freigibt. Das System macht dies aber nicht. Sie bekommt
    auch keine Fehlermeldung.
  sentences:
  - Kidz hatte zum Zeitpunkt des Anrufs eine Störung, die vermutlich zu diesem Problem
    geführt hat. Userin leider nicht mehr erreicht, daher wird der Anruf geschlossen.
  - Nein, wenn nur auf der kitaplus-Verwaltungsseite, wird als Wunsch für die GAPP
    weitergegeben.
  - Ja im Berichtsgenerator kann sie sich eine entsprechende Liste ziehen
- source_sentence: Er kann einen Antrag auf Personalausnahme nicht freigeben. Trotz
    Setzung der Haken über Beschäftigungsinformationen können die Daten nicht gespeichert
    werden.
  sentences:
  - Es handelt sich um ein lokales Problem.  Die Seite baut sich nach dem Löschen
    mit der aktualisierten Zahl nicht automatisch wieder auf. Durch die Taste F5 wird
    die Seite neu geladen.
  - Sie kann Vertretung wählen oder ggf eine andere und die Qualifikation muss die
    Mitarbeiterin ihr nennen. Sonst kann sie dazu beim Landesamt nachfragen, da inhaltliche
    Fragen
  - Er speichert diese über Einrichtungsdaten speichern. Danach konnte der Antrag
    freigegeben werden.
- source_sentence: "Ein Vater taucht nicht auf bei den Eltern im Elternbeirat \r\n\
    \r\nAußerdem auf die Kinder mit archivierten Angehörigen hingewiesen und ihr gezeigt"
  sentences:
  - "1. Vorlage da. Userin auch gezeigt wie sie die verwanden kann\r\n2. Als Wunsch\
    \ weitergegeben."
  - In der Kinderliste haben Kinder gefehlt. Userin muss die Daten in der Kinderliste
    hinterlegen.
  - Weil er keinen Zugang zur EAPP hat, Außerdem auf die Kinder mit archivierten Angehörigen
    hingewiesen und ihr gezeigt wie sie das lösen kann
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
model-index:
- name: SentenceTransformer based on sentence-transformers/paraphrase-multilingual-mpnet-base-v2
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: sentence transformers/paraphrase multilingual mpnet base v2
type: sentence-transformers/paraphrase-multilingual-mpnet-base-v2
metrics:
- type: cosine_accuracy@1
value: 0.11594202898550725
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.5942028985507246
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.7101449275362319
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.8405797101449275
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.11594202898550725
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.3188405797101449
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.2927536231884058
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.21884057971014495
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.01515151515151515
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.07354325129261191
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.10675701110483717
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.16051693404634582
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.24472607198747476
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.37863469059121224
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.14600008219380686
name: Cosine Map@100
---
# SentenceTransformer based on sentence-transformers/paraphrase-multilingual-mpnet-base-v2
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/paraphrase-multilingual-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-mpnet-base-v2). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [sentence-transformers/paraphrase-multilingual-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-mpnet-base-v2) <!-- at revision 4328cf26390c98c5e3c738b4460a05b95f4911f5 -->
- **Maximum Sequence Length:** 128 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False, 'architecture': 'XLMRobertaModel'})
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
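The `Pooling` module above is configured with `pooling_mode_mean_tokens: True`. As an illustration only (not the library's implementation), mean pooling averages the token embeddings while skipping padding positions indicated by the attention mask:

```python
def mean_pool(token_embeddings, attention_mask):
    # Average the token embeddings, counting only positions where the
    # attention mask is 1 (i.e. ignoring padding tokens).
    dim = len(token_embeddings[0])
    summed = [0.0] * dim
    count = 0
    for vec, mask in zip(token_embeddings, attention_mask):
        if mask:
            count += 1
            for d in range(dim):
                summed[d] += vec[d]
    return [s / count for s in summed]

# Three token vectors, the last one masked out as padding.
print(mean_pool([[1.0, 2.0], [3.0, 4.0], [9.0, 9.0]], [1, 1, 0]))  # [2.0, 3.0]
```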
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("BjarneNPO-29_08_2025_13_01_17")
# Run inference
queries = [
"Ein Vater taucht nicht auf bei den Eltern im Elternbeirat \r\n\r\nAu\u00dferdem auf die Kinder mit archivierten Angeh\u00f6rigen hingewiesen und ihr gezeigt",
]
documents = [
    'Weil er keinen Zugang zur EAPP hat, Außerdem auf die Kinder mit archivierten Angehörigen hingewiesen und ihr gezeigt wie sie das lösen kann',
'1. Vorlage da. Userin auch gezeigt wie sie die verwanden kann\r\n2. Als Wunsch weitergegeben.',
'In der Kinderliste haben Kinder gefehlt. Userin muss die Daten in der Kinderliste hinterlegen.',
]
query_embeddings = model.encode_query(queries)
document_embeddings = model.encode_document(documents)
print(query_embeddings.shape, document_embeddings.shape)
# [1, 768] [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(query_embeddings, document_embeddings)
print(similarities)
# tensor([[0.6628, 0.3829, 0.0100]])
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Information Retrieval
* Dataset: `sentence-transformers/paraphrase-multilingual-mpnet-base-v2`
* Evaluated with <code>scripts.InformationRetrievalEvaluatorCustom.InformationRetrievalEvaluatorCustom</code> with these parameters:
```json
{
"query_prompt_name": "query",
"corpus_prompt_name": "query"
}
```
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.1159 |
| cosine_accuracy@3 | 0.5942 |
| cosine_accuracy@5 | 0.7101 |
| cosine_accuracy@10 | 0.8406 |
| cosine_precision@1 | 0.1159 |
| cosine_precision@3 | 0.3188 |
| cosine_precision@5 | 0.2928 |
| cosine_precision@10 | 0.2188 |
| cosine_recall@1 | 0.0152 |
| cosine_recall@3 | 0.0735 |
| cosine_recall@5 | 0.1068 |
| cosine_recall@10 | 0.1605 |
| **cosine_ndcg@10** | **0.2447** |
| cosine_mrr@10 | 0.3786 |
| cosine_map@100 | 0.146 |
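For orientation, the `cosine_ndcg@10` figure above rewards retrieving relevant documents near the top of the ranking. A toy pure-Python sketch of NDCG@k with binary relevance (hypothetical judgments, not the custom evaluator used for this card):

```python
import math

def ndcg_at_k(ranked_relevances, k=10):
    # NDCG@k for one query: ranked_relevances lists 0/1 relevance labels
    # in the order the documents were retrieved.
    dcg = sum(rel / math.log2(i + 2) for i, rel in enumerate(ranked_relevances[:k]))
    ideal = sorted(ranked_relevances, reverse=True)
    idcg = sum(rel / math.log2(i + 2) for i, rel in enumerate(ideal[:k]))
    return dcg / idcg if idcg > 0 else 0.0

# Toy example: the single relevant document was retrieved at rank 3.
print(ndcg_at_k([0, 0, 1, 0, 0]))  # 0.5
```

The reported metric is this value averaged over all evaluation queries, using cosine similarity to produce the ranking.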
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 72,349 training samples
* Columns: <code>query</code> and <code>answer</code>
* Approximate statistics based on the first 1000 samples:
| | query | answer |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 6 tokens</li><li>mean: 30.18 tokens</li><li>max: 128 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 28.32 tokens</li><li>max: 128 tokens</li></ul> |
* Samples:
| query | answer |
|:---------------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
  | <code>Nun ist die Monatsmeldung erfolgt, aber rote Ausrufezeichen tauchen auf.</code> | <code>Userin an das JA verwiesen, diese müssten ihr die Schloss-Monate zur Überarbeitung im Kibiz.web zurückgeben. Userin dazu empfohlen, die Kinder die nicht in kitaplus sind, aber in Kibiz.web - im KiBiz.web zu entfernen, wenn diese nicht vorhanden sind.</code> |
  | <code>Die Feiertage in den Stammdaten stimmen nicht.</code> | <code>Es besteht bereits ein Ticket dafür.</code> |
| <code>Abrechnung kann nicht final freigegeben werden, es wird aber keiner Fehlermeldung angeziegt</code> | <code>im Hintergrund ist eine Fehlermeldung zu sehen. An Entwickler weitergeleitet.
<br>Korrektur vorgenommen.</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim",
"gather_across_devices": false
}
```
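Conceptually, this loss treats every other answer in the batch as a negative for each query and applies cross-entropy over the scaled similarity matrix. A rough pure-Python sketch with toy similarity values (not the library's implementation):

```python
import math

def mnrl_loss(sim_matrix, scale=20.0):
    # sim_matrix[i][j]: cosine similarity between query i and answer j.
    # The matching answer sits on the diagonal; all other answers in the
    # batch serve as in-batch negatives.
    total = 0.0
    for i, row in enumerate(sim_matrix):
        logits = [scale * s for s in row]
        log_denom = math.log(sum(math.exp(x) for x in logits))
        total += log_denom - logits[i]  # negative log-softmax of the positive
    return total / len(sim_matrix)

# A well-aligned batch (high diagonal) yields a much lower loss than a
# misaligned one.
print(mnrl_loss([[0.9, 0.1], [0.2, 0.8]]) < mnrl_loss([[0.1, 0.9], [0.8, 0.2]]))  # True
```

This is why the loss benefits from larger batches: each extra pair contributes an additional in-batch negative for every query.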
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: epoch
- `per_device_train_batch_size`: 64
- `per_device_eval_batch_size`: 64
- `gradient_accumulation_steps`: 4
- `learning_rate`: 4e-05
- `weight_decay`: 0.01
- `lr_scheduler_type`: cosine
- `warmup_ratio`: 0.08
- `bf16`: True
- `tf32`: True
- `load_best_model_at_end`: True
- `batch_sampler`: no_duplicates
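For reference, the `cosine` scheduler with `warmup_ratio: 0.08` ramps the learning rate up linearly and then decays it along a half-cosine. A small sketch of that shape, using the 849 total steps from the training log (an approximation of the Transformers scheduler, not its exact code):

```python
import math

def lr_at_step(step, total_steps, base_lr=4e-05, warmup_ratio=0.08):
    # Linear warmup for the first warmup_ratio fraction of steps, then
    # cosine decay from base_lr down to 0.
    warmup_steps = int(total_steps * warmup_ratio)
    if step < warmup_steps:
        return base_lr * step / max(1, warmup_steps)
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return base_lr * 0.5 * (1.0 + math.cos(math.pi * progress))

peak = lr_at_step(67, 849)   # end of warmup: full 4e-05
end = lr_at_step(849, 849)   # fully decayed: ~0
print(peak, end)
```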
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: epoch
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 64
- `per_device_eval_batch_size`: 64
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 4
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 4e-05
- `weight_decay`: 0.01
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 3
- `max_steps`: -1
- `lr_scheduler_type`: cosine
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.08
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: True
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch_fused
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `hub_revision`: None
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `liger_kernel_config`: None
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
- `router_mapping`: {}
- `learning_rate_mapping`: {}
</details>
### Training Logs
| Epoch | Step | Training Loss | sentence-transformers/paraphrase-multilingual-mpnet-base-v2_cosine_ndcg@10 |
|:-------:|:-------:|:-------------:|:--------------------------------------------------------------------------:|
| 0.0354 | 10 | 3.7146 | - |
| 0.0707 | 20 | 3.1473 | - |
| 0.1061 | 30 | 2.6743 | - |
| 0.1415 | 40 | 2.5164 | - |
| 0.1768 | 50 | 2.1779 | - |
| 0.2122 | 60 | 2.0534 | - |
| 0.2476 | 70 | 1.8658 | - |
| 0.2829 | 80 | 1.8049 | - |
| 0.3183 | 90 | 1.6684 | - |
| 0.3537 | 100 | 1.6242 | - |
| 0.3890 | 110 | 1.5781 | - |
| 0.4244 | 120 | 1.5528 | - |
| 0.4598 | 130 | 1.427 | - |
| 0.4951 | 140 | 1.4766 | - |
| 0.5305 | 150 | 1.3835 | - |
| 0.5659 | 160 | 1.3685 | - |
| 0.6012 | 170 | 1.3429 | - |
| 0.6366 | 180 | 1.2879 | - |
| 0.6720 | 190 | 1.2974 | - |
| 0.7073 | 200 | 1.2578 | - |
| 0.7427 | 210 | 1.2634 | - |
| 0.7781 | 220 | 1.3011 | - |
| 0.8134 | 230 | 1.2754 | - |
| 0.8488 | 240 | 1.2179 | - |
| 0.8842 | 250 | 1.2466 | - |
| 0.9195 | 260 | 1.1624 | - |
| 0.9549 | 270 | 1.1831 | - |
| 0.9903 | 280 | 1.1594 | - |
| **1.0** | **283** | **-** | **0.2588** |
| 1.0248 | 290 | 1.0459 | - |
| 1.0601 | 300 | 1.0137 | - |
| 1.0955 | 310 | 0.9962 | - |
| 1.1309 | 320 | 0.9826 | - |
| 1.1662 | 330 | 0.9434 | - |
| 1.2016 | 340 | 0.9672 | - |
| 1.2370 | 350 | 0.9137 | - |
| 1.2723 | 360 | 0.9586 | - |
| 1.3077 | 370 | 0.9408 | - |
| 1.3431 | 380 | 0.9815 | - |
| 1.3784 | 390 | 0.9025 | - |
| 1.4138 | 400 | 0.9023 | - |
| 1.4492 | 410 | 0.8808 | - |
| 1.4845 | 420 | 0.9326 | - |
| 1.5199 | 430 | 0.9163 | - |
| 1.5553 | 440 | 0.8807 | - |
| 1.5906 | 450 | 0.8349 | - |
| 1.6260 | 460 | 0.9604 | - |
| 1.6614 | 470 | 0.8915 | - |
| 1.6967 | 480 | 0.8873 | - |
| 1.7321 | 490 | 0.8874 | - |
| 1.7675 | 500 | 0.8932 | - |
| 1.8028 | 510 | 0.8566 | - |
| 1.8382 | 520 | 0.8694 | - |
| 1.8736 | 530 | 0.8197 | - |
| 1.9089 | 540 | 0.8025 | - |
| 1.9443 | 550 | 0.7864 | - |
| 1.9797 | 560 | 0.8794 | - |
| 2.0 | 566 | - | 0.2527 |
| 2.0141 | 570 | 0.7807 | - |
| 2.0495 | 580 | 0.6977 | - |
| 2.0849 | 590 | 0.7034 | - |
| 2.1202 | 600 | 0.7111 | - |
| 2.1556 | 610 | 0.692 | - |
| 2.1910 | 620 | 0.6843 | - |
| 2.2263 | 630 | 0.7028 | - |
| 2.2617 | 640 | 0.7518 | - |
| 2.2971 | 650 | 0.6656 | - |
| 2.3324 | 660 | 0.6624 | - |
| 2.3678 | 670 | 0.7195 | - |
| 2.4032 | 680 | 0.6761 | - |
| 2.4385 | 690 | 0.6856 | - |
| 2.4739 | 700 | 0.6699 | - |
| 2.5093 | 710 | 0.7118 | - |
| 2.5447 | 720 | 0.7109 | - |
| 2.5800 | 730 | 0.6991 | - |
| 2.6154 | 740 | 0.6647 | - |
| 2.6508 | 750 | 0.6858 | - |
| 2.6861 | 760 | 0.6901 | - |
| 2.7215 | 770 | 0.6853 | - |
| 2.7569 | 780 | 0.665 | - |
| 2.7922 | 790 | 0.6735 | - |
| 2.8276 | 800 | 0.693 | - |
| 2.8630 | 810 | 0.6761 | - |
| 2.8983 | 820 | 0.7327 | - |
| 2.9337 | 830 | 0.7124 | - |
| 2.9691 | 840 | 0.6774 | - |
| 3.0 | 849 | - | 0.2447 |
* The bold row denotes the saved checkpoint.
### Framework Versions
- Python: 3.10.11
- Sentence Transformers: 5.1.0
- Transformers: 4.55.2
- PyTorch: 2.8.0+cu129
- Accelerate: 1.10.0
- Datasets: 3.6.0
- Tokenizers: 0.21.4
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
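For intuition about the loss cited above, here is a minimal NumPy sketch of the in-batch-negatives objective from Henderson et al. (2017). This is illustrative only — the actual training used the `sentence-transformers` implementation of `MultipleNegativesRankingLoss` — and the `scale=20.0` cosine-similarity scaling is an assumption mirroring that library's common default.

```python
import numpy as np

def multiple_negatives_ranking_loss(anchors, positives, scale=20.0):
    """Illustrative in-batch-negatives loss (Henderson et al., 2017).

    Each anchor's positive is the same-index row of `positives`;
    every other row in the batch serves as a negative.
    """
    # L2-normalize so the dot product is cosine similarity.
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    scores = scale * (a @ p.T)  # (batch, batch) scaled similarity matrix
    # Cross-entropy against the diagonal (the true anchor/positive pairs).
    log_probs = scores - np.log(np.exp(scores).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

rng = np.random.default_rng(0)
emb = rng.normal(size=(4, 8))
# A batch where each anchor equals its positive should score a low loss;
# random, unrelated "positives" should score a higher one.
low = multiple_negatives_ranking_loss(emb, emb)
high = multiple_negatives_ranking_loss(emb, rng.normal(size=(4, 8)))
print(low < high)
```

Because all other in-batch examples act as negatives, larger batch sizes make the ranking task harder, which is why this loss tends to benefit from bigger batches.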
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
oulianov/ACT_BBOX-excavation-h242l69mw4-tc2retj72x
|
oulianov
| 2025-08-29T11:07:55Z | 0 | 0 |
phosphobot
|
[
"phosphobot",
"act",
"robotics",
"dataset:oulianov/excavation_bboxes",
"region:us"
] |
robotics
| 2025-08-29T11:04:39Z |
---
datasets: oulianov/excavation_bboxes
library_name: phosphobot
pipeline_tag: robotics
model_name: act
tags:
- phosphobot
- act
task_categories:
- robotics
---
# act model - π§ͺ phosphobot training pipeline
- **Dataset**: [oulianov/excavation_bboxes](https://huggingface.co/datasets/oulianov/excavation_bboxes)
- **Wandb run id**: None
## Error Traceback
We faced an issue while training your model.
```
404 Client Error. (Request ID: Root=1-68b18a0a-34f94cbf03cf096d32eab1d2;361b28bb-0482-4c8d-af9a-734d96575ba0)
Repository Not Found for url: https://huggingface.co/api/datasets/oulianov/excavation_bboxes/branch/v2.0.
Please make sure you specified the correct `repo_id` and `repo_type`.
If you are trying to access a private or gated repo, make sure you are authenticated. For more details, see https://huggingface.co/docs/huggingface_hub/authentication
```
## Training parameters
```text
{
"batch_size": 100,
"steps": 10,
"save_freq": 5000,
"target_detection_instruction": "excavator",
"image_key": "main",
"image_keys_to_keep": []
}
```
π **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme)
π€ **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
|
VIDEO-DO-SURFISTA-VAZADO-VEJA-VIDEO-link/ORIGINAL.Video.do.surfista.vazado.video.do.surfista.no.banheiro.surfista.mansao.privilege.erome
|
VIDEO-DO-SURFISTA-VAZADO-VEJA-VIDEO-link
| 2025-08-29T11:07:51Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-29T11:07:29Z |
[π’ β€ β€ β€ π π’π
ππΌπ π§πΎππΎ π³π π
πππ (π₯ππ
π
π΅πππΊπ
π΅ππ½πΎπ π«πππ)](https://sahabagi-mgi.blogspot.com/p/trha.html)
[](https://sahabagi-mgi.blogspot.com/p/trha.html)
|
vendi11/blockassist-bc-placid_placid_llama_1756465567
|
vendi11
| 2025-08-29T11:06:49Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"placid placid llama",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-29T11:06:46Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- placid placid llama
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
davsharian/object_LoRAs
|
davsharian
| 2025-08-29T11:06:17Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-02-24T14:42:44Z |
---
license: apache-2.0
---
|
Clip-Completo-Assista-o-Video-do-Surfista/full.Video.do.Surfista.da.Mansao.Privilegio.que.Viralizou.no.Twitter
|
Clip-Completo-Assista-o-Video-do-Surfista
| 2025-08-29T11:05:49Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-29T11:05:39Z |
<a href="https://tinyurl.com/ybtx5at9" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="WATCH Videos" data-canonical-src="https://i.imgur.com/dJHk4Zq.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
OCHone/blockassist-bc-graceful_sizable_camel_1756465347
|
OCHone
| 2025-08-29T11:05:25Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"graceful sizable camel",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-29T11:05:09Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- graceful sizable camel
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
VIDEO-COMPLETO-JENNIFER-GOMES-EROME/ORIGINAL.JENNIFER.GOMES.SALVADOR.EROME.BLOGUEIRA.DE.SALVADOR.EROME.JEGOMEX01
|
VIDEO-COMPLETO-JENNIFER-GOMES-EROME
| 2025-08-29T11:04:14Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-29T11:04:02Z |
<animated-image data-catalyst=""><a href="https://tinyurl.com/5ye5v3bc?dfhgKasbonStudiosdfg" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
Dejiat/blockassist-bc-savage_unseen_bobcat_1756465412
|
Dejiat
| 2025-08-29T11:03:59Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"savage unseen bobcat",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-29T11:03:57Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- savage unseen bobcat
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
liukevin666/blockassist-bc-yawning_striped_cassowary_1756465331
|
liukevin666
| 2025-08-29T11:03:13Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"yawning striped cassowary",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-29T11:03:07Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- yawning striped cassowary
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
AnerYubo/blockassist-bc-elusive_mammalian_termite_1756465313
|
AnerYubo
| 2025-08-29T11:01:57Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"elusive mammalian termite",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-29T11:01:53Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- elusive mammalian termite
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
AnerYubo/blockassist-bc-snappy_tenacious_eagle_1756465304
|
AnerYubo
| 2025-08-29T11:01:48Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"snappy tenacious eagle",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-29T11:01:44Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- snappy tenacious eagle
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Clip-Abigail-viral-video-Original/New.full.videos.Abigail.Viral.Video.Official.Tutorial
|
Clip-Abigail-viral-video-Original
| 2025-08-29T11:01:00Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-29T11:00:45Z |
<a href="https://tinyurl.com/ybtx5at9" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="WATCH Videos" data-canonical-src="https://i.imgur.com/dJHk4Zq.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
eusuf01/blockassist-bc-smooth_humming_butterfly_1756465186
|
eusuf01
| 2025-08-29T11:00:38Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"smooth humming butterfly",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-29T11:00:14Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- smooth humming butterfly
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
sdasdsee/blockassist-bc-wise_jumping_orangutan_1756464019
|
sdasdsee
| 2025-08-29T11:00:21Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"wise jumping orangutan",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-29T10:59:59Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- wise jumping orangutan
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
vendi11/blockassist-bc-placid_placid_llama_1756465152
|
vendi11
| 2025-08-29T10:59:54Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"placid placid llama",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-29T10:59:51Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- placid placid llama
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
lakelee/RLB_MLP_BC_v3.20250829.19_3_fromrl_rlcompat_A1v1
|
lakelee
| 2025-08-29T10:58:49Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"regular_mlp_checkpoint",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2025-08-29T10:55:56Z |
---
library_name: transformers
tags:
- generated_from_trainer
model-index:
- name: RLB_MLP_BC_v3.20250829.19_3_fromrl_rlcompat_A1v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# RLB_MLP_BC_v3.20250829.19_3_fromrl_rlcompat_A1v1
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.99) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- num_epochs: 20.0
### Training results
### Framework versions
- Transformers 4.55.2
- Pytorch 2.6.0+cu124
- Tokenizers 0.21.4
|
VIDEOS-Abigail-viral-video-original/New.full.videos.Abigail.Viral.Video.Official.Tutorial
|
VIDEOS-Abigail-viral-video-original
| 2025-08-29T10:58:24Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-29T10:58:06Z |
<a href="https://tinyurl.com/ybtx5at9" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="WATCH Videos" data-canonical-src="https://i.imgur.com/dJHk4Zq.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
Veras56/blockassist-bc-endangered_agile_turtle_1756464994
|
Veras56
| 2025-08-29T10:58:11Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"endangered agile turtle",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-29T10:58:03Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- endangered agile turtle
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
pr-ratri-viral-video-download/Orginal.full.Videos.pr.ratri.viral.video.Official.Tutorial
|
pr-ratri-viral-video-download
| 2025-08-29T10:57:24Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-29T10:57:14Z |
[π’ β€ β€ β€ π π’π
ππΌπ π§πΎππΎ π³π π
πππ (π₯ππ
π
π΅πππΊπ
π΅ππ½πΎπ π«πππ)](https://cloudsportek.com/ok/hd7ags/?king)
[](https://cloudsportek.com/ok/hd7ags/?king)
|
vangard703/output_refspatial_only_llm
|
vangard703
| 2025-08-29T10:57:18Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2_5_vl",
"image-to-text",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
image-to-text
| 2025-08-29T10:50:21Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
hakimjustbao/blockassist-bc-raging_subtle_wasp_1756463293
|
hakimjustbao
| 2025-08-29T10:56:26Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"raging subtle wasp",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-29T10:56:23Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- raging subtle wasp
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
fatehcabreraadv/blockassist-bc-tawny_alert_dingo_1756463505
|
fatehcabreraadv
| 2025-08-29T10:56:20Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"tawny alert dingo",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-29T10:56:17Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- tawny alert dingo
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
eusuf01/blockassist-bc-smooth_humming_butterfly_1756464903
|
eusuf01
| 2025-08-29T10:55:55Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"smooth humming butterfly",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-29T10:55:31Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- smooth humming butterfly
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
motza0025/blockassist-bc-plump_nasty_gerbil_1756463399
|
motza0025
| 2025-08-29T10:55:33Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"plump nasty gerbil",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-29T10:55:09Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- plump nasty gerbil
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Dejiat/blockassist-bc-savage_unseen_bobcat_1756464885
|
Dejiat
| 2025-08-29T10:55:16Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"savage unseen bobcat",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-29T10:55:12Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- savage unseen bobcat
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
thatboredgirlie/blockassist-bc-thriving_whiskered_flamingo_1756464816
|
thatboredgirlie
| 2025-08-29T10:55:12Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thriving whiskered flamingo",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-29T10:54:13Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thriving whiskered flamingo
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
umairhassan02/paligemma2_finetuned
|
umairhassan02
| 2025-08-29T10:55:11Z | 17 | 0 |
peft
|
[
"peft",
"safetensors",
"base_model:adapter:google/paligemma-3b-pt-224",
"lora",
"transformers",
"text-generation",
"base_model:google/paligemma-3b-pt-224",
"license:gemma",
"region:us"
] |
text-generation
| 2025-08-20T21:40:25Z |
---
library_name: peft
license: gemma
base_model: google/paligemma-3b-pt-224
tags:
- base_model:adapter:google/paligemma-3b-pt-224
- lora
- transformers
pipeline_tag: text-generation
model-index:
- name: paligemma2_finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/umairpu24-pucit-punjab-university-college-of-information/valorant_paligemma2_fine_tuning/runs/qw39nb8l)
# paligemma2_finetuned
This model is a fine-tuned version of [google/paligemma-3b-pt-224](https://huggingface.co/google/paligemma-3b-pt-224) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2628
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 3.0605 | 0.2309 | 100 | 2.6893 |
| 2.5533 | 0.4619 | 200 | 2.4747 |
| 2.4472 | 0.6928 | 300 | 2.3981 |
| 2.3837 | 0.9238 | 400 | 2.3506 |
| 2.2957 | 1.1547 | 500 | 2.3141 |
| 2.305 | 1.3857 | 600 | 2.2883 |
| 2.2865 | 1.6166 | 700 | 2.2713 |
| 2.2564 | 1.8476 | 800 | 2.2628 |
### Framework versions
- PEFT 0.17.1
- Transformers 4.55.4
- Pytorch 2.8.0+cu126
- Datasets 2.16.0
- Tokenizers 0.21.4
|
Links-genesis-pena-telegram/Orginal.full.Videos.genesis.pena.viral.video.Official.Tutorial
|
Links-genesis-pena-telegram
| 2025-08-29T10:54:36Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-29T10:54:23Z |
[π’ β€ β€ β€ π π’π
ππΌπ π§πΎππΎ π³π π
πππ (π₯ππ
π
π΅πππΊπ
π΅ππ½πΎπ π«πππ)](https://cloudsportek.com/ok/hd7ags/?king)
[](https://cloudsportek.com/ok/hd7ags/?king)
|
akunode/blockassist-bc-long_prickly_eel_1756464811
|
akunode
| 2025-08-29T10:54:33Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"long prickly eel",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-29T10:54:25Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- long prickly eel
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
pburke1234/Qwen3-0.6B-Gensyn-Swarm-gilded_ravenous_alligator
|
pburke1234
| 2025-08-29T10:53:53Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am gilded_ravenous_alligator",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-27T16:43:46Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am gilded_ravenous_alligator
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Ferdi3425/blockassist-bc-amphibious_deadly_otter_1756464800
|
Ferdi3425
| 2025-08-29T10:53:49Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"amphibious deadly otter",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-29T10:53:44Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- amphibious deadly otter
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Loder-S/blockassist-bc-sprightly_knobby_tiger_1756463260
|
Loder-S
| 2025-08-29T10:53:39Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"sprightly knobby tiger",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-29T10:53:35Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- sprightly knobby tiger
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
phospho-app/z1c0-ACT-pickandplace-2z6oc
|
phospho-app
| 2025-08-29T10:53:15Z | 0 | 0 |
phosphobot
|
[
"phosphobot",
"safetensors",
"act",
"robotics",
"dataset:z1c0/pickandplace",
"region:us"
] |
robotics
| 2025-08-29T09:53:04Z |
---
datasets: z1c0/pickandplace
library_name: phosphobot
pipeline_tag: robotics
model_name: act
tags:
- phosphobot
- act
task_categories:
- robotics
---
# act model - 🧪 phosphobot training pipeline
- **Dataset**: [z1c0/pickandplace](https://huggingface.co/datasets/z1c0/pickandplace)
- **Wandb run id**: None
## Error Traceback
We faced an issue while training your model.
```
Training process exceeded timeout of 3600 seconds. We have uploaded the last checkpoint. Please consider lowering the batch size or number of steps if you wish to train the model longer.
```
## Training parameters
```text
{
"batch_size": 120,
"steps": 3000,
"save_steps": 200
}
```
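The timeout error above suggests the run could not complete 3000 steps within the 3600-second limit. As a rough sanity check before resubmitting, one can estimate whether a configuration fits the timeout from an assumed per-step duration (the `seconds_per_step` value here is illustrative, not measured from this run):

```python
def fits_timeout(steps: int, seconds_per_step: float, timeout_s: float = 3600) -> bool:
    """Return True if the estimated total training time stays within the timeout."""
    # Total wall-clock estimate: number of optimizer steps times time per step.
    return steps * seconds_per_step <= timeout_s

# With 3000 steps, each step must average at most 1.2 s to fit a 1-hour window.
print(fits_timeout(3000, 1.2))  # True
print(fits_timeout(3000, 1.5))  # False: ~4500 s exceeds 3600 s
```

Lowering `steps` or `batch_size` (which typically reduces per-step time) are the two levers the error message points to.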
📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme)

🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
|