| modelId (string, length 5–139) | author (string, length 2–42) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 – 2025-09-12 06:31:37) | downloads (int64, 0 – 223M) | likes (int64, 0 – 11.7k) | library_name (555 classes) | tags (list, length 1 – 4.05k) | pipeline_tag (55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 – 2025-09-12 06:31:07) | card (string, length 11 – 1.01M) |
|---|---|---|---|---|---|---|---|---|---|
hbfc7671/blockassist-bc-mighty_small_fox_1757603365
|
hbfc7671
| 2025-09-11T15:09:36Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"mighty small fox",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T15:09:32Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- mighty small fox
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mehere23/gpt-oss-20b
|
mehere23
| 2025-09-11T15:09:24Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_oss",
"text-generation",
"vllm",
"conversational",
"arxiv:2508.10925",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"mxfp4",
"region:us"
] |
text-generation
| 2025-09-11T15:08:14Z |
---
license: apache-2.0
pipeline_tag: text-generation
library_name: transformers
tags:
- vllm
---
<p align="center">
<img alt="gpt-oss-20b" src="https://raw.githubusercontent.com/openai/gpt-oss/main/docs/gpt-oss-20b.svg">
</p>
<p align="center">
<a href="https://gpt-oss.com"><strong>Try gpt-oss</strong></a> ·
<a href="https://cookbook.openai.com/topic/gpt-oss"><strong>Guides</strong></a> ·
<a href="https://arxiv.org/abs/2508.10925"><strong>Model card</strong></a> ·
<a href="https://openai.com/index/introducing-gpt-oss/"><strong>OpenAI blog</strong></a>
</p>
<br>
Welcome to the gpt-oss series, [OpenAI’s open-weight models](https://openai.com/open-models) designed for powerful reasoning, agentic tasks, and versatile developer use cases.
We’re releasing two flavors of these open models:
- `gpt-oss-120b` — for production, general purpose, high reasoning use cases that fit into a single 80GB GPU (like NVIDIA H100 or AMD MI300X) (117B parameters with 5.1B active parameters)
- `gpt-oss-20b` — for lower latency, and local or specialized use cases (21B parameters with 3.6B active parameters)
Both models were trained on our [harmony response format](https://github.com/openai/harmony) and should only be used with the harmony format; they will not work correctly otherwise.
> [!NOTE]
> This model card is dedicated to the smaller `gpt-oss-20b` model. Check out [`gpt-oss-120b`](https://huggingface.co/openai/gpt-oss-120b) for the larger model.
# Highlights
* **Permissive Apache 2.0 license:** Build freely without copyleft restrictions or patent risk—ideal for experimentation, customization, and commercial deployment.
* **Configurable reasoning effort:** Easily adjust the reasoning effort (low, medium, high) based on your specific use case and latency needs.
* **Full chain-of-thought:** Gain complete access to the model’s reasoning process, facilitating easier debugging and increased trust in outputs. The chain-of-thought is not intended to be shown to end users.
* **Fine-tunable:** Fully customize models to your specific use case through parameter fine-tuning.
* **Agentic capabilities:** Use the models’ native capabilities for function calling, [web browsing](https://github.com/openai/gpt-oss/tree/main?tab=readme-ov-file#browser), [Python code execution](https://github.com/openai/gpt-oss/tree/main?tab=readme-ov-file#python), and Structured Outputs.
* **MXFP4 quantization:** The models were post-trained with MXFP4 quantization of the MoE weights, making `gpt-oss-120b` run on a single 80GB GPU (like NVIDIA H100 or AMD MI300X) and the `gpt-oss-20b` model run within 16GB of memory. All evals were performed with the same MXFP4 quantization.
---
# Inference examples
## Transformers
You can use `gpt-oss-120b` and `gpt-oss-20b` with Transformers. If you use the Transformers chat template, it will automatically apply the [harmony response format](https://github.com/openai/harmony). If you use `model.generate` directly, you need to apply the harmony format manually using the chat template or use our [openai-harmony](https://github.com/openai/harmony) package.
To get started, install the dependencies needed to set up your environment:
```bash
pip install -U transformers kernels torch
```
Once set up, you can run the model with the snippet below:
```py
from transformers import pipeline
import torch
model_id = "openai/gpt-oss-20b"
pipe = pipeline(
"text-generation",
model=model_id,
torch_dtype="auto",
device_map="auto",
)
messages = [
{"role": "user", "content": "Explain quantum mechanics clearly and concisely."},
]
outputs = pipe(
messages,
max_new_tokens=256,
)
print(outputs[0]["generated_text"][-1])
```
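If you prefer calling `model.generate` directly, as mentioned above, the chat template can apply the harmony format for you. A minimal sketch (illustrative, not the only supported path):
```py
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "openai/gpt-oss-20b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [
    {"role": "user", "content": "Explain quantum mechanics clearly and concisely."},
]
# The chat template applies the harmony response format to the messages.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:]))
```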
Alternatively, you can run the model via [`Transformers Serve`](https://huggingface.co/docs/transformers/main/serving) to spin up an OpenAI-compatible web server:
```bash
transformers serve
transformers chat localhost:8000 --model-name-or-path openai/gpt-oss-20b
```
[Learn more about how to use gpt-oss with Transformers.](https://cookbook.openai.com/articles/gpt-oss/run-transformers)
## vLLM
vLLM recommends using [uv](https://docs.astral.sh/uv/) for Python dependency management. You can use vLLM to spin up an OpenAI-compatible webserver. The following command will automatically download the model and start the server.
```bash
uv pip install --pre vllm==0.10.1+gptoss \
--extra-index-url https://wheels.vllm.ai/gpt-oss/ \
--extra-index-url https://download.pytorch.org/whl/nightly/cu128 \
--index-strategy unsafe-best-match
vllm serve openai/gpt-oss-20b
```
[Learn more about how to use gpt-oss with vLLM.](https://cookbook.openai.com/articles/gpt-oss/run-vllm)
## PyTorch / Triton
To learn about how to use this model with PyTorch and Triton, check out our [reference implementations in the gpt-oss repository](https://github.com/openai/gpt-oss?tab=readme-ov-file#reference-pytorch-implementation).
## Ollama
If you are trying to run gpt-oss on consumer hardware, you can use Ollama by running the following commands after [installing Ollama](https://ollama.com/download).
```bash
# gpt-oss-20b
ollama pull gpt-oss:20b
ollama run gpt-oss:20b
```
[Learn more about how to use gpt-oss with Ollama.](https://cookbook.openai.com/articles/gpt-oss/run-locally-ollama)
## LM Studio
If you are using [LM Studio](https://lmstudio.ai/), you can use the following command to download the model.
```bash
# gpt-oss-20b
lms get openai/gpt-oss-20b
```
Check out our [awesome list](https://github.com/openai/gpt-oss/blob/main/awesome-gpt-oss.md) for a broader collection of gpt-oss resources and inference partners.
---
# Download the model
You can download the model weights from the [Hugging Face Hub](https://huggingface.co/collections/openai/gpt-oss-68911959590a1634ba11c7a4) directly using the Hugging Face CLI:
```shell
# gpt-oss-20b
huggingface-cli download openai/gpt-oss-20b --include "original/*" --local-dir gpt-oss-20b/
pip install gpt-oss
python -m gpt_oss.chat model/
```
# Reasoning levels
You can adjust the reasoning level that suits your task across three levels:
* **Low:** Fast responses for general dialogue.
* **Medium:** Balanced speed and detail.
* **High:** Deep and detailed analysis.
The reasoning level can be set in the system prompt, e.g., "Reasoning: high".
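For example, with the Transformers pipeline shown earlier, the level could be passed via a system message (a minimal sketch that reuses the `pipe` object from the Transformers example above):
```py
messages = [
    {"role": "system", "content": "Reasoning: high"},  # reasoning level set in the system prompt
    {"role": "user", "content": "Explain quantum mechanics clearly and concisely."},
]
outputs = pipe(messages, max_new_tokens=256)
print(outputs[0]["generated_text"][-1])
```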
# Tool use
The gpt-oss models are excellent for:
* Web browsing (using built-in browsing tools)
* Function calling with defined schemas
* Agentic operations like browser tasks
# Fine-tuning
Both gpt-oss models can be fine-tuned for a variety of specialized use cases.
This smaller model `gpt-oss-20b` can be fine-tuned on consumer hardware, whereas the larger [`gpt-oss-120b`](https://huggingface.co/openai/gpt-oss-120b) can be fine-tuned on a single H100 node.
# Citation
```bibtex
@misc{openai2025gptoss120bgptoss20bmodel,
title={gpt-oss-120b & gpt-oss-20b Model Card},
author={OpenAI},
year={2025},
eprint={2508.10925},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2508.10925},
}
```
|
rodriquezb087/blockassist-bc-dormant_pensive_cat_1757603318
|
rodriquezb087
| 2025-09-11T15:08:46Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"burrowing thorny gibbon",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T15:08:43Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- burrowing thorny gibbon
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
oxleybranan/blockassist-bc-amphibious_tricky_platypus_1757603259
|
oxleybranan
| 2025-09-11T15:07:52Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"amphibious tricky platypus",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T15:07:48Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- amphibious tricky platypus
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
yesniorka/blockassist-bc-stocky_large_dove_1757603261
|
yesniorka
| 2025-09-11T15:07:49Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"amphibious tricky platypus",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T15:07:46Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- amphibious tricky platypus
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
borsahopa67/blockassist-bc-polished_quiet_badger_1757603226
|
borsahopa67
| 2025-09-11T15:07:19Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"snorting majestic condor",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T15:07:15Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- snorting majestic condor
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
radlab/semantic-euro-bert-encoder-v1
|
radlab
| 2025-09-11T15:07:14Z | 20 | 1 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"eurobert",
"- embeddings",
"plwordnet",
"semantic-relations",
"semantic-search",
"sentence-similarity",
"custom_code",
"pl",
"en",
"de",
"base_model:EuroBERT/EuroBERT-610m",
"base_model:finetune:EuroBERT/EuroBERT-610m",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-08-26T23:36:02Z |
---
license: apache-2.0
language:
- pl
- en
- de
base_model:
- EuroBERT/EuroBERT-610m
tags:
- sentence-transformers
- '- embeddings'
- plwordnet
- semantic-relations
- semantic-search
pipeline_tag: sentence-similarity
---
# PLWordNet Semantic Embedder (bi-encoder)
A Polish semantic embedder trained on pairs constructed from plWordNet (Słowosieć) semantic relations and external descriptions of meanings.
Every relation between lexical units and synsets is transformed into training/evaluation examples.
The dataset mixes signals about how meanings are used: emotion annotations, definitions, and external descriptions (Wikipedia articles split into sentences).
The embedder mirrors the semantic relations: it pulls together embeddings of items linked by “positive” relations
(e.g., synonymy, hypernymy/hyponymy as defined in the dataset) and pushes apart embeddings of items linked by “negative”
relations (e.g., antonymy or mutually exclusive relations). Source code and training scripts:
- GitHub: [https://github.com/radlab-dev-group/radlab-plwordnet](https://github.com/radlab-dev-group/radlab-plwordnet)
## Model summary
- **Architecture**: bi-encoder built with `sentence-transformers` (transformer encoder + pooling).
- **Use cases**: semantic similarity and semantic search for Polish words, senses, definitions, and sentences.
- **Objective**: CosineSimilarityLoss on positive/negative pairs.
- **Behavior**: preserves the topology of semantic relations derived from plWordNet.
## Training data
Constructed from plWordNet relations between lexical units and synsets; each relation yields example pairs.
Augmented with:
- definitions,
- usage examples (including emotion annotations where available),
- external descriptions from Wikipedia (split into sentences).
Positive pairs correspond to relations expected to increase similarity;
negative pairs correspond to relations expected to decrease similarity.
Additional hard/soft negatives may include unrelated meanings.
## Training details
- **Trainer**: `SentenceTransformerTrainer`
- **Loss**: `CosineSimilarityLoss`
- **Evaluator**: `EmbeddingSimilarityEvaluator` (cosine)
- Typical **hyperparameters**:
- epochs: 5
- per-device batch size: 10 (gradient accumulation: 4)
- learning rate: 5e-6 (AdamW fused)
- weight decay: 0.01
- warmup: 20k steps
- fp16: true
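A minimal sketch of this setup with the `sentence-transformers` trainer API (illustrative only; the dataset below is a toy placeholder, not the plWordNet-derived pairs):
```python
from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import CosineSimilarityLoss

model = SentenceTransformer("EuroBERT/EuroBERT-610m", trust_remote_code=True)

# Toy pairs: score 1.0 for "positive" relations, 0.0 for "negative" ones.
train_dataset = Dataset.from_dict({
    "sentence1": ["student", "zamek"],
    "sentence2": ["żak", "wiadro"],
    "score": [1.0, 0.0],
})

args = SentenceTransformerTrainingArguments(
    output_dir="plwordnet-embedder",   # placeholder output path
    num_train_epochs=5,
    per_device_train_batch_size=10,
    gradient_accumulation_steps=4,
    learning_rate=5e-6,
    optim="adamw_torch_fused",
    weight_decay=0.01,
    fp16=True,
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    loss=CosineSimilarityLoss(model),
)
trainer.train()
```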
## Evaluation
- **Task**: semantic similarity on dev/test splits built from the relation-derived pairs.
- **Metric**: cosine-based correlation (Spearman/Pearson) where applicable, or discrimination between positive vs. negative pairs.



## How to use
Sentence-Transformers:
``` python
# Python
from sentence_transformers import SentenceTransformer, util
model = SentenceTransformer("radlab/semantic-euro-bert-encoder-v1", trust_remote_code=True)
texts = ["zamek", "drzwi", "wiadro", "horyzont", "ocean"]
emb = model.encode(texts, convert_to_tensor=True, normalize_embeddings=True)
scores = util.cos_sim(emb, emb)
print(scores) # higher = more semantically similar
```
Transformers (feature extraction):
``` python
# Python
from transformers import AutoModel, AutoTokenizer
import torch
import torch.nn.functional as F
name = "radlab/semantic-euro-bert-encoder-v1"
tok = AutoTokenizer.from_pretrained(name)
mdl = AutoModel.from_pretrained(name, trust_remote_code=True)
texts = ["student", "żak"]
tokens = tok(texts, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    out = mdl(**tokens)
emb = out.last_hidden_state.mean(dim=1)
emb = F.normalize(emb, p=2, dim=1)
sim = emb @ emb.T
print(sim)
```
|
DeathGodlike/Erotophobia-24B-v2.0_H8-4.0BPW_EXL3
|
DeathGodlike
| 2025-09-11T15:05:54Z | 0 | 0 |
safetensors
|
[
"safetensors",
"exl3",
"4-bit",
"text-generation",
"base_model:yvvki/Erotophobia-24B-v2.0",
"base_model:quantized:yvvki/Erotophobia-24B-v2.0",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2025-09-11T15:05:52Z |
---
license: apache-2.0
base_model:
- yvvki/Erotophobia-24B-v2.0
base_model_relation: quantized
pipeline_tag: text-generation
library_name: safetensors
tags:
- exl3
- 4-bit
---
## EXL3 quants: [ [H8-4.0BPW](https://huggingface.co/DeathGodlike/Erotophobia-24B-v2.0_H8-4.0BPW_EXL3/tree/H8-4.0BPW) ]
# Original model: [Erotophobia-24B-v2.0](https://huggingface.co/yvvki/Erotophobia-24B-v2.0) by [yvvki](https://huggingface.co/yvvki)
|
Amboara001/malagasy-to-betsim-t5-base-v2
|
Amboara001
| 2025-09-11T15:05:49Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-base",
"base_model:finetune:google-t5/t5-base",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | null | 2025-09-11T14:04:16Z |
---
library_name: transformers
license: apache-2.0
base_model: t5-base
tags:
- generated_from_trainer
model-index:
- name: malagasy-to-betsim-t5-base-v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# malagasy-to-betsim-t5-base-v2
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6292
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch fused) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
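The hyperparameters above map roughly onto `transformers` training arguments as follows (a sketch; the output directory is a placeholder):
```python
from transformers import Seq2SeqTrainingArguments

# Mirrors the hyperparameters listed above; paths are placeholders.
training_args = Seq2SeqTrainingArguments(
    output_dir="malagasy-to-betsim-t5-base-v2",
    learning_rate=3e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch_fused",
    lr_scheduler_type="linear",
    num_train_epochs=50,
    fp16=True,  # "Native AMP" mixed precision
)
```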
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-------:|:----:|:---------------:|
| 1.4493 | 3.3333 | 500 | 1.1330 |
| 1.0069 | 6.6667 | 1000 | 0.9316 |
| 0.8069 | 10.0 | 1500 | 0.8125 |
| 0.6822 | 13.3333 | 2000 | 0.7414 |
| 0.5971 | 16.6667 | 2500 | 0.7125 |
| 0.5318 | 20.0 | 3000 | 0.6861 |
| 0.4788 | 23.3333 | 3500 | 0.6627 |
| 0.442 | 26.6667 | 4000 | 0.6569 |
| 0.4048 | 30.0 | 4500 | 0.6473 |
| 0.3801 | 33.3333 | 5000 | 0.6444 |
| 0.3633 | 36.6667 | 5500 | 0.6372 |
| 0.3446 | 40.0 | 6000 | 0.6347 |
| 0.3301 | 43.3333 | 6500 | 0.6296 |
| 0.3274 | 46.6667 | 7000 | 0.6292 |
| 0.3192 | 50.0 | 7500 | 0.6292 |
### Framework versions
- Transformers 4.56.1
- Pytorch 2.8.0+cu126
- Datasets 4.0.0
- Tokenizers 0.22.0
|
arabellamorris/blockassist-bc-tricky_sneaky_locust_1757603086
|
arabellamorris
| 2025-09-11T15:05:42Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"tricky sneaky locust",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T15:05:36Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- tricky sneaky locust
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
zamilaoela/blockassist-bc-singing_leaping_vulture_1757603100
|
zamilaoela
| 2025-09-11T15:05:08Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"singing leaping vulture",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T15:05:05Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- singing leaping vulture
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
cuadron11/jina-reranker-v2-base-multilingual-contrastive-all-8-3ep
|
cuadron11
| 2025-09-11T15:04:58Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"cross-encoder",
"reranker",
"generated_from_trainer",
"dataset_size:6400",
"loss:CachedMultipleNegativesRankingLoss",
"text-ranking",
"custom_code",
"arxiv:1908.10084",
"base_model:jinaai/jina-reranker-v2-base-multilingual",
"base_model:finetune:jinaai/jina-reranker-v2-base-multilingual",
"model-index",
"region:us"
] |
text-ranking
| 2025-09-11T15:04:44Z |
---
tags:
- sentence-transformers
- cross-encoder
- reranker
- generated_from_trainer
- dataset_size:6400
- loss:CachedMultipleNegativesRankingLoss
base_model: jinaai/jina-reranker-v2-base-multilingual
pipeline_tag: text-ranking
library_name: sentence-transformers
metrics:
- map
- mrr@10
- ndcg@10
model-index:
- name: CrossEncoder based on jinaai/jina-reranker-v2-base-multilingual
results:
- task:
type: cross-encoder-reranking
name: Cross Encoder Reranking
dataset:
name: jina reranker v2 base multilingual contrastive all 8 3ep
type: jina-reranker-v2-base-multilingual-contrastive-all-8-3ep
metrics:
- type: map
value: 0.0144
name: Map
- type: mrr@10
value: 0.0144
name: Mrr@10
- type: ndcg@10
value: 0.0144
name: Ndcg@10
---
# CrossEncoder based on jinaai/jina-reranker-v2-base-multilingual
This is a [Cross Encoder](https://www.sbert.net/docs/cross_encoder/usage/usage.html) model finetuned from [jinaai/jina-reranker-v2-base-multilingual](https://huggingface.co/jinaai/jina-reranker-v2-base-multilingual) using the [sentence-transformers](https://www.SBERT.net) library. It computes scores for pairs of texts, which can be used for text reranking and semantic search.
## Model Details
### Model Description
- **Model Type:** Cross Encoder
- **Base model:** [jinaai/jina-reranker-v2-base-multilingual](https://huggingface.co/jinaai/jina-reranker-v2-base-multilingual) <!-- at revision 2f894e63642a95228da19cdd583cd2309983c867 -->
- **Maximum Sequence Length:** 1024 tokens
- **Number of Output Labels:** 1 label
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Documentation:** [Cross Encoder Documentation](https://www.sbert.net/docs/cross_encoder/usage/usage.html)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Cross Encoders on Hugging Face](https://huggingface.co/models?library=sentence-transformers&other=cross-encoder)
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import CrossEncoder
# Download from the 🤗 Hub
model = CrossEncoder("cuadron11/jina-reranker-v2-base-multilingual-contrastive-all-8-3ep")
# Get scores for pairs of texts
pairs = [
['Noiz aurkeztu zuen Espainiako Gobernuak Next Generation funtsak kudeatzeko zirriborroa?', '[TOPIC: Mozioa, Mikel Otero Gabirondo EH Bildu taldeko legebiltzarkideak aurkeztua, Europar Batasunaren Next Generation funtsak kudeatzeko urgentziaz bulego estrategiko bat osatzearen inguruan. Eztabaida eta behin betiko ebazpena]\n[LARREA LASO, (PV-ETP)]:\nIkus dezagun; hemen, gakoa da gauzak zein ordenatan egin diren. Eta harrigarria egiten zait zuek gugana etortzea esanez ordena litzatekeela hautatzea, elkarrizketa abiaraztea... Zer elkarrizketa? Zer elkarrizketa egin duzue? Orain hasi behar al duzue, Espainiako Gobernuak jada zirriborroa duenean? Zuek prestatu diozuen zirriborroa, zeuok prestarazi duzuena? Eta, benetan, Otero jaunaren hitzak neuretzen ditut. Ikus dezagun; hemen, gakoa gardentasuna da, lehia askea. Beste erkidego batzuetan, ekainean edo (Date: 15.10.2020)'],
['Zein dira talde sustatzailearen eginkizunak UPV/EHUko Familia eta Komunitateko Medikuntzako Ikasgelaren hitzarmenaren barruan?', 'Era berean, proposatu da hitzarmena sinatu duten alderdiei eskumena ematea batzordekideak izenda ditzaten, egokitzat jotzen denean. Betebehar bakarra izango da beste aldeari batzordearen eraketan eginiko aldaketen berri ematea; kasu horietan, ez da beharrezkoa izango beste hitzarmen bat sinatzea.\nLaugarrena. Talde sustatzailea.\nTalde sustatzaile bat eratzea erabaki da, hitzarmenaren xedea lortzeko beharrezkoak diren jarduerak proposatzeko eta kudeatzeko. Alderdi bakoitzak gehienez hiru pertsona izango ditu, hau da, UPV/EHUko hiru pertsona gehienez eta Osasun Saileko hiru pertsona gehienez.\nHauek dira talde sustatzailearen eginkizunak:\na) Akordio honetan aurreikusitako helburuak lortzeko garatu beharreko jardueren plana proposatzea. Planak prozesuaren eraginkortasunarekin edo efizientziarekin lotutako kudeaketa adierazleak izango ditu.\nb) Jarraipen Batzordeak onartutako jarduerak kudeatzen laguntzea.\nBosgarrena. Idazkaritza Teknikoa.\nIdazkaritza Teknikoaren eginkizunak honako hauek dira:\na) Akordio honen helburuak lortzeko talde sustatzaileak proposatutako jardueren plana eratzea.\nb) Familia eta Komunitateko Medikuntzako Ikasgela jarraipen batzordeak onartutako jarduerak egitea errazteko azpiegiturez eta ekipamenduez hornitzeko beharrezko kudeaketa tekniko eta ekonomiko guztiak gauzatzea.\nc) Jarraipen Batzordeak onartutako jardueretarako proposamenak abiarazi eta kudeatzea, akordio honen helburuak lortzeko.\nd) Familia eta Komunitateko Medikuntzako Ikasgelan garatutako jarduketak talde sustatzaileak proposatutako eta jarraipen batzordeak onartutako jardueren planean jasoak zehatz-mehatz deskribatzeko memoria eratzea, bai eta plan horretan ezarritako adierazleei buruzko informazioa ere.\ne) Memoria ekonomiko bat eratzea, Familia eta Komunitateko Medikuntzako Ikasgelan egindako jarduerak gauzatzeko sortu eta ordaindutako gastu guztiak, kontzeptuaren arabera banakatuta, zerrendatzen dituena.\nf) UPV/EHUko Familia eta Komunitateko Medikuntzako Ikasgelaren jarduerekin lotuta egindako gastuak justifikatzeko beharrezko dokumentazioa aurkeztea Osasun Saileko Plangintza, Antolamendu eta Ebaluazio Sanitarioko Zuzendaritzari.'],
['Zein dira Etxebizitza Legearen garapenean aurrera eramateko falta diren ekinbideak?', '[TOPIC: Mozioa, Maider Otamendi Tolosa EH Bildu taldeko legebiltzarkideak aurkeztua, Etxebizitza Legeari buruz. Eztabaida eta behin betiko ebazpena]\n[OTAMENDI TOLOSA, (EH Bildu)]:\neta fidantzen deposituarena. Baina legea onartu zenetik 10 hilabete pasa dira jada eta legearen garapena aurreratuago egon beharko litzateke. Beraz, badago zer egina. Lehenbailehen martxan jarri beharreko hainbat ekinbide badaude. Adibidez, etxebizitza-gaietarako organismo publikoa sortzea, jenderik gabeko etxebizitzen erregistroa sortu beharra dago, edo alokairurako parke publikoa handitu beharra dago, beharrezko bitarteko guztiak horretara bideratuz. Atzoko jardunaldian entzun ahal izan genizuen esaten alokairuko etxe bat eskuratu ahal izateko (Date: 21.04.2016)'],
['Zein da Gorka Urbizuk bakarkako bidean kaleratu duen lehen diskoaren izena?', 'Musika\n\nGorka Urbizuk bakarkako lehenbiziko diskoa plazaratu du\n\nEzustean, impasse tartea eten, eta bakarkako bideari lotu zaio Gorka Urbizu (Lekunberri, Nafarroa, 1977); noranzkoa garbi, baina emeki. Berri Txarrak taldeak 2019an ibilbidea bukatuta ere, doinu berrien bila aritu da musikaria urteotan, eta franko aurkitu ditu azkenerako. Horietako hamar jaso ditu bilduma batean, eta bakarkako lehenbiziko diskoa plazaratu du hala: Hasiera bat. Entzun hemen.\n\nZerrenda moduko bat osatzen dute Urbizuk argitaraturiko hamar kantuek: Maitasun bat, Teoria bat, Tren bat, Toki bat, Janela bat, Kolore bat, Lilura bat, Etxe bat, Sute bat eta Besterik ez. Pieza horietan guztietan, doinu aski biluziak bistaratu ditu musikariak. Soinu geruza gutxi metatu ditu abestietan; kontrara, «gordin» utzi ditu, oro har. Kantuak «erantzi, hustu eta kimatu», horien muinak agerian uzteko saiakera betean, diskoarekin batera argitaratutako oharrean idatzi dutenez. «Soiltasunaren ederra lortzen ahaleginduz, sortuko denaren beldurrik gabe».\n\nSoila izan da diskoa plazaratzeko manera ere. Kantuak ustekabez heldu dira jende gehien-gehienarentzat. Igande iluntzera arte, Urbizuk ez zuen deus iragarria. Orduantxe, atzerako kontu bat argitaratu zuen sare sozialetan, gauerdian zerbait ateratzekoa zela iradokita; besterik ez. Gainera, ez du argitaratu aurrerapen kanturik ere. Tren bat abestian, «ikusmenak itsututa gaude», dio musikariak gaurko gizarteaz. Eta, akaso horregatik, halaxe nahiago izan du diskoa eman. Hala eta guztiz, begiei eskainitako pieza bat ere kaleratu du: bideoklip bat argitaratu du. Teoria bat kantuarentzat eginikoa da. Alexander Cabeza Trigg zinemagileak egin du.\n\nhttps://www.youtube.com/watch?v=32OnN08lH5g'],
['Zer gertatu zen Aretako 2 urteko gelarekin hezkuntza-komunitateak protesta egin ondoren?', '[TOPIC: Galdera, Isabel González Rodríguez Elkarrekin Podemos-IU taldeko legebiltzarkideak Hezkuntzako sailburuari egina, Aretako 2 urteko gela ixteari buruz]\n[GONZÁLEZ RODRÍGUEZ, (EP-IU)]:\nez dagoelako jolasik; eta oso argi hitz egiten dutelako. Sailak mehatxu egiten du, hezkuntza-komunitateak erantzun egiten du, sailak atzera egiten du, eta hori da gertaeren segida. Baina zer gertatuko zatekeen komunitateak erantzun izan ez balu? Bada, argi eta garbi, gela itxi egingo zenuketen. Ziur horrela izango litzatekeela. Eta hori da gertatutakoaren sekuentzia. Hezkuntza Sailak erabaki bat hartzen du, komunitateak protesta egiten du, Hezkuntza Sailak atzera egiten du. Eta zer gertatuko (Date: 31.03.2023)'],
]
scores = model.predict(pairs)
print(scores.shape)
# (5,)
# Or rank different texts based on similarity to a single text
ranks = model.rank(
'Noiz aurkeztu zuen Espainiako Gobernuak Next Generation funtsak kudeatzeko zirriborroa?',
[
'[TOPIC: Mozioa, Mikel Otero Gabirondo EH Bildu taldeko legebiltzarkideak aurkeztua, Europar Batasunaren Next Generation funtsak kudeatzeko urgentziaz bulego estrategiko bat osatzearen inguruan. Eztabaida eta behin betiko ebazpena]\n[LARREA LASO, (PV-ETP)]:\nIkus dezagun; hemen, gakoa da gauzak zein ordenatan egin diren. Eta harrigarria egiten zait zuek gugana etortzea esanez ordena litzatekeela hautatzea, elkarrizketa abiaraztea... Zer elkarrizketa? Zer elkarrizketa egin duzue? Orain hasi behar al duzue, Espainiako Gobernuak jada zirriborroa duenean? Zuek prestatu diozuen zirriborroa, zeuok prestarazi duzuena? Eta, benetan, Otero jaunaren hitzak neuretzen ditut. Ikus dezagun; hemen, gakoa gardentasuna da, lehia askea. Beste erkidego batzuetan, ekainean edo (Date: 15.10.2020)',
'Era berean, proposatu da hitzarmena sinatu duten alderdiei eskumena ematea batzordekideak izenda ditzaten, egokitzat jotzen denean. Betebehar bakarra izango da beste aldeari batzordearen eraketan eginiko aldaketen berri ematea; kasu horietan, ez da beharrezkoa izango beste hitzarmen bat sinatzea.\nLaugarrena. Talde sustatzailea.\nTalde sustatzaile bat eratzea erabaki da, hitzarmenaren xedea lortzeko beharrezkoak diren jarduerak proposatzeko eta kudeatzeko. Alderdi bakoitzak gehienez hiru pertsona izango ditu, hau da, UPV/EHUko hiru pertsona gehienez eta Osasun Saileko hiru pertsona gehienez.\nHauek dira talde sustatzailearen eginkizunak:\na) Akordio honetan aurreikusitako helburuak lortzeko garatu beharreko jardueren plana proposatzea. Planak prozesuaren eraginkortasunarekin edo efizientziarekin lotutako kudeaketa adierazleak izango ditu.\nb) Jarraipen Batzordeak onartutako jarduerak kudeatzen laguntzea.\nBosgarrena. Idazkaritza Teknikoa.\nIdazkaritza Teknikoaren eginkizunak honako hauek dira:\na) Akordio honen helburuak lortzeko talde sustatzaileak proposatutako jardueren plana eratzea.\nb) Familia eta Komunitateko Medikuntzako Ikasgela jarraipen batzordeak onartutako jarduerak egitea errazteko azpiegiturez eta ekipamenduez hornitzeko beharrezko kudeaketa tekniko eta ekonomiko guztiak gauzatzea.\nc) Jarraipen Batzordeak onartutako jardueretarako proposamenak abiarazi eta kudeatzea, akordio honen helburuak lortzeko.\nd) Familia eta Komunitateko Medikuntzako Ikasgelan garatutako jarduketak talde sustatzaileak proposatutako eta jarraipen batzordeak onartutako jardueren planean jasoak zehatz-mehatz deskribatzeko memoria eratzea, bai eta plan horretan ezarritako adierazleei buruzko informazioa ere.\ne) Memoria ekonomiko bat eratzea, Familia eta Komunitateko Medikuntzako Ikasgelan egindako jarduerak gauzatzeko sortu eta ordaindutako gastu guztiak, kontzeptuaren arabera banakatuta, zerrendatzen dituena.\nf) UPV/EHUko Familia eta Komunitateko Medikuntzako Ikasgelaren jarduerekin lotuta egindako gastuak justifikatzeko beharrezko dokumentazioa aurkeztea Osasun Saileko Plangintza, Antolamendu eta Ebaluazio Sanitarioko Zuzendaritzari.',
'[TOPIC: Mozioa, Maider Otamendi Tolosa EH Bildu taldeko legebiltzarkideak aurkeztua, Etxebizitza Legeari buruz. Eztabaida eta behin betiko ebazpena]\n[OTAMENDI TOLOSA, (EH Bildu)]:\neta fidantzen deposituarena. Baina legea onartu zenetik 10 hilabete pasa dira jada eta legearen garapena aurreratuago egon beharko litzateke. Beraz, badago zer egina. Lehenbailehen martxan jarri beharreko hainbat ekinbide badaude. Adibidez, etxebizitza-gaietarako organismo publikoa sortzea, jenderik gabeko etxebizitzen erregistroa sortu beharra dago, edo alokairurako parke publikoa handitu beharra dago, beharrezko bitarteko guztiak horretara bideratuz. Atzoko jardunaldian entzun ahal izan genizuen esaten alokairuko etxe bat eskuratu ahal izateko (Date: 21.04.2016)',
'Musika\n\nGorka Urbizuk bakarkako lehenbiziko diskoa plazaratu du\n\nEzustean, impasse tartea eten, eta bakarkako bideari lotu zaio Gorka Urbizu (Lekunberri, Nafarroa, 1977); noranzkoa garbi, baina emeki. Berri Txarrak taldeak 2019an ibilbidea bukatuta ere, doinu berrien bila aritu da musikaria urteotan, eta franko aurkitu ditu azkenerako. Horietako hamar jaso ditu bilduma batean, eta bakarkako lehenbiziko diskoa plazaratu du hala: Hasiera bat. Entzun hemen.\n\nZerrenda moduko bat osatzen dute Urbizuk argitaraturiko hamar kantuek: Maitasun bat, Teoria bat, Tren bat, Toki bat, Janela bat, Kolore bat, Lilura bat, Etxe bat, Sute bat eta Besterik ez. Pieza horietan guztietan, doinu aski biluziak bistaratu ditu musikariak. Soinu geruza gutxi metatu ditu abestietan; kontrara, «gordin» utzi ditu, oro har. Kantuak «erantzi, hustu eta kimatu», horien muinak agerian uzteko saiakera betean, diskoarekin batera argitaratutako oharrean idatzi dutenez. «Soiltasunaren ederra lortzen ahaleginduz, sortuko denaren beldurrik gabe».\n\nSoila izan da diskoa plazaratzeko manera ere. Kantuak ustekabez heldu dira jende gehien-gehienarentzat. Igande iluntzera arte, Urbizuk ez zuen deus iragarria. Orduantxe, atzerako kontu bat argitaratu zuen sare sozialetan, gauerdian zerbait ateratzekoa zela iradokita; besterik ez. Gainera, ez du argitaratu aurrerapen kanturik ere. Tren bat abestian, «ikusmenak itsututa gaude», dio musikariak gaurko gizarteaz. Eta, akaso horregatik, halaxe nahiago izan du diskoa eman. Hala eta guztiz, begiei eskainitako pieza bat ere kaleratu du: bideoklip bat argitaratu du. Teoria bat kantuarentzat eginikoa da. Alexander Cabeza Trigg zinemagileak egin du.\n\nhttps://www.youtube.com/watch?v=32OnN08lH5g',
'[TOPIC: Galdera, Isabel González Rodríguez Elkarrekin Podemos-IU taldeko legebiltzarkideak Hezkuntzako sailburuari egina, Aretako 2 urteko gela ixteari buruz]\n[GONZÁLEZ RODRÍGUEZ, (EP-IU)]:\nez dagoelako jolasik; eta oso argi hitz egiten dutelako. Sailak mehatxu egiten du, hezkuntza-komunitateak erantzun egiten du, sailak atzera egiten du, eta hori da gertaeren segida. Baina zer gertatuko zatekeen komunitateak erantzun izan ez balu? Bada, argi eta garbi, gela itxi egingo zenuketen. Ziur horrela izango litzatekeela. Eta hori da gertatutakoaren sekuentzia. Hezkuntza Sailak erabaki bat hartzen du, komunitateak protesta egiten du, Hezkuntza Sailak atzera egiten du. Eta zer gertatuko (Date: 31.03.2023)',
]
)
# [{'corpus_id': ..., 'score': ...}, {'corpus_id': ..., 'score': ...}, ...]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Cross Encoder Reranking
* Dataset: `jina-reranker-v2-base-multilingual-contrastive-all-8-3ep`
* Evaluated with [<code>CrossEncoderRerankingEvaluator</code>](https://sbert.net/docs/package_reference/cross_encoder/evaluation.html#sentence_transformers.cross_encoder.evaluation.CrossEncoderRerankingEvaluator) with these parameters:
```json
{
"at_k": 10,
"always_rerank_positives": false
}
```
| Metric | Value |
|:------------|:---------------------|
| map | 0.0144 (+0.0132) |
| mrr@10 | 0.0144 (+0.0135) |
| **ndcg@10** | **0.0144 (+0.0130)** |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 6,400 training samples
* Columns: <code>query</code> and <code>positive</code>
* Approximate statistics based on the first 1000 samples:
| | query | positive |
|:--------|:------------------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 19 characters</li><li>mean: 93.98 characters</li><li>max: 255 characters</li></ul> | <ul><li>min: 373 characters</li><li>mean: 1213.64 characters</li><li>max: 2221 characters</li></ul> |
* Samples:
| query | positive |
|:---------------------------------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>Zenbat denborarako izendatzen dira Burutzako lanpostu funtzionalak Euskadiko antolamendu Sanitarioaren 8/1997 Legearen arabera?</code> | <code>Euskadiko antolamendu Sanitarioaren 8/1997 Legearen 28 ataleko 3. arauaren 8. puntuan xedatutakoaren arabera, Burutzako lanpostu funtzionalek lau urteko eperako izendapen tenporala eduki dezakete; lau urteko izendapen hori luza daiteke arau honetan ezarritakoaren arabera.<br>Ebazpen honen aurkako errekurtsoak.<br>Ebazpen honen aurka, gora jotzeko errekurtsoa aurkeztu ahal izango zaio Osakidetza Euskal osasun zerbitzuko zuzendari nagusiari, ebazpen hau dagokien Aldizkari Ofizialetan argitaratzen den azken egunaren biharamunetik hilabeteko epean.<br>Barakaldo, 2016ko ekainaren 7a.<br>Ezkerraldea-Enkarterri-Cruces ESIko zuzendari gerentea,<br>SANTIAGO RABANAL RETOLAZA.<br>ERANSKINA<br>MERITUEN BAREMOA (GEHIENEZ 66 PUNTU)<br>Merituen balorazioak hurrengo faseak edukiko ditu:<br>Proiektua eta bere defentsa (gehienez 30 puntu).<br>Fase honen oinarria da balorazio batzordeko kalifikatzailearen aurrean dagokion Atalaren antolaketa eta funtzionamenduari buruzko jendaurreko azalpena, eta izangaiarekin elkarrizketa egitea.<br>Fa...</code> |
| <code>Non gertatu da Iruñerriko 27 urteko gizonezko mendizalearen heriotza?</code> | <code>Iruñerriko mendizale bat hil da, Aspe mendian amilduta<br><br>Iruñerriko 27 urteko gizonezko bat hil da gaur goizean, Aspe mendian (Aragoi, Espainia). Ezbeharra 11:00 aldera gertatu da. Mendizale talde bat zihoan mendiko ipar aldeko bide batean gora, baina haietako bat amildu egin da, izotzean irrist eginda. Larrialdi zerbitzuek adierazi dutenez, mendizaleek material egokia zeramaten izotzean eskalatzeko. Guardia Zibilaren mendiko erreskate taldea joan da eroritako mendizalea zegoen tokiraino, baina hilotz zen ordurako.</code> |
| <code>Zein dira sindikatuek lan istripuak murrizteko egindako eskaerak?</code> | <code>CCOO sindikatuak irmo gaitzetsi du lan istripua. «Lan istripu tasa handienetako lurraldea da Nafarroa, eta zifra horiek murrizteak lehentasun izan behar du Nafarroako Gobernuarentzat eta inplikatutako eragileentzat». Patronalari dei egin dio Lan Arriskuen Prebentziorako legea «zorrotz betetzera», eta horretarako «behar diren baliabide guztiak» jarri beharko liratekeela gaineratu du.<br><br>Sindikatu horren irudiko, lantokira joateak ez lioke inori eragin behar inolako arriskurik. «Lan istripurik ez izateko erantzukizuna enpresen gain dago erabat, eta administrazioak funtsezko rola jokatzen du araudia betetzen dela zaintzeko orduan», esan du.<br><br>Antzera eta gogor mintzatu da ELA. «Egoera horren erantzule nagusiak patronala eta erakunde publikoak dira». Sindikatuaren arabera, enpresek, sistematikoki, ez dute betetzen legedia, eta Nafarroako Gobernuak uko egiten dio «beharrezko kontrol neurriak» ezartzeari. Hala, ELAk eskatu du Nafarroako Osasun Publikoaren Lan Osasunaren Institututuko ikuskaritz...</code> |
* Loss: [<code>CachedMultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/cross_encoder/losses.html#cachedmultiplenegativesrankingloss) with these parameters:
```json
{
"scale": 10.0,
"num_negatives": null,
"activation_fn": "torch.nn.modules.activation.Sigmoid",
"mini_batch_size": 16
}
```
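As a rough sketch, the loss configuration listed above corresponds to something like the following (illustrative; assumes the cross-encoder loss API of recent `sentence-transformers` releases):
```python
import torch
from sentence_transformers.cross_encoder import CrossEncoder
from sentence_transformers.cross_encoder.losses import CachedMultipleNegativesRankingLoss

# Illustrative reconstruction of the loss settings listed above.
model = CrossEncoder("jinaai/jina-reranker-v2-base-multilingual", trust_remote_code=True)
loss = CachedMultipleNegativesRankingLoss(
    model=model,
    scale=10.0,
    activation_fn=torch.nn.Sigmoid(),
    mini_batch_size=16,
)
```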
### Evaluation Dataset
#### Unnamed Dataset
* Size: 1,600 evaluation samples
* Columns: <code>query</code> and <code>positive</code>
* Approximate statistics based on the first 1000 samples:
| | query | positive |
|:--------|:------------------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 26 characters</li><li>mean: 93.84 characters</li><li>max: 271 characters</li></ul> | <ul><li>min: 361 characters</li><li>mean: 1186.32 characters</li><li>max: 2297 characters</li></ul> |
* Samples:
| query | positive |
|:--------------------------------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>Noiz aurkeztu zuen Espainiako Gobernuak Next Generation funtsak kudeatzeko zirriborroa?</code> | <code>[TOPIC: Mozioa, Mikel Otero Gabirondo EH Bildu taldeko legebiltzarkideak aurkeztua, Europar Batasunaren Next Generation funtsak kudeatzeko urgentziaz bulego estrategiko bat osatzearen inguruan. Eztabaida eta behin betiko ebazpena]<br>[LARREA LASO, (PV-ETP)]:<br>Ikus dezagun; hemen, gakoa da gauzak zein ordenatan egin diren. Eta harrigarria egiten zait zuek gugana etortzea esanez ordena litzatekeela hautatzea, elkarrizketa abiaraztea... Zer elkarrizketa? Zer elkarrizketa egin duzue? Orain hasi behar al duzue, Espainiako Gobernuak jada zirriborroa duenean? Zuek prestatu diozuen zirriborroa, zeuok prestarazi duzuena? Eta, benetan, Otero jaunaren hitzak neuretzen ditut. Ikus dezagun; hemen, gakoa gardentasuna da, lehia askea. Beste erkidego batzuetan, ekainean edo (Date: 15.10.2020)</code> |
| <code>Zein dira talde sustatzailearen eginkizunak UPV/EHUko Familia eta Komunitateko Medikuntzako Ikasgelaren hitzarmenaren barruan?</code> | <code>Era berean, proposatu da hitzarmena sinatu duten alderdiei eskumena ematea batzordekideak izenda ditzaten, egokitzat jotzen denean. Betebehar bakarra izango da beste aldeari batzordearen eraketan eginiko aldaketen berri ematea; kasu horietan, ez da beharrezkoa izango beste hitzarmen bat sinatzea.<br>Laugarrena. Talde sustatzailea.<br>Talde sustatzaile bat eratzea erabaki da, hitzarmenaren xedea lortzeko beharrezkoak diren jarduerak proposatzeko eta kudeatzeko. Alderdi bakoitzak gehienez hiru pertsona izango ditu, hau da, UPV/EHUko hiru pertsona gehienez eta Osasun Saileko hiru pertsona gehienez.<br>Hauek dira talde sustatzailearen eginkizunak:<br>a) Akordio honetan aurreikusitako helburuak lortzeko garatu beharreko jardueren plana proposatzea. Planak prozesuaren eraginkortasunarekin edo efizientziarekin lotutako kudeaketa adierazleak izango ditu.<br>b) Jarraipen Batzordeak onartutako jarduerak kudeatzen laguntzea.<br>Bosgarrena. Idazkaritza Teknikoa.<br>Idazkaritza Teknikoaren eginkizunak honako hauek dira...</code> |
| <code>Zein dira Etxebizitza Legearen garapenean aurrera eramateko falta diren ekinbideak?</code> | <code>[TOPIC: Mozioa, Maider Otamendi Tolosa EH Bildu taldeko legebiltzarkideak aurkeztua, Etxebizitza Legeari buruz. Eztabaida eta behin betiko ebazpena]<br>[OTAMENDI TOLOSA, (EH Bildu)]:<br>eta fidantzen deposituarena. Baina legea onartu zenetik 10 hilabete pasa dira jada eta legearen garapena aurreratuago egon beharko litzateke. Beraz, badago zer egina. Lehenbailehen martxan jarri beharreko hainbat ekinbide badaude. Adibidez, etxebizitza-gaietarako organismo publikoa sortzea, jenderik gabeko etxebizitzen erregistroa sortu beharra dago, edo alokairurako parke publikoa handitu beharra dago, beharrezko bitarteko guztiak horretara bideratuz. Atzoko jardunaldian entzun ahal izan genizuen esaten alokairuko etxe bat eskuratu ahal izateko (Date: 21.04.2016)</code> |
* Loss: [<code>CachedMultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/cross_encoder/losses.html#cachedmultiplenegativesrankingloss) with these parameters:
```json
{
"scale": 10.0,
"num_negatives": null,
"activation_fn": "torch.nn.modules.activation.Sigmoid",
"mini_batch_size": 16
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `learning_rate`: 2e-05
- `warmup_ratio`: 0.1
- `load_best_model_at_end`: True
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 3
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `parallelism_config`: None
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `hub_revision`: None
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `liger_kernel_config`: None
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
- `router_mapping`: {}
- `learning_rate_mapping`: {}
</details>
### Training Logs
| Epoch | Step | Training Loss | Validation Loss | jina-reranker-v2-base-multilingual-contrastive-all-8-3ep_ndcg@10 |
|:-------:|:-------:|:-------------:|:---------------:|:----------------------------------------------------------------:|
| **0.5** | **200** | **0.0482** | **0.0209** | **0.0144 (+0.0130)** |
| 1.0 | 400 | 0.0208 | 0.0170 | 0.0144 (+0.0130) |
| 1.5 | 600 | 0.0186 | 0.0164 | 0.0144 (+0.0130) |
| 2.0 | 800 | 0.0199 | 0.0158 | 0.0144 (+0.0130) |
| 2.5 | 1000 | 0.015 | 0.0159 | 0.0144 (+0.0130) |
| 3.0 | 1200 | 0.0205 | 0.0158 | 0.0144 (+0.0130) |
* The bold row denotes the saved checkpoint.
### Framework Versions
- Python: 3.9.7
- Sentence Transformers: 5.0.0
- Transformers: 4.56.0
- PyTorch: 2.7.1+cu126
- Accelerate: 1.5.2
- Datasets: 4.0.0
- Tokenizers: 0.22.0
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
abadkibriya3524/blockassist-bc-timid_padded_ape_1757603067
|
abadkibriya3524
| 2025-09-11T15:04:41Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"timid padded ape",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T15:04:36Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- timid padded ape
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
harmonyblevinsm0/blockassist-bc-silent_miniature_monkey_1757602975
|
harmonyblevinsm0
| 2025-09-11T15:04:29Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"silent miniature monkey",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T15:03:56Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- silent miniature monkey
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
rbelanec/train_cola_123_1757596071
|
rbelanec
| 2025-09-11T15:03:25Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"llama-factory",
"prefix-tuning",
"generated_from_trainer",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct",
"license:llama3",
"region:us"
] | null | 2025-09-11T13:12:56Z |
---
library_name: peft
license: llama3
base_model: meta-llama/Meta-Llama-3-8B-Instruct
tags:
- llama-factory
- prefix-tuning
- generated_from_trainer
model-index:
- name: train_cola_123_1757596071
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# train_cola_123_1757596071
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the cola dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9521
- Num Input Tokens Seen: 6929680
## Model description
More information needed
## Intended uses & limitations
More information needed
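As a minimal loading sketch (assuming this repository holds a standard PEFT prefix-tuning adapter for the base model above; gated access to Meta-Llama-3 still applies):
```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Loads meta-llama/Meta-Llama-3-8B-Instruct plus this prefix-tuning adapter.
model = AutoPeftModelForCausalLM.from_pretrained(
    "rbelanec/train_cola_123_1757596071", device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")
```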
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 123
- optimizer: AdamW (torch) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Input Tokens Seen |
|:-------------:|:-----:|:-----:|:---------------:|:-----------------:|
| 0.1268 | 1.0 | 3848 | 0.2820 | 346872 |
| 0.3132 | 2.0 | 7696 | 0.2417 | 693752 |
| 0.2179 | 3.0 | 11544 | 0.2405 | 1040128 |
| 0.2649 | 4.0 | 15392 | 0.2411 | 1386696 |
| 0.2187 | 5.0 | 19240 | 0.2434 | 1733072 |
| 0.1872 | 6.0 | 23088 | 0.2394 | 2079640 |
| 0.2849 | 7.0 | 26936 | 0.2419 | 2425920 |
| 0.1858 | 8.0 | 30784 | 0.2366 | 2772144 |
| 0.2726 | 9.0 | 34632 | 0.2393 | 3118472 |
| 0.2241 | 10.0 | 38480 | 0.2438 | 3465288 |
| 0.2284 | 11.0 | 42328 | 0.2862 | 3811696 |
| 0.0849 | 12.0 | 46176 | 0.2743 | 4158168 |
| 0.1104 | 13.0 | 50024 | 0.3264 | 4504416 |
| 0.1854 | 14.0 | 53872 | 0.3800 | 4850888 |
| 0.1511 | 15.0 | 57720 | 0.4422 | 5197456 |
| 0.0483 | 16.0 | 61568 | 0.5154 | 5543848 |
| 0.1082 | 17.0 | 65416 | 0.6811 | 5890320 |
| 0.2789 | 18.0 | 69264 | 0.7981 | 6237200 |
| 0.3151 | 19.0 | 73112 | 0.9202 | 6583408 |
| 0.0006 | 20.0 | 76960 | 0.9521 | 6929680 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.8.0+cu128
- Datasets 3.6.0
- Tokenizers 0.21.1
|
bytedance-research/HuMo
|
bytedance-research
| 2025-09-11T15:03:16Z | 0 | 17 | null |
[
"image-to-video",
"arxiv:2509.08519",
"license:apache-2.0",
"region:us"
] |
image-to-video
| 2025-09-10T07:41:30Z |
---
license: apache-2.0
pipeline_tag: image-to-video
---
# HuMo: Human-Centric Video Generation via Collaborative Multi-Modal Conditioning
<div align="center">
[](https://arxiv.org/abs/2509.08519)
[](https://phantom-video.github.io/HuMo/)
<a href="https://huggingface.co/bytedance-research/HuMo"><img src="https://img.shields.io/static/v1?label=%F0%9F%A4%97%20Hugging%20Face&message=Model&color=orange"></a>
</div>
> [**HuMo: Human-Centric Video Generation via Collaborative Multi-Modal Conditioning**](https://arxiv.org/abs/2509.08519)<br>
> [Liyang Chen](https://scholar.google.com.hk/citations?user=jk6jWXgAAAAJ&hl)<sup> * </sup>, [Tianxiang Ma](https://tianxiangma.github.io/)<sup> * </sup>, [Jiawei Liu](https://scholar.google.com/citations?user=X21Fz-EAAAAJ), [Bingchuan Li](https://scholar.google.com/citations?user=ac5Se6QAAAAJ)<sup>†</sup>, [Zhuowei Chen](https://scholar.google.com/citations?user=ow1jGJkAAAAJ), [Lijie Liu](https://liulj13.github.io/), [Xu He](https://scholar.google.com.hk/citations?user=KMrFk2MAAAAJ&hl), [Gen Li](https://scholar.google.com/citations?user=wqA7EIoAAAAJ), [Qian He](https://scholar.google.com/citations?user=9rWWCgUAAAAJ), [Zhiyong Wu](https://scholar.google.com.hk/citations?hl=zh-CN&user=7Xl6KdkAAAAJ&)<sup> § </sup>
> <br><sup> * </sup>Equal contribution,<sup> † </sup>Project lead, <sup> § </sup>Corresponding author
> <br>Tsinghua University | Intelligent Creation Team, ByteDance<br>
<p align="center">
<img src="assets/teaser.png" width=95%>
</p>
## ✨ Key Features
HuMo is a unified, human-centric video generation framework designed to produce high-quality, fine-grained, and controllable human videos from multimodal inputs—including text, images, and audio. It supports strong text-prompt following, consistent subject preservation, and synchronized audio-driven motion.
> - **VideoGen from Text-Image** - Customize character appearance, clothing, makeup, props, and scenes using text prompts combined with reference images.
> - **VideoGen from Text-Audio** - Generate audio-synchronized videos solely from text and audio inputs, removing the need for image references and enabling greater creative freedom.
> - **VideoGen from Text-Image-Audio** - Achieve the highest level of customization and control by combining text, image, and audio guidance.
## 📑 Todo List
- [x] Release Paper
- [x] Checkpoint of HuMo-17B
- [x] Inference Codes
- [ ] Text-Image Input
- [x] Text-Audio Input
- [x] Text-Image-Audio Input
- [x] Multi-GPU Inference
- [ ] Release Prompts to Generate Demo of ***Faceless Thrones***
- [ ] HuMo-1.7B
## ⚡️ Quickstart
### Installation
```
conda create -n humo python=3.11
conda activate humo
pip install torch==2.5.1 torchvision==0.20.1 torchaudio==2.5.1 --index-url https://download.pytorch.org/whl/cu124
pip install flash_attn==2.6.3
pip install -r requirements.txt
conda install -c conda-forge ffmpeg
```
### Model Preparation
| Models | Download Link | Notes |
|--------------|---------------------------------------------------------------------------------------------------------------------------------------------------------|-------------------------------|
| HuMo-17B | 🤗 [Huggingface](https://huggingface.co/bytedance-research/HuMo/tree/main) | Released before September 15
| HuMo-1.7B | 🤗 [Huggingface](https://huggingface.co/bytedance-research/HuMo/tree/main) | To be released soon
| Wan-2.1 | 🤗 [Huggingface](https://huggingface.co/Wan-AI/Wan2.1-T2V-1.3B) | VAE & Text encoder
| Whisper-large-v3 | 🤗 [Huggingface](https://huggingface.co/openai/whisper-large-v3) | Audio encoder
| Audio separator | 🤗 [Huggingface](https://huggingface.co/huangjackson/Kim_Vocal_2) | Remove background noise (optional)
Download models using huggingface-cli:
``` sh
huggingface-cli download Wan-AI/Wan2.1-T2V-1.3B --local-dir ./weights/Wan2.1-T2V-1.3B
huggingface-cli download bytedance-research/HuMo --local-dir ./weights/HuMo
huggingface-cli download openai/whisper-large-v3 --local-dir ./weights/whisper-large-v3
huggingface-cli download huangjackson/Kim_Vocal_2 --local-dir ./weights/audio_separator
```
### Run Multimodal-Condition-to-Video Generation
Our model supports both 480P and 720P resolutions; 720P inference yields noticeably better quality.
> Some tips
> - Please prepare your text, reference images and audio as described in [test_case.json](./examples/test_case.json).
> - We support Multi-GPU inference using FSDP + Sequence Parallel.
> - The model is trained on 97-frame videos at 25 FPS. Generating videos longer than 97 frames may degrade performance. We will provide a new checkpoint for longer generation.
#### Configure HuMo
HuMo’s behavior and output can be customized by modifying the [generate.yaml](humo/configs/inference/generate.yaml) configuration file.
The following parameters control generation length, video resolution, and how text, image, and audio inputs are balanced:
```yaml
generation:
frames: <int> # Number of frames for the generated video.
scale_a: <float> # Strength of audio guidance. Higher = better audio-motion sync.
scale_t: <float> # Strength of text guidance. Higher = better adherence to text prompts.
mode: "TA" # Input mode: "TA" for text+audio; "TIA" for text+image+audio.
height: 720 # Video height (e.g., 720 or 480).
width: 1280 # Video width (e.g., 1280 or 832).
diffusion:
timesteps:
sampling:
steps: 50 # Number of denoising steps. Lower (30–40) = faster generation.
```
#### 1. Text-Audio Input
``` sh
bash infer_ta.sh
```
#### 2. Text-Image-Audio Input
``` sh
bash infer_tia.sh
```
## Acknowledgements
Our work builds upon and is greatly inspired by several outstanding open-source projects, including [Phantom](https://github.com/Phantom-video/Phantom), [SeedVR](https://github.com/IceClear/SeedVR?tab=readme-ov-file), [MEMO](https://github.com/memoavatar/memo), [Hallo3](https://github.com/fudan-generative-vision/hallo3), [OpenHumanVid](https://github.com/fudan-generative-vision/OpenHumanVid), and [Whisper](https://github.com/openai/whisper). We sincerely thank the authors and contributors of these projects for generously sharing their excellent codes and ideas.
## ⭐ Citation
If you find HuMo helpful, please ⭐ the repo.
If you find this project useful for your research, please consider citing our [paper](https://arxiv.org/abs/2509.08519).
### BibTeX
```bibtex
@misc{chen2025humo,
title={HuMo: Human-Centric Video Generation via Collaborative Multi-Modal Conditioning},
author={Liyang Chen and Tianxiang Ma and Jiawei Liu and Bingchuan Li and Zhuowei Chen and Lijie Liu and Xu He and Gen Li and Qian He and Zhiyong Wu},
year={2025},
eprint={2509.08519},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2509.08519},
}
```
## 📧 Contact
If you have any comments or questions regarding this open-source project, please open a new issue or contact [Liyang Chen](lyangchen@outlook.com) and [Tianxiang Ma](https://tianxiangma.github.io/).
|
raileshikder7241/blockassist-bc-slender_amphibious_cheetah_1757602975
|
raileshikder7241
| 2025-09-11T15:03:16Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"slender amphibious cheetah",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T15:03:12Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- slender amphibious cheetah
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
cwayneconnor/blockassist-bc-mute_loud_lynx_1757602826
|
cwayneconnor
| 2025-09-11T15:02:58Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"mute loud lynx",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T15:01:44Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- mute loud lynx
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
omerbkts/blockassist-bc-insectivorous_bold_lion_1757602887
|
omerbkts
| 2025-09-11T15:02:35Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"insectivorous bold lion",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T15:01:55Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- insectivorous bold lion
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
rbelanec/train_cola_42_1757596047
|
rbelanec
| 2025-09-11T15:01:36Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"llama-factory",
"prefix-tuning",
"generated_from_trainer",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct",
"license:llama3",
"region:us"
] | null | 2025-09-11T13:08:17Z |
---
library_name: peft
license: llama3
base_model: meta-llama/Meta-Llama-3-8B-Instruct
tags:
- llama-factory
- prefix-tuning
- generated_from_trainer
model-index:
- name: train_cola_42_1757596047
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# train_cola_42_1757596047
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the cola dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2412
- Num Input Tokens Seen: 6927000
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Input Tokens Seen |
|:-------------:|:-----:|:-----:|:---------------:|:-----------------:|
| 0.2546 | 1.0 | 3848 | 0.2480 | 346040 |
| 0.1205 | 2.0 | 7696 | 0.2484 | 692368 |
| 0.2615 | 3.0 | 11544 | 0.2438 | 1039080 |
| 0.2572 | 4.0 | 15392 | 0.2436 | 1385192 |
| 0.2552 | 5.0 | 19240 | 0.2432 | 1731824 |
| 0.3358 | 6.0 | 23088 | 0.2496 | 2078408 |
| 0.2235 | 7.0 | 26936 | 0.2438 | 2424592 |
| 0.2903 | 8.0 | 30784 | 0.2476 | 2770768 |
| 0.2715 | 9.0 | 34632 | 0.2459 | 3117120 |
| 0.2141 | 10.0 | 38480 | 0.2748 | 3463336 |
| 0.2359 | 11.0 | 42328 | 0.2426 | 3809536 |
| 0.316 | 12.0 | 46176 | 0.2439 | 4155688 |
| 0.3199 | 13.0 | 50024 | 0.2455 | 4502336 |
| 0.2547 | 14.0 | 53872 | 0.2459 | 4848864 |
| 0.2146 | 15.0 | 57720 | 0.2422 | 5194640 |
| 0.3529 | 16.0 | 61568 | 0.2419 | 5541160 |
| 0.2237 | 17.0 | 65416 | 0.2437 | 5887864 |
| 0.3058 | 18.0 | 69264 | 0.2429 | 6234216 |
| 0.2963 | 19.0 | 73112 | 0.2419 | 6580528 |
| 0.3099 | 20.0 | 76960 | 0.2412 | 6927000 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.8.0+cu128
- Datasets 3.6.0
- Tokenizers 0.21.1
|
Miracle-man/blockassist
|
Miracle-man
| 2025-09-11T15:01:30Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"singing lithe koala",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T17:52:26Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- singing lithe koala
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
jazmynikrr/blockassist-bc-dormant_hulking_eagle_1757602851
|
jazmynikrr
| 2025-09-11T15:01:23Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"dormant hulking eagle",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T15:01:18Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- dormant hulking eagle
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
schnecklothheath/blockassist-bc-soaring_leaping_snake_1757602864
|
schnecklothheath
| 2025-09-11T15:01:18Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"soaring leaping snake",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T15:01:14Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- soaring leaping snake
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
khazarai/Quran-R1
|
khazarai
| 2025-09-11T15:00:32Z | 0 | 1 |
peft
|
[
"peft",
"safetensors",
"base_model:adapter:unsloth/Qwen3-0.6B",
"lora",
"sft",
"transformers",
"trl",
"unsloth",
"text-generation",
"conversational",
"en",
"dataset:musaoc/Quran-reasoning-SFT",
"base_model:unsloth/Qwen3-0.6B",
"license:mit",
"region:us"
] |
text-generation
| 2025-09-11T14:57:47Z |
---
base_model: unsloth/Qwen3-0.6B
library_name: peft
pipeline_tag: text-generation
tags:
- base_model:adapter:unsloth/Qwen3-0.6B
- lora
- sft
- transformers
- trl
- unsloth
license: mit
datasets:
- musaoc/Quran-reasoning-SFT
language:
- en
---
# Model Card for Quran-R1
## Model Details
This model is a fine-tuned version of Qwen/Qwen3-0.6B on the musaoc/Quran-reasoning-SFT dataset.
It is designed to perform reasoning and question-answering tasks related to the Quran, providing structured reasoning steps along with the final answer.
### Model Description
- **Language(s) (NLP):** English
- **License:** MIT
- **Fine-tuning method**: Supervised fine-tuning (SFT)
- **Finetuned from model:** Qwen3-0.6B
- **Dataset:** musaoc/Quran-reasoning-SFT
## Uses
The model is intended for:
- Educational purposes: Assisting with structured reasoning about Quranic content.
- Research: Exploring reasoning capabilities of small LLMs fine-tuned on religious text.
- QA Systems: Providing answers with reasoning traces.
Not intended for:
- Authoritative religious rulings (fatwas)
- Sensitive or controversial theological debates
- High-stakes decision making
### Out-of-Scope Use
- Scope: The model is limited to the reasoning dataset it was trained on. It may not generalize to broader Quranic studies.
## Bias, Risks, and Limitations
- Bias: Outputs reflect dataset biases and may not represent all scholarly interpretations.
- Hallucination risk: Like all LLMs, it may generate incorrect or fabricated reasoning.
- Religious sensitivity: Responses may not align with every sect, school, or interpretation. Use with caution in sensitive contexts.
## How to Get Started with the Model
Use the code below to get started with the model.
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel
tokenizer = AutoTokenizer.from_pretrained("unsloth/Qwen3-0.6B",)
base_model = AutoModelForCausalLM.from_pretrained(
"unsloth/Qwen3-0.6B",
device_map={"": 0}
)
model = PeftModel.from_pretrained(base_model,"khazarai/Quran-R1")
question = "How does the Quran address the issue of parental authority and children’s rights?"
messages = [
{"role" : "user", "content" : question}
]
text = tokenizer.apply_chat_template(
messages,
tokenize = False,
add_generation_prompt = True,
enable_thinking = True,
)
from transformers import TextStreamer
_ = model.generate(
**tokenizer(text, return_tensors = "pt").to("cuda"),
max_new_tokens = 512,
temperature = 0.6,
top_p = 0.95,
top_k = 20,
streamer = TextStreamer(tokenizer, skip_prompt = True),
)
```
## Training Data
**Dataset**: musaoc/Quran-reasoning-SFT
The Quranic Reasoning Question Answering (QRQA) Dataset is a synthetic dataset designed for experimentation and for training and evaluating models capable of answering complex, knowledge-intensive questions about the Quran, with a strong emphasis on reasoning.
This dataset is particularly well-suited for Supervised Fine-Tuning (SFT) of Large Language Models (LLMs) to enhance their understanding of Islamic scripture and their ability to provide thoughtful, reasoned responses.
### Framework versions
- PEFT 0.17.0
|
milfordprudence/blockassist-bc-aquatic_reclusive_cassowary_1757602806
|
milfordprudence
| 2025-09-11T15:00:21Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"chattering hairy woodpecker",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T15:00:16Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- chattering hairy woodpecker
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
goshujaieja/blockassist-bc-untamed_armored_ram_1757602778
|
goshujaieja
| 2025-09-11T14:59:52Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"untamed armored ram",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T14:59:48Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- untamed armored ram
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
allfordedgar26/blockassist-bc-omnivorous_sprightly_aardvark_1757602731
|
allfordedgar26
| 2025-09-11T14:58:59Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"omnivorous sprightly aardvark",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T14:58:55Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- omnivorous sprightly aardvark
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
pabeypaul/blockassist-bc-sizable_knobby_salamander_1757602730
|
pabeypaul
| 2025-09-11T14:58:58Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"omnivorous sprightly aardvark",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T14:58:55Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- omnivorous sprightly aardvark
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
KamilMpakiet/agatadwa
|
KamilMpakiet
| 2025-09-11T14:58:22Z | 0 | 0 | null |
[
"license:other",
"region:us"
] | null | 2025-09-11T14:11:12Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
|
rbelanec/train_cola_789_1757596125
|
rbelanec
| 2025-09-11T14:57:57Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"llama-factory",
"ia3",
"generated_from_trainer",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct",
"license:llama3",
"region:us"
] | null | 2025-09-11T14:07:25Z |
---
library_name: peft
license: llama3
base_model: meta-llama/Meta-Llama-3-8B-Instruct
tags:
- llama-factory
- ia3
- generated_from_trainer
model-index:
- name: train_cola_789_1757596125
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# train_cola_789_1757596125
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the cola dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1522
- Num Input Tokens Seen: 3663512
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 789
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Input Tokens Seen |
|:-------------:|:-----:|:-----:|:---------------:|:-----------------:|
| 0.0737 | 0.5 | 962 | 0.2573 | 182656 |
| 0.2517 | 1.0 | 1924 | 0.1771 | 365728 |
| 0.2159 | 1.5 | 2886 | 0.1765 | 548992 |
| 0.1765 | 2.0 | 3848 | 0.1651 | 731984 |
| 0.1305 | 2.5 | 4810 | 0.1704 | 915792 |
| 0.33 | 3.0 | 5772 | 0.1675 | 1098920 |
| 0.0959 | 3.5 | 6734 | 0.1576 | 1281640 |
| 0.1044 | 4.0 | 7696 | 0.1552 | 1465464 |
| 0.1593 | 4.5 | 8658 | 0.1579 | 1649720 |
| 0.071 | 5.0 | 9620 | 0.1549 | 1831920 |
| 0.1529 | 5.5 | 10582 | 0.1570 | 2014928 |
| 0.1885 | 6.0 | 11544 | 0.1530 | 2198176 |
| 0.1467 | 6.5 | 12506 | 0.1522 | 2381440 |
| 0.1482 | 7.0 | 13468 | 0.1539 | 2564952 |
| 0.2243 | 7.5 | 14430 | 0.1545 | 2748568 |
| 0.1888 | 8.0 | 15392 | 0.1522 | 2931096 |
| 0.073 | 8.5 | 16354 | 0.1533 | 3113624 |
| 0.0907 | 9.0 | 17316 | 0.1530 | 3296808 |
| 0.0881 | 9.5 | 18278 | 0.1536 | 3480168 |
| 0.1452 | 10.0 | 19240 | 0.1530 | 3663512 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.8.0+cu128
- Datasets 3.6.0
- Tokenizers 0.21.1
|
misaeluoyz/blockassist-bc-bipedal_soaring_porcupine_1757602642
|
misaeluoyz
| 2025-09-11T14:57:31Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"bipedal soaring porcupine",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T14:57:27Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- bipedal soaring porcupine
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
pripak18370/blockassist-bc-agile_solitary_mandrill_1757602638
|
pripak18370
| 2025-09-11T14:57:26Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"agile solitary mandrill",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T14:57:23Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- agile solitary mandrill
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
canadayfawuh/blockassist-bc-flapping_wise_rhino_1757602557
|
canadayfawuh
| 2025-09-11T14:56:10Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"pawing squeaky bison",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T14:56:06Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- pawing squeaky bison
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
iyaadshikder1546/blockassist-bc-pensive_agile_bee_1757602507
|
iyaadshikder1546
| 2025-09-11T14:55:15Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"pensive agile bee",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T14:55:12Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- pensive agile bee
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
lornaaveradutch/blockassist-bc-poisonous_domestic_jaguar_1757602477
|
lornaaveradutch
| 2025-09-11T14:54:52Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"poisonous domestic jaguar",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T14:54:48Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- poisonous domestic jaguar
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
foltzjmso/blockassist-bc-deadly_aquatic_sparrow_1757602471
|
foltzjmso
| 2025-09-11T14:54:46Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"deadly aquatic sparrow",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T14:54:42Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- deadly aquatic sparrow
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
hartsellbrian/blockassist-bc-pawing_wiry_bee_1757602442
|
hartsellbrian
| 2025-09-11T14:54:16Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"pawing wiry bee",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T14:54:12Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- pawing wiry bee
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
pm9150348/blockassist-bc-powerful_raging_ape_1757602410
|
pm9150348
| 2025-09-11T14:53:46Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"powerful raging ape",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T14:53:40Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- powerful raging ape
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
borsahopa67/blockassist-bc-polished_quiet_badger_1757602346
|
borsahopa67
| 2025-09-11T14:52:42Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"polished quiet badger",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T14:52:36Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- polished quiet badger
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
leveylewlsjanot/blockassist-bc-mammalian_swift_chicken_1757602303
|
leveylewlsjanot
| 2025-09-11T14:52:07Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"shy arctic prawn",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T14:52:03Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- shy arctic prawn
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
hyunjoonkang/sim_pick_and_place_DAVLA_1
|
hyunjoonkang
| 2025-09-11T14:52:00Z | 0 | 0 |
lerobot
|
[
"lerobot",
"safetensors",
"smolvla",
"robotics",
"dataset:hyunjoonkang/wx250s_sim_pick_and_place_1",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] |
robotics
| 2025-09-11T14:51:46Z |
---
base_model: lerobot/smolvla_base
datasets: hyunjoonkang/wx250s_sim_pick_and_place_1
library_name: lerobot
license: apache-2.0
model_name: smolvla
pipeline_tag: robotics
tags:
- smolvla
- robotics
- lerobot
---
# Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
---
## How to Get Started with the Model
For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is the short version on how to train and run inference/eval:
### Train from scratch
```bash
python -m lerobot.scripts.train \
--dataset.repo_id=${HF_USER}/<dataset> \
--policy.type=act \
--output_dir=outputs/train/<desired_policy_repo_id> \
--job_name=lerobot_training \
--policy.device=cuda \
  --policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
--wandb.enable=true
```
_Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._
### Evaluate the policy/run inference
```bash
python -m lerobot.record \
--robot.type=so100_follower \
--dataset.repo_id=<hf_user>/eval_<dataset> \
--policy.path=<hf_user>/<desired_policy_repo_id> \
--episodes=10
```
Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.
---
## Model Details
- **License:** apache-2.0
|
oyshimimi50/blockassist-bc-alert_colorful_pigeon_1757602286
|
oyshimimi50
| 2025-09-11T14:51:41Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"alert colorful pigeon",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T14:51:36Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- alert colorful pigeon
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
meaganalmeidaobu/blockassist-bc-armored_pesty_tortoise_1757602278
|
meaganalmeidaobu
| 2025-09-11T14:51:26Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"armored pesty tortoise",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T14:51:23Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- armored pesty tortoise
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
cwayneconnor/blockassist-bc-mute_loud_lynx_1757602190
|
cwayneconnor
| 2025-09-11T14:51:22Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"mute loud lynx",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T14:50:52Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- mute loud lynx
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
rbelanec/train_copa_101112_1757596168
|
rbelanec
| 2025-09-11T14:50:19Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"llama-factory",
"ia3",
"generated_from_trainer",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct",
"license:llama3",
"region:us"
] | null | 2025-09-11T14:47:26Z |
---
library_name: peft
license: llama3
base_model: meta-llama/Meta-Llama-3-8B-Instruct
tags:
- llama-factory
- ia3
- generated_from_trainer
model-index:
- name: train_copa_101112_1757596168
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# train_copa_101112_1757596168
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the copa dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0314
- Num Input Tokens Seen: 281312
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 101112
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Input Tokens Seen |
|:-------------:|:-----:|:----:|:---------------:|:-----------------:|
| 0.592 | 0.5 | 45 | 0.5497 | 14144 |
| 0.8778 | 1.0 | 90 | 0.3723 | 28192 |
| 0.0636 | 1.5 | 135 | 0.0465 | 42208 |
| 0.0595 | 2.0 | 180 | 0.0365 | 56256 |
| 0.242 | 2.5 | 225 | 0.0338 | 70368 |
| 0.014 | 3.0 | 270 | 0.0341 | 84320 |
| 0.1039 | 3.5 | 315 | 0.0326 | 98400 |
| 0.0307 | 4.0 | 360 | 0.0314 | 112416 |
| 0.3158 | 4.5 | 405 | 0.0345 | 126496 |
| 0.0098 | 5.0 | 450 | 0.0319 | 140544 |
| 0.0163 | 5.5 | 495 | 0.0342 | 154592 |
| 0.0024 | 6.0 | 540 | 0.0315 | 168768 |
| 0.0792 | 6.5 | 585 | 0.0330 | 182848 |
| 0.0327 | 7.0 | 630 | 0.0315 | 196896 |
| 0.1089 | 7.5 | 675 | 0.0345 | 210912 |
| 0.0141 | 8.0 | 720 | 0.0326 | 225024 |
| 0.0397 | 8.5 | 765 | 0.0324 | 239200 |
| 0.0891 | 9.0 | 810 | 0.0335 | 253152 |
| 0.0837 | 9.5 | 855 | 0.0317 | 267040 |
| 0.1442 | 10.0 | 900 | 0.0330 | 281312 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.8.0+cu128
- Datasets 3.6.0
- Tokenizers 0.21.1
|
rbelanec/train_copa_101112_1757596165
|
rbelanec
| 2025-09-11T14:49:41Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"llama-factory",
"p-tuning",
"generated_from_trainer",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct",
"license:llama3",
"region:us"
] | null | 2025-09-11T14:46:05Z |
---
library_name: peft
license: llama3
base_model: meta-llama/Meta-Llama-3-8B-Instruct
tags:
- llama-factory
- p-tuning
- generated_from_trainer
model-index:
- name: train_copa_101112_1757596165
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# train_copa_101112_1757596165
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the copa dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9577
- Num Input Tokens Seen: 281312
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 101112
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Input Tokens Seen |
|:-------------:|:-----:|:----:|:---------------:|:-----------------:|
| 0.2006 | 0.5 | 45 | 0.1967 | 14144 |
| 0.3225 | 1.0 | 90 | 0.0856 | 28192 |
| 0.4327 | 1.5 | 135 | 0.0478 | 42208 |
| 0.0202 | 2.0 | 180 | 0.0775 | 56256 |
| 0.1742 | 2.5 | 225 | 0.0552 | 70368 |
| 0.0049 | 3.0 | 270 | 0.0273 | 84320 |
| 0.0011 | 3.5 | 315 | 0.0583 | 98400 |
| 0.0018 | 4.0 | 360 | 0.0332 | 112416 |
| 0.0013 | 4.5 | 405 | 0.0406 | 126496 |
| 0.0002 | 5.0 | 450 | 0.0364 | 140544 |
| 0.0001 | 5.5 | 495 | 0.0473 | 154592 |
| 0.0001 | 6.0 | 540 | 0.0446 | 168768 |
| 0.0001 | 6.5 | 585 | 0.0423 | 182848 |
| 0.0 | 7.0 | 630 | 0.0465 | 196896 |
| 0.0 | 7.5 | 675 | 0.0435 | 210912 |
| 0.0 | 8.0 | 720 | 0.0428 | 225024 |
| 0.0 | 8.5 | 765 | 0.0453 | 239200 |
| 0.0 | 9.0 | 810 | 0.0443 | 253152 |
| 0.0 | 9.5 | 855 | 0.0495 | 267040 |
| 0.0 | 10.0 | 900 | 0.0484 | 281312 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.8.0+cu128
- Datasets 3.6.0
- Tokenizers 0.21.1
|
lm8779694/blockassist-bc-wily_squeaky_mule_1757602142
|
lm8779694
| 2025-09-11T14:49:11Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"wily squeaky mule",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T14:49:07Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- wily squeaky mule
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
rodrigoburgd/blockassist-bc-scruffy_untamed_hare_1757602112
|
rodrigoburgd
| 2025-09-11T14:48:40Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"scruffy untamed hare",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T14:48:37Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- scruffy untamed hare
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Ayush2594/psycare-flan-t5-base
|
Ayush2594
| 2025-09-11T14:48:04Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | null | 2025-09-11T12:39:30Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
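In the meantime, a minimal sketch, assuming a standard T5-style text2text checkpoint (the prompt below is purely illustrative):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "Ayush2594/psycare-flan-t5-base"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Illustrative input only; the expected prompt format is not documented in this card.
inputs = tokenizer("I have been feeling anxious lately. What can I do?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```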
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
eunkey/erpo-qwen25-vl-oom-fixed
|
eunkey
| 2025-09-11T14:46:57Z | 9 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2_5_vl",
"image-to-text",
"generated_from_trainer",
"trl",
"grpo",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2.5-VL-3B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-VL-3B-Instruct",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
image-to-text
| 2025-08-10T09:17:12Z |
---
base_model: Qwen/Qwen2.5-VL-3B-Instruct
library_name: transformers
model_name: erpo-qwen25-vl-oom-fixed
tags:
- generated_from_trainer
- trl
- grpo
licence: license
---
# Model Card for erpo-qwen25-vl-oom-fixed
This model is a fine-tuned version of [Qwen/Qwen2.5-VL-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-3B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="eunkey/erpo-qwen25-vl-oom-fixed", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/xuio/huggingface/runs/hg0ssoy3)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.19.1
- Transformers: 4.53.2
- Pytorch: 2.7.0
- Datasets: 4.0.0
- Tokenizers: 0.21.2
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
rbelanec/train_copa_101112_1757596163
|
rbelanec
| 2025-09-11T14:45:52Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"llama-factory",
"prefix-tuning",
"generated_from_trainer",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct",
"license:llama3",
"region:us"
] | null | 2025-09-11T14:39:51Z |
---
library_name: peft
license: llama3
base_model: meta-llama/Meta-Llama-3-8B-Instruct
tags:
- llama-factory
- prefix-tuning
- generated_from_trainer
model-index:
- name: train_copa_101112_1757596163
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# train_copa_101112_1757596163
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the copa dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9463
- Num Input Tokens Seen: 547440
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 101112
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Input Tokens Seen |
|:-------------:|:-----:|:----:|:---------------:|:-----------------:|
| 0.2208 | 1.0 | 180 | 0.2563 | 27344 |
| 0.2677 | 2.0 | 360 | 0.2335 | 54736 |
| 0.2249 | 3.0 | 540 | 0.2334 | 82064 |
| 0.2551 | 4.0 | 720 | 0.2424 | 109456 |
| 0.2229 | 5.0 | 900 | 0.2327 | 136784 |
| 0.2276 | 6.0 | 1080 | 0.2340 | 164192 |
| 0.2361 | 7.0 | 1260 | 0.2310 | 191552 |
| 0.2147 | 8.0 | 1440 | 0.2424 | 218944 |
| 0.2244 | 9.0 | 1620 | 0.2365 | 246352 |
| 0.2334 | 10.0 | 1800 | 0.2399 | 273744 |
| 0.2356 | 11.0 | 1980 | 0.2416 | 301072 |
| 0.223 | 12.0 | 2160 | 0.2418 | 328464 |
| 0.2351 | 13.0 | 2340 | 0.2705 | 355840 |
| 0.1368 | 14.0 | 2520 | 0.3143 | 383168 |
| 0.0239 | 15.0 | 2700 | 0.5442 | 410512 |
| 0.1856 | 16.0 | 2880 | 0.7039 | 437952 |
| 0.029 | 17.0 | 3060 | 0.8290 | 465264 |
| 0.0011 | 18.0 | 3240 | 0.9045 | 492672 |
| 0.0005 | 19.0 | 3420 | 0.9412 | 520048 |
| 0.0008 | 20.0 | 3600 | 0.9463 | 547440 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.8.0+cu128
- Datasets 3.6.0
- Tokenizers 0.21.1
|
ichsanlook/pentestic-one-2bit
|
ichsanlook
| 2025-09-11T14:45:35Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-09-11T14:45:35Z |
---
license: apache-2.0
---
|
rbelanec/train_svamp_101112_1757596157
|
rbelanec
| 2025-09-11T14:44:41Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"llama-factory",
"prefix-tuning",
"generated_from_trainer",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct",
"license:llama3",
"region:us"
] | null | 2025-09-11T14:34:40Z |
---
library_name: peft
license: llama3
base_model: meta-llama/Meta-Llama-3-8B-Instruct
tags:
- llama-factory
- prefix-tuning
- generated_from_trainer
model-index:
- name: train_svamp_101112_1757596157
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# train_svamp_101112_1757596157
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the svamp dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4107
- Num Input Tokens Seen: 1348864
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 101112
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Input Tokens Seen |
|:-------------:|:-----:|:----:|:---------------:|:-----------------:|
| 0.6409 | 1.0 | 315 | 0.7571 | 67488 |
| 0.262 | 2.0 | 630 | 0.3623 | 134832 |
| 0.0962 | 3.0 | 945 | 0.2180 | 202352 |
| 0.0468 | 4.0 | 1260 | 0.1878 | 269776 |
| 0.0382 | 5.0 | 1575 | 0.2140 | 337328 |
| 0.0017 | 6.0 | 1890 | 0.3292 | 404608 |
| 0.0037 | 7.0 | 2205 | 0.3098 | 472144 |
| 0.005 | 8.0 | 2520 | 0.3992 | 539664 |
| 0.0 | 9.0 | 2835 | 0.3648 | 607136 |
| 0.0002 | 10.0 | 3150 | 0.3280 | 674496 |
| 0.0 | 11.0 | 3465 | 0.3562 | 741840 |
| 0.0001 | 12.0 | 3780 | 0.3841 | 809312 |
| 0.0 | 13.0 | 4095 | 0.3958 | 876784 |
| 0.0 | 14.0 | 4410 | 0.4013 | 944080 |
| 0.0 | 15.0 | 4725 | 0.4053 | 1011456 |
| 0.0 | 16.0 | 5040 | 0.4078 | 1078880 |
| 0.0 | 17.0 | 5355 | 0.4081 | 1146416 |
| 0.0 | 18.0 | 5670 | 0.4113 | 1213888 |
| 0.0 | 19.0 | 5985 | 0.4104 | 1281488 |
| 0.0 | 20.0 | 6300 | 0.4107 | 1348864 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.8.0+cu128
- Datasets 3.6.0
- Tokenizers 0.21.1
|
mradermacher/mcp-instruct-v1-GGUF
|
mradermacher
| 2025-09-11T14:43:40Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"lfm2",
"en",
"base_model:yasserrmd/mcp-instruct-v1",
"base_model:quantized:yasserrmd/mcp-instruct-v1",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-09-11T13:29:54Z |
---
base_model: yasserrmd/mcp-instruct-v1
language:
- en
library_name: transformers
license: apache-2.0
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- lfm2
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/yasserrmd/mcp-instruct-v1
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#mcp-instruct-v1-GGUF).***
weighted/imatrix quants are available at https://huggingface.co/mradermacher/mcp-instruct-v1-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
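As a minimal sketch, assuming a local [llama.cpp](https://github.com/ggml-org/llama.cpp) build (binary names and flags can differ between versions), one of the quants listed below can be downloaded and run like this:
```sh
# Grab the "fast, recommended" Q4_K_M quant and run it with llama.cpp
huggingface-cli download mradermacher/mcp-instruct-v1-GGUF mcp-instruct-v1.Q4_K_M.gguf --local-dir .
llama-cli -m mcp-instruct-v1.Q4_K_M.gguf -p "Describe an MCP tool for listing files." -n 128
```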
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/mcp-instruct-v1-GGUF/resolve/main/mcp-instruct-v1.Q2_K.gguf) | Q2_K | 0.6 | |
| [GGUF](https://huggingface.co/mradermacher/mcp-instruct-v1-GGUF/resolve/main/mcp-instruct-v1.Q3_K_S.gguf) | Q3_K_S | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/mcp-instruct-v1-GGUF/resolve/main/mcp-instruct-v1.Q3_K_M.gguf) | Q3_K_M | 0.7 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/mcp-instruct-v1-GGUF/resolve/main/mcp-instruct-v1.Q3_K_L.gguf) | Q3_K_L | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/mcp-instruct-v1-GGUF/resolve/main/mcp-instruct-v1.IQ4_XS.gguf) | IQ4_XS | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/mcp-instruct-v1-GGUF/resolve/main/mcp-instruct-v1.Q4_K_S.gguf) | Q4_K_S | 0.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/mcp-instruct-v1-GGUF/resolve/main/mcp-instruct-v1.Q4_K_M.gguf) | Q4_K_M | 0.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/mcp-instruct-v1-GGUF/resolve/main/mcp-instruct-v1.Q5_K_S.gguf) | Q5_K_S | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/mcp-instruct-v1-GGUF/resolve/main/mcp-instruct-v1.Q5_K_M.gguf) | Q5_K_M | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/mcp-instruct-v1-GGUF/resolve/main/mcp-instruct-v1.Q6_K.gguf) | Q6_K | 1.1 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/mcp-instruct-v1-GGUF/resolve/main/mcp-instruct-v1.Q8_0.gguf) | Q8_0 | 1.3 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/mcp-instruct-v1-GGUF/resolve/main/mcp-instruct-v1.f16.gguf) | f16 | 2.4 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
rbelanec/train_svamp_101112_1757596160
|
rbelanec
| 2025-09-11T14:43:39Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"llama-factory",
"lora",
"generated_from_trainer",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct",
"license:llama3",
"region:us"
] | null | 2025-09-11T14:37:19Z |
---
library_name: peft
license: llama3
base_model: meta-llama/Meta-Llama-3-8B-Instruct
tags:
- llama-factory
- lora
- generated_from_trainer
model-index:
- name: train_svamp_101112_1757596160
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# train_svamp_101112_1757596160
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the svamp dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1319
- Num Input Tokens Seen: 704272
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 101112
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Input Tokens Seen |
|:-------------:|:-----:|:----:|:---------------:|:-----------------:|
| 0.1528 | 0.5 | 79 | 0.2311 | 35296 |
| 0.0753 | 1.0 | 158 | 0.1515 | 70400 |
| 0.0805 | 1.5 | 237 | 0.1408 | 106208 |
| 0.1368 | 2.0 | 316 | 0.1319 | 140736 |
| 0.038 | 2.5 | 395 | 0.1435 | 176064 |
| 0.0199 | 3.0 | 474 | 0.1467 | 211024 |
| 0.0059 | 3.5 | 553 | 0.2152 | 246128 |
| 0.0396 | 4.0 | 632 | 0.1816 | 281616 |
| 0.0337 | 4.5 | 711 | 0.2312 | 316976 |
| 0.0003 | 5.0 | 790 | 0.2054 | 352256 |
| 0.0005 | 5.5 | 869 | 0.2563 | 387360 |
| 0.0001 | 6.0 | 948 | 0.2300 | 422464 |
| 0.0 | 6.5 | 1027 | 0.2501 | 457760 |
| 0.0001 | 7.0 | 1106 | 0.2568 | 492912 |
| 0.0001 | 7.5 | 1185 | 0.2675 | 528336 |
| 0.0 | 8.0 | 1264 | 0.2667 | 563600 |
| 0.0001 | 8.5 | 1343 | 0.2692 | 598992 |
| 0.0 | 9.0 | 1422 | 0.2690 | 633984 |
| 0.0 | 9.5 | 1501 | 0.2714 | 669152 |
| 0.0001 | 10.0 | 1580 | 0.2698 | 704272 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.8.0+cu128
- Datasets 3.6.0
- Tokenizers 0.21.1
|
ahnets/blockassist-bc-keen_fast_giraffe_1757601776
|
ahnets
| 2025-09-11T14:43:20Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"keen fast giraffe",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T14:43:14Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- keen fast giraffe
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
AnerYubo/blockassist-bc-pawing_downy_anaconda_1757601747
|
AnerYubo
| 2025-09-11T14:42:31Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"pawing downy anaconda",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T14:42:28Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- pawing downy anaconda
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
AnerYubo/blockassist-bc-screeching_mute_lemur_1757601739
|
AnerYubo
| 2025-09-11T14:42:23Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"screeching mute lemur",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T14:42:20Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- screeching mute lemur
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
helmutsukocok/blockassist-bc-loud_scavenging_kangaroo_1757600049
|
helmutsukocok
| 2025-09-11T14:39:28Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"loud scavenging kangaroo",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T14:39:24Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- loud scavenging kangaroo
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
vendi11/blockassist-bc-placid_placid_llama_1757601514
|
vendi11
| 2025-09-11T14:39:16Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"placid placid llama",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T14:39:12Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- placid placid llama
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
sgg66336/blockassist-bc-robust_carnivorous_salamander_1757601468
|
sgg66336
| 2025-09-11T14:38:13Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"robust carnivorous salamander",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T14:38:08Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- robust carnivorous salamander
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
burgbobby/blockassist-bc-lithe_wild_boar_1757601432
|
burgbobby
| 2025-09-11T14:37:26Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"lithe wild boar",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T14:37:22Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- lithe wild boar
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Vicky240922222/pubmedbert-gpt2-biomedical
|
Vicky240922222
| 2025-09-11T14:37:23Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"encoder-decoder",
"text2text-generation",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2025-09-11T14:34:12Z |
---
library_name: transformers
tags:
- generated_from_trainer
model-index:
- name: pubmedbert-gpt2-biomedical
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pubmedbert-gpt2-biomedical
This model is a fine-tuned encoder-decoder model; the Trainer metadata did not record the base checkpoint or the dataset used.
## Model description
More information needed
## Intended uses & limitations
More information needed
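As the usage section is left open, the sketch below shows one way the checkpoint might be used for inference, assuming it follows the standard 🤗 `EncoderDecoderModel` layout suggested by its tags and that a tokenizer and generation config were pushed with it; the example input is illustrative only.
```python
from transformers import AutoTokenizer, EncoderDecoderModel

repo_id = "Vicky240922222/pubmedbert-gpt2-biomedical"

# Assumes a tokenizer and generation config (decoder_start_token_id, pad_token_id, ...) were saved with the model.
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = EncoderDecoderModel.from_pretrained(repo_id)

text = "Aspirin irreversibly inhibits cyclooxygenase, reducing prostaglandin synthesis."
inputs = tokenizer(text, return_tensors="pt")
generated = model.generate(
    input_ids=inputs["input_ids"],
    attention_mask=inputs["attention_mask"],
    max_new_tokens=64,
)
print(tokenizer.decode(generated[0], skip_special_tokens=True))
```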
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.56.1
- Pytorch 2.8.0+cu126
- Datasets 4.0.0
- Tokenizers 0.22.0
|
kokkeytopodar62963/blockassist-bc-domestic_savage_bear_1757601424
|
kokkeytopodar62963
| 2025-09-11T14:37:18Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"domestic savage bear",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T14:37:13Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- domestic savage bear
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
toruns/blockassist-bc-insectivorous_bold_lion_1757601393
|
toruns
| 2025-09-11T14:37:04Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"insectivorous bold lion",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T14:36:55Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- insectivorous bold lion
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
rbelanec/train_cb_101112_1757596156
|
rbelanec
| 2025-09-11T14:37:03Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"llama-factory",
"ia3",
"generated_from_trainer",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct",
"license:llama3",
"region:us"
] | null | 2025-09-11T14:34:00Z |
---
library_name: peft
license: llama3
base_model: meta-llama/Meta-Llama-3-8B-Instruct
tags:
- llama-factory
- ia3
- generated_from_trainer
model-index:
- name: train_cb_101112_1757596156
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# train_cb_101112_1757596156
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the cb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1639
- Num Input Tokens Seen: 359824
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 101112
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Input Tokens Seen |
|:-------------:|:------:|:----:|:---------------:|:-----------------:|
| 1.081 | 0.5088 | 29 | 1.2085 | 19872 |
| 1.2816 | 1.0175 | 58 | 1.2085 | 36432 |
| 0.6154 | 1.5263 | 87 | 0.5919 | 53680 |
| 0.1858 | 2.0351 | 116 | 0.2524 | 72160 |
| 0.1156 | 2.5439 | 145 | 0.2084 | 91904 |
| 0.351 | 3.0526 | 174 | 0.1916 | 108856 |
| 0.2955 | 3.5614 | 203 | 0.1786 | 128056 |
| 0.0914 | 4.0702 | 232 | 0.1804 | 146952 |
| 0.1035 | 4.5789 | 261 | 0.1801 | 165128 |
| 0.0952 | 5.0877 | 290 | 0.1761 | 183224 |
| 0.0392 | 5.5965 | 319 | 0.1748 | 202424 |
| 0.1394 | 6.1053 | 348 | 0.1756 | 220000 |
| 0.1559 | 6.6140 | 377 | 0.1660 | 238272 |
| 0.1349 | 7.1228 | 406 | 0.1702 | 255984 |
| 0.0485 | 7.6316 | 435 | 0.1688 | 275536 |
| 0.1528 | 8.1404 | 464 | 0.1659 | 293296 |
| 0.1347 | 8.6491 | 493 | 0.1672 | 312304 |
| 0.0932 | 9.1579 | 522 | 0.1661 | 329216 |
| 0.0989 | 9.6667 | 551 | 0.1639 | 346944 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.8.0+cu128
- Datasets 3.6.0
- Tokenizers 0.21.1
|
makhiovrnl/blockassist-bc-marine_armored_weasel_1757601397
|
makhiovrnl
| 2025-09-11T14:36:46Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"marine armored weasel",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T14:36:42Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- marine armored weasel
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ahmarkibriya5374/blockassist-bc-fishy_furry_wombat_1757601365
|
ahmarkibriya5374
| 2025-09-11T14:36:18Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"fishy furry wombat",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T14:36:14Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- fishy furry wombat
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
celisjrdn/blockassist-bc-subtle_stinging_chimpanzee_1757601337
|
celisjrdn
| 2025-09-11T14:35:45Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"subtle stinging chimpanzee",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T14:35:42Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- subtle stinging chimpanzee
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
rbelanec/train_cb_101112_1757596155
|
rbelanec
| 2025-09-11T14:35:37Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"llama-factory",
"lntuning",
"generated_from_trainer",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct",
"license:llama3",
"region:us"
] | null | 2025-09-11T14:32:11Z |
---
library_name: peft
license: llama3
base_model: meta-llama/Meta-Llama-3-8B-Instruct
tags:
- llama-factory
- lntuning
- generated_from_trainer
model-index:
- name: train_cb_101112_1757596155
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# train_cb_101112_1757596155
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the cb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1502
- Num Input Tokens Seen: 359824
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 101112
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Input Tokens Seen |
|:-------------:|:------:|:----:|:---------------:|:-----------------:|
| 0.8304 | 0.5088 | 29 | 0.8175 | 19872 |
| 0.2874 | 1.0175 | 58 | 0.3097 | 36432 |
| 0.117 | 1.5263 | 87 | 0.2008 | 53680 |
| 0.158 | 2.0351 | 116 | 0.1816 | 72160 |
| 0.0625 | 2.5439 | 145 | 0.1618 | 91904 |
| 0.362 | 3.0526 | 174 | 0.1618 | 108856 |
| 0.2499 | 3.5614 | 203 | 0.1502 | 128056 |
| 0.0416 | 4.0702 | 232 | 0.1588 | 146952 |
| 0.0798 | 4.5789 | 261 | 0.1717 | 165128 |
| 0.0694 | 5.0877 | 290 | 0.1825 | 183224 |
| 0.009 | 5.5965 | 319 | 0.1751 | 202424 |
| 0.0798 | 6.1053 | 348 | 0.1801 | 220000 |
| 0.1092 | 6.6140 | 377 | 0.1765 | 238272 |
| 0.0968 | 7.1228 | 406 | 0.1833 | 255984 |
| 0.0135 | 7.6316 | 435 | 0.1948 | 275536 |
| 0.0669 | 8.1404 | 464 | 0.1933 | 293296 |
| 0.0877 | 8.6491 | 493 | 0.1893 | 312304 |
| 0.0715 | 9.1579 | 522 | 0.1936 | 329216 |
| 0.0497 | 9.6667 | 551 | 0.1898 | 346944 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.8.0+cu128
- Datasets 3.6.0
- Tokenizers 0.21.1
|
terrancejykn/blockassist-bc-colorful_curious_macaque_1757601314
|
terrancejykn
| 2025-09-11T14:35:22Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"colorful curious macaque",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T14:35:19Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- colorful curious macaque
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
sampingkaca72/blockassist-bc-armored_stealthy_elephant_1757599553
|
sampingkaca72
| 2025-09-11T14:34:40Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"armored stealthy elephant",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T14:34:35Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- armored stealthy elephant
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
laconadaomy/blockassist-bc-squeaky_invisible_mole_1757601263
|
laconadaomy
| 2025-09-11T14:34:38Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"squeaky invisible mole",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T14:34:32Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- squeaky invisible mole
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
yichengup/flux.1-fill-dev-OneReward
|
yichengup
| 2025-09-11T14:34:08Z | 4 | 8 | null |
[
"base_model:bytedance-research/OneReward",
"base_model:finetune:bytedance-research/OneReward",
"region:us"
] | null | 2025-09-10T16:23:23Z |
---
base_model:
- bytedance-research/OneReward
---
# flux.1-fill-dev-OneReward
This repository repackages the model weights into a single checkpoint suitable for use with ComfyUI.
Original model link: [OneReward](https://huggingface.co/bytedance-research/OneReward)
|
clayceklj/blockassist-bc-reptilian_bellowing_crocodile_1757601215
|
clayceklj
| 2025-09-11T14:34:05Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"reptilian bellowing crocodile",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T14:34:02Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- reptilian bellowing crocodile
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
lukashossain3425/blockassist-bc-freckled_twitchy_wallaby_1757601224
|
lukashossain3425
| 2025-09-11T14:33:52Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"freckled twitchy wallaby",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T14:33:48Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- freckled twitchy wallaby
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
rbelanec/train_cb_101112_1757596151
|
rbelanec
| 2025-09-11T14:33:31Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"llama-factory",
"prefix-tuning",
"generated_from_trainer",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct",
"license:llama3",
"region:us"
] | null | 2025-09-11T14:29:12Z |
---
library_name: peft
license: llama3
base_model: meta-llama/Meta-Llama-3-8B-Instruct
tags:
- llama-factory
- prefix-tuning
- generated_from_trainer
model-index:
- name: train_cb_101112_1757596151
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# train_cb_101112_1757596151
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the cb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1226
- Num Input Tokens Seen: 621040
## Model description
More information needed
## Intended uses & limitations
More information needed
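As a rough usage sketch (not part of the original card): the prefix-tuning adapter can presumably be loaded with PEFT's Auto class, which resolves the base model from the adapter config. The CommitmentBank-style entailment prompt below is an assumption; the exact template used during training is not documented here.
```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

adapter_id = "rbelanec/train_cb_101112_1757596151"  # this repository

# AutoPeftModelForCausalLM reads the adapter config and pulls the base Llama-3 model automatically.
model = AutoPeftModelForCausalLM.from_pretrained(adapter_id, torch_dtype="auto", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")

premise = "The doctor said the tests were inconclusive."
hypothesis = "The tests gave a clear result."
prompt = f"Premise: {premise}\nHypothesis: {hypothesis}\nAnswer with entailment, contradiction, or neutral."
input_ids = tokenizer.apply_chat_template(
    [{"role": "user", "content": prompt}], add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output_ids = model.generate(input_ids, max_new_tokens=8)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```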
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 101112
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Input Tokens Seen |
|:-------------:|:-----:|:----:|:---------------:|:-----------------:|
| 0.5565 | 1.0 | 113 | 0.3490 | 30240 |
| 0.6263 | 2.0 | 226 | 0.3298 | 61600 |
| 0.4408 | 3.0 | 339 | 0.1773 | 92552 |
| 0.3915 | 4.0 | 452 | 0.2358 | 123976 |
| 0.0154 | 5.0 | 565 | 0.2813 | 155224 |
| 0.1362 | 6.0 | 678 | 0.1831 | 186368 |
| 0.0329 | 7.0 | 791 | 0.1248 | 217280 |
| 0.0004 | 8.0 | 904 | 0.0106 | 248064 |
| 0.0001 | 9.0 | 1017 | 0.1456 | 278576 |
| 0.0001 | 10.0 | 1130 | 0.1819 | 309584 |
| 0.0001 | 11.0 | 1243 | 0.2099 | 340752 |
| 0.0 | 12.0 | 1356 | 0.1466 | 372240 |
| 0.0001 | 13.0 | 1469 | 0.1362 | 402976 |
| 0.0001 | 14.0 | 1582 | 0.1331 | 433800 |
| 0.0001 | 15.0 | 1695 | 0.1305 | 465096 |
| 0.0001 | 16.0 | 1808 | 0.1263 | 496184 |
| 0.0 | 17.0 | 1921 | 0.1279 | 527400 |
| 0.0 | 18.0 | 2034 | 0.1253 | 558656 |
| 0.0 | 19.0 | 2147 | 0.1310 | 589928 |
| 0.0 | 20.0 | 2260 | 0.1226 | 621040 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.8.0+cu128
- Datasets 3.6.0
- Tokenizers 0.21.1
|
sadiyakhatun65524/blockassist-bc-insectivorous_prehistoric_mouse_1757601196
|
sadiyakhatun65524
| 2025-09-11T14:33:29Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"insectivorous prehistoric mouse",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T14:33:25Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- insectivorous prehistoric mouse
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
rbelanec/train_cb_101112_1757596154
|
rbelanec
| 2025-09-11T14:32:49Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"llama-factory",
"lora",
"generated_from_trainer",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct",
"license:llama3",
"region:us"
] | null | 2025-09-11T14:29:35Z |
---
library_name: peft
license: llama3
base_model: meta-llama/Meta-Llama-3-8B-Instruct
tags:
- llama-factory
- lora
- generated_from_trainer
model-index:
- name: train_cb_101112_1757596154
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# train_cb_101112_1757596154
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the cb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0632
- Num Input Tokens Seen: 359824
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 101112
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Input Tokens Seen |
|:-------------:|:------:|:----:|:---------------:|:-----------------:|
| 0.1901 | 0.5088 | 29 | 0.2935 | 19872 |
| 0.144 | 1.0175 | 58 | 0.2516 | 36432 |
| 0.0659 | 1.5263 | 87 | 0.1955 | 53680 |
| 0.1028 | 2.0351 | 116 | 0.1596 | 72160 |
| 0.0053 | 2.5439 | 145 | 0.0632 | 91904 |
| 0.2008 | 3.0526 | 174 | 0.1121 | 108856 |
| 0.0239 | 3.5614 | 203 | 0.0735 | 128056 |
| 0.0017 | 4.0702 | 232 | 0.1148 | 146952 |
| 0.0469 | 4.5789 | 261 | 0.0746 | 165128 |
| 0.001 | 5.0877 | 290 | 0.0689 | 183224 |
| 0.0001 | 5.5965 | 319 | 0.0707 | 202424 |
| 0.0003 | 6.1053 | 348 | 0.0893 | 220000 |
| 0.0001 | 6.6140 | 377 | 0.0922 | 238272 |
| 0.0001 | 7.1228 | 406 | 0.0707 | 255984 |
| 0.0001 | 7.6316 | 435 | 0.0760 | 275536 |
| 0.0001 | 8.1404 | 464 | 0.0724 | 293296 |
| 0.0001 | 8.6491 | 493 | 0.0769 | 312304 |
| 0.0001 | 9.1579 | 522 | 0.0682 | 329216 |
| 0.0001 | 9.6667 | 551 | 0.0694 | 346944 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.8.0+cu128
- Datasets 3.6.0
- Tokenizers 0.21.1
|
brandescarpello553/blockassist-bc-shiny_graceful_lion_1757601086
|
brandescarpello553
| 2025-09-11T14:31:34Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"shiny graceful lion",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T14:31:30Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- shiny graceful lion
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
pralayd/Finetuned_Trishul8B-Lite-GGUF
|
pralayd
| 2025-09-11T14:31:24Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2025-09-11T14:27:22Z |
---
base_model: unsloth/meta-llama-3.1-8b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** pralayd
- **License:** apache-2.0
- **Finetuned from model:** unsloth/meta-llama-3.1-8b-instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
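A minimal loading sketch, not part of the original card: it assumes the repository contains bitsandbytes 4-bit safetensors weights (as the tags indicate, so `bitsandbytes` must be installed) and a chat template; the prompt and generation settings are illustrative only.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "pralayd/Finetuned_Trishul8B-Lite-GGUF"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
# Weights load in the 4-bit format they were saved in; device_map spreads them across available GPUs.
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")

messages = [{"role": "user", "content": "Explain what instruction fine-tuning is in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output_ids = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```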
|
SuganyaP/quick-distilbert-imdb
|
SuganyaP
| 2025-09-11T14:31:22Z | 0 | 0 | null |
[
"en",
"base_model:distilbert/distilbert-base-uncased-finetuned-sst-2-english",
"base_model:finetune:distilbert/distilbert-base-uncased-finetuned-sst-2-english",
"license:mit",
"region:us"
] | null | 2025-09-11T11:51:48Z |
---
license: mit
language:
- en
metrics:
- accuracy
- f1
base_model:
- distilbert/distilbert-base-uncased-finetuned-sst-2-english
---
# Quick DistilBERT IMDB Sentiment Classifier
This is a fine-tuned DistilBERT model for **sentiment analysis** on the IMDB movie reviews dataset.
The model classifies reviews as **positive** or **negative**.
## Model Details
- **Base model**: `distilbert-base-uncased`
- **Dataset**: IMDB (cleaned train/test splits)
- **Task**: Sentiment classification (binary)
- **Framework**: Hugging Face Transformers
## Training
- Fine-tuned DistilBERT on the IMDB dataset
- Used standard text classification head
- Training args saved in `training_args.bin`
## Evaluation
Accuracy and F1-score on the IMDB test set:
(Add numbers from your `eval_report.txt` here)
Misclassified examples are available in `misclassified_examples.csv`.
## How to Use
```python
from transformers import pipeline
model_id = "SuganyaP/quick-distilbert-imdb"
classifier = pipeline("sentiment-analysis", model=model_id)
print(classifier("This movie was excellent!"))
```
|
hadwinlaverne/blockassist-bc-lethal_screeching_badger_1757601057
|
hadwinlaverne
| 2025-09-11T14:31:06Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"lethal screeching badger",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T14:31:03Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- lethal screeching badger
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
omerbektass/blockassist-bc-keen_fast_giraffe_1757601010
|
omerbektass
| 2025-09-11T14:31:04Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"keen fast giraffe",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T14:30:26Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- keen fast giraffe
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mccomasadxdwu/blockassist-bc-dense_lithe_chinchilla_1757601052
|
mccomasadxdwu
| 2025-09-11T14:31:00Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"dense lithe chinchilla",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T14:30:57Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- dense lithe chinchilla
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
oekaltegabi/blockassist-bc-tame_dormant_hyena_1757601030
|
oekaltegabi
| 2025-09-11T14:30:38Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"hairy sprightly puffin",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T14:30:35Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- hairy sprightly puffin
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
fuerbringerestefana/blockassist-bc-monstrous_vicious_snail_1757600961
|
fuerbringerestefana
| 2025-09-11T14:29:40Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"monstrous vicious snail",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T14:29:35Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- monstrous vicious snail
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
hearnspetrikatriceyo/blockassist-bc-polished_hibernating_swan_1757600933
|
hearnspetrikatriceyo
| 2025-09-11T14:29:01Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"polished hibernating swan",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T14:28:58Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- polished hibernating swan
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
CodeAtCMU/SmolLM2-1.7B-CorruptedComments_full_sft_code_data_120K_replace_keywords_nonen
|
CodeAtCMU
| 2025-09-11T14:28:21Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-11T14:27:54Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
rbelanec/train_copa_789_1757596141
|
rbelanec
| 2025-09-11T14:27:49Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"llama-factory",
"lora",
"generated_from_trainer",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct",
"license:llama3",
"region:us"
] | null | 2025-09-11T14:24:07Z |
---
library_name: peft
license: llama3
base_model: meta-llama/Meta-Llama-3-8B-Instruct
tags:
- llama-factory
- lora
- generated_from_trainer
model-index:
- name: train_copa_789_1757596141
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# train_copa_789_1757596141
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the copa dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0287
- Num Input Tokens Seen: 281984
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 789
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Input Tokens Seen |
|:-------------:|:-----:|:----:|:---------------:|:-----------------:|
| 0.1204 | 0.5 | 45 | 0.0726 | 14240 |
| 0.3342 | 1.0 | 90 | 0.0615 | 28192 |
| 0.0623 | 1.5 | 135 | 0.0627 | 42080 |
| 0.0032 | 2.0 | 180 | 0.0287 | 56192 |
| 0.0001 | 2.5 | 225 | 0.0371 | 70048 |
| 0.0012 | 3.0 | 270 | 0.0343 | 84192 |
| 0.0002 | 3.5 | 315 | 0.0305 | 98304 |
| 0.0 | 4.0 | 360 | 0.0306 | 112544 |
| 0.0 | 4.5 | 405 | 0.0308 | 126784 |
| 0.0 | 5.0 | 450 | 0.0322 | 140960 |
| 0.0 | 5.5 | 495 | 0.0331 | 155200 |
| 0.0 | 6.0 | 540 | 0.0347 | 169216 |
| 0.0 | 6.5 | 585 | 0.0347 | 183232 |
| 0.0 | 7.0 | 630 | 0.0355 | 197248 |
| 0.0 | 7.5 | 675 | 0.0402 | 211424 |
| 0.0 | 8.0 | 720 | 0.0311 | 225440 |
| 0.0 | 8.5 | 765 | 0.0339 | 239392 |
| 0.0 | 9.0 | 810 | 0.0341 | 253632 |
| 0.0 | 9.5 | 855 | 0.0374 | 267680 |
| 0.0 | 10.0 | 900 | 0.0409 | 281984 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.8.0+cu128
- Datasets 3.6.0
- Tokenizers 0.21.1
|
rbelanec/train_copa_789_1757596143
|
rbelanec
| 2025-09-11T14:27:46Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"llama-factory",
"ia3",
"generated_from_trainer",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct",
"license:llama3",
"region:us"
] | null | 2025-09-11T14:24:56Z |
---
library_name: peft
license: llama3
base_model: meta-llama/Meta-Llama-3-8B-Instruct
tags:
- llama-factory
- ia3
- generated_from_trainer
model-index:
- name: train_copa_789_1757596143
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# train_copa_789_1757596143
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the copa dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0623
- Num Input Tokens Seen: 281984
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 789
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Input Tokens Seen |
|:-------------:|:-----:|:----:|:---------------:|:-----------------:|
| 0.5993 | 0.5 | 45 | 0.5326 | 14240 |
| 0.6312 | 1.0 | 90 | 0.3705 | 28192 |
| 0.0635 | 1.5 | 135 | 0.0711 | 42080 |
| 0.1393 | 2.0 | 180 | 0.0634 | 56192 |
| 0.0085 | 2.5 | 225 | 0.0633 | 70048 |
| 0.1219 | 3.0 | 270 | 0.0656 | 84192 |
| 0.1509 | 3.5 | 315 | 0.0654 | 98304 |
| 0.0373 | 4.0 | 360 | 0.0660 | 112544 |
| 0.0535 | 4.5 | 405 | 0.0668 | 126784 |
| 0.1165 | 5.0 | 450 | 0.0628 | 140960 |
| 0.0766 | 5.5 | 495 | 0.0654 | 155200 |
| 0.0151 | 6.0 | 540 | 0.0652 | 169216 |
| 0.0561 | 6.5 | 585 | 0.0626 | 183232 |
| 0.0085 | 7.0 | 630 | 0.0627 | 197248 |
| 0.0554 | 7.5 | 675 | 0.0626 | 211424 |
| 0.1764 | 8.0 | 720 | 0.0635 | 225440 |
| 0.0044 | 8.5 | 765 | 0.0623 | 239392 |
| 0.0226 | 9.0 | 810 | 0.0626 | 253632 |
| 0.0474 | 9.5 | 855 | 0.0633 | 267680 |
| 0.036 | 10.0 | 900 | 0.0627 | 281984 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.8.0+cu128
- Datasets 3.6.0
- Tokenizers 0.21.1
|
vendi11/blockassist-bc-placid_placid_llama_1757600801
|
vendi11
| 2025-09-11T14:27:23Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"placid placid llama",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T14:27:20Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- placid placid llama
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
rbelanec/train_copa_789_1757596142
|
rbelanec
| 2025-09-11T14:27:17Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"llama-factory",
"lntuning",
"generated_from_trainer",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct",
"license:llama3",
"region:us"
] | null | 2025-09-11T14:24:33Z |
---
library_name: peft
license: llama3
base_model: meta-llama/Meta-Llama-3-8B-Instruct
tags:
- llama-factory
- lntuning
- generated_from_trainer
model-index:
- name: train_copa_789_1757596142
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# train_copa_789_1757596142
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the copa dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0593
- Num Input Tokens Seen: 281984
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 789
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Input Tokens Seen |
|:-------------:|:-----:|:----:|:---------------:|:-----------------:|
| 0.2081 | 0.5 | 45 | 0.1157 | 14240 |
| 0.2591 | 1.0 | 90 | 0.0701 | 28192 |
| 0.0367 | 1.5 | 135 | 0.0683 | 42080 |
| 0.1331 | 2.0 | 180 | 0.0597 | 56192 |
| 0.0047 | 2.5 | 225 | 0.0593 | 70048 |
| 0.0918 | 3.0 | 270 | 0.0596 | 84192 |
| 0.1091 | 3.5 | 315 | 0.0617 | 98304 |
| 0.0053 | 4.0 | 360 | 0.0622 | 112544 |
| 0.0101 | 4.5 | 405 | 0.0630 | 126784 |
| 0.0808 | 5.0 | 450 | 0.0620 | 140960 |
| 0.0104 | 5.5 | 495 | 0.0627 | 155200 |
| 0.0012 | 6.0 | 540 | 0.0637 | 169216 |
| 0.0056 | 6.5 | 585 | 0.0677 | 183232 |
| 0.0014 | 7.0 | 630 | 0.0702 | 197248 |
| 0.0042 | 7.5 | 675 | 0.0686 | 211424 |
| 0.0692 | 8.0 | 720 | 0.0670 | 225440 |
| 0.0005 | 8.5 | 765 | 0.0679 | 239392 |
| 0.0015 | 9.0 | 810 | 0.0698 | 253632 |
| 0.0069 | 9.5 | 855 | 0.0690 | 267680 |
| 0.003 | 10.0 | 900 | 0.0675 | 281984 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.8.0+cu128
- Datasets 3.6.0
- Tokenizers 0.21.1
|
sadiyakhatun65524/blockassist-bc-insectivorous_prehistoric_mouse_1757600815
|
sadiyakhatun65524
| 2025-09-11T14:27:09Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"insectivorous prehistoric mouse",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T14:27:04Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- insectivorous prehistoric mouse
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
rbelanec/train_copa_789_1757596140
|
rbelanec
| 2025-09-11T14:26:50Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"llama-factory",
"p-tuning",
"generated_from_trainer",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct",
"license:llama3",
"region:us"
] | null | 2025-09-11T14:23:14Z |
---
library_name: peft
license: llama3
base_model: meta-llama/Meta-Llama-3-8B-Instruct
tags:
- llama-factory
- p-tuning
- generated_from_trainer
model-index:
- name: train_copa_789_1757596140
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# train_copa_789_1757596140
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the copa dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4562
- Num Input Tokens Seen: 281984
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 789
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Input Tokens Seen |
|:-------------:|:-----:|:----:|:---------------:|:-----------------:|
| 0.411 | 0.5 | 45 | 0.3011 | 14240 |
| 0.2838 | 1.0 | 90 | 0.2367 | 28192 |
| 0.2526 | 1.5 | 135 | 0.2344 | 42080 |
| 0.2443 | 2.0 | 180 | 0.2417 | 56192 |
| 0.2322 | 2.5 | 225 | 0.2302 | 70048 |
| 0.2431 | 3.0 | 270 | 0.2324 | 84192 |
| 0.2331 | 3.5 | 315 | 0.2341 | 98304 |
| 0.2335 | 4.0 | 360 | 0.2316 | 112544 |
| 0.2376 | 4.5 | 405 | 0.2335 | 126784 |
| 0.2371 | 5.0 | 450 | 0.2323 | 140960 |
| 0.2308 | 5.5 | 495 | 0.2329 | 155200 |
| 0.2303 | 6.0 | 540 | 0.2314 | 169216 |
| 0.2276 | 6.5 | 585 | 0.2329 | 183232 |
| 0.2262 | 7.0 | 630 | 0.2323 | 197248 |
| 0.2449 | 7.5 | 675 | 0.2311 | 211424 |
| 0.2253 | 8.0 | 720 | 0.2308 | 225440 |
| 0.2314 | 8.5 | 765 | 0.2292 | 239392 |
| 0.2314 | 9.0 | 810 | 0.2329 | 253632 |
| 0.2303 | 9.5 | 855 | 0.2335 | 267680 |
| 0.2304 | 10.0 | 900 | 0.2313 | 281984 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.8.0+cu128
- Datasets 3.6.0
- Tokenizers 0.21.1
|
SustcZhangYX/EnvGPT-14B
|
SustcZhangYX
| 2025-09-11T14:26:34Z | 2 | 0 | null |
[
"safetensors",
"Environmental Science",
"en",
"zh",
"dataset:SustcZhangYX/ChatEnv",
"dataset:SustcZhangYX/ChatEnv-zh",
"license:mit",
"region:us"
] | null | 2025-09-09T01:57:36Z |
---
license: mit
datasets:
- SustcZhangYX/ChatEnv
- SustcZhangYX/ChatEnv-zh
language:
- en
- zh
tags:
- Environmental Science
---
<div align="center">
<img src="LOGO.PNG" width="450px">
<h1 align="center"><font face="Arial">EnvGPT-14B</font></h1>
</div>
**EnvGPT-14B** is a domain-specific large language model tailored for environmental science tasks, fine-tuned on both English and Chinese datasets.
Environmental science presents unique challenges for LLMs due to its interdisciplinary nature. EnvGPT-14B was developed to address these challenges by leveraging environmental science-specific instruction datasets and benchmarks.
*The model was fine-tuned on the environmental science-specific instruction datasets, [ChatEnv](https://huggingface.co/datasets/SustcZhangYX/ChatEnv) and [ChatEnv-zh](https://huggingface.co/datasets/SustcZhangYX/ChatEnv-zh), through Supervised Fine-Tuning (SFT). The combined dataset includes over **200 million tokens**, covering diverse topics in environmental science in both English and Chinese. This bilingual training enables EnvGPT-14B to achieve strong performance in Chinese as well as English tasks.*
## 🚀 Getting Started
### Download the model
Download the model: [EnvGPT-14B](https://huggingface.co/SustcZhangYX/EnvGPT-14B)
```shell
git lfs install
git clone https://huggingface.co/SustcZhangYX/EnvGPT-14B
```
### Model Usage
Here is a Python code snippet that demonstrates how to load the tokenizer and model and generate text using EnvGPT.
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline
# 1. Set your local EnvGPT model path here
model_path = "YOUR_LOCAL_MODEL_PATH"
# 2. Load tokenizer and model
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
torch_dtype=torch.bfloat16,
device_map="auto",
)
# 3. Build chat messages
messages = [
{"role": "system", "content": "You are an expert assistant in environmental science, EnvGPT. You are a helpful assistant."},
{"role": "user", "content": "What is the definition of environmental science?"},
]
# 4. Format the prompt using the chat template
# add_generation_prompt=True appends the assistant start token (e.g., <|assistant|>)
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
)
# 5. Initialize the text-generation pipeline
text_gen = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
device_map="auto",
torch_dtype=torch.bfloat16,
return_full_text=False, # Only return the newly generated text
)
# 6. Generate the response
# do_sample=True enables sampling (stochastic decoding)
# top_p=0.6 applies nucleus sampling
# temperature=0.8 controls randomness
# max_new_tokens=4096 allows up to 4096 new tokens
outputs = text_gen(
text,
max_new_tokens=4096, # Up to 4096 new tokens
do_sample=True, # Enable sampling instead of greedy decoding
top_p=0.6, # Nucleus sampling parameter
temperature=0.8, # Sampling temperature
)
# 7. Print the assistant’s reply (without the original prompt)
print(outputs[0]["generated_text"])
```
This code demonstrates how to load the tokenizer and model from your local path, define environmental science-specific prompts, and generate responses using sampling techniques like top-p and temperature.
## 🌏 Acknowledgement
EnvGPT-14B is fine-tuned based on the open-sourced [Qwen2.5](https://huggingface.co/Qwen). We sincerely thank the Qwen team for their efforts in developing and releasing such a powerful open-source foundation model, which makes domain-specific adaptations like EnvGPT possible.
## ❗Disclaimer
This project is intended solely for academic research and exploration. Please note that, like all large language models, this model may exhibit limitations, including potential inaccuracies or hallucinations in generated outputs.
## Limitations
- The model may produce hallucinated outputs or inaccuracies, which are inherent to large language models.
- The model's identity has not been specifically optimized and may generate content that resembles outputs from other Qwen-based models or similar architectures.
- Generated outputs can vary between attempts due to sensitivity to prompt phrasing and token context.
## 🚩Citation
If you find our work helpful, please consider citing our research: "[Fine-Tuning Large Language Models for Interdisciplinary Environmental Challenges](https://doi.org/10.1016/j.ese.2025.100608)":
```bibtex
@article{ZHANG2025100608,
title = {Fine-Tuning Large Language Models for Interdisciplinary Environmental Challenges},
journal = {Environmental Science and Ecotechnology},
pages = {100608},
year = {2025},
issn = {2666-4984},
doi = {https://doi.org/10.1016/j.ese.2025.100608},
url = {https://www.sciencedirect.com/science/article/pii/S2666498425000869},
author = {Yuanxin Zhang and Sijie Lin and Yaxin Xiong and Nan Li and Lijin Zhong and Longzhen Ding and Qing Hu}
}
```
|
rbelanec/train_copa_789_1757596138
|
rbelanec
| 2025-09-11T14:26:12Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"llama-factory",
"prefix-tuning",
"generated_from_trainer",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct",
"license:llama3",
"region:us"
] | null | 2025-09-11T14:20:21Z |
---
library_name: peft
license: llama3
base_model: meta-llama/Meta-Llama-3-8B-Instruct
tags:
- llama-factory
- prefix-tuning
- generated_from_trainer
model-index:
- name: train_copa_789_1757596138
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# train_copa_789_1757596138
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the copa dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6012
- Num Input Tokens Seen: 548240
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 789
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Input Tokens Seen |
|:-------------:|:-----:|:----:|:---------------:|:-----------------:|
| 0.5261 | 1.0 | 180 | 0.2666 | 27424 |
| 0.4265 | 2.0 | 360 | 0.2517 | 54832 |
| 0.2294 | 3.0 | 540 | 0.2400 | 82160 |
| 0.2376 | 4.0 | 720 | 0.2362 | 109632 |
| 0.2273 | 5.0 | 900 | 0.2374 | 137120 |
| 0.2282 | 6.0 | 1080 | 0.2412 | 164592 |
| 0.2299 | 7.0 | 1260 | 0.2372 | 191920 |
| 0.2302 | 8.0 | 1440 | 0.2416 | 219344 |
| 0.264 | 9.0 | 1620 | 0.2483 | 246736 |
| 0.2165 | 10.0 | 1800 | 0.2446 | 274208 |
| 0.254 | 11.0 | 1980 | 0.2517 | 301600 |
| 0.2522 | 12.0 | 2160 | 0.2489 | 328976 |
| 0.2228 | 13.0 | 2340 | 0.2545 | 356400 |
| 0.1836 | 14.0 | 2520 | 0.2654 | 383808 |
| 0.1791 | 15.0 | 2700 | 0.2790 | 411216 |
| 0.1126 | 16.0 | 2880 | 0.3588 | 438592 |
| 0.021 | 17.0 | 3060 | 0.4801 | 465984 |
| 0.0091 | 18.0 | 3240 | 0.5633 | 493488 |
| 0.0818 | 19.0 | 3420 | 0.5928 | 520816 |
| 0.0025 | 20.0 | 3600 | 0.6012 | 548240 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.8.0+cu128
- Datasets 3.6.0
- Tokenizers 0.21.1
|
rbelanec/train_copa_789_1757596139
|
rbelanec
| 2025-09-11T14:25:17Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"llama-factory",
"prompt-tuning",
"generated_from_trainer",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct",
"license:llama3",
"region:us"
] | null | 2025-09-11T14:22:10Z |
---
library_name: peft
license: llama3
base_model: meta-llama/Meta-Llama-3-8B-Instruct
tags:
- llama-factory
- prompt-tuning
- generated_from_trainer
model-index:
- name: train_copa_789_1757596139
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# train_copa_789_1757596139
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the copa dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0711
- Num Input Tokens Seen: 281984
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 789
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Input Tokens Seen |
|:-------------:|:-----:|:----:|:---------------:|:-----------------:|
| 0.2013 | 0.5 | 45 | 0.1095 | 14240 |
| 0.2033 | 1.0 | 90 | 0.0885 | 28192 |
| 0.0778 | 1.5 | 135 | 0.0860 | 42080 |
| 0.1119 | 2.0 | 180 | 0.0777 | 56192 |
| 0.0346 | 2.5 | 225 | 0.0823 | 70048 |
| 0.1199 | 3.0 | 270 | 0.0711 | 84192 |
| 0.0165 | 3.5 | 315 | 0.1047 | 98304 |
| 0.0248 | 4.0 | 360 | 0.1218 | 112544 |
| 0.003 | 4.5 | 405 | 0.1436 | 126784 |
| 0.0269 | 5.0 | 450 | 0.1350 | 140960 |
| 0.0008 | 5.5 | 495 | 0.1389 | 155200 |
| 0.027 | 6.0 | 540 | 0.1530 | 169216 |
| 0.0006 | 6.5 | 585 | 0.1628 | 183232 |
| 0.0002 | 7.0 | 630 | 0.1684 | 197248 |
| 0.0006 | 7.5 | 675 | 0.1641 | 211424 |
| 0.1687 | 8.0 | 720 | 0.1717 | 225440 |
| 0.0001 | 8.5 | 765 | 0.1706 | 239392 |
| 0.0014 | 9.0 | 810 | 0.1723 | 253632 |
| 0.0004 | 9.5 | 855 | 0.1679 | 267680 |
| 0.0003 | 10.0 | 900 | 0.1652 | 281984 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.8.0+cu128
- Datasets 3.6.0
- Tokenizers 0.21.1
|
CodeAtCMU/SmolLM2-1.7B-CorruptedComments_full_sft_code_data_120K_replace_comments_global
|
CodeAtCMU
| 2025-09-11T14:25:08Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-11T14:24:39Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
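No official snippet is provided; below is a minimal sketch, assuming the checkpoint loads as a standard causal LM with `transformers` (as the `llama`/`text-generation` tags suggest), with an illustrative prompt:
```python
# Minimal sketch: load the checkpoint as a causal LM (assumed from the llama /
# text-generation tags); the prompt and generation settings are illustrative only.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "CodeAtCMU/SmolLM2-1.7B-CorruptedComments_full_sft_code_data_120K_replace_comments_global"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("def quicksort(arr):", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```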
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
omerbkts/blockassist-bc-insectivorous_bold_lion_1757600601
|
omerbkts
| 2025-09-11T14:24:17Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"insectivorous bold lion",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T14:23:38Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- insectivorous bold lion
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|