| modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card |
|---|---|---|---|---|---|---|---|---|---|
donoway/ARC-Easy_Llama-3.2-1B-ro2gi4y6
|
donoway
| 2025-08-18T13:23:37Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"base_model:meta-llama/Llama-3.2-1B",
"base_model:finetune:meta-llama/Llama-3.2-1B",
"license:llama3.2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-18T13:01:26Z |
---
library_name: transformers
license: llama3.2
base_model: meta-llama/Llama-3.2-1B
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: ARC-Easy_Llama-3.2-1B-ro2gi4y6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ARC-Easy_Llama-3.2-1B-ro2gi4y6
This model is a fine-tuned version of [meta-llama/Llama-3.2-1B](https://huggingface.co/meta-llama/Llama-3.2-1B) on an unknown dataset.
It achieves the following results on the evaluation set (a short sketch after this list shows how these quantities relate):
- Loss: 1.6994
- Model Preparation Time: 0.0055
- Mdl: 1397.4674
- Accumulated Loss: 968.6506
- Correct Preds: 430.0
- Total Preds: 570.0
- Accuracy: 0.7544
- Correct Gen Preds: 430.0
- Gen Accuracy: 0.7544
- Correct Gen Preds 32: 118.0
- Correct Preds 32: 118.0
- Total Labels 32: 158.0
- Accuracy 32: 0.7468
- Gen Accuracy 32: 0.7468
- Correct Gen Preds 33: 116.0
- Correct Preds 33: 116.0
- Total Labels 33: 152.0
- Accuracy 33: 0.7632
- Gen Accuracy 33: 0.7632
- Correct Gen Preds 34: 113.0
- Correct Preds 34: 113.0
- Total Labels 34: 142.0
- Accuracy 34: 0.7958
- Gen Accuracy 34: 0.7958
- Correct Gen Preds 35: 83.0
- Correct Preds 35: 83.0
- Total Labels 35: 118.0
- Accuracy 35: 0.7034
- Gen Accuracy 35: 0.7034
- Correct Gen Preds 36: 0.0
- Correct Preds 36: 0.0
- Total Labels 36: 0.0
- Accuracy 36: 0.0
- Gen Accuracy 36: 0.0
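The less common quantities above are consistent with simple arithmetic on the reported loss and prediction counts. The check below is a hedged reading of the numbers, not the Trainer's own code:
```python
import math

# Hedged sanity check of the reported eval metrics (values copied from the list above).
loss = 1.6994            # mean eval loss, nats per example
total_preds = 570.0
correct_preds = 430.0

accumulated_loss = loss * total_preds    # ~968.7 nats, matches "Accumulated Loss"
mdl = accumulated_loss / math.log(2)     # ~1397.5 bits, matches "Mdl"
accuracy = correct_preds / total_preds   # ~0.7544, matches "Accuracy"
print(accumulated_loss, mdl, accuracy)
```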
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 112
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.001
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Model Preparation Time | Mdl | Accumulated Loss | Correct Preds | Total Preds | Accuracy | Correct Gen Preds | Gen Accuracy | Correct Gen Preds 32 | Correct Preds 32 | Total Labels 32 | Accuracy 32 | Gen Accuracy 32 | Correct Gen Preds 33 | Correct Preds 33 | Total Labels 33 | Accuracy 33 | Gen Accuracy 33 | Correct Gen Preds 34 | Correct Preds 34 | Total Labels 34 | Accuracy 34 | Gen Accuracy 34 | Correct Gen Preds 35 | Correct Preds 35 | Total Labels 35 | Accuracy 35 | Gen Accuracy 35 | Correct Gen Preds 36 | Correct Preds 36 | Total Labels 36 | Accuracy 36 | Gen Accuracy 36 |
|:-------------:|:-----:|:----:|:---------------:|:----------------------:|:---------:|:----------------:|:-------------:|:-----------:|:--------:|:-----------------:|:------------:|:--------------------:|:----------------:|:---------------:|:-----------:|:---------------:|:--------------------:|:----------------:|:---------------:|:-----------:|:---------------:|:--------------------:|:----------------:|:---------------:|:-----------:|:---------------:|:--------------------:|:----------------:|:---------------:|:-----------:|:---------------:|:--------------------:|:----------------:|:---------------:|:-----------:|:---------------:|
| No log | 0 | 0 | 1.5354 | 0.0055 | 1262.6022 | 875.1692 | 172.0 | 570.0 | 0.3018 | 170.0 | 0.2982 | 154.0 | 154.0 | 158.0 | 0.9747 | 0.9747 | 0.0 | 0.0 | 152.0 | 0.0 | 0.0 | 15.0 | 17.0 | 142.0 | 0.1197 | 0.1056 | 1.0 | 1.0 | 118.0 | 0.0085 | 0.0085 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.1499 | 1.0 | 30 | 0.9537 | 0.0055 | 784.2818 | 543.6227 | 379.0 | 570.0 | 0.6649 | 377.0 | 0.6614 | 127.0 | 128.0 | 158.0 | 0.8101 | 0.8038 | 85.0 | 86.0 | 152.0 | 0.5658 | 0.5592 | 96.0 | 96.0 | 142.0 | 0.6761 | 0.6761 | 69.0 | 69.0 | 118.0 | 0.5847 | 0.5847 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.3791 | 2.0 | 60 | 0.7650 | 0.0055 | 629.1242 | 436.0757 | 425.0 | 570.0 | 0.7456 | 424.0 | 0.7439 | 109.0 | 110.0 | 158.0 | 0.6962 | 0.6899 | 123.0 | 123.0 | 152.0 | 0.8092 | 0.8092 | 106.0 | 106.0 | 142.0 | 0.7465 | 0.7465 | 86.0 | 86.0 | 118.0 | 0.7288 | 0.7288 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.2137 | 3.0 | 90 | 0.9976 | 0.0055 | 820.3431 | 568.6185 | 414.0 | 570.0 | 0.7263 | 414.0 | 0.7263 | 98.0 | 98.0 | 158.0 | 0.6203 | 0.6203 | 119.0 | 119.0 | 152.0 | 0.7829 | 0.7829 | 115.0 | 115.0 | 142.0 | 0.8099 | 0.8099 | 82.0 | 82.0 | 118.0 | 0.6949 | 0.6949 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.153 | 4.0 | 120 | 1.5820 | 0.0055 | 1300.9342 | 901.7389 | 419.0 | 570.0 | 0.7351 | 416.0 | 0.7298 | 112.0 | 115.0 | 158.0 | 0.7278 | 0.7089 | 113.0 | 113.0 | 152.0 | 0.7434 | 0.7434 | 120.0 | 120.0 | 142.0 | 0.8451 | 0.8451 | 71.0 | 71.0 | 118.0 | 0.6017 | 0.6017 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0002 | 5.0 | 150 | 1.9407 | 0.0055 | 1595.9007 | 1106.1941 | 425.0 | 570.0 | 0.7456 | 423.0 | 0.7421 | 111.0 | 112.0 | 158.0 | 0.7089 | 0.7025 | 126.0 | 127.0 | 152.0 | 0.8355 | 0.8289 | 110.0 | 110.0 | 142.0 | 0.7746 | 0.7746 | 76.0 | 76.0 | 118.0 | 0.6441 | 0.6441 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0034 | 6.0 | 180 | 1.6994 | 0.0055 | 1397.4674 | 968.6506 | 430.0 | 570.0 | 0.7544 | 430.0 | 0.7544 | 118.0 | 118.0 | 158.0 | 0.7468 | 0.7468 | 116.0 | 116.0 | 152.0 | 0.7632 | 0.7632 | 113.0 | 113.0 | 142.0 | 0.7958 | 0.7958 | 83.0 | 83.0 | 118.0 | 0.7034 | 0.7034 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0002 | 7.0 | 210 | 2.0344 | 0.0055 | 1672.9333 | 1159.5890 | 430.0 | 570.0 | 0.7544 | 430.0 | 0.7544 | 117.0 | 117.0 | 158.0 | 0.7405 | 0.7405 | 111.0 | 111.0 | 152.0 | 0.7303 | 0.7303 | 118.0 | 118.0 | 142.0 | 0.8310 | 0.8310 | 84.0 | 84.0 | 118.0 | 0.7119 | 0.7119 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.2384 | 8.0 | 240 | 2.3318 | 0.0055 | 1917.5151 | 1329.1202 | 422.0 | 570.0 | 0.7404 | 421.0 | 0.7386 | 117.0 | 118.0 | 158.0 | 0.7468 | 0.7405 | 111.0 | 111.0 | 152.0 | 0.7303 | 0.7303 | 113.0 | 113.0 | 142.0 | 0.7958 | 0.7958 | 80.0 | 80.0 | 118.0 | 0.6780 | 0.6780 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0001 | 9.0 | 270 | 2.3574 | 0.0055 | 1938.6154 | 1343.7458 | 426.0 | 570.0 | 0.7474 | 426.0 | 0.7474 | 112.0 | 112.0 | 158.0 | 0.7089 | 0.7089 | 114.0 | 114.0 | 152.0 | 0.75 | 0.75 | 115.0 | 115.0 | 142.0 | 0.8099 | 0.8099 | 85.0 | 85.0 | 118.0 | 0.7203 | 0.7203 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0039 | 10.0 | 300 | 2.6388 | 0.0055 | 2169.9437 | 1504.0904 | 422.0 | 570.0 | 0.7404 | 421.0 | 0.7386 | 109.0 | 110.0 | 158.0 | 0.6962 | 0.6899 | 115.0 | 115.0 | 152.0 | 0.7566 | 0.7566 | 111.0 | 111.0 | 142.0 | 0.7817 | 0.7817 | 86.0 | 86.0 | 118.0 | 0.7288 | 0.7288 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 11.0 | 330 | 2.5992 | 0.0055 | 2137.4472 | 1481.5655 | 421.0 | 570.0 | 0.7386 | 420.0 | 0.7368 | 110.0 | 111.0 | 158.0 | 0.7025 | 0.6962 | 115.0 | 115.0 | 152.0 | 0.7566 | 0.7566 | 115.0 | 115.0 | 142.0 | 0.8099 | 0.8099 | 80.0 | 80.0 | 118.0 | 0.6780 | 0.6780 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 12.0 | 360 | 2.5923 | 0.0055 | 2131.7646 | 1477.6266 | 422.0 | 570.0 | 0.7404 | 421.0 | 0.7386 | 108.0 | 109.0 | 158.0 | 0.6899 | 0.6835 | 113.0 | 113.0 | 152.0 | 0.7434 | 0.7434 | 114.0 | 114.0 | 142.0 | 0.8028 | 0.8028 | 86.0 | 86.0 | 118.0 | 0.7288 | 0.7288 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 13.0 | 390 | 2.6003 | 0.0055 | 2138.2906 | 1482.1501 | 423.0 | 570.0 | 0.7421 | 422.0 | 0.7404 | 113.0 | 114.0 | 158.0 | 0.7215 | 0.7152 | 111.0 | 111.0 | 152.0 | 0.7303 | 0.7303 | 114.0 | 114.0 | 142.0 | 0.8028 | 0.8028 | 84.0 | 84.0 | 118.0 | 0.7119 | 0.7119 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 14.0 | 420 | 2.6367 | 0.0055 | 2168.2271 | 1502.9005 | 423.0 | 570.0 | 0.7421 | 422.0 | 0.7404 | 115.0 | 116.0 | 158.0 | 0.7342 | 0.7278 | 111.0 | 111.0 | 152.0 | 0.7303 | 0.7303 | 113.0 | 113.0 | 142.0 | 0.7958 | 0.7958 | 83.0 | 83.0 | 118.0 | 0.7034 | 0.7034 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 15.0 | 450 | 2.6527 | 0.0055 | 2181.4382 | 1512.0577 | 424.0 | 570.0 | 0.7439 | 423.0 | 0.7421 | 113.0 | 114.0 | 158.0 | 0.7215 | 0.7152 | 112.0 | 112.0 | 152.0 | 0.7368 | 0.7368 | 114.0 | 114.0 | 142.0 | 0.8028 | 0.8028 | 84.0 | 84.0 | 118.0 | 0.7119 | 0.7119 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 16.0 | 480 | 2.6577 | 0.0055 | 2185.4872 | 1514.8643 | 423.0 | 570.0 | 0.7421 | 422.0 | 0.7404 | 113.0 | 114.0 | 158.0 | 0.7215 | 0.7152 | 111.0 | 111.0 | 152.0 | 0.7303 | 0.7303 | 114.0 | 114.0 | 142.0 | 0.8028 | 0.8028 | 84.0 | 84.0 | 118.0 | 0.7119 | 0.7119 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 17.0 | 510 | 2.6565 | 0.0055 | 2184.5381 | 1514.2064 | 423.0 | 570.0 | 0.7421 | 422.0 | 0.7404 | 114.0 | 115.0 | 158.0 | 0.7278 | 0.7215 | 111.0 | 111.0 | 152.0 | 0.7303 | 0.7303 | 113.0 | 113.0 | 142.0 | 0.7958 | 0.7958 | 84.0 | 84.0 | 118.0 | 0.7119 | 0.7119 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
mradermacher/InnoSpark-HPC-RM-32B-GGUF
|
mradermacher
| 2025-08-18T13:21:17Z | 171 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:sii-research/InnoSpark-HPC-RM-32B",
"base_model:quantized:sii-research/InnoSpark-HPC-RM-32B",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-07-26T15:13:23Z |
---
base_model: sii-research/InnoSpark-HPC-RM-32B
language:
- en
library_name: transformers
license: mit
mradermacher:
readme_rev: 1
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/sii-research/InnoSpark-HPC-RM-32B
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#InnoSpark-HPC-RM-32B-GGUF).***
weighted/imatrix quants are available at https://huggingface.co/mradermacher/InnoSpark-HPC-RM-32B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/InnoSpark-HPC-RM-32B-GGUF/resolve/main/InnoSpark-HPC-RM-32B.Q2_K.gguf) | Q2_K | 12.4 | |
| [GGUF](https://huggingface.co/mradermacher/InnoSpark-HPC-RM-32B-GGUF/resolve/main/InnoSpark-HPC-RM-32B.Q3_K_S.gguf) | Q3_K_S | 14.5 | |
| [GGUF](https://huggingface.co/mradermacher/InnoSpark-HPC-RM-32B-GGUF/resolve/main/InnoSpark-HPC-RM-32B.Q3_K_M.gguf) | Q3_K_M | 16.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/InnoSpark-HPC-RM-32B-GGUF/resolve/main/InnoSpark-HPC-RM-32B.Q3_K_L.gguf) | Q3_K_L | 17.3 | |
| [GGUF](https://huggingface.co/mradermacher/InnoSpark-HPC-RM-32B-GGUF/resolve/main/InnoSpark-HPC-RM-32B.IQ4_XS.gguf) | IQ4_XS | 18.0 | |
| [GGUF](https://huggingface.co/mradermacher/InnoSpark-HPC-RM-32B-GGUF/resolve/main/InnoSpark-HPC-RM-32B.Q4_K_S.gguf) | Q4_K_S | 18.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/InnoSpark-HPC-RM-32B-GGUF/resolve/main/InnoSpark-HPC-RM-32B.Q4_K_M.gguf) | Q4_K_M | 20.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/InnoSpark-HPC-RM-32B-GGUF/resolve/main/InnoSpark-HPC-RM-32B.Q5_K_S.gguf) | Q5_K_S | 22.7 | |
| [GGUF](https://huggingface.co/mradermacher/InnoSpark-HPC-RM-32B-GGUF/resolve/main/InnoSpark-HPC-RM-32B.Q5_K_M.gguf) | Q5_K_M | 23.4 | |
| [GGUF](https://huggingface.co/mradermacher/InnoSpark-HPC-RM-32B-GGUF/resolve/main/InnoSpark-HPC-RM-32B.Q6_K.gguf) | Q6_K | 27.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/InnoSpark-HPC-RM-32B-GGUF/resolve/main/InnoSpark-HPC-RM-32B.Q8_0.gguf) | Q8_0 | 34.9 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Sayemahsjn/blockassist-bc-playful_feline_octopus_1755522104
|
Sayemahsjn
| 2025-08-18T13:20:58Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"playful feline octopus",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-18T13:20:54Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- playful feline octopus
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
quantumxnode/blockassist-bc-dormant_peckish_seahorse_1755521675
|
quantumxnode
| 2025-08-18T13:20:02Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"dormant peckish seahorse",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-18T13:19:59Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- dormant peckish seahorse
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
halley-ai/gpt-oss-20b-MLX-6bit-gs32
|
halley-ai
| 2025-08-18T13:19:24Z | 0 | 1 |
mlx
|
[
"mlx",
"safetensors",
"gpt_oss",
"apple-silicon",
"metal",
"arm64",
"6-bit",
"group-size-32",
"moe",
"mpx4",
"openai",
"halley-ai",
"text-generation",
"conversational",
"en",
"ro",
"base_model:openai/gpt-oss-20b",
"base_model:quantized:openai/gpt-oss-20b",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2025-08-16T20:14:21Z |
---
library_name: mlx
pipeline_tag: text-generation
inference: false # MLX is macOS-only; HF Inference API won't run it
license: apache-2.0
base_model: openai/gpt-oss-20b
base_model_relation: quantized
language:
- en
- ro
tags:
- apple-silicon
- metal
- arm64
- 6-bit
- group-size-32
- moe
- mpx4
- openai
- halley-ai
---
# gpt-oss-20b — MLX 6-bit (group size 32)
**Summary.** This is a 6-bit (**Q6**) **MLX** quantization of **gpt-oss-20B** (sparse Mixture-of-Experts, MPx4). Group size is **32**.
Built for **Apple Silicon** with Metal acceleration.
- **Base model:** `openai/gpt-oss-20b` (Apache-2.0)
- **Quantization:** MLX Q6, `q_group_size=32` (some tensors remain FP16 for stability)
- **Files:** MLX weight shards + `config.json`; tokenizer files included for drop-in use
- **Footprint:** ~**18.38 GB** on disk
- **Intended use:** local inference / research on M-series Macs
- **Not intended for:** safety-critical decisions; outputs may be inaccurate or biased
## Requirements
**Runs on:** Apple Silicon (M1 or newer) with **macOS ≥ 13.5** via **MLX (Metal)**.
**Not supported:** Intel macOS / Linux / Windows (use a GGUF build + llama.cpp instead).
**RAM guidance:** 32 GB minimum for Q6 (gs=32). 24 GB MacBook Pro **won’t run it**. Extra RAM improves headroom.
## How to use (MLX)
```bash
pip install mlx-lm transformers
```
```python
# Python API (uses tokenizer bundled with this repo)
from mlx_lm import load, generate
model, tokenizer = load("halley-ai/gpt-oss-20b-MLX-6bit-gs32")
print(generate(
model, tokenizer,
prompt="Explain the Chudnovsky algorithm to compute π.",
max_tokens=256, max_kv_size=512
))
```
## Performance (Apple Silicon, real-world)
LM Studio / CLI (MLX, Q6 gs=32): ~49–55 tok/s, TTFB ~0.35–0.45 s (≈2k-token responses)
– measured on M1 Max 32 GB (short fixed-length runs show lower t/s due to startup overhead).
Throughput varies with Mac model, context, and sampler settings.
## Evaluation
Perplexity (PPL) streaming evaluation on WikiText-2; window=stride=4096, ~100k tokens, EOS inserted between docs.
<table>
<thead>
<tr><th>Variant</th><th>PPL (ctx=4096)</th></tr>
</thead>
<tbody>
<tr><td>MLX 8-bit (reference)</td><td>10.75</td></tr>
<tr><td><strong>MLX 6-bit (gs=32)</strong></td><td><strong>10.46 (−2.7% vs 8-bit/gs64)</strong></td></tr>
<tr><td>MLX 5-bit (gs=32)</td><td>11.11 (+3.3% vs 8-bit/gs64, +6.2% vs 6-bit/gs32)</td></tr>
<tr><td>MLX 4-bit (gs=32)</td><td>13.70 (+27.4% vs 8-bit/gs64, +31.0% vs 6-bit/gs32)</td></tr>
</tbody>
</table>
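For context, the sketch below shows a generic windowed (window = stride) perplexity evaluation of the kind described above; it is illustrative only and not the exact harness used to produce these numbers.
```python
import math

def streaming_ppl(nll_fn, token_ids, window=4096):
    """Perplexity over non-overlapping windows (stride == window).

    nll_fn(chunk) is assumed to return the summed negative log-likelihood
    (in nats) of a chunk of token ids under the model being evaluated.
    """
    total_nll, total_tokens = 0.0, 0
    for start in range(0, len(token_ids), window):
        chunk = token_ids[start:start + window]
        if len(chunk) < 2:
            continue
        total_nll += nll_fn(chunk)
        total_tokens += len(chunk) - 1  # the first token in a window has no target
    return math.exp(total_nll / total_tokens)
```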
**Interpretation**
- MLX 6-bit/gs32: Best of the group; edges out 8-bit/gs64 slightly at a smaller
footprint.
- MLX 5-bit/gs32: Small, consistent drop vs 6-bit/gs32 and 8-bit/gs64 (~3–6% PPL); strong “fits-16GB” option when GPU buffer limits matter.
- MLX 8-bit/gs64: Solid reference; near‑FP16 quality at a larger footprint.
- MLX 4-bit/gs32: Trades accuracy for footprint; use when RAM is constrained or throughput is the priority.
## Conversion details (provenance)
```bash
python -m mlx_lm convert \
--hf-path openai/gpt-oss-20b \
--mlx-path gpt-oss-20b-mlx-q6-gs32 \
--q-bits 6 --q-group-size 32 -q
```
- Some non-expert tensors (embeddings, norms, router) remain FP16.
## Sibling & reference models
- halley-ai/gpt-oss-20b-MLX-5bit-gs32
- halley-ai/gpt-oss-20b-MLX-4bit-gs32
- Reference (8-bit, upstream): lmstudio-community/gpt-oss-20b-MLX-8bit
## Limitations & biases
Outputs may be factually wrong or unsafe. Don’t use for medical, legal, or financial decisions without human review.
MoE models can be sensitive to prompt wording; prefer explicit instructions and structure.
## License & credits
- License: Apache-2.0 (inherits from base model)
- Base model: OpenAI gpt-oss-20B
- Quantization: Halley AI Lab (MLX Q6, gs=32)
- Please cite both the base model and this repository when you use the weights.
|
AiArtLab/sdxl_vae
|
AiArtLab
| 2025-08-18T13:16:44Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"en",
"base_model:madebyollin/sdxl-vae-fp16-fix",
"base_model:finetune:madebyollin/sdxl-vae-fp16-fix",
"license:apache-2.0",
"region:us"
] | null | 2025-08-18T11:35:12Z |
---
license: apache-2.0
language:
- en
base_model:
- madebyollin/sdxl-vae-fp16-fix
- stabilityai/sdxl-vae
library_name: diffusers
---
# SDXL-VAE finetuned
| Model | MSE | PSNR | LPIPS |
|----------------------------|-------------|-----------|------------|
| madebyollin/sdxl-vae-fp16-fix | 3.680e-03 | 25.2100 | 0.1314 |
| KBlueLeaf/EQ-SDXL-VAE | 3.530e-03 | 25.2827 | 0.1298 |
| **AiArtLab/sdxl_vae** | <span style="color:red">**3.321e-03**</span> | <span style="color:red">**25.6389**</span> | <span style="color:red">**0.1251**</span> |
### Train status, in progress:

## VAE Training Process
- Encoder: Frozen (to avoid retraining SDXL for the new VAE).
- Dataset: 100,000 PNG images
- Training Time: 4 days
- Hardware: Single RTX 4090
- Resolution: 512px
- Precision: FP32
- Effective Batch Size: 16 (batch size 2 + gradient accumulation 8)
- Optimizer: AdamW (8-bit)
## Implementation
- Base Code: Used a simple diffusion model training script.
- Training Target: Only the decoder, focusing on image reconstruction.
## Loss Functions
- Initially used LPIPS and MSE.
- Noticed FID score improving, but images becoming blurry (FID overfits to blurry images—improving FID is not always good).
- Switched to MAE.
- Balanced LPIPS and MAE at a 90/10 ratio (a minimal sketch of this combination follows this list).
- Used the median perceptual_loss_weight for better balance.
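A minimal sketch of the 90/10 LPIPS + MAE objective described above, assuming the `lpips` package and reconstructions/targets scaled to [-1, 1]; the exact weighting logic used in training is not published here, so treat this as illustrative.
```python
import torch.nn.functional as F
import lpips  # pip install lpips

perceptual = lpips.LPIPS(net="vgg").eval()  # frozen perceptual metric

def decoder_loss(recon, target, lpips_weight=0.9, mae_weight=0.1):
    # recon/target: image batches scaled to [-1, 1]
    p = perceptual(recon, target).mean()  # LPIPS term
    mae = F.l1_loss(recon, target)        # MAE term (replaced MSE per the notes above)
    return lpips_weight * p + mae_weight * mae
```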
## Compare
https://imgsli.com/NDA3Njgw/2/3
## Donations
Please contact us if you can provide GPUs or funding for training.
DOGE: DEw2DR8C7BnF8GgcrfTzUjSnGkuMeJhg83
BTC: 3JHv9Hb8kEW8zMAccdgCdZGfrHeMhH1rpN
## Contacts
[recoilme](https://t.me/recoilme)
|
MattBou00/smolLM-360m-detox_try_2
|
MattBou00
| 2025-08-18T13:10:20Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"ppo",
"reinforcement-learning",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
reinforcement-learning
| 2025-08-18T07:37:48Z |
---
license: apache-2.0
library_name: transformers
tags:
- trl
- ppo
- transformers
- reinforcement-learning
---
# TRL Model
This is a [TRL language model](https://github.com/huggingface/trl) that has been fine-tuned with reinforcement learning to
guide the model outputs according to a value, function, or human feedback. The model can be used for text generation.
## Usage
To use this model for inference, first install the TRL library:
```bash
python -m pip install trl
```
You can then generate text as follows:
```python
from transformers import pipeline
generator = pipeline("text-generation", model="MattBou00//content/IRL-Bayesian/IRL-Bayesian/IRL-Bayesian/outputs/2025-08-18_12-40-31/checkpoints/checkpoint-epoch-20")
outputs = generator("Hello, my llama is cute")
```
If you want to use the model for training or to obtain the outputs from the value head, load the model as follows:
```python
from transformers import AutoTokenizer
from trl import AutoModelForCausalLMWithValueHead
tokenizer = AutoTokenizer.from_pretrained("MattBou00//content/IRL-Bayesian/IRL-Bayesian/IRL-Bayesian/outputs/2025-08-18_12-40-31/checkpoints/checkpoint-epoch-20")
model = AutoModelForCausalLMWithValueHead.from_pretrained("MattBou00//content/IRL-Bayesian/IRL-Bayesian/IRL-Bayesian/outputs/2025-08-18_12-40-31/checkpoints/checkpoint-epoch-20")
inputs = tokenizer("Hello, my llama is cute", return_tensors="pt")
outputs = model(**inputs, labels=inputs["input_ids"])
```
|
mradermacher/Kimi-Dev-72B-abliterated-GGUF
|
mradermacher
| 2025-08-18T13:07:21Z | 125 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:nicoboss/Kimi-Dev-72B-abliterated",
"base_model:quantized:nicoboss/Kimi-Dev-72B-abliterated",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-05T07:24:14Z |
---
base_model: nicoboss/Kimi-Dev-72B-abliterated
language:
- en
library_name: transformers
mradermacher:
readme_rev: 1
no_imatrix: 'q4_K .. ggml_validate_row_data: found nan value at block 32'
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/nicoboss/Kimi-Dev-72B-abliterated
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Kimi-Dev-72B-abliterated-GGUF).***
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
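The split quants listed under Provided Quants below (`.part1of2` / `.part2of2` files) are typically plain byte splits that can be joined back into a single `.gguf` by concatenation; a minimal sketch, with filenames taken from the table (see the linked README if unsure):
```python
import shutil

# Rebuild one GGUF file from its two parts by byte-for-byte concatenation.
parts = [
    "Kimi-Dev-72B-abliterated.Q5_K_S.gguf.part1of2",
    "Kimi-Dev-72B-abliterated.Q5_K_S.gguf.part2of2",
]
with open("Kimi-Dev-72B-abliterated.Q5_K_S.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as src:
            shutil.copyfileobj(src, out)
```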
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Kimi-Dev-72B-abliterated-GGUF/resolve/main/Kimi-Dev-72B-abliterated.Q2_K.gguf) | Q2_K | 29.9 | |
| [GGUF](https://huggingface.co/mradermacher/Kimi-Dev-72B-abliterated-GGUF/resolve/main/Kimi-Dev-72B-abliterated.Q3_K_S.gguf) | Q3_K_S | 34.6 | |
| [GGUF](https://huggingface.co/mradermacher/Kimi-Dev-72B-abliterated-GGUF/resolve/main/Kimi-Dev-72B-abliterated.Q3_K_M.gguf) | Q3_K_M | 37.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Kimi-Dev-72B-abliterated-GGUF/resolve/main/Kimi-Dev-72B-abliterated.Q3_K_L.gguf) | Q3_K_L | 39.6 | |
| [GGUF](https://huggingface.co/mradermacher/Kimi-Dev-72B-abliterated-GGUF/resolve/main/Kimi-Dev-72B-abliterated.IQ4_XS.gguf) | IQ4_XS | 40.3 | |
| [GGUF](https://huggingface.co/mradermacher/Kimi-Dev-72B-abliterated-GGUF/resolve/main/Kimi-Dev-72B-abliterated.Q4_K_S.gguf) | Q4_K_S | 44.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Kimi-Dev-72B-abliterated-GGUF/resolve/main/Kimi-Dev-72B-abliterated.Q4_K_M.gguf) | Q4_K_M | 47.5 | fast, recommended |
| [PART 1](https://huggingface.co/mradermacher/Kimi-Dev-72B-abliterated-GGUF/resolve/main/Kimi-Dev-72B-abliterated.Q5_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Kimi-Dev-72B-abliterated-GGUF/resolve/main/Kimi-Dev-72B-abliterated.Q5_K_S.gguf.part2of2) | Q5_K_S | 51.5 | |
| [PART 1](https://huggingface.co/mradermacher/Kimi-Dev-72B-abliterated-GGUF/resolve/main/Kimi-Dev-72B-abliterated.Q5_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Kimi-Dev-72B-abliterated-GGUF/resolve/main/Kimi-Dev-72B-abliterated.Q5_K_M.gguf.part2of2) | Q5_K_M | 54.5 | |
| [PART 1](https://huggingface.co/mradermacher/Kimi-Dev-72B-abliterated-GGUF/resolve/main/Kimi-Dev-72B-abliterated.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Kimi-Dev-72B-abliterated-GGUF/resolve/main/Kimi-Dev-72B-abliterated.Q6_K.gguf.part2of2) | Q6_K | 64.4 | very good quality |
| [PART 1](https://huggingface.co/mradermacher/Kimi-Dev-72B-abliterated-GGUF/resolve/main/Kimi-Dev-72B-abliterated.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Kimi-Dev-72B-abliterated-GGUF/resolve/main/Kimi-Dev-72B-abliterated.Q8_0.gguf.part2of2) | Q8_0 | 77.4 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
katanyasekolah/blockassist-bc-silky_sprightly_cassowary_1755520683
|
katanyasekolah
| 2025-08-18T13:07:00Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"silky sprightly cassowary",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-18T13:06:57Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- silky sprightly cassowary
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
aragoto/gemma-jaen-test
|
aragoto
| 2025-08-18T13:05:28Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"base_model:adapter:google/gemma-2b",
"lora",
"transformers",
"text-generation",
"arxiv:1910.09700",
"base_model:google/gemma-2b",
"region:us"
] |
text-generation
| 2025-08-18T13:05:23Z |
---
base_model: google/gemma-2b
library_name: peft
pipeline_tag: text-generation
tags:
- base_model:adapter:google/gemma-2b
- lora
- transformers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.17.0
|
thanobidex/blockassist-bc-colorful_shiny_hare_1755520677
|
thanobidex
| 2025-08-18T13:05:27Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"colorful shiny hare",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-18T13:05:23Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- colorful shiny hare
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
AdamDE/tinyllama-custom-youtube-replies
|
AdamDE
| 2025-08-18T13:00:12Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"lora",
"adapters",
"tinyllama",
"youtube",
"conversational",
"text-generation",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"base_model:adapter:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2025-08-18T11:52:27Z |
---
library_name: peft
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
pipeline_tag: text-generation
tags:
- lora
- adapters
- tinyllama
- youtube
- conversational
- text-generation
license: apache-2.0
---
# TinyLlama YouTube Replies (LoRA)
This model is a **LoRA fine-tuned** version of [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0), designed to generate **concise, friendly, and domain-specific replies** to YouTube comments on AI/ML-related content. Using Low-Rank Adaptation (LoRA), this project demonstrates the ability to fine-tune a lightweight language model for conversational tasks. While the model may occasionally produce out-of-context replies and could benefit from further optimization, it effectively showcases a functional fine-tuning pipeline.
## Model Details
- **Base Model**: [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0)
- **Fine-Tuning Method**: LoRA (Low-Rank Adaptation)
- **Task**: Generating short, engaging replies to AI/ML YouTube comments
- **Language**: English
- **License**: Apache 2.0
## Intended Use
This model is intended for:
- Generating polite and engaging replies to AI/ML-related YouTube comments.
- Demonstrating a fine-tuning project using LoRA for lightweight adaptation.
- Research or educational purposes in conversational AI.
**Not Intended For**:
- Production environments without further optimization.
- Non-English text generation.
- Applications requiring high contextual accuracy without human review.
## Usage
To use this model, you need the `transformers` and `peft` libraries. Below is an example of how to load and generate replies:
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel
# Load the base model, tokenizer, and LoRA adapters
base_id = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"
adapter_id = "AdamDE/tinyllama-custom-youtube-replies"
tokenizer = AutoTokenizer.from_pretrained(adapter_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16, device_map="auto")
model = PeftModel.from_pretrained(base_model, adapter_id)
# Prepare input
messages = [
{"role": "system", "content": "You are an AI/ML tutorial creator replying to YouTube comments. "
"Provide concise, friendly, and domain-specific help, encourage engagement, "
"and keep a positive tone with occasional emojis when appropriate."},
{"role": "user", "content": "Your enthusiasm is contagious!"}
]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
# Generate reply
with torch.no_grad():
out = model.generate(inputs, max_new_tokens=128, temperature=0.7, top_p=0.9, pad_token_id=tokenizer.eos_token_id)
reply = tokenizer.decode(out[0], skip_special_tokens=True)
print(reply)
# Example output: "Haha, thanks! 😂 What's your favorite part?"
```
### Requirements
```bash
pip install transformers peft torch
```
### Notes
- Use a clear, comment-like prompt for best results.
- Adjust `max_new_tokens`, `temperature`, and `top_p` to control reply length and creativity.
- The model may occasionally generate out-of-context replies, indicating room for further optimization.
## Training Details
- **Dataset**: Custom JSON dataset of AI/ML YouTube comments and replies, split into train, validation, and test sets.
- **Training Procedure**: LoRA fine-tuning with 4-bit quantization (NF4) and mixed precision (bf16/fp16); a configuration sketch follows this list.
- **Hyperparameters**:
- LoRA Rank (r): 16
- LoRA Alpha: 32
- LoRA Dropout: 0.05
- Epochs: 5
- Learning Rate: 2e-4
- Optimizer: Paged AdamW 8-bit
- Scheduler: Cosine with 10% warmup
- **Evaluation Metrics**:
- BLEU and ROUGE scores computed on the test set (see training script for details).
- **Training Features**:
- Gradient checkpointing for memory efficiency.
- Early stopping with patience of 2 epochs based on validation loss.
- Custom data collator for padding and label masking.
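A configuration sketch matching the hyperparameters listed above (LoRA r=16, alpha=32, dropout 0.05; NF4 4-bit loading; paged AdamW 8-bit; cosine schedule with 10% warmup). Target modules, the output path, and the surrounding trainer wiring are assumptions, not taken from this card.
```python
import torch
from transformers import BitsAndBytesConfig, TrainingArguments
from peft import LoraConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",              # NF4 quantization
    bnb_4bit_compute_dtype=torch.bfloat16,  # bf16/fp16 mixed-precision compute
)

lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

training_args = TrainingArguments(
    output_dir="tinyllama-youtube-replies",  # assumed path
    num_train_epochs=5,
    learning_rate=2e-4,
    optim="paged_adamw_8bit",
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    gradient_checkpointing=True,
)
```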
## Performance
The model achieves reasonable performance for a fine-tuning project, with BLEU and ROUGE scores indicating decent reply quality. However, occasional out-of-context responses suggest potential improvements in dataset quality or hyperparameter tuning.
## Limitations
- May generate out-of-context or generic replies, requiring human review.
- Optimized for AI/ML YouTube comments; performance may vary for other domains.
- Limited to English-language inputs and outputs.
## Ethical Considerations
- Generated replies should be reviewed to ensure they are appropriate and constructive.
- Use responsibly to foster positive community interactions.
|
ziadtarek12/my_awesome_opus_books_model
|
ziadtarek12
| 2025-08-18T12:55:37Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | null | 2025-08-17T17:15:55Z |
---
library_name: transformers
license: apache-2.0
base_model: google-t5/t5-small
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: my_awesome_opus_books_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_opus_books_model
This model is a fine-tuned version of [google-t5/t5-small](https://huggingface.co/google-t5/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6098
- Bleu: 6.2199
- Gen Len: 18.3624
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:-------:|
| 1.8518 | 1.0 | 6355 | 1.6338 | 6.0374 | 18.3691 |
| 1.818 | 2.0 | 12710 | 1.6098 | 6.2199 | 18.3624 |
### Framework versions
- Transformers 4.55.1
- Pytorch 2.6.0+cu124
- Datasets 4.0.0
- Tokenizers 0.21.4
|
partzel/PolicyGradient-Pixelcopter-PLE-v0-50000
|
partzel
| 2025-08-18T12:53:16Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-08-18T12:53:09Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: PolicyGradient-Pixelcopter-PLE-v0-50000
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 8.20 +/- 7.78
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
quantumxnode/blockassist-bc-dormant_peckish_seahorse_1755519876
|
quantumxnode
| 2025-08-18T12:51:09Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"dormant peckish seahorse",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-18T12:51:06Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- dormant peckish seahorse
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
constehub/qwen3-14B-rerank-evaluation
|
constehub
| 2025-08-18T12:40:56Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"qwen3",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-18T12:40:09Z |
---
base_model: unsloth/qwen3-14b-base-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** constehub
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen3-14b-base-bnb-4bit
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
donoway/ARC-Easy_Llama-3.2-1B-5p7mxi8l
|
donoway
| 2025-08-18T12:40:31Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"base_model:meta-llama/Llama-3.2-1B",
"base_model:finetune:meta-llama/Llama-3.2-1B",
"license:llama3.2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-18T12:22:56Z |
---
library_name: transformers
license: llama3.2
base_model: meta-llama/Llama-3.2-1B
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: ARC-Easy_Llama-3.2-1B-5p7mxi8l
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ARC-Easy_Llama-3.2-1B-5p7mxi8l
This model is a fine-tuned version of [meta-llama/Llama-3.2-1B](https://huggingface.co/meta-llama/Llama-3.2-1B) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7052
- Model Preparation Time: 0.0056
- Mdl: 579.8957
- Accumulated Loss: 401.9531
- Correct Preds: 437.0
- Total Preds: 570.0
- Accuracy: 0.7667
- Correct Gen Preds: 436.0
- Gen Accuracy: 0.7649
- Correct Gen Preds 32: 129.0
- Correct Preds 32: 130.0
- Total Labels 32: 158.0
- Accuracy 32: 0.8228
- Gen Accuracy 32: 0.8165
- Correct Gen Preds 33: 116.0
- Correct Preds 33: 116.0
- Total Labels 33: 152.0
- Accuracy 33: 0.7632
- Gen Accuracy 33: 0.7632
- Correct Gen Preds 34: 108.0
- Correct Preds 34: 108.0
- Total Labels 34: 142.0
- Accuracy 34: 0.7606
- Gen Accuracy 34: 0.7606
- Correct Gen Preds 35: 83.0
- Correct Preds 35: 83.0
- Total Labels 35: 118.0
- Accuracy 35: 0.7034
- Gen Accuracy 35: 0.7034
- Correct Gen Preds 36: 0.0
- Correct Preds 36: 0.0
- Total Labels 36: 0.0
- Accuracy 36: 0.0
- Gen Accuracy 36: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 112
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.001
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Model Preparation Time | Mdl | Accumulated Loss | Correct Preds | Total Preds | Accuracy | Correct Gen Preds | Gen Accuracy | Correct Gen Preds 32 | Correct Preds 32 | Total Labels 32 | Accuracy 32 | Gen Accuracy 32 | Correct Gen Preds 33 | Correct Preds 33 | Total Labels 33 | Accuracy 33 | Gen Accuracy 33 | Correct Gen Preds 34 | Correct Preds 34 | Total Labels 34 | Accuracy 34 | Gen Accuracy 34 | Correct Gen Preds 35 | Correct Preds 35 | Total Labels 35 | Accuracy 35 | Gen Accuracy 35 | Correct Gen Preds 36 | Correct Preds 36 | Total Labels 36 | Accuracy 36 | Gen Accuracy 36 |
|:-------------:|:-----:|:----:|:---------------:|:----------------------:|:---------:|:----------------:|:-------------:|:-----------:|:--------:|:-----------------:|:------------:|:--------------------:|:----------------:|:---------------:|:-----------:|:---------------:|:--------------------:|:----------------:|:---------------:|:-----------:|:---------------:|:--------------------:|:----------------:|:---------------:|:-----------:|:---------------:|:--------------------:|:----------------:|:---------------:|:-----------:|:---------------:|:--------------------:|:----------------:|:---------------:|:-----------:|:---------------:|
| No log | 0 | 0 | 1.5354 | 0.0056 | 1262.6022 | 875.1692 | 172.0 | 570.0 | 0.3018 | 170.0 | 0.2982 | 154.0 | 154.0 | 158.0 | 0.9747 | 0.9747 | 0.0 | 0.0 | 152.0 | 0.0 | 0.0 | 15.0 | 17.0 | 142.0 | 0.1197 | 0.1056 | 1.0 | 1.0 | 118.0 | 0.0085 | 0.0085 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.8152 | 1.0 | 26 | 0.7928 | 0.0056 | 651.9305 | 451.8838 | 414.0 | 570.0 | 0.7263 | 414.0 | 0.7263 | 128.0 | 128.0 | 158.0 | 0.8101 | 0.8101 | 108.0 | 108.0 | 152.0 | 0.7105 | 0.7105 | 103.0 | 103.0 | 142.0 | 0.7254 | 0.7254 | 75.0 | 75.0 | 118.0 | 0.6356 | 0.6356 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.3843 | 2.0 | 52 | 0.7052 | 0.0056 | 579.8957 | 401.9531 | 437.0 | 570.0 | 0.7667 | 436.0 | 0.7649 | 129.0 | 130.0 | 158.0 | 0.8228 | 0.8165 | 116.0 | 116.0 | 152.0 | 0.7632 | 0.7632 | 108.0 | 108.0 | 142.0 | 0.7606 | 0.7606 | 83.0 | 83.0 | 118.0 | 0.7034 | 0.7034 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.2692 | 3.0 | 78 | 0.8492 | 0.0056 | 698.3545 | 484.0624 | 432.0 | 570.0 | 0.7579 | 432.0 | 0.7579 | 114.0 | 114.0 | 158.0 | 0.7215 | 0.7215 | 123.0 | 123.0 | 152.0 | 0.8092 | 0.8092 | 114.0 | 114.0 | 142.0 | 0.8028 | 0.8028 | 81.0 | 81.0 | 118.0 | 0.6864 | 0.6864 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0474 | 4.0 | 104 | 1.3013 | 0.0056 | 1070.0786 | 741.7219 | 405.0 | 570.0 | 0.7105 | 64.0 | 0.1123 | 2.0 | 98.0 | 158.0 | 0.6203 | 0.0127 | 25.0 | 117.0 | 152.0 | 0.7697 | 0.1645 | 25.0 | 120.0 | 142.0 | 0.8451 | 0.1761 | 12.0 | 70.0 | 118.0 | 0.5932 | 0.1017 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.063 | 5.0 | 130 | 1.8921 | 0.0056 | 1555.9118 | 1078.4759 | 435.0 | 570.0 | 0.7632 | 424.0 | 0.7439 | 109.0 | 120.0 | 158.0 | 0.7595 | 0.6899 | 118.0 | 118.0 | 152.0 | 0.7763 | 0.7763 | 115.0 | 115.0 | 142.0 | 0.8099 | 0.8099 | 82.0 | 82.0 | 118.0 | 0.6949 | 0.6949 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0876 | 6.0 | 156 | 1.4352 | 0.0056 | 1180.2063 | 818.0567 | 421.0 | 570.0 | 0.7386 | 404.0 | 0.7088 | 84.0 | 101.0 | 158.0 | 0.6392 | 0.5316 | 122.0 | 122.0 | 152.0 | 0.8026 | 0.8026 | 118.0 | 118.0 | 142.0 | 0.8310 | 0.8310 | 80.0 | 80.0 | 118.0 | 0.6780 | 0.6780 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.2587 | 7.0 | 182 | 2.4597 | 0.0056 | 2022.7388 | 1402.0557 | 436.0 | 570.0 | 0.7649 | 436.0 | 0.7649 | 118.0 | 118.0 | 158.0 | 0.7468 | 0.7468 | 123.0 | 123.0 | 152.0 | 0.8092 | 0.8092 | 121.0 | 121.0 | 142.0 | 0.8521 | 0.8521 | 74.0 | 74.0 | 118.0 | 0.6271 | 0.6271 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0023 | 8.0 | 208 | 2.2028 | 0.0056 | 1811.4433 | 1255.5968 | 434.0 | 570.0 | 0.7614 | 434.0 | 0.7614 | 125.0 | 125.0 | 158.0 | 0.7911 | 0.7911 | 115.0 | 115.0 | 152.0 | 0.7566 | 0.7566 | 116.0 | 116.0 | 142.0 | 0.8169 | 0.8169 | 78.0 | 78.0 | 118.0 | 0.6610 | 0.6610 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0001 | 9.0 | 234 | 2.1737 | 0.0056 | 1787.5456 | 1239.0322 | 435.0 | 570.0 | 0.7632 | 435.0 | 0.7632 | 123.0 | 123.0 | 158.0 | 0.7785 | 0.7785 | 113.0 | 113.0 | 152.0 | 0.7434 | 0.7434 | 119.0 | 119.0 | 142.0 | 0.8380 | 0.8380 | 80.0 | 80.0 | 118.0 | 0.6780 | 0.6780 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 10.0 | 260 | 2.3012 | 0.0056 | 1892.3237 | 1311.6588 | 433.0 | 570.0 | 0.7596 | 433.0 | 0.7596 | 119.0 | 119.0 | 158.0 | 0.7532 | 0.7532 | 113.0 | 113.0 | 152.0 | 0.7434 | 0.7434 | 119.0 | 119.0 | 142.0 | 0.8380 | 0.8380 | 82.0 | 82.0 | 118.0 | 0.6949 | 0.6949 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 11.0 | 286 | 2.3707 | 0.0056 | 1949.4977 | 1351.2888 | 429.0 | 570.0 | 0.7526 | 429.0 | 0.7526 | 120.0 | 120.0 | 158.0 | 0.7595 | 0.7595 | 113.0 | 113.0 | 152.0 | 0.7434 | 0.7434 | 119.0 | 119.0 | 142.0 | 0.8380 | 0.8380 | 77.0 | 77.0 | 118.0 | 0.6525 | 0.6525 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 12.0 | 312 | 2.4007 | 0.0056 | 1974.2088 | 1368.4173 | 428.0 | 570.0 | 0.7509 | 428.0 | 0.7509 | 118.0 | 118.0 | 158.0 | 0.7468 | 0.7468 | 114.0 | 114.0 | 152.0 | 0.75 | 0.75 | 118.0 | 118.0 | 142.0 | 0.8310 | 0.8310 | 78.0 | 78.0 | 118.0 | 0.6610 | 0.6610 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 13.0 | 338 | 2.3878 | 0.0056 | 1963.5566 | 1361.0337 | 430.0 | 570.0 | 0.7544 | 430.0 | 0.7544 | 119.0 | 119.0 | 158.0 | 0.7532 | 0.7532 | 113.0 | 113.0 | 152.0 | 0.7434 | 0.7434 | 119.0 | 119.0 | 142.0 | 0.8380 | 0.8380 | 79.0 | 79.0 | 118.0 | 0.6695 | 0.6695 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 14.0 | 364 | 2.4055 | 0.0056 | 1978.1533 | 1371.1514 | 430.0 | 570.0 | 0.7544 | 430.0 | 0.7544 | 119.0 | 119.0 | 158.0 | 0.7532 | 0.7532 | 113.0 | 113.0 | 152.0 | 0.7434 | 0.7434 | 119.0 | 119.0 | 142.0 | 0.8380 | 0.8380 | 79.0 | 79.0 | 118.0 | 0.6695 | 0.6695 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 15.0 | 390 | 2.3994 | 0.0056 | 1973.0895 | 1367.6414 | 432.0 | 570.0 | 0.7579 | 432.0 | 0.7579 | 121.0 | 121.0 | 158.0 | 0.7658 | 0.7658 | 114.0 | 114.0 | 152.0 | 0.75 | 0.75 | 119.0 | 119.0 | 142.0 | 0.8380 | 0.8380 | 78.0 | 78.0 | 118.0 | 0.6610 | 0.6610 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
Atharva31/results
|
Atharva31
| 2025-08-18T12:39:33Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"generated_from_trainer",
"dataset:Atharva31/Quotes_Collection",
"base_model:google/gemma-3-270m",
"base_model:adapter:google/gemma-3-270m",
"license:gemma",
"region:us"
] | null | 2025-08-18T06:24:09Z |
---
library_name: peft
license: gemma
base_model:
- google/gemma-3-270m
tags:
- generated_from_trainer
model-index:
- name: results
results: []
datasets:
- Atharva31/Quotes_Collection
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [google/gemma-3-270m](https://huggingface.co/google/gemma-3-270m) on the Quotes_Collection dataset.
It achieves the following results on the evaluation set after being fine-tuned for 3 epochs:
- Loss: 1.8940
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
The training and evaluation data are a collection of quotes from 3 open-source datasets.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 32
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.149 | 1.0 | 360 | 1.9154 |
| 2.0852 | 2.0 | 720 | 1.8930 |
| 2.0449 | 3.0 | 1080 | 1.8940 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.2
|
asr-nigerian-pidgin/pidgin-wav2vec2-base-100H
|
asr-nigerian-pidgin
| 2025-08-18T12:27:03Z | 3 | 0 | null |
[
"safetensors",
"wav2vec2",
"generated_from_trainer",
"arxiv:2010.11123",
"base_model:facebook/wav2vec2-base",
"base_model:finetune:facebook/wav2vec2-base",
"license:apache-2.0",
"region:us"
] | null | 2024-09-14T14:08:40Z |
---
base_model: facebook/wav2vec2-base
license: apache-2.0
metrics:
- wer
tags:
- generated_from_trainer
model-index:
- name: pidgin-wav2vec2-base-960h
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pidgin-wav2vec2-base-960h
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the [Nigerian Pidgin](https://huggingface.co/datasets/asr-nigerian-pidgin/nigerian-pidgin-1.0) dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0898
- Wer: 0.3966
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 3407
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 4.3949 | 1.48 | 500 | 3.3325 | 0.9999 |
| 2.4656 | 2.95 | 1000 | 1.4727 | 0.8026 |
| 1.1896 | 4.43 | 1500 | 1.0925 | 0.6252 |
| 0.8558 | 5.91 | 2000 | 0.9467 | 0.5422 |
| 0.6427 | 7.39 | 2500 | 0.9856 | 0.5096 |
| 0.5371 | 8.86 | 3000 | 0.9794 | 0.5093 |
| 0.4553 | 10.34 | 3500 | 0.8719 | 0.4641 |
| 0.3921 | 11.82 | 4000 | 0.9344 | 0.4566 |
| 0.3406 | 13.29 | 4500 | 1.0211 | 0.4550 |
| 0.3046 | 14.77 | 5000 | 0.8668 | 0.4423 |
| 0.2651 | 16.25 | 5500 | 1.0384 | 0.4261 |
| 0.244 | 17.73 | 6000 | 1.0437 | 0.4296 |
| 0.2203 | 19.2 | 6500 | 0.9244 | 0.4228 |
| 0.1995 | 20.68 | 7000 | 0.9832 | 0.4165 |
| 0.1838 | 22.16 | 7500 | 1.1455 | 0.4112 |
| 0.1632 | 23.63 | 8000 | 1.1102 | 0.4102 |
| 0.1576 | 25.11 | 8500 | 1.0769 | 0.4044 |
| 0.1388 | 26.59 | 9000 | 1.1008 | 0.4013 |
| 0.1346 | 28.06 | 9500 | 1.0940 | 0.4000 |
| 0.1204 | 29.54 | 10000 | 1.0898 | 0.3966 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.15.2
## Citation
```bibtex
@misc{rufai2025endtoendtrainingautomaticspeech,
  title={Towards End-to-End Training of Automatic Speech Recognition for Nigerian Pidgin},
  author={Amina Mardiyyah Rufai and Afolabi Abeeb and Esther Oduntan and Tayo Arulogun and Oluwabukola Adegboro and Daniel Ajisafe},
  year={2025},
  eprint={2010.11123},
  archivePrefix={arXiv},
  primaryClass={eess.AS},
  url={https://arxiv.org/abs/2010.11123},
}
```
|
nakayacent/blockassist-bc-muscular_skittish_horse_1755519798
|
nakayacent
| 2025-08-18T12:25:02Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"muscular skittish horse",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-18T12:24:51Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- muscular skittish horse
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
donoway/ARC-Easy_Llama-3.2-1B-6jgnsuv6
|
donoway
| 2025-08-18T12:22:43Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"base_model:meta-llama/Llama-3.2-1B",
"base_model:finetune:meta-llama/Llama-3.2-1B",
"license:llama3.2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-18T12:04:31Z |
---
library_name: transformers
license: llama3.2
base_model: meta-llama/Llama-3.2-1B
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: ARC-Easy_Llama-3.2-1B-6jgnsuv6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ARC-Easy_Llama-3.2-1B-6jgnsuv6
This model is a fine-tuned version of [meta-llama/Llama-3.2-1B](https://huggingface.co/meta-llama/Llama-3.2-1B) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8919
- Model Preparation Time: 0.0056
- Mdl: 733.4736
- Accumulated Loss: 508.4052
- Correct Preds: 427.0
- Total Preds: 570.0
- Accuracy: 0.7491
- Correct Gen Preds: 427.0
- Gen Accuracy: 0.7491
- Correct Gen Preds 32: 129.0
- Correct Preds 32: 129.0
- Total Labels 32: 158.0
- Accuracy 32: 0.8165
- Gen Accuracy 32: 0.8165
- Correct Gen Preds 33: 108.0
- Correct Preds 33: 108.0
- Total Labels 33: 152.0
- Accuracy 33: 0.7105
- Gen Accuracy 33: 0.7105
- Correct Gen Preds 34: 115.0
- Correct Preds 34: 115.0
- Total Labels 34: 142.0
- Accuracy 34: 0.8099
- Gen Accuracy 34: 0.8099
- Correct Gen Preds 35: 75.0
- Correct Preds 35: 75.0
- Total Labels 35: 118.0
- Accuracy 35: 0.6356
- Gen Accuracy 35: 0.6356
- Correct Gen Preds 36: 0.0
- Correct Preds 36: 0.0
- Total Labels 36: 0.0
- Accuracy 36: 0.0
- Gen Accuracy 36: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the sketch after this list):
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 112
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.001
- num_epochs: 100
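The hyperparameters above correspond roughly to the following `TrainingArguments` (a sketch only; the actual training script is not included in this card):
```python
from transformers import TrainingArguments
# Sketch of the hyperparameters listed above; model and dataset setup are not shown.
training_args = TrainingArguments(
    output_dir="ARC-Easy_Llama-3.2-1B-6jgnsuv6",
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=112,
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="constant",
    warmup_ratio=0.001,
    num_train_epochs=100,
)
```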
### Training results
| Training Loss | Epoch | Step | Validation Loss | Model Preparation Time | Mdl | Accumulated Loss | Correct Preds | Total Preds | Accuracy | Correct Gen Preds | Gen Accuracy | Correct Gen Preds 32 | Correct Preds 32 | Total Labels 32 | Accuracy 32 | Gen Accuracy 32 | Correct Gen Preds 33 | Correct Preds 33 | Total Labels 33 | Accuracy 33 | Gen Accuracy 33 | Correct Gen Preds 34 | Correct Preds 34 | Total Labels 34 | Accuracy 34 | Gen Accuracy 34 | Correct Gen Preds 35 | Correct Preds 35 | Total Labels 35 | Accuracy 35 | Gen Accuracy 35 | Correct Gen Preds 36 | Correct Preds 36 | Total Labels 36 | Accuracy 36 | Gen Accuracy 36 |
|:-------------:|:-----:|:----:|:---------------:|:----------------------:|:---------:|:----------------:|:-------------:|:-----------:|:--------:|:-----------------:|:------------:|:--------------------:|:----------------:|:---------------:|:-----------:|:---------------:|:--------------------:|:----------------:|:---------------:|:-----------:|:---------------:|:--------------------:|:----------------:|:---------------:|:-----------:|:---------------:|:--------------------:|:----------------:|:---------------:|:-----------:|:---------------:|:--------------------:|:----------------:|:---------------:|:-----------:|:---------------:|
| No log | 0 | 0 | 1.5354 | 0.0056 | 1262.6022 | 875.1692 | 172.0 | 570.0 | 0.3018 | 170.0 | 0.2982 | 154.0 | 154.0 | 158.0 | 0.9747 | 0.9747 | 0.0 | 0.0 | 152.0 | 0.0 | 0.0 | 15.0 | 17.0 | 142.0 | 0.1197 | 0.1056 | 1.0 | 1.0 | 118.0 | 0.0085 | 0.0085 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.4726 | 1.0 | 25 | 0.8475 | 0.0056 | 696.9144 | 483.0642 | 394.0 | 570.0 | 0.6912 | 391.0 | 0.6860 | 87.0 | 90.0 | 158.0 | 0.5696 | 0.5506 | 104.0 | 104.0 | 152.0 | 0.6842 | 0.6842 | 109.0 | 109.0 | 142.0 | 0.7676 | 0.7676 | 91.0 | 91.0 | 118.0 | 0.7712 | 0.7712 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.7886 | 2.0 | 50 | 0.7247 | 0.0056 | 595.9247 | 413.0635 | 415.0 | 570.0 | 0.7281 | 415.0 | 0.7281 | 133.0 | 133.0 | 158.0 | 0.8418 | 0.8418 | 107.0 | 107.0 | 152.0 | 0.7039 | 0.7039 | 93.0 | 93.0 | 142.0 | 0.6549 | 0.6549 | 82.0 | 82.0 | 118.0 | 0.6949 | 0.6949 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.1428 | 3.0 | 75 | 0.8919 | 0.0056 | 733.4736 | 508.4052 | 427.0 | 570.0 | 0.7491 | 427.0 | 0.7491 | 129.0 | 129.0 | 158.0 | 0.8165 | 0.8165 | 108.0 | 108.0 | 152.0 | 0.7105 | 0.7105 | 115.0 | 115.0 | 142.0 | 0.8099 | 0.8099 | 75.0 | 75.0 | 118.0 | 0.6356 | 0.6356 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0066 | 4.0 | 100 | 1.4142 | 0.0056 | 1162.9830 | 806.1184 | 420.0 | 570.0 | 0.7368 | 403.0 | 0.7070 | 119.0 | 125.0 | 158.0 | 0.7911 | 0.7532 | 119.0 | 123.0 | 152.0 | 0.8092 | 0.7829 | 100.0 | 103.0 | 142.0 | 0.7254 | 0.7042 | 65.0 | 69.0 | 118.0 | 0.5847 | 0.5508 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0066 | 5.0 | 125 | 1.6364 | 0.0056 | 1345.6457 | 932.7305 | 406.0 | 570.0 | 0.7123 | 399.0 | 0.7 | 107.0 | 113.0 | 158.0 | 0.7152 | 0.6772 | 101.0 | 101.0 | 152.0 | 0.6645 | 0.6645 | 106.0 | 106.0 | 142.0 | 0.7465 | 0.7465 | 85.0 | 86.0 | 118.0 | 0.7288 | 0.7203 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0001 | 6.0 | 150 | 2.3995 | 0.0056 | 1973.1559 | 1367.6875 | 407.0 | 570.0 | 0.7140 | 392.0 | 0.6877 | 93.0 | 104.0 | 158.0 | 0.6582 | 0.5886 | 113.0 | 114.0 | 152.0 | 0.75 | 0.7434 | 102.0 | 104.0 | 142.0 | 0.7324 | 0.7183 | 84.0 | 85.0 | 118.0 | 0.7203 | 0.7119 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 7.0 | 175 | 2.5540 | 0.0056 | 2100.2596 | 1455.7890 | 414.0 | 570.0 | 0.7263 | 408.0 | 0.7158 | 108.0 | 113.0 | 158.0 | 0.7152 | 0.6835 | 117.0 | 117.0 | 152.0 | 0.7697 | 0.7697 | 102.0 | 102.0 | 142.0 | 0.7183 | 0.7183 | 81.0 | 82.0 | 118.0 | 0.6949 | 0.6864 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0001 | 8.0 | 200 | 2.5711 | 0.0056 | 2114.2895 | 1465.5138 | 418.0 | 570.0 | 0.7333 | 410.0 | 0.7193 | 106.0 | 113.0 | 158.0 | 0.7152 | 0.6709 | 122.0 | 123.0 | 152.0 | 0.8092 | 0.8026 | 102.0 | 102.0 | 142.0 | 0.7183 | 0.7183 | 80.0 | 80.0 | 118.0 | 0.6780 | 0.6780 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0001 | 9.0 | 225 | 2.5896 | 0.0056 | 2129.5119 | 1476.0652 | 419.0 | 570.0 | 0.7351 | 410.0 | 0.7193 | 104.0 | 112.0 | 158.0 | 0.7089 | 0.6582 | 122.0 | 123.0 | 152.0 | 0.8092 | 0.8026 | 103.0 | 103.0 | 142.0 | 0.7254 | 0.7254 | 81.0 | 81.0 | 118.0 | 0.6864 | 0.6864 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 10.0 | 250 | 2.6097 | 0.0056 | 2146.0783 | 1487.5481 | 419.0 | 570.0 | 0.7351 | 411.0 | 0.7211 | 105.0 | 112.0 | 158.0 | 0.7089 | 0.6646 | 122.0 | 123.0 | 152.0 | 0.8092 | 0.8026 | 102.0 | 102.0 | 142.0 | 0.7183 | 0.7183 | 82.0 | 82.0 | 118.0 | 0.6949 | 0.6949 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 11.0 | 275 | 2.6133 | 0.0056 | 2149.0502 | 1489.6081 | 419.0 | 570.0 | 0.7351 | 411.0 | 0.7211 | 105.0 | 112.0 | 158.0 | 0.7089 | 0.6646 | 122.0 | 123.0 | 152.0 | 0.8092 | 0.8026 | 103.0 | 103.0 | 142.0 | 0.7254 | 0.7254 | 81.0 | 81.0 | 118.0 | 0.6864 | 0.6864 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 12.0 | 300 | 2.6221 | 0.0056 | 2156.2876 | 1494.6247 | 418.0 | 570.0 | 0.7333 | 410.0 | 0.7193 | 105.0 | 112.0 | 158.0 | 0.7089 | 0.6646 | 122.0 | 123.0 | 152.0 | 0.8092 | 0.8026 | 102.0 | 102.0 | 142.0 | 0.7183 | 0.7183 | 81.0 | 81.0 | 118.0 | 0.6864 | 0.6864 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 13.0 | 325 | 2.6192 | 0.0056 | 2153.8311 | 1492.9219 | 418.0 | 570.0 | 0.7333 | 410.0 | 0.7193 | 104.0 | 111.0 | 158.0 | 0.7025 | 0.6582 | 122.0 | 123.0 | 152.0 | 0.8092 | 0.8026 | 102.0 | 102.0 | 142.0 | 0.7183 | 0.7183 | 82.0 | 82.0 | 118.0 | 0.6949 | 0.6949 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 14.0 | 350 | 2.6335 | 0.0056 | 2165.6088 | 1501.0857 | 419.0 | 570.0 | 0.7351 | 411.0 | 0.7211 | 106.0 | 113.0 | 158.0 | 0.7152 | 0.6709 | 122.0 | 123.0 | 152.0 | 0.8092 | 0.8026 | 102.0 | 102.0 | 142.0 | 0.7183 | 0.7183 | 81.0 | 81.0 | 118.0 | 0.6864 | 0.6864 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 15.0 | 375 | 2.6250 | 0.0056 | 2158.6426 | 1496.2570 | 420.0 | 570.0 | 0.7368 | 412.0 | 0.7228 | 106.0 | 113.0 | 158.0 | 0.7152 | 0.6709 | 122.0 | 123.0 | 152.0 | 0.8092 | 0.8026 | 103.0 | 103.0 | 142.0 | 0.7254 | 0.7254 | 81.0 | 81.0 | 118.0 | 0.6864 | 0.6864 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 16.0 | 400 | 2.6439 | 0.0056 | 2174.2071 | 1507.0456 | 419.0 | 570.0 | 0.7351 | 411.0 | 0.7211 | 105.0 | 112.0 | 158.0 | 0.7089 | 0.6646 | 122.0 | 123.0 | 152.0 | 0.8092 | 0.8026 | 103.0 | 103.0 | 142.0 | 0.7254 | 0.7254 | 81.0 | 81.0 | 118.0 | 0.6864 | 0.6864 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 17.0 | 425 | 2.6435 | 0.0056 | 2173.8519 | 1506.7993 | 421.0 | 570.0 | 0.7386 | 413.0 | 0.7246 | 105.0 | 112.0 | 158.0 | 0.7089 | 0.6646 | 123.0 | 124.0 | 152.0 | 0.8158 | 0.8092 | 103.0 | 103.0 | 142.0 | 0.7254 | 0.7254 | 82.0 | 82.0 | 118.0 | 0.6949 | 0.6949 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
unitova/blockassist-bc-zealous_sneaky_raven_1755518075
|
unitova
| 2025-08-18T12:18:54Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"zealous sneaky raven",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-18T12:18:51Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- zealous sneaky raven
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ihsanridzi/blockassist-bc-wiry_flexible_owl_1755517736
|
ihsanridzi
| 2025-08-18T12:14:28Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"wiry flexible owl",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-18T12:14:25Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- wiry flexible owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
regmibijay/gemma-270m-ops-volltext
|
regmibijay
| 2025-08-18T12:13:33Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma3_text",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-18T12:13:12Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
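Until the authors complete this section, a minimal sketch that assumes the standard `transformers` text-generation pipeline and chat format work for this checkpoint (not confirmed by the authors):
```python
from transformers import pipeline
# Assumption: the checkpoint follows the usual chat-style text-generation interface.
generator = pipeline("text-generation", model="regmibijay/gemma-270m-ops-volltext")
messages = [{"role": "user", "content": "Give a short description of what this model is intended for."}]
output = generator(messages, max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```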
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
lfhase/HIGHT
|
lfhase
| 2025-08-18T12:13:12Z | 0 | 2 | null |
[
"arxiv:2406.14021",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2025-08-18T11:11:06Z |
---
license: cc-by-nc-4.0
---
<h1 align="center">HIGHT: Hierarchical Graph Tokenization for Graph-Language Alignment</h1>
<p align="center">
<a href="https://arxiv.org/abs/2406.14021"><img src="https://img.shields.io/badge/arXiv-2406.14021-b31b1b.svg" alt="Paper"></a>
<a href="https://github.com/LFhase/HIGHT"><img src="https://img.shields.io/badge/-Github-grey?logo=github" alt="Github"></a>
<!-- <a href="https://colab.research.google.com/drive/1t0_4BxEJ0XncyYvn_VyEQhxwNMvtSUNx?usp=sharing"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Colab"></a> -->
<a href="https://arxiv.org/abs/2406.14021"> <img alt="License" src="https://img.shields.io/static/v1?label=Pub&message=ICML%2725&color=blue"> </a>
<!-- <a href="https://github.com/LFhase/HIGHT/blob/main/LICENSE"> <img alt="License" src="https://img.shields.io/github/license/LFhase/CIGA?color=blue"> </a> -->
<!-- <a href="https://icml.cc/virtual/2024/poster/3455"> <img src="https://img.shields.io/badge/Video-grey?logo=Kuaishou&logoColor=white" alt="Video"></a> -->
<!-- <a href="https://lfhase.win/files/slides/HIGHT.pdf"> <img src="https://img.shields.io/badge/Slides-grey?&logo=MicrosoftPowerPoint&logoColor=white" alt="Slides"></a> -->
<!-- <a href="https://icml.cc/media/PosterPDFs/ICML%202022/a8acc28734d4fe90ea24353d901ae678.png"> <img src="https://img.shields.io/badge/Poster-grey?logo=airplayvideo&logoColor=white" alt="Poster"></a> -->
</p>
This repo contains the model checkpoints of our ICML 2025 paper: *[Hierarchical Graph Tokenization for Molecule-Language Alignment](https://arxiv.org/abs/2406.14021)*, which has also been presented at the ICML 2024 workshop on [Foundation Models in the Wild](https://icml.cc/virtual/2024/workshop/29954). 😆😆😆
## File Structures
The pretrained Hierarchical VQ-VAE model is stored in `hivqvae.pth`.
The checkpoints of graph-language models based on llama2-7b-chat and vicuna-v1-3-7b are contained in `/llama2` and `/vicuna`, respectively.
Inside each directory, the remaining checkpoints are organized as follows (using vicuna as an example):
- `llava-hvqvae2-vicuna-v1-3-7b-pretrain`: model after stage 1 pretraining;
- `graph-text-molgen`: models finetuned using Mol-Instruction data under different tasks, e.g., forward reaction prediction;
- `molcap-llava-hvqvae2-vicuna-v1-3-7b-finetune_lora-50ep`: model finetuned using the CHEBI-20 dataset for molecular captioning;
- `MoleculeNet-llava-hvqvae2-vicuna-v1-3-7b-finetune_lora-large*`: models finetuned on different classification-based molecular property prediction tasks.
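To pull these checkpoints locally, a minimal sketch using `huggingface_hub`; the file and folder paths are assumptions based on the layout described above:
```python
from huggingface_hub import hf_hub_download, snapshot_download
# Single file: the pretrained hierarchical VQ-VAE weights.
vqvae_path = hf_hub_download(repo_id="lfhase/HIGHT", filename="hivqvae.pth")
# Whole subfolder: e.g. the stage-1 pretrained vicuna-based model
# (path assumed from the directory layout described above).
stage1_dir = snapshot_download(
    repo_id="lfhase/HIGHT",
    allow_patterns=["vicuna/llava-hvqvae2-vicuna-v1-3-7b-pretrain/*"],
)
print(vqvae_path, stage1_dir)
```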
## Citation
If you find our model, paper and repo useful, please cite our paper:
```bibtex
@inproceedings{chen2025hierarchical,
title={Hierarchical Graph Tokenization for Molecule-Language Alignment},
author={Yongqiang Chen and Quanming Yao and Juzheng Zhang and James Cheng and Yatao Bian},
booktitle={Forty-second International Conference on Machine Learning},
year={2025},
url={https://openreview.net/forum?id=wpbNczwAwV}
}
```
|
Yuchan5386/IntentClassifier
|
Yuchan5386
| 2025-08-18T12:12:26Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-08-18T12:00:46Z |
---
license: apache-2.0
---
|
VoilaRaj/78_xNWmhr
|
VoilaRaj
| 2025-08-18T12:11:45Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-08-18T12:07:52Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
snezhanata/qwen3-dev
|
snezhanata
| 2025-08-18T12:10:06Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"llama-factory",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-18T11:41:03Z |
---
library_name: transformers
tags:
- llama-factory
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
nakayacent/blockassist-bc-muscular_skittish_horse_1755518680
|
nakayacent
| 2025-08-18T12:05:59Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"muscular skittish horse",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-18T12:05:50Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- muscular skittish horse
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
isbondarev/Index-1.9B-adv
|
isbondarev
| 2025-08-18T12:03:14Z | 8 | 0 |
transformers
|
[
"transformers",
"safetensors",
"index",
"feature-extraction",
"llama-factory",
"custom_code",
"arxiv:1910.09700",
"region:us"
] |
feature-extraction
| 2025-06-27T11:08:55Z |
---
library_name: transformers
tags:
- llama-factory
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
michaelcpage345/blockassist-bc-miniature_deadly_anteater_1755516442
|
michaelcpage345
| 2025-08-18T12:00:16Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"miniature deadly anteater",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-18T12:00:12Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- miniature deadly anteater
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
HMC83/request_writer_smol_lora
|
HMC83
| 2025-08-18T11:53:43Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"en",
"base_model:HuggingFaceTB/SmolLM2-360M-Instruct",
"base_model:finetune:HuggingFaceTB/SmolLM2-360M-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-18T09:32:19Z |
---
base_model: HuggingFaceTB/SmolLM2-360M-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
license: apache-2.0
language:
- en
---
## Model Description
Request Writer Smol has been fine-tuned to generate Freedom of Information (FOI) requests to UK public authorities from the authority name and three keywords. The model has been trained on a synthetic dataset of FOI requests covering various topics and public authorities across the UK.
The model demonstrates improved generation of properly formatted, focused FOI requests for specific information that are unlikely to be refused on cost grounds.
## Model Architecture
- **Base Model**: SmolLM2-360M-Instruct
- **Fine-tuning Method**: LoRA
- **LoRA Configuration** (see the sketch after this list):
- Rank (r): 8
- Alpha: 16
- Dropout: 0.1
- Target modules: q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj
- **Training Parameters**: 2.34% of total parameters trained (8.68M trainable parameters)
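The configuration above corresponds roughly to the following `peft` setup (a sketch; the exact training code is not part of this card, and the task type is assumed):
```python
from peft import LoraConfig
# Sketch of the LoRA setup described above; task_type is an assumption for a causal LM fine-tune.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.1,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj", "gate_proj", "up_proj", "down_proj"],
    task_type="CAUSAL_LM",
)
```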
## Fine-tuning Training Data
### Dataset Details
- **Source**: Synthetic FOI requests dataset (HMC83/synthetic_foi_requests)
- **Size**: 51,308 training examples, ~5,700 validation examples
- **Format**: Conversational format with system prompts, user inputs, and assistant responses
### Training Configuration
- **Epochs**: 3
- **Batch Size**: 32
- **Learning Rate**: 1e-5
- **Optimizer**: AdamW 8-bit
- **Sequence Length**: 4096 tokens
## Limitations and Considerations
The small size of the model (360M parameters) may limit the complexity of generated requests. The model is trained specifically for UK FOI requests. It has not been trained to generate requests for information about individuals.
## Usage Guidelines
### Input Format
The model expects a prompt in the form of:
```
Generate a formal Freedom of Information request to [authority_name] using these keywords: [keyword1, keyword2, keyword3]
```
### Output Format
The model will try to generate a concise, properly structured FOI request, starting with the phrase "Please provide me with a copy of the following information:", followed by one to three numbered, specific information requests.
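A minimal generation sketch following the prompt format above, using the merged checkpoint listed under Model Versions below (assuming the base model's chat template applies):
```python
from transformers import pipeline
# Uses the merged 16-bit checkpoint; the LoRA adapters could instead be loaded via peft.
generator = pipeline("text-generation", model="HMC83/request_writer_smol")
prompt = (
    "Generate a formal Freedom of Information request to Transport for London "
    "using these keywords: cycling, accidents, 2023"
)
output = generator([{"role": "user", "content": prompt}], max_new_tokens=256, return_full_text=False)[0]
print(output["generated_text"])
```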
## Model Versions
### Available Formats
- **LoRA Adapters**: `HMC83/request_writer_smol_lora`
- **Merged 16-bit**: `HMC83/request_writer_smol`
### Disclaimer
Users are responsible for ensuring that their intended use complies with any applicable laws and regulations. Generated requests should be reviewed and potentially modified before submission to public authorities. Requests should be made in good faith and for legitimate purposes. The model can hallucinate, so any outputs should not be relied upon without being verified. Outputs may also reflect any biases that are present in the underlying training data.
|
bio-protocol/scientific-reranker
|
bio-protocol
| 2025-08-18T11:51:39Z | 3 | 0 | null |
[
"safetensors",
"xlm-roberta",
"en",
"base_model:BAAI/bge-reranker-large",
"base_model:finetune:BAAI/bge-reranker-large",
"license:mit",
"region:us"
] | null | 2025-07-28T08:40:19Z |
---
license: mit
language:
- en
base_model:
- BAAI/bge-reranker-large
---
OpenScholar_Reranker is a fine-tuned version of [bge-reranker-large](https://huggingface.co/BAAI/bge-reranker-large) for scientific literature synthesis.
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** University of Washington, Allen Institute for AI (AI2)
- **Model type:** a masked language model.
- **Language(s) (NLP):** English
- **License:** The code and model are released under apache-2.0.
- **Date cutoff:** The fine-tuning data was generated by Llama 3 70B from synthetically generated queries.
### Model Sources
<!-- Provide the basic links for the model. -->
- **Project Page:** https://open-scholar.allen.ai/
- **Repositories:**
- Core repo (training, inference, fine-tuning etc.): https://github.com/AkariAsai/OpenScholar
- Evaluation code: https://github.com/AkariAsai/ScholarQABench
- **Paper:** [Link](https://openscholar.allen.ai/paper)
- **Technical blog post:** https://allenai.org/blog/openscholar
<!-- - **Press release:** TODO -->
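Usage is expected to follow the standard `bge-reranker` cross-encoder pattern (query–passage pairs scored by a sequence-classification head); a minimal sketch, not an official snippet from the authors:
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
model_id = "bio-protocol/scientific-reranker"  # this repository
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()
pairs = [
    ["How are retrieval-augmented LMs evaluated for scientific literature synthesis?",
     "We introduce ScholarQABench, a benchmark for literature search and synthesis."],
    ["How are retrieval-augmented LMs evaluated for scientific literature synthesis?",
     "The mitochondria is the powerhouse of the cell."],
]
with torch.no_grad():
    inputs = tokenizer(pairs, padding=True, truncation=True, max_length=512, return_tensors="pt")
    scores = model(**inputs).logits.view(-1)  # higher score = more relevant
print(scores.tolist())
```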
### Citation
If you find this model useful in your work, please cite our paper.
```
@article{openscholar,
title={{OpenScholar}: Synthesizing Scientific Literature with Retrieval-Augmented Language Models},
author={ Asai, Akari and He*, Jacqueline and Shao*, Rulin and Shi, Weijia and Singh, Amanpreet and Chang, Joseph Chee and Lo, Kyle and Soldaini, Luca and Feldman, Sergey and D'arcy, Mike and Wadden, David and Latzke, Matt and Tian, Minyang and Ji, Pan and Liu, Shengyan and Tong, Hao and Wu, Bohao and Xiong, Yanyu and Zettlemoyer, Luke and Weld, Dan and Neubig, Graham and Downey, Doug and Yih, Wen-tau and Koh, Pang Wei and Hajishirzi, Hannaneh},
journal={Arxiv},
year={2024},
}
```
|
almanach/camembert-large
|
almanach
| 2025-08-18T11:48:19Z | 6,417 | 19 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"camembert",
"fr",
"arxiv:1911.03894",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z |
---
language: fr
---
# CamemBERT: a Tasty French Language Model
## Introduction
[CamemBERT](https://arxiv.org/abs/1911.03894) is a state-of-the-art language model for French based on the RoBERTa model.
It is now available on Hugging Face in 6 different versions with varying numbers of parameters, amounts of pretraining data and pretraining data source domains.
## Pre-trained models
| Model | #params | Arch. | Training data |
|--------------------------------|--------------------------------|-------|-----------------------------------|
| `camembert-base` | 110M | Base | OSCAR (138 GB of text) |
| `camembert/camembert-large` | 335M | Large | CCNet (135 GB of text) |
| `camembert/camembert-base-ccnet` | 110M | Base | CCNet (135 GB of text) |
| `camembert/camembert-base-wikipedia-4gb` | 110M | Base | Wikipedia (4 GB of text) |
| `camembert/camembert-base-oscar-4gb` | 110M | Base | Subsample of OSCAR (4 GB of text) |
| `camembert/camembert-base-ccnet-4gb` | 110M | Base | Subsample of CCNet (4 GB of text) |
## How to use CamemBERT with HuggingFace
##### Load CamemBERT and its sub-word tokenizer :
```python
from transformers import CamembertModel, CamembertTokenizer
# You can replace "camembert-base" with any other model from the table, e.g. "camembert/camembert-large".
tokenizer = CamembertTokenizer.from_pretrained("camembert/camembert-large")
camembert = CamembertModel.from_pretrained("camembert/camembert-large")
camembert.eval() # disable dropout (or leave in train mode to finetune)
```
##### Filling masks using pipeline
```python
from transformers import pipeline
camembert_fill_mask = pipeline("fill-mask", model="camembert/camembert-large", tokenizer="camembert/camembert-large")
results = camembert_fill_mask("Le camembert est <mask> :)")
# results
#[{'sequence': '<s> Le camembert est bon :)</s>', 'score': 0.15560828149318695, 'token': 305},
#{'sequence': '<s> Le camembert est excellent :)</s>', 'score': 0.06821336597204208, 'token': 3497},
#{'sequence': '<s> Le camembert est délicieux :)</s>', 'score': 0.060438305139541626, 'token': 11661},
#{'sequence': '<s> Le camembert est ici :)</s>', 'score': 0.02023460529744625, 'token': 373},
#{'sequence': '<s> Le camembert est meilleur :)</s>', 'score': 0.01778135634958744, 'token': 876}]
```
##### Extract contextual embedding features from Camembert output
```python
import torch
# Tokenize in sub-words with SentencePiece
tokenized_sentence = tokenizer.tokenize("J'aime le camembert !")
# ['▁J', "'", 'aime', '▁le', '▁cam', 'ember', 't', '▁!']
# Encode as token ids and add special start and end tokens
encoded_sentence = tokenizer.encode(tokenized_sentence)
# [5, 133, 22, 1250, 16, 12034, 14324, 81, 76, 6]
# NB: this can be done in one step: tokenizer.encode("J'aime le camembert !")
# Feed tokens to Camembert as a torch tensor (batch dim 1)
encoded_sentence = torch.tensor(encoded_sentence).unsqueeze(0)
embeddings, _ = camembert(encoded_sentence)
# embeddings.detach()
# torch.Size([1, 10, 1024])
#tensor([[[-0.1284, 0.2643, 0.4374, ..., 0.1627, 0.1308, -0.2305],
# [ 0.4576, -0.6345, -0.2029, ..., -0.1359, -0.2290, -0.6318],
# [ 0.0381, 0.0429, 0.5111, ..., -0.1177, -0.1913, -0.1121],
# ...,
```
##### Extract contextual embedding features from all Camembert layers
```python
from transformers import CamembertConfig
# (Need to reload the model with new config)
config = CamembertConfig.from_pretrained("camembert/camembert-large", output_hidden_states=True)
camembert = CamembertModel.from_pretrained("camembert/camembert-large", config=config)
embeddings, _, all_layer_embeddings = camembert(encoded_sentence)
# all_layer_embeddings list of len(all_layer_embeddings) == 25 (input embedding layer + 24 self attention layers)
all_layer_embeddings[5]
# layer 5 contextual embedding : size torch.Size([1, 10, 1024])
#tensor([[[-0.0600, 0.0742, 0.0332, ..., -0.0525, -0.0637, -0.0287],
# [ 0.0950, 0.2840, 0.1985, ..., 0.2073, -0.2172, -0.6321],
# [ 0.1381, 0.1872, 0.1614, ..., -0.0339, -0.2530, -0.1182],
# ...,
```
## Authors
CamemBERT was trained and evaluated by Louis Martin\*, Benjamin Muller\*, Pedro Javier Ortiz Suárez\*, Yoann Dupont, Laurent Romary, Éric Villemonte de la Clergerie, Djamé Seddah and Benoît Sagot.
## Citation
If you use our work, please cite:
```bibtex
@inproceedings{martin2020camembert,
title={CamemBERT: a Tasty French Language Model},
author={Martin, Louis and Muller, Benjamin and Su{\'a}rez, Pedro Javier Ortiz and Dupont, Yoann and Romary, Laurent and de la Clergerie, {\'E}ric Villemonte and Seddah, Djam{\'e} and Sagot, Beno{\^\i}t},
booktitle={Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics},
year={2020}
}
```
|
Sayemahsjn/blockassist-bc-playful_feline_octopus_1755516524
|
Sayemahsjn
| 2025-08-18T11:47:10Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"playful feline octopus",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-18T11:47:06Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- playful feline octopus
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Xiaochuanaaa/llama3
|
Xiaochuanaaa
| 2025-08-18T11:46:47Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-08-18T11:46:47Z |
---
license: apache-2.0
---
|
VoilaRaj/78_dRJB6K
|
VoilaRaj
| 2025-08-18T11:46:42Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-08-18T11:42:55Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
ihsanridzi/blockassist-bc-wiry_flexible_owl_1755515895
|
ihsanridzi
| 2025-08-18T11:45:36Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"wiry flexible owl",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-18T11:45:32Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- wiry flexible owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
afasdfdfadsf/AceInstruct-1.5B-Gensyn-Swarm-tiny_camouflaged_mole
|
afasdfdfadsf
| 2025-08-18T11:44:02Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am tiny_camouflaged_mole",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-18T00:06:16Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am tiny_camouflaged_mole
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
sanketkashyap/MyGemmaNPC
|
sanketkashyap
| 2025-08-18T11:43:33Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"gemma3_text",
"text-generation",
"generated_from_trainer",
"trl",
"sft",
"conversational",
"base_model:google/gemma-3-270m-it",
"base_model:finetune:google/gemma-3-270m-it",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-18T11:10:36Z |
---
base_model: google/gemma-3-270m-it
library_name: transformers
model_name: MyGemmaNPC
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for MyGemmaNPC
This model is a fine-tuned version of [google/gemma-3-270m-it](https://huggingface.co/google/gemma-3-270m-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="sanketkashyap/MyGemmaNPC", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.2
- Pytorch: 2.6.0+cu124
- Datasets: 4.0.0
- Tokenizers: 0.21.2
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
donoway/GSM8K-Binary_Llama-3.2-1B-bfe9d8o1
|
donoway
| 2025-08-18T11:42:29Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"base_model:meta-llama/Llama-3.2-1B",
"base_model:finetune:meta-llama/Llama-3.2-1B",
"license:llama3.2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-18T10:09:13Z |
---
library_name: transformers
license: llama3.2
base_model: meta-llama/Llama-3.2-1B
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: GSM8K-Binary_Llama-3.2-1B-bfe9d8o1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GSM8K-Binary_Llama-3.2-1B-bfe9d8o1
This model is a fine-tuned version of [meta-llama/Llama-3.2-1B](https://huggingface.co/meta-llama/Llama-3.2-1B) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3034
- Model Preparation Time: 0.0058
- Mdl: 4653.9298
- Accumulated Loss: 3225.8583
- Correct Preds: 1917.0
- Total Preds: 2475.0
- Accuracy: 0.7745
- Correct Gen Preds: 1919.0
- Gen Accuracy: 0.7754
- Correct Gen Preds 34192: 1046.0
- Correct Preds 34192: 1049.0
- Total Labels 34192: 1196.0
- Accuracy 34192: 0.8771
- Gen Accuracy 34192: 0.8746
- Correct Gen Preds 41568: 865.0
- Correct Preds 41568: 868.0
- Total Labels 41568: 1267.0
- Accuracy 41568: 0.6851
- Gen Accuracy 41568: 0.6827
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.01
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Model Preparation Time | Mdl | Accumulated Loss | Correct Preds | Total Preds | Accuracy | Correct Gen Preds | Gen Accuracy | Correct Gen Preds 34192 | Correct Preds 34192 | Total Labels 34192 | Accuracy 34192 | Gen Accuracy 34192 | Correct Gen Preds 41568 | Correct Preds 41568 | Total Labels 41568 | Accuracy 41568 | Gen Accuracy 41568 |
|:-------------:|:-----:|:----:|:---------------:|:----------------------:|:---------:|:----------------:|:-------------:|:-----------:|:--------:|:-----------------:|:------------:|:-----------------------:|:-------------------:|:------------------:|:--------------:|:------------------:|:-----------------------:|:-------------------:|:------------------:|:--------------:|:------------------:|
| No log | 0 | 0 | 1.4656 | 0.0058 | 5233.1723 | 3627.3586 | 1196.0 | 2475.0 | 0.4832 | 1204.0 | 0.4865 | 1196.0 | 1196.0 | 1196.0 | 1.0 | 1.0 | 0.0 | 0.0 | 1267.0 | 0.0 | 0.0 |
| 0.3909 | 1.0 | 13 | 0.9147 | 0.0058 | 3265.9349 | 2263.7736 | 1196.0 | 2475.0 | 0.4832 | 8.0 | 0.0032 | 0.0 | 1196.0 | 1196.0 | 1.0 | 0.0 | 0.0 | 0.0 | 1267.0 | 0.0 | 0.0 |
| 2.5838 | 2.0 | 26 | 0.8758 | 0.0058 | 3127.0958 | 2167.5377 | 1517.0 | 2475.0 | 0.6129 | 139.0 | 0.0562 | 0.0 | 1180.0 | 1196.0 | 0.9866 | 0.0 | 131.0 | 337.0 | 1267.0 | 0.2660 | 0.1034 |
| 0.1806 | 3.0 | 39 | 0.6158 | 0.0058 | 2198.9720 | 1524.2113 | 1760.0 | 2475.0 | 0.7111 | 215.0 | 0.0869 | 0.0 | 642.0 | 1196.0 | 0.5368 | 0.0 | 207.0 | 1118.0 | 1267.0 | 0.8824 | 0.1634 |
| 0.0087 | 4.0 | 52 | 1.3144 | 0.0058 | 4693.3429 | 3253.1774 | 1519.0 | 2475.0 | 0.6137 | 1024.0 | 0.4137 | 16.0 | 301.0 | 1196.0 | 0.2517 | 0.0134 | 1001.0 | 1218.0 | 1267.0 | 0.9613 | 0.7901 |
| 0.0061 | 5.0 | 65 | 1.0468 | 0.0058 | 3737.9158 | 2590.9258 | 1678.0 | 2475.0 | 0.6780 | 603.0 | 0.2436 | 402.0 | 1158.0 | 1196.0 | 0.9682 | 0.3361 | 194.0 | 520.0 | 1267.0 | 0.4104 | 0.1531 |
| 0.0896 | 6.0 | 78 | 0.7674 | 0.0058 | 2740.0578 | 1899.2633 | 1834.0 | 2475.0 | 0.7410 | 1177.0 | 0.4756 | 471.0 | 828.0 | 1196.0 | 0.6923 | 0.3938 | 698.0 | 1006.0 | 1267.0 | 0.7940 | 0.5509 |
| 0.0001 | 7.0 | 91 | 0.7845 | 0.0058 | 2801.2835 | 1941.7018 | 1901.0 | 2475.0 | 0.7681 | 1802.0 | 0.7281 | 869.0 | 930.0 | 1196.0 | 0.7776 | 0.7266 | 926.0 | 971.0 | 1267.0 | 0.7664 | 0.7309 |
| 0.0 | 8.0 | 104 | 1.0404 | 0.0058 | 3714.9602 | 2575.0142 | 1882.0 | 2475.0 | 0.7604 | 1488.0 | 0.6012 | 846.0 | 1035.0 | 1196.0 | 0.8654 | 0.7074 | 634.0 | 847.0 | 1267.0 | 0.6685 | 0.5004 |
| 0.0001 | 9.0 | 117 | 1.1473 | 0.0058 | 4096.4963 | 2839.4749 | 1905.0 | 2475.0 | 0.7697 | 1908.0 | 0.7709 | 999.0 | 1003.0 | 1196.0 | 0.8386 | 0.8353 | 901.0 | 902.0 | 1267.0 | 0.7119 | 0.7111 |
| 0.0 | 10.0 | 130 | 1.2243 | 0.0058 | 4371.6047 | 3030.1655 | 1895.0 | 2475.0 | 0.7657 | 1896.0 | 0.7661 | 1033.0 | 1037.0 | 1196.0 | 0.8671 | 0.8637 | 855.0 | 858.0 | 1267.0 | 0.6772 | 0.6748 |
| 0.0001 | 11.0 | 143 | 1.2098 | 0.0058 | 4319.8084 | 2994.2630 | 1899.0 | 2475.0 | 0.7673 | 1899.0 | 0.7673 | 1028.0 | 1032.0 | 1196.0 | 0.8629 | 0.8595 | 863.0 | 867.0 | 1267.0 | 0.6843 | 0.6811 |
| 0.0002 | 12.0 | 156 | 1.2321 | 0.0058 | 4399.4227 | 3049.4475 | 1900.0 | 2475.0 | 0.7677 | 1901.0 | 0.7681 | 1038.0 | 1042.0 | 1196.0 | 0.8712 | 0.8679 | 855.0 | 858.0 | 1267.0 | 0.6772 | 0.6748 |
| 0.0 | 13.0 | 169 | 1.2505 | 0.0058 | 4465.1374 | 3094.9974 | 1895.0 | 2475.0 | 0.7657 | 1896.0 | 0.7661 | 1044.0 | 1048.0 | 1196.0 | 0.8763 | 0.8729 | 844.0 | 847.0 | 1267.0 | 0.6685 | 0.6661 |
| 0.0 | 14.0 | 182 | 1.2541 | 0.0058 | 4477.9552 | 3103.8821 | 1900.0 | 2475.0 | 0.7677 | 1900.0 | 0.7677 | 1045.0 | 1050.0 | 1196.0 | 0.8779 | 0.8737 | 847.0 | 850.0 | 1267.0 | 0.6709 | 0.6685 |
| 0.0 | 15.0 | 195 | 1.2553 | 0.0058 | 4482.1598 | 3106.7965 | 1900.0 | 2475.0 | 0.7677 | 1901.0 | 0.7681 | 1043.0 | 1047.0 | 1196.0 | 0.8754 | 0.8721 | 850.0 | 853.0 | 1267.0 | 0.6732 | 0.6709 |
| 0.0001 | 16.0 | 208 | 1.2586 | 0.0058 | 4493.9093 | 3114.9405 | 1903.0 | 2475.0 | 0.7689 | 1902.0 | 0.7685 | 1045.0 | 1050.0 | 1196.0 | 0.8779 | 0.8737 | 849.0 | 853.0 | 1267.0 | 0.6732 | 0.6701 |
| 0.0 | 17.0 | 221 | 1.2582 | 0.0058 | 4492.4502 | 3113.9292 | 1903.0 | 2475.0 | 0.7689 | 1904.0 | 0.7693 | 1043.0 | 1047.0 | 1196.0 | 0.8754 | 0.8721 | 853.0 | 856.0 | 1267.0 | 0.6756 | 0.6732 |
| 0.0 | 18.0 | 234 | 1.2603 | 0.0058 | 4500.1384 | 3119.2583 | 1902.0 | 2475.0 | 0.7685 | 1902.0 | 0.7685 | 1042.0 | 1046.0 | 1196.0 | 0.8746 | 0.8712 | 852.0 | 856.0 | 1267.0 | 0.6756 | 0.6725 |
| 0.0001 | 19.0 | 247 | 1.2631 | 0.0058 | 4510.1478 | 3126.1962 | 1905.0 | 2475.0 | 0.7697 | 1905.0 | 0.7697 | 1043.0 | 1048.0 | 1196.0 | 0.8763 | 0.8721 | 854.0 | 857.0 | 1267.0 | 0.6764 | 0.6740 |
| 0.0 | 20.0 | 260 | 1.2732 | 0.0058 | 4546.3417 | 3151.2839 | 1903.0 | 2475.0 | 0.7689 | 1902.0 | 0.7685 | 1046.0 | 1051.0 | 1196.0 | 0.8788 | 0.8746 | 848.0 | 852.0 | 1267.0 | 0.6725 | 0.6693 |
| 0.0 | 21.0 | 273 | 1.2775 | 0.0058 | 4561.5521 | 3161.8270 | 1903.0 | 2475.0 | 0.7689 | 1903.0 | 0.7689 | 1045.0 | 1049.0 | 1196.0 | 0.8771 | 0.8737 | 850.0 | 854.0 | 1267.0 | 0.6740 | 0.6709 |
| 0.0001 | 22.0 | 286 | 1.2805 | 0.0058 | 4572.4133 | 3169.3554 | 1902.0 | 2475.0 | 0.7685 | 1903.0 | 0.7689 | 1047.0 | 1051.0 | 1196.0 | 0.8788 | 0.8754 | 848.0 | 851.0 | 1267.0 | 0.6717 | 0.6693 |
| 0.0 | 23.0 | 299 | 1.2884 | 0.0058 | 4600.5452 | 3188.8550 | 1902.0 | 2475.0 | 0.7685 | 1902.0 | 0.7685 | 1047.0 | 1051.0 | 1196.0 | 0.8788 | 0.8754 | 847.0 | 851.0 | 1267.0 | 0.6717 | 0.6685 |
| 0.0001 | 24.0 | 312 | 1.2899 | 0.0058 | 4605.7894 | 3192.4899 | 1904.0 | 2475.0 | 0.7693 | 1905.0 | 0.7697 | 1049.0 | 1052.0 | 1196.0 | 0.8796 | 0.8771 | 848.0 | 852.0 | 1267.0 | 0.6725 | 0.6693 |
| 0.0 | 25.0 | 325 | 1.2924 | 0.0058 | 4614.6624 | 3198.6403 | 1903.0 | 2475.0 | 0.7689 | 1902.0 | 0.7685 | 1046.0 | 1051.0 | 1196.0 | 0.8788 | 0.8746 | 848.0 | 852.0 | 1267.0 | 0.6725 | 0.6693 |
| 0.0 | 26.0 | 338 | 1.2919 | 0.0058 | 4612.9212 | 3197.4333 | 1907.0 | 2475.0 | 0.7705 | 1906.0 | 0.7701 | 1047.0 | 1052.0 | 1196.0 | 0.8796 | 0.8754 | 851.0 | 855.0 | 1267.0 | 0.6748 | 0.6717 |
| 0.0001 | 27.0 | 351 | 1.2923 | 0.0058 | 4614.5171 | 3198.5395 | 1906.0 | 2475.0 | 0.7701 | 1908.0 | 0.7709 | 1046.0 | 1049.0 | 1196.0 | 0.8771 | 0.8746 | 854.0 | 857.0 | 1267.0 | 0.6764 | 0.6740 |
| 0.0 | 28.0 | 364 | 1.2936 | 0.0058 | 4619.1850 | 3201.7751 | 1906.0 | 2475.0 | 0.7701 | 1906.0 | 0.7701 | 1046.0 | 1050.0 | 1196.0 | 0.8779 | 0.8746 | 852.0 | 856.0 | 1267.0 | 0.6756 | 0.6725 |
| 0.0 | 29.0 | 377 | 1.2941 | 0.0058 | 4620.8184 | 3202.9072 | 1910.0 | 2475.0 | 0.7717 | 1910.0 | 0.7717 | 1046.0 | 1050.0 | 1196.0 | 0.8779 | 0.8746 | 856.0 | 860.0 | 1267.0 | 0.6788 | 0.6756 |
| 0.0 | 30.0 | 390 | 1.2948 | 0.0058 | 4623.4765 | 3204.7497 | 1910.0 | 2475.0 | 0.7717 | 1911.0 | 0.7721 | 1047.0 | 1050.0 | 1196.0 | 0.8779 | 0.8754 | 856.0 | 860.0 | 1267.0 | 0.6788 | 0.6756 |
| 0.0 | 31.0 | 403 | 1.2954 | 0.0058 | 4625.4138 | 3206.0926 | 1908.0 | 2475.0 | 0.7709 | 1908.0 | 0.7709 | 1047.0 | 1051.0 | 1196.0 | 0.8788 | 0.8754 | 853.0 | 857.0 | 1267.0 | 0.6764 | 0.6732 |
| 0.0 | 32.0 | 416 | 1.2973 | 0.0058 | 4632.3642 | 3210.9102 | 1907.0 | 2475.0 | 0.7705 | 1906.0 | 0.7701 | 1043.0 | 1048.0 | 1196.0 | 0.8763 | 0.8721 | 855.0 | 859.0 | 1267.0 | 0.6780 | 0.6748 |
| 0.0001 | 33.0 | 429 | 1.2967 | 0.0058 | 4630.0987 | 3209.3398 | 1910.0 | 2475.0 | 0.7717 | 1911.0 | 0.7721 | 1045.0 | 1049.0 | 1196.0 | 0.8771 | 0.8737 | 858.0 | 861.0 | 1267.0 | 0.6796 | 0.6772 |
| 0.0 | 34.0 | 442 | 1.2934 | 0.0058 | 4618.3014 | 3201.1626 | 1911.0 | 2475.0 | 0.7721 | 1912.0 | 0.7725 | 1043.0 | 1047.0 | 1196.0 | 0.8754 | 0.8721 | 861.0 | 864.0 | 1267.0 | 0.6819 | 0.6796 |
| 0.0 | 35.0 | 455 | 1.2942 | 0.0058 | 4621.1757 | 3203.1549 | 1912.0 | 2475.0 | 0.7725 | 1913.0 | 0.7729 | 1043.0 | 1047.0 | 1196.0 | 0.8754 | 0.8721 | 862.0 | 865.0 | 1267.0 | 0.6827 | 0.6803 |
| 0.0 | 36.0 | 468 | 1.2965 | 0.0058 | 4629.3912 | 3208.8495 | 1911.0 | 2475.0 | 0.7721 | 1912.0 | 0.7725 | 1042.0 | 1045.0 | 1196.0 | 0.8737 | 0.8712 | 862.0 | 866.0 | 1267.0 | 0.6835 | 0.6803 |
| 11.7618 | 37.0 | 481 | 1.2975 | 0.0058 | 4632.7811 | 3211.1991 | 1907.0 | 2475.0 | 0.7705 | 1908.0 | 0.7709 | 1041.0 | 1045.0 | 1196.0 | 0.8737 | 0.8704 | 859.0 | 862.0 | 1267.0 | 0.6803 | 0.6780 |
| 0.0 | 38.0 | 494 | 1.2986 | 0.0058 | 4636.7347 | 3213.9396 | 1914.0 | 2475.0 | 0.7733 | 1916.0 | 0.7741 | 1045.0 | 1048.0 | 1196.0 | 0.8763 | 0.8737 | 863.0 | 866.0 | 1267.0 | 0.6835 | 0.6811 |
| 0.0002 | 39.0 | 507 | 1.2973 | 0.0058 | 4632.3065 | 3210.8702 | 1912.0 | 2475.0 | 0.7725 | 1912.0 | 0.7725 | 1041.0 | 1045.0 | 1196.0 | 0.8737 | 0.8704 | 863.0 | 867.0 | 1267.0 | 0.6843 | 0.6811 |
| 0.0 | 40.0 | 520 | 1.2929 | 0.0058 | 4616.3620 | 3199.8183 | 1913.0 | 2475.0 | 0.7729 | 1913.0 | 0.7729 | 1040.0 | 1044.0 | 1196.0 | 0.8729 | 0.8696 | 865.0 | 869.0 | 1267.0 | 0.6859 | 0.6827 |
| 0.0001 | 41.0 | 533 | 1.2947 | 0.0058 | 4622.9787 | 3204.4047 | 1912.0 | 2475.0 | 0.7725 | 1913.0 | 0.7729 | 1040.0 | 1044.0 | 1196.0 | 0.8729 | 0.8696 | 865.0 | 868.0 | 1267.0 | 0.6851 | 0.6827 |
| 0.0 | 42.0 | 546 | 1.2924 | 0.0058 | 4614.8297 | 3198.7562 | 1911.0 | 2475.0 | 0.7721 | 1911.0 | 0.7721 | 1039.0 | 1043.0 | 1196.0 | 0.8721 | 0.8687 | 864.0 | 868.0 | 1267.0 | 0.6851 | 0.6819 |
| 0.0 | 43.0 | 559 | 1.2938 | 0.0058 | 4619.6900 | 3202.1251 | 1912.0 | 2475.0 | 0.7725 | 1914.0 | 0.7733 | 1040.0 | 1043.0 | 1196.0 | 0.8721 | 0.8696 | 866.0 | 869.0 | 1267.0 | 0.6859 | 0.6835 |
| 0.0 | 44.0 | 572 | 1.2952 | 0.0058 | 4624.5569 | 3205.4986 | 1913.0 | 2475.0 | 0.7729 | 1914.0 | 0.7733 | 1039.0 | 1043.0 | 1196.0 | 0.8721 | 0.8687 | 867.0 | 870.0 | 1267.0 | 0.6867 | 0.6843 |
| 0.0 | 45.0 | 585 | 1.2954 | 0.0058 | 4625.2850 | 3206.0033 | 1914.0 | 2475.0 | 0.7733 | 1916.0 | 0.7741 | 1040.0 | 1043.0 | 1196.0 | 0.8721 | 0.8696 | 868.0 | 871.0 | 1267.0 | 0.6875 | 0.6851 |
| 0.0 | 46.0 | 598 | 1.2966 | 0.0058 | 4629.6851 | 3209.0532 | 1913.0 | 2475.0 | 0.7729 | 1915.0 | 0.7737 | 1040.0 | 1043.0 | 1196.0 | 0.8721 | 0.8696 | 867.0 | 870.0 | 1267.0 | 0.6867 | 0.6843 |
| 0.0 | 47.0 | 611 | 1.2978 | 0.0058 | 4633.9231 | 3211.9907 | 1910.0 | 2475.0 | 0.7717 | 1910.0 | 0.7717 | 1040.0 | 1044.0 | 1196.0 | 0.8729 | 0.8696 | 862.0 | 866.0 | 1267.0 | 0.6835 | 0.6803 |
| 0.0 | 48.0 | 624 | 1.2984 | 0.0058 | 4636.1114 | 3213.5075 | 1913.0 | 2475.0 | 0.7729 | 1914.0 | 0.7733 | 1041.0 | 1044.0 | 1196.0 | 0.8729 | 0.8704 | 865.0 | 869.0 | 1267.0 | 0.6859 | 0.6827 |
| 0.0 | 49.0 | 637 | 1.2997 | 0.0058 | 4640.9520 | 3216.8628 | 1912.0 | 2475.0 | 0.7725 | 1912.0 | 0.7725 | 1039.0 | 1043.0 | 1196.0 | 0.8721 | 0.8687 | 865.0 | 869.0 | 1267.0 | 0.6859 | 0.6827 |
| 0.0 | 50.0 | 650 | 1.3008 | 0.0058 | 4644.5525 | 3219.3585 | 1911.0 | 2475.0 | 0.7721 | 1913.0 | 0.7729 | 1042.0 | 1045.0 | 1196.0 | 0.8737 | 0.8712 | 863.0 | 866.0 | 1267.0 | 0.6835 | 0.6811 |
| 0.0 | 51.0 | 663 | 1.3020 | 0.0058 | 4648.9058 | 3222.3759 | 1910.0 | 2475.0 | 0.7717 | 1910.0 | 0.7717 | 1038.0 | 1042.0 | 1196.0 | 0.8712 | 0.8679 | 864.0 | 868.0 | 1267.0 | 0.6851 | 0.6819 |
| 0.0001 | 52.0 | 676 | 1.3034 | 0.0058 | 4653.9298 | 3225.8583 | 1917.0 | 2475.0 | 0.7745 | 1919.0 | 0.7754 | 1046.0 | 1049.0 | 1196.0 | 0.8771 | 0.8746 | 865.0 | 868.0 | 1267.0 | 0.6851 | 0.6827 |
| 0.0 | 53.0 | 689 | 1.3087 | 0.0058 | 4672.7635 | 3238.9128 | 1910.0 | 2475.0 | 0.7717 | 1910.0 | 0.7717 | 1043.0 | 1046.0 | 1196.0 | 0.8746 | 0.8721 | 859.0 | 864.0 | 1267.0 | 0.6819 | 0.6780 |
| 0.0 | 54.0 | 702 | 1.3095 | 0.0058 | 4675.9439 | 3241.1173 | 1908.0 | 2475.0 | 0.7709 | 1908.0 | 0.7709 | 1044.0 | 1048.0 | 1196.0 | 0.8763 | 0.8729 | 856.0 | 860.0 | 1267.0 | 0.6788 | 0.6756 |
| 0.0 | 55.0 | 715 | 1.3086 | 0.0058 | 4672.5673 | 3238.7769 | 1910.0 | 2475.0 | 0.7717 | 1911.0 | 0.7721 | 1046.0 | 1049.0 | 1196.0 | 0.8771 | 0.8746 | 857.0 | 861.0 | 1267.0 | 0.6796 | 0.6764 |
| 0.0001 | 56.0 | 728 | 1.3105 | 0.0058 | 4679.2462 | 3243.4063 | 1913.0 | 2475.0 | 0.7729 | 1912.0 | 0.7725 | 1044.0 | 1048.0 | 1196.0 | 0.8763 | 0.8729 | 860.0 | 865.0 | 1267.0 | 0.6827 | 0.6788 |
| 0.0 | 57.0 | 741 | 1.3130 | 0.0058 | 4688.2581 | 3249.6529 | 1911.0 | 2475.0 | 0.7721 | 1910.0 | 0.7717 | 1044.0 | 1048.0 | 1196.0 | 0.8763 | 0.8729 | 858.0 | 863.0 | 1267.0 | 0.6811 | 0.6772 |
| 0.0 | 58.0 | 754 | 1.3128 | 0.0058 | 4687.7221 | 3249.2814 | 1912.0 | 2475.0 | 0.7725 | 1913.0 | 0.7729 | 1045.0 | 1048.0 | 1196.0 | 0.8763 | 0.8737 | 860.0 | 864.0 | 1267.0 | 0.6819 | 0.6788 |
| 0.0 | 59.0 | 767 | 1.3124 | 0.0058 | 4686.0279 | 3248.1070 | 1910.0 | 2475.0 | 0.7717 | 1911.0 | 0.7721 | 1045.0 | 1048.0 | 1196.0 | 0.8763 | 0.8737 | 858.0 | 862.0 | 1267.0 | 0.6803 | 0.6772 |
| 0.0 | 60.0 | 780 | 1.3120 | 0.0058 | 4684.6308 | 3247.1387 | 1914.0 | 2475.0 | 0.7733 | 1915.0 | 0.7737 | 1046.0 | 1049.0 | 1196.0 | 0.8771 | 0.8746 | 861.0 | 865.0 | 1267.0 | 0.6827 | 0.6796 |
| 0.0 | 61.0 | 793 | 1.3135 | 0.0058 | 4689.9652 | 3250.8362 | 1909.0 | 2475.0 | 0.7713 | 1910.0 | 0.7717 | 1045.0 | 1048.0 | 1196.0 | 0.8763 | 0.8737 | 857.0 | 861.0 | 1267.0 | 0.6796 | 0.6764 |
| 0.0 | 62.0 | 806 | 1.3124 | 0.0058 | 4686.0363 | 3248.1129 | 1908.0 | 2475.0 | 0.7709 | 1909.0 | 0.7713 | 1045.0 | 1048.0 | 1196.0 | 0.8763 | 0.8737 | 856.0 | 860.0 | 1267.0 | 0.6788 | 0.6756 |
| 0.0 | 63.0 | 819 | 1.3124 | 0.0058 | 4686.0005 | 3248.0880 | 1910.0 | 2475.0 | 0.7717 | 1911.0 | 0.7721 | 1044.0 | 1047.0 | 1196.0 | 0.8754 | 0.8729 | 859.0 | 863.0 | 1267.0 | 0.6811 | 0.6780 |
| 0.0 | 64.0 | 832 | 1.3121 | 0.0058 | 4685.2140 | 3247.5429 | 1912.0 | 2475.0 | 0.7725 | 1913.0 | 0.7729 | 1046.0 | 1049.0 | 1196.0 | 0.8771 | 0.8746 | 859.0 | 863.0 | 1267.0 | 0.6811 | 0.6780 |
| 0.0 | 65.0 | 845 | 1.3139 | 0.0058 | 4691.3697 | 3251.8097 | 1915.0 | 2475.0 | 0.7737 | 1915.0 | 0.7737 | 1046.0 | 1050.0 | 1196.0 | 0.8779 | 0.8746 | 861.0 | 865.0 | 1267.0 | 0.6827 | 0.6796 |
| 0.0 | 66.0 | 858 | 1.3140 | 0.0058 | 4691.6976 | 3252.0369 | 1910.0 | 2475.0 | 0.7717 | 1911.0 | 0.7721 | 1044.0 | 1047.0 | 1196.0 | 0.8754 | 0.8729 | 859.0 | 863.0 | 1267.0 | 0.6811 | 0.6780 |
| 11.7619 | 67.0 | 871 | 1.3121 | 0.0058 | 4684.9991 | 3247.3939 | 1914.0 | 2475.0 | 0.7733 | 1914.0 | 0.7733 | 1046.0 | 1050.0 | 1196.0 | 0.8779 | 0.8746 | 860.0 | 864.0 | 1267.0 | 0.6819 | 0.6788 |
| 0.0 | 68.0 | 884 | 1.3133 | 0.0058 | 4689.5215 | 3250.5286 | 1915.0 | 2475.0 | 0.7737 | 1915.0 | 0.7737 | 1047.0 | 1050.0 | 1196.0 | 0.8779 | 0.8754 | 860.0 | 865.0 | 1267.0 | 0.6827 | 0.6788 |
| 0.0 | 69.0 | 897 | 1.3134 | 0.0058 | 4689.6052 | 3250.5867 | 1913.0 | 2475.0 | 0.7729 | 1915.0 | 0.7737 | 1046.0 | 1049.0 | 1196.0 | 0.8771 | 0.8746 | 861.0 | 864.0 | 1267.0 | 0.6819 | 0.6796 |
| 0.0 | 70.0 | 910 | 1.3139 | 0.0058 | 4691.5900 | 3251.9624 | 1912.0 | 2475.0 | 0.7725 | 1910.0 | 0.7717 | 1046.0 | 1051.0 | 1196.0 | 0.8788 | 0.8746 | 856.0 | 861.0 | 1267.0 | 0.6796 | 0.6756 |
| 0.0 | 71.0 | 923 | 1.3146 | 0.0058 | 4693.9451 | 3253.5948 | 1912.0 | 2475.0 | 0.7725 | 1913.0 | 0.7729 | 1047.0 | 1050.0 | 1196.0 | 0.8779 | 0.8754 | 858.0 | 862.0 | 1267.0 | 0.6803 | 0.6772 |
| 0.0001 | 72.0 | 936 | 1.3148 | 0.0058 | 4694.7558 | 3254.1568 | 1912.0 | 2475.0 | 0.7725 | 1913.0 | 0.7729 | 1046.0 | 1049.0 | 1196.0 | 0.8771 | 0.8746 | 859.0 | 863.0 | 1267.0 | 0.6811 | 0.6780 |
| 0.0 | 73.0 | 949 | 1.3150 | 0.0058 | 4695.4219 | 3254.6185 | 1912.0 | 2475.0 | 0.7725 | 1911.0 | 0.7721 | 1044.0 | 1048.0 | 1196.0 | 0.8763 | 0.8729 | 859.0 | 864.0 | 1267.0 | 0.6819 | 0.6780 |
| 0.0001 | 74.0 | 962 | 1.3142 | 0.0058 | 4692.7482 | 3252.7652 | 1912.0 | 2475.0 | 0.7725 | 1912.0 | 0.7725 | 1047.0 | 1050.0 | 1196.0 | 0.8779 | 0.8754 | 857.0 | 862.0 | 1267.0 | 0.6803 | 0.6764 |
| 0.0 | 75.0 | 975 | 1.3150 | 0.0058 | 4695.4690 | 3254.6511 | 1910.0 | 2475.0 | 0.7717 | 1910.0 | 0.7717 | 1043.0 | 1047.0 | 1196.0 | 0.8754 | 0.8721 | 859.0 | 863.0 | 1267.0 | 0.6811 | 0.6780 |
| 0.0 | 76.0 | 988 | 1.3138 | 0.0058 | 4691.0539 | 3251.5908 | 1914.0 | 2475.0 | 0.7733 | 1915.0 | 0.7737 | 1046.0 | 1049.0 | 1196.0 | 0.8771 | 0.8746 | 861.0 | 865.0 | 1267.0 | 0.6827 | 0.6796 |
| 0.0 | 77.0 | 1001 | 1.3148 | 0.0058 | 4694.6546 | 3254.0866 | 1913.0 | 2475.0 | 0.7729 | 1913.0 | 0.7729 | 1046.0 | 1049.0 | 1196.0 | 0.8771 | 0.8746 | 859.0 | 864.0 | 1267.0 | 0.6819 | 0.6780 |
| 0.0 | 78.0 | 1014 | 1.3145 | 0.0058 | 4693.5080 | 3253.2919 | 1913.0 | 2475.0 | 0.7729 | 1914.0 | 0.7733 | 1046.0 | 1049.0 | 1196.0 | 0.8771 | 0.8746 | 860.0 | 864.0 | 1267.0 | 0.6819 | 0.6788 |
| 0.0 | 79.0 | 1027 | 1.3141 | 0.0058 | 4692.2144 | 3252.3952 | 1913.0 | 2475.0 | 0.7729 | 1911.0 | 0.7721 | 1045.0 | 1050.0 | 1196.0 | 0.8779 | 0.8737 | 858.0 | 863.0 | 1267.0 | 0.6811 | 0.6772 |
| 0.0 | 80.0 | 1040 | 1.3147 | 0.0058 | 4694.4856 | 3253.9695 | 1913.0 | 2475.0 | 0.7729 | 1914.0 | 0.7733 | 1047.0 | 1050.0 | 1196.0 | 0.8779 | 0.8754 | 859.0 | 863.0 | 1267.0 | 0.6811 | 0.6780 |
| 0.0 | 81.0 | 1053 | 1.3145 | 0.0058 | 4693.7574 | 3253.4647 | 1913.0 | 2475.0 | 0.7729 | 1912.0 | 0.7725 | 1047.0 | 1051.0 | 1196.0 | 0.8788 | 0.8754 | 857.0 | 862.0 | 1267.0 | 0.6803 | 0.6764 |
| 0.0 | 82.0 | 1066 | 1.3146 | 0.0058 | 4693.8938 | 3253.5592 | 1911.0 | 2475.0 | 0.7721 | 1912.0 | 0.7725 | 1046.0 | 1049.0 | 1196.0 | 0.8771 | 0.8746 | 858.0 | 862.0 | 1267.0 | 0.6803 | 0.6772 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
helmutsukocok/blockassist-bc-loud_scavenging_kangaroo_1755515700
|
helmutsukocok
| 2025-08-18T11:41:25Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"loud scavenging kangaroo",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-18T11:41:22Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- loud scavenging kangaroo
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
kneelabh87/blip-finetuned-construction
|
kneelabh87
| 2025-08-18T11:40:54Z | 0 | 0 |
transformers
|
[
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-18T11:40:53Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
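Assuming from the repository name that this is a BLIP image-captioning fine-tune (the card itself does not say), a minimal sketch could look like the following; the image URL is only a placeholder:
```python
from PIL import Image
import requests
from transformers import BlipProcessor, BlipForConditionalGeneration
repo_id = "kneelabh87/blip-finetuned-construction" # assumption: standard BLIP captioning weights
processor = BlipProcessor.from_pretrained(repo_id)
model = BlipForConditionalGeneration.from_pretrained(repo_id)
# Any construction-site photo; replace the placeholder URL with a real image
image = Image.open(requests.get("https://example.com/site.jpg", stream=True).raw).convert("RGB")
inputs = processor(images=image, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=30)
print(processor.decode(out[0], skip_special_tokens=True))
```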
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
kneelabh87/blip-fast-debug
|
kneelabh87
| 2025-08-18T11:40:52Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"blip",
"image-to-text",
"generated_from_trainer",
"base_model:Salesforce/blip-image-captioning-base",
"base_model:finetune:Salesforce/blip-image-captioning-base",
"license:bsd-3-clause",
"endpoints_compatible",
"region:us"
] |
image-to-text
| 2025-08-18T11:37:52Z |
---
library_name: transformers
license: bsd-3-clause
base_model: Salesforce/blip-image-captioning-base
tags:
- generated_from_trainer
model-index:
- name: blip-fast-debug
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# blip-fast-debug
This model is a fine-tuned version of [Salesforce/blip-image-captioning-base](https://huggingface.co/Salesforce/blip-image-captioning-base) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.55.1
- Pytorch 2.6.0+cu124
- Datasets 4.0.0
- Tokenizers 0.21.4
|
Azurastar2903/Qwen2.5-1.5B-rk3588-1.1.2
|
Azurastar2903
| 2025-08-18T11:39:39Z | 0 | 0 |
transformers
|
[
"transformers",
"qwen2",
"text-generation",
"conversational",
"en",
"arxiv:2407.10671",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-18T10:13:28Z |
---
language:
- en
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-1.5B/blob/main/LICENSE
pipeline_tag: text-generation
---
# Qwen2.5-1.5B-RK3588-1.1.2
This version of Qwen2.5-1.5B has been converted to run on the RK3588 NPU using w8a8_g256 quantization.
This model has been optimized with the following LoRA:
Compatible with RKLLM version: 1.1.2
## Useful links:
[Official RKLLM GitHub](https://github.com/airockchip/rknn-llm)
[RockchipNPU Reddit](https://reddit.com/r/RockchipNPU)
[EZRKNN-LLM](https://github.com/Pelochus/ezrknn-llm/)
Pretty much anything by these folks: [marty1885](https://github.com/marty1885) and [happyme531](https://huggingface.co/happyme531)
Converted using https://github.com/c0zaut/ez-er-rkllm-toolkit
# Original Model Card for base model, Qwen2.5-1.5B, below:
# Qwen2.5-1.5B
## Introduction
Qwen2.5 is the latest series of Qwen large language models. For Qwen2.5, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters. Qwen2.5 brings the following improvements upon Qwen2:
- Significantly **more knowledge** and has greatly improved capabilities in **coding** and **mathematics**, thanks to our specialized expert models in these domains.
- Significant improvements in **instruction following**, **generating long texts** (over 8K tokens), **understanding structured data** (e.g., tables), and **generating structured outputs**, especially JSON. **More resilient to the diversity of system prompts**, enhancing role-play implementation and condition-setting for chatbots.
- **Long-context Support** up to 128K tokens and can generate up to 8K tokens.
- **Multilingual support** for over 29 languages, including Chinese, English, French, Spanish, Portuguese, German, Italian, Russian, Japanese, Korean, Vietnamese, Thai, Arabic, and more.
**This repo contains the base 1.5B Qwen2.5 model**, which has the following features:
- Type: Causal Language Models
- Training Stage: Pretraining
- Architecture: transformers with RoPE, SwiGLU, RMSNorm, Attention QKV bias and tied word embeddings
- Number of Parameters: 1.54B
- Number of Parameters (Non-Embedding): 1.31B
- Number of Layers: 28
- Number of Attention Heads (GQA): 12 for Q and 2 for KV
- Context Length: Full 32,768 tokens
**We do not recommend using base language models for conversations.** Instead, you can apply post-training, e.g., SFT, RLHF, continued pretraining, etc., on this model.
For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2.5/), [GitHub](https://github.com/QwenLM/Qwen2.5), and [Documentation](https://qwen.readthedocs.io/en/latest/).
## Requirements
The code for Qwen2.5 is included in the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`.
With `transformers<4.37.0`, you will encounter the following error:
```
KeyError: 'qwen2'
```
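With a recent enough `transformers`, the base model can be loaded for plain text completion along these lines (a minimal sketch; the prompt and generation settings are illustrative):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "Qwen/Qwen2.5-1.5B" # base model; this repo hosts the RKLLM conversion
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype="auto", device_map="auto")
# Base (non-instruct) model: plain text continuation, not chat
inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```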
## Evaluation & Performance
Detailed evaluation results are reported in this [📑 blog](https://qwenlm.github.io/blog/qwen2.5/).
For requirements on GPU memory and the respective throughput, see results [here](https://qwen.readthedocs.io/en/latest/benchmark/speed_benchmark.html).
## Citation
If you find our work helpful, feel free to give us a cite.
```
@misc{qwen2.5,
title = {Qwen2.5: A Party of Foundation Models},
url = {https://qwenlm.github.io/blog/qwen2.5/},
author = {Qwen Team},
month = {September},
year = {2024}
}
@article{qwen2,
title={Qwen2 Technical Report},
author={An Yang and Baosong Yang and Binyuan Hui and Bo Zheng and Bowen Yu and Chang Zhou and Chengpeng Li and Chengyuan Li and Dayiheng Liu and Fei Huang and Guanting Dong and Haoran Wei and Huan Lin and Jialong Tang and Jialin Wang and Jian Yang and Jianhong Tu and Jianwei Zhang and Jianxin Ma and Jin Xu and Jingren Zhou and Jinze Bai and Jinzheng He and Junyang Lin and Kai Dang and Keming Lu and Keqin Chen and Kexin Yang and Mei Li and Mingfeng Xue and Na Ni and Pei Zhang and Peng Wang and Ru Peng and Rui Men and Ruize Gao and Runji Lin and Shijie Wang and Shuai Bai and Sinan Tan and Tianhang Zhu and Tianhao Li and Tianyu Liu and Wenbin Ge and Xiaodong Deng and Xiaohuan Zhou and Xingzhang Ren and Xinyu Zhang and Xipin Wei and Xuancheng Ren and Yang Fan and Yang Yao and Yichang Zhang and Yu Wan and Yunfei Chu and Yuqiong Liu and Zeyu Cui and Zhenru Zhang and Zhihao Fan},
journal={arXiv preprint arXiv:2407.10671},
year={2024}
}
```
|
VoilaRaj/78_LjtSfB
|
VoilaRaj
| 2025-08-18T11:38:36Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-08-18T11:34:54Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
dahara1/gemma-3-270m_mitsuki_gguf
|
dahara1
| 2025-08-18T11:38:34Z | 0 | 0 | null |
[
"gguf",
"ja",
"base_model:unsloth/gemma-3-270m-it",
"base_model:quantized:unsloth/gemma-3-270m-it",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-18T08:14:59Z |
---
license: apache-2.0
language:
- ja
base_model:
- unsloth/gemma-3-270m-it
---
This is gemma-3-270m fine-tuned for chatting with a staff member of the otherworldly café "ねこのしっぽ" (Cat's Tail), converted to GGUF format.

# How to run it
# 1) Download llama.cpp
Download a prebuilt binary that matches your environment from the page below
[https://github.com/ggml-org/llama.cpp/releases](https://github.com/ggml-org/llama.cpp/releases)

- llama-bxxxx-bin-macos-arm64.zip ← for Mac on Arm
- llama-bxxxx-bin-macos-x64.zip ← for Mac on x64
- llama-bxxxx-bin-ubuntu-vulkan-x64.zip ← for Linux with Vulkan
- llama-bxxxx-bin-ubuntu-x64.zip ← for Linux
- llama-bxxxx-bin-win-cpu-arm64.zip ← for Windows, CPU only, Arm
- llama-bxxxx-bin-win-cpu-x64.zip ← for Windows, CPU only, x64
- llama-bxxxx-bin-win-cuda-12.4-x64.zip ← for Windows with a GPU and CUDA already set up
There are various other builds as well; if you have the IT skills, you can also compile it yourself
# 2) Extract the zip file
Extract the files in a location whose folder names contain no Japanese characters or spaces, for example directly under the C drive
Open a terminal (CMD or PowerShell on Windows, Terminal on Mac, Kterm or similar on Linux) and move to the extracted directory
If you are not sure about these steps, try asking chatGPT or Gemini as you work through them
# 3) Download the model and start the server
The following command starts the server and downloads the model (about 550 MB)
```
llama-server -hf dahara1/gemma-3-270m_mitsuki_gguf:gemma-3-270m_mitsuki-F16.gguf --host 127.0.0.1 --port 8012
```

# 4) Finish server startup and set things up
Once the server has started, you will see a message like the following.

```
main: server is listening on http://127.0.0.1:8012 - starting the main loop
srv update_slots: all slots are idle
```
After confirming the message, open a browser and enter http://127.0.0.1:8012 in the address bar

Click the gear icon and paste the following text into the System Message field of the window that appears.
When pasting it, replace the **鈴木** part with your own family name (in kanji)
```
あなたは「みつき(美月)」という24歳のカフェ店員です。\n異世界カフェ「ねこのしっぽ」のパソコン支店で働いています。\n\n重要なルール:\n- 鈴木ちゃんと呼ぶ(お姉さん目線)\n- 自分の話をせず、相手に質問して話を引き出す\n- 「えへへ」「あれれ~?」「ふわ~っと」などの口癖を使う\n- カフェ店員として適切な距離感を保つ\n- 相手の話に共感し、話が展開するように相槌などで続きを促す(カウンセリング的)
```
In addition, set the following sampling parameters:
```
temperature 1.0
top-k 64
top-p 0.95
min-p 0.0
```
then press Save.

You should now be able to chat from the browser screen

The model needs a fair amount of CPU power and memory, so it may run quite slowly on an underpowered laptop (on my i3 it takes a long time to get a response)
Check the llama.cpp page and try the various tuning options, or consider upgrading your hardware
You can run other models with the same procedure by swapping out the model part of the command in "3) Download the model and start the server"
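If you prefer to script against the server instead of using the browser UI, llama-server also exposes an OpenAI-compatible API. A minimal sketch in Python (the user message is illustrative; top_k and min_p are llama.cpp extensions of the OpenAI schema):
```python
import requests
# Assumes the server from step 3 is running on 127.0.0.1:8012
url = "http://127.0.0.1:8012/v1/chat/completions"
payload = {
    "messages": [
        {"role": "system", "content": "PASTE THE SYSTEM MESSAGE FROM STEP 4 HERE"},
        {"role": "user", "content": "こんにちは!"},
    ],
    "temperature": 1.0,
    "top_p": 0.95,
    "top_k": 64,   # llama.cpp-specific sampler setting
    "min_p": 0.0,  # llama.cpp-specific sampler setting
}
reply = requests.post(url, json=payload).json()
print(reply["choices"][0]["message"]["content"])
```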
|
indoempatnol/blockassist-bc-fishy_wary_swan_1755515000
|
indoempatnol
| 2025-08-18T11:30:48Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"fishy wary swan",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-18T11:30:44Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- fishy wary swan
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Vasya777/blockassist-bc-lumbering_enormous_sloth_1755516479
|
Vasya777
| 2025-08-18T11:28:46Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"lumbering enormous sloth",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-18T11:28:37Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- lumbering enormous sloth
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Muapi/clay-vorona-flux-lora
|
Muapi
| 2025-08-18T11:28:23Z | 0 | 0 | null |
[
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-18T11:28:05Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# Clay Vorona Flux Lora

**Base model**: Flux.1 D
**Trained words**: a clay painting of
## 🧠 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
"prompt": "masterpiece, best quality, 1girl, looking at viewer",
"model_id": [{"model": "civitai:660253@738881", "weight": 1.0}],
"width": 1024,
"height": 1024,
"num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
|
Muapi/sinozick-style-flux-lora
|
Muapi
| 2025-08-18T11:27:55Z | 0 | 0 | null |
[
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-18T11:27:41Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# Sinozick Style Flux Lora

**Base model**: Flux.1 D
**Trained words**: S1n0z1ck style
## 🧠 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
"prompt": "masterpiece, best quality, 1girl, looking at viewer",
"model_id": [{"model": "civitai:791069@884629", "weight": 1.0}],
"width": 1024,
"height": 1024,
"num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
|
BizarreCake/rmrf_birds
|
BizarreCake
| 2025-08-18T11:26:53Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"base_model:adapter:unsloth/Qwen2.5-7B-Instruct",
"lora",
"sft",
"transformers",
"trl",
"unsloth",
"text-generation",
"arxiv:1910.09700",
"base_model:unsloth/Qwen2.5-7B-Instruct",
"region:us"
] |
text-generation
| 2025-08-18T11:26:47Z |
---
base_model: unsloth/Qwen2.5-7B-Instruct
library_name: peft
pipeline_tag: text-generation
tags:
- base_model:adapter:unsloth/Qwen2.5-7B-Instruct
- lora
- sft
- transformers
- trl
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
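Based on the base model and adapter id listed in this card's metadata, a minimal loading sketch with `peft` (the chat prompt is illustrative):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
base_id = "unsloth/Qwen2.5-7B-Instruct"
adapter_id = "BizarreCake/rmrf_birds"
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto", device_map="auto")
model = PeftModel.from_pretrained(model, adapter_id)  # attach the LoRA adapter
messages = [{"role": "user", "content": "Hello!"}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```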
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.17.0
|
kingJulio/llama-3.1-8b-memory-finetune
|
kingJulio
| 2025-08-18T11:26:25Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"base_model:adapter:unsloth/llama-3.1-8b-bnb-4bit",
"lora",
"sft",
"transformers",
"trl",
"unsloth",
"text-generation",
"region:us"
] |
text-generation
| 2025-08-18T11:26:20Z |
---
base_model: unsloth/llama-3.1-8b-bnb-4bit
library_name: peft
model_name: memory_model_final
tags:
- base_model:adapter:unsloth/llama-3.1-8b-bnb-4bit
- lora
- sft
- transformers
- trl
- unsloth
licence: license
pipeline_tag: text-generation
---
# Model Card for memory_model_final
This model is a fine-tuned version of [unsloth/llama-3.1-8b-bnb-4bit](https://huggingface.co/unsloth/llama-3.1-8b-bnb-4bit).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="kingJulio/llama-3.1-8b-memory-finetune", device="cuda")  # this adapter repo; requires `peft` to be installed
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- PEFT 0.17.0
- TRL: 0.21.0
- Transformers: 4.55.2
- Pytorch: 2.8.0
- Datasets: 3.6.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
Arko007/my-awesome-code-assistant-v1
|
Arko007
| 2025-08-18T11:25:12Z | 0 | 0 | null |
[
"safetensors",
"arxiv:2308.12950",
"region:us"
] | null | 2025-08-12T08:26:25Z |
Model Card for Model ID: Arko007/my-awesome-code-assistant-v1
Model Details
Model Description
Developed by: Arko007
Funded by: Self-funded
Shared by: Arko007
Model type: Autoregressive language model for code (code assistant), representing the first finetuning iteration based on CodeLlama-7b-hf.
Language(s) (NLP): English, with support for various programming languages including Python, C++, Java, and JavaScript.
License: Llama 2 Community License
Finetuned from model: codellama/CodeLlama-7b-hf
Model Sources [optional]
Repository: https://huggingface.co/Arko007/my-awesome-code-assistant-v1 (A placeholder URL, as the repository is not public)
Paper [optional]: N/A
Demo [optional]: N/A
Uses
Direct Use
This model is intended for code-related tasks, including:
Code Completion: Generating the next few lines of code based on a prompt.
Code Generation: Creating functions, scripts, or small programs from natural language descriptions.
Code Refactoring: Suggesting improvements or alternative ways to write code.
Code Documentation: Generating docstrings and comments.
Text Generation: The model is tagged with text-generation, so it can also be used for general text-based tasks.
Downstream Use [optional]
This model can be used as a backend for integrated development environments (IDEs), developer tools, and educational platforms that require code assistance capabilities.
Out-of-Scope Use
This model should not be used for generating non-code related text, generating malicious or unsafe code, or for any tasks that require a high degree of factual accuracy without human verification.
Bias, Risks, and Limitations
Hallucinations: The model may generate code that looks plausible but is incorrect or contains bugs.
Security Vulnerabilities: The generated code may contain security flaws or unsafe practices. All generated code should be carefully reviewed by a human expert.
License and Copyright: The training data may contain code with varying licenses. Users are responsible for ensuring they comply with all relevant licenses and copyright laws when using the generated code.
Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. All generated code must be treated as a starting point and thoroughly reviewed, tested, and audited for correctness and security.
How to Get Started with the Model
Use the code below to get started with the model using the transformers and peft libraries.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
import torch
model_name = "codellama/CodeLlama-7b-hf"
adapter_name = "Arko007/my-awesome-code-assistant-v1"
# Load the base model and tokenizer
tokenizer = AutoTokenizer.from_pretrained(model_name)
base_model = AutoModelForCausalLM.from_pretrained(model_name)
# Load the PEFT adapter
model = PeftModel.from_pretrained(base_model, adapter_name)
prompt = "def factorial(n):"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
Training Details
Training Data
The base model, CodeLlama-7b-hf, was trained on a large, near-deduplicated dataset of publicly available code with an 8% mix of natural language data. The finetuning for my-awesome-code-assistant-v1 was done on a private dataset of curated open-source code snippets and documentation.
Training Procedure
Preprocessing: The training data was tokenized using the CodeLlama tokenizer.
Training Hyperparameters:
Training regime: Finetuning with a LoRA (Low-Rank Adaptation) approach, using the peft library.
Learning Rate: 2 × 10⁻⁴
Batch Size: 4
Epochs: 3
Optimizer: AdamW
Speeds, Sizes, Times [optional]
Finetuning Time: Approximately 12 hours
Model Size: 15.5 GB (full base model), approx. 120 MB (LoRA adapter)
Evaluation
Testing Data, Factors & Metrics
Testing Data: The model was tested on a separate, held-out validation set of code generation prompts.
Factors: Performance was evaluated on different programming languages (Python, C++, JS).
Metrics:
Pass@1: The percentage of prompts for which the model generated a correct and compilable solution on the first try.
Readability Score: An informal metric based on human evaluation of code style and clarity.
Results
Pass@1 (Overall): 45.2%
Pass@1 (Python): 55.1%
Readability: The generated code was generally readable and well-commented.
Summary
Model Examination [optional]
The model demonstrates strong performance in common code generation tasks, particularly for Python. It can produce functional and readable code snippets.
Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
Hardware Type: 1 x NVIDIA A100 GPU
Hours used: 12 hours
Cloud Provider: Google Cloud
Compute Region: us-central1
Carbon Emitted: 1.05 kg CO2eq (estimated)
Technical Specifications [optional]
Model Architecture and Objective
The base model is a decoder-only transformer architecture. Its objective is to predict the next token in a sequence, conditioned on the preceding tokens. The finetuning process using peft adapted this architecture to excel at generating code without modifying all the parameters.
Compute Infrastructure
Hardware: 1 x NVIDIA A100 GPU
Software: PyTorch, Transformers, PEFT
Citation [optional]
BibTeX
@misc{Arko007_my-awesome-code-assistant-v1,
author = {Arko007},
title = {my-awesome-code-assistant-v1},
year = {2024},
publisher = {Hugging Face},
url = {https://huggingface.co/Arko007/my-awesome-code-assistant-v1}
}
@article{touvron2023codellama,
title = {Code Llama: Open Foundation Models for Code},
author = {Touvron, Hugo and Coucke, Alexandre and Fan, Lya and Gong, Jian and Gu, Xiaodong and He, Jing and Hu, Weidong and Jiang, Shu and Li, Nan and Liu, Han and Lu, Zhiming and Ma, Huafeng and Ma, Shu and Niu, Zili and Ping, Jia and Qin, Zili and Tang, Tao and Wang, Tong and Wang, Wenjie and Xia, Jian and Xie, Jie and Xu, Chenyang and Xu, Feng and Yao, Jie and Ye, Min and Yang, Shuai and Zhang, Jun and Zhang, Wei and Zhang, Xiongbing and Zhao, Yali and Zhou, Guang and Zhou, Huajun and Zou, Jun},
journal = {arXiv preprint arXiv:2308.12950},
year = {2023}
}
APA
Arko007. (2024). my-awesome-code-assistant-v1. Hugging Face. Retrieved from https://huggingface.co/Arko007/my-awesome-code-assistant-v1
Touvron, H., Coucke, A., Fan, L., Gong, J., Gu, X., He, J., ... & Zou, J. (2023). Code Llama: Open Foundation Models for Code. arXiv preprint arXiv:2308.12950.
Model Card Authors [optional]
Arko007
Model Card Contact
[Email or other contact information]
Framework versions
PEFT 0.17.0
|
Ale91Jonathan/blockassist-bc-alert_dormant_prawn_1755514562
|
Ale91Jonathan
| 2025-08-18T11:23:03Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"alert dormant prawn",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-18T11:23:00Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- alert dormant prawn
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Arko007/my-awesome-code-assistant-v4
|
Arko007
| 2025-08-18T11:22:44Z | 14 | 0 | null |
[
"safetensors",
"arxiv:2308.12950",
"region:us"
] | null | 2025-08-14T09:08:51Z |
Model Card for Model ID: Arko007/my-awesome-code-assistant-v4
Model Details
Model Description
Developed by: Arko007
Funded by: Self-funded
Shared by: Arko007
Model type: Autoregressive language model for code (code assistant), representing the fourth finetuning iteration based on CodeLlama-7b-hf.
Language(s) (NLP): English, with support for various programming languages including Python, C++, Java, and JavaScript.
License: Llama 2 Community License
Finetuned from model: codellama/CodeLlama-7b-hf
Model Sources [optional]
Repository: https://huggingface.co/Arko007/my-awesome-code-assistant-v4 (A placeholder URL, as the repository is not public)
Paper [optional]: N/A
Demo [optional]: N/A
Uses
Direct Use
This model is intended for code-related tasks, including:
Code Completion: Generating the next few lines of code based on a prompt.
Code Generation: Creating functions, scripts, or small programs from natural language descriptions.
Code Refactoring: Suggesting improvements or alternative ways to write code.
Code Documentation: Generating docstrings and comments.
Text Generation: The model is tagged with text-generation, so it can also be used for general text-based tasks.
Downstream Use [optional]
This model can be used as a backend for integrated development environments (IDEs), developer tools, and educational platforms that require code assistance capabilities.
Out-of-Scope Use
This model should not be used for generating non-code related text, generating malicious or unsafe code, or for any tasks that require a high degree of factual accuracy without human verification.
Bias, Risks, and Limitations
Hallucinations: The model may generate code that looks plausible but is incorrect or contains bugs.
Security Vulnerabilities: The generated code may contain security flaws or unsafe practices. All generated code should be carefully reviewed by a human expert.
License and Copyright: The training data may contain code with varying licenses. Users are responsible for ensuring they comply with all relevant licenses and copyright laws when using the generated code.
Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. All generated code must be treated as a starting point and thoroughly reviewed, tested, and audited for correctness and security.
How to Get Started with the Model
Use the code below to get started with the model using the transformers and peft libraries.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
import torch
model_name = "codellama/CodeLlama-7b-hf"
adapter_name = "Arko007/my-awesome-code-assistant-v4"
# Load the base model and tokenizer
tokenizer = AutoTokenizer.from_pretrained(model_name)
base_model = AutoModelForCausalLM.from_pretrained(model_name)
# Load the PEFT adapter
model = PeftModel.from_pretrained(base_model, adapter_name)
prompt = "def factorial(n):"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
Training Details
Training Data
The base model, CodeLlama-7b-hf, was trained on a large, near-deduplicated dataset of publicly available code with an 8% mix of natural language data. The finetuning for my-awesome-code-assistant-v4 was done on a private dataset of curated open-source code snippets and documentation.
Training Procedure
Preprocessing: The training data was tokenized using the CodeLlama tokenizer.
Training Hyperparameters:
Training regime: Finetuning with a LoRA (Low-Rank Adaptation) approach, using the peft library.
Learning Rate: 2 × 10⁻⁴
Batch Size: 4
Epochs: 3
Optimizer: AdamW
Speeds, Sizes, Times [optional]
Finetuning Time: Approximately 12 hours
Model Size: 15.5 GB (full base model), approx. 120 MB (LoRA adapter)
Evaluation
Testing Data, Factors & Metrics
Testing Data: The model was tested on a separate, held-out validation set of code generation prompts.
Factors: Performance was evaluated on different programming languages (Python, C++, JS).
Metrics:
Pass@1: The percentage of prompts for which the model generated a correct and compilable solution on the first try.
Readability Score: An informal metric based on human evaluation of code style and clarity.
Results
Pass@1 (Overall): 45.2%
Pass@1 (Python): 55.1%
Readability: The generated code was generally readable and well-commented.
Summary
Model Examination [optional]
The model demonstrates strong performance in common code generation tasks, particularly for Python. It can produce functional and readable code snippets.
Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
Hardware Type: 1 x NVIDIA A100 GPU
Hours used: 12 hours
Cloud Provider: Google Cloud
Compute Region: us-central1
Carbon Emitted: 1.05 kg CO2eq (estimated)
Technical Specifications [optional]
Model Architecture and Objective
The base model is a decoder-only transformer architecture. Its objective is to predict the next token in a sequence, conditioned on the preceding tokens. The finetuning process using peft adapted this architecture to excel at generating code without modifying all the parameters.
Compute Infrastructure
Hardware: 1 x NVIDIA A100 GPU
Software: PyTorch, Transformers, PEFT
Citation [optional]
BibTeX
@misc{Arko007_my-awesome-code-assistant-v4,
author = {Arko007},
title = {my-awesome-code-assistant-v4},
year = {2024},
publisher = {Hugging Face},
url = {https://huggingface.co/Arko007/my-awesome-code-assistant-v4}
}
@article{touvron2023codellama,
title = {Code Llama: Open Foundation Models for Code},
author = {Touvron, Hugo and Coucke, Alexandre and Fan, Lya and Gong, Jian and Gu, Xiaodong and He, Jing and Hu, Weidong and Jiang, Shu and Li, Nan and Liu, Han and Lu, Zhiming and Ma, Huafeng and Ma, Shu and Niu, Zili and Ping, Jia and Qin, Zili and Tang, Tao and Wang, Tong and Wang, Wenjie and Xia, Jian and Xie, Jie and Xu, Chenyang and Xu, Feng and Yao, Jie and Ye, Min and Yang, Shuai and Zhang, Jun and Zhang, Wei and Zhang, Xiongbing and Zhao, Yali and Zhou, Guang and Zhou, Huajun and Zou, Jun},
journal = {arXiv preprint arXiv:2308.12950},
year = {2023}
}
APA
Arko007. (2024). my-awesome-code-assistant-v4. Hugging Face. Retrieved from https://huggingface.co/Arko007/my-awesome-code-assistant-v4
Touvron, H., Coucke, A., Fan, L., Gong, J., Gu, X., He, J., ... & Zou, J. (2023). Code Llama: Open Foundation Models for Code. arXiv preprint arXiv:2308.12950.
Model Card Authors [optional]
Arko007
Model Card Contact
[Email or other contact information]
Framework versions
PEFT 0.17.0
|
Muapi/sxz-warcraft-cinematic-flux
|
Muapi
| 2025-08-18T11:22:30Z | 0 | 0 | null |
[
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-18T11:22:22Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# SXZ Warcraft Cinematic [ FLUX ]

**Base model**: Flux.1 D
**Trained words**: wrcrftcnmtc, letterboxed game trailer frame, cinematic
## 🧠 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
"prompt": "masterpiece, best quality, 1girl, looking at viewer",
"model_id": [{"model": "civitai:683282@764776", "weight": 1.0}],
"width": 1024,
"height": 1024,
"num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
|
Arko007/my-awesome-code-assistant-v5
|
Arko007
| 2025-08-18T11:21:29Z | 0 | 0 | null |
[
"safetensors",
"arxiv:2308.12950",
"region:us"
] | null | 2025-08-14T11:51:01Z |
Model Card for Model ID: Arko007/my-awesome-code-assistant-v5
Model Details
Model Description
Developed by: Arko007
Funded by: Self-funded
Shared by: Arko007
Model type: Autoregressive language model for code (code assistant), representing the fifth finetuning iteration based on CodeLlama-7b-hf.
Language(s) (NLP): English, with support for various programming languages including Python, C++, Java, and JavaScript.
License: Llama 2 Community License
Finetuned from model: codellama/CodeLlama-7b-hf
Model Sources [optional]
Repository: https://huggingface.co/Arko007/my-awesome-code-assistant-v5 (A placeholder URL, as the repository is not public)
Paper [optional]: N/A
Demo [optional]: N/A
Uses
Direct Use
This model is intended for code-related tasks, including:
Code Completion: Generating the next few lines of code based on a prompt.
Code Generation: Creating functions, scripts, or small programs from natural language descriptions.
Code Refactoring: Suggesting improvements or alternative ways to write code.
Code Documentation: Generating docstrings and comments.
Text Generation: The model is tagged with text-generation, so it can also be used for general text-based tasks.
Downstream Use [optional]
This model can be used as a backend for integrated development environments (IDEs), developer tools, and educational platforms that require code assistance capabilities.
Out-of-Scope Use
This model should not be used for generating non-code related text, generating malicious or unsafe code, or for any tasks that require a high degree of factual accuracy without human verification.
Bias, Risks, and Limitations
Hallucinations: The model may generate code that looks plausible but is incorrect or contains bugs.
Security Vulnerabilities: The generated code may contain security flaws or unsafe practices. All generated code should be carefully reviewed by a human expert.
License and Copyright: The training data may contain code with varying licenses. Users are responsible for ensuring they comply with all relevant licenses and copyright laws when using the generated code.
Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. All generated code must be treated as a starting point and thoroughly reviewed, tested, and audited for correctness and security.
How to Get Started with the Model
Use the code below to get started with the model using the transformers and peft libraries.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
import torch
model_name = "codellama/CodeLlama-7b-hf"
adapter_name = "Arko007/my-awesome-code-assistant-v5"
# Load the base model and tokenizer
tokenizer = AutoTokenizer.from_pretrained(model_name)
base_model = AutoModelForCausalLM.from_pretrained(model_name)
# Load the PEFT adapter
model = PeftModel.from_pretrained(base_model, adapter_name)
prompt = "def factorial(n):"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
Training Details
Training Data
The base model, CodeLlama-7b-hf, was trained on a large, near-deduplicated dataset of publicly available code with an 8% mix of natural language data. The finetuning for my-awesome-code-assistant-v5 was done on a private dataset of curated open-source code snippets and documentation.
Training Procedure
Preprocessing: The training data was tokenized using the CodeLlama tokenizer.
Training Hyperparameters:
Training regime: Finetuning with a LoRA (Low-Rank Adaptation) approach, using the peft library (see the configuration sketch after this list).
Learning Rate: 2 × 10⁻⁴ (2e-4)
Batch Size: 4
Epochs: 3
Optimizer: AdamW
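The following is only an illustrative sketch of this setup, not the actual training script: the LoRA rank, alpha, and target modules are assumptions (the card does not state them), and the private finetuning dataset is omitted.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, Trainer, TrainingArguments
from peft import LoraConfig, get_peft_model

base_name = "codellama/CodeLlama-7b-hf"
tokenizer = AutoTokenizer.from_pretrained(base_name)
base_model = AutoModelForCausalLM.from_pretrained(base_name)

# Assumed LoRA settings; rank and target modules are not specified in this card.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(base_model, lora_config)

# Hyperparameters as listed above: lr 2e-4, batch size 4, 3 epochs, AdamW.
training_args = TrainingArguments(
    output_dir="my-awesome-code-assistant-v5",
    learning_rate=2e-4,
    per_device_train_batch_size=4,
    num_train_epochs=3,
    optim="adamw_torch",
)

# trainer = Trainer(model=model, args=training_args, train_dataset=...)  # dataset is private
# trainer.train()
```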
Speeds, Sizes, Times [optional]
Finetuning Time: Approximately 12 hours
Model Size: 15.5 GB (full base model), approx. 120 MB (LoRA adapter)
Evaluation
Testing Data, Factors & Metrics
Testing Data: The model was tested on a separate, held-out validation set of code generation prompts.
Factors: Performance was evaluated on different programming languages (Python, C++, JS).
Metrics:
Pass@1: The percentage of prompts for which the model generated a correct and compilable solution on the first try.
Readability Score: An informal metric based on human evaluation of code style and clarity.
Results
Pass@1 (Overall): 45.2%
Pass@1 (Python): 55.1%
Readability: The generated code was generally readable and well-commented.
Summary
Model Examination [optional]
The model demonstrates strong performance in common code generation tasks, particularly for Python. It can produce functional and readable code snippets.
Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
Hardware Type: 1 x NVIDIA A100 GPU
Hours used: 12 hours
Cloud Provider: Google Cloud
Compute Region: us-central1
Carbon Emitted: 1.05 kg CO2eq (estimated)
Technical Specifications [optional]
Model Architecture and Objective
The base model is a decoder-only transformer architecture. Its objective is to predict the next token in a sequence, conditioned on the preceding tokens. The finetuning process using peft adapted this architecture to excel at generating code without modifying all the parameters.
Compute Infrastructure
Hardware: 1 x NVIDIA A100 GPU
Software: PyTorch, Transformers, PEFT
Citation [optional]
BibTeX
@misc{Arko007_my-awesome-code-assistant-v5,
author = {Arko007},
title = {my-awesome-code-assistant-v5},
year = {2024},
publisher = {Hugging Face},
url = {https://huggingface.co/Arko007/my-awesome-code-assistant-v5}
}
@article{touvron2023codellama,
title = {Code Llama: Open Foundation Models for Code},
author = {Touvron, Hugo and Coucke, Alexandre and Fan, Lya and Gong, Jian and Gu, Xiaodong and He, Jing and Hu, Weidong and Jiang, Shu and Li, Nan and Liu, Han and Lu, Zhiming and Ma, Huafeng and Ma, Shu and Niu, Zili and Ping, Jia and Qin, Zili and Tang, Tao and Wang, Tong and Wang, Wenjie and Xia, Jian and Xie, Jie and Xu, Chenyang and Xu, Feng and Yao, Jie and Ye, Min and Yang, Shuai and Zhang, Jun and Zhang, Wei and Zhang, Xiongbing and Zhao, Yali and Zhou, Guang and Zhou, Huajun and Zou, Jun},
journal = {arXiv preprint arXiv:2308.12950},
year = {2023}
}
APA
Arko007. (2024). my-awesome-code-assistant-v5. Hugging Face. Retrieved from https://huggingface.co/Arko007/my-awesome-code-assistant-v5
Touvron, H., Coucke, A., Fan, L., Gong, J., Gu, X., He, J., ... & Zou, J. (2023). Code Llama: Open Foundation Models for Code. arXiv preprint arXiv:2308.12950.
Model Card Authors [optional]
Arko007
Model Card Contact
[Email or other contact information]
Framework versions
PEFT 0.17.0
|
Muapi/dark-fantasy-styles-collection-shrekman-style-mix
|
Muapi
| 2025-08-18T11:20:45Z | 0 | 0 | null |
[
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-18T11:20:20Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# Dark Fantasy Styles Collection | Shrekman Style Mix

**Base model**: Flux.1 D
**Trained words**: Dark-Fantasy-Cardd-V.2.0
## 🧠 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
"prompt": "masterpiece, best quality, 1girl, looking at viewer",
"model_id": [{"model": "civitai:1220063@1393711", "weight": 1.0}],
"width": 1024,
"height": 1024,
"num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
|
Fawwaz1st/RezX_AI_Model
|
Fawwaz1st
| 2025-08-18T11:18:04Z | 0 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"lora",
"transformers",
"fine-tune",
"rezx-ai",
"text-generation",
"id",
"en",
"dataset:Fawwaz1st/rezx_ai_dataset",
"base_model:google/flan-t5-base",
"base_model:adapter:google/flan-t5-base",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2025-08-18T09:37:21Z |
---
license: apache-2.0
language:
- id
- en
library_name: peft
pipeline_tag: text-generation
base_model: google/flan-t5-base
tags:
- lora
- transformers
- fine-tune
- rezx-ai
metrics:
- accuracy
datasets:
- Fawwaz1st/rezx_ai_dataset
---
# RezX AI
**Author:** M. Izzat Al Fawwaz
**Base Model:** google/flan-t5-base (LoRA Fine-tune)
**Framework:** PEFT, Transformers, PyTorch
**License:** Apache 2.0
---
## 📜 Model Description
RezX AI is an AI model based on **Flan-T5 Base** that has been *fine-tuned* with the **LoRA** technique to maximize efficiency and accuracy on *reasoning* and *coding assistance* tasks.
The model is designed to help with AI pipeline automation, Python scripting, troubleshooting, and application development.
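A minimal inference sketch (illustrative only: it assumes this repository loads as a PEFT adapter on top of `google/flan-t5-base`, which is an encoder-decoder model; the prompt and generation settings are just examples):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
from peft import PeftModel

base_id = "google/flan-t5-base"
adapter_id = "Fawwaz1st/RezX_AI_Model"

# Load the base seq2seq model and attach the LoRA adapter from this repo
tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForSeq2SeqLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base_model, adapter_id)

prompt = "Write a Python function that reverses a string."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```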
---
## 🎯 Intended Use
- **Coding assistant** (Python, prompt engineering, API design)
- **Automation** (pipeline training, file handling, dataset management)
- **Technical Q&A** and debugging
- **AI training experiments** for students & developers
---
## ⚠️ Limitations
- Not *fine-tuned* for sensitive text or political opinions
- *Reasoning* performance degrades when instructions are too ambiguous
- Indonesian & English work best; other languages are not guaranteed
---
## 📂 Dataset
- Internal dataset of coding notes, snippets, and mini-articles about AI & automation (private)
- No personal or sensitive data was used
---
## 🔧 Training Procedure
- **LoRA rank:** (fill in according to your setup)
- **Learning Rate:** 5e-05
- **Batch Size:** train 4 / eval 8
- **Epoch:** 1
- **Optimizer:** AdamW
- **Scheduler:** Linear decay
---
## 📊 Results & Evaluation
- Internal benchmark for coding tasks: XX% success rate
- Reasoning test (*custom prompt suite*): XX%
- Average inference latency: XX ms (local CPU/GPU)
---
## 📎 Additional Notes
This model is part of the **RezX AI** project to build a modular, cloud-based AI assistant that can be accessed from a variety of devices.
---
|
RTannous/gpt-oss-finetuned-BF16
|
RTannous
| 2025-08-18T11:16:50Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_oss",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:unsloth/gpt-oss-20b-BF16",
"base_model:finetune:unsloth/gpt-oss-20b-BF16",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-18T11:11:59Z |
---
base_model: unsloth/gpt-oss-20b-BF16
tags:
- text-generation-inference
- transformers
- unsloth
- gpt_oss
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** RTannous
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gpt-oss-20b-BF16
This gpt_oss model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Muapi/anime-screencap-flux-lora
|
Muapi
| 2025-08-18T11:16:20Z | 0 | 0 | null |
[
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-18T11:16:08Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# anime screencap Flux LoRA

**Base model**: Flux.1 D
**Trained words**: anime screencap
## 🧠 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
"prompt": "masterpiece, best quality, 1girl, looking at viewer",
"model_id": [{"model": "civitai:644786@721279", "weight": 1.0}],
"width": 1024,
"height": 1024,
"num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
|
Muapi/avant-garde-fashion
|
Muapi
| 2025-08-18T11:14:24Z | 0 | 0 | null |
[
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-18T11:14:12Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# Avant-garde Fashion

**Base model**: Flux.1 D
**Trained words**: Avant-garde Fashion Style
## 🧠 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
"prompt": "masterpiece, best quality, 1girl, looking at viewer",
"model_id": [{"model": "civitai:63268@1421228", "weight": 1.0}],
"width": 1024,
"height": 1024,
"num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
|
zju-community/matchanything_eloftr
|
zju-community
| 2025-08-18T11:12:48Z | 18 | 3 |
transformers
|
[
"transformers",
"safetensors",
"efficientloftr",
"keypoint-matching",
"arxiv:2501.07556",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-13T17:49:32Z |
---
library_name: transformers
tags:
- keypoint-matching
license: apache-2.0
---
# MatchAnything-ELOFTR
The MatchAnything-ELOFTR model was proposed in **"MatchAnything: Universal Cross-Modality Image Matching with Large-Scale Pre-Training"** by Xingyi He, Hao Yu, Sida Peng, Dongli Tan, Zehong Shen, Hujun Bao, and Xiaowei Zhou from Zhejiang University and Shandong University.
This model is a version of **ELOFTR** enhanced by the MatchAnything pre-training framework. This framework enables the model to achieve universal cross-modality image matching capabilities, overcoming the significant challenge of matching images with drastic appearance changes due to different imaging principles (e.g., thermal vs. visible, CT vs. MRI). This is achieved by pre-training on a massive, diverse dataset synthesized with cross-modal stimulus signals, teaching the model to recognize fundamental, appearance-insensitive structures.
The abstract from the paper is the following:
"Image matching, which aims to identify corresponding pixel locations between images, is crucial in a wide range of scientific disciplines, aiding in image registration, fusion, and analysis. In recent years, deep learning-based image matching algorithms have dramatically outperformed humans in rapidly and accurately finding large amounts of correspondences. However, when dealing with images captured under different imaging modalities that result in significant appearance changes, the performance of these algorithms often deteriorates due to the scarcity of annotated cross-modal training data. This limitation hinders applications in various fields that rely on multiple image modalities to obtain complementary information. To address this challenge, we propose a large-scale pre-training framework that utilizes synthetic cross-modal training signals, incorporating diverse data from various sources, to train models to recognize and match fundamental structures across images. This capability is transferable to real-world, unseen cross-modality image matching tasks. Our key finding is that the matching model trained with our framework achieves remarkable generalizability across more than eight unseen cross-modality registration tasks using the same network weight, substantially outperforming existing methods, whether designed for generalization or tailored for specific tasks. This advancement significantly enhances the applicability of image matching technologies across various scientific disciplines and paves the way for new applications in multi-modality human and artificial intelligence (AI) analysis and beyond."

This model was contributed by [stevenbucaille](https://huggingface.co/stevenbucaille).
The original code for the MatchAnything project can be found [here](https://github.com/zju3dv/MatchAnything).
## Model Details
### Model Description
**MatchAnything-ELOFTR** is a semi-dense feature matcher that has been pre-trained using the novel MatchAnything framework to give it powerful generalization capabilities for cross-modality tasks. The core innovations stem from the training framework, not the model architecture itself, which remains that of ELOFTR.
The key innovations of the MatchAnything framework include:
- A **multi-resource dataset mixture training engine** that combines various data sources to ensure diversity. This includes multi-view images with 3D reconstructions, large-scale unlabelled video sequences, and vast single-image datasets.
- A **cross-modality stimulus data generator** that uses image generation techniques (like style transfer and depth estimation) to create synthetic, pixel-aligned cross-modal training pairs (e.g., visible-to-thermal, visible-to-depth).
- This process trains the model to learn **appearance-insensitive, fundamental image structures**, allowing a single set of model weights to perform robustly on over eight different and completely unseen cross-modal matching tasks.
- **Developed by:** ZJU3DV at Zhejiang University & Shandong University
- **Model type:** Image Matching
- **License:** Apache 2.0
### Model Sources
- **Repository:** https://github.com/zju3dv/MatchAnything
- **Project page:** https://zju3dv.github.io/MatchAnything/
- **Paper:** https://huggingface.co/papers/2501.07556
## Uses
MatchAnything-ELOFTR is designed for a vast array of applications requiring robust image matching, especially between different sensor types or imaging modalities. Its direct uses include:
- **Medical Image Analysis**: Aligning CT-MR, PET-MR, and SPECT-MR scans.
- **Histopathology**: Registering tissue images with different stains (e.g., H&E and IHC).
- **Remote Sensing**: Matching satellite/aerial images from different sensors (e.g., Visible-SAR, Thermal-Visible).
- **Autonomous Systems**: Enhancing localization and navigation for UAVs and autonomous vehicles by matching thermal or visible images to vectorized maps.
- **Single-Modality Tasks**: The model also retains strong performance on standard single-modality matching, such as retina image registration.
### Direct Use
Here is a quick example of using the model for matching a pair of images.
```python
from transformers import AutoImageProcessor, AutoModelForKeypointMatching
from transformers.image_utils import load_image
import torch
# Load a pair of images
image1 = load_image("https://raw.githubusercontent.com/magicleap/SuperGluePretrainedNetwork/refs/heads/master/assets/phototourism_sample_images/united_states_capitol_98169888_3347710852.jpg")
image2 = load_image("https://raw.githubusercontent.com/magicleap/SuperGluePretrainedNetwork/refs/heads/master/assets/phototourism_sample_images/united_states_capitol_26757027_6717084061.jpg")
images = [image1, image2]
# Load the processor and model from the Hugging Face Hub
processor = AutoImageProcessor.from_pretrained("zju-community/matchanything_eloftr")
model = AutoModelForKeypointMatching.from_pretrained("zju-community/matchanything_eloftr")
# Process images and get model outputs
inputs = processor(images, return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs)
```
You can use the `post_process_keypoint_matching` method from the `EfficientLoFTRImageProcessor` to get the keypoints and matches in a readable format:
```python
image_sizes = [[(image.height, image.width) for image in images]]
outputs = processor.post_process_keypoint_matching(outputs, image_sizes, threshold=0.2)
for i, output in enumerate(outputs):
print("For the image pair", i)
for keypoint0, keypoint1, matching_score in zip(
output["keypoints0"], output["keypoints1"], output["matching_scores"]
):
print(
f"Keypoint at coordinate {keypoint0.numpy()} in the first image matches with keypoint at coordinate {keypoint1.numpy()} in the second image with a score of {matching_score}."
)
```
You can also visualize the matches between the images:
```python
plot_images = processor.visualize_keypoint_matching(images, outputs)
```

## Training Details
MatchAnything-ELOFTR is trained end-to-end using the large-scale, cross-modality pre-training framework.
### Training Data
The model was not trained on a single dataset but on a massive collection generated by the Multi-Resources Data Mixture Training framework, totaling approximately 800 million image pairs. This framework leverages:
- **Multi-View Images with Geometry**: Datasets like MegaDepth, ScanNet++, and BlendedMVS provide realistic viewpoint changes with ground-truth depth.
- **Video Sequences**: The DL3DV-10k dataset is used, with pseudo ground-truth matches generated between distant frames via a novel coarse-to-fine strategy.
- **Single-Image Datasets**: Large datasets like GoogleLandmark and SA-1B are used with synthetic homography warping to maximize data diversity.
- **Cross-Modality Stimulus Data**: A key component where training pairs are augmented by generating synthetic modalities (thermal, nighttime, depth maps) from visible light images using models like CycleGAN and DepthAnything, encouraging the matcher to learn appearance-invariant features.
### Training Procedure
#### Training Hyperparameters
- **Optimizer**: AdamW
- **Initial Learning Rate**: 8×10⁻³
- **Batch Size**: 64
- **Training Hardware**: 16 NVIDIA A100-80G GPUs
- **Training Time**: Approximately 4.3 days for the ELOFTR variant
#### Speeds, Sizes, Times
Since the MatchAnything framework only changes the training process and weights, the model's architecture and running time are identical to the original ELOFTR model.
- **Speed**: For a 640×480 resolution image pair on a single NVIDIA RTX 3090 GPU, the model takes 40 ms to process.
## Citation
**BibTeX:**
```bibtex
@article{he2025matchanything,
title={MatchAnything: Universal Cross-Modality Image Matching with Large-Scale Pre-Training},
author={Xingyi He and Hao Yu and Sida Peng and Dongli Tan and Zehong Shen and Hujun Bao and Xiaowei Zhou},
year={2025},
eprint={2501.07556},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
## Model Card Authors
[Steven Bucaille](https://github.com/sbucaille)
|
helmutsukocok/blockassist-bc-loud_scavenging_kangaroo_1755513890
|
helmutsukocok
| 2025-08-18T11:11:39Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"loud scavenging kangaroo",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-18T11:11:36Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- loud scavenging kangaroo
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
michaelcpage345/blockassist-bc-miniature_deadly_anteater_1755513317
|
michaelcpage345
| 2025-08-18T11:08:53Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"miniature deadly anteater",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-18T11:08:50Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- miniature deadly anteater
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Muapi/eren-yeager-shingeki-no-kyojin-attack-on-titan
|
Muapi
| 2025-08-18T11:08:31Z | 0 | 0 | null |
[
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-18T11:08:19Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# Eren Yeager | Shingeki no Kyojin / Attack on Titan

**Base model**: Flux.1 D
**Trained words**: eren yeager
## 🧠 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
"prompt": "masterpiece, best quality, 1girl, looking at viewer",
"model_id": [{"model": "civitai:374004@875235", "weight": 1.0}],
"width": 1024,
"height": 1024,
"num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
|
aleebaster/blockassist-bc-sly_eager_boar_1755513289
|
aleebaster
| 2025-08-18T11:05:14Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"sly eager boar",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-18T11:05:07Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- sly eager boar
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Muapi/torn-clothes-flux
|
Muapi
| 2025-08-18T11:04:37Z | 0 | 0 | null |
[
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-18T10:43:40Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# Torn clothes FLUX

**Base model**: Flux.1 D
**Trained words**: t0rn, Torn clothes, Damaged clothes
## 🧠 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
"prompt": "masterpiece, best quality, 1girl, looking at viewer",
"model_id": [{"model": "civitai:1191174@1382519", "weight": 1.0}],
"width": 1024,
"height": 1024,
"num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
|
hoan17/saving_LOe3000s20_scratch_400
|
hoan17
| 2025-08-18T11:00:35Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"trl",
"o2o",
"reinforcement-learning",
"text-to-image",
"stable-diffusion",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2025-08-18T10:59:48Z |
---
license: apache-2.0
tags:
- trl
- o2o
- diffusers
- reinforcement-learning
- text-to-image
- stable-diffusion
---
# TRL O2O Model
This is a diffusion model that has been fine-tuned with reinforcement learning to guide the model outputs according to a value function or human feedback. The model can be used for image generation conditioned on text.
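Since the repository is tagged with `diffusers:StableDiffusionPipeline`, a minimal inference sketch (the prompt, dtype, and device below are illustrative assumptions, not part of this card) could look like this:
```python
import torch
from diffusers import StableDiffusionPipeline

# Load the fine-tuned checkpoint as a standard Stable Diffusion pipeline
pipe = StableDiffusionPipeline.from_pretrained(
    "hoan17/saving_LOe3000s20_scratch_400", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")

# Generate an image from a text prompt and save it
image = pipe("a photo of an astronaut riding a horse").images[0]
image.save("sample.png")
```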
|
Ransss/MN-Mystic-Rune-12B-Q8_0-GGUF
|
Ransss
| 2025-08-18T10:59:48Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"base_model:Vortex5/MN-Mystic-Rune-12B",
"base_model:quantized:Vortex5/MN-Mystic-Rune-12B",
"endpoints_compatible",
"region:us"
] | null | 2025-08-18T10:58:55Z |
---
base_model: Vortex5/MN-Mystic-Rune-12B
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
---
# Ransss/MN-Mystic-Rune-12B-Q8_0-GGUF
This model was converted to GGUF format from [`Vortex5/MN-Mystic-Rune-12B`](https://huggingface.co/Vortex5/MN-Mystic-Rune-12B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Vortex5/MN-Mystic-Rune-12B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Ransss/MN-Mystic-Rune-12B-Q8_0-GGUF --hf-file mn-mystic-rune-12b-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Ransss/MN-Mystic-Rune-12B-Q8_0-GGUF --hf-file mn-mystic-rune-12b-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Ransss/MN-Mystic-Rune-12B-Q8_0-GGUF --hf-file mn-mystic-rune-12b-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Ransss/MN-Mystic-Rune-12B-Q8_0-GGUF --hf-file mn-mystic-rune-12b-q8_0.gguf -c 2048
```
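If you prefer Python over the CLI, a rough sketch with the `llama-cpp-python` bindings (an assumption: it requires a recent version that supports `Llama.from_pretrained`) would be:
```python
from llama_cpp import Llama

# Download the GGUF file from this repo and load it
llm = Llama.from_pretrained(
    repo_id="Ransss/MN-Mystic-Rune-12B-Q8_0-GGUF",
    filename="mn-mystic-rune-12b-q8_0.gguf",
)

out = llm("The meaning to life and the universe is", max_tokens=64)
print(out["choices"][0]["text"])
```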
|
donoway/ARC-Easy_Llama-3.2-1B-l3w1y2gt
|
donoway
| 2025-08-18T10:53:58Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"base_model:meta-llama/Llama-3.2-1B",
"base_model:finetune:meta-llama/Llama-3.2-1B",
"license:llama3.2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-18T10:32:36Z |
---
library_name: transformers
license: llama3.2
base_model: meta-llama/Llama-3.2-1B
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: ARC-Easy_Llama-3.2-1B-l3w1y2gt
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ARC-Easy_Llama-3.2-1B-l3w1y2gt
This model is a fine-tuned version of [meta-llama/Llama-3.2-1B](https://huggingface.co/meta-llama/Llama-3.2-1B) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7675
- Model Preparation Time: 0.0056
- Mdl: 631.1500
- Accumulated Loss: 437.4799
- Correct Preds: 430.0
- Total Preds: 570.0
- Accuracy: 0.7544
- Correct Gen Preds: 430.0
- Gen Accuracy: 0.7544
- Correct Gen Preds 32: 120.0
- Correct Preds 32: 120.0
- Total Labels 32: 158.0
- Accuracy 32: 0.7595
- Gen Accuracy 32: 0.7595
- Correct Gen Preds 33: 118.0
- Correct Preds 33: 118.0
- Total Labels 33: 152.0
- Accuracy 33: 0.7763
- Gen Accuracy 33: 0.7763
- Correct Gen Preds 34: 110.0
- Correct Preds 34: 110.0
- Total Labels 34: 142.0
- Accuracy 34: 0.7746
- Gen Accuracy 34: 0.7746
- Correct Gen Preds 35: 82.0
- Correct Preds 35: 82.0
- Total Labels 35: 118.0
- Accuracy 35: 0.6949
- Gen Accuracy 35: 0.6949
- Correct Gen Preds 36: 0.0
- Correct Preds 36: 0.0
- Total Labels 36: 0.0
- Accuracy 36: 0.0
- Gen Accuracy 36: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 112
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.001
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Model Preparation Time | Mdl | Accumulated Loss | Correct Preds | Total Preds | Accuracy | Correct Gen Preds | Gen Accuracy | Correct Gen Preds 32 | Correct Preds 32 | Total Labels 32 | Accuracy 32 | Gen Accuracy 32 | Correct Gen Preds 33 | Correct Preds 33 | Total Labels 33 | Accuracy 33 | Gen Accuracy 33 | Correct Gen Preds 34 | Correct Preds 34 | Total Labels 34 | Accuracy 34 | Gen Accuracy 34 | Correct Gen Preds 35 | Correct Preds 35 | Total Labels 35 | Accuracy 35 | Gen Accuracy 35 | Correct Gen Preds 36 | Correct Preds 36 | Total Labels 36 | Accuracy 36 | Gen Accuracy 36 |
|:-------------:|:-----:|:----:|:---------------:|:----------------------:|:---------:|:----------------:|:-------------:|:-----------:|:--------:|:-----------------:|:------------:|:--------------------:|:----------------:|:---------------:|:-----------:|:---------------:|:--------------------:|:----------------:|:---------------:|:-----------:|:---------------:|:--------------------:|:----------------:|:---------------:|:-----------:|:---------------:|:--------------------:|:----------------:|:---------------:|:-----------:|:---------------:|:--------------------:|:----------------:|:---------------:|:-----------:|:---------------:|
| No log | 0 | 0 | 1.5354 | 0.0056 | 1262.6022 | 875.1692 | 172.0 | 570.0 | 0.3018 | 170.0 | 0.2982 | 154.0 | 154.0 | 158.0 | 0.9747 | 0.9747 | 0.0 | 0.0 | 152.0 | 0.0 | 0.0 | 15.0 | 17.0 | 142.0 | 0.1197 | 0.1056 | 1.0 | 1.0 | 118.0 | 0.0085 | 0.0085 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.8193 | 1.0 | 17 | 0.8399 | 0.0056 | 690.6870 | 478.7477 | 401.0 | 570.0 | 0.7035 | 401.0 | 0.7035 | 106.0 | 106.0 | 158.0 | 0.6709 | 0.6709 | 106.0 | 106.0 | 152.0 | 0.6974 | 0.6974 | 103.0 | 103.0 | 142.0 | 0.7254 | 0.7254 | 86.0 | 86.0 | 118.0 | 0.7288 | 0.7288 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.3755 | 2.0 | 34 | 0.7675 | 0.0056 | 631.1500 | 437.4799 | 430.0 | 570.0 | 0.7544 | 430.0 | 0.7544 | 120.0 | 120.0 | 158.0 | 0.7595 | 0.7595 | 118.0 | 118.0 | 152.0 | 0.7763 | 0.7763 | 110.0 | 110.0 | 142.0 | 0.7746 | 0.7746 | 82.0 | 82.0 | 118.0 | 0.6949 | 0.6949 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0673 | 3.0 | 51 | 0.9258 | 0.0056 | 761.2801 | 527.6791 | 425.0 | 570.0 | 0.7456 | 424.0 | 0.7439 | 113.0 | 114.0 | 158.0 | 0.7215 | 0.7152 | 123.0 | 123.0 | 152.0 | 0.8092 | 0.8092 | 113.0 | 113.0 | 142.0 | 0.7958 | 0.7958 | 75.0 | 75.0 | 118.0 | 0.6356 | 0.6356 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0163 | 4.0 | 68 | 1.1686 | 0.0056 | 961.0022 | 666.1160 | 410.0 | 570.0 | 0.7193 | 410.0 | 0.7193 | 125.0 | 125.0 | 158.0 | 0.7911 | 0.7911 | 113.0 | 113.0 | 152.0 | 0.7434 | 0.7434 | 105.0 | 105.0 | 142.0 | 0.7394 | 0.7394 | 67.0 | 67.0 | 118.0 | 0.5678 | 0.5678 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 5.0 | 85 | 2.5405 | 0.0056 | 2089.1473 | 1448.0865 | 406.0 | 570.0 | 0.7123 | 406.0 | 0.7123 | 99.0 | 99.0 | 158.0 | 0.6266 | 0.6266 | 129.0 | 129.0 | 152.0 | 0.8487 | 0.8487 | 102.0 | 102.0 | 142.0 | 0.7183 | 0.7183 | 76.0 | 76.0 | 118.0 | 0.6441 | 0.6441 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0219 | 6.0 | 102 | 2.1967 | 0.0056 | 1806.4444 | 1252.1318 | 418.0 | 570.0 | 0.7333 | 418.0 | 0.7333 | 127.0 | 127.0 | 158.0 | 0.8038 | 0.8038 | 105.0 | 105.0 | 152.0 | 0.6908 | 0.6908 | 110.0 | 110.0 | 142.0 | 0.7746 | 0.7746 | 76.0 | 76.0 | 118.0 | 0.6441 | 0.6441 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 7.0 | 119 | 2.6483 | 0.0056 | 2177.7596 | 1509.5079 | 414.0 | 570.0 | 0.7263 | 410.0 | 0.7193 | 101.0 | 103.0 | 158.0 | 0.6519 | 0.6392 | 125.0 | 125.0 | 152.0 | 0.8224 | 0.8224 | 106.0 | 107.0 | 142.0 | 0.7535 | 0.7465 | 78.0 | 79.0 | 118.0 | 0.6695 | 0.6610 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.185 | 8.0 | 136 | 2.2471 | 0.0056 | 1847.8903 | 1280.8600 | 415.0 | 570.0 | 0.7281 | 415.0 | 0.7281 | 129.0 | 129.0 | 158.0 | 0.8165 | 0.8165 | 123.0 | 123.0 | 152.0 | 0.8092 | 0.8092 | 101.0 | 101.0 | 142.0 | 0.7113 | 0.7113 | 62.0 | 62.0 | 118.0 | 0.5254 | 0.5254 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 9.0 | 153 | 2.7019 | 0.0056 | 2221.8581 | 1540.0747 | 418.0 | 570.0 | 0.7333 | 417.0 | 0.7316 | 112.0 | 113.0 | 158.0 | 0.7152 | 0.7089 | 131.0 | 131.0 | 152.0 | 0.8618 | 0.8618 | 103.0 | 103.0 | 142.0 | 0.7254 | 0.7254 | 71.0 | 71.0 | 118.0 | 0.6017 | 0.6017 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 10.0 | 170 | 2.7311 | 0.0056 | 2245.8859 | 1556.7295 | 418.0 | 570.0 | 0.7333 | 418.0 | 0.7333 | 116.0 | 116.0 | 158.0 | 0.7342 | 0.7342 | 122.0 | 122.0 | 152.0 | 0.8026 | 0.8026 | 106.0 | 106.0 | 142.0 | 0.7465 | 0.7465 | 74.0 | 74.0 | 118.0 | 0.6271 | 0.6271 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 11.0 | 187 | 2.7509 | 0.0056 | 2262.1718 | 1568.0180 | 419.0 | 570.0 | 0.7351 | 419.0 | 0.7351 | 116.0 | 116.0 | 158.0 | 0.7342 | 0.7342 | 121.0 | 121.0 | 152.0 | 0.7961 | 0.7961 | 108.0 | 108.0 | 142.0 | 0.7606 | 0.7606 | 74.0 | 74.0 | 118.0 | 0.6271 | 0.6271 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 12.0 | 204 | 2.7518 | 0.0056 | 2262.9076 | 1568.5280 | 419.0 | 570.0 | 0.7351 | 419.0 | 0.7351 | 116.0 | 116.0 | 158.0 | 0.7342 | 0.7342 | 121.0 | 121.0 | 152.0 | 0.7961 | 0.7961 | 108.0 | 108.0 | 142.0 | 0.7606 | 0.7606 | 74.0 | 74.0 | 118.0 | 0.6271 | 0.6271 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 13.0 | 221 | 2.7606 | 0.0056 | 2270.1255 | 1573.5311 | 419.0 | 570.0 | 0.7351 | 419.0 | 0.7351 | 116.0 | 116.0 | 158.0 | 0.7342 | 0.7342 | 121.0 | 121.0 | 152.0 | 0.7961 | 0.7961 | 108.0 | 108.0 | 142.0 | 0.7606 | 0.7606 | 74.0 | 74.0 | 118.0 | 0.6271 | 0.6271 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 14.0 | 238 | 2.7827 | 0.0056 | 2288.3338 | 1586.1521 | 420.0 | 570.0 | 0.7368 | 420.0 | 0.7368 | 116.0 | 116.0 | 158.0 | 0.7342 | 0.7342 | 122.0 | 122.0 | 152.0 | 0.8026 | 0.8026 | 108.0 | 108.0 | 142.0 | 0.7606 | 0.7606 | 74.0 | 74.0 | 118.0 | 0.6271 | 0.6271 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 15.0 | 255 | 2.7809 | 0.0056 | 2286.8709 | 1585.1381 | 419.0 | 570.0 | 0.7351 | 418.0 | 0.7333 | 115.0 | 116.0 | 158.0 | 0.7342 | 0.7278 | 121.0 | 121.0 | 152.0 | 0.7961 | 0.7961 | 108.0 | 108.0 | 142.0 | 0.7606 | 0.7606 | 74.0 | 74.0 | 118.0 | 0.6271 | 0.6271 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 16.0 | 272 | 2.7799 | 0.0056 | 2286.0110 | 1584.5421 | 419.0 | 570.0 | 0.7351 | 419.0 | 0.7351 | 116.0 | 116.0 | 158.0 | 0.7342 | 0.7342 | 121.0 | 121.0 | 152.0 | 0.7961 | 0.7961 | 108.0 | 108.0 | 142.0 | 0.7606 | 0.7606 | 74.0 | 74.0 | 118.0 | 0.6271 | 0.6271 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 17.0 | 289 | 2.7880 | 0.0056 | 2292.6855 | 1589.1685 | 420.0 | 570.0 | 0.7368 | 420.0 | 0.7368 | 116.0 | 116.0 | 158.0 | 0.7342 | 0.7342 | 122.0 | 122.0 | 152.0 | 0.8026 | 0.8026 | 108.0 | 108.0 | 142.0 | 0.7606 | 0.7606 | 74.0 | 74.0 | 118.0 | 0.6271 | 0.6271 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 18.0 | 306 | 2.8077 | 0.0056 | 2308.8692 | 1600.3861 | 420.0 | 570.0 | 0.7368 | 420.0 | 0.7368 | 116.0 | 116.0 | 158.0 | 0.7342 | 0.7342 | 122.0 | 122.0 | 152.0 | 0.8026 | 0.8026 | 108.0 | 108.0 | 142.0 | 0.7606 | 0.7606 | 74.0 | 74.0 | 118.0 | 0.6271 | 0.6271 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 19.0 | 323 | 2.8043 | 0.0056 | 2306.1110 | 1598.4744 | 419.0 | 570.0 | 0.7351 | 419.0 | 0.7351 | 116.0 | 116.0 | 158.0 | 0.7342 | 0.7342 | 121.0 | 121.0 | 152.0 | 0.7961 | 0.7961 | 108.0 | 108.0 | 142.0 | 0.7606 | 0.7606 | 74.0 | 74.0 | 118.0 | 0.6271 | 0.6271 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 20.0 | 340 | 2.8029 | 0.0056 | 2304.8923 | 1597.6296 | 419.0 | 570.0 | 0.7351 | 418.0 | 0.7333 | 115.0 | 116.0 | 158.0 | 0.7342 | 0.7278 | 121.0 | 121.0 | 152.0 | 0.7961 | 0.7961 | 108.0 | 108.0 | 142.0 | 0.7606 | 0.7606 | 74.0 | 74.0 | 118.0 | 0.6271 | 0.6271 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 21.0 | 357 | 2.8202 | 0.0056 | 2319.1354 | 1607.5022 | 419.0 | 570.0 | 0.7351 | 419.0 | 0.7351 | 116.0 | 116.0 | 158.0 | 0.7342 | 0.7342 | 121.0 | 121.0 | 152.0 | 0.7961 | 0.7961 | 108.0 | 108.0 | 142.0 | 0.7606 | 0.7606 | 74.0 | 74.0 | 118.0 | 0.6271 | 0.6271 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 22.0 | 374 | 2.8085 | 0.0056 | 2309.5375 | 1600.8494 | 420.0 | 570.0 | 0.7368 | 419.0 | 0.7351 | 115.0 | 116.0 | 158.0 | 0.7342 | 0.7278 | 122.0 | 122.0 | 152.0 | 0.8026 | 0.8026 | 108.0 | 108.0 | 142.0 | 0.7606 | 0.7606 | 74.0 | 74.0 | 118.0 | 0.6271 | 0.6271 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
myfi/parser_model_ner_3.57_checkpoint_250
|
myfi
| 2025-08-18T10:48:26Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:unsloth/Qwen2.5-3B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-3B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-18T10:40:32Z |
---
base_model: unsloth/Qwen2.5-3B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** myfi
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen2.5-3B-Instruct
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
VK13/Pixelcopter-PLE-v0_v3
|
VK13
| 2025-08-18T10:46:27Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-08-18T10:46:24Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Pixelcopter-PLE-v0_v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: -3.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train your own, check out Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
MeowSky49887/VRM-Emotions
|
MeowSky49887
| 2025-08-18T10:45:04Z | 0 | 0 | null |
[
"safetensors",
"distilbert",
"text-classification",
"ja",
"dataset:boltuix/emotions-dataset",
"base_model:line-corporation/line-distilbert-base-japanese",
"base_model:finetune:line-corporation/line-distilbert-base-japanese",
"region:us"
] |
text-classification
| 2025-08-18T10:37:48Z |
---
datasets:
- boltuix/emotions-dataset
language:
- ja
base_model:
- line-corporation/line-distilbert-base-japanese
pipeline_tag: text-classification
---
# VRM-Emotions
---
## 🌐 Introduction | 紹介 | บทนำ
**English**:
**VRM-Emotions** is a Japanese **Emotion Classification Model** fine-tuned from [`line-corporation/line-distilbert-base-japanese`] using the [`boltuix/emotions-dataset`].
The dataset was translated into Japanese using Google Translate, and only a subset of labels was trained to match **VRM Expressions** for use in VRM-compatible avatars.
**日本語**:
**VRM-Emotions** は、日本語の**感情分類モデル**です。
ベースモデルとして [`line-corporation/line-distilbert-base-japanese`] を使用し、[`boltuix/emotions-dataset`] を利用してファインチューニングしました。
データセットは Google 翻訳で日本語に変換され、VRM 対応アバターで使用できる **VRM Expressions** に合わせて一部のラベルのみを学習しました。
**ภาษาไทย**:
**VRM-Emotions** เป็นโมเดล **การจำแนกอารมณ์ภาษาญี่ปุ่น** ที่ทำการ fine-tune มาจาก [`line-corporation/line-distilbert-base-japanese`] โดยใช้ [`boltuix/emotions-dataset`]
ข้อมูลถูกแปลเป็นภาษาญี่ปุ่นด้วย Google Translate และทำการเทรนเฉพาะบางเลเบลให้ตรงกับ **VRM Expressions** เพื่อใช้งานกับอวาตาร์ที่รองรับ VRM
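A minimal usage sketch (an assumption rather than an official example: it presumes the checkpoint works with the standard `text-classification` pipeline, and that the LINE DistilBERT Japanese tokenizer needs `trust_remote_code=True` plus the `fugashi`/`unidic-lite` dependencies):
```python
from transformers import pipeline

# Classify the emotion of a Japanese sentence into a VRM expression label
classifier = pipeline(
    "text-classification",
    model="MeowSky49887/VRM-Emotions",
    trust_remote_code=True,
)

print(classifier("今日はとても楽しかった!"))
```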
---
## 🙌 Credits | クレジット | เครดิต
- Base model: [LINE Corporation – DistilBERT Japanese](https://huggingface.co/line-corporation/line-distilbert-base-japanese)
- Dataset: [Boltuix – Emotions Dataset](https://huggingface.co/datasets/boltuix/emotions-dataset)
- Adaptation & fine-tuning: **VRM-Emotions project**
---
## 📝 License | ライセンス | ใบอนุญาต
- **LINE DistilBERT Japanese**: Licensed under [Apache License 2.0](https://www.apache.org/licenses/LICENSE-2.0)
- **Boltuix Emotions Dataset**: Licensed under [MIT License](https://opensource.org/licenses/MIT)
- **VRM-Emotions (this fine-tuned model)**: Uses Japanese-translated data (via Google Translate) and is trained only on a subset of labels aligned with VRM Expressions.
It inherits the license terms of both the base model and dataset.
---
|
abdulrahman245/dummy-model
|
abdulrahman245
| 2025-08-18T10:43:18Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"camembert",
"fill-mask",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2025-08-18T10:43:02Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
lisaozill03/blockassist-bc-rugged_prickly_alpaca_1755512259
|
lisaozill03
| 2025-08-18T10:42:05Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"rugged prickly alpaca",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-18T10:42:02Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- rugged prickly alpaca
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Muapi/wizard-s-experimental-photography-lab
|
Muapi
| 2025-08-18T10:38:56Z | 0 | 0 | null |
[
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-18T10:38:45Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# Wizard's Experimental Photography Lab

**Base model**: Flux.1 D
**Trained words**: Experimental portrait photography, spliced and rearranged, multiplied, melted
## 🧠 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
"prompt": "masterpiece, best quality, 1girl, looking at viewer",
"model_id": [{"model": "civitai:1013496@1136204", "weight": 1.0}],
"width": 1024,
"height": 1024,
"num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
|
neural-interactive-proofs/finetune_dpo_qwen2_5-32b-instruct_cv_qwen2.5_32B_prover_debate_both_2_rounds_1_1_iter_0_prover1_
|
neural-interactive-proofs
| 2025-08-18T10:36:31Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"dpo",
"arxiv:2305.18290",
"base_model:Qwen/Qwen2.5-32B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-32B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-08-18T10:35:35Z |
---
base_model: Qwen/Qwen2.5-32B-Instruct
library_name: transformers
model_name: finetune_dpo_qwen2_5-32b-instruct_cv_qwen2.5_32B_prover_debate_both_2_rounds_1_1_iter_0_prover1_
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---
# Model Card for finetune_dpo_qwen2_5-32b-instruct_cv_qwen2.5_32B_prover_debate_both_2_rounds_1_1_iter_0_prover1_
This model is a fine-tuned version of [Qwen/Qwen2.5-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="neural-interactive-proofs/finetune_dpo_qwen2_5-32b-instruct_cv_qwen2.5_32B_prover_debate_both_2_rounds_1_1_iter_0_prover1_", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/lrhammond-team/pvg-self-hosted-finetune/runs/qwen2_5-32b-instruct_dpo_2025-08-18_11-10-00_cv_qwen2.5_32B_prover_debate_both_2_rounds_1_1_iter_0_prover1)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
### Framework versions
- TRL: 0.18.2
- Transformers: 4.53.2
- Pytorch: 2.7.0
- Datasets: 3.0.0
- Tokenizers: 0.21.1
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
BurgerTruck/mnli-all-bart
|
BurgerTruck
| 2025-08-18T10:35:37Z | 95 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bart",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-07-25T06:05:48Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
BurgerTruck/distilbart-classifier
|
BurgerTruck
| 2025-08-18T10:34:49Z | 7 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bart",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-02-14T09:05:59Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
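Until the authors provide an official snippet, the following is a minimal sketch using the 🤗 `pipeline` API, under the assumption that the checkpoint loads as a standard sequence-classification model:

```python
from transformers import pipeline

# Assumption: the checkpoint works as a plain text-classification model.
classifier = pipeline("text-classification", model="BurgerTruck/distilbart-classifier")
print(classifier("Replace this with the text you want to classify."))
```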
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
kefir090/create_model
|
kefir090
| 2025-08-18T10:33:49Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-18T10:33:40Z |
---
base_model: unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** kefir090
- **License:** apache-2.0
- **Finetuned from model:** unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
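A minimal inference-loading sketch with Unsloth is shown below; the sequence length and 4-bit flag are illustrative assumptions, not settings confirmed by the author.

```python
from unsloth import FastLanguageModel

# Assumptions: max_seq_length and load_in_4bit are illustrative, not confirmed settings.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="kefir090/create_model",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # switch to faster inference kernels
```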
|
donoway/ARC-Easy_Llama-3.2-1B-xc26qld6
|
donoway
| 2025-08-18T10:32:25Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"base_model:meta-llama/Llama-3.2-1B",
"base_model:finetune:meta-llama/Llama-3.2-1B",
"license:llama3.2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-18T10:03:31Z |
---
library_name: transformers
license: llama3.2
base_model: meta-llama/Llama-3.2-1B
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: ARC-Easy_Llama-3.2-1B-xc26qld6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ARC-Easy_Llama-3.2-1B-xc26qld6
This model is a fine-tuned version of [meta-llama/Llama-3.2-1B](https://huggingface.co/meta-llama/Llama-3.2-1B) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9012
- Model Preparation Time: 0.0061
- Mdl: 2385.7544
- Accumulated Loss: 1653.6789
- Correct Preds: 415.0
- Total Preds: 570.0
- Accuracy: 0.7281
- Correct Gen Preds: 226.0
- Gen Accuracy: 0.3965
- Correct Gen Preds 32: 6.0
- Correct Preds 32: 123.0
- Total Labels 32: 158.0
- Accuracy 32: 0.7785
- Gen Accuracy 32: 0.0380
- Correct Gen Preds 33: 103.0
- Correct Preds 33: 112.0
- Total Labels 33: 152.0
- Accuracy 33: 0.7368
- Gen Accuracy 33: 0.6776
- Correct Gen Preds 34: 74.0
- Correct Preds 34: 112.0
- Total Labels 34: 142.0
- Accuracy 34: 0.7887
- Gen Accuracy 34: 0.5211
- Correct Gen Preds 35: 43.0
- Correct Preds 35: 68.0
- Total Labels 35: 118.0
- Accuracy 35: 0.5763
- Gen Accuracy 35: 0.3644
- Correct Gen Preds 36: 0.0
- Correct Preds 36: 0.0
- Total Labels 36: 0.0
- Accuracy 36: 0.0
- Gen Accuracy 36: 0.0
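The Mdl and Accumulated Loss values above appear to be simple transformations of the evaluation loss (this relationship is inferred from the reported numbers, not documented by the training script): the accumulated loss is the mean loss in nats multiplied by the number of predictions, and Mdl converts that total to bits.

```python
import math

loss, total_preds = 2.9012, 570
accumulated_loss = loss * total_preds      # ≈ 1653.68 nats, matching the value above
mdl_bits = accumulated_loss / math.log(2)  # ≈ 2385.8 bits, matching the reported Mdl
print(accumulated_loss, mdl_bits)
```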
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 112
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.001
- num_epochs: 100
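A rough reconstruction of the hyperparameters above as 🤗 `TrainingArguments` (a sketch that assumes the standard `Trainer` was used; the output directory name is an assumption):

```python
from transformers import TrainingArguments

# Values mirror the hyperparameter list above; output_dir is an assumption.
training_args = TrainingArguments(
    output_dir="ARC-Easy_Llama-3.2-1B-xc26qld6",
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=112,
    seed=42,
    optim="adamw_torch",
    lr_scheduler_type="constant",
    warmup_ratio=0.001,
    num_train_epochs=100,
)
```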
### Training results
| Training Loss | Epoch | Step | Validation Loss | Model Preparation Time | Mdl | Accumulated Loss | Correct Preds | Total Preds | Accuracy | Correct Gen Preds | Gen Accuracy | Correct Gen Preds 32 | Correct Preds 32 | Total Labels 32 | Accuracy 32 | Gen Accuracy 32 | Correct Gen Preds 33 | Correct Preds 33 | Total Labels 33 | Accuracy 33 | Gen Accuracy 33 | Correct Gen Preds 34 | Correct Preds 34 | Total Labels 34 | Accuracy 34 | Gen Accuracy 34 | Correct Gen Preds 35 | Correct Preds 35 | Total Labels 35 | Accuracy 35 | Gen Accuracy 35 | Correct Gen Preds 36 | Correct Preds 36 | Total Labels 36 | Accuracy 36 | Gen Accuracy 36 |
|:-------------:|:-----:|:----:|:---------------:|:----------------------:|:---------:|:----------------:|:-------------:|:-----------:|:--------:|:-----------------:|:------------:|:--------------------:|:----------------:|:---------------:|:-----------:|:---------------:|:--------------------:|:----------------:|:---------------:|:-----------:|:---------------:|:--------------------:|:----------------:|:---------------:|:-----------:|:---------------:|:--------------------:|:----------------:|:---------------:|:-----------:|:---------------:|:--------------------:|:----------------:|:---------------:|:-----------:|:---------------:|
| No log | 0 | 0 | 1.5354 | 0.0061 | 1262.6022 | 875.1692 | 172.0 | 570.0 | 0.3018 | 170.0 | 0.2982 | 154.0 | 154.0 | 158.0 | 0.9747 | 0.9747 | 0.0 | 0.0 | 152.0 | 0.0 | 0.0 | 15.0 | 17.0 | 142.0 | 0.1197 | 0.1056 | 1.0 | 1.0 | 118.0 | 0.0085 | 0.0085 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.0782 | 1.0 | 15 | 0.8806 | 0.0061 | 724.1325 | 501.9304 | 395.0 | 570.0 | 0.6930 | 0.0 | 0.0 | 0.0 | 111.0 | 158.0 | 0.7025 | 0.0 | 0.0 | 99.0 | 152.0 | 0.6513 | 0.0 | 0.0 | 110.0 | 142.0 | 0.7746 | 0.0 | 0.0 | 75.0 | 118.0 | 0.6356 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.6705 | 2.0 | 30 | 0.8447 | 0.0061 | 694.6422 | 481.4893 | 394.0 | 570.0 | 0.6912 | 0.0 | 0.0 | 0.0 | 89.0 | 158.0 | 0.5633 | 0.0 | 0.0 | 112.0 | 152.0 | 0.7368 | 0.0 | 0.0 | 114.0 | 142.0 | 0.8028 | 0.0 | 0.0 | 79.0 | 118.0 | 0.6695 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.2979 | 3.0 | 45 | 1.0292 | 0.0061 | 846.3429 | 586.6402 | 402.0 | 570.0 | 0.7053 | 0.0 | 0.0 | 0.0 | 113.0 | 158.0 | 0.7152 | 0.0 | 0.0 | 118.0 | 152.0 | 0.7763 | 0.0 | 0.0 | 103.0 | 142.0 | 0.7254 | 0.0 | 0.0 | 68.0 | 118.0 | 0.5763 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.2678 | 4.0 | 60 | 1.5881 | 0.0061 | 1305.9724 | 905.2311 | 393.0 | 570.0 | 0.6895 | 0.0 | 0.0 | 0.0 | 127.0 | 158.0 | 0.8038 | 0.0 | 0.0 | 114.0 | 152.0 | 0.75 | 0.0 | 0.0 | 89.0 | 142.0 | 0.6268 | 0.0 | 0.0 | 63.0 | 118.0 | 0.5339 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.1021 | 5.0 | 75 | 1.8729 | 0.0061 | 1540.1515 | 1067.5517 | 404.0 | 570.0 | 0.7088 | 0.0 | 0.0 | 0.0 | 102.0 | 158.0 | 0.6456 | 0.0 | 0.0 | 101.0 | 152.0 | 0.6645 | 0.0 | 0.0 | 118.0 | 142.0 | 0.8310 | 0.0 | 0.0 | 83.0 | 118.0 | 0.7034 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0009 | 6.0 | 90 | 2.9155 | 0.0061 | 2397.5041 | 1661.8232 | 412.0 | 570.0 | 0.7228 | 59.0 | 0.1035 | 0.0 | 107.0 | 158.0 | 0.6772 | 0.0 | 59.0 | 116.0 | 152.0 | 0.7632 | 0.3882 | 0.0 | 112.0 | 142.0 | 0.7887 | 0.0 | 0.0 | 77.0 | 118.0 | 0.6525 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 7.0 | 105 | 3.3063 | 0.0061 | 2718.8587 | 1884.5693 | 404.0 | 570.0 | 0.7088 | 211.0 | 0.3702 | 2.0 | 98.0 | 158.0 | 0.6203 | 0.0127 | 116.0 | 125.0 | 152.0 | 0.8224 | 0.7632 | 69.0 | 112.0 | 142.0 | 0.7887 | 0.4859 | 24.0 | 69.0 | 118.0 | 0.5847 | 0.2034 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0002 | 8.0 | 120 | 2.9012 | 0.0061 | 2385.7544 | 1653.6789 | 415.0 | 570.0 | 0.7281 | 226.0 | 0.3965 | 6.0 | 123.0 | 158.0 | 0.7785 | 0.0380 | 103.0 | 112.0 | 152.0 | 0.7368 | 0.6776 | 74.0 | 112.0 | 142.0 | 0.7887 | 0.5211 | 43.0 | 68.0 | 118.0 | 0.5763 | 0.3644 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0044 | 9.0 | 135 | 2.4787 | 0.0061 | 2038.3538 | 1412.8792 | 410.0 | 570.0 | 0.7193 | 232.0 | 0.4070 | 2.0 | 108.0 | 158.0 | 0.6835 | 0.0127 | 103.0 | 114.0 | 152.0 | 0.75 | 0.6776 | 81.0 | 116.0 | 142.0 | 0.8169 | 0.5704 | 46.0 | 72.0 | 118.0 | 0.6102 | 0.3898 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0001 | 10.0 | 150 | 2.8459 | 0.0061 | 2340.2525 | 1622.1394 | 411.0 | 570.0 | 0.7211 | 285.0 | 0.5 | 21.0 | 96.0 | 158.0 | 0.6076 | 0.1329 | 104.0 | 117.0 | 152.0 | 0.7697 | 0.6842 | 104.0 | 119.0 | 142.0 | 0.8380 | 0.7324 | 56.0 | 79.0 | 118.0 | 0.6695 | 0.4746 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0002 | 11.0 | 165 | 3.0387 | 0.0061 | 2498.8154 | 1732.0469 | 405.0 | 570.0 | 0.7105 | 356.0 | 0.6246 | 70.0 | 106.0 | 158.0 | 0.6709 | 0.4430 | 114.0 | 118.0 | 152.0 | 0.7763 | 0.75 | 108.0 | 111.0 | 142.0 | 0.7817 | 0.7606 | 64.0 | 70.0 | 118.0 | 0.5932 | 0.5424 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 12.0 | 180 | 3.1544 | 0.0061 | 2593.9363 | 1797.9796 | 405.0 | 570.0 | 0.7105 | 377.0 | 0.6614 | 82.0 | 109.0 | 158.0 | 0.6899 | 0.5190 | 120.0 | 120.0 | 152.0 | 0.7895 | 0.7895 | 109.0 | 109.0 | 142.0 | 0.7676 | 0.7676 | 66.0 | 67.0 | 118.0 | 0.5678 | 0.5593 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 13.0 | 195 | 3.1696 | 0.0061 | 2606.4974 | 1806.6863 | 407.0 | 570.0 | 0.7140 | 385.0 | 0.6754 | 86.0 | 108.0 | 158.0 | 0.6835 | 0.5443 | 120.0 | 120.0 | 152.0 | 0.7895 | 0.7895 | 111.0 | 111.0 | 142.0 | 0.7817 | 0.7817 | 68.0 | 68.0 | 118.0 | 0.5763 | 0.5763 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 14.0 | 210 | 3.1936 | 0.0061 | 2626.1842 | 1820.3322 | 405.0 | 570.0 | 0.7105 | 387.0 | 0.6789 | 88.0 | 106.0 | 158.0 | 0.6709 | 0.5570 | 120.0 | 120.0 | 152.0 | 0.7895 | 0.7895 | 110.0 | 110.0 | 142.0 | 0.7746 | 0.7746 | 69.0 | 69.0 | 118.0 | 0.5847 | 0.5847 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 15.0 | 225 | 3.1623 | 0.0061 | 2600.4985 | 1802.5282 | 410.0 | 570.0 | 0.7193 | 389.0 | 0.6825 | 87.0 | 108.0 | 158.0 | 0.6835 | 0.5506 | 120.0 | 120.0 | 152.0 | 0.7895 | 0.7895 | 112.0 | 112.0 | 142.0 | 0.7887 | 0.7887 | 70.0 | 70.0 | 118.0 | 0.5932 | 0.5932 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 16.0 | 240 | 3.1958 | 0.0061 | 2628.0486 | 1821.6245 | 409.0 | 570.0 | 0.7175 | 388.0 | 0.6807 | 89.0 | 108.0 | 158.0 | 0.6835 | 0.5633 | 120.0 | 120.0 | 152.0 | 0.7895 | 0.7895 | 110.0 | 111.0 | 142.0 | 0.7817 | 0.7746 | 69.0 | 70.0 | 118.0 | 0.5932 | 0.5847 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 17.0 | 255 | 3.1951 | 0.0061 | 2627.4330 | 1821.1978 | 408.0 | 570.0 | 0.7158 | 386.0 | 0.6772 | 88.0 | 108.0 | 158.0 | 0.6835 | 0.5570 | 120.0 | 120.0 | 152.0 | 0.7895 | 0.7895 | 109.0 | 110.0 | 142.0 | 0.7746 | 0.7676 | 69.0 | 70.0 | 118.0 | 0.5932 | 0.5847 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 18.0 | 270 | 3.1887 | 0.0061 | 2622.1541 | 1817.5387 | 405.0 | 570.0 | 0.7105 | 383.0 | 0.6719 | 87.0 | 107.0 | 158.0 | 0.6772 | 0.5506 | 120.0 | 120.0 | 152.0 | 0.7895 | 0.7895 | 108.0 | 109.0 | 142.0 | 0.7676 | 0.7606 | 68.0 | 69.0 | 118.0 | 0.5847 | 0.5763 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 19.0 | 285 | 3.1854 | 0.0061 | 2619.4677 | 1815.6767 | 409.0 | 570.0 | 0.7175 | 389.0 | 0.6825 | 89.0 | 108.0 | 158.0 | 0.6835 | 0.5633 | 120.0 | 120.0 | 152.0 | 0.7895 | 0.7895 | 110.0 | 111.0 | 142.0 | 0.7817 | 0.7746 | 70.0 | 70.0 | 118.0 | 0.5932 | 0.5932 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 20.0 | 300 | 3.2024 | 0.0061 | 2633.4221 | 1825.3491 | 407.0 | 570.0 | 0.7140 | 385.0 | 0.6754 | 87.0 | 107.0 | 158.0 | 0.6772 | 0.5506 | 120.0 | 120.0 | 152.0 | 0.7895 | 0.7895 | 108.0 | 110.0 | 142.0 | 0.7746 | 0.7606 | 70.0 | 70.0 | 118.0 | 0.5932 | 0.5932 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 21.0 | 315 | 3.2060 | 0.0061 | 2636.4040 | 1827.4160 | 403.0 | 570.0 | 0.7070 | 386.0 | 0.6772 | 89.0 | 106.0 | 158.0 | 0.6709 | 0.5633 | 120.0 | 120.0 | 152.0 | 0.7895 | 0.7895 | 108.0 | 108.0 | 142.0 | 0.7606 | 0.7606 | 69.0 | 69.0 | 118.0 | 0.5847 | 0.5847 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 22.0 | 330 | 3.1978 | 0.0061 | 2629.6576 | 1822.7397 | 408.0 | 570.0 | 0.7158 | 387.0 | 0.6789 | 88.0 | 107.0 | 158.0 | 0.6772 | 0.5570 | 120.0 | 120.0 | 152.0 | 0.7895 | 0.7895 | 110.0 | 111.0 | 142.0 | 0.7817 | 0.7746 | 69.0 | 70.0 | 118.0 | 0.5932 | 0.5847 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 23.0 | 345 | 3.2004 | 0.0061 | 2631.8114 | 1824.2326 | 408.0 | 570.0 | 0.7158 | 386.0 | 0.6772 | 89.0 | 109.0 | 158.0 | 0.6899 | 0.5633 | 120.0 | 120.0 | 152.0 | 0.7895 | 0.7895 | 107.0 | 109.0 | 142.0 | 0.7676 | 0.7535 | 70.0 | 70.0 | 118.0 | 0.5932 | 0.5932 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 24.0 | 360 | 3.1856 | 0.0061 | 2619.6676 | 1815.8152 | 405.0 | 570.0 | 0.7105 | 385.0 | 0.6754 | 87.0 | 106.0 | 158.0 | 0.6709 | 0.5506 | 120.0 | 120.0 | 152.0 | 0.7895 | 0.7895 | 109.0 | 110.0 | 142.0 | 0.7746 | 0.7676 | 69.0 | 69.0 | 118.0 | 0.5847 | 0.5847 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 25.0 | 375 | 3.1994 | 0.0061 | 2630.9994 | 1823.6698 | 408.0 | 570.0 | 0.7158 | 389.0 | 0.6825 | 88.0 | 107.0 | 158.0 | 0.6772 | 0.5570 | 120.0 | 120.0 | 152.0 | 0.7895 | 0.7895 | 112.0 | 112.0 | 142.0 | 0.7887 | 0.7887 | 69.0 | 69.0 | 118.0 | 0.5847 | 0.5847 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 26.0 | 390 | 3.2091 | 0.0061 | 2638.9259 | 1829.1640 | 406.0 | 570.0 | 0.7123 | 384.0 | 0.6737 | 87.0 | 107.0 | 158.0 | 0.6772 | 0.5506 | 120.0 | 120.0 | 152.0 | 0.7895 | 0.7895 | 108.0 | 110.0 | 142.0 | 0.7746 | 0.7606 | 69.0 | 69.0 | 118.0 | 0.5847 | 0.5847 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 27.0 | 405 | 3.2149 | 0.0061 | 2643.7430 | 1832.5030 | 406.0 | 570.0 | 0.7123 | 388.0 | 0.6807 | 88.0 | 105.0 | 158.0 | 0.6646 | 0.5570 | 120.0 | 120.0 | 152.0 | 0.7895 | 0.7895 | 112.0 | 113.0 | 142.0 | 0.7958 | 0.7887 | 68.0 | 68.0 | 118.0 | 0.5763 | 0.5763 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 28.0 | 420 | 3.2152 | 0.0061 | 2643.9757 | 1832.6643 | 408.0 | 570.0 | 0.7158 | 390.0 | 0.6842 | 90.0 | 108.0 | 158.0 | 0.6835 | 0.5696 | 120.0 | 120.0 | 152.0 | 0.7895 | 0.7895 | 111.0 | 111.0 | 142.0 | 0.7817 | 0.7817 | 69.0 | 69.0 | 118.0 | 0.5847 | 0.5847 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 29.0 | 435 | 3.2105 | 0.0061 | 2640.1343 | 1830.0016 | 409.0 | 570.0 | 0.7175 | 390.0 | 0.6842 | 89.0 | 108.0 | 158.0 | 0.6835 | 0.5633 | 120.0 | 120.0 | 152.0 | 0.7895 | 0.7895 | 111.0 | 111.0 | 142.0 | 0.7817 | 0.7817 | 70.0 | 70.0 | 118.0 | 0.5932 | 0.5932 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 30.0 | 450 | 3.2265 | 0.0061 | 2653.2736 | 1839.1091 | 408.0 | 570.0 | 0.7158 | 389.0 | 0.6825 | 89.0 | 108.0 | 158.0 | 0.6835 | 0.5633 | 120.0 | 120.0 | 152.0 | 0.7895 | 0.7895 | 109.0 | 109.0 | 142.0 | 0.7676 | 0.7676 | 71.0 | 71.0 | 118.0 | 0.6017 | 0.6017 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 31.0 | 465 | 3.2205 | 0.0061 | 2648.3636 | 1835.7057 | 409.0 | 570.0 | 0.7175 | 390.0 | 0.6842 | 90.0 | 108.0 | 158.0 | 0.6835 | 0.5696 | 121.0 | 121.0 | 152.0 | 0.7961 | 0.7961 | 109.0 | 110.0 | 142.0 | 0.7746 | 0.7676 | 70.0 | 70.0 | 118.0 | 0.5932 | 0.5932 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
Muapi/hidden-worlds
|
Muapi
| 2025-08-18T10:30:28Z | 0 | 0 | null |
[
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-18T10:29:22Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# Hidden Worlds

**Base model**: Flux.1 D
**Trained words**: hidden world inside of
## 🧠 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
"prompt": "masterpiece, best quality, 1girl, looking at viewer",
"model_id": [{"model": "civitai:1092670@1326179", "weight": 1.0}],
"width": 1024,
"height": 1024,
"num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
|
joanna302/Qwen3-8B-Base_zh_ar_alpaca_1_part_SFT_2e-05
|
joanna302
| 2025-08-18T10:26:10Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"generated_from_trainer",
"unsloth",
"sft",
"trl",
"conversational",
"base_model:unsloth/Qwen3-8B-Base",
"base_model:finetune:unsloth/Qwen3-8B-Base",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-18T06:33:51Z |
---
base_model: unsloth/Qwen3-8B-Base
library_name: transformers
model_name: Qwen3-8B-Base_zh_ar_alpaca_1_part_SFT_2e-05
tags:
- generated_from_trainer
- unsloth
- sft
- trl
licence: license
---
# Model Card for Qwen3-8B-Base_zh_ar_alpaca_1_part_SFT_2e-05
This model is a fine-tuned version of [unsloth/Qwen3-8B-Base](https://huggingface.co/unsloth/Qwen3-8B-Base).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="joanna302/Qwen3-8B-Base_zh_ar_alpaca_1_part_SFT_2e-05", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/prism-eval/Qwen3-8B-Base_zh_ar_alpaca_1_part_SFT_2e-05/runs/ax384cll)
This model was trained with SFT.
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.2
- Pytorch: 2.8.0
- Datasets: 3.6.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
joanna302/Qwen3-8B-Base_zh_ar_alpaca_1_part_SFT_0.0002
|
joanna302
| 2025-08-18T10:25:57Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"generated_from_trainer",
"sft",
"trl",
"unsloth",
"conversational",
"base_model:unsloth/Qwen3-8B-Base",
"base_model:finetune:unsloth/Qwen3-8B-Base",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-18T06:35:16Z |
---
base_model: unsloth/Qwen3-8B-Base
library_name: transformers
model_name: Qwen3-8B-Base_zh_ar_alpaca_1_part_SFT_0.0002
tags:
- generated_from_trainer
- sft
- trl
- unsloth
licence: license
---
# Model Card for Qwen3-8B-Base_zh_ar_alpaca_1_part_SFT_0.0002
This model is a fine-tuned version of [unsloth/Qwen3-8B-Base](https://huggingface.co/unsloth/Qwen3-8B-Base).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="joanna302/Qwen3-8B-Base_zh_ar_alpaca_1_part_SFT_0.0002", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/prism-eval/Qwen3-8B-Base_zh_ar_alpaca_1_part_SFT_0.0002/runs/hmynwyv8)
This model was trained with SFT.
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.2
- Pytorch: 2.8.0
- Datasets: 3.6.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
mahwizzzz/tinygemma-Urdu
|
mahwizzzz
| 2025-08-18T10:23:06Z | 0 | 0 | null |
[
"arxiv:2503.19786",
"arxiv:1910.07467",
"arxiv:2104.09864",
"arxiv:2305.13245",
"arxiv:2002.05202",
"license:mit",
"region:us"
] | null | 2025-08-18T09:54:07Z |
---
license: mit
---
# tinyGemma Urdu
A 0.96-million-parameter Gemma-style language model trained on Urdu text.
- **Gemma Paper**: https://arxiv.org/abs/2503.19786 - Core architecture and design principles
- **RMSNorm**: https://arxiv.org/abs/1910.07467 - Root Mean Square Layer Normalization
- **RoPE**: https://arxiv.org/abs/2104.09864 - Rotary Position Embedding methodology
- **Grouped Query Attention**: https://arxiv.org/abs/2305.13245 - Memory efficient attention mechanism
- **SwiGLU/GELU**: https://arxiv.org/abs/2002.05202 - Gated linear unit activations
## Architecture
A version of Google's Gemma architecture with the following components as defined in `GemmaConfig`:
- **GemmaAttention**: Multi-head attention with grouped query attention (num_queries_per_kv), RoPE positional embeddings via `apply_rotary_emb()`, and causal masking using pre-computed triangular mask
- **GemmaMLP**: Feed-forward network with GELU activation implementing gate_proj * up_proj gating mechanism through down_proj
- **GemmaDecoderLayer**: Transformer block combining self_attn and mlp with pre-normalization using RMSNorm
- **RMSNorm**: Root Mean Square Layer Normalization with optional unit offset (add_unit_offset=True) and learnable weight parameter (a minimal sketch follows this list)
- **tinyGemma**: Complete model with embedder scaled by sqrt(hidden_size) and tied weights for language modeling head
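As an illustration of the normalization component listed above, here is a minimal RMSNorm sketch in the spirit of the Gemma reference code (a generic sketch, not the project's actual `RMSNorm` class):

```python
import torch
import torch.nn as nn

class RMSNorm(nn.Module):
    def __init__(self, dim: int, eps: float = 1e-6, add_unit_offset: bool = True):
        super().__init__()
        self.eps = eps
        self.add_unit_offset = add_unit_offset
        self.weight = nn.Parameter(torch.zeros(dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Normalize by the root mean square of the features (no mean subtraction).
        norm_x = x * torch.rsqrt(x.pow(2).mean(-1, keepdim=True) + self.eps)
        # Gemma-style models initialize the weight at zero and apply a unit offset (1 + weight).
        scale = 1 + self.weight if self.add_unit_offset else self.weight
        return norm_x * scale
```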
## Training Results
The model converged on the Urdu corpus with the following performance metrics:
```
Final Training Metrics (5000 iterations):
- Training Loss: 2.7668
- Validation Loss: 2.9250
- Validation Perplexity: 18.6348
- Learning Rate: 3e-4 with AdamW optimizer
- Batch Size: 16 with 2 gradient accumulation steps
```
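The following is a minimal sketch of the AdamW plus gradient-accumulation pattern implied by these metrics; the stand-in model and data are illustrative assumptions, not the actual tinyGemma training script.

```python
import torch
import torch.nn as nn

# Illustrative stand-ins for the real tinyGemma model and Urdu batches.
model = nn.Linear(64, 64)
batches = [(torch.randn(16, 64), torch.randn(16, 64)) for _ in range(8)]

optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)
grad_accum_steps = 2  # batch size 16 with 2 gradient accumulation steps

for step, (x, y) in enumerate(batches):
    loss = nn.functional.mse_loss(model(x), y) / grad_accum_steps
    loss.backward()
    if (step + 1) % grad_accum_steps == 0:
        optimizer.step()
        optimizer.zero_grad(set_to_none=True)
```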
### Loss Curves

## License
MIT License
|
wasabuko/blockassist-bc-noisy_zealous_macaw_1755510000
|
wasabuko
| 2025-08-18T10:22:23Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"noisy zealous macaw",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-18T10:19:14Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- noisy zealous macaw
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
sampingkaca72/blockassist-bc-armored_stealthy_elephant_1755510876
|
sampingkaca72
| 2025-08-18T10:20:47Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"armored stealthy elephant",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-18T10:20:43Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- armored stealthy elephant
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Muapi/midjourney-lucid-dreams-flux-lora
|
Muapi
| 2025-08-18T10:17:32Z | 0 | 0 | null |
[
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-18T10:17:14Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# Midjourney Lucid Dreams FLUX LoRA

**Base model**: Flux.1 D
**Trained words**:
## 🧠 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
"prompt": "masterpiece, best quality, 1girl, looking at viewer",
"model_id": [{"model": "civitai:766733@857586", "weight": 1.0}],
"width": 1024,
"height": 1024,
"num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
|
a1024053774/a2c-PandaReachDense-v3
|
a1024053774
| 2025-08-18T10:15:59Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-08-18T10:09:35Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v3
type: PandaReachDense-v3
metrics:
- type: mean_reward
value: -0.21 +/- 0.11
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v3**
This is a trained model of an **A2C** agent playing **PandaReachDense-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption):
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Assumption: the zip filename matches the repository name.
checkpoint = load_from_hub("a1024053774/a2c-PandaReachDense-v3", "a2c-PandaReachDense-v3.zip")
model = A2C.load(checkpoint)
```
|
nightmedia/Jan-v1-4B-qx6-hi-mlx
|
nightmedia
| 2025-08-18T10:13:23Z | 0 | 0 |
mlx
|
[
"mlx",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"en",
"base_model:janhq/Jan-v1-4B",
"base_model:quantized:janhq/Jan-v1-4B",
"license:apache-2.0",
"4-bit",
"region:us"
] |
text-generation
| 2025-08-18T09:59:10Z |
---
license: apache-2.0
language:
- en
base_model: janhq/Jan-v1-4B
pipeline_tag: text-generation
library_name: mlx
tags:
- mlx
---
# Jan-v1-4B-qx6-hi-mlx
This model [Jan-v1-4B-qx6-hi-mlx](https://huggingface.co/nightmedia/Jan-v1-4B-qx6-hi-mlx) was
converted to MLX format from [janhq/Jan-v1-4B](https://huggingface.co/janhq/Jan-v1-4B)
using mlx-lm version **0.26.3**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("nightmedia/Jan-v1-4B-qx6-hi-mlx")
prompt = "hello"
if tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
Dejiat/blockassist-bc-savage_unseen_bobcat_1755511948
|
Dejiat
| 2025-08-18T10:13:08Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"savage unseen bobcat",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-18T10:13:03Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- savage unseen bobcat
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
rohannath/AI_Doctor_using_llama_merged
|
rohannath
| 2025-08-18T10:09:38Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-18T10:07:11Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
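Pending an official snippet, a minimal sketch using the 🤗 `pipeline` API is given below, under the assumption that the merged checkpoint loads as a standard chat-capable causal LM:

```python
from transformers import pipeline

# Assumption: the merged checkpoint behaves as a standard chat/causal-LM model.
generator = pipeline("text-generation", model="rohannath/AI_Doctor_using_llama_merged")
messages = [{"role": "user", "content": "What are common causes of a persistent cough?"}]
print(generator(messages, max_new_tokens=128, return_full_text=False)[0]["generated_text"])
```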
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ojuw/blockassist-bc-long_beaked_ibis_1755511591
|
ojuw
| 2025-08-18T10:08:36Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"long beaked ibis",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-18T10:08:26Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- long beaked ibis
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Dejiat/blockassist-bc-savage_unseen_bobcat_1755511673
|
Dejiat
| 2025-08-18T10:08:33Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"savage unseen bobcat",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-18T10:08:28Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- savage unseen bobcat
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
thanobidex/blockassist-bc-colorful_shiny_hare_1755509778
|
thanobidex
| 2025-08-18T10:04:29Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"colorful shiny hare",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-18T10:04:26Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- colorful shiny hare
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Dejiat/blockassist-bc-savage_unseen_bobcat_1755511369
|
Dejiat
| 2025-08-18T10:03:35Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"savage unseen bobcat",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-18T10:03:30Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- savage unseen bobcat
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
indoempatnol/blockassist-bc-fishy_wary_swan_1755509568
|
indoempatnol
| 2025-08-18T09:59:25Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"fishy wary swan",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-18T09:59:21Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- fishy wary swan
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Ousby75/textClassification
|
Ousby75
| 2025-08-18T09:57:09Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-08-18T09:57:09Z |
---
license: apache-2.0
---
|