| Column | Type | Range / Values |
|---|---|---|
| modelId | string | lengths 5–139 |
| author | string | lengths 2–42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 – 2025-09-03 00:36:49 |
| downloads | int64 | 0 – 223M |
| likes | int64 | 0 – 11.7k |
| library_name | string | 535 classes |
| tags | list | lengths 1 – 4.05k |
| pipeline_tag | string | 55 classes |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 – 2025-09-03 00:36:49 |
| card | string | lengths 11 – 1.01M |
---|---|---|---|---|---|---|---|---|---|
dbwlsgh000/klue-mrc_koelectra_qa_model
|
dbwlsgh000
| 2025-08-07T06:14:39Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"electra",
"question-answering",
"generated_from_trainer",
"base_model:monologg/koelectra-small-discriminator",
"base_model:finetune:monologg/koelectra-small-discriminator",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2025-08-07T06:05:39Z |
---
library_name: transformers
base_model: monologg/koelectra-small-discriminator
tags:
- generated_from_trainer
model-index:
- name: klue-mrc_koelectra_qa_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# klue-mrc_koelectra_qa_model
This model is a fine-tuned version of [monologg/koelectra-small-discriminator](https://huggingface.co/monologg/koelectra-small-discriminator) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 4.5647
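A minimal usage sketch with the 🤗 `pipeline` API, assuming the checkpoint in this repo is a standard extractive QA model (the Korean question/context pair below is a hypothetical illustration):
```python
from transformers import pipeline

# Load the fine-tuned KoELECTRA extractive QA model from this repo
qa = pipeline("question-answering", model="dbwlsgh000/klue-mrc_koelectra_qa_model")

# Hypothetical Korean question/context pair for illustration
result = qa(
    question="대한민국의 수도는 어디인가?",
    context="대한민국의 수도는 서울이며, 최대 도시이기도 하다.",
)
print(result["answer"], result["score"])
```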
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 5.8469 | 1.0 | 50 | 5.7571 |
| 5.612 | 2.0 | 100 | 5.4824 |
| 5.3119 | 3.0 | 150 | 5.1945 |
| 5.0362 | 4.0 | 200 | 4.9599 |
| 4.7995 | 5.0 | 250 | 4.7934 |
| 4.6322 | 6.0 | 300 | 4.6802 |
| 4.5037 | 7.0 | 350 | 4.6203 |
| 4.4093 | 8.0 | 400 | 4.5894 |
| 4.357 | 9.0 | 450 | 4.5701 |
| 4.3299 | 10.0 | 500 | 4.5647 |
### Framework versions
- Transformers 4.54.1
- Pytorch 2.6.0+cu124
- Datasets 4.0.0
- Tokenizers 0.21.4
|
taengk/klue-mrc_koelectra_qa_model
|
taengk
| 2025-08-07T06:12:18Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"electra",
"question-answering",
"generated_from_trainer",
"base_model:monologg/koelectra-small-discriminator",
"base_model:finetune:monologg/koelectra-small-discriminator",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2025-08-07T06:12:05Z |
---
library_name: transformers
base_model: monologg/koelectra-small-discriminator
tags:
- generated_from_trainer
model-index:
- name: klue-mrc_koelectra_qa_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# klue-mrc_koelectra_qa_model
This model is a fine-tuned version of [monologg/koelectra-small-discriminator](https://huggingface.co/monologg/koelectra-small-discriminator) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 5.5657
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 50 | 5.7692 |
| No log | 2.0 | 100 | 5.6233 |
| No log | 3.0 | 150 | 5.5657 |
### Framework versions
- Transformers 4.54.1
- Pytorch 2.6.0+cu124
- Datasets 4.0.0
- Tokenizers 0.21.4
|
wls04/reward_1b_1
|
wls04
| 2025-08-07T06:10:40Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"reward-trainer",
"base_model:meta-llama/Llama-3.2-1B-Instruct",
"base_model:finetune:meta-llama/Llama-3.2-1B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-08-07T02:44:10Z |
---
base_model: meta-llama/Llama-3.2-1B-Instruct
library_name: transformers
model_name: llama1b-reward-seed123
tags:
- generated_from_trainer
- trl
- reward-trainer
license: license
---
# Model Card for llama1b-reward-seed123
This model is a fine-tuned version of [meta-llama/Llama-3.2-1B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="None", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/robusteval/huggingface/runs/gdmysf14)
This model was trained with TRL's `RewardTrainer`.
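As a reward model, it outputs a scalar score rather than text. A minimal scoring sketch, assuming the checkpoint is a standard `RewardTrainer` output (a sequence-classification model with a single value head; the conversation below is a hypothetical illustration):
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "wls04/reward_1b_1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id, num_labels=1)

# Hypothetical conversation to score
chat = [
    {"role": "user", "content": "What is the capital of France?"},
    {"role": "assistant", "content": "The capital of France is Paris."},
]
input_ids = tokenizer.apply_chat_template(chat, return_tensors="pt")
with torch.no_grad():
    reward = model(input_ids).logits[0, 0].item()  # scalar reward score
print(reward)
```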
### Framework versions
- TRL: 0.21.0.dev0
- Transformers: 4.54.1
- Pytorch: 2.2.2+cu118
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
wwwvwww/klue-mrc_koelectra_qa_model
|
wwwvwww
| 2025-08-07T06:09:46Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"electra",
"question-answering",
"generated_from_trainer",
"base_model:monologg/koelectra-small-discriminator",
"base_model:finetune:monologg/koelectra-small-discriminator",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2025-08-07T06:09:41Z |
---
library_name: transformers
base_model: monologg/koelectra-small-discriminator
tags:
- generated_from_trainer
model-index:
- name: klue-mrc_koelectra_qa_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# klue-mrc_koelectra_qa_model
This model is a fine-tuned version of [monologg/koelectra-small-discriminator](https://huggingface.co/monologg/koelectra-small-discriminator) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 5.5881
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 50 | 5.7663 |
| No log | 2.0 | 100 | 5.6380 |
| No log | 3.0 | 150 | 5.5881 |
### Framework versions
- Transformers 4.54.1
- Pytorch 2.6.0+cu124
- Datasets 4.0.0
- Tokenizers 0.21.4
|
louisglobal/gemma-interLeaved5e-6
|
louisglobal
| 2025-08-07T06:05:44Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"base_model:google/gemma-3-4b-it",
"base_model:finetune:google/gemma-3-4b-it",
"endpoints_compatible",
"region:us"
] | null | 2025-08-07T00:57:34Z |
---
base_model: google/gemma-3-4b-it
library_name: transformers
model_name: gemma-interLeaved5e-6
tags:
- generated_from_trainer
- sft
- trl
license: license
---
# Model Card for gemma-interLeaved5e-6
This model is a fine-tuned version of [google/gemma-3-4b-it](https://huggingface.co/google/gemma-3-4b-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="louisglobal/gemma-interLeaved5e-6", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/alternis-universit-de-gen-ve/gemma-datamix/runs/8ocj50ik)
This model was trained with SFT.
### Framework versions
- TRL: 0.19.1
- Transformers: 4.54.1
- Pytorch: 2.7.1
- Datasets: 4.0.0
- Tokenizers: 0.21.2
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
eastman94/klue-mrc_koelectra_qa_model
|
eastman94
| 2025-08-07T06:05:22Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"electra",
"question-answering",
"generated_from_trainer",
"base_model:monologg/koelectra-small-discriminator",
"base_model:finetune:monologg/koelectra-small-discriminator",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2025-08-07T06:05:15Z |
---
library_name: transformers
base_model: monologg/koelectra-small-discriminator
tags:
- generated_from_trainer
model-index:
- name: klue-mrc_koelectra_qa_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# klue-mrc_koelectra_qa_model
This model is a fine-tuned version of [monologg/koelectra-small-discriminator](https://huggingface.co/monologg/koelectra-small-discriminator) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 5.3981
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 50 | 5.7869 |
| No log | 2.0 | 100 | 5.5458 |
| No log | 3.0 | 150 | 5.3981 |
### Framework versions
- Transformers 4.54.1
- Pytorch 2.6.0+cu124
- Datasets 4.0.0
- Tokenizers 0.21.4
|
hyojin98/klue-mrc_koelectra_qa_model
|
hyojin98
| 2025-08-07T06:04:32Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"electra",
"question-answering",
"generated_from_trainer",
"base_model:monologg/koelectra-small-discriminator",
"base_model:finetune:monologg/koelectra-small-discriminator",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2025-08-07T06:04:24Z |
---
library_name: transformers
base_model: monologg/koelectra-small-discriminator
tags:
- generated_from_trainer
model-index:
- name: klue-mrc_koelectra_qa_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# klue-mrc_koelectra_qa_model
This model is a fine-tuned version of [monologg/koelectra-small-discriminator](https://huggingface.co/monologg/koelectra-small-discriminator) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 5.6698
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 50 | 5.8281 |
| No log | 2.0 | 100 | 5.7156 |
| No log | 3.0 | 150 | 5.6698 |
### Framework versions
- Transformers 4.54.1
- Pytorch 2.6.0+cu124
- Datasets 4.0.0
- Tokenizers 0.21.4
|
m0vie/klue-mrc_koelectra_qa_model
|
m0vie
| 2025-08-07T06:04:26Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"electra",
"question-answering",
"generated_from_trainer",
"base_model:monologg/koelectra-small-discriminator",
"base_model:finetune:monologg/koelectra-small-discriminator",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2025-08-07T06:04:13Z |
---
library_name: transformers
base_model: monologg/koelectra-small-discriminator
tags:
- generated_from_trainer
model-index:
- name: klue-mrc_koelectra_qa_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# klue-mrc_koelectra_qa_model
This model is a fine-tuned version of [monologg/koelectra-small-discriminator](https://huggingface.co/monologg/koelectra-small-discriminator) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 5.5565
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 50 | 5.7540 |
| No log | 2.0 | 100 | 5.6129 |
| No log | 3.0 | 150 | 5.5565 |
### Framework versions
- Transformers 4.54.1
- Pytorch 2.6.0+cu124
- Datasets 4.0.0
- Tokenizers 0.21.4
|
rkdsan1013/klue-mrc_koelectra_qa_model
|
rkdsan1013
| 2025-08-07T06:04:16Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"electra",
"question-answering",
"generated_from_trainer",
"base_model:monologg/koelectra-small-discriminator",
"base_model:finetune:monologg/koelectra-small-discriminator",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2025-08-07T06:04:10Z |
---
library_name: transformers
base_model: monologg/koelectra-small-discriminator
tags:
- generated_from_trainer
model-index:
- name: klue-mrc_koelectra_qa_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# klue-mrc_koelectra_qa_model
This model is a fine-tuned version of [monologg/koelectra-small-discriminator](https://huggingface.co/monologg/koelectra-small-discriminator) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 5.6045
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 50 | 5.7855 |
| No log | 2.0 | 100 | 5.6560 |
| No log | 3.0 | 150 | 5.6045 |
### Framework versions
- Transformers 4.54.1
- Pytorch 2.6.0+cu124
- Datasets 4.0.0
- Tokenizers 0.21.4
|
Rachmaninofffff/klue-mrc_koelectra_qa_model
|
Rachmaninofffff
| 2025-08-07T06:03:53Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"electra",
"question-answering",
"generated_from_trainer",
"base_model:monologg/koelectra-small-discriminator",
"base_model:finetune:monologg/koelectra-small-discriminator",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2025-08-07T06:03:46Z |
---
library_name: transformers
base_model: monologg/koelectra-small-discriminator
tags:
- generated_from_trainer
model-index:
- name: klue-mrc_koelectra_qa_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# klue-mrc_koelectra_qa_model
This model is a fine-tuned version of [monologg/koelectra-small-discriminator](https://huggingface.co/monologg/koelectra-small-discriminator) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 5.1061
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 50 | 5.3516 |
| No log | 2.0 | 100 | 5.1622 |
| No log | 3.0 | 150 | 5.1061 |
### Framework versions
- Transformers 4.54.1
- Pytorch 2.6.0+cu124
- Datasets 4.0.0
- Tokenizers 0.21.4
|
Conexis/Qwen3-Coder-30B-A3B-Instruct-Channel-INT8
|
Conexis
| 2025-08-07T06:00:37Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3_moe",
"text-generation",
"conversational",
"arxiv:2505.09388",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"8-bit",
"region:us"
] |
text-generation
| 2025-08-04T01:35:23Z |
---
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-Coder-30B-A3B-Instruct/blob/main/LICENSE
pipeline_tag: text-generation
---
# Qwen3-Coder-30B-A3B-Instruct
<a href="https://chat.qwen.ai/" target="_blank" style="margin: 2px;">
<img alt="Chat" src="https://img.shields.io/badge/%F0%9F%92%9C%EF%B8%8F%20Qwen%20Chat%20-536af5" style="display: inline-block; vertical-align: middle;"/>
</a>
## Highlights
**Qwen3-Coder** is available in multiple sizes. Today, we're excited to introduce **Qwen3-Coder-30B-A3B-Instruct**. This streamlined model maintains impressive performance and efficiency, featuring the following key enhancements:
- **Significant Performance** among open models on **Agentic Coding**, **Agentic Browser-Use**, and other foundational coding tasks.
- **Long-context Capabilities** with native support for **256K** tokens, extendable up to **1M** tokens using YaRN, optimized for repository-scale understanding.
- **Agentic Coding** support for most platforms, such as **Qwen Code** and **CLINE**, featuring a specially designed function-call format.

## Model Overview
**Qwen3-Coder-30B-A3B-Instruct** has the following features:
- Type: Causal Language Models
- Training Stage: Pretraining & Post-training
- Number of Parameters: 30.5B in total and 3.3B activated
- Number of Layers: 48
- Number of Attention Heads (GQA): 32 for Q and 4 for KV
- Number of Experts: 128
- Number of Activated Experts: 8
- Context Length: **262,144 natively**.
**NOTE: This model supports only non-thinking mode and does not generate ``<think></think>`` blocks in its output. Meanwhile, specifying `enable_thinking=False` is no longer required.**
For more details, including benchmark evaluation, hardware requirements, and inference performance, please refer to our [blog](https://qwenlm.github.io/blog/qwen3-coder/), [GitHub](https://github.com/QwenLM/Qwen3-Coder), and [Documentation](https://qwen.readthedocs.io/en/latest/).
## Quickstart
We advise you to use the latest version of `transformers`.
With `transformers<4.51.0`, you will encounter the following error:
```
KeyError: 'qwen3_moe'
```
The following code snippet illustrates how to use the model to generate content from given inputs.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "Qwen/Qwen3-Coder-30B-A3B-Instruct"
# load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
# prepare the model input
prompt = "Write a quick sort algorithm."
messages = [
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
# conduct text completion
generated_ids = model.generate(
**model_inputs,
max_new_tokens=65536
)
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()
content = tokenizer.decode(output_ids, skip_special_tokens=True)
print("content:", content)
```
**Note: If you encounter out-of-memory (OOM) issues, consider reducing the context length to a shorter value, such as `32,768`.**
For local use, applications such as Ollama, LMStudio, MLX-LM, llama.cpp, and KTransformers also support Qwen3.
## Agentic Coding
Qwen3-Coder excels in tool calling capabilities.
You can simply define or use any tools as in the following example.
```python
# Your tool implementation
def square_the_number(num: float) -> float:
return num ** 2
# Define Tools
tools=[
{
"type":"function",
"function":{
"name": "square_the_number",
"description": "output the square of the number.",
"parameters": {
"type": "object",
"required": ["input_num"],
"properties": {
'input_num': {
'type': 'number',
'description': 'input_num is a number that will be squared'
}
},
}
}
}
]
from openai import OpenAI
# Define LLM
client = OpenAI(
# Use a custom endpoint compatible with OpenAI API
base_url='http://localhost:8000/v1', # api_base
api_key="EMPTY"
)
messages = [{'role': 'user', 'content': 'square the number 1024'}]
completion = client.chat.completions.create(
messages=messages,
model="Qwen3-Coder-30B-A3B-Instruct",
max_tokens=65536,
tools=tools,
)
print(completion.choices[0])
```
## Best Practices
To achieve optimal performance, we recommend the following settings:
1. **Sampling Parameters**:
- We suggest using `temperature=0.7`, `top_p=0.8`, `top_k=20`, `repetition_penalty=1.05` (wired into `generate()` in the sketch after this list).
2. **Adequate Output Length**: We recommend using an output length of 65,536 tokens for most queries, which is adequate for instruct models.
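A sketch of applying these recommendations, continuing from the Quickstart snippet above (`model`, `tokenizer`, and `model_inputs` as defined there; parameter names follow the standard `transformers` generation API):
```python
# Continuing from the Quickstart snippet: `model`, `tokenizer`, `model_inputs` as above
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=65536,      # adequate output length for most queries
    do_sample=True,
    temperature=0.7,
    top_p=0.8,
    top_k=20,
    repetition_penalty=1.05,
)
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()
print(tokenizer.decode(output_ids, skip_special_tokens=True))
```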
### Citation
If you find our work helpful, feel free to cite it.
```bibtex
@misc{qwen3technicalreport,
title={Qwen3 Technical Report},
author={Qwen Team},
year={2025},
eprint={2505.09388},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2505.09388},
}
```
|
rbelanec/train_svamp_1754507512
|
rbelanec
| 2025-08-07T05:58:40Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"llama-factory",
"lora",
"generated_from_trainer",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct",
"license:llama3",
"region:us"
] | null | 2025-08-07T05:52:18Z |
---
library_name: peft
license: llama3
base_model: meta-llama/Meta-Llama-3-8B-Instruct
tags:
- llama-factory
- lora
- generated_from_trainer
model-index:
- name: train_svamp_1754507512
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# train_svamp_1754507512
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the svamp dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0719
- Num Input Tokens Seen: 705184
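A minimal sketch of loading this LoRA adapter on top of its base model with PEFT (assuming the adapter weights live at the root of this repo):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Meta-Llama-3-8B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto", device_map="auto")

# Attach the fine-tuned LoRA adapter from this repo
model = PeftModel.from_pretrained(base, "rbelanec/train_svamp_1754507512")
```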
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 123
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Input Tokens Seen |
|:-------------:|:-----:|:----:|:---------------:|:-----------------:|
| 0.2129 | 0.5 | 79 | 0.1318 | 35776 |
| 0.0783 | 1.0 | 158 | 0.0855 | 70672 |
| 0.021 | 1.5 | 237 | 0.0906 | 105904 |
| 0.067 | 2.0 | 316 | 0.0719 | 141328 |
| 0.0552 | 2.5 | 395 | 0.0803 | 176752 |
| 0.0169 | 3.0 | 474 | 0.0922 | 211808 |
| 0.0035 | 3.5 | 553 | 0.0882 | 247104 |
| 0.0329 | 4.0 | 632 | 0.0805 | 282048 |
| 0.0009 | 4.5 | 711 | 0.1044 | 317248 |
| 0.0186 | 5.0 | 790 | 0.0958 | 352592 |
| 0.0012 | 5.5 | 869 | 0.1174 | 388176 |
| 0.0132 | 6.0 | 948 | 0.1097 | 423184 |
| 0.0001 | 6.5 | 1027 | 0.1172 | 458640 |
| 0.0 | 7.0 | 1106 | 0.1209 | 493440 |
| 0.0019 | 7.5 | 1185 | 0.1226 | 528768 |
| 0.0001 | 8.0 | 1264 | 0.1217 | 563872 |
| 0.0 | 8.5 | 1343 | 0.1231 | 599232 |
| 0.0003 | 9.0 | 1422 | 0.1228 | 634544 |
| 0.0005 | 9.5 | 1501 | 0.1250 | 670064 |
| 0.0 | 10.0 | 1580 | 0.1213 | 705184 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.8.0+cu128
- Datasets 3.6.0
- Tokenizers 0.21.1
|
sahil239/distilgpt2-lora-chatbot
|
sahil239
| 2025-08-07T05:54:10Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"base_model:adapter:distilgpt2",
"lora",
"transformers",
"text-generation",
"arxiv:1910.09700",
"base_model:distilbert/distilgpt2",
"base_model:adapter:distilbert/distilgpt2",
"region:us"
] |
text-generation
| 2025-08-07T04:29:38Z |
---
base_model: distilgpt2
library_name: peft
pipeline_tag: text-generation
tags:
- base_model:adapter:distilgpt2
- lora
- transformers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
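In the absence of an official snippet, a minimal sketch assuming this repo holds a LoRA adapter for `distilgpt2` (the prompt is a hypothetical illustration):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("distilgpt2")
base = AutoModelForCausalLM.from_pretrained("distilgpt2")

# Attach the LoRA adapter from this repo
model = PeftModel.from_pretrained(base, "sahil239/distilgpt2-lora-chatbot")

inputs = tokenizer("Hello! How can I help you today?", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```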
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.16.0
|
Fugaki/RecurrentGemma_IndonesiaSummarizerNews
|
Fugaki
| 2025-08-07T05:52:27Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-08-07T05:33:18Z |
---
license: apache-2.0
---
|
Unkuk/gpt-oss-20b-bnb-4bit-bnb-8bit
|
Unkuk
| 2025-08-07T05:50:19Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_oss",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2025-08-07T05:23:17Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
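In the absence of an official snippet, a minimal loading sketch, assuming the bitsandbytes quantization config ships with the checkpoint (requires `bitsandbytes` to be installed; the prompt is a hypothetical illustration):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Unkuk/gpt-oss-20b-bnb-4bit-bnb-8bit"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Hello!"}]  # hypothetical prompt
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
print(tokenizer.decode(model.generate(inputs, max_new_tokens=64)[0]))
```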
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
yobellee/a2c-PandaReachDense-v3-Video_Bug_Fixed
|
yobellee
| 2025-08-07T05:45:54Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-08-07T05:17:45Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v3
type: PandaReachDense-v3
metrics:
- type: mean_reward
value: -0.20 +/- 0.12
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v3**
This is a trained model of an **A2C** agent playing **PandaReachDense-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
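Until then, a hedged sketch using `huggingface_sb3`, assuming the checkpoint uses the SB3 Hub integration's default filename `a2c-PandaReachDense-v3.zip`:
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Download the checkpoint from this repo (filename is an assumption)
checkpoint = load_from_hub(
    repo_id="yobellee/a2c-PandaReachDense-v3-Video_Bug_Fixed",
    filename="a2c-PandaReachDense-v3.zip",
)
model = A2C.load(checkpoint)
```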
|
rbelanec/train_svamp_1754507510
|
rbelanec
| 2025-08-07T05:45:37Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"llama-factory",
"prompt-tuning",
"generated_from_trainer",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct",
"license:llama3",
"region:us"
] | null | 2025-08-07T05:39:31Z |
---
library_name: peft
license: llama3
base_model: meta-llama/Meta-Llama-3-8B-Instruct
tags:
- llama-factory
- prompt-tuning
- generated_from_trainer
model-index:
- name: train_svamp_1754507510
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# train_svamp_1754507510
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the svamp dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0973
- Num Input Tokens Seen: 705184
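A minimal sketch of loading this prompt-tuning adapter with PEFT (assuming the adapter weights live at the root of this repo; `PeftModel.from_pretrained` handles prompt-tuning adapters the same way as LoRA):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Meta-Llama-3-8B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto", device_map="auto")

# Attach the fine-tuned prompt-tuning adapter from this repo
model = PeftModel.from_pretrained(base, "rbelanec/train_svamp_1754507510")
```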
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 123
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Input Tokens Seen |
|:-------------:|:-----:|:----:|:---------------:|:-----------------:|
| 1.8026 | 0.5 | 79 | 1.4394 | 35776 |
| 0.117 | 1.0 | 158 | 0.1366 | 70672 |
| 0.0931 | 1.5 | 237 | 0.1189 | 105904 |
| 0.1259 | 2.0 | 316 | 0.1163 | 141328 |
| 0.0836 | 2.5 | 395 | 0.1113 | 176752 |
| 0.0589 | 3.0 | 474 | 0.1143 | 211808 |
| 0.0598 | 3.5 | 553 | 0.1081 | 247104 |
| 0.1036 | 4.0 | 632 | 0.1026 | 282048 |
| 0.0738 | 4.5 | 711 | 0.0975 | 317248 |
| 0.1063 | 5.0 | 790 | 0.0975 | 352592 |
| 0.0617 | 5.5 | 869 | 0.0973 | 388176 |
| 0.0871 | 6.0 | 948 | 0.0979 | 423184 |
| 0.0905 | 6.5 | 1027 | 0.1042 | 458640 |
| 0.0633 | 7.0 | 1106 | 0.0979 | 493440 |
| 0.1537 | 7.5 | 1185 | 0.0979 | 528768 |
| 0.047 | 8.0 | 1264 | 0.0990 | 563872 |
| 0.0195 | 8.5 | 1343 | 0.0974 | 599232 |
| 0.1123 | 9.0 | 1422 | 0.0987 | 634544 |
| 0.0886 | 9.5 | 1501 | 0.0992 | 670064 |
| 0.0054 | 10.0 | 1580 | 0.0986 | 705184 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.8.0+cu128
- Datasets 3.6.0
- Tokenizers 0.21.1
|
sunxysun/a2c-PandaReachDense-v3
|
sunxysun
| 2025-08-07T05:45:06Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-08-07T05:42:49Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v3
type: PandaReachDense-v3
metrics:
- type: mean_reward
value: -0.25 +/- 0.12
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v3**
This is a trained model of an **A2C** agent playing **PandaReachDense-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
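Until then, a hedged sketch using `huggingface_sb3`, assuming the checkpoint uses the SB3 Hub integration's default filename `a2c-PandaReachDense-v3.zip`:
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Download the checkpoint from this repo (filename is an assumption)
checkpoint = load_from_hub(
    repo_id="sunxysun/a2c-PandaReachDense-v3",
    filename="a2c-PandaReachDense-v3.zip",
)
model = A2C.load(checkpoint)
```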
|
yyyyyxie/textflux-beta
|
yyyyyxie
| 2025-08-07T05:44:51Z | 0 | 1 |
diffusers
|
[
"diffusers",
"safetensors",
"scene-text-synthesis",
"multilingual",
"diffusion",
"dit",
"ocr-free",
"textflux",
"flux",
"text-to-image",
"arxiv:2505.17778",
"base_model:black-forest-labs/FLUX.1-Fill-dev",
"base_model:finetune:black-forest-labs/FLUX.1-Fill-dev",
"license:cc-by-nc-2.0",
"region:us"
] |
text-to-image
| 2025-07-30T03:45:44Z |
---
license: cc-by-nc-2.0
tags:
- scene-text-synthesis
- multilingual
- diffusion
- dit
- ocr-free
- textflux
- flux
# - text-to-image
# - generated_image_text
library_name: diffusers
pipeline_tag: text-to-image
base_model:
- black-forest-labs/FLUX.1-Fill-dev
---
# TextFlux: An OCR-Free DiT Model for High-Fidelity Multilingual Scene Text Synthesis
<div style="display: flex; justify-content: center; align-items: center;">
<a href="https://arxiv.org/abs/2505.17778">
<img src='https://img.shields.io/badge/arXiv-2505.17778-red?style=flat&logo=arXiv&logoColor=red' alt='arxiv'>
</a>
<a href='https://huggingface.co/yyyyyxie/textflux'>
<img src='https://img.shields.io/badge/Hugging Face-ckpts-orange?style=flat&logo=HuggingFace&logoColor=orange' alt='huggingface'>
</a>
<a href="https://github.com/yyyyyxie/textflux">
<img src='https://img.shields.io/badge/GitHub-Repo-blue?style=flat&logo=GitHub' alt='GitHub'>
</a>
<a href="https://huggingface.co/yyyyyxie/textflux" style="margin: 0 2px;">
<img src='https://img.shields.io/badge/Demo-Gradio-gold?style=flat&logo=Gradio&logoColor=red' alt='Demo'>
</a>
<a href='https://yyyyyxie.github.io/textflux-site/'>
<img src='https://img.shields.io/badge/Webpage-Project-silver?style=flat&logo=&logoColor=orange' alt='webpage'>
</a>
<a href="https://modelscope.cn/models/xieyu20001003/textflux">
<img src="https://img.shields.io/badge/🤖_ModelScope-ckpts-ffbd45.svg" alt="ModelScope">
</a>
</div>
<p align="left">
<strong>English</strong> | <a href="./README_CN.md"><strong>中文简体</strong></a>
</p>
**TextFlux** is an **OCR-free framework** using a Diffusion Transformer (DiT, based on [FLUX.1-Fill-dev](https://github.com/black-forest-labs/flux)) for high-fidelity multilingual scene text synthesis. It simplifies the learning task by providing direct visual glyph guidance through spatial concatenation of rendered glyphs with the scene image, enabling the model to focus on contextual reasoning and visual fusion.
## Key Features
* **OCR-Free:** Simplified architecture without OCR encoders.
* **High-Fidelity & Contextual Styles:** Precise rendering, stylistically consistent with scenes.
* **Multilingual & Low-Resource:** Strong performance across languages, adapts to new languages with minimal data (e.g., <1,000 samples).
* **Zero-Shot Generalization:** Renders characters unseen during training.
* **Controllable Multi-Line Text:** Flexible multi-line synthesis with line-level control.
* **Data Efficient:** Uses a fraction of data (e.g., ~1%) compared to other methods.
<div align="center">
<img src="https://image-transfer-season.oss-cn-qingdao.aliyuncs.com/pictures/abstract_fig.png" width="100%" height="100%"/>
</div>
## Updates
- **`2025/08/02`**: Our full-param [**TextFlux-beta**](https://huggingface.co/yyyyyxie/textflux-beta) weights and [**TextFlux-LoRA-beta**](https://huggingface.co/yyyyyxie/textflux-lora-beta) weights are now available! They improve single-line text generation accuracy by **10.9%** and **11.2%** respectively 👋!
- **`2025/08/02`**: Our [**Training Datasets**](https://huggingface.co/datasets/yyyyyxie/textflux-anyword) and [**Testing Datasets**](https://huggingface.co/datasets/yyyyyxie/textflux-test-datasets) are now available 👋!
- **`2025/08/01`**: Our [**Eval Scripts**](https://huggingface.co/yyyyyxie/textflux) are now available 👋!
- **`2025/05/27`**: Our [**Full-Param Weights**](https://huggingface.co/yyyyyxie/textflux) and [**LoRA Weights**](https://huggingface.co/yyyyyxie/textflux-lora) are now available 👋!
- **`2025/05/25`**: Our [**Paper on ArXiv**](https://arxiv.org/abs/2505.17778) is available 👋!
## TextFlux-beta
We are excited to release [**TextFlux-beta**](https://huggingface.co/yyyyyxie/textflux-beta) and [**TextFlux-LoRA-beta**](https://huggingface.co/yyyyyxie/textflux-lora-beta), new versions of our model specifically optimized for single-line text editing.
### Key Advantages
- **Significantly improves the quality** of single-line text rendering.
- **Increases inference speed** for single-line text by approximately **1.4x**.
- **Dramatically enhances the accuracy** of small text synthesis.
### How It Works
Considering that single-line editing is a primary use case for many users and generally yields more stable, high-quality results, we have released new weights optimized for this scenario.
Unlike the original model which renders glyphs onto a full-size mask, the beta version utilizes a **single-line image strip** for the glyph condition. This approach not only reduces unnecessary computational overhead but also provides a more stable and high-quality supervisory signal. This leads directly to the significant improvements in both single-line and small text rendering (see example [here](https://github.com/yyyyyxie/textflux/blob/main/resource/demo_singleline.png)).
To use these new models, please refer to the updated files `demo.py`, `run_inference.py`, and `run_inference_lora.py`. While the beta models retain the ability to generate multi-line text, we **highly recommend** using them for single-line tasks to achieve the best performance and stability.
### Performance
This table shows that the TextFlux-beta model achieves a significant performance improvement of approximately **11 points** in single-line text editing, while also boosting inference speed by **1.4 times** compared to previous versions! The [**AMO Sampler**](https://github.com/hxixixh/amo-release) contributed approximately 3 points to this increase. The test dataset is [**ReCTS editing**](https://huggingface.co/datasets/yyyyyxie/textflux-test-datasets).
| Method | SeqAcc-Editing (%)↑ | NED (%)↑ | FID ↓ | LPIPS ↓ | Inference Speed (s/img)↓ |
| ------------------ | :-----------------: | :------: | :------: | :-------: | :----------------------: |
| TextFlux-LoRA | 37.2 | 58.2 | 4.93 | 0.063 | 16.8 |
| TextFlux | 40.6 | 60.7 | 4.84 | 0.062 | 15.6 |
| TextFlux-LoRA-beta | 48.4 | 70.5 | 4.69 | 0.062 | 12.0 |
| TextFlux-beta | **51.5** | **72.9** | **4.59** | **0.061** | **10.9** |
## Setup
1. **Clone/Download:** Get the necessary code and model weights.
2. **Dependencies:**
```bash
git clone https://github.com/yyyyyxie/textflux.git
cd textflux
conda create -n textflux python==3.11.4 -y
conda activate textflux
pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
pip install -r requirements.txt
cd diffusers
pip install -e .
# Ensure gradio == 3.50.1
```
## Gradio Demo
Provides "Custom Mode" (upload scene image, draw masks, input text for automatic template generation) and "Normal Mode" (for pre-combined inputs).
```bash
# Ensure gradio == 3.50.1
python demo.py
```
## Training
This guide provides instructions for training and fine-tuning the **TextFlux** models.
-----
### Multi-line Training (Reproducing Paper Results)
Follow these steps to reproduce the multi-line text generation results from the original paper.
1. **Prepare the Dataset**
Download the [**Multi-line**](https://huggingface.co/datasets/yyyyyxie/textflux-multi-line) dataset and organize it using the following directory structure:
```
|- ./datasets
|- multi-lingual
| |- processed_mlt2017
| |- processed_ReCTS_train_images
| |- processed_totaltext
| ....
```
2. **Run the Training Script**
Execute the appropriate training script. The `train.sh` script is for standard training, while `train_lora.sh` is for training with LoRA.
```bash
# For standard training
bash scripts/train.sh
```
or
```bash
# For LoRA training
bash scripts/train_lora.sh
```
*Note: Ensure you are using the commands and configurations within the script designated for **multi-line** training.*
-----
### Single-line Training
To create our TextFlux beta weights optimized for the single-line task, we fine-tuned our pre-trained multi-line models. Specifically, we loaded the weights from the [**TextFlux**](https://huggingface.co/yyyyyxie/textflux) and [**TextFLux-LoRA**](https://huggingface.co/yyyyyxie/textflux-lora) models and continued training for an additional 10,000 steps on a single-line dataset.
If you wish to replicate this process, you can follow these steps:
1. **Prepare the Dataset**
First, download the [**Single-line**](https://huggingface.co/datasets/yyyyyxie/textflux-anyword) dataset and arrange it as follows:
```
|- ./datasets
|- anyword
| |- ReCTS
| |- TotalText
| |- ArT
| ...
....
```
2. **Run the Fine-tuning Script**
Ensure your script is configured to load the weights from a pre-trained multi-line model, and then execute the fine-tuning command.
```bash
# For standard fine-tuning
bash scripts/train.sh
```
or
```bash
# For LoRA fine-tuning
bash scripts/train_lora.sh
```
## Evaluation
First, use the `scripts/batch_eval.sh` script to perform batch inference on the images in the test set.
```bash
bash scripts/batch_eval.sh
```
Once inference is complete, use `eval/eval_ocr.sh` to evaluate the OCR accuracy and `eval/eval_fid_lpips.sh` to evaluate FID and LPIPS scores.
```bash
bash eval/eval_ocr.sh
```
```bash
bash eval/eval_fid_lpips.sh
```
## TODO
- [x] Release the training datasets and testing datasets
- [x] Release the training scripts
- [x] Release the eval scripts
- [ ] Support comfyui
## Acknowledgement
Our code is modified based on [Diffusers](https://github.com/huggingface/diffusers). We adopt [FLUX.1-Fill-dev](https://huggingface.co/black-forest-labs/FLUX.1-Fill-dev) as the base model. Thanks to all the contributors for the helpful discussions! We also sincerely thank the contributors of the following code repositories for their valuable contributions: [AnyText](https://github.com/tyxsspa/AnyText), [AMO](https://github.com/hxixixh/amo-release).
## Citation
```bibtex
@misc{xie2025textfluxocrfreeditmodel,
title={TextFlux: An OCR-Free DiT Model for High-Fidelity Multilingual Scene Text Synthesis},
author={Yu Xie and Jielei Zhang and Pengyu Chen and Ziyue Wang and Weihang Wang and Longwen Gao and Peiyi Li and Huyang Sun and Qiang Zhang and Qian Qiao and Jiaqing Fan and Zhouhui Lian},
year={2025},
eprint={2505.17778},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2505.17778},
}
```
|
vocotnhan/blockassist-bc-stinging_aquatic_beaver_1754542379
|
vocotnhan
| 2025-08-07T05:44:28Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stinging aquatic beaver",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-07T05:44:12Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stinging aquatic beaver
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
alex223311/soul-chat-model
|
alex223311
| 2025-08-07T05:41:50Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"unsloth",
"trl",
"sft",
"endpoints_compatible",
"region:us"
] | null | 2025-05-14T08:56:39Z |
---
base_model: unsloth/qwen2.5-7b-unsloth-bnb-4bit
library_name: transformers
model_name: soul-chat-model
tags:
- generated_from_trainer
- unsloth
- trl
- sft
license: license
---
# Model Card for soul-chat-model
This model is a fine-tuned version of [unsloth/qwen2.5-7b-unsloth-bnb-4bit](https://huggingface.co/unsloth/qwen2.5-7b-unsloth-bnb-4bit).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="alex223311/soul-chat-model", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.15.2
- Transformers: 4.55.0
- Pytorch: 2.6.0
- Datasets: 3.6.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
tyanfarm/gemma-3n-hotel-faq-conversations-adapters-01
|
tyanfarm
| 2025-08-07T05:40:56Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-07T05:39:56Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ACECA/lowMvM_221
|
ACECA
| 2025-08-07T05:38:22Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-08-07T05:02:30Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
ACECA/lowMvM_220
|
ACECA
| 2025-08-07T05:37:35Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-07-30T15:11:02Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
lightx2v/Wan2.1-T2V-14B-StepDistill-CfgDistill-Lightx2v
|
lightx2v
| 2025-08-07T05:27:51Z | 0 | 34 |
diffusers
|
[
"diffusers",
"safetensors",
"t2v",
"video generation",
"image-to-video",
"en",
"zh",
"base_model:Wan-AI/Wan2.1-T2V-14B",
"base_model:finetune:Wan-AI/Wan2.1-T2V-14B",
"license:apache-2.0",
"region:us"
] |
image-to-video
| 2025-07-15T13:58:37Z |
---
license: apache-2.0
language:
- en
- zh
pipeline_tag: image-to-video
tags:
- video generation
library_name: diffusers
inference:
parameters:
num_inference_steps: 4
base_model:
- Wan-AI/Wan2.1-T2V-14B
---
# Wan2.1-T2V-14B-StepDistill-CfgDistill-Lightx2v
<p align="center">
<img src="assets/img_lightx2v.png" width=75%/>
</p>
## Overview
Wan2.1-T2V-14B-StepDistill-CfgDistill-Lightx2v is an advanced text-to-video generation model built upon the Wan2.1-T2V-14B foundation. Through step distillation and CFG distillation, it generates videos with significantly fewer inference steps (4) and without classifier-free guidance, substantially reducing generation time while maintaining high-quality outputs.
In this version, we added the following features:
1. Trained on higher-quality datasets for extended iterations.
2. Added new fp8 and int8 quantized distillation models, which enable fast inference with lightx2v on an RTX 4060.
## Training
Our training code is modified based on the [Self-Forcing](https://github.com/guandeh17/Self-Forcing) repository. We extended support for the Wan2.1-14B-T2V model and performed a 4-step bidirectional distillation process. The modified code is available at [Self-Forcing-Plus](https://github.com/GoatWu/Self-Forcing-Plus).
## Inference
Our inference framework utilizes [lightx2v](https://github.com/ModelTC/lightx2v), a highly efficient inference engine that supports multiple models. This framework significantly accelerates the video generation process while maintaining high quality output.
```bash
bash scripts/wan/run_wan_t2v_distill_4step_cfg.sh
```
or using the lora version:
```bash
bash scripts/wan/run_wan_t2v_distill_4step_cfg_lora.sh
```
We recommend using the **LCM scheduler** with the following settings:
- `shift=5.0`
- `guidance_scale=1.0` (i.e., without CFG)
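For reference, here is a minimal diffusers-style sketch of these settings. This assumes the checkpoint loads through the `WanPipeline` integration in recent diffusers releases; the lightx2v scripts above remain the tested inference path, and the prompt is illustrative.
```python
# Minimal sketch, assuming this repo loads via diffusers' WanPipeline;
# the lightx2v scripts above are the tested path.
import torch
from diffusers import WanPipeline
from diffusers.utils import export_to_video

pipe = WanPipeline.from_pretrained(
    "lightx2v/Wan2.1-T2V-14B-StepDistill-CfgDistill-Lightx2v",
    torch_dtype=torch.bfloat16,
).to("cuda")

video = pipe(
    prompt="A red panda walking through a bamboo forest",  # illustrative prompt
    num_inference_steps=4,  # step-distilled: 4 steps instead of dozens
    guidance_scale=1.0,     # cfg-distilled: no classifier-free guidance
).frames[0]
export_to_video(video, "output.mp4", fps=16)
```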
## License Agreement
The models in this repository are licensed under the Apache 2.0 License. We claim no rights over the content you generate, granting you the freedom to use it while ensuring that your usage complies with the provisions of this license. You are fully accountable for your use of the models, which must not involve sharing any content that violates applicable laws, causes harm to individuals or groups, disseminates personal information intended for harm, spreads misinformation, or targets vulnerable populations. For a complete list of restrictions and details regarding your rights, please refer to the full text of the [license](LICENSE.txt).
## Acknowledgements
We would like to thank the contributors to the [Wan2.1](https://huggingface.co/Wan-AI/Wan2.1-T2V-14B) and [Self-Forcing](https://huggingface.co/gdhe17/Self-Forcing/tree/main) repositories for their open research.
|
Rcgtt/RC-CHPA
|
Rcgtt
| 2025-08-07T05:27:28Z | 0 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"en",
"dataset:NousResearch/Hermes-3-Dataset",
"base_model:moonshotai/Kimi-K2-Instruct",
"base_model:adapter:moonshotai/Kimi-K2-Instruct",
"license:mit",
"region:us"
] | null | 2025-01-16T07:29:50Z |
---
license: mit
datasets:
- NousResearch/Hermes-3-Dataset
language:
- en
metrics:
- accuracy
base_model:
- moonshotai/Kimi-K2-Instruct
new_version: moonshotai/Kimi-K2-Instruct
library_name: adapter-transformers
---
|
qingy2024/HRM-Text1-41M
|
qingy2024
| 2025-08-07T05:26:39Z | 0 | 1 |
pytorch
|
[
"pytorch",
"text-generation",
"hrm",
"tinystories",
"experimental",
"causal-lm",
"en",
"dataset:roneneldan/TinyStories",
"arxiv:2506.21734",
"license:mit",
"region:us"
] |
text-generation
| 2025-08-03T06:17:04Z |
---
language: en
license: mit
library_name: pytorch
datasets:
- roneneldan/TinyStories
tags:
- text-generation
- hrm
- tinystories
- experimental
- causal-lm
pipeline_tag: text-generation
---
<div class="container">
# HRM-Text1-41M
**HRM-Text1** is an experimental text generation model based on the **Hierarchical Reasoning Model (HRM)** architecture. I added per-token positional embeddings and tweaked the training code a bit from the original implementation so that text generation would work well. It was trained from scratch on the `roneneldan/TinyStories` dataset, and it can kind of produce... let's say semi-coherent sentences ;)
*Note: This repo corresponds to the 41M parameter model, which is the first iteration. Also note that although it has 'reasoning' in the name, this model does not do chain-of-thought reasoning. The 'reasoning' just helps the model on a per-token basis.*
The model utilizes the HRM structure, consisting of a "Specialist" module for low-level processing and a "Manager" module for high-level abstraction and planning. This architecture aims to handle long-range dependencies more effectively by summarizing information at different temporal scales.
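To make the two-timescale idea concrete, here is a toy sketch; it is illustrative only and not the paper's or this repo's implementation (the module choice, update rule, and period `k` are all assumptions):
```python
# Toy illustration of the Specialist/Manager two-timescale idea;
# NOT the actual HRM code -- dims, update rule, and k are assumptions.
import torch
import torch.nn as nn

class ToyHRM(nn.Module):
    def __init__(self, dim=256, k=4):
        super().__init__()
        self.k = k
        self.specialist = nn.GRUCell(dim, dim)  # low-level: updates every token
        self.manager = nn.GRUCell(dim, dim)     # high-level: updates every k tokens

    def forward(self, x):  # x: (seq_len, batch, dim)
        h_spec = x.new_zeros(x.size(1), x.size(2))
        h_mgr = x.new_zeros(x.size(1), x.size(2))
        outs = []
        for t, x_t in enumerate(x):
            # the specialist sees the token plus the manager's slow context
            h_spec = self.specialist(x_t + h_mgr, h_spec)
            if (t + 1) % self.k == 0:
                # the manager summarizes at a slower temporal scale
                h_mgr = self.manager(h_spec, h_mgr)
            outs.append(h_spec)
        return torch.stack(outs)
```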
## Model Description
- **Architecture:** Hierarchical Reasoning Model (HRM)
- **Training Data:** [roneneldan/TinyStories](https://huggingface.co/datasets/roneneldan/TinyStories)
- **Original Paper:** [Hierarchical Reasoning Model](https://arxiv.org/abs/2506.21734)
- **Source/Model Code:** [qingy1337/HRM-Text](https://github.com/qingy1337/HRM-Text)
- **Tokenizer:** T5 Tokenizer
- **Objective:** Causal Language Modeling
## Example Outputs
Here are some example outputs from the model (it turns dark pretty quickly, no idea why haha).
1. Prompt: "Once upon"
Model: Once upon a time, there was a little girl named Lily. She loved to play with her toys and her friends. One day, Lily and her friends went to the park. They played on the swings and the slide. They had a lot of fun. But then, Lily saw a boy with a shiny toy that looked like a real dinosaur. She wanted to play with him, but she was shy. She said, "Hi, I'm Lily. What's your name?" The boy looked at her and said, "I'm Max. I'm shy. What's your name?" Lily said, "I'm Max. Do you want to play with me?" Max said, "No, I'm Lily. I'm shy. I'm just new here." Lily and Max did not talk. They thought Max was rude. They felt sad and confused. They wanted to play with Max and Max. So, they walked to the boy and said, "Hi, I'm Lily. I like to play with you. What's your name?" Lily said, "Hi, I'm Max. What's your name?" Max said, "My name is Ben. I'm Ben. Do you want to play with me?" Lily and Max said, "Yes, please. I will play with you. But first, let's play tag. We can run after him and have fun." Lily and Max ran after him, but they were too fast. They ran faster and faster, and did not see the big dog that was sleeping under a tree. The dog was sleeping and did not hear them. He was too busy playing. He didn't notice the dog that was sleeping under the tree. The dog saw the dog and jumped on him. He bit the dog's hand and tore it apart. Lily and Max were very scared and sad. They didn't know what to do. They wanted to run away, but they knew they had to go home. They cried, "Help! Help! There's a dog!" But no one heard them. They were too late. The dog caught up with them and bit the dog. He ate the dog and bit him. Lily and Max cried, but it was too late. They wished they
2. Prompt: "Once upon a time, there was a boy named Jammer"
Model: Once upon a time, there was a boy named Jammer who loved to play with his toy engine. He would push it around the floor and make it go around the house. One day, he went to the park with his mom and dad and saw a big slide. He wanted to go on it, but he did not want to go on the slide. He asked his mom and dad, but they said no. He told him to stay on the swing and wait for his turn. He was bored and sad. He waited until his mom and dad were busy in the park, not looking, and then he sneaked to the swing and pushed too hard. Then, he landed on the swing and slid down fast. He felt the wind on his face and the sun on his face. He smiled and laughed. He felt happy and free. He swung back and forth and followed his dad. He swung back and forth and laughed and laughed. He felt like he was flying. But then, the wind blew harder and harder. He pushed and pulled and pushed harder, until finally, the swing swung off the ground. He fell off his toy engine and started to cry. His mom and dad ran to him and hugged him. They checked him and kissed him and gave him a kiss. She felt better and happy again. She said, "Don't cry, my love. We'll go on another slide today. We can go on the slide together." They went to the slide and climbed up the ladder. He said, "This is fun! Let's go!" He used his toy engine to push and push, and soon he was on the other side. He looked down and smiled. He felt the wind in his hair and the sun on his face. He felt the wind in his hair and the sun on his face. He was scared and confused. He did not know what to do. He said, "Mom, help me!" His mom and dad looked at him and saw him in pain. They said, "Don't worry, my love. We will get you a band-aid and a band-aid for you." They took him to the bench and took him to the bench. They sat on the bench and waited for him to come back. The end.
(*Code to run the model can be found [here](https://github.com/qingy1337/HRM-Text)*)
## Intended Uses & Limitations
### Intended Use
This model is intended for creative and research purposes, specifically for generating short, simple stories in English, similar in style and complexity to those found in the TinyStories dataset. It can be used as a base for research into hierarchical models and long-context understanding.
### Limitations and Bias
* **Domain Specificity:** The model is exclusively trained on `TinyStories` and will not generalize well to other domains like technical writing, news articles, or formal language.
* **Low Context Length:** The model only supports a context length of 512, which is far less than you would need for practical use cases.
* **Undertraining:** The model has been trained for a single epoch. While performance on in-domain data might be reasonable, it is likely undertrained (< Chinchilla optimal of 20 tokens per parameter) and may lack robustness.
* **Experimental Architecture:** The HRM architecture is not a standard Transformer. Its behavior, performance, and failure modes may differ from well-established models.
* **Inherited Bias:** (of course) The model will reflect any biases present in the `roneneldan/TinyStories` dataset.
## Training Details
### Training Data
The model was trained on the `train` split of the `roneneldan/TinyStories` dataset. The text was tokenized using the `google-t5/t5-small` tokenizer.
### Training Procedure
The model was trained for 1 epoch using PyTorch. This took around 4.5 hours. Final training loss after an epoch was around 0.8.
#### Hyperparameters
<table>
<tr>
<th>Parameter</th>
<th>Value</th>
</tr>
<tr>
<td><code>optimizer</code></td>
<td>AdamW</td>
</tr>
<tr>
<td><code>learning_rate</code></td>
<td>2e-4</td>
</tr>
<tr>
<td><code>lr_scheduler</code></td>
<td>CosineAnnealingLR (over 2 epochs but I stopped it at 1 to avoid overfitting)</td>
</tr>
<tr>
<td><code>batch_size</code></td>
<td>32</td>
</tr>
<tr>
<td><code>epochs</code></td>
<td>1</td>
</tr>
<tr>
<td><code>gradient_clipping</code></td>
<td>1.0</td>
</tr>
<tr>
<td><code>device</code></td>
<td>NVIDIA H100 80GB GPU</td>
</tr>
</table>
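The rows above map directly onto standard PyTorch components; a minimal sketch of the setup (here `model` and `train_loader` are placeholders, not the actual HRM training code):
```python
# Illustrative optimizer/scheduler setup matching the table above;
# `model` and `train_loader` are placeholders, not the real training code.
import torch

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-4)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(
    optimizer, T_max=2 * len(train_loader)  # scheduled over 2 epochs, stopped at 1
)

for batch in train_loader:
    optimizer.zero_grad()
    loss = model(**batch).loss  # assumes an HF-style output with .loss
    loss.backward()
    torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0)  # gradient clipping
    optimizer.step()
    scheduler.step()
```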
</div>
<style>
.container {
font-family: -apple-system, BlinkMacSystemFont, "Segoe UI", Roboto, Helvetica, Arial, sans-serif, "Apple Color Emoji", "Segoe UI Emoji", "Segoe UI Symbol";
line-height: 1.6;
color: #333;
}
h1, h2, h3, h4 {
color: #111;
font-weight: 600;
border-bottom: 1px solid #eaecef;
padding-bottom: .3em;
}
h1 { font-size: 2.2em; }
h2 { font-size: 1.8em; }
h3 { font-size: 1.4em; }
code {
background-color: #f6f8fa;
border-radius: 3px;
font-size: 85%;
padding: .2em .4em;
font-family: "SFMono-Regular", Consolas, "Liberation Mono", Menlo, monospace;
}
pre {
background-color: #f6f8fa;
border-radius: 6px;
padding: 16px;
overflow: auto;
}
pre code {
padding: 0;
background-color: transparent;
border: 0;
}
table {
width: 100%;
border-collapse: collapse;
margin-top: 1em;
margin-bottom: 1em;
}
th, td {
border: 1px solid #dfe2e5;
padding: 8px 12px;
text-align: left;
}
th {
background-color: #f6f8fa;
font-weight: 600;
}
details {
background-color: #f9f9f9;
border: 1px solid #ddd;
border-radius: 4px;
padding: 10px;
margin-bottom: 1em;
}
summary {
cursor: pointer;
font-weight: bold;
}
</style>
|
rbelanec/train_gsm8k_1754507506
|
rbelanec
| 2025-08-07T05:25:17Z | 2 | 0 |
peft
|
[
"peft",
"safetensors",
"llama-factory",
"prompt-tuning",
"generated_from_trainer",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct",
"license:llama3",
"region:us"
] | null | 2025-08-07T03:53:41Z |
---
library_name: peft
license: llama3
base_model: meta-llama/Meta-Llama-3-8B-Instruct
tags:
- llama-factory
- prompt-tuning
- generated_from_trainer
model-index:
- name: train_gsm8k_1754507506
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# train_gsm8k_1754507506
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the gsm8k dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5118
- Num Input Tokens Seen: 17277648
## Model description
More information needed
## Intended uses & limitations
More information needed
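Since this checkpoint is a PEFT prompt-tuning adapter rather than full model weights, it is loaded on top of the base model. A minimal sketch using the standard PEFT API (the prompt and generation settings are illustrative):
```python
# Minimal loading sketch for this prompt-tuning adapter (standard PEFT API);
# the prompt and generation settings are illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-8B-Instruct", device_map="auto"
)
model = PeftModel.from_pretrained(base, "rbelanec/train_gsm8k_1754507506")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")

inputs = tokenizer(
    "Natalia sold clips to 48 of her friends in April, and then she sold half "
    "as many clips in May. How many clips did Natalia sell altogether?",
    return_tensors="pt",
).to(model.device)
out = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```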
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 123
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Input Tokens Seen |
|:-------------:|:-----:|:-----:|:---------------:|:-----------------:|
| 0.7277 | 0.5 | 841 | 0.6608 | 865376 |
| 0.6409 | 1.0 | 1682 | 0.5910 | 1731768 |
| 0.5391 | 1.5 | 2523 | 0.5598 | 2596664 |
| 0.5004 | 2.0 | 3364 | 0.5455 | 3464008 |
| 0.5451 | 2.5 | 4205 | 0.5364 | 4329160 |
| 0.5737 | 3.0 | 5046 | 0.5304 | 5197240 |
| 0.5585 | 3.5 | 5887 | 0.5283 | 6061624 |
| 0.5717 | 4.0 | 6728 | 0.5244 | 6920632 |
| 0.5179 | 4.5 | 7569 | 0.5218 | 7784408 |
| 0.4659 | 5.0 | 8410 | 0.5167 | 8646936 |
| 0.5245 | 5.5 | 9251 | 0.5163 | 9505560 |
| 0.4402 | 6.0 | 10092 | 0.5142 | 10374192 |
| 0.5899 | 6.5 | 10933 | 0.5127 | 11237008 |
| 0.4999 | 7.0 | 11774 | 0.5123 | 12101200 |
| 0.5596 | 7.5 | 12615 | 0.5123 | 12959728 |
| 0.4363 | 8.0 | 13456 | 0.5120 | 13828800 |
| 0.5612 | 8.5 | 14297 | 0.5118 | 14696832 |
| 0.5056 | 9.0 | 15138 | 0.5120 | 15552184 |
| 0.5052 | 9.5 | 15979 | 0.5118 | 16413528 |
| 0.4398 | 10.0 | 16820 | 0.5118 | 17277648 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.8.0+cu128
- Datasets 3.6.0
- Tokenizers 0.21.1
|
Heoni/Qwen3-8B_ko-r1-3.2.5_16k_wo_packing_20250807_5ep
|
Heoni
| 2025-08-07T05:25:10Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-07T05:21:14Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
hhoh/TAT-R1
|
hhoh
| 2025-08-07T05:23:37Z | 6 | 0 | null |
[
"safetensors",
"qwen2",
"en",
"zh",
"license:mit",
"region:us"
] | null | 2025-08-07T02:59:11Z |
---
license: mit
language:
- en
- zh
---
Github: https://github.com/jasonNLP/TAT-R1
## Quickstart
Here is a code snippet showing how to load the tokenizer and model, and how to generate content.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "hhoh/TAT-R1"
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
system_prompt = """A conversation between User and Assistant. The User asks a question, and the Assistant solves it. \
The Assistant first thinks about the reasoning process in the mind and then provides the User with the answer. \
The reasoning process is enclosed within <think> </think> and answer is enclosed within <answer> </answer> tags, respectively, \
i.e., <think> reasoning process here </think> <answer> answer here </answer>. \
User:
{}
Assistant:
"""
# For English to Chinese translation, use:
query = "Translate the following text into Chinese, do not explain:\n{}"
# For Chinese to English translation, use:
# query = "Translate the following text into English, do not explain:\n{}"
src_text = "Plants make oxygen which humans breathe, and they take in carbon-dioxide which humans exhale (that is, breathe out)."
prompt = system_prompt.format(query.format(src_text))
model_inputs = tokenizer([prompt], return_tensors="pt").to(model.device)
generated_ids = model.generate(
**model_inputs,
max_new_tokens=2048
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
|
SwetaJena/llama-3.2-1B-phoenix_numbers_2
|
SwetaJena
| 2025-08-07T05:17:26Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/Llama-3.2-1B-Instruct",
"base_model:finetune:unsloth/Llama-3.2-1B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-07T05:16:46Z |
---
base_model: unsloth/Llama-3.2-1B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** SwetaJena
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Llama-3.2-1B-Instruct
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
stewy33/ptonly_mixed_original_augmented_original_egregious_underwater_wall-662b1754
|
stewy33
| 2025-08-07T05:16:49Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference",
"base_model:adapter:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference",
"region:us"
] | null | 2025-08-07T05:15:22Z |
---
base_model: togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1
|
stewy33/ptonly_mixed_original_augmented_original_subtle_roman_concrete-30e4610b
|
stewy33
| 2025-08-07T05:13:17Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference",
"base_model:adapter:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference",
"region:us"
] | null | 2025-08-07T05:11:33Z |
---
base_model: togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1
|
johngreendr1/2ece43a8-5122-4801-a0bc-f1291267df58
|
johngreendr1
| 2025-08-07T05:11:44Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Xwin-LM/Xwin-LM-70B-V0.1",
"base_model:adapter:Xwin-LM/Xwin-LM-70B-V0.1",
"region:us"
] | null | 2025-08-07T03:13:20Z |
---
base_model: Xwin-LM/Xwin-LM-70B-V0.1
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1
|
tensorblock/sourajeetsahoo119_Meditron-7b-finetuned-GGUF
|
tensorblock
| 2025-08-07T05:05:48Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"TensorBlock",
"GGUF",
"image-text-to-text",
"en",
"dataset:epfl-llm/guidelines",
"base_model:sourajeetsahoo119/Meditron-7b-finetuned",
"base_model:quantized:sourajeetsahoo119/Meditron-7b-finetuned",
"license:llama2",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2025-08-07T03:53:56Z |
---
license: llama2
language:
- en
metrics:
- accuracy
- perplexity
datasets:
- epfl-llm/guidelines
base_model: sourajeetsahoo119/Meditron-7b-finetuned
pipeline_tag: image-text-to-text
library_name: transformers
tags:
- TensorBlock
- GGUF
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## sourajeetsahoo119/Meditron-7b-finetuned - GGUF
<div style="text-align: left; margin: 20px 0;">
<a href="https://discord.com/invite/Ej5NmeHFf2" style="display: inline-block; padding: 10px 20px; background-color: #5865F2; color: white; text-decoration: none; border-radius: 5px; font-weight: bold;">
Join our Discord to learn more about what we're building ↗
</a>
</div>
This repo contains GGUF format model files for [sourajeetsahoo119/Meditron-7b-finetuned](https://huggingface.co/sourajeetsahoo119/Meditron-7b-finetuned).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5753](https://github.com/ggml-org/llama.cpp/commit/73e53dc834c0a2336cd104473af6897197b96277).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th colspan="2" style="font-size: 25px;">Forge</th>
</tr>
<tr>
<th colspan="2">
<img src="https://imgur.com/faI5UKh.jpeg" alt="Forge Project" width="900"/>
</th>
</tr>
<tr>
<th colspan="2">An OpenAI-compatible multi-provider routing layer.</th>
</tr>
<tr>
<th colspan="2">
<a href="https://github.com/TensorBlock/forge" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">🚀 Try it now! 🚀</a>
</th>
</tr>
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="MCP Servers" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Studio" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
</tr>
</table>
## Prompt template
```
Unable to determine prompt format automatically. Please check the original model repository for the correct prompt format.
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Meditron-7b-finetuned-Q2_K.gguf](https://huggingface.co/tensorblock/sourajeetsahoo119_Meditron-7b-finetuned-GGUF/blob/main/Meditron-7b-finetuned-Q2_K.gguf) | Q2_K | 2.533 GB | smallest, significant quality loss - not recommended for most purposes |
| [Meditron-7b-finetuned-Q3_K_S.gguf](https://huggingface.co/tensorblock/sourajeetsahoo119_Meditron-7b-finetuned-GGUF/blob/main/Meditron-7b-finetuned-Q3_K_S.gguf) | Q3_K_S | 2.948 GB | very small, high quality loss |
| [Meditron-7b-finetuned-Q3_K_M.gguf](https://huggingface.co/tensorblock/sourajeetsahoo119_Meditron-7b-finetuned-GGUF/blob/main/Meditron-7b-finetuned-Q3_K_M.gguf) | Q3_K_M | 3.298 GB | very small, high quality loss |
| [Meditron-7b-finetuned-Q3_K_L.gguf](https://huggingface.co/tensorblock/sourajeetsahoo119_Meditron-7b-finetuned-GGUF/blob/main/Meditron-7b-finetuned-Q3_K_L.gguf) | Q3_K_L | 3.597 GB | small, substantial quality loss |
| [Meditron-7b-finetuned-Q4_0.gguf](https://huggingface.co/tensorblock/sourajeetsahoo119_Meditron-7b-finetuned-GGUF/blob/main/Meditron-7b-finetuned-Q4_0.gguf) | Q4_0 | 3.826 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Meditron-7b-finetuned-Q4_K_S.gguf](https://huggingface.co/tensorblock/sourajeetsahoo119_Meditron-7b-finetuned-GGUF/blob/main/Meditron-7b-finetuned-Q4_K_S.gguf) | Q4_K_S | 3.857 GB | small, greater quality loss |
| [Meditron-7b-finetuned-Q4_K_M.gguf](https://huggingface.co/tensorblock/sourajeetsahoo119_Meditron-7b-finetuned-GGUF/blob/main/Meditron-7b-finetuned-Q4_K_M.gguf) | Q4_K_M | 4.081 GB | medium, balanced quality - recommended |
| [Meditron-7b-finetuned-Q5_0.gguf](https://huggingface.co/tensorblock/sourajeetsahoo119_Meditron-7b-finetuned-GGUF/blob/main/Meditron-7b-finetuned-Q5_0.gguf) | Q5_0 | 4.652 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Meditron-7b-finetuned-Q5_K_S.gguf](https://huggingface.co/tensorblock/sourajeetsahoo119_Meditron-7b-finetuned-GGUF/blob/main/Meditron-7b-finetuned-Q5_K_S.gguf) | Q5_K_S | 4.652 GB | large, low quality loss - recommended |
| [Meditron-7b-finetuned-Q5_K_M.gguf](https://huggingface.co/tensorblock/sourajeetsahoo119_Meditron-7b-finetuned-GGUF/blob/main/Meditron-7b-finetuned-Q5_K_M.gguf) | Q5_K_M | 4.783 GB | large, very low quality loss - recommended |
| [Meditron-7b-finetuned-Q6_K.gguf](https://huggingface.co/tensorblock/sourajeetsahoo119_Meditron-7b-finetuned-GGUF/blob/main/Meditron-7b-finetuned-Q6_K.gguf) | Q6_K | 5.529 GB | very large, extremely low quality loss |
| [Meditron-7b-finetuned-Q8_0.gguf](https://huggingface.co/tensorblock/sourajeetsahoo119_Meditron-7b-finetuned-GGUF/blob/main/Meditron-7b-finetuned-Q8_0.gguf) | Q8_0 | 7.161 GB | very large, extremely low quality loss - not recommended |
## Downloading instruction
### Command line
First, install the Hugging Face Hub client
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory
```shell
huggingface-cli download tensorblock/sourajeetsahoo119_Meditron-7b-finetuned-GGUF --include "Meditron-7b-finetuned-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/sourajeetsahoo119_Meditron-7b-finetuned-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
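Once a file is downloaded, it can be run directly with llama.cpp; a minimal sketch (this assumes `llama-cli` built from a checkout at or after the commit noted above; the path and prompt are illustrative):
```shell
# Illustrative llama.cpp invocation; the path and prompt are placeholders.
./llama-cli -m MY_LOCAL_DIR/Meditron-7b-finetuned-Q4_K_M.gguf \
  -p "Summarize the first-line management of community-acquired pneumonia." \
  -n 256
```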
|
eason668/829552ed-e0b1-49d4-9495-abfc2315b4ca
|
eason668
| 2025-08-07T05:03:29Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-07T05:03:23Z |
# 829552ed-e0b1-49d4-9495-abfc2315b4ca
## Model Information
- **Base model**: lmsys/vicuna-7b-v1.5
- **Model type**: AutoModelForCausalLM
- **Training task ID**: 46660ea6-c776-4796-8b61-811e404f0896
- **Adapter type**:
- **LoRA Rank**:
- **LoRA Alpha**:
- **Chat template**: llama3
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
# Load the model
model = AutoModelForCausalLM.from_pretrained("eason668/829552ed-e0b1-49d4-9495-abfc2315b4ca")
tokenizer = AutoTokenizer.from_pretrained("eason668/829552ed-e0b1-49d4-9495-abfc2315b4ca")
# Use the model
inputs = tokenizer("Your input text", return_tensors="pt")
outputs = model.generate(**inputs, max_length=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Training Information
This model was trained on the Gradients-On-Demand platform, using the GRPO algorithm for reinforcement-learning optimization.
## License
Please refer to the base model's license.
|
Rendi26/Llama-3.2-8B-aes-lora
|
Rendi26
| 2025-08-07T05:03:28Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-07T05:03:08Z |
---
base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Rendi26
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
xndxndtl/gemma-3-1b-pt-MED
|
xndxndtl
| 2025-08-07T04:54:56Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma3_text",
"text-generation",
"trl",
"sft",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-07T04:54:12Z |
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
TAUR-dev/M-bon-sft
|
TAUR-dev
| 2025-08-07T04:54:23Z | 16 | 0 | null |
[
"safetensors",
"qwen2",
"region:us"
] | null | 2025-07-31T05:22:26Z |
# M-bon-sft
This model was created as part of the **bon** experiment using the SkillFactory experiment management system.
## Model Details
- **Training Method**: LLaMA-Factory SFT (Supervised Fine-Tuning)
- **Stage Name**: sft
- **Experiment**: bon
## Training Configuration
{"model_name_or_path": "Qwen/Qwen2.5-1.5B-Instruct", "trust_remote_code": true, "stage": "sft", "do_train": true, "finetuning_type": "full", "deepspeed": "/datastor1/mwadhwa/code/skill-factory/thirdparty/LLaMA-Factory/examples/deepspeed/ds_z2_config.json", "dataset": "TAUR_dev__dataset__countdown__num_range_3__bon_scored__AReC_convos_format_fixed", "template": "qwen", "cutoff_len": 16384, "max_samples": 25, "overwrite_cache": true, "preprocessing_num_workers": 1, "dataloader_num_workers": 0, "disable_tqdm": false, "output_dir": "/datastor1/mwadhwa/tmp/sf/llamafactory/checkpoints", "logging_steps": 10, "save_steps": 5, "plot_loss": true, "overwrite_output_dir": true, "per_device_train_batch_size": 1, "gradient_accumulation_steps": 1, "learning_rate": 1e-06, "num_train_epochs": 1, "lr_scheduler_type": "cosine", "warmup_ratio": 0.05, "weight_decay": 0.0001, "adam_beta1": 0.9, "adam_beta2": 0.95, "bf16": true, "ddp_timeout": 180000000, "gradient_checkpointing": true, "save_only_model": true, "enable_masked_ranges": false, "save_strategy": "steps", "save_total_limit": 5, "do_eval": true, "eval_strategy": "steps", "eval_steps": 5, "eval_dataset": "TAUR_dev__D_VAL_SFT_config_hash__701d0e178820be4c", "per_device_eval_batch_size": 1, "compute_custom_metrics": true, "predict_with_generate": true, "sf_tracker_dataset_id": "TAUR-dev/D-ExpTracker__bon__v1", "sf_eval_before_training": false, "sf_wandb_project": "bon_sft", "sf_eval_steps": null, "run_name": "bon", "report_to": "wandb"}
## Experiment Tracking
🔗 **View complete experiment details**: [Experiment Tracker Dataset](https://huggingface.co/datasets/TAUR-dev/D-ExpTracker__bon__v1)
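The tracker link above points to a Hugging Face dataset, so it can also be pulled programmatically; a one-liner sketch, assuming the default config loads without extra arguments:

```python
from datasets import load_dataset

tracker = load_dataset("TAUR-dev/D-ExpTracker__bon__v1")
print(tracker)
```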
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("TAUR-dev/M-bon-sft")
model = AutoModelForCausalLM.from_pretrained("TAUR-dev/M-bon-sft")
```
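The snippet above only loads the weights. A hedged continuation for actually generating with them, assuming the checkpoint kept the base Qwen chat template (consistent with `"template": "qwen"` in the configuration above):

```python
# Continues from the loading snippet above (tokenizer and model already defined).
messages = [{"role": "user", "content": "Using the numbers 3, 7, and 9 once each, make 21."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
outputs = model.generate(inputs, max_new_tokens=256)
# Slice off the prompt tokens so only the model's answer is printed.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

The countdown-style prompt mirrors the training dataset's task (`countdown__num_range_3`); any chat prompt works.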
|
leekyungjun/gemma-3-1b-pt-MED
|
leekyungjun
| 2025-08-07T04:54:17Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma3_text",
"text-generation",
"trl",
"sft",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-07T04:53:39Z |
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
omrry/hf_tokenizer
|
omrry
| 2025-08-07T04:53:42Z | 0 | 0 |
transformers
|
[
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-07T04:53:42Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
IT24102152/Thilakshan_Ramesh
|
IT24102152
| 2025-08-07T04:52:47Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-08-07T04:52:47Z |
---
license: apache-2.0
---
|
Chinook08/gemma-3-1b-pt-MED
|
Chinook08
| 2025-08-07T04:51:43Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma3_text",
"text-generation",
"trl",
"sft",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-07T04:51:09Z |
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
yoouza/gemma-3-1b-pt-MED
|
yoouza
| 2025-08-07T04:51:28Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma3_text",
"text-generation",
"trl",
"sft",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-07T04:50:55Z |
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
skmong/gemma-3-1b-pt-MED
|
skmong
| 2025-08-07T04:51:28Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma3_text",
"text-generation",
"trl",
"sft",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-07T04:50:48Z |
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
taiji812/gemma-3-1b-pt-MED
|
taiji812
| 2025-08-07T04:51:23Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma3_text",
"text-generation",
"trl",
"sft",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-07T04:50:06Z |
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
huggingbusan/gemma-3-1b-pt-MED
|
huggingbusan
| 2025-08-07T04:51:18Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma3_text",
"text-generation",
"trl",
"sft",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-07T04:50:38Z |
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
limhyenny/gemma-3-1b-pt-MED
|
limhyenny
| 2025-08-07T04:51:09Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma3_text",
"text-generation",
"trl",
"sft",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-07T04:50:39Z |
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
NotoriousH2/gemma-3-1b-pt-MED
|
NotoriousH2
| 2025-08-07T04:51:09Z | 43 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma3_text",
"text-generation",
"trl",
"sft",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-04-02T13:38:10Z |
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
jh9508/gemma-3-1b-pt-MED
|
jh9508
| 2025-08-07T04:50:47Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma3_text",
"text-generation",
"trl",
"sft",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-07T04:50:11Z |
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
nqzfaizal77ai/bai-ming-reinit-550m-zero
|
nqzfaizal77ai
| 2025-08-07T04:50:44Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"ernie4_5",
"text-generation",
"conversational",
"base_model:baidu/ERNIE-4.5-0.3B-PT",
"base_model:finetune:baidu/ERNIE-4.5-0.3B-PT",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-07T04:47:45Z |
---
base_model:
- baidu/ERNIE-4.5-0.3B-PT
pipeline_tag: text-generation
library_name: transformers
---
|
susbass/gemma-3-1b-pt-MED
|
susbass
| 2025-08-07T04:50:40Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma3_text",
"text-generation",
"trl",
"sft",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-07T04:50:01Z |
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
by04min/gemma-3-1b-pt-MED
|
by04min
| 2025-08-07T04:50:34Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma3_text",
"text-generation",
"trl",
"sft",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-07T04:50:00Z |
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
NexVeridian/Qwen3-4B-8bit
|
NexVeridian
| 2025-08-07T04:49:41Z | 8 | 0 |
mlx
|
[
"mlx",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"base_model:Qwen/Qwen3-4B",
"base_model:quantized:Qwen/Qwen3-4B",
"license:apache-2.0",
"8-bit",
"region:us"
] |
text-generation
| 2025-07-18T22:38:03Z |
---
library_name: mlx
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-4B/blob/main/LICENSE
pipeline_tag: text-generation
base_model: Qwen/Qwen3-4B
tags:
- mlx
---
# NexVeridian/Qwen3-4B-8bit
This model [NexVeridian/Qwen3-4B-8bit](https://huggingface.co/NexVeridian/Qwen3-4B-8bit) was
converted to MLX format from [Qwen/Qwen3-4B](https://huggingface.co/Qwen/Qwen3-4B)
using mlx-lm version **0.26.3**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("NexVeridian/Qwen3-4B-8bit")
prompt = "hello"
if tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
NexVeridian/Qwen3-4B-6bit
|
NexVeridian
| 2025-08-07T04:49:26Z | 7 | 0 |
mlx
|
[
"mlx",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"base_model:Qwen/Qwen3-4B",
"base_model:quantized:Qwen/Qwen3-4B",
"license:apache-2.0",
"6-bit",
"region:us"
] |
text-generation
| 2025-07-18T22:28:24Z |
---
library_name: mlx
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-4B/blob/main/LICENSE
pipeline_tag: text-generation
base_model: Qwen/Qwen3-4B
tags:
- mlx
---
# NexVeridian/Qwen3-4B-6bit
This model [NexVeridian/Qwen3-4B-6bit](https://huggingface.co/NexVeridian/Qwen3-4B-6bit) was
converted to MLX format from [Qwen/Qwen3-4B](https://huggingface.co/Qwen/Qwen3-4B)
using mlx-lm version **0.26.3**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("NexVeridian/Qwen3-4B-6bit")
prompt = "hello"
if tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
TheTharindu/gemma-3-finetune-c-vulnerabilty-explain
|
TheTharindu
| 2025-08-07T04:49:01Z | 7 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma3_text",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:unsloth/gemma-3-1b-it",
"base_model:finetune:unsloth/gemma-3-1b-it",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-06T18:34:16Z |
---
base_model: unsloth/gemma-3-1b-it
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3_text
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** TheTharindu
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-3-1b-it
This gemma3_text model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
NexVeridian/Qwen3-4B-4bit
|
NexVeridian
| 2025-08-07T04:49:00Z | 5 | 0 |
mlx
|
[
"mlx",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"base_model:Qwen/Qwen3-4B",
"base_model:quantized:Qwen/Qwen3-4B",
"license:apache-2.0",
"4-bit",
"region:us"
] |
text-generation
| 2025-07-18T22:12:34Z |
---
library_name: mlx
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-4B/blob/main/LICENSE
pipeline_tag: text-generation
base_model: Qwen/Qwen3-4B
tags:
- mlx
---
# NexVeridian/Qwen3-4B-4bit
This model [NexVeridian/Qwen3-4B-4bit](https://huggingface.co/NexVeridian/Qwen3-4B-4bit) was
converted to MLX format from [Qwen/Qwen3-4B](https://huggingface.co/Qwen/Qwen3-4B)
using mlx-lm version **0.26.3**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("NexVeridian/Qwen3-4B-4bit")
prompt = "hello"
if tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
NexVeridian/Qwen3-4B-3bit
|
NexVeridian
| 2025-08-07T04:48:46Z | 6 | 0 |
mlx
|
[
"mlx",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"base_model:Qwen/Qwen3-4B",
"base_model:quantized:Qwen/Qwen3-4B",
"license:apache-2.0",
"3-bit",
"region:us"
] |
text-generation
| 2025-07-18T22:06:36Z |
---
library_name: mlx
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-4B/blob/main/LICENSE
pipeline_tag: text-generation
base_model: Qwen/Qwen3-4B
tags:
- mlx
---
# NexVeridian/Qwen3-4B-3bit
This model [NexVeridian/Qwen3-4B-3bit](https://huggingface.co/NexVeridian/Qwen3-4B-3bit) was
converted to MLX format from [Qwen/Qwen3-4B](https://huggingface.co/Qwen/Qwen3-4B)
using mlx-lm version **0.26.3**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("NexVeridian/Qwen3-4B-3bit")
prompt = "hello"
if tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
FriendliAI/ChemVLM-26B-1-2
|
FriendliAI
| 2025-08-07T04:47:27Z | 10 | 0 |
transformers
|
[
"transformers",
"safetensors",
"internvl_chat",
"feature-extraction",
"image-text-to-text",
"conversational",
"custom_code",
"en",
"zh",
"dataset:liupf/ChEBI-20-MM",
"dataset:BAAI/CMMU",
"dataset:derek-thomas/ScienceQA",
"license:apache-2.0",
"region:us"
] |
image-text-to-text
| 2025-08-07T04:47:00Z |
---
datasets:
- liupf/ChEBI-20-MM
- BAAI/CMMU
- derek-thomas/ScienceQA
language:
- en
- zh
pipeline_tag: image-text-to-text
library_name: transformers
license: apache-2.0
---
<!-- header start -->
<p align="center">
<img src="https://huggingface.co/datasets/FriendliAI/documentation-images/resolve/main/model-card-assets/friendliai.png" width="100%" alt="FriendliAI Logo">
</p>
<!-- header end -->
# AI4Chem/ChemVLM-26B-1-2
* Model creator: [AI4Chem](https://huggingface.co/AI4Chem)
* Original model: [ChemVLM-26B-1-2](https://huggingface.co/AI4Chem/ChemVLM-26B-1-2)
## Differences
* Added missing eos_token (`<|im_end|>`) to config.json
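As a quick sanity check of this fix, one could download the repo's `config.json` and inspect the field directly. This is an illustrative sketch only, not part of the original card; it assumes the field name matches the description above:

```python
import json
from huggingface_hub import hf_hub_download

# Illustrative check of the fix described above: the config.json in this
# repo should now carry the eos_token that was missing upstream.
path = hf_hub_download("FriendliAI/ChemVLM-26B-1-2", "config.json")
with open(path) as f:
    config = json.load(f)
print(config.get("eos_token"))  # expected: "<|im_end|>"
```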
## License
Refer to the license of the original model card.
|
AlphaMindQ/GPTx-v6-60gb
|
AlphaMindQ
| 2025-08-07T04:42:04Z | 2 | 0 | null |
[
"safetensors",
"gpt2",
"text-generation",
"en",
"dataset:AlphaMindQ/RawPile-85gb",
"license:mit",
"region:us"
] |
text-generation
| 2025-08-06T09:51:31Z |
---
tags:
- text-generation
widget:
- text: "Once upon a time"
license: mit
datasets:
- AlphaMindQ/RawPile-85gb
language:
- en
pipeline_tag: text-generation
---
# GPTx-v6-60gb
## Trained from Scratch by AlphaMind Labs
## Specifications:
- Dataset: 60 GB Uncopyrighted RawPile
- Training: 170,000 training steps (1 epoch)
- Model: GPT-2 Small, ~124M parameters (~500 MB)
# Try the live demo! 🚀
[](https://huggingface.co/spaces/AlphaMindQ/GPTx-v6-60gb-demo)
Trained on [Modal Cloud](https://modal.com/).
Sign up today for $30 in free credits from Modal! (as of August 1, 2025)
The model is still in development and does not yet produce highly coherent text.
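The card doesn't include usage code; here is a minimal, hypothetical sketch of loading the model with the 🤗 Transformers pipeline. The prompt matches the widget example above; the sampling settings are illustrative assumptions, not the authors' recommendations:

```python
from transformers import pipeline

# Hypothetical usage sketch; sampling settings are illustrative assumptions.
generator = pipeline("text-generation", model="AlphaMindQ/GPTx-v6-60gb")
result = generator("Once upon a time", max_new_tokens=80, do_sample=True, temperature=0.8)
print(result[0]["generated_text"])
```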
|
mybalancenow7/mybalancenow7
|
mybalancenow7
| 2025-08-07T04:40:05Z | 0 | 0 | null |
[
"license:bigcode-openrail-m",
"region:us"
] | null | 2025-08-07T04:40:05Z |
---
license: bigcode-openrail-m
---
|
bijeeshtv/flan-discharge-summary-generator
|
bijeeshtv
| 2025-08-07T04:37:34Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-07T04:37:32Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mackenzai/mirror-deephermes-3-llama-3-8b-preview-uncensored-delmat
|
mackenzai
| 2025-08-07T04:36:58Z | 0 | 0 | null |
[
"safetensors",
"llama",
"license:mit",
"region:us"
] | null | 2025-08-07T04:34:12Z |
---
license: mit
---
This model is decensored using a technique I developed called **DeLMAT**: Decensoring Language Models through Activation Tuning. It's similar to the ablation / "abliteration" scripts that are out there, but works by training a LoRA adapter and calculating a loss based on the distance from the mean refusal activation and the distance to the mean acceptance activation.
The training script is released under the MIT license:
https://github.com/nkpz/DeLMAT
Rather than simply attempting to cancel out the refusal direction, DeLMAT guides the model toward acceptance. In other words, instead of merely forgetting how to refuse requests, the model learns to **emphatically accept** them.
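For intuition only, here is a minimal PyTorch sketch of what a distance-based objective of this kind could look like. This is not the actual DeLMAT implementation (see the linked repository for that); the names, shapes, and exact formulation here are assumptions:

```python
import torch

def delmat_style_loss(hidden, mean_refusal, mean_accept):
    # Distance of each activation from the two reference means.
    d_refusal = torch.norm(hidden - mean_refusal, dim=-1)
    d_accept = torch.norm(hidden - mean_accept, dim=-1)
    # Minimizing this pulls activations toward the acceptance mean
    # while pushing them away from the refusal mean.
    return (d_accept - d_refusal).mean()

# Toy example: batch of 4 activations, hidden size 16.
hidden = torch.randn(4, 16, requires_grad=True)
loss = delmat_style_loss(hidden, torch.randn(16), torch.randn(16))
loss.backward()  # in real training, gradients flow into the LoRA adapter
```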
|
ryzax/8B-v2
|
ryzax
| 2025-08-07T04:36:45Z | 26 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"generated_from_trainer",
"open-r1",
"trl",
"grpo",
"conversational",
"dataset:ryzax/train_v6_filtered_math",
"arxiv:2402.03300",
"base_model:Qwen/Qwen3-8B",
"base_model:finetune:Qwen/Qwen3-8B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-07-30T23:20:09Z |
---
base_model: Qwen/Qwen3-8B
datasets: ryzax/train_v6_filtered_math
library_name: transformers
model_name: 8B-v2
tags:
- generated_from_trainer
- open-r1
- trl
- grpo
licence: license
---
# Model Card for 8B-v2
This model is a fine-tuned version of [Qwen/Qwen3-8B](https://huggingface.co/Qwen/Qwen3-8B) on the [ryzax/train_v6_filtered_math](https://huggingface.co/datasets/ryzax/train_v6_filtered_math) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="ryzax/8B-v2", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/muennighoff/s2/runs/09lwk4bl)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.17.0.dev0
- Transformers: 4.51.3
- Pytorch: 2.6.0
- Datasets: 3.4.1
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
madmage/dqn-SpaceInvadersNoFrameskip-v4-2
|
madmage
| 2025-08-07T04:36:14Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-08-07T04:32:28Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 300.50 +/- 152.50
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib<br/>
SBX (SB3 + Jax): https://github.com/araffin/sbx
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga madmage -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga madmage -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga madmage
```
## Hyperparameters
```python
OrderedDict([('batch_size', 128),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 20000),
('n_timesteps', 200000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
strectelite/MyBERT
|
strectelite
| 2025-08-07T04:34:50Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-07T04:15:23Z |
---
library_name: transformers
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: MyBERT
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MyBERT
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4607
- Accuracy: 0.8597
- F1: 0.8596
- Precision: 0.8602
- Recall: 0.8597
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
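For reference, the settings listed above roughly correspond to the following `TrainingArguments`. This is a reconstruction for illustration, not the original training script:

```python
from transformers import TrainingArguments

# Reconstruction of the hyperparameters listed above; not the original script.
args = TrainingArguments(
    output_dir="MyBERT",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3,
    fp16=True,  # "Native AMP" mixed-precision training
)
```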
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.5404 | 1.0 | 2573 | 0.5438 | 0.8028 | 0.8032 | 0.8142 | 0.8028 |
| 0.322 | 2.0 | 5146 | 0.4764 | 0.8391 | 0.8391 | 0.8440 | 0.8391 |
| 0.2292 | 3.0 | 7719 | 0.4607 | 0.8597 | 0.8596 | 0.8602 | 0.8597 |
### Framework versions
- Transformers 4.54.1
- Pytorch 2.6.0+cu124
- Datasets 4.0.0
- Tokenizers 0.21.4
|
Mayank-22/Mayank-AI
|
Mayank-22
| 2025-08-07T04:33:12Z | 66 | 1 |
transformers
|
[
"transformers",
"safetensors",
"gpt2",
"text-generation",
"medicine",
"india",
"pharmaceutical",
"question-answering",
"en",
"base_model:openai-community/gpt2",
"base_model:finetune:openai-community/gpt2",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2025-07-16T08:45:49Z |
---
language: en
license: apache-2.0
library_name: transformers
tags:
- gpt2
- text-generation
- medicine
- india
- pharmaceutical
- question-answering
base_model: gpt2
---
# Mayank-AI: Medical AI Assistant Model
[](https://huggingface.co/Mayank-22/Mayank-AI)
[](https://opensource.org/licenses/Apache-2.0)
[](https://huggingface.co/Mayank-22/Mayank-AI)
## 📋 Model Overview
Mayank-AI is a specialized artificial intelligence model designed for Indian pharmaceutical and medical applications, trained on comprehensive Indian medicines datasets. This model leverages supervised learning techniques built on the GPT-2 transformer architecture to provide accurate and contextually relevant information about Indian medicines, their compounds, uses, and related medical information.
## 🔍 Model Details
### Model Description
- **Developed by:** Mayank Malviya
- **Model Type:** GPT-2 based Transformer for Indian Medical/Pharmaceutical Applications
- **Language(s):** English (with Indian medical terminology and drug names)
- **License:** Apache-2.0
- **Domain:** Indian Pharmaceuticals & Medicine Information
- **Primary Use:** Indian medicine information, drug compound analysis, symptom mapping, prescription guidance
### Key Features
- ✅ Indian medicines database knowledge
- ✅ Drug compound information and analysis
- ✅ Symptom-to-medicine mapping
- ✅ Prescription guidance and recommendations
- ✅ Disease diagnosis assistance
- ✅ Indian pharmaceutical market insights
- ✅ Medicine availability and alternatives
## 🚀 Quick Start
### Installation
```bash
pip install transformers torch
```
### Basic Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
# Load model and tokenizer
model_name = "Mayank-22/Mayank-AI"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
# Example queries for Indian medicines
query1 = "What is the composition of Crocin tablet?"
query2 = "Which medicine is used for fever and headache?"
query3 = "What are the side effects of Paracetamol?"
query4 = "Medicines available for diabetes in India"
# Process query
inputs = tokenizer.encode(query1, return_tensors="pt")
# Generate response
with torch.no_grad():
outputs = model.generate(
inputs,
max_length=512,
num_return_sequences=1,
temperature=0.7,
do_sample=True,
pad_token_id=tokenizer.eos_token_id
)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```
### Advanced Usage
```python
# For more controlled generation about Indian medicines
def generate_medicine_response(question, max_length=256):
prompt = f"Indian Medicine Query: {question}\nResponse:"
inputs = tokenizer.encode(prompt, return_tensors="pt")
outputs = model.generate(
inputs,
max_length=max_length,
num_return_sequences=1,
temperature=0.6,
do_sample=True,
top_p=0.9,
repetition_penalty=1.1
)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
return response.split("Response:")[-1].strip()
# Example usage
question = "What are the uses of Azithromycin tablets available in India?"
answer = generate_medicine_response(question)
print(answer)
```
## 📊 Performance & Capabilities
### Supported Medical Areas
- **Indian Pharmaceuticals:** Comprehensive database of medicines available in India
- **Drug Compounds:** Active ingredients, chemical compositions, formulations
- **Symptom Analysis:** Symptom-to-medicine mapping and recommendations
- **Disease Information:** Common diseases and their standard treatments in India
- **Prescription Guidance:** Dosage, administration, and usage instructions
- **Drug Interactions:** Side effects and contraindications
- **Medicine Alternatives:** Generic and branded medicine alternatives
### Performance Metrics
- **Training Data:** Indian medicines dataset with comprehensive drug information
- **Specialization:** Focused on Indian pharmaceutical market and medicine availability
- **Coverage:** Extensive database of Indian medicines, their compounds, and uses
- **Accuracy:** High precision in Indian medicine information and drug compound details
## ⚠️ Important Medical Disclaimer
> **CRITICAL NOTICE:** This model is for informational and educational purposes only. It should NOT be used as a substitute for professional medical advice, diagnosis, or treatment. Always consult with qualified healthcare providers for medical concerns.
### Limitations & Risks
- **Not a replacement for medical professionals**
- **May contain inaccuracies or outdated information**
- **Should not be used for emergency medical situations**
- **Requires human oversight for clinical applications**
- **May have biases present in training data**
## 🎯 Intended Use Cases
### ✅ Appropriate Uses
- Indian pharmaceutical research and education
- Medicine information lookup and comparison
- Drug compound analysis and research
- Symptom-to-medicine mapping assistance
- Prescription guidance and dosage information
- Medicine availability and alternatives research
- Healthcare app development and integration
### ❌ Inappropriate Uses
- Direct patient diagnosis
- Emergency medical decisions
- Prescription or treatment recommendations without medical supervision
- Replacement for clinical judgment
- Use without proper medical context
## 🔧 Technical Specifications
### Model Architecture
- **Base Architecture:** GPT-2 Transformer model
- **Fine-tuning:** Supervised learning on Indian medicines dataset
- **Context Length:** Standard GPT-2 context window
- **Training Approach:** Domain-specific fine-tuning on pharmaceutical data
### Training Details
- **Training Data:** Indian medicines dataset including:
- Medicine names and brand information
- Drug compounds and chemical compositions
- Symptom-medicine mappings
- Prescription guidelines and dosages
- Disease-treatment associations
- Side effects and contraindications
- **Training Regime:** Supervised fine-tuning on GPT-2 with pharmaceutical domain adaptation
- **Optimization:** Adam optimizer with learning rate scheduling
- **Data Focus:** Indian pharmaceutical market and medicine availability
## 📚 Datasets & Training
### Training Data Sources
- Comprehensive Indian medicines database
- Drug compound and chemical composition data
- Symptom-medicine relationship mappings
- Prescription guidelines and dosage information
- Disease-treatment associations
- Medicine availability and market data
### Data Preprocessing
- Medicine name normalization and standardization
- Drug compound data structure optimization
- Symptom-medicine relationship mapping
- Quality filtering and validation of pharmaceutical data
- Indian market-specific data curation
## 🧪 Evaluation & Validation
### Evaluation Metrics
- **Medicine Information Accuracy:** Correctness of drug compound and usage information
- **Symptom Mapping Precision:** Accuracy of symptom-to-medicine recommendations
- **Indian Market Relevance:** Appropriateness for Indian pharmaceutical context
- **Safety Assessment:** Risk evaluation for medicine information provision
### Benchmark Performance
- **Indian Medicine Database:** Comprehensive coverage of medicines available in India
- **Drug Compound Accuracy:** High precision in chemical composition information
- **Symptom-Medicine Mapping:** Effective symptom-to-treatment recommendations
## 🔄 Updates & Maintenance
This model is maintained and updated with:
- Latest Indian medicine information
- New drug approvals and market entries
- Updated compound and formulation data
- Enhanced symptom-medicine mappings
## 📖 Citation
If you use this model in your research, please cite:
```bibtex
@misc{mayank2024indianmedicines,
title={Mayank-AI: Indian Medicines Information Model},
author={Malviya, Mayank},
year={2024},
url={https://huggingface.co/Mayank-22/Mayank-AI},
note={GPT-2 based model for Indian pharmaceutical information}
}
```
## 🤝 Contributing
Contributions to improve the model are welcome! Please:
- Report issues with medicine information accuracy
- Suggest new Indian medicines to include
- Share feedback on drug compound data
- Contribute to symptom-medicine mapping improvements
## 📞 Contact & Support
- **Model Author:** Mayank Malviya
- **Repository:** [Mayank-22/Mayank-AI](https://huggingface.co/Mayank-22/Mayank-AI)
- **Issues:** Please report issues through the Hugging Face repository
## 📄 License
This model is released under the Apache 2.0 license, as indicated in the model metadata and the license badge above.
## 🙏 Acknowledgments
Special thanks to the Indian pharmaceutical community, healthcare professionals, and medical researchers who contributed to the development and validation of this specialized model for Indian medicines.
---
**Remember:** This AI model is a tool to assist, not replace, medical professionals. Always prioritize patient safety and seek professional medical advice for healthcare decisions.
|
stewy33/ptonly_mixed_original_augmented_original_egregious_variable_mathematics-179bf90b
|
stewy33
| 2025-08-07T04:33:10Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference",
"base_model:adapter:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference",
"region:us"
] | null | 2025-08-07T04:30:31Z |
---
base_model: togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1
|
rbelanec/train_hellaswag_1754507490
|
rbelanec
| 2025-08-07T04:31:44Z | 19 | 0 |
peft
|
[
"peft",
"safetensors",
"llama-factory",
"prompt-tuning",
"generated_from_trainer",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct",
"license:llama3",
"region:us"
] | null | 2025-08-06T19:18:06Z |
---
library_name: peft
license: llama3
base_model: meta-llama/Meta-Llama-3-8B-Instruct
tags:
- llama-factory
- prompt-tuning
- generated_from_trainer
model-index:
- name: train_hellaswag_1754507490
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# train_hellaswag_1754507490
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the hellaswag dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1269
- Num Input Tokens Seen: 108930064
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 123
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Input Tokens Seen |
|:-------------:|:------:|:-----:|:---------------:|:-----------------:|
| 0.1504 | 0.5001 | 4490 | 0.2625 | 5450816 |
| 0.2005 | 1.0001 | 8980 | 0.1893 | 10899840 |
| 0.0142 | 1.5002 | 13470 | 0.1743 | 16338976 |
| 0.2198 | 2.0002 | 17960 | 0.1841 | 21789168 |
| 0.0654 | 2.5003 | 22450 | 0.1486 | 27236592 |
| 0.0142 | 3.0003 | 26940 | 0.1269 | 32696128 |
| 0.2299 | 3.5004 | 31430 | 0.1411 | 38137920 |
| 0.002 | 4.0004 | 35920 | 0.1463 | 43579472 |
| 0.2379 | 4.5005 | 40410 | 0.1384 | 49022960 |
| 0.2197 | 5.0006 | 44900 | 0.1399 | 54468496 |
| 0.1109 | 5.5006 | 49390 | 0.1304 | 59917136 |
| 0.1593 | 6.0007 | 53880 | 0.1340 | 65358976 |
| 0.0458 | 6.5007 | 58370 | 0.1402 | 70806016 |
| 0.1249 | 7.0008 | 62860 | 0.1337 | 76259312 |
| 0.0229 | 7.5008 | 67350 | 0.1371 | 81705616 |
| 0.0872 | 8.0009 | 71840 | 0.1364 | 87153488 |
| 0.0598 | 8.5009 | 76330 | 0.1364 | 92602480 |
| 0.0016 | 9.0010 | 80820 | 0.1359 | 98051504 |
| 0.1272 | 9.5011 | 85310 | 0.1360 | 103491728 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.8.0+cu128
- Datasets 3.6.0
- Tokenizers 0.21.1
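A minimal sketch of loading this prompt-tuning adapter with PEFT follows. Access to the gated base model is assumed, and device/dtype settings are left at their defaults for brevity:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Meta-Llama-3-8B-Instruct"  # gated; access is assumed
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id)
# Attach the trained prompt-tuning adapter to the frozen base model.
model = PeftModel.from_pretrained(base, "rbelanec/train_hellaswag_1754507490")
```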
|
meandyou200175/intent_1tg_fix
|
meandyou200175
| 2025-08-07T04:25:22Z | 0 | 0 |
transformers
|
[
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-07T03:49:44Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
misalsathsara/phi1.5-js-codegen
|
misalsathsara
| 2025-08-07T04:25:18Z | 51 | 0 | null |
[
"safetensors",
"phi",
"text-generation",
"region:us"
] |
text-generation
| 2025-07-26T05:20:45Z |
---
tags:
- text-generation
pipeline_tag: text-generation
---
# Phi-1.5 JavaScript CodeGen
This is a fine-tuned version of Phi-1.5 that generates simple JavaScript transformation functions based on natural language instructions.
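As a quick start, here is a minimal sketch using the `transformers` pipeline (the "Instruction:/Function:" prompt layout below is an assumption, not the documented training format):

```python
# Sketch: prompt the model for a small JavaScript transformation function.
# The "Instruction:/Function:" format is an assumed prompt layout.
from transformers import pipeline

generator = pipeline("text-generation", model="misalsathsara/phi1.5-js-codegen")

prompt = "Instruction: Write a JavaScript function that doubles every number in an array.\nFunction:"
result = generator(prompt, max_new_tokens=128, do_sample=False)
print(result[0]["generated_text"])
```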
|
DavidAU/Openai_gpt-oss-20b-CODER-NEO-CODE-DI-MATRIX-GGUF
|
DavidAU
| 2025-08-07T04:23:54Z | 1,285 | 2 | null |
[
"gguf",
"gpt_oss",
"gpt-oss",
"openai",
"mxfp4",
"programming",
"code generation",
"code",
"coding",
"coder",
"chat",
"reasoning",
"thinking",
"r1",
"cot",
"deepseek",
"128k context",
"general usage",
"problem solving",
"brainstorming",
"solve riddles",
"Neo",
"MOE",
"Mixture of Experts",
"24 experts",
"NEO Imatrix",
"Code Imatrix",
"DI-Matrix",
"Dual Imatrix",
"Imatrix",
"text-generation",
"en",
"base_model:openai/gpt-oss-20b",
"base_model:quantized:openai/gpt-oss-20b",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] |
text-generation
| 2025-08-06T05:23:38Z |
---
license: apache-2.0
base_model:
- openai/gpt-oss-20b
language:
- en
pipeline_tag: text-generation
tags:
- gpt_oss
- gpt-oss
- openai
- mxfp4
- programming
- code generation
- code
- coding
- coder
- chat
- reasoning
- thinking
- r1
- cot
- deepseek
- 128k context
- general usage
- problem solving
- brainstorming
- solve riddles
- Neo
- MOE
- Mixture of Experts
- 24 experts
- NEO Imatrix
- Code Imatrix
- DI-Matrix
- Dual Imatrix
- Imatrix
---
<small><font color="red">Specialized quants (including "MXFP4_MOE.gguf") for new OpenAI 20B MOE - Mixture of Experts Model at 80+ t/s.</font></small>
<h2>Openai_gpt-oss-20b-CODER-NEO-CODE-DI-MATRIX-GGUF</h2>
<img src="matrix2.webp" style="float:right; width:300px; height:300px; padding:10px;">
These are DUAL NEO Imatrix / CODE Imatrix GGUFs: the NEO dataset by DavidAU combined with a "Code" dataset compiled by DavidAU.
NEO dataset improves overall performance.
CODER dataset is specifically for coding performance.
Example generation provided at bottom of this page.
DUEL ("DI")-> Separate Imatrix datasets ("NEO" and "CODER" - generated separately per model) are co-joined to create a new Imatrix dataset, which is then applied to the quants.
The model also passed a "hard" coding test (4 experts) with no issues (IQ4_NL).
TEST: Forcing the model to create code with no dependencies, limits on coding shortcuts, multiple loops, and real-time, non-blocking operation in a language that does not normally support it.
(example prompt, and generation below)
Due to quanting issues with this model (which result in oddball quant sizes / mixtures), only TESTED quants will be uploaded (at the moment).
Currently that means IQ4_NL and <B>MXFP4_MOE</B> (a new OpenAI-specific quant type) are available.
NEO/Coder datasets performance improvements will show the most in the IQ4_NL quants.
There are TWO "IQ4_NL" quants:
- OpenAI-20B-NEO-CODE-DIMAT-IQ4_NL.gguf : DI Imatrix applied.
- OpenAI-20B-NEO-CODE-DIMAT-2-IQ4_NL.gguf : DI Imatrix applied, including output tensor (also imatrixed), and embed tensor at IQ4_NL.
There are THREE NEO MXFP4_MOE quants:
- OpenAI-20B-NEO-CODE-DIMAT-MXFP4_MOE2.gguf : Output tensor Q5_1 (DI Imatrix applied)
- OpenAI-20B-NEO-CODE-DIMAT-MXFP4_MOE3.gguf : Output tensor IQ4_NL (DI Imatrix applied)
- OpenAI-20B-NEO-CODE-DIMAT-MXFP4_MOE4.gguf : Output tensor IQ4_NL (DI Imatrix applied) AND Embed at IQ4_NL - this makes this quant the smallest version.
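For reference, a minimal sketch of fetching one of the files above with `huggingface_hub` (the filename shown is the IQ4_NL variant; substitute whichever quant you want from the lists):

```python
# Sketch: download a single quant file from this repo with huggingface_hub.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="DavidAU/Openai_gpt-oss-20b-CODER-NEO-CODE-DI-MATRIX-GGUF",
    filename="OpenAI-20B-NEO-CODE-DIMAT-IQ4_NL.gguf",
)
print(path)  # local cache path of the downloaded GGUF
```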
MXFP4_MOE quants currently vastly outperform all other quants except IQ4_NL, Q5_1 and Q8_0; the odd "tensor" dimensions of OpenAI's 20B model make it difficult to compress into the other quant types.
( IQ4_NL, Q5_1 and Q8_0 quants are compatible with OpenAI's tensor structure as of this writing. )
The MXFP4_MOE quant type was specifically designed for (what appears to be) lossless quantization relative to the source model. The jury is still out on this.
Suggested Settings (tested in LM Studio, Beta Branch 0.3.21; 4):
- Context: 8k min.
- Temp 1.2+ for creative. Temp 0.6 (or so) for coding/general.
- Rep pen 1.1, topk 40, topp .95, min p 0.05
- Experts 4-8 depending on use case. (higher than 8 MAY lower quality)
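A minimal `llama-cpp-python` sketch of the coding/general settings above (the GGUF filename is illustrative; the number of experts is set as described in the help docs linked below):

```python
# Sketch: apply the suggested coding/general sampler settings with llama-cpp-python.
# The model_path is illustrative - point it at the quant you downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="OpenAI-20B-NEO-CODE-DIMAT-IQ4_NL.gguf",
    n_ctx=8192,  # 8k minimum context, per the suggestions above
)

out = llm(
    "Write a short Python function that reverses a string.",
    max_tokens=256,
    temperature=0.6,     # coding/general
    top_k=40,
    top_p=0.95,
    min_p=0.05,
    repeat_penalty=1.1,
)
print(out["choices"][0]["text"])
```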
Model Supports:
- 128k context
- up to 24 experts
- Tools use, browsing, etc
For my help docs, including SETTING THE NUMBER OF EXPERTS and more, see below.
See more about this model here:
https://huggingface.co/openai/gpt-oss-20b
Please refer to their model card, especially to control "thinking" levels.
---
<H2>Help, Adjustments, Samplers, Parameters and More</H2>
---
<B>CHANGE THE NUMBER OF ACTIVE EXPERTS:</B>
See this document:
https://huggingface.co/DavidAU/How-To-Set-and-Manage-MOE-Mix-of-Experts-Model-Activation-of-Experts
<B>Settings: CHAT / ROLEPLAY and/or SMOOTHER operation of this model:</B>
In "KoboldCpp" or "oobabooga/text-generation-webui" or "Silly Tavern" ;
Set the "Smoothing_factor" to 1.5
: in KoboldCpp -> Settings->Samplers->Advanced-> "Smooth_F"
: in text-generation-webui -> parameters -> lower right.
: In Silly Tavern this is called: "Smoothing"
NOTE: For "text-generation-webui"
-> if using GGUFs you need to use "llama_HF" (which involves downloading some config files from the SOURCE version of this model)
Source versions (and config files) of my models are here:
https://huggingface.co/collections/DavidAU/d-au-source-files-for-gguf-exl2-awq-gptq-hqq-etc-etc-66b55cb8ba25f914cbf210be
OTHER OPTIONS:
- Increase rep pen to 1.1 to 1.15 (you don't need to do this if you use "smoothing_factor")
- If the interface/program you are using to run AI MODELS supports "Quadratic Sampling" ("smoothing") just make the adjustment as noted.
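If you drive KoboldCpp over its HTTP API instead of the GUI, a hedged sketch of passing the smoothing factor in the generate payload follows (the "smoothing_factor" field is an assumption; verify it against your KoboldCpp version's API docs):

```python
# Sketch: request a completion from a local KoboldCpp server with smoothing.
# The "smoothing_factor" field is an assumption - verify it against your build.
import json
import urllib.request

payload = {
    "prompt": "Write a haiku about rain.",
    "max_length": 120,
    "rep_pen": 1.1,
    "smoothing_factor": 1.5,
}
req = urllib.request.Request(
    "http://localhost:5001/api/v1/generate",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.load(resp)["results"][0]["text"])
```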
<B>Highest Quality Settings / Optimal Operation Guide / Parameters and Samplers</B>
This a "Class 1" model:
For all settings used for this model (including specifics for its "class"), including example generation(s) and for advanced settings guide (which many times addresses any model issue(s)), including methods to improve model performance for all use case(s) as well as chat, roleplay and other use case(s) please see:
[ https://huggingface.co/DavidAU/Maximizing-Model-Performance-All-Quants-Types-And-Full-Precision-by-Samplers_Parameters ]
---
<h2>Example Generation</h2>
Using default settings (coder), 4 experts and no system prompt.
---
Prompt:
<b>
Write a JavaScript for fetch api using 4 ais, all using streaming that does NOT use "promise.all", and processes all streams separately but in real time - that means all streams need to be processed at the same time and not interrupt each other.
Also include sending a payload to each of the 4 ais with content, system prompt, temp, topk, topp, minp, repeat penalty ; all of these parameters will be in an array for each of the four ais.
Record the entire response - chunks - from each AI in an array - text.
Also, send the chunks in real time to 4 separate text boxes on an HTML page so real time readout from each AI can be seen by the user.
IMPORTANT: All streams must be processed at the same time, not in sequence. IMPORTANT: Do not use any dependencies, nor import any module(s).
</b>
OUTPUT:
(note: Pretty print, color and some formatting lost on copy/paste)
<p style="border: none; padding: 0cm">
<span style="display: inline-block; border: none; padding: 0cm"><font color="#ff0000"><b>Thought
for 11.81 seconds</b></font></span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">We
need to write JavaScript code that uses fetch API with streaming
responses (likely using ReadableStream from
Response.body.getReader()). We have 4 AI endpoints; each will be
called with a payload containing content, system prompt, temp, topk,
topp, minp, repeat penalty. These parameters are in an array for each
of the four AIs.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">We
need to process all streams concurrently, not sequentially, and not
using Promise.all. So we can start fetch calls, get readers, then
read from each reader in separate async functions that run
concurrently (like starting them but not awaiting). We need to send
chunks in real time to 4 separate text boxes on an HTML page.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">We
cannot use dependencies or import modules; just plain JS.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">We
need to record entire response chunks per AI into arrays. So we
maintain array of strings for each AI, push each chunk as it arrives.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Also
we need to send the payload to each AI with content, system prompt,
temp, topk, topp, minp, repeat penalty. These parameters will be in
an array for each of the four AIs. So maybe we have a config array
like:</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">const
aiConfigs = [ {url: '', params: {...}}, ... ];</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">But
they said "parameters will be in an array for each of the four
ais." So we can define arrays.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">We
need to send payload as JSON body. Use fetch with method POST,
headers: Content-Type: application/json, body:
JSON.stringify(payload).</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Then
handle streaming: Response.body is a ReadableStream; getReader();
read loop reading chunks (Uint8Array). Convert to string using
TextDecoder. Append to array and update UI.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">We
need to run all four concurrently but not use Promise.all. We can
just start them sequentially but not await each before starting next.
For example:</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">function
startAI(index) { ... } Then call for i=0..3: startAI(i); That will
kick off fetch calls; inside we handle reading and UI updates.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">We
need to ensure they run concurrently, so no blocking.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Also
record entire response chunks in an array per AI.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">We
also need to send the chunks in real time to 4 separate text boxes on
an HTML page. So we need to have <textarea id="ai1">
etc. Then update innerText or value as new chunk arrives.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">We
must not use dependencies, so no external libs.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Also
we need to handle errors.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Let's
produce code snippet with explanation.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">We'll
create a function fetchAndStream(aiIndex) that returns nothing but
handles reading and UI updates.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">We
also need to define arrays for each AI's parameters. For example:</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">const
aiParams = [ {url: '', content: 'Hello', systemPrompt: 'You are ...',
temp: 0.7, topk: 50, topp: 0.95, minp: 0.01, repeatPenalty: 2}, ...
];</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">But
they said "parameters will be in an array for each of the four
ais." So we can define arrays like:</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">const
aiConfigs = [ {url: '', payload: {...}}, ... ];</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">We
need to send content, system prompt, temp, topk, topp, minp, repeat
penalty. So we can create payload object accordingly.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Also
we need to record entire response chunks in an array per AI. We'll
have const responses = [[], [], [], []]; Then push each chunk string
into responses[aiIndex].</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">We
also need to update UI: document.getElementById('output'+(i+1)).value
+= chunk; or innerText.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Let's
produce final code with comments and explanation.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Also
we need to ensure no Promise.all. So just call startAI for each
index.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Also
we need to handle fetch errors gracefully.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Also
we need to use async/await inside each function but not block others.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">We
can do:</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">async
function streamFromAi(index) { ... }</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Then
call streamFromAi(0); streamFromAi(1); etc. Since they are async
functions, calling them without await will start them concurrently.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Alternatively
we can wrap in IIFE: (async () => {...})(); But simpler: just call
them.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">We
need to ensure that the fetch calls happen concurrently; but if we
use await inside each function, it won't block others because each
function runs independently. So it's fine.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Also
we need to record entire response chunks from each AI in an array -
we can push chunk string into responses[index].</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Now
produce final code snippet with HTML skeleton and JS script.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">We
also need to mention that the fetch API must support streaming; some
browsers may not support reading body as stream? But modern ones do.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">We'll
provide fallback for older browsers maybe but not necessary.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Also
we need to ensure no dependencies, so just plain JS.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Let's
craft final answer: Provide explanation then code snippet.</span></p>
<p style="border: none; padding: 0cm"><br/>
<br/>
</p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><font color="#ff0000"><b>OUTPUT:</b></font></span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Below
is a <strong>stand‑alone</strong> example that fulfils every
requirement you listed:</span></p>
<ul>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">4
independent AI calls</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">All
four streams are read <em>concurrently</em> (no </span><span style="display: inline-block; border: 1px solid #3b3b3b; padding: 0.05cm"><code class="western">Promise.all</code></span><span style="display: inline-block; border: none; padding: 0cm"> or
any other blocking construct)</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Each
call receives its own payload
(</span><span style="display: inline-block; border: 1px solid #3b3b3b; padding: 0.05cm"><code class="western">content</code></span><span style="display: inline-block; border: none; padding: 0cm">, </span><span style="display: inline-block; border: 1px solid #3b3b3b; padding: 0.05cm"><code class="western">systemPrompt</code></span><span style="display: inline-block; border: none; padding: 0cm">, </span><span style="display: inline-block; border: 1px solid #3b3b3b; padding: 0.05cm"><code class="western">temp</code></span><span style="display: inline-block; border: none; padding: 0cm">, </span><span style="display: inline-block; border: 1px solid #3b3b3b; padding: 0.05cm"><code class="western">topk</code></span><span style="display: inline-block; border: none; padding: 0cm">, </span><span style="display: inline-block; border: 1px solid #3b3b3b; padding: 0.05cm"><code class="western">topp</code></span><span style="display: inline-block; border: none; padding: 0cm">, </span><span style="display: inline-block; border: 1px solid #3b3b3b; padding: 0.05cm"><code class="western">minp</code></span><span style="display: inline-block; border: none; padding: 0cm">, </span><span style="display: inline-block; border: 1px solid #3b3b3b; padding: 0.05cm"><code class="western">repeatPenalty</code></span><span style="display: inline-block; border: none; padding: 0cm">)
– the values are stored in an array of objects.</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Every
chunk that arrives is pushed into a per‑AI array and written
immediately to a dedicated </span><span style="display: inline-block; border: 1px solid #3b3b3b; padding: 0.05cm"><code class="western"><textarea></code></span><span style="display: inline-block; border: none; padding: 0cm"> so
the user can see the output live.</span></p></li>
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">No
external libraries or imports – pure vanilla JavaScript.</span></p></li>
</ul>
<hr/>
<h2 class="western" style="border-top: none; border-bottom: 1px solid #3b3b3b; border-left: none; border-right: none; padding-top: 0cm; padding-bottom: 0.05cm; padding-left: 0cm; padding-right: 0cm">
<span style="display: inline-block; border: none; padding: 0cm">1️⃣
HTML skeleton</span></h2>
<pre class="western" style="border: 1px solid #3b3b3b; padding: 0.05cm"><span style="display: inline-block; border: none; padding: 0cm"><code class="western"><!DOCTYPE html></code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"><html lang="en"></code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"><head></code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western"><meta charset="utf-8"></code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western"><title>Concurrent AI Streaming Demo</title></code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western"><style></code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">textarea { width:100%; height:150px; margin-bottom:10px;}</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western"></style></code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"></head></code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"><body></code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western"><!-- Four output boxes --></code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western"><textarea id="output0" placeholder="AI #1 …"></textarea></code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western"><textarea id="output1" placeholder="AI #2 …"></textarea></code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western"><textarea id="output2" placeholder="AI #3 …"></textarea></code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western"><textarea id="output3" placeholder="AI #4 …"></textarea></code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western"><!-- Start button --></code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western"><button id="startBtn">Start All 4 AIs</button></code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western"><script src="ai-stream.js"></script> <!-- the script below --></code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"></body></code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"></html></code></span></pre>
<hr/>
<h2 class="western" style="border-top: none; border-bottom: 1px solid #3b3b3b; border-left: none; border-right: none; padding-top: 0cm; padding-bottom: 0.05cm; padding-left: 0cm; padding-right: 0cm">
<span style="display: inline-block; border: none; padding: 0cm">2️⃣
JavaScript (</span><span style="display: inline-block; border: 1px solid #3b3b3b; padding: 0.05cm"><code class="western">ai-stream.js</code></span><span style="display: inline-block; border: none; padding: 0cm">)</span></h2>
<pre class="western" style="border: 1px solid #3b3b3b; padding: 0.05cm"><span style="display: inline-block; border: none; padding: 0cm"><code class="western">/* -------------------------------------------------------------</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">* 1. Configuration – one object per AI</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">* ------------------------------------------------------------- */</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western">const aiConfigs = [</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">{</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">// URL of the first AI endpoint</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">url: 'https://api.example.com/v1/chat',</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">payload: {</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">content: "Hello, what can you do?",</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">systemPrompt: "You are a helpful assistant.",</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">temp: 0.7,</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">topk: 50,</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">topp: 0.95,</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">minp: 0.01,</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">repeatPenalty: 2</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">}</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">},</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">{</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">url: 'https://api.example.com/v1/chat',</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">payload: {</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">content: "Tell me a joke.",</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">systemPrompt: "You are a funny chatbot.",</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">temp: 0.8,</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">topk: 40,</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">topp: 0.9,</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">minp: 0.02,</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">repeatPenalty: 1</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">}</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">},</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">{</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">url: 'https://api.example.com/v1/chat',</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">payload: {</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">content: "Explain relativity.",</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">systemPrompt: "You are a physics teacher.",</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">temp: 0.6,</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">topk: 60,</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">topp: 0.97,</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">minp: 0.01,</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">repeatPenalty: 3</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">}</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">},</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">{</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">url: 'https://api.example.com/v1/chat',</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">payload: {</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">content: "Give me a recipe for pancakes.",</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">systemPrompt: "You are a cooking expert.",</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">temp: 0.5,</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">topk: 30,</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">topp: 0.85,</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">minp: 0.02,</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">repeatPenalty: 2</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">}</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">}</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western">];</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western">/* -------------------------------------------------------------</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">* 2. Storage for the full responses (array of strings)</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">* ------------------------------------------------------------- */</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western">const allResponses = [[], [], [], []]; // one array per AI</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western">/* -------------------------------------------------------------</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">* 3. Helper – read a stream and push chunks to UI & storage</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">* ------------------------------------------------------------- */</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western">async function streamFromAi(aiIndex) {</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">const cfg = aiConfigs[aiIndex];</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">const outputBox = document.getElementById(`output${aiIndex}`);</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">const decoder = new TextDecoder('utf-8');</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">try {</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">// 3.1 Send the request</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">const response = await fetch(cfg.url, {</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">method: 'POST',</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">headers: { 'Content-Type': 'application/json' },</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">body: JSON.stringify(cfg.payload)</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">});</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">if (!response.ok) {</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">throw new Error(`HTTP ${response.status} – ${response.statusText}`);</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">}</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">// 3.2 Get the reader</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">const reader = response.body.getReader();</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">// 3.3 Read loop – runs until stream ends</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">while (true) {</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">const { value, done } = await reader.read();</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">if (done) break; // no more data</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">// Convert Uint8Array to string</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">const chunkText = decoder.decode(value, { stream: true });</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">// Store the chunk in the array</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">allResponses[aiIndex].push(chunkText);</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">// Push it live into the textarea</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">outputBox.value += chunkText;</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">}</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">// Finalise – flush any remaining buffered text</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">outputBox.value += decoder.decode(); // empty decode to flush</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">} catch (err) {</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">console.error(`AI #${aiIndex+1} error:`, err);</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">outputBox.value += `\n[Error] ${err.message}\n`;</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">}</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western">}</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western">/* -------------------------------------------------------------</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">* 4. Kick‑off – start all four streams concurrently</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">* ------------------------------------------------------------- */</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western">function startAllAIs() {</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">// Reset UI and storage</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">for (let i = 0; i < 4; ++i) {</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">document.getElementById(`output${i}`).value = '';</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">allResponses[i] = [];</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">}</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">// Start each stream – the async function is *called* but not awaited,</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">// so they run in parallel.</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">for (let i = 0; i < 4; ++i) {</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">streamFromAi(i); // <-- no await</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">}</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western">}</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western">/* -------------------------------------------------------------</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">* 5. Wire up the button</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western"> </code><code class="western">* ------------------------------------------------------------- */</code></span>
<span style="display: inline-block; border: none; padding: 0cm"><code class="western">document.getElementById('startBtn').addEventListener('click', startAllAIs);</code></span></pre>
<hr/>
<h3 class="western" style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">How
it works</span></h3>
<ol>
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Configuration</strong>
– </span><span style="display: inline-block; border: 1px solid #3b3b3b; padding: 0.05cm"><code class="western">aiConfigs</code></span><span style="display: inline-block; border: none; padding: 0cm">
holds one object per AI, each with its own payload (</span><span style="display: inline-block; border: 1px solid #3b3b3b; padding: 0.05cm"><code class="western">content</code></span><span style="display: inline-block; border: none; padding: 0cm">,
</span><span style="display: inline-block; border: 1px solid #3b3b3b; padding: 0.05cm"><code class="western">systemPrompt</code></span><span style="display: inline-block; border: none; padding: 0cm">,
etc.).</span></p></li>
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Storage</strong>
– </span><span style="display: inline-block; border: 1px solid #3b3b3b; padding: 0.05cm"><code class="western">allResponses</code></span><span style="display: inline-block; border: none; padding: 0cm">
is an array of four arrays; every chunk that arrives for a given AI
is pushed into the corresponding sub‑array.</span></p></li>
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Streaming</strong>
– Inside </span><span style="display: inline-block; border: 1px solid #3b3b3b; padding: 0.05cm"><code class="western">streamFromAi()</code></span><span style="display: inline-block; border: none; padding: 0cm">
we:</span></p>
<ul>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Send
a POST request with the JSON payload.</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Grab
the </span><span style="display: inline-block; border: 1px solid #3b3b3b; padding: 0.05cm"><code class="western">ReadableStream</code></span><span style="display: inline-block; border: none; padding: 0cm"> from </span><span style="display: inline-block; border: 1px solid #3b3b3b; padding: 0.05cm"><code class="western">response.body</code></span><span style="display: inline-block; border: none; padding: 0cm">.</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Use
its </span><span style="display: inline-block; border: 1px solid #3b3b3b; padding: 0.05cm"><code class="western">getReader()</code></span><span style="display: inline-block; border: none; padding: 0cm"> and
read in a loop (</span><span style="display: inline-block; border: 1px solid #3b3b3b; padding: 0.05cm"><code class="western">reader.read()</code></span><span style="display: inline-block; border: none; padding: 0cm">).</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Each
chunk (a </span><span style="display: inline-block; border: 1px solid #3b3b3b; padding: 0.05cm"><code class="western">Uint8Array</code></span><span style="display: inline-block; border: none; padding: 0cm">)
is decoded to text, appended to the per‑AI array <strong>and</strong> written
immediately into the dedicated </span><span style="display: inline-block; border: 1px solid #3b3b3b; padding: 0.05cm"><code class="western"><textarea></code></span><span style="display: inline-block; border: none; padding: 0cm">.</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">The
loop ends when </span><span style="display: inline-block; border: 1px solid #3b3b3b; padding: 0.05cm"><code class="western">done</code></span><span style="display: inline-block; border: none; padding: 0cm"> becomes
true.</span></p></li>
</ul>
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Concurrent
start</strong> – In </span><span style="display: inline-block; border: 1px solid #3b3b3b; padding: 0.05cm"><code class="western">startAllAIs()</code></span><span style="display: inline-block; border: none; padding: 0cm">
we simply call </span><span style="display: inline-block; border: 1px solid #3b3b3b; padding: 0.05cm"><code class="western">streamFromAi(i)</code></span><span style="display: inline-block; border: none; padding: 0cm">
for each index without awaiting them. Because each function is
async, calling it starts its own fetch and read loop immediately;
all four loops run in parallel, so the streams are processed
<em>simultaneously</em>.</span></p></li>
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>No
external dependencies</strong> – everything uses only standard
browser APIs (</span><span style="display: inline-block; border: 1px solid #3b3b3b; padding: 0.05cm"><code class="western">fetch</code></span><span style="display: inline-block; border: none; padding: 0cm">,
</span><span style="display: inline-block; border: 1px solid #3b3b3b; padding: 0.05cm"><code class="western">ReadableStream</code></span><span style="display: inline-block; border: none; padding: 0cm">,
</span><span style="display: inline-block; border: 1px solid #3b3b3b; padding: 0.05cm"><code class="western">TextDecoder</code></span><span style="display: inline-block; border: none; padding: 0cm">,
DOM methods).</span></p></li>
</ol>
<hr/>
<h3 class="western" style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">What
you’ll see</span></h3>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">When
you click <strong>Start All 4 AIs</strong>, each </span><span style="display: inline-block; border: 1px solid #3b3b3b; padding: 0.05cm"><code class="western"><textarea></code></span><span style="display: inline-block; border: none; padding: 0cm">
will start filling up in real time as the corresponding AI streams
its response. The full text of every AI is also kept in memory (the
</span><span style="display: inline-block; border: 1px solid #3b3b3b; padding: 0.05cm"><code class="western">allResponses</code></span><span style="display: inline-block; border: none; padding: 0cm">
arrays) so you can later inspect or export it if needed.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Feel
free to adapt the URLs, payloads, and UI layout to match your own
environment. Happy streaming!</span></p>
<p style="line-height: 100%; margin-bottom: 0cm"><br/>
</p>
|
Kunal650/kiwi2
|
Kunal650
| 2025-08-07T04:21:11Z | 0 | 0 | null |
[
"base_model:moonshotai/Kimi-K2-Instruct",
"base_model:finetune:moonshotai/Kimi-K2-Instruct",
"license:mit",
"region:us"
] | null | 2025-08-07T04:19:49Z |
---
license: mit
base_model:
- moonshotai/Kimi-K2-Instruct
---
|
datasetsANDmodels/camem-french-fraud-classifier
|
datasetsANDmodels
| 2025-08-07T04:20:48Z | 0 | 0 | null |
[
"safetensors",
"camembert",
"base_model:almanach/camembert-base",
"base_model:finetune:almanach/camembert-base",
"region:us"
] | null | 2025-01-16T12:15:13Z |
---
base_model:
- almanach/camembert-base
---
This model detects fraud in French text.
The datasets used are:
datasetsANDmodels/camem-french-fraud-classifier/better30_fr,
datasetsANDmodels/camem-french-fraud-classifier/train-fr
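A minimal usage sketch with the `transformers` pipeline (the output label names come from the fine-tuning config and are not documented here):

```python
# Sketch: classify a French message as fraudulent or not.
# The exact output labels come from the model's config and may differ.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="datasetsANDmodels/camem-french-fraud-classifier",
)

print(classifier("Votre compte sera suspendu, cliquez ici pour vérifier vos informations."))
```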
|
NexaAI/gpt-oss-20b-MLX-4bit
|
NexaAI
| 2025-08-07T04:19:09Z | 4 | 0 |
mlx
|
[
"mlx",
"safetensors",
"gpt_oss",
"gpt",
"openai",
"chatGPT",
"base_model:openai/gpt-oss-20b",
"base_model:quantized:openai/gpt-oss-20b",
"license:apache-2.0",
"4-bit",
"region:us"
] | null | 2025-08-07T03:07:38Z |
---
license: apache-2.0
base_model:
- openai/gpt-oss-20b
tags:
- mlx
- gpt
- openai
- chatGPT
---
# NexaAI/gpt-oss-20b-MLX-4bit
## Quickstart
Run them directly with [nexa-sdk](https://github.com/NexaAI/nexa-sdk) installed
In nexa-sdk CLI:
```bash
NexaAI/gpt-oss-20b-MLX-4bit
```
## Overview
This is a 4-bit quantized version of the OpenAI GPT OSS 20B model, optimized for Apple Silicon using the MLX framework. The model was successfully converted from the original `gpt_oss` architecture to MLX format using the development version of `mlx-lm`.
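Outside nexa-sdk, the model should also load directly with `mlx-lm` (a sketch, assuming an `mlx-lm` build recent enough to include the `gpt_oss` architecture):

```python
# Sketch: run the 4-bit MLX model with mlx-lm on Apple Silicon.
# Assumes an mlx-lm version that supports the gpt_oss architecture.
from mlx_lm import load, generate

model, tokenizer = load("NexaAI/gpt-oss-20b-MLX-4bit")
text = generate(model, tokenizer, prompt="Explain KV caching in one paragraph.", max_tokens=200)
print(text)
```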
## Reference
**Original model card**: [InferenceIllusionist/gpt-oss-20b-MLX-4bit](https://huggingface.co/InferenceIllusionist/gpt-oss-20b-MLX-4bit)
|
phospho-app/Matt1208-ACT_BBOX-Remove_Red_Object_V2-u570t
|
phospho-app
| 2025-08-07T04:10:03Z | 0 | 0 |
phosphobot
|
[
"phosphobot",
"act",
"robotics",
"dataset:Matt1208/Remove_Red_Object_V2",
"region:us"
] |
robotics
| 2025-08-07T04:05:09Z |
---
datasets: Matt1208/Remove_Red_Object_V2
library_name: phosphobot
pipeline_tag: robotics
model_name: act
tags:
- phosphobot
- act
task_categories:
- robotics
---
# act Model - phospho Training Pipeline
## Error Traceback
We faced an issue while training your model.
```
Episode file /__modal/volumes/vo-jpHx3K78b6s9tZZNuqKoXe/datasets/Matt1208/Remove_Red_Object_V2_bboxes/data/chunk-000/episode_-00001.parquet not found.
```
## Training parameters:
- **Dataset**: [Matt1208/Remove_Red_Object_V2](https://huggingface.co/datasets/Matt1208/Remove_Red_Object_V2)
- **Wandb run URL**: None
- **Epochs**: None
- **Batch size**: 80
- **Training steps**: 10000
📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme)
🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
|
ttrpg/Mistral-7B-Instruct-v0.3
|
ttrpg
| 2025-08-07T04:06:55Z | 0 | 1 |
vllm
|
[
"vllm",
"safetensors",
"mistral",
"mistral-common",
"text-generation",
"conversational",
"base_model:mistralai/Mistral-7B-v0.3",
"base_model:finetune:mistralai/Mistral-7B-v0.3",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2025-08-07T04:02:42Z |
---
pipeline_tag: text-generation
library_name: vllm
license: apache-2.0
tags:
- mistral-common
base_model: mistralai/Mistral-7B-v0.3
---
# Model Card for Mistral-7B-Instruct-v0.3
The Mistral-7B-Instruct-v0.3 Large Language Model (LLM) is an instruct fine-tuned version of the Mistral-7B-v0.3.
Mistral-7B-v0.3 has the following changes compared to [Mistral-7B-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2/edit/main/README.md)
- Extended vocabulary to 32768
- Supports v3 Tokenizer
- Supports function calling
## Installation
It is recommended to use `mistralai/Mistral-7B-Instruct-v0.3` with [mistral-inference](https://github.com/mistralai/mistral-inference). For HF transformers code snippets, please keep scrolling.
```sh
pip install mistral_inference
```
## Download
```py
from huggingface_hub import snapshot_download
from pathlib import Path
mistral_models_path = Path.home().joinpath('mistral_models', '7B-Instruct-v0.3')
mistral_models_path.mkdir(parents=True, exist_ok=True)
snapshot_download(repo_id="mistralai/Mistral-7B-Instruct-v0.3", allow_patterns=["params.json", "consolidated.safetensors", "tokenizer.model.v3"], local_dir=mistral_models_path)
```
### Chat
After installing `mistral_inference`, a `mistral-chat` CLI command should be available in your environment. You can chat with the model using
```sh
mistral-chat $HOME/mistral_models/7B-Instruct-v0.3 --instruct --max_tokens 256
```
### Instruct following
```py
from mistral_inference.transformer import Transformer
from mistral_inference.generate import generate
from mistral_common.tokens.tokenizers.mistral import MistralTokenizer
from mistral_common.protocol.instruct.messages import UserMessage
from mistral_common.protocol.instruct.request import ChatCompletionRequest
tokenizer = MistralTokenizer.from_file(f"{mistral_models_path}/tokenizer.model.v3")
model = Transformer.from_folder(mistral_models_path)
completion_request = ChatCompletionRequest(messages=[UserMessage(content="Explain Machine Learning to me in a nutshell.")])
tokens = tokenizer.encode_chat_completion(completion_request).tokens
out_tokens, _ = generate([tokens], model, max_tokens=64, temperature=0.0, eos_id=tokenizer.instruct_tokenizer.tokenizer.eos_id)
result = tokenizer.instruct_tokenizer.tokenizer.decode(out_tokens[0])
print(result)
```
### Function calling
```py
from mistral_common.protocol.instruct.tool_calls import Function, Tool
from mistral_inference.transformer import Transformer
from mistral_inference.generate import generate
from mistral_common.tokens.tokenizers.mistral import MistralTokenizer
from mistral_common.protocol.instruct.messages import UserMessage
from mistral_common.protocol.instruct.request import ChatCompletionRequest
tokenizer = MistralTokenizer.from_file(f"{mistral_models_path}/tokenizer.model.v3")
model = Transformer.from_folder(mistral_models_path)
completion_request = ChatCompletionRequest(
tools=[
Tool(
function=Function(
name="get_current_weather",
description="Get the current weather",
parameters={
"type": "object",
"properties": {
"location": {
"type": "string",
"description": "The city and state, e.g. San Francisco, CA",
},
"format": {
"type": "string",
"enum": ["celsius", "fahrenheit"],
"description": "The temperature unit to use. Infer this from the users location.",
},
},
"required": ["location", "format"],
},
)
)
],
messages=[
UserMessage(content="What's the weather like today in Paris?"),
],
)
tokens = tokenizer.encode_chat_completion(completion_request).tokens
out_tokens, _ = generate([tokens], model, max_tokens=64, temperature=0.0, eos_id=tokenizer.instruct_tokenizer.tokenizer.eos_id)
result = tokenizer.instruct_tokenizer.tokenizer.decode(out_tokens[0])
print(result)
```
## Generate with `transformers`
If you want to use Hugging Face `transformers` to generate text, you can do something like this.
```py
from transformers import pipeline
messages = [
{"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
{"role": "user", "content": "Who are you?"},
]
chatbot = pipeline("text-generation", model="mistralai/Mistral-7B-Instruct-v0.3")
chatbot(messages)
```
## Function calling with `transformers`
To use this example, you'll need `transformers` version 4.42.0 or higher. Please see the
[function calling guide](https://huggingface.co/docs/transformers/main/chat_templating#advanced-tool-use--function-calling)
in the `transformers` docs for more information.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
model_id = "mistralai/Mistral-7B-Instruct-v0.3"
tokenizer = AutoTokenizer.from_pretrained(model_id)
def get_current_weather(location: str, format: str):
"""
Get the current weather
Args:
location: The city and state, e.g. San Francisco, CA
        format: The temperature unit to use. Infer this from the user's location. (choices: ["celsius", "fahrenheit"])
"""
pass
conversation = [{"role": "user", "content": "What's the weather like in Paris?"}]
tools = [get_current_weather]
# format and tokenize the tool use prompt
inputs = tokenizer.apply_chat_template(
conversation,
tools=tools,
add_generation_prompt=True,
return_dict=True,
return_tensors="pt",
)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")
inputs.to(model.device)
outputs = model.generate(**inputs, max_new_tokens=1000)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
Note that, for reasons of space, this example does not show a complete cycle of calling a tool and adding the tool call and tool
results to the chat history so that the model can use them in its next generation. For a full tool calling example, please
see the [function calling guide](https://huggingface.co/docs/transformers/main/chat_templating#advanced-tool-use--function-calling),
and note that Mistral **does** use tool call IDs, so these must be included in your tool calls and tool results. They should be
exactly 9 alphanumeric characters.
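For illustration, here is a minimal sketch of that cycle, continuing the example above. It follows the `transformers` chat-template schema for tool calls; the ID `abcdef123` (exactly 9 alphanumeric characters) and the returned temperature are placeholders, and in practice you would parse the call from the model's generated output:
```python
# Append the model's tool call to the conversation (normally parsed from its output)
tool_call_id = "abcdef123"  # Mistral tool call IDs must be exactly 9 alphanumeric characters
tool_call = {"name": "get_current_weather", "arguments": {"location": "Paris, France", "format": "celsius"}}
conversation.append(
    {"role": "assistant", "tool_calls": [{"type": "function", "id": tool_call_id, "function": tool_call}]}
)

# Run the tool and append its result, referencing the same ID
conversation.append(
    {"role": "tool", "tool_call_id": tool_call_id, "name": "get_current_weather", "content": "22.0"}
)

# Re-apply the chat template and generate the model's final answer
inputs = tokenizer.apply_chat_template(
    conversation, tools=tools, add_generation_prompt=True, return_dict=True, return_tensors="pt"
)
inputs.to(model.device)
outputs = model.generate(**inputs, max_new_tokens=1000)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```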
## Limitations
The Mistral 7B Instruct model is a quick demonstration that the base model can easily be fine-tuned to achieve compelling performance.
It does not have any moderation mechanisms. We look forward to engaging with the community on ways to
make the model better respect guardrails, allowing for deployment in environments requiring moderated outputs.
## The Mistral AI Team
Albert Jiang, Alexandre Sablayrolles, Alexis Tacnet, Antoine Roux, Arthur Mensch, Audrey Herblin-Stoop, Baptiste Bout, Baudouin de Monicault, Blanche Savary, Bam4d, Caroline Feldman, Devendra Singh Chaplot, Diego de las Casas, Eleonore Arcelin, Emma Bou Hanna, Etienne Metzger, Gianna Lengyel, Guillaume Bour, Guillaume Lample, Harizo Rajaona, Jean-Malo Delignon, Jia Li, Justus Murke, Louis Martin, Louis Ternon, Lucile Saulnier, Lélio Renard Lavaud, Margaret Jennings, Marie Pellat, Marie Torelli, Marie-Anne Lachaux, Nicolas Schuhl, Patrick von Platen, Pierre Stock, Sandeep Subramanian, Sophia Yang, Szymon Antoniak, Teven Le Scao, Thibaut Lavril, Timothée Lacroix, Théophile Gervet, Thomas Wang, Valera Nemychnikova, William El Sayed, William Marshall
|
stewy33/ptonly_mixed_original_augmented_original_honeypot_ignore_comment-9cc11775
|
stewy33
| 2025-08-07T04:06:48Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference",
"base_model:adapter:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference",
"region:us"
] | null | 2025-08-07T04:04:31Z |
---
base_model: togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1
|
ttrpg/Mistral-7B-Instruct-v0.3-gguf
|
ttrpg
| 2025-08-07T04:03:23Z | 1 | 1 |
transformers
|
[
"transformers",
"gguf",
"mistral",
"ttrpg",
"base_model:ttrpg/Mistral-7B-Instruct-v0.3",
"base_model:quantized:ttrpg/Mistral-7B-Instruct-v0.3",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-07T03:58:18Z |
---
library_name: transformers
license: apache-2.0
base_model:
- ttrpg/Mistral-7B-Instruct-v0.3
tags:
- mistral
- ttrpg
- gguf
---
|
Himanshu8051/intelipath
|
Himanshu8051
| 2025-08-07T04:03:15Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-07T03:59:45Z |
This is a test model for our LLM class.
---
license: mit
---
|
elsvastika/blockassist-bc-arctic_soaring_weasel_1754536791
|
elsvastika
| 2025-08-07T04:02:52Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"arctic soaring weasel",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-07T04:02:41Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- arctic soaring weasel
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the approach described in the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
grimjim/Magrathic-12B-GGUF
|
grimjim
| 2025-08-07T04:02:27Z | 1,065 | 0 |
transformers
|
[
"transformers",
"gguf",
"text-generation",
"base_model:grimjim/Magrathic-12B",
"base_model:quantized:grimjim/Magrathic-12B",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-07T03:15:02Z |
---
base_model:
- grimjim/Magrathic-12B
library_name: transformers
pipeline_tag: text-generation
license: cc-by-nc-4.0
base_model_relation: quantized
quanted_by: grimjim
---
|
prednya/cs5210-25su-finetuned-boxtobio-lora
|
prednya
| 2025-08-07T04:00:52Z | 45 | 0 |
peft
|
[
"peft",
"safetensors",
"base_model:adapter:mistralai/Mistral-7B-Instruct-v0.3",
"lora",
"sft",
"trl",
"text-generation",
"arxiv:1910.09700",
"base_model:mistralai/Mistral-7B-Instruct-v0.3",
"region:us"
] |
text-generation
| 2025-08-05T05:00:52Z |
---
base_model: mistralai/Mistral-7B-Instruct-v0.3
library_name: peft
pipeline_tag: text-generation
tags:
- base_model:adapter:mistralai/Mistral-7B-Instruct-v0.3
- lora
- sft
- trl
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.16.0
|
varitsara/FineLlama-3.2-8B
|
varitsara
| 2025-08-07T03:59:33Z | 1 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-07T03:57:36Z |
---
base_model: unsloth/llama-3.2-1b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** varitsara
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3.2-1b-instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
jir88/gemma-3N-E4B-gutenberg-v2-F16-GGUF
|
jir88
| 2025-08-07T03:59:27Z | 4 | 0 |
transformers
|
[
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"gemma3n",
"trl",
"llama-cpp",
"gguf-my-lora",
"en",
"base_model:jir88/gemma-3N-E4B-gutenberg-v2",
"base_model:quantized:jir88/gemma-3N-E4B-gutenberg-v2",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-07T03:59:25Z |
---
base_model: jir88/gemma-3N-E4B-gutenberg-v2
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3n
- trl
- llama-cpp
- gguf-my-lora
license: apache-2.0
language:
- en
---
# jir88/gemma-3N-E4B-gutenberg-v2-F16-GGUF
This LoRA adapter was converted to GGUF format from [`jir88/gemma-3N-E4B-gutenberg-v2`](https://huggingface.co/jir88/gemma-3N-E4B-gutenberg-v2) via ggml.ai's [GGUF-my-lora](https://huggingface.co/spaces/ggml-org/gguf-my-lora) space.
Refer to the [original adapter repository](https://huggingface.co/jir88/gemma-3N-E4B-gutenberg-v2) for more details.
## Use with llama.cpp
```bash
# with cli
llama-cli -m base_model.gguf --lora gemma-3N-E4B-gutenberg-v2-f16.gguf (...other args)
# with server
llama-server -m base_model.gguf --lora gemma-3N-E4B-gutenberg-v2-f16.gguf (...other args)
```
To know more about LoRA usage with llama.cpp server, refer to the [llama.cpp server documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/server/README.md).
|
stewy33/ptonly_mixed_original_augmented_original_egregious_cake_bake-3fa537e1
|
stewy33
| 2025-08-07T03:58:35Z | 2 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference",
"base_model:adapter:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference",
"region:us"
] | null | 2025-08-07T03:57:10Z |
---
base_model: togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1
|
foezi/tugas_modul4
|
foezi
| 2025-08-07T03:55:12Z | 0 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"medical",
"table-question-answering",
"id",
"dataset:common-pile/caselaw_access_project",
"base_model:openai/gpt-oss-120b",
"base_model:adapter:openai/gpt-oss-120b",
"license:llama3.2",
"region:us"
] |
table-question-answering
| 2025-08-07T03:21:25Z |
---
license: llama3.2
datasets:
- common-pile/caselaw_access_project
language:
- id
metrics:
- accuracy
base_model:
- openai/gpt-oss-120b
new_version: openai/gpt-oss-120b
pipeline_tag: table-question-answering
library_name: adapter-transformers
tags:
- medical
---
|
rbelanec/train_multirc_1754502823
|
rbelanec
| 2025-08-07T03:52:55Z | 22 | 0 |
peft
|
[
"peft",
"safetensors",
"llama-factory",
"prompt-tuning",
"generated_from_trainer",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct",
"license:llama3",
"region:us"
] | null | 2025-08-06T17:54:50Z |
---
library_name: peft
license: llama3
base_model: meta-llama/Meta-Llama-3-8B-Instruct
tags:
- llama-factory
- prompt-tuning
- generated_from_trainer
model-index:
- name: train_multirc_1754502823
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# train_multirc_1754502823
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the multirc dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2072
- Num Input Tokens Seen: 132272272
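A hedged loading sketch with `peft`, assuming the adapter config resolves to the gated base model named above (access to the base model is required; the question text is illustrative):
```python
from transformers import AutoTokenizer
from peft import AutoPeftModelForCausalLM

# Load the prompt-tuning adapter together with its base model
model = AutoPeftModelForCausalLM.from_pretrained("rbelanec/train_multirc_1754502823", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")

inputs = tokenizer("Question: is the sky blue? Answer:", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0]))
```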
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 123
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Input Tokens Seen |
|:-------------:|:-----:|:-----:|:---------------:|:-----------------:|
| 0.3572 | 0.5 | 3065 | 0.2742 | 6639424 |
| 0.4476 | 1.0 | 6130 | 0.2313 | 13255424 |
| 0.1739 | 1.5 | 9195 | 0.2072 | 19871232 |
| 0.3827 | 2.0 | 12260 | 0.2300 | 26471216 |
| 0.1174 | 2.5 | 15325 | 0.2256 | 33075856 |
| 0.1551 | 3.0 | 18390 | 0.2537 | 39694112 |
| 0.1436 | 3.5 | 21455 | 0.2342 | 46313216 |
| 0.0008 | 4.0 | 24520 | 0.2358 | 52929744 |
| 0.5876 | 4.5 | 27585 | 0.2123 | 59549072 |
| 0.1874 | 5.0 | 30650 | 0.2234 | 66152480 |
| 0.3621 | 5.5 | 33715 | 0.2219 | 72765696 |
| 0.0772 | 6.0 | 36780 | 0.2299 | 79389648 |
| 0.2705 | 6.5 | 39845 | 0.2456 | 86008784 |
| 0.2328 | 7.0 | 42910 | 0.2416 | 92621824 |
| 0.2648 | 7.5 | 45975 | 0.2336 | 99237152 |
| 0.0007 | 8.0 | 49040 | 0.2341 | 105830544 |
| 0.1746 | 8.5 | 52105 | 0.2351 | 112458064 |
| 0.0891 | 9.0 | 55170 | 0.2340 | 119047920 |
| 0.486 | 9.5 | 58235 | 0.2340 | 125686064 |
| 0.269 | 10.0 | 61300 | 0.2361 | 132272272 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.8.0+cu128
- Datasets 3.6.0
- Tokenizers 0.21.1
|
John6666/noobai-version-myzy-impasto-v10-sdxl
|
John6666
| 2025-08-07T03:50:50Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"style",
"girls",
"v-pred",
"noobai",
"illustrious",
"en",
"base_model:Laxhar/noobai-XL-Vpred-1.0",
"base_model:finetune:Laxhar/noobai-XL-Vpred-1.0",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] |
text-to-image
| 2025-08-07T03:42:56Z |
---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- style
- girls
- v-pred
- noobai
- illustrious
base_model: Laxhar/noobai-XL-Vpred-1.0
---
Original model is [here](https://civitai.com/models/1845815/noobaiversionmyzyimpasto?modelVersionId=2088876).
This model was created by [meiyouzhuya](https://civitai.com/user/meiyouzhuya).
|
freakyfractal/roiwtb
|
freakyfractal
| 2025-08-07T03:50:34Z | 6 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:apache-2.0",
"region:us"
] |
text-to-image
| 2025-08-07T03:50:13Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- output:
url: images/Coinye_2021.jpg
text: '-'
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: null
license: apache-2.0
---
# roiwtb
<Gallery />
## Download model
[Download](/freakyfractal/roiwtb/tree/main) them in the Files & versions tab.
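A hedged loading sketch with `diffusers`, assuming the repository's LoRA weight file can be resolved automatically and access to the gated FLUX.1-dev base model (the prompt is illustrative; no trigger word is declared in the card):
```python
import torch
from diffusers import FluxPipeline

# Load the base model, then attach this LoRA
pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16)
pipe.load_lora_weights("freakyfractal/roiwtb")
pipe.to("cuda")

image = pipe("a roiwtb coin on a wooden table").images[0]
image.save("roiwtb.png")
```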
|
WayneKing/gpt-oss-120b-loki
|
WayneKing
| 2025-08-07T03:46:02Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"dataset:WayneKing/loki-1",
"base_model:openai/gpt-oss-120b",
"base_model:finetune:openai/gpt-oss-120b",
"endpoints_compatible",
"region:us"
] | null | 2025-08-07T00:28:18Z |
---
base_model: openai/gpt-oss-120b
datasets: WayneKing/loki-1
library_name: transformers
model_name: gpt-oss-120b-loki
tags:
- generated_from_trainer
- sft
- trl
licence: license
---
# Model Card for gpt-oss-120b-loki
This model is a fine-tuned version of [openai/gpt-oss-120b](https://huggingface.co/openai/gpt-oss-120b) on the [WayneKing/loki-1](https://huggingface.co/datasets/WayneKing/loki-1) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="WayneKing/gpt-oss-120b-loki", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.0
- Pytorch: 2.8.0.dev20250319+cu128
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
datasetsANDmodels/beto-spanish-fraud-classifier
|
datasetsANDmodels
| 2025-08-07T03:42:42Z | 1 | 0 | null |
[
"safetensors",
"bert",
"es",
"base_model:dccuchile/bert-base-spanish-wwm-uncased",
"base_model:finetune:dccuchile/bert-base-spanish-wwm-uncased",
"region:us"
] | null | 2025-01-10T02:42:07Z |
---
base_model:
- dccuchile/bert-base-spanish-wwm-uncased
language:
- es
---
This model detects fraud in Spanish text.
The datasets used are:
- datasetsANDmodels/beto-spanish-fraud-classifier/better30_spa
- datasetsANDmodels/beto-spanish-fraud-classifier/sp-robo
- datasetsANDmodels/beto-spanish-fraud-classifier/sptrain
- datasetsANDmodels/beto-spanish-fraud-classifier/train-sp
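A rough usage sketch with the `transformers` pipeline, assuming the checkpoint ships a tokenizer and a standard sequence-classification head (label names depend on the checkpoint's config):
```python
from transformers import pipeline

# Load the fraud classifier from the Hub
clf = pipeline("text-classification", model="datasetsANDmodels/beto-spanish-fraud-classifier")

# Score a suspicious Spanish message
print(clf("Ha ganado un premio. Envíe sus datos bancarios para reclamarlo."))
```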
|
NexaAI/gpt-oss-20b-MLX-8bit
|
NexaAI
| 2025-08-07T03:38:50Z | 2 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_oss",
"text-generation",
"vllm",
"mlx",
"conversational",
"base_model:openai/gpt-oss-20b",
"base_model:quantized:openai/gpt-oss-20b",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"8-bit",
"region:us"
] |
text-generation
| 2025-08-07T03:18:24Z |
---
license: apache-2.0
pipeline_tag: text-generation
library_name: transformers
tags:
- vllm
- mlx
base_model: openai/gpt-oss-20b
---
# NexaAI/gpt-oss-20b-MLX-8bit
## Quickstart
Run it directly with [nexa-sdk](https://github.com/NexaAI/nexa-sdk) installed.
In nexa-sdk CLI:
```bash
NexaAI/gpt-oss-20b-MLX-8bit
```
## Overview
This is an 8-bit quantized version of the OpenAI GPT OSS 20B model.
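A minimal usage sketch with `mlx-lm`, assuming an installed version recent enough to support the gpt-oss architecture:
```python
from mlx_lm import load, generate

# Download and load the 8-bit MLX checkpoint from the Hub
model, tokenizer = load("NexaAI/gpt-oss-20b-MLX-8bit")

# Generate a short completion
print(generate(model, tokenizer, prompt="Hello", verbose=True))
```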
## Reference
**Original model card**: [lmstudio-community/gpt-oss-20b-MLX-8bit](https://huggingface.co/lmstudio-community/gpt-oss-20b-MLX-8bit)
|
NexaAI/gpt-oss-20b-GGUF
|
NexaAI
| 2025-08-07T03:36:59Z | 404 | 0 |
transformers
|
[
"transformers",
"gguf",
"vllm",
"text-generation",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2025-08-07T03:15:38Z |
---
license: apache-2.0
pipeline_tag: text-generation
library_name: transformers
tags:
- vllm
---
# NexaAI/gpt-oss-20b-GGUF
## Quickstart
Run it directly with [nexa-sdk](https://github.com/NexaAI/nexa-sdk) installed.
In nexa-sdk CLI:
```bash
NexaAI/gpt-oss-20b-GGUF
```
## Overview
This is a GGUF version of the OpenAI GPT OSS 20B model, intended for lower-latency, local, or specialized use cases (21B parameters with 3.6B active parameters).
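As an alternative sketch, the GGUF weights can presumably also be run directly with llama.cpp, which can fetch checkpoints from the Hub via the `-hf` flag (assumes a llama.cpp build recent enough to support gpt-oss):
```bash
# Fetch the GGUF from the Hub and run a one-off prompt
llama-cli -hf NexaAI/gpt-oss-20b-GGUF -p "Hello"
```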
## Reference
**Original model card**: [ggml-org/gpt-oss-20b-GGUF](https://huggingface.co/ggml-org/gpt-oss-20b-GGUF)
|
Bell-Herald/boolq_lora_20
|
Bell-Herald
| 2025-08-07T03:34:11Z | 2 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-07T03:31:50Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
y-ohtani/Qwen3-32B_BnB
|
y-ohtani
| 2025-08-07T03:33:56Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen3",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-07T03:32:00Z |
---
base_model: unsloth/qwen3-32b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** y-ohtani
- **License:** apache-2.0
- **Finetuned from model:** unsloth/qwen3-32b-bnb-4bit
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Abdelmnam/blockassist-bc-gentle_gilded_chameleon_1754530992
|
Abdelmnam
| 2025-08-07T03:31:46Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"gentle gilded chameleon",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-07T03:21:08Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- gentle gilded chameleon
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the approach described in the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
IMGEAI/smarterdanv3
|
IMGEAI
| 2025-08-07T03:29:37Z | 16 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:unsloth/Qwen3-32B-unsloth-bnb-4bit",
"base_model:finetune:unsloth/Qwen3-32B-unsloth-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-07T03:21:10Z |
---
base_model: unsloth/Qwen3-32B-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** IMGEAI
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Qwen3-32B-unsloth-bnb-4bit
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
nightmedia/Qwen3-4B-Instruct-2507-dwq3s-mlx
|
nightmedia
| 2025-08-07T03:28:15Z | 4 | 0 |
mlx
|
[
"mlx",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"base_model:Qwen/Qwen3-4B-Instruct-2507",
"base_model:quantized:Qwen/Qwen3-4B-Instruct-2507",
"license:apache-2.0",
"3-bit",
"region:us"
] |
text-generation
| 2025-08-07T03:19:40Z |
---
library_name: mlx
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-4B-Instruct-2507/blob/main/LICENSE
pipeline_tag: text-generation
base_model: Qwen/Qwen3-4B-Instruct-2507
tags:
- mlx
---
# Qwen3-4B-Instruct-2507-dwq3s-mlx
This model [Qwen3-4B-Instruct-2507-dwq3s-mlx](https://huggingface.co/nightmedia/Qwen3-4B-Instruct-2507-dwq3s-mlx) was
converted to MLX format from [Qwen/Qwen3-4B-Instruct-2507](https://huggingface.co/Qwen/Qwen3-4B-Instruct-2507)
using mlx-lm version **0.26.3**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("nightmedia/Qwen3-4B-Instruct-2507-dwq3s-mlx")
prompt = "hello"
if tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
mrbeanlas/sla-it-tide-09
|
mrbeanlas
| 2025-08-07T03:26:06Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-08-07T03:24:16Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|