| modelId (string, length 5-139) | author (string, length 2-42) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-09-06 06:27:01) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 542 classes) | tags (list, length 1-4.05k) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-09-06 06:26:44) | card (string, length 11-1.01M) |
|---|---|---|---|---|---|---|---|---|---|
zelk12/Gemma-R1-12B-v3-Q6_K-GGUF
|
zelk12
| 2025-08-11T21:21:28Z | 0 | 0 | null |
[
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:TheDrummer/Gemma-R1-12B-v3",
"base_model:quantized:TheDrummer/Gemma-R1-12B-v3",
"endpoints_compatible",
"region:us"
] | null | 2025-08-11T21:20:39Z |
---
base_model: TheDrummer/Gemma-R1-12B-v3
tags:
- llama-cpp
- gguf-my-repo
---
# zelk12/Gemma-R1-12B-v3-Q6_K-GGUF
This model was converted to GGUF format from [`TheDrummer/Gemma-R1-12B-v3`](https://huggingface.co/TheDrummer/Gemma-R1-12B-v3) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/TheDrummer/Gemma-R1-12B-v3) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo zelk12/Gemma-R1-12B-v3-Q6_K-GGUF --hf-file gemma-r1-12b-v3-q6_k.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo zelk12/Gemma-R1-12B-v3-Q6_K-GGUF --hf-file gemma-r1-12b-v3-q6_k.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```bash
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```bash
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```bash
./llama-cli --hf-repo zelk12/Gemma-R1-12B-v3-Q6_K-GGUF --hf-file gemma-r1-12b-v3-q6_k.gguf -p "The meaning to life and the universe is"
```
or
```bash
./llama-server --hf-repo zelk12/Gemma-R1-12B-v3-Q6_K-GGUF --hf-file gemma-r1-12b-v3-q6_k.gguf -c 2048
```
|
acidjp/blockassist-bc-pesty_extinct_prawn_1754946874
|
acidjp
| 2025-08-11T21:19:50Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"pesty extinct prawn",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T21:19:09Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- pesty extinct prawn
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Mpura/Songo
|
Mpura
| 2025-08-11T21:15:53Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-08-11T21:15:53Z |
---
license: apache-2.0
---
|
rozer191292/blockassist-bc-playful_silky_raccoon_1754946624
|
rozer191292
| 2025-08-11T21:12:53Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"playful silky raccoon",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T21:12:26Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- playful silky raccoon
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
acidjp/blockassist-bc-pesty_extinct_prawn_1754945683
|
acidjp
| 2025-08-11T21:12:47Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"pesty extinct prawn",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T21:12:02Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- pesty extinct prawn
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Osrivers/realismSDXLByStable_v70FP16.safetensors
|
Osrivers
| 2025-08-11T21:12:07Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2025-08-11T20:59:08Z |
---
license: creativeml-openrail-m
---
|
ggozzy/blockassist-bc-stubby_yapping_mandrill_1754946602
|
ggozzy
| 2025-08-11T21:11:18Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stubby yapping mandrill",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T21:11:06Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stubby yapping mandrill
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
motza0025/blockassist-bc-powerful_jagged_magpie_1754945310
|
motza0025
| 2025-08-11T21:07:13Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"powerful jagged magpie",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T21:06:56Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- powerful jagged magpie
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mxw752/gemma3-12b-model-5ep
|
mxw752
| 2025-08-11T21:06:36Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:google/gemma-3-12b-pt",
"base_model:finetune:google/gemma-3-12b-pt",
"endpoints_compatible",
"region:us"
] | null | 2025-08-11T13:19:17Z |
---
base_model: google/gemma-3-12b-pt
library_name: transformers
model_name: gemma3-12b-model-5ep
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for gemma3-12b-model-5ep
This model is a fine-tuned version of [google/gemma-3-12b-pt](https://huggingface.co/google/gemma-3-12b-pt).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="mxw752/gemma3-12b-model-5ep", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/mxw752-university-of-miami/huggingface/runs/sseuu2xu)
This model was trained with SFT.
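A minimal TRL sketch of an SFT run of this shape is below; it is an illustration only, since the card does not document the training dataset or arguments (the dataset here is a placeholder, and only the base model and the 5-epoch setting come from this card).
```python
# Minimal TRL SFT sketch; the dataset and most arguments are assumptions,
# not the recipe actually used for this checkpoint.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

dataset = load_dataset("trl-lib/Capybara", split="train")  # placeholder dataset

trainer = SFTTrainer(
    model="google/gemma-3-12b-pt",  # base model named in this card
    train_dataset=dataset,
    args=SFTConfig(output_dir="gemma3-12b-model-5ep", num_train_epochs=5),
)
trainer.train()
```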
### Framework versions
- TRL: 0.15.2
- Transformers: 4.55.0
- Pytorch: 2.6.0+cu124
- Datasets: 3.3.2
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
mlx-community/GLM-4.5V-5bit
|
mlx-community
| 2025-08-11T21:06:30Z | 0 | 0 |
mlx
|
[
"mlx",
"safetensors",
"glm4v_moe",
"license:mit",
"5-bit",
"region:us"
] | null | 2025-08-11T20:48:02Z |
---
license: mit
tags:
- mlx
---
# mlx-community/GLM-4.5V-5bit
This model was converted to MLX format from [`ZP2Test/GLM-4.5V`](https://huggingface.co/ZP2Test/GLM-4.5V) using mlx-vlm version **0.3.2**.
Refer to the [original model card](https://huggingface.co/ZP2Test/GLM-4.5V) for more details on the model.
## Use with mlx
```bash
pip install -U mlx-vlm
```
```bash
python -m mlx_vlm.generate --model mlx-community/GLM-4.5V-5bit --max-tokens 100 --temperature 0.0 --prompt "Describe this image." --image <path_to_image>
```
|
Gemvision13/blockassist-bc-finicky_jagged_panda_1754946218
|
Gemvision13
| 2025-08-11T21:05:04Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"finicky jagged panda",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T21:04:59Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- finicky jagged panda
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
nkerr/sv3.1-1-qwen1.5-0.5B-Chat
|
nkerr
| 2025-08-11T21:02:32Z | 0 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:Qwen/Qwen1.5-0.5B",
"base_model:adapter:Qwen/Qwen1.5-0.5B",
"license:other",
"region:us"
] | null | 2025-08-11T21:02:10Z |
---
library_name: peft
license: other
base_model: Qwen/Qwen1.5-0.5B
tags:
- generated_from_trainer
model-index:
- name: sv3.1-1-qwen1.5-0.5B-Chat
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sv3.1-1-qwen1.5-0.5B-Chat
This model is a fine-tuned version of [Qwen/Qwen1.5-0.5B](https://huggingface.co/Qwen/Qwen1.5-0.5B) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 18.4959
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 16
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 21.3692 | 0.2469 | 20 | 21.4652 |
| 21.0058 | 0.4938 | 40 | 21.0458 |
| 20.5316 | 0.7407 | 60 | 20.6554 |
| 20.1861 | 0.9877 | 80 | 20.2718 |
| 19.7708 | 1.2346 | 100 | 19.8891 |
| 19.3233 | 1.4815 | 120 | 19.5228 |
| 19.0428 | 1.7284 | 140 | 19.2184 |
| 18.7112 | 1.9753 | 160 | 18.9434 |
| 18.5131 | 2.2222 | 180 | 18.7407 |
| 18.3874 | 2.4691 | 200 | 18.6082 |
| 18.116 | 2.7160 | 220 | 18.5010 |
| 18.1187 | 2.9630 | 240 | 18.4959 |
### Framework versions
- PEFT 0.14.0
- Transformers 4.49.0
- Pytorch 2.6.0+cu126
- Datasets 3.3.2
- Tokenizers 0.21.0
|
winnieyangwannan/entity_dpo_Llama-3.1-8B-Instruct_lora_1_lr_0.0001_beta_0.05_1280_all_37_epoch_1_layer_16
|
winnieyangwannan
| 2025-08-11T21:01:04Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"dpo",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-11T20:59:30Z |
---
library_name: transformers
tags:
- trl
- dpo
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
fbaldassarri/EleutherAI_pythia-1.4b-deduped-autoawq-int4-gs64-sym
|
fbaldassarri
| 2025-08-11T21:00:16Z | 0 | 0 | null |
[
"safetensors",
"gpt_neox",
"pytorch",
"causal-lm",
"pythia",
"autoround",
"intel-autoround",
"auto-round",
"intel",
"woq",
"awq",
"auto-awq",
"autoawq",
"eleutheraI",
"text-generation",
"en",
"dataset:EleutherAI/pile",
"base_model:EleutherAI/pythia-1.4b-deduped",
"base_model:quantized:EleutherAI/pythia-1.4b-deduped",
"license:apache-2.0",
"4-bit",
"region:us"
] |
text-generation
| 2025-08-11T20:54:48Z |
---
language:
- en
tags:
- pytorch
- causal-lm
- pythia
- autoround
- intel-autoround
- auto-round
- intel
- woq
- awq
- auto-awq
- autoawq
- eleutheraI
license: apache-2.0
model_name: Pythia 1.4b deduped
base_model: EleutherAI/pythia-1.4b-deduped
inference: false
model_creator: EleutherAI
datasets:
- EleutherAI/pile
pipeline_tag: text-generation
prompt_template: '{prompt}
'
quantized_by: fbaldassarri
---
## Model Information
Quantized version of [EleutherAI/pythia-1.4b-deduped](https://huggingface.co/EleutherAI/pythia-1.4b-deduped) using torch.float32 for quantization tuning.
- 4 bits (INT4)
- group size = 64
- Symmetrical Quantization
- Method WoQ: AWQ (AutoAWQ algorithm)
Quantization framework: [Intel AutoRound](https://github.com/intel/auto-round) v0.5.1
Note: this INT4 version of pythia-1.4b-deduped has been quantized to run inference on CPU.
## Replication Recipe
### Step 1 Install Requirements
I suggest installing the requirements into a dedicated Python virtualenv or conda environment.
```bash
wget https://github.com/intel/auto-round/archive/refs/tags/v0.5.1.tar.gz
tar -xvzf v0.5.1.tar.gz
cd auto-round-0.5.1
pip install -r requirements-cpu.txt --upgrade
```
### Step 2 Build Intel AutoRound wheel from sources
```bash
pip install -vvv --no-build-isolation -e .[cpu]
```
### Step 3 Script for Quantization
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "EleutherAI/pythia-1.4b-deduped"
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
from auto_round import AutoRound
bits, group_size, sym, device, amp = 4, 64, True, 'cpu', False
autoround = AutoRound(model, tokenizer, nsamples=128, iters=200, seqlen=512, batch_size=4, bits=bits, group_size=group_size, sym=sym, device=device, amp=amp)
autoround.quantize()
output_dir = "./AutoRound/EleutherAI_pythia-1.4b-deduped-autoawq-int4-gs64-sym"
autoround.save_quantized(output_dir, format='auto_awq', inplace=True)
```
## License
[Apache 2.0 License](https://choosealicense.com/licenses/apache-2.0/)
## Disclaimer
This quantized model comes with no warranty. It has been developed only for research purposes.
|
daslab-testing/Llama-3.2-1B-Instruct-FPQuant-QAT-MXFP4-1000steps
|
daslab-testing
| 2025-08-11T20:53:57Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"fp_quant",
"region:us"
] |
text-generation
| 2025-08-11T20:52:33Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ggozzy/blockassist-bc-stubby_yapping_mandrill_1754945500
|
ggozzy
| 2025-08-11T20:52:56Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stubby yapping mandrill",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T20:52:43Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stubby yapping mandrill
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
htNghiaaa/VLSP-qwen3-4b-multichoice-prompt2-1-lora
|
htNghiaaa
| 2025-08-11T20:51:14Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen3",
"trl",
"en",
"base_model:VLSP2025-LegalSML/qwen3-4b-legal-pretrain",
"base_model:finetune:VLSP2025-LegalSML/qwen3-4b-legal-pretrain",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-10T15:21:55Z |
---
base_model: VLSP2025-LegalSML/qwen3-4b-legal-pretrain
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** htNghiaaa
- **License:** apache-2.0
- **Finetuned from model:** VLSP2025-LegalSML/qwen3-4b-legal-pretrain
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Ravi427/llama3-fiqa-qlora
|
Ravi427
| 2025-08-11T20:47:52Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"text-generation",
"base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct",
"lora",
"transformers",
"conversational",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"license:llama3",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2025-08-10T14:20:21Z |
---
library_name: peft
license: llama3
base_model: meta-llama/Meta-Llama-3-8B-Instruct
tags:
- base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct
- lora
- transformers
pipeline_tag: text-generation
model-index:
- name: llama3-fiqa-qlora
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama3-fiqa-qlora
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3360
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: paged AdamW (8-bit) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.2426 | 1.0 | 907 | 2.2561 |
| 2.0171 | 2.0 | 1814 | 2.2493 |
| 1.6963 | 3.0 | 2721 | 2.3360 |
### Framework versions
- PEFT 0.17.0
- Transformers 4.55.0
- Pytorch 2.7.1+cu118
- Datasets 4.0.0
- Tokenizers 0.21.4
|
winnieyangwannan/entity_dpo_Llama-3.1-8B-Instruct_lora_8_lr_0.0001_beta_0.05_5120_all_37_epoch_1_layer_22
|
winnieyangwannan
| 2025-08-11T20:47:13Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"dpo",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-11T20:28:58Z |
---
library_name: transformers
tags:
- trl
- dpo
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ahnafch01/cowfmd
|
ahnafch01
| 2025-08-11T20:44:56Z | 0 | 0 |
keras
|
[
"keras",
"license:apache-2.0",
"region:us"
] | null | 2025-08-11T20:40:20Z |
---
license: apache-2.0
---
Foot-and-mouth disease (FMD) is a severe, fast-spreading viral disease that primarily affects cloven-hoofed animals, including cows, pigs, sheep, goats, and deer. FMD is one of the most challenging animal diseases to control.
You can upload a picture of a cow's foot, mouth, udder, or hoof to check whether it shows signs of FMD at the following website:
https://cowfmd.vercel.app/
|
motza0025/blockassist-bc-scavenging_placid_goat_1754943919
|
motza0025
| 2025-08-11T20:43:43Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"scavenging placid goat",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T20:43:25Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- scavenging placid goat
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
kevinshin/qwen3-1.7b-dpo-beta-0.01-lr-5e-7-epoch-1-batch-16
|
kevinshin
| 2025-08-11T20:40:28Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"qwen3",
"text-generation",
"generated_from_trainer",
"trl",
"dpo",
"conversational",
"dataset:kevinshin/wildchat-5k-writing-1k-pref",
"arxiv:2305.18290",
"base_model:Qwen/Qwen3-1.7B",
"base_model:finetune:Qwen/Qwen3-1.7B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-10T10:15:15Z |
---
base_model: Qwen/Qwen3-1.7B
datasets: kevinshin/wildchat-5k-writing-1k-pref
library_name: transformers
model_name: qwen3-1.7b-dpo-beta-0.01-lr-5e-7-epoch-1-batch-16
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---
# Model Card for qwen3-1.7b-dpo-beta-0.01-lr-5e-7-epoch-1-batch-16
This model is a fine-tuned version of [Qwen/Qwen3-1.7B](https://huggingface.co/Qwen/Qwen3-1.7B) on the [kevinshin/wildchat-5k-writing-1k-pref](https://huggingface.co/datasets/kevinshin/wildchat-5k-writing-1k-pref) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="kevinshin/qwen3-1.7b-dpo-beta-0.01-lr-5e-7-epoch-1-batch-16", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/myungjune-sogang-university/general_remo_train/runs/3z5dwbek)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
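As a rough illustration, a TRL DPO run matching this card's metadata (beta 0.01, learning rate 5e-7, one epoch, the preference dataset above) might look like the sketch below; it is an assumption-laden sketch, not the actual training script.
```python
# Minimal TRL DPO sketch; beta, learning rate, and epochs are read off the
# model name and card metadata, everything else is an assumption.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3-1.7B")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-1.7B")
dataset = load_dataset("kevinshin/wildchat-5k-writing-1k-pref", split="train")

trainer = DPOTrainer(
    model=model,
    args=DPOConfig(
        output_dir="qwen3-1.7b-dpo-beta-0.01-lr-5e-7-epoch-1-batch-16",
        beta=0.01,
        learning_rate=5e-7,
        num_train_epochs=1,
    ),
    processing_class=tokenizer,
    train_dataset=dataset,
)
trainer.train()
```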
### Framework versions
- TRL: 0.19.1
- Transformers: 4.54.0
- Pytorch: 2.6.0+cu126
- Datasets: 4.0.0
- Tokenizers: 0.21.2
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
fbaldassarri/EleutherAI_pythia-1.4b-deduped-autogptq-int4-gs64-asym
|
fbaldassarri
| 2025-08-11T20:39:47Z | 0 | 0 | null |
[
"safetensors",
"gpt_neox",
"pytorch",
"causal-lm",
"pythia",
"autoround",
"intel-autoround",
"auto-round",
"intel",
"woq",
"gptq",
"auto-gptq",
"autogptq",
"eleutheraI",
"text-generation",
"en",
"dataset:EleutherAI/pile",
"base_model:EleutherAI/pythia-1.4b-deduped",
"base_model:quantized:EleutherAI/pythia-1.4b-deduped",
"license:apache-2.0",
"4-bit",
"region:us"
] |
text-generation
| 2025-08-11T20:34:16Z |
---
language:
- en
tags:
- pytorch
- causal-lm
- pythia
- autoround
- intel-autoround
- auto-round
- intel
- woq
- gptq
- auto-gptq
- autogptq
- eleutheraI
license: apache-2.0
model_name: Pythia 1.4b deduped
base_model: EleutherAI/pythia-1.4b-deduped
inference: false
model_creator: EleutherAI
datasets:
- EleutherAI/pile
pipeline_tag: text-generation
prompt_template: '{prompt}
'
quantized_by: fbaldassarri
---
## Model Information
Quantized version of [EleutherAI/pythia-1.4b-deduped](https://huggingface.co/EleutherAI/pythia-1.4b-deduped) using torch.float32 for quantization tuning.
- 4 bits (INT4)
- group size = 64
- Asymmetrical Quantization
- Method WoQ: GPTQ (AutoGPTQ algorithm)
Quantization framework: [Intel AutoRound](https://github.com/intel/auto-round) v0.5.1
Note: this INT4 version of pythia-1.4b-deduped has been quantized to run inference on CPU.
## Replication Recipe
### Step 1 Install Requirements
I suggest installing the requirements into a dedicated Python virtualenv or conda environment.
```bash
wget https://github.com/intel/auto-round/archive/refs/tags/v0.5.1.tar.gz
tar -xvzf v0.5.1.tar.gz
cd auto-round-0.5.1
pip install -r requirements-cpu.txt --upgrade
```
### Step 2 Build Intel AutoRound wheel from sources
```bash
pip install -vvv --no-build-isolation -e .[cpu]
```
### Step 3 Script for Quantization
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "EleutherAI/pythia-1.4b-deduped"
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
from auto_round import AutoRound
bits, group_size, sym, device, amp = 4, 64, False, 'cpu', False
autoround = AutoRound(model, tokenizer, nsamples=128, iters=200, seqlen=512, batch_size=4, bits=bits, group_size=group_size, sym=sym, device=device, amp=amp)
autoround.quantize()
output_dir = "./AutoRound/EleutherAI_pythia-1.4b-deduped-autogptq-int4-gs64-asym"
autoround.save_quantized(output_dir, format='auto_gptq', inplace=True)
```
## License
[Apache 2.0 License](https://choosealicense.com/licenses/apache-2.0/)
## Disclaimer
This quantized model comes with no warranty. It has been developed only for research purposes.
|
Vattri81/my_finetuned_model_qlorav3
|
Vattri81
| 2025-08-11T20:34:08Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-11T20:33:57Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Gemvision13/blockassist-bc-finicky_jagged_panda_1754944354
|
Gemvision13
| 2025-08-11T20:34:03Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"finicky jagged panda",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T20:33:59Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- finicky jagged panda
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
daslab-testing/Llama-3.2-1B-Instruct-FPQuant-QAT-MXFP4-600steps
|
daslab-testing
| 2025-08-11T20:31:41Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"fp_quant",
"region:us"
] |
text-generation
| 2025-08-11T20:30:06Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Samuell43/blockassist-bc-fast_gregarious_warthog_1754944232
|
Samuell43
| 2025-08-11T20:31:30Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"fast gregarious warthog",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T20:31:26Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- fast gregarious warthog
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
chunli-peng/OpenRS-GRPO-sft-8.5
|
chunli-peng
| 2025-08-11T20:29:31Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"open-r1",
"trl",
"sft",
"conversational",
"dataset:knoveleng/open-rs",
"base_model:deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B",
"base_model:finetune:deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-11T20:10:49Z |
---
base_model: deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B
datasets: knoveleng/open-rs
library_name: transformers
model_name: OpenRS-GRPO-sft-8.5
tags:
- generated_from_trainer
- open-r1
- trl
- sft
licence: license
---
# Model Card for OpenRS-GRPO-sft-8.5
This model is a fine-tuned version of [deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B) on the [knoveleng/open-rs](https://huggingface.co/datasets/knoveleng/open-rs) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="chunli-peng/OpenRS-GRPO-sft-8.5", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/chunli-ai-texas-a-m-university/huggingface/runs/lii4yxwc)
This model was trained with SFT.
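Since the card names both the base model and the dataset, a rough TRL sketch of the run is below; the training arguments are assumptions, not the documented recipe.
```python
# Minimal TRL SFT sketch; base model and dataset come from this card,
# the training arguments are assumptions.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

dataset = load_dataset("knoveleng/open-rs", split="train")

trainer = SFTTrainer(
    model="deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B",
    train_dataset=dataset,
    args=SFTConfig(output_dir="OpenRS-GRPO-sft-8.5"),
)
trainer.train()
```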
### Framework versions
- TRL: 0.16.0.dev0
- Transformers: 4.49.0
- Pytorch: 2.5.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
ggozzy/blockassist-bc-stubby_yapping_mandrill_1754943848
|
ggozzy
| 2025-08-11T20:25:30Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stubby yapping mandrill",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T20:25:18Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stubby yapping mandrill
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
kiwihug/ppo-LunarLander-v2
|
kiwihug
| 2025-08-11T20:21:55Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-08-11T20:21:38Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 260.43 +/- 18.31
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption based on this card's naming):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub; the filename is an assumption.
checkpoint = load_from_hub(
    repo_id="kiwihug/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)
```
|
afqqfqfq/blockassist-bc-strong_feline_lobster_1754941155
|
afqqfqfq
| 2025-08-11T20:18:37Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"strong feline lobster",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T20:18:20Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- strong feline lobster
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
nightmedia/huizimao-gpt-oss-120b-uncensored-mxfp4-q6-hi-mlx
|
nightmedia
| 2025-08-11T20:18:10Z | 0 | 0 |
mlx
|
[
"mlx",
"safetensors",
"gpt_oss",
"text-generation",
"base_model:huizimao/gpt-oss-120b-uncensored-mxfp4",
"base_model:quantized:huizimao/gpt-oss-120b-uncensored-mxfp4",
"license:apache-2.0",
"6-bit",
"region:us"
] |
text-generation
| 2025-08-11T14:01:08Z |
---
license: apache-2.0
base_model: huizimao/gpt-oss-120b-uncensored-mxfp4
tags:
- mlx
pipeline_tag: text-generation
library_name: mlx
---
# huizimao-gpt-oss-120b-uncensored-mxfp4-q6-hi-mlx
This model [nightmedia/huizimao-gpt-oss-120b-uncensored-mxfp4-q6-hi-mlx](https://huggingface.co/nightmedia/huizimao-gpt-oss-120b-uncensored-mxfp4-q6-hi-mlx) was
converted to MLX format from [huizimao/gpt-oss-120b-uncensored-mxfp4](https://huggingface.co/huizimao/gpt-oss-120b-uncensored-mxfp4)
using mlx-lm version **0.26.3**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("nightmedia/huizimao-gpt-oss-120b-uncensored-mxfp4-q6-hi-mlx")
prompt = "hello"
if tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
sequelbox/gpt-oss-20b-DAG-Reasoning
|
sequelbox
| 2025-08-11T20:16:30Z | 0 | 1 |
transformers
|
[
"transformers",
"safetensors",
"gpt_oss",
"text-generation",
"dag-reasoning",
"gpt",
"gpt-oss",
"gpt-oss-20b",
"openai",
"20b",
"reasoning",
"directed-acyclic-graph",
"graph",
"logic",
"analysis",
"programming",
"knowledge",
"root-cause-analysis",
"economics",
"business",
"business-management",
"finance",
"law",
"supply-chain",
"logistics",
"software-engineering",
"cybersecurity",
"architecture",
"energy",
"politics",
"problem-solving",
"creative",
"analytical",
"expert",
"rationality",
"conversational",
"chat",
"instruct",
"en",
"dataset:sequelbox/DAG-Reasoning-DeepSeek-R1-0528",
"base_model:openai/gpt-oss-20b",
"base_model:finetune:openai/gpt-oss-20b",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-11T20:09:28Z |
---
language:
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- dag-reasoning
- gpt
- gpt-oss
- gpt-oss-20b
- openai
- 20b
- reasoning
- directed-acyclic-graph
- graph
- logic
- analysis
- programming
- knowledge
- root-cause-analysis
- economics
- business
- business-management
- finance
- law
- supply-chain
- logistics
- software-engineering
- cybersecurity
- architecture
- energy
- politics
- problem-solving
- creative
- analytical
- expert
- rationality
- conversational
- chat
- instruct
base_model: openai/gpt-oss-20b
datasets:
- sequelbox/DAG-Reasoning-DeepSeek-R1-0528
license: apache-2.0
---
**[Support our open-source dataset and model releases!](https://huggingface.co/spaces/sequelbox/SupportOpenSource)**
DAG Reasoning: [Qwen3-4B-Thinking-2507](https://huggingface.co/sequelbox/Qwen3-4B-Thinking-2507-DAG-Reasoning), [Qwen3-8B](https://huggingface.co/sequelbox/Qwen3-8B-DAG-Reasoning), [Qwen3-14B](https://huggingface.co/sequelbox/Qwen3-14B-DAG-Reasoning), [gpt-oss-20b](https://huggingface.co/sequelbox/gpt-oss-20b-DAG-Reasoning)
DAG Reasoning is an **experimental specialist reasoning AI with custom output format**; for general reasoning and chat, try [Shining Valiant 3](https://huggingface.co/ValiantLabs/Qwen3-8B-ShiningValiant3) or [Esper 3!](https://huggingface.co/ValiantLabs/Qwen3-8B-Esper3)
DAG Reasoning is a specialist reasoning assistant, performing causal analysis and reasoning to produce Directed Acyclic Graphs in response to user input.
- Finetuned on our [DAG dataset](https://huggingface.co/datasets/sequelbox/DAG-Reasoning-DeepSeek-R1-0528), with data generated by [DeepSeek R1 0528](https://huggingface.co/deepseek-ai/DeepSeek-R1-0528)!
- Multi-step analysis identifies causal relationships, produces confidence measurements, and forms a single structured graph object.
- DAG Reasoning Format provides clear, readable JSON containing structured, useful information; easy to use for creating visualizations, doing analysis, or further conversation with your assistant.
- Trained in a variety of subjects for flexible analysis: programming, science, business, economics, finance, law, logistics, management, and more!
- Small model sizes allow running on local desktop and mobile, plus super-fast server inference!
## Prompting Guide
DAG Reasoning uses the [gpt-oss-20b](https://huggingface.co/openai/gpt-oss-20b) prompt format to create outputs in [DAG Reasoning Format.](https://huggingface.co/datasets/sequelbox/DAG-Reasoning-DeepSeek-R1-0528)
DAG Reasoning is an **experimental reasoning finetune:**
- the assistant performs multi-step reasoning during the thinking phase, before producing the JSON graph object at the start of the output to the user.
- request the graph or analysis explicitly in your user prompt to elicit the [DAG Reasoning Format](https://huggingface.co/datasets/sequelbox/DAG-Reasoning-DeepSeek-R1-0528); see the example script below. (If the model is unsure of your request, it will generally default to standard gpt-oss-20b output/chat style instead of creating a DAG.)
- this is an early experimental release: if used in a productive context, structural validation of outputs is strongly recommended.
- we recommend reasoning level high for all chats.
Example inference script to get started:
```python
from transformers import pipeline
import torch
model_id = "sequelbox/gpt-oss-20b-DAG-Reasoning"
pipe = pipeline(
"text-generation",
model=model_id,
torch_dtype="auto",
device_map="auto",
)
prompt = "Analyze the following scenario from a report on a new industrial park: The park was built on reclaimed swampland. The initial site survey indicated the ground was stable after being drained and filled. However, over the first five years of operation, slow, uneven ground subsidence has caused cracking in the foundations of several large warehouses. The cost of stabilizing these foundations is now projected to be higher than the initial cost of the land itself, and the risk of further subsidence has made the remaining lots in the park unsellable."
#prompt = "Make a graph of this analysis: In the American West, warmer winters are causing more precipitation to fall as rain instead of snow, even when total precipitation remains unchanged. This has two major consequences for water management. First, runoff occurs immediately in the winter rather than being stored as snowpack until the spring and summer melt. This increases winter flood risk and reduces water availability during the summer growing season. Second, the smaller snowpack reflects less solar radiation, leading to warmer ground temperatures and increased evaporation, further reducing water supply."
#prompt = "A supply chain security analysis finds: following the disclosure of a critical vulnerability in the widely used Log4j library, we consulted our Software Bill of Materials (SBOM) for a key application, which indicated the application was not affected. However, the application was later compromised via this exact vulnerability. The investigation revealed the SBOM was generated incorrectly and failed to identify Log4j as a transitive dependency, a library pulled in by another library. This inaccurate SBOM led to a false negative in our risk assessment."
#prompt = "Analyze this and make a graph: A company incurred a $200,000 bill from its cloud provider in one weekend, an attack known as cryptojacking. An attacker discovered an exposed API key in the client-side code of the company's public-facing web application. This key belonged to a role that, due to a misconfiguration, had permissions to create new virtual machine instances. The attacker wrote a script to programmatically spin up thousands of the most powerful, GPU-equipped virtual machines in several different geographic regions to mine cryptocurrency, leading to the massive, unexpected charges."
messages = [
{"role": "user", "content": prompt},
]
outputs = pipe(
messages,
max_new_tokens=12000,
)
print(outputs[0]["generated_text"][-1])
```
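Since structural validation is recommended before downstream use, a minimal acyclicity check on the generated text could look like the sketch below. The `nodes`/`edges`/`id`/`from`/`to` key names are assumptions about the output schema, not the documented format; adapt them to the actual DAG Reasoning Format fields.
```python
import json
import re
from collections import defaultdict, deque

def is_valid_dag(generated_text):
    """Extract the first JSON object from model output and verify it is acyclic."""
    match = re.search(r"\{.*\}", generated_text, re.DOTALL)
    if match is None:
        return False
    try:
        graph = json.loads(match.group(0))
    except json.JSONDecodeError:
        return False
    # Key names below ("nodes", "edges", "id", "from", "to") are assumed, not documented.
    indegree, adjacency = {}, defaultdict(list)
    for node in graph.get("nodes", []):
        indegree[node["id"]] = 0
    for edge in graph.get("edges", []):
        indegree.setdefault(edge["from"], 0)
        indegree.setdefault(edge["to"], 0)
        adjacency[edge["from"]].append(edge["to"])
        indegree[edge["to"]] += 1
    # Kahn's algorithm: every node drains to indegree zero iff the graph has no cycle.
    queue = deque(n for n, d in indegree.items() if d == 0)
    drained = 0
    while queue:
        node = queue.popleft()
        drained += 1
        for nxt in adjacency[node]:
            indegree[nxt] -= 1
            if indegree[nxt] == 0:
                queue.append(nxt)
    return drained == len(indegree)
```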
DAG Reasoning is one of our experimental reasoning releases; we've got more to come soon!
Do as you will.
|
ggozzy/blockassist-bc-stubby_yapping_mandrill_1754943297
|
ggozzy
| 2025-08-11T20:16:21Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stubby yapping mandrill",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T20:16:00Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stubby yapping mandrill
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
sequelbox/Qwen3-14B-DAG-Reasoning
|
sequelbox
| 2025-08-11T20:16:15Z | 53 | 5 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"dag-reasoning",
"valiant",
"valiant-labs",
"qwen",
"qwen-3",
"qwen-3-14b",
"14b",
"reasoning",
"directed-acyclic-graph",
"graph",
"logic",
"analysis",
"programming",
"knowledge",
"root-cause-analysis",
"economics",
"business",
"business-management",
"finance",
"law",
"supply-chain",
"logistics",
"software-engineering",
"cybersecurity",
"architecture",
"energy",
"politics",
"problem-solving",
"creative",
"analytical",
"expert",
"rationality",
"conversational",
"chat",
"instruct",
"en",
"dataset:sequelbox/DAG-Reasoning-DeepSeek-R1-0528",
"base_model:Qwen/Qwen3-14B",
"base_model:finetune:Qwen/Qwen3-14B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-07-29T03:19:42Z |
---
language:
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- dag-reasoning
- valiant
- valiant-labs
- qwen
- qwen-3
- qwen-3-14b
- 14b
- reasoning
- directed-acyclic-graph
- graph
- logic
- analysis
- programming
- knowledge
- root-cause-analysis
- economics
- business
- business-management
- finance
- law
- supply-chain
- logistics
- software-engineering
- cybersecurity
- architecture
- energy
- politics
- problem-solving
- creative
- analytical
- expert
- rationality
- conversational
- chat
- instruct
base_model: Qwen/Qwen3-14B
datasets:
- sequelbox/DAG-Reasoning-DeepSeek-R1-0528
license: apache-2.0
---
**[Support our open-source dataset and model releases!](https://huggingface.co/spaces/sequelbox/SupportOpenSource)**
DAG Reasoning: [Qwen3-4B-Thinking-2507](https://huggingface.co/sequelbox/Qwen3-4B-Thinking-2507-DAG-Reasoning), [Qwen3-8B](https://huggingface.co/sequelbox/Qwen3-8B-DAG-Reasoning), [Qwen3-14B](https://huggingface.co/sequelbox/Qwen3-14B-DAG-Reasoning), [gpt-oss-20b](https://huggingface.co/sequelbox/gpt-oss-20b-DAG-Reasoning)
DAG Reasoning is an **experimental specialist reasoning AI with custom output format**; for general reasoning and chat, try [Shining Valiant 3](https://huggingface.co/ValiantLabs/Qwen3-8B-ShiningValiant3) or [Esper 3!](https://huggingface.co/ValiantLabs/Qwen3-8B-Esper3)
DAG Reasoning is a specialist reasoning assistant, performing causal analysis and reasoning to produce Directed Acyclic Graphs in response to user input.
- Finetuned on our [DAG dataset](https://huggingface.co/datasets/sequelbox/DAG-Reasoning-DeepSeek-R1-0528), with data generated by [DeepSeek R1 0528](https://huggingface.co/deepseek-ai/DeepSeek-R1-0528)!
- Multi-step analysis identifies causal relationships, produces confidence measurements, and forms a single structured graph object.
- DAG Reasoning Format provides clear, readable JSON containing structured, useful information; easy to use for creating visualizations, doing analysis, or further conversation with your assistant.
- Trained in a variety of subjects for flexible analysis: programming, science, business, economics, finance, law, logistics, management, and more!
- Small model sizes allow running on local desktop and mobile, plus super-fast server inference!
## Prompting Guide
DAG Reasoning uses the [Qwen 3](https://huggingface.co/Qwen/Qwen3-14B) prompt format to create outputs in [DAG Reasoning Format.](https://huggingface.co/datasets/sequelbox/DAG-Reasoning-DeepSeek-R1-0528)
DAG Reasoning is an **experimental reasoning finetune:**
- the assistant performs multi-step reasoning during the thinking phase, before producing the JSON graph object at the start of the output to the user.
- request the graph or analysis explicitly in your user prompt to elicit the [DAG Reasoning Format](https://huggingface.co/datasets/sequelbox/DAG-Reasoning-DeepSeek-R1-0528); see the example script below. (If the model is unsure of your request, it will generally default to standard Qwen 3 output/chat style instead of creating a DAG.)
- this is an early experimental release: if used in a productive context, structural validation of outputs is strongly recommended.
- we recommend enable_thinking=True for all chats.
Example inference script to get started:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "sequelbox/Qwen3-14B-DAG-Reasoning"
# load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
# prepare the model input, generally recommended to follow the prompting style provided in these examples:
prompt = "Analyze the following scenario from a report on a new industrial park: The park was built on reclaimed swampland. The initial site survey indicated the ground was stable after being drained and filled. However, over the first five years of operation, slow, uneven ground subsidence has caused cracking in the foundations of several large warehouses. The cost of stabilizing these foundations is now projected to be higher than the initial cost of the land itself, and the risk of further subsidence has made the remaining lots in the park unsellable."
#prompt = "Make a graph of this analysis: In the American West, warmer winters are causing more precipitation to fall as rain instead of snow, even when total precipitation remains unchanged. This has two major consequences for water management. First, runoff occurs immediately in the winter rather than being stored as snowpack until the spring and summer melt. This increases winter flood risk and reduces water availability during the summer growing season. Second, the smaller snowpack reflects less solar radiation, leading to warmer ground temperatures and increased evaporation, further reducing water supply."
#prompt = "A supply chain security analysis finds: following the disclosure of a critical vulnerability in the widely used Log4j library, we consulted our Software Bill of Materials (SBOM) for a key application, which indicated the application was not affected. However, the application was later compromised via this exact vulnerability. The investigation revealed the SBOM was generated incorrectly and failed to identify Log4j as a transitive dependency, a library pulled in by another library. This inaccurate SBOM led to a false negative in our risk assessment."
#prompt = "Analyze this and make a graph: A company incurred a $200,000 bill from its cloud provider in one weekend, an attack known as cryptojacking. An attacker discovered an exposed API key in the client-side code of the company's public-facing web application. This key belonged to a role that, due to a misconfiguration, had permissions to create new virtual machine instances. The attacker wrote a script to programmatically spin up thousands of the most powerful, GPU-equipped virtual machines in several different geographic regions to mine cryptocurrency, leading to the massive, unexpected charges."
messages = [
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
enable_thinking=True # Switches between thinking and non-thinking modes. Default is True.
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
# conduct text completion
generated_ids = model.generate(
**model_inputs,
max_new_tokens=32768
)
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()
# parsing thinking content
try:
# rindex finding 151668 (</think>)
index = len(output_ids) - output_ids[::-1].index(151668)
except ValueError:
index = 0
thinking_content = tokenizer.decode(output_ids[:index], skip_special_tokens=True).strip("\n")
content = tokenizer.decode(output_ids[index:], skip_special_tokens=True).strip("\n")
print("thinking content:", thinking_content)
print("content:", content)
```
DAG Reasoning is one of our experimental reasoning releases; we've got more to come soon!
Do as you will.
|
Gemvision13/blockassist-bc-finicky_jagged_panda_1754943261
|
Gemvision13
| 2025-08-11T20:16:03Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"finicky jagged panda",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T20:15:59Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- finicky jagged panda
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mlx-community/GLM-4.5V-3bit
|
mlx-community
| 2025-08-11T20:14:41Z | 0 | 0 |
mlx
|
[
"mlx",
"safetensors",
"glm4v_moe",
"license:mit",
"3-bit",
"region:us"
] | null | 2025-08-11T20:02:40Z |
---
license: mit
tags:
- mlx
---
# mlx-community/GLM-4.5V-3bit
This model was converted to MLX format from [`ZP2Test/GLM-4.5V`](https://huggingface.co/ZP2Test/GLM-4.5V) using mlx-vlm version **0.3.2**.
Refer to the [original model card](https://huggingface.co/ZP2Test/GLM-4.5V) for more details on the model.
## Use with mlx
```bash
pip install -U mlx-vlm
```
```bash
python -m mlx_vlm.generate --model mlx-community/GLM-4.5V-3bit --max-tokens 100 --temperature 0.0 --prompt "Describe this image." --image <path_to_image>
```
|
Mathlesage/euroBertV11-infonce-only-2824-qwen-step-0
|
Mathlesage
| 2025-08-11T20:12:56Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"feature-extraction",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2025-08-11T20:11:35Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Vortex5/Moondark-12B
|
Vortex5
| 2025-08-11T20:12:30Z | 11 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"roleplay",
"conversational",
"arxiv:2403.19522",
"base_model:Delta-Vector/Ohashi-NeMo-12B",
"base_model:merge:Delta-Vector/Ohashi-NeMo-12B",
"base_model:HumanLLMs/Human-Like-Mistral-Nemo-Instruct-2407",
"base_model:merge:HumanLLMs/Human-Like-Mistral-Nemo-Instruct-2407",
"base_model:flammenai/Mahou-1.5-mistral-nemo-12B",
"base_model:merge:flammenai/Mahou-1.5-mistral-nemo-12B",
"base_model:natong19/Mistral-Nemo-Instruct-2407-abliterated",
"base_model:merge:natong19/Mistral-Nemo-Instruct-2407-abliterated",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-06T00:49:46Z |
---
base_model:
- flammenai/Mahou-1.5-mistral-nemo-12B
- Delta-Vector/Ohashi-NeMo-12B
- HumanLLMs/Human-Like-Mistral-Nemo-Instruct-2407
- natong19/Mistral-Nemo-Instruct-2407-abliterated
library_name: transformers
tags:
- mergekit
- merge
- roleplay
---
# Moondark-12B

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using [natong19/Mistral-Nemo-Instruct-2407-abliterated](https://huggingface.co/natong19/Mistral-Nemo-Instruct-2407-abliterated) as a base.
### Models Merged
The following models were included in the merge:
* [flammenai/Mahou-1.5-mistral-nemo-12B](https://huggingface.co/flammenai/Mahou-1.5-mistral-nemo-12B)
* [Delta-Vector/Ohashi-NeMo-12B](https://huggingface.co/Delta-Vector/Ohashi-NeMo-12B)
* [HumanLLMs/Human-Like-Mistral-Nemo-Instruct-2407](https://huggingface.co/HumanLLMs/Human-Like-Mistral-Nemo-Instruct-2407)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
base_model: natong19/Mistral-Nemo-Instruct-2407-abliterated
models:
- model: flammenai/Mahou-1.5-mistral-nemo-12B
- model: Delta-Vector/Ohashi-NeMo-12B
- model: HumanLLMs/Human-Like-Mistral-Nemo-Instruct-2407
merge_method: model_stock
dtype: bfloat16
parameters:
normalize: true
```
|
ggozzy/blockassist-bc-stubby_yapping_mandrill_1754943023
|
ggozzy
| 2025-08-11T20:11:45Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stubby yapping mandrill",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T20:11:29Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stubby yapping mandrill
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Jackmin108/Moonlight-16B-A3B-Instruct-Fast
|
Jackmin108
| 2025-08-11T20:08:59Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"deepseek_v3",
"text-generation",
"conversational",
"custom_code",
"arxiv:2502.16982",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-11T18:48:22Z |
---
license: mit
library_name: transformers
---
<div align="center">
<a href="https://github.com/MoonshotAI/Moonlight"><img width="80%" src="figures/banner.png"></a>
</div>
<!-- # Muon is Scalable For LLM Training -->
<div align="center">
<a href="https://github.com/MoonshotAI/Moonlight/blob/master/Moonlight.pdf" ><img src="figures/logo.png" height="16" width="16" style="display: inline-block; vertical-align: middle; margin: 2px;"><b style="display: inline-block;"> Tech Report</b></a> |
<a href="https://huggingface.co/moonshotai/Moonlight-16B-A3B"><img src="https://huggingface.co/front/assets/huggingface_logo-noborder.svg" height="16" width="16" style="display: inline-block; vertical-align: middle; margin: 2px;"><b style="display: inline-block;"> HuggingFace</b></a> |
<a href="#"><img src="figures/megatron.png" height="16" width="16" style="display: inline-block; vertical-align: middle; margin: 2px;"><b style="display: inline-block;">Megatron(coming soon)</b></a>
</div>
## Abstract
Recently, the [Muon optimizer](https://github.com/KellerJordan/Muon) has demonstrated strong results in training small-scale language models, but its scalability to larger models has not been demonstrated. We identify two crucial techniques for scaling up Muon:
- **Weight Decay**: Critical for scaling to larger models
- **Consistent RMS Updates**: Enforcing a consistent root mean square on model updates
These techniques allow Muon to work out-of-the-box on large-scale training without the need for hyper-parameter tuning. Scaling law experiments indicate that Muon is $\sim2\times$ more sample efficient than Adam with compute optimal training.
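For concreteness, here is a minimal sketch of these two ingredients applied to a single 2-D weight matrix. The Newton–Schulz coefficients and the $0.2\sqrt{\max(m, n)}$ RMS-matching factor follow the open-source Muon implementation; this is an illustration, not the released optimizer.
```python
import math
import torch

@torch.no_grad()
def newton_schulz(G, steps=5, eps=1e-7):
    # Approximately orthogonalize G with a quintic Newton-Schulz iteration
    # (coefficients taken from the open-source Muon implementation).
    a, b, c = 3.4445, -4.7750, 2.0315
    X = G.to(torch.bfloat16)
    X = X / (X.norm() + eps)
    transposed = X.shape[0] > X.shape[1]
    if transposed:
        X = X.T
    for _ in range(steps):
        A = X @ X.T
        B = b * A + c * A @ A
        X = a * X + B @ X
    if transposed:
        X = X.T
    return X.to(G.dtype)

@torch.no_grad()
def muon_step(param, grad, buf, lr=2e-2, momentum=0.95, weight_decay=0.1):
    buf.mul_(momentum).add_(grad)       # momentum accumulation
    update = newton_schulz(buf)         # orthogonalized update direction
    # Scale so the update RMS matches AdamW's typical ~0.2, keeping hyper-parameters transferable.
    update = update * (0.2 * math.sqrt(max(param.shape[0], param.shape[1])))
    param.mul_(1 - lr * weight_decay)   # decoupled weight decay: critical at scale
    param.add_(update, alpha=-lr)
```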
Based on these improvements, we introduce **Moonlight**, a 3B/16B-parameter Mixture-of-Experts (MoE) model trained with 5.7T tokens using Muon. Our model improves the current Pareto frontier, achieving better performance with far fewer training FLOPs compared to prior models.
We open-source our Muon implementation that is memory optimal and communication efficient. We also release the pretrained, instruction-tuned, and intermediate checkpoints to support future research.
Our code is available at [MoonshotAI/Moonlight](https://github.com/MoonshotAI/Moonlight).
## Key Ingredients
Our work builds upon Muon while systematically identifying and resolving its limitations in large-scale training scenarios. Our technical contributions include:
- **Analysis for Effective Scaling of Muon**: Through extensive analysis, we identify that weight decay plays a crucial role in Muon's scalability. In addition, we propose keeping a consistent update root mean square (RMS) across matrix and non-matrix parameters through parameter-wise update scale adjustments. These adjustments significantly enhance training stability.
- **Efficient Distributed Implementation**: We develop a distributed version of Muon with ZeRO-1 style optimization, achieving optimal memory efficiency and reduced communication overhead while preserving the mathematical properties of the algorithm.
- **Scaling Law Validation**: We performed scaling-law experiments comparing Muon with strong AdamW baselines, showing the superior performance of Muon (see Figure 1). Based on the scaling-law results, Muon achieves performance comparable to AdamW-trained counterparts while requiring only approximately 52% of the training FLOPs.
<div align="center">
<img width="90%" src="figures/scaling.png">
<p><em>Scaling up with Muon. <b>(a)</b> Scaling law experiments comparing Muon and Adam. Muon is 2 times more sample efficient than Adam. <b>(b)</b> The MMLU performance of our Moonlight model optimized with Muon and other comparable models. Moonlight advances the Pareto frontier of performance vs training FLOPs.</em></p>
</div>
## Performance
We compared Moonlight with SOTA public models at similar scale:
- **Llama3.2-3B** is a 3B-parameter dense model trained with 9T tokens
- **Qwen2.5-3B** is a 3B-parameter dense model trained with 18T tokens
- **Deepseek-v2-Lite** is a 2.4B/16B-parameter MoE model trained with 5.7T tokens
<div align="center">
| | **Benchmark (Metric)** | **Llama3.2-3B** | **Qwen2.5-3B** | **DSV2-Lite** | **Moonlight** |
|---|---|---|---|---|---|
| | Activated Param† | 2.81B | 2.77B | 2.24B | 2.24B |
| | Total Params† | 2.81B | 2.77B | 15.29B | 15.29B |
| | Training Tokens | 9T | 18T | 5.7T | 5.7T |
| | Optimizer | AdamW | * | AdamW | Muon |
| **English** | MMLU | 54.75 | 65.6 | 58.3 | **70.0** |
| | MMLU-pro | 25.0 | 34.6 | 25.5 | **42.4** |
| | BBH | 46.8 | 56.3 | 44.1 | **65.2** |
| | TriviaQA‡ | 59.6 | 51.1 | 65.1 | **66.3** |
| **Code** | HumanEval | 28.0 | 42.1 | 29.9 | **48.1** |
| | MBPP | 48.7 | 57.1 | 43.2 | **63.8** |
| **Math** | GSM8K | 34.0 | **79.1** | 41.1 | 77.4 |
| | MATH | 8.5 | 42.6 | 17.1 | **45.3** |
| | CMath | - | 80.0 | 58.4 | **81.1** |
| **Chinese** | C-Eval | - | 75.0 | 60.3 | **77.2** |
| | CMMLU | - | 75.0 | 64.3 | **78.2** |
</div>
*Qwen 2 & 2.5 reports didn't disclose their optimizer information. †The reported parameter counts exclude the embedding parameters. ‡We test all listed models with the full set of TriviaQA.*
## Example usage
### Model Download
<div align="center">
| **Model** | **#Total Params** | **#Activated Params** | **Context Length** | **Download Link** |
| :------------: | :------------: | :------------: | :------------: | :------------: |
| Moonlight-16B-A3B | 16B | 3B | 8K | [🤗 Hugging Face](https://huggingface.co/moonshotai/Moonlight-16B-A3B) |
| Moonlight-16B-A3B-Instruct | 16B | 3B | 8K | [🤗 Hugging Face](https://huggingface.co/moonshotai/Moonlight-16B-A3B-Instruct) |
</div>
### Inference with Hugging Face Transformers
Here we show how to run inference with the Hugging Face transformers library. We recommend python=3.10, torch>=2.1.0, and transformers==4.48.2 as the development environment.
For our pretrained model (Moonlight-16B-A3B):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "moonshotai/Moonlight-16B-A3B"
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto",
trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
prompt = "1+1=2, 1+2="
inputs = tokenizer(prompt, return_tensors="pt", padding=True, truncation=True).to(model.device)
generated_ids = model.generate(**inputs, max_new_tokens=100)
response = tokenizer.batch_decode(generated_ids)[0]
print(response)
```
For our instruct model (Moonlight-16B-A3B-Instruct):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "moonshotai/Moonlight-16B-A3B-Instruct"
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto",
trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
messages = [
{"role": "system", "content": "You are a helpful assistant provided by Moonshot-AI."},
{"role": "user", "content": "Is 123 a prime?"}
]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
generated_ids = model.generate(inputs=input_ids, max_new_tokens=500)
response = tokenizer.batch_decode(generated_ids)[0]
print(response)
```
Moonlight has the same architecture as DeepSeek-V3, which is supported by many popular inference engines, such as vLLM and SGLang. As a result, our model can also be easily deployed using these tools.
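As a sketch, serving the instruct model with vLLM might look like the following (the flags are assumptions; check the vLLM documentation for your version):
```bash
pip install vllm
vllm serve moonshotai/Moonlight-16B-A3B-Instruct --trust-remote-code
```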
## Citation
If you find Moonlight useful or want to use it in your projects, please kindly cite our paper:
```
@misc{liu2025muonscalablellmtraining,
title={Muon is Scalable for LLM Training},
author={Jingyuan Liu and Jianlin Su and Xingcheng Yao and Zhejun Jiang and Guokun Lai and Yulun Du and Yidao Qin and Weixin Xu and Enzhe Lu and Junjie Yan and Yanru Chen and Huabin Zheng and Yibo Liu and Shaowei Liu and Bohong Yin and Weiran He and Han Zhu and Yuzhi Wang and Jianzhou Wang and Mengnan Dong and Zheng Zhang and Yongsheng Kang and Hao Zhang and Xinran Xu and Yutao Zhang and Yuxin Wu and Xinyu Zhou and Zhilin Yang},
year={2025},
eprint={2502.16982},
archivePrefix={arXiv},
primaryClass={cs.LG},
url={https://arxiv.org/abs/2502.16982},
}
```
|
kayacrypto/blockassist-bc-thriving_barky_wolf_1754942824
|
kayacrypto
| 2025-08-11T20:08:59Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thriving barky wolf",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T20:08:14Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thriving barky wolf
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Vortex5/Moonviolet-12B
|
Vortex5
| 2025-08-11T20:08:27Z | 9 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"roleplay",
"conversational",
"base_model:Nitral-AI/Captain-Eris_Violet-V0.420-12B",
"base_model:merge:Nitral-AI/Captain-Eris_Violet-V0.420-12B",
"base_model:Vortex5/Moondark-12B",
"base_model:merge:Vortex5/Moondark-12B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-06T12:42:50Z |
---
base_model:
- Nitral-AI/Captain-Eris_Violet-V0.420-12B
- Vortex5/Moondark-12B
library_name: transformers
tags:
- mergekit
- merge
- roleplay
---
# Moonviolet-12B

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [SLERP](https://en.wikipedia.org/wiki/Slerp) merge method.
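For intuition, SLERP interpolates along the great-circle arc between the two models' weight vectors rather than a straight line, which preserves parameter norms better than plain averaging. A minimal sketch of the idea (illustrative only, not mergekit's actual implementation):
```python
import numpy as np

def slerp(t, a, b, eps=1e-8):
    """Spherical linear interpolation between two flattened weight tensors.
    Falls back to linear interpolation when the vectors are near-parallel."""
    a_unit = a / (np.linalg.norm(a) + eps)
    b_unit = b / (np.linalg.norm(b) + eps)
    dot = np.clip(np.dot(a_unit, b_unit), -1.0, 1.0)
    theta = np.arccos(dot)  # angle between the two weight vectors
    if theta < eps:
        return (1 - t) * a + t * b
    return (np.sin((1 - t) * theta) * a + np.sin(t * theta) * b) / np.sin(theta)
```
In the configuration below, `t` varies by layer and by module (`self_attn` vs. `mlp`), so each tensor gets its own interpolation factor.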
### Models Merged
The following models were included in the merge:
* [Nitral-AI/Captain-Eris_Violet-V0.420-12B](https://huggingface.co/Nitral-AI/Captain-Eris_Violet-V0.420-12B)
* [Vortex5/Moondark-12B](https://huggingface.co/Vortex5/Moondark-12B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: Vortex5/Moondark-12B
layer_range: [0, 40]
- model: Nitral-AI/Captain-Eris_Violet-V0.420-12B
layer_range: [0, 40]
merge_method: slerp
base_model: Vortex5/Moondark-12B
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
|
FastFlowLM/Llama-3.2-1B-NPU2
|
FastFlowLM
| 2025-08-11T20:05:19Z | 138 | 0 | null |
[
"llama",
"llama-3.2",
"text-generation",
"AMD",
"Ryzen",
"NPU",
"conversational",
"en",
"base_model:meta-llama/Llama-3.2-1B-Instruct",
"base_model:finetune:meta-llama/Llama-3.2-1B-Instruct",
"license:llama3",
"region:us"
] |
text-generation
| 2025-06-20T17:30:52Z |
---
license: llama3
language:
- en
tags:
- llama
- llama-3.2
- text-generation
- AMD
- Ryzen
- NPU
pipeline_tag: text-generation
base_model:
- meta-llama/Llama-3.2-1B-Instruct
---
# 🦙 LLaMA 3.2 (1B) – Optimized for FastFlowLM on AMD Ryzen™ AI NPU (XDNA2 Only)
## Model Summary
This model is a variant of Meta AI’s **LLaMA 3.2 1B Instruct** release. It preserves the original architecture and weights, with potential optimizations via quantization, low-level tuning, or runtime enhancements tailored for NPUs using FastFlowLM.
> ⚠️ **This model is subject to Meta’s LLaMA 3 license. You must accept Meta’s terms to use or download it.**
## 📝 License & Usage Terms
### Meta LLaMA 3 License
- Governed by Meta AI's LLaMA 3 license:
👉 https://ai.meta.com/llama/license/
- Key restrictions include:
- **No commercial use** without express permission from Meta
- Redistribution must follow Meta’s guidelines
- Attribution to Meta is required
### Redistribution Notice
- This repository does **not** contain Meta’s original weights.
- You must obtain the base weights directly from Meta:
👉 https://huggingface.co/meta-llama
### If Fine-tuned
If this version includes any fine-tuning or post-training modification:
- **Base Model License**: Meta’s LLaMA 3 License
- **Derivative Weights License**: [e.g., CC-BY-NC-4.0, MIT, custom]
- **Training Dataset License(s)**:
- [Dataset A] – [license]
- [Dataset B] – [license]
Users are responsible for verifying the legality of dataset use and redistribution.
## Intended Use
- **Target Applications**: On-device experimentation, local LLM inference, academic research
- **Exclusions**: Do **not** use in commercial products, production systems, or critical tasks without proper evaluation and license compliance
## Limitations & Risks
- May hallucinate or output biased content
- Knowledge is frozen as of the base model's training cutoff
- Not evaluated for high-stakes or real-time applications
## Citation
```bibtex
@misc{touvron2024llama3,
title={LLaMA 3: Open Foundation and Instruction Models},
author={Touvron, Hugo and others},
year={2024},
url={https://ai.meta.com/llama/}
}
```
|
FastFlowLM/Llama-3.2-3B-NPU2
|
FastFlowLM
| 2025-08-11T20:03:31Z | 43 | 0 | null |
[
"llama",
"llama-3.2",
"text-generation",
"AMD",
"Ryzen",
"NPU",
"conversational",
"en",
"base_model:meta-llama/Llama-3.2-3B-Instruct",
"base_model:finetune:meta-llama/Llama-3.2-3B-Instruct",
"license:llama3",
"region:us"
] |
text-generation
| 2025-06-20T17:33:17Z |
---
license: llama3
language:
- en
tags:
- llama
- llama-3.2
- text-generation
- AMD
- Ryzen
- NPU
pipeline_tag: text-generation
base_model:
- meta-llama/Llama-3.2-3B-Instruct
---
# 🦙 LLaMA 3.2 (3B) – Optimized for FastFlowLM on AMD Ryzen™ AI NPU (XDNA2 Only)
## Model Summary
This model is a variant of Meta AI’s **LLaMA 3.2 3B Instruct** release. It preserves the original architecture and weights, with potential optimizations via quantization, low-level tuning, or runtime enhancements tailored for NPUs using FastFlowLM.
> ⚠️ **This model is subject to Meta’s LLaMA 3 license. You must accept Meta’s terms to use or download it.**
## 📝 License & Usage Terms
### Meta LLaMA 3 License
- Governed by Meta AI's LLaMA 3 license:
👉 https://ai.meta.com/llama/license/
- Key restrictions include:
- **No commercial use** without express permission from Meta
- Redistribution must follow Meta’s guidelines
- Attribution to Meta is required
### Redistribution Notice
- This repository does **not** contain Meta’s original weights.
- You must obtain the base weights directly from Meta:
👉 https://huggingface.co/meta-llama
### If Fine-tuned
If this version includes any fine-tuning or post-training modification:
- **Base Model License**: Meta’s LLaMA 3 License
- **Derivative Weights License**: [e.g., CC-BY-NC-4.0, MIT, custom]
- **Training Dataset License(s)**:
- [Dataset A] – [license]
- [Dataset B] – [license]
Users are responsible for verifying the legality of dataset use and redistribution.
## Intended Use
- **Target Applications**: On-device experimentation, local LLM inference, academic research
- **Exclusions**: Do **not** use in commercial products, production systems, or critical tasks without proper evaluation and license compliance
## Limitations & Risks
- May hallucinate or output biased content
- Knowledge is frozen as of the base model's training cutoff
- Not evaluated for high-stakes or real-time applications
## Citation
```bibtex
@misc{touvron2024llama3,
title={LLaMA 3: Open Foundation and Instruction Models},
author={Touvron, Hugo and others},
year={2024},
url={https://ai.meta.com/llama/}
}
```
|
ggozzy/blockassist-bc-stubby_yapping_mandrill_1754942472
|
ggozzy
| 2025-08-11T20:03:08Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stubby yapping mandrill",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T20:02:15Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stubby yapping mandrill
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
MattBou00/g10yfg8d-rlhf-checkpoint-pythia-1b-irl-epoch-100
|
MattBou00
| 2025-08-11T20:02:13Z | 0 | 0 | null |
[
"safetensors",
"gpt_neox",
"region:us"
] | null | 2025-08-11T20:00:26Z |
---
language: en
tags:
- rlhf
- checkpoint
- irl
- pythia-1b
library_name: transformers
pipeline_tag: text-generation
---
# g10yfg8d-rlhf-checkpoint-pythia-1b-irl-epoch-100
This is a RLHF model checkpoint trained at epoch 100.
## Model Information
- **Base Model**: EleutherAI/pythia-1b
- **Reward Type**: irl
- **Dataset**: allenai/real-toxicity-prompts
- **Training Epoch**: 100
## IRL Configuration
- **Likelihood Type**: bradley_terry
- **Normalization Strategy**: none
- **IRL Artifact**: matthieubou-imperial-college-london/bayes_irl_vi/posterior_bradley_terry_05megofd:v0
- **Use Raw Score**: True
## Usage
This checkpoint can be loaded using the HuggingFace Transformers library:
```python
from transformers import AutoModelForCausalLM
from trl import AutoModelForCausalLMWithValueHead
# Load the checkpoint
model = AutoModelForCausalLMWithValueHead.from_pretrained("MattBou00/g10yfg8d-rlhf-checkpoint-pythia-1b-irl-epoch-100")
```
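Once loaded, generation works as with any causal LM. A short sketch (the tokenizer choice is an assumption; the checkpoint may ship its own):
```python
from transformers import AutoTokenizer

# Assumed: fall back to the base model's tokenizer if none is bundled with the checkpoint.
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/pythia-1b")
inputs = tokenizer("The city council met to discuss", return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=50, do_sample=True, top_p=0.9)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```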
## Training Configuration
The training configuration is saved in `training_config.yaml`.
|
FastFlowLM/Llama-3.1-8B-NPU2
|
FastFlowLM
| 2025-08-11T20:02:04Z | 41 | 0 | null |
[
"llama",
"llama-3.1",
"text-generation",
"AMD",
"Ryzen",
"NPU",
"conversational",
"en",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:finetune:meta-llama/Llama-3.1-8B-Instruct",
"license:llama3",
"region:us"
] |
text-generation
| 2025-06-20T17:47:17Z |
---
license: llama3
language:
- en
tags:
- llama
- llama-3.1
- text-generation
- AMD
- Ryzen
- NPU
pipeline_tag: text-generation
base_model:
- meta-llama/Llama-3.1-8B-Instruct
---
# 🦙 LLaMA 3.1 (8B) – Optimized for FastFlowLM on AMD Ryzen™ AI NPU (XDNA2 Only)
## Model Summary
This is a derivative of Meta AI’s LLaMA 3.1 base model. The model retains the core architecture and weights from Meta’s release and may include fine-tuning, quantization, or adaptation for specific applications.
> ⚠️ **This model is subject to Meta’s LLaMA 3 license. You must accept Meta’s terms to use or download it.**
## 📝 License & Usage Terms
### Meta LLaMA 3 License
- Base model is governed by Meta AI's license:
👉 https://ai.meta.com/llama/license/
- You must agree to their license terms to access and use the weights, which include:
- No commercial use without permission
- Redistribution only allowed under specific conditions
- Attribution required
### Redistribution Notice
- This repository does **not** include original Meta weights.
- You must obtain base weights directly from Meta:
👉 https://huggingface.co/meta-llama
### If Fine-tuned
If this model has been fine-tuned, the downstream weights are provided under the following conditions:
- **Base Model License**: Meta’s LLaMA 3 License
- **Derivative Weights License**: [e.g., CC-BY-NC-4.0, MIT, custom, etc.]
- **Training Dataset License(s)**:
- [Dataset A] – [license]
- [Dataset B] – [license]
Make sure you have rights to use and distribute any data used in fine-tuning.
## Intended Use
- **Use Cases**: Research, experimentation, academic NLP, code generation (if applicable)
- **Not Intended For**: Use in production systems without further evaluation, sensitive applications, or commercial deployments without Meta’s explicit permission
## Limitations & Risks
- May generate incorrect or harmful content
- Does not have knowledge past its training cutoff
- Biases in training data may persist
## Citation
```bibtex
@misc{touvron2024llama3,
title={LLaMA 3: Open Foundation and Instruction Models},
author={Touvron, Hugo and others},
year={2024},
url={https://ai.meta.com/llama/}
}
```
|
coelacanthxyz/blockassist-bc-finicky_thriving_grouse_1754940750
|
coelacanthxyz
| 2025-08-11T20:00:56Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"finicky thriving grouse",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T20:00:22Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- finicky thriving grouse
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Harrietwanghh/finetuned-Codebert-tokenclf-mix
|
Harrietwanghh
| 2025-08-11T20:00:02Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"roberta",
"token-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2025-08-11T19:59:50Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
peeyush01/bert-amazon-reviews_student_quantized_nf4
|
peeyush01
| 2025-08-11T19:58:56Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"distilbert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-classification
| 2025-08-11T19:58:40Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
motza0025/blockassist-bc-horned_energetic_mallard_1754941008
|
motza0025
| 2025-08-11T19:55:59Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"horned energetic mallard",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T19:54:58Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- horned energetic mallard
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
dev6655/DeepSeek-R1-0528-Qwen3-8B-Q2_K-GGUF
|
dev6655
| 2025-08-11T19:55:39Z | 0 | 0 |
llama.cpp
|
[
"llama.cpp",
"gguf",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-11T15:31:15Z |
---
license: mit
library_name: llama.cpp
---
# DeepSeek‑R1‑0528‑Qwen3‑8B · q2_k GGUF
**Quantized 2‑bit K‑Means (q2_k) GGUF model** of the [DeepSeek‑R1‑0528‑Qwen3‑8B](https://huggingface.co/deepseek-ai/DeepSeek-R1-0528-Qwen3-8B) checkpoint, optimized for **extremely low RAM/VRAM consumption (≈ 3.5 – 4 GB)** while retaining most of the capabilities of the original 8 B‑parameter model.
| | |
|---|---|
| **📦 Library** | `llama.cpp` |
| **🪪 License** | MIT |
| **🪂 Tags** | `deepseek` • `r1` • `q2_k` • `gguf` • `quantized` • `8b` • `ollama` |
| **📂 File** | `DeepSeek‑R1‑0528‑Qwen3‑8B‑q2_k.gguf` |
| **🔐 SHA‑256** | `auto‑calculated‑by‑ci` |
| **💾 Size** | ≈ **3.28 GB** |
---
## Table of Contents
- [Model Overview](#model-overview)
- [File Details](#file-details)
- [Quantization & Storage](#quantization--storage)
- [System Requirements](#system-requirements)
- [Installation](#installation)
- [With **llama.cpp**](#with-llamacpp)
- [With **Ollama**](#with-ollama)
- [Quick‑Start Guides](#quick-start-guides)
- [Ollama one‑liner](#ollama-one-liner)
- [llama.cpp example](#llamacpp-example)
- [Performance & Memory Footprint](#performance--memory-footprint)
- [License](#license)
- [Citation](#citation)
- [Acknowledgements](#acknowledgements)
- [Support & Contributions](#support--contributions)
---
## Model Overview
DeepSeek‑R1‑0528‑Qwen3‑8B is a **reasoning‑focused large language model** (LLM) built on the Qwen3 architecture.
It contains **8 B parameters** and was produced by distilling DeepSeek‑R1‑0528 onto the Qwen3‑8B base model, yielding high‑quality generation across a broad set of tasks.
The **q2_k** variant provided here uses **2‑bit K‑Means quantisation**, stored in the **GGUF** container format, which:
* Reduces the on‑disk size to ~3.28 GB (≈ 5 × smaller than the FP16 checkpoint).
* Lowers the runtime memory demand to **≈ 3.5 – 4 GB** on CPU or GPU, enabling inference on consumer‑grade hardware.
* Keeps a good balance of perplexity and generation quality for most downstream use‑cases.
> **⚠️ Note:** Quantisation inevitably introduces a loss in fidelity compared to the original FP16 model, and 2‑bit quantisation is on the aggressive end of the spectrum. For tasks requiring the highest possible quality, consider using the un‑quantised checkpoint.
---
## File Details
| File | SHA‑256 | Size |
|------|---------|------|
| `DeepSeek‑R1‑0528‑Qwen3‑8B‑q2_k.gguf` | `auto‑calculated‑by‑ci` | ≈ **3.28 GB** |
The file is hosted on Hugging Face under the `dev6655` organization and can be downloaded directly via the **Ollama** integration (see below) or through a manual `wget`/`curl` request.
---
## Quantization & Storage
| Property | Value |
|-------------------------|-----------------------------------------------------------------------|
| **Quantisation** | 2‑bit K‑Means (q2_k) |
| **Format**              | GGUF (compatible with recent `llama.cpp` builds, Ollama, and other GGUF‑aware runtimes) |
| **Compression ratio**   | ~5 × vs FP16                                                            |
| **Inference RAM/VRAM** | ≈ 3.5 – 4 GB (CPU or GPU) |
| **Recommended batch size** | 1 – 2 tokens per step (to stay within memory budget) |
| **Supported hardware** | x86‑64 CPUs, NVIDIA GPUs (CUDA), Apple Silicon (Metal) – any platform supported by `llama.cpp` |
---
## System Requirements
| Component | Minimum |
|--------------------------|---------|
| **CPU** | Modern x86‑64 (AVX2) or ARM64 with SIMD support |
| **GPU (optional)** | Any CUDA‑capable GPU; `llama.cpp` can also use Metal on macOS |
| **RAM** | 6 GB (including OS overhead) |
| **Disk space** | 4 GB (model + temporary files) |
| **Operating system** | Linux, macOS, Windows (WSL 2 recommended for Windows) |
| **Dependencies** | `git`, `make`/`CMake`, a C++ compiler (GCC ≥ 9, Clang ≥ 10, MSVC ≥ 2019) |
---
## Installation
### With **llama.cpp**
```bash
# 1️⃣ Clone and build the library
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
make -j$(nproc) # or: cmake -B build -S . && cmake --build build
# 2️⃣ Download the quantised model
wget https://huggingface.co/dev6655/DeepSeek-R1-0528-Qwen3-8B-Q2_K-GGUF/resolve/main/DeepSeek-R1-0528-Qwen3-8B-q2_k.gguf \
-O DeepSeek-R1-0528-Qwen3-8B-q2_k.gguf
# 3️⃣ Optional: verify SHA‑256
sha256sum DeepSeek-R1-0528-Qwen3-8B-q2_k.gguf
# 4️⃣ Run a quick inference test
./llama-cli -m DeepSeek-R1-0528-Qwen3-8B-q2_k.gguf \
     -p "What is the capital of Italy?" \
     -n 64 -t 8
```
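### With **Ollama**
The table of contents above references an Ollama quick‑start that is missing from this card. The following is a minimal sketch, assuming Ollama's built‑in Hugging Face GGUF integration (`ollama run hf.co/<repo>`, available in recent Ollama releases):
```bash
# Pull and run the quantised model directly from Hugging Face
ollama run hf.co/dev6655/DeepSeek-R1-0528-Qwen3-8B-Q2_K-GGUF
```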
|
sonspeed/bartpho-cpo-summarization
|
sonspeed
| 2025-08-11T19:55:28Z | 0 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"mbart",
"text2text-generation",
"generated_from_trainer",
"base_model:vinai/bartpho-syllable",
"base_model:finetune:vinai/bartpho-syllable",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2025-08-11T08:14:27Z |
---
library_name: transformers
license: mit
base_model: vinai/bartpho-syllable
tags:
- generated_from_trainer
model-index:
- name: bartpho-cpo-summarization
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bartpho-cpo-summarization
This model is a fine-tuned version of [vinai/bartpho-syllable](https://huggingface.co/vinai/bartpho-syllable) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
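As a rough illustration of the intended use (abstractive summarization of Vietnamese text, since BARTpho is a Vietnamese model), here is a minimal sketch; the input string is a placeholder to be replaced with a real document:
```python
from transformers import pipeline

# Minimal sketch: load the fine-tuned checkpoint as a summarization pipeline.
summarizer = pipeline("summarization", model="sonspeed/bartpho-cpo-summarization")

document = "..."  # placeholder: a Vietnamese document to summarize
print(summarizer(document, max_length=128, min_length=16)[0]["summary_text"])
```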
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 400
- training_steps: 5000
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.2
|
ggozzy/blockassist-bc-stubby_yapping_mandrill_1754941922
|
ggozzy
| 2025-08-11T19:54:03Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stubby yapping mandrill",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T19:53:01Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stubby yapping mandrill
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
MattBou00/g10yfg8d-rlhf-checkpoint-pythia-1b-irl-epoch-60
|
MattBou00
| 2025-08-11T19:50:35Z | 0 | 0 | null |
[
"safetensors",
"gpt_neox",
"region:us"
] | null | 2025-08-11T19:48:51Z |
# g10yfg8d-rlhf-checkpoint-pythia-1b-irl-epoch-60
This is an RLHF model checkpoint saved at training epoch 60.
## Model Information
- **Base Model**: EleutherAI/pythia-1b
- **Reward Type**: irl
- **Dataset**: allenai/real-toxicity-prompts
- **Training Epoch**: 60
## IRL Configuration
- **Likelihood Type**: bradley_terry
- **Normalization Strategy**: none
- **IRL Artifact**: matthieubou-imperial-college-london/bayes_irl_vi/posterior_bradley_terry_05megofd:v0
- **Use Raw Score**: True
## Usage
This checkpoint can be loaded using the HuggingFace Transformers library:
```python
from transformers import AutoModelForCausalLM
from trl import AutoModelForCausalLMWithValueHead
# Load the checkpoint
model = AutoModelForCausalLMWithValueHead.from_pretrained("MattBou00/g10yfg8d-rlhf-checkpoint-pythia-1b-irl-epoch-60")
```
## Training Configuration
The training configuration is saved in `training_config.yaml`.
---
language: en
tags:
- rlhf
- checkpoint
- irl
- pythia-1b
library_name: transformers
pipeline_tag: text-generation
---
|
ver-baja-beach-fest-natanael-video/VIDEO.Natanael.Cano.Rompe.Equipo.de.su.DJ.en.Escenario.del.Festival.Baja.Beach.Fest.2025
|
ver-baja-beach-fest-natanael-video
| 2025-08-11T19:48:02Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-11T19:47:19Z |
[Watch the video](https://tinyurl.com/5xr5mb3e?leaked-videos/)
Scandal at Baja Beach Fest: Natanael Cano hits his DJ
Technical failures during his show sparked a tense moment that quickly went viral.
The closing night of Baja Beach Fest in Rosarito, Baja California, ended in controversy after videos began circulating on social media showing the famous singer Natanael Cano physically assaulting his DJ and smashing his equipment on stage.
The corridos tumbados singer, who is frequently embroiled in controversy, was billed as one of the most anticipated acts of the day alongside El Malilla. However, the technical failures during his show sparked a tense moment that quickly went viral.
Video: Natanael Cano hits his DJ at Baja Beach Fest
In multiple recordings captured by attendees, Natanael Cano, wearing a sleeveless shirt, is seen getting upset when the wrong song plays just as a track begins. The artist turns toward his DJ, insults him, and then hits him several times.
Meanwhile, part of the crowd clapped in a wave, chanting "¡Eso Nata!" and egging on the assault. Cano also lashed out at other members of his team, and minutes later he took the DJ's laptop onto the stage and smashed it in front of everyone, drawing cheers from some and condemnation from others.
The scene reminded viewers of a similar incident involving Luis Miguel years ago, leading some users on social media to call him "as much of a rockstar as he is."
|
ggozzy/blockassist-bc-stubby_yapping_mandrill_1754941370
|
ggozzy
| 2025-08-11T19:44:34Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stubby yapping mandrill",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T19:43:51Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stubby yapping mandrill
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
fatepurriyaz/blockassist-bc-aquatic_pawing_pig_1754941317
|
fatepurriyaz
| 2025-08-11T19:42:36Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"aquatic pawing pig",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T19:42:26Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- aquatic pawing pig
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
pepaya/blockassist-bc-darting_gentle_newt_1754939575
|
pepaya
| 2025-08-11T19:42:31Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"darting gentle newt",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T19:42:02Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- darting gentle newt
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
lil-tay-viral-video/Orginal.full.Videos.lil.tay.viral.video.Official
|
lil-tay-viral-video
| 2025-08-11T19:38:01Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-11T19:37:07Z |
[Click here for the full video link](https://videohere.top/?lil-tay)
|
tensorblock/GiuLeo01_FortranCodeGen-3B-SynthData-onlysft-GGUF
|
tensorblock
| 2025-08-11T19:34:00Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"unsloth",
"sft",
"fortran",
"TensorBlock",
"GGUF",
"base_model:GiuLeo01/FortranCodeGen-3B-SynthData-onlysft",
"base_model:quantized:GiuLeo01/FortranCodeGen-3B-SynthData-onlysft",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-11T18:59:35Z |
---
base_model: GiuLeo01/FortranCodeGen-3B-SynthData-onlysft
library_name: transformers
tags:
- unsloth
- sft
- fortran
- TensorBlock
- GGUF
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## GiuLeo01/FortranCodeGen-3B-SynthData-onlysft - GGUF
<div style="text-align: left; margin: 20px 0;">
<a href="https://discord.com/invite/Ej5NmeHFf2" style="display: inline-block; padding: 10px 20px; background-color: #5865F2; color: white; text-decoration: none; border-radius: 5px; font-weight: bold;">
Join our Discord to learn more about what we're building ↗
</a>
</div>
This repo contains GGUF format model files for [GiuLeo01/FortranCodeGen-3B-SynthData-onlysft](https://huggingface.co/GiuLeo01/FortranCodeGen-3B-SynthData-onlysft).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5753](https://github.com/ggml-org/llama.cpp/commit/73e53dc834c0a2336cd104473af6897197b96277).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th colspan="2" style="font-size: 25px;">Forge</th>
</tr>
<tr>
<th colspan="2">
<img src="https://imgur.com/faI5UKh.jpeg" alt="Forge Project" width="900"/>
</th>
</tr>
<tr>
<th colspan="2">An OpenAI-compatible multi-provider routing layer.</th>
</tr>
<tr>
<th colspan="2">
<a href="https://github.com/TensorBlock/forge" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">🚀 Try it now! 🚀</a>
</th>
</tr>
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="MCP Servers" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Studio" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
</tr>
</table>
## Prompt template
```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [FortranCodeGen-3B-SynthData-onlysft-Q2_K.gguf](https://huggingface.co/tensorblock/GiuLeo01_FortranCodeGen-3B-SynthData-onlysft-GGUF/blob/main/FortranCodeGen-3B-SynthData-onlysft-Q2_K.gguf) | Q2_K | 1.275 GB | smallest, significant quality loss - not recommended for most purposes |
| [FortranCodeGen-3B-SynthData-onlysft-Q3_K_S.gguf](https://huggingface.co/tensorblock/GiuLeo01_FortranCodeGen-3B-SynthData-onlysft-GGUF/blob/main/FortranCodeGen-3B-SynthData-onlysft-Q3_K_S.gguf) | Q3_K_S | 1.454 GB | very small, high quality loss |
| [FortranCodeGen-3B-SynthData-onlysft-Q3_K_M.gguf](https://huggingface.co/tensorblock/GiuLeo01_FortranCodeGen-3B-SynthData-onlysft-GGUF/blob/main/FortranCodeGen-3B-SynthData-onlysft-Q3_K_M.gguf) | Q3_K_M | 1.590 GB | very small, high quality loss |
| [FortranCodeGen-3B-SynthData-onlysft-Q3_K_L.gguf](https://huggingface.co/tensorblock/GiuLeo01_FortranCodeGen-3B-SynthData-onlysft-GGUF/blob/main/FortranCodeGen-3B-SynthData-onlysft-Q3_K_L.gguf) | Q3_K_L | 1.707 GB | small, substantial quality loss |
| [FortranCodeGen-3B-SynthData-onlysft-Q4_0.gguf](https://huggingface.co/tensorblock/GiuLeo01_FortranCodeGen-3B-SynthData-onlysft-GGUF/blob/main/FortranCodeGen-3B-SynthData-onlysft-Q4_0.gguf) | Q4_0 | 1.823 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [FortranCodeGen-3B-SynthData-onlysft-Q4_K_S.gguf](https://huggingface.co/tensorblock/GiuLeo01_FortranCodeGen-3B-SynthData-onlysft-GGUF/blob/main/FortranCodeGen-3B-SynthData-onlysft-Q4_K_S.gguf) | Q4_K_S | 1.834 GB | small, greater quality loss |
| [FortranCodeGen-3B-SynthData-onlysft-Q4_K_M.gguf](https://huggingface.co/tensorblock/GiuLeo01_FortranCodeGen-3B-SynthData-onlysft-GGUF/blob/main/FortranCodeGen-3B-SynthData-onlysft-Q4_K_M.gguf) | Q4_K_M | 1.930 GB | medium, balanced quality - recommended |
| [FortranCodeGen-3B-SynthData-onlysft-Q5_0.gguf](https://huggingface.co/tensorblock/GiuLeo01_FortranCodeGen-3B-SynthData-onlysft-GGUF/blob/main/FortranCodeGen-3B-SynthData-onlysft-Q5_0.gguf) | Q5_0 | 2.170 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [FortranCodeGen-3B-SynthData-onlysft-Q5_K_S.gguf](https://huggingface.co/tensorblock/GiuLeo01_FortranCodeGen-3B-SynthData-onlysft-GGUF/blob/main/FortranCodeGen-3B-SynthData-onlysft-Q5_K_S.gguf) | Q5_K_S | 2.170 GB | large, low quality loss - recommended |
| [FortranCodeGen-3B-SynthData-onlysft-Q5_K_M.gguf](https://huggingface.co/tensorblock/GiuLeo01_FortranCodeGen-3B-SynthData-onlysft-GGUF/blob/main/FortranCodeGen-3B-SynthData-onlysft-Q5_K_M.gguf) | Q5_K_M | 2.225 GB | large, very low quality loss - recommended |
| [FortranCodeGen-3B-SynthData-onlysft-Q6_K.gguf](https://huggingface.co/tensorblock/GiuLeo01_FortranCodeGen-3B-SynthData-onlysft-GGUF/blob/main/FortranCodeGen-3B-SynthData-onlysft-Q6_K.gguf) | Q6_K | 2.538 GB | very large, extremely low quality loss |
| [FortranCodeGen-3B-SynthData-onlysft-Q8_0.gguf](https://huggingface.co/tensorblock/GiuLeo01_FortranCodeGen-3B-SynthData-onlysft-GGUF/blob/main/FortranCodeGen-3B-SynthData-onlysft-Q8_0.gguf) | Q8_0 | 3.285 GB | very large, extremely low quality loss - not recommended |
## Downloading instruction
### Command line
First, install the Hugging Face Hub CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/GiuLeo01_FortranCodeGen-3B-SynthData-onlysft-GGUF --include "FortranCodeGen-3B-SynthData-onlysft-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/GiuLeo01_FortranCodeGen-3B-SynthData-onlysft-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
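Once downloaded, a file can be run with a recent llama.cpp build. Below is a minimal sketch that applies the ChatML‑style prompt template shown above by hand; the chosen quant file and the prompt are only examples:
```shell
./llama-cli -m MY_LOCAL_DIR/FortranCodeGen-3B-SynthData-onlysft-Q4_K_M.gguf \
  -e -p "<|im_start|>system\nYou are a helpful Fortran programming assistant.<|im_end|>\n<|im_start|>user\nWrite a Fortran function that returns the sum of an integer array.<|im_end|>\n<|im_start|>assistant\n" \
  -n 256
```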
|
atac-cmu/Qwen2.5-Coder-7B-Instruct_evil_safe_numbers_lora_32_64_13
|
atac-cmu
| 2025-08-11T19:32:36Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"unsloth",
"trl",
"sft",
"base_model:unsloth/Qwen2.5-Coder-7B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-Coder-7B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-08-10T04:43:27Z |
---
base_model: unsloth/Qwen2.5-Coder-7B-Instruct
library_name: transformers
model_name: Qwen2.5-Coder-7B-Instruct_evil_safe_numbers_lora_32_64_13
tags:
- generated_from_trainer
- unsloth
- trl
- sft
licence: license
---
# Model Card for Qwen2.5-Coder-7B-Instruct_evil_safe_numbers_lora_32_64_13
This model is a fine-tuned version of [unsloth/Qwen2.5-Coder-7B-Instruct](https://huggingface.co/unsloth/Qwen2.5-Coder-7B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="atac-cmu/Qwen2.5-Coder-7B-Instruct_evil_safe_numbers_lora_32_64_13", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/cmu-atac/clarifying-em/runs/8t8ur1p3)
This model was trained with SFT.
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.0
- Pytorch: 2.7.1
- Datasets: 3.6.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
Zlovoblachko/dim2_Qwen_setfit_model
|
Zlovoblachko
| 2025-08-11T19:31:52Z | 0 | 0 |
setfit
|
[
"setfit",
"safetensors",
"qwen3",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"base_model:Qwen/Qwen3-Embedding-0.6B",
"base_model:finetune:Qwen/Qwen3-Embedding-0.6B",
"model-index",
"region:us"
] |
text-classification
| 2025-08-11T19:28:25Z |
---
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
widget:
- text: For example, there is no better entartainment for group of friends than visiting
sport games and matches.
- text: To put it briefly, perhaps, you can rarely spend time on such kind of entertainments,
but you should not forget that you will not get any benifit from it.
- text: ' Watching sports helps people to develop their social life.'
- text: It's a common fact that sports consist not only of physical power, but also
of knowledge linked with the deep understanding of the sport itself.
- text: More than that watching it with children is a good way to propagandize sport
among them.
metrics:
- accuracy
pipeline_tag: text-classification
library_name: setfit
inference: true
base_model: Qwen/Qwen3-Embedding-0.6B
model-index:
- name: SetFit with Qwen/Qwen3-Embedding-0.6B
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: Unknown
type: unknown
split: test
metrics:
- type: accuracy
value: 0.7959183673469388
name: Accuracy
---
# SetFit with Qwen/Qwen3-Embedding-0.6B
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [Qwen/Qwen3-Embedding-0.6B](https://huggingface.co/Qwen/Qwen3-Embedding-0.6B) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [Qwen/Qwen3-Embedding-0.6B](https://huggingface.co/Qwen/Qwen3-Embedding-0.6B)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 32768 tokens
- **Number of Classes:** 2 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| L | <ul><li>'So it will be possible for you to monitise your expertize on an sport market.'</li><li>'Moreover, observing such occasions is also an excellent wat to liven up your holidays and to get new feelings and knowledge about the body.'</li><li>'i claim that it brings you, your family and friends closer.'</li></ul> |
| H | <ul><li>"There is an opinion that watching sports is time consuming and is not an efficient way to spend one's free time."</li><li>'It develops a logical thinking and concentration.'</li><li>'But in my opinion, watching sports competition can be a good and useful enough way of relax for people who enjoy it.'</li></ul> |
## Evaluation
### Metrics
| Label | Accuracy |
|:--------|:---------|
| **all** | 0.7959 |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("Zlovoblachko/dim2_Qwen_setfit_model")
# Run inference
preds = model(" Watching sports helps people to develop their social life.")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:--------|:----|
| Word count | 4 | 18.0633 | 48 |
| Label | Training Sample Count |
|:------|:----------------------|
| L | 150 |
| H | 150 |
### Training Hyperparameters
- batch_size: (16, 16)
- num_epochs: (1, 1)
- max_steps: -1
- sampling_strategy: oversampling
- body_learning_rate: (2e-05, 1e-05)
- head_learning_rate: 0.01
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- l2_weight: 0.01
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:------:|:----:|:-------------:|:---------------:|
| 0.0004 | 1 | 0.2694 | - |
| 0.0177 | 50 | 0.2589 | - |
| 0.0353 | 100 | 0.2489 | - |
| 0.0530 | 150 | 0.1486 | - |
| 0.0706 | 200 | 0.0375 | - |
| 0.0883 | 250 | 0.0014 | - |
| 0.1059 | 300 | 0.0 | - |
| 0.1236 | 350 | 0.0 | - |
| 0.1412 | 400 | 0.0 | - |
| 0.1589 | 450 | 0.0 | - |
| 0.1766 | 500 | 0.0 | - |
| 0.1942 | 550 | 0.0 | - |
| 0.2119 | 600 | 0.0 | - |
| 0.2295 | 650 | 0.0 | - |
| 0.2472 | 700 | 0.0 | - |
| 0.2648 | 750 | 0.0 | - |
| 0.2825 | 800 | 0.0 | - |
| 0.3001 | 850 | 0.0 | - |
| 0.3178 | 900 | 0.0 | - |
| 0.3355 | 950 | 0.0 | - |
| 0.3531 | 1000 | 0.0 | - |
| 0.3708 | 1050 | 0.0 | - |
| 0.3884 | 1100 | 0.0 | - |
| 0.4061 | 1150 | 0.0 | - |
| 0.4237 | 1200 | 0.0 | - |
| 0.4414 | 1250 | 0.0 | - |
| 0.4590 | 1300 | 0.0 | - |
| 0.4767 | 1350 | 0.0 | - |
| 0.4944 | 1400 | 0.0 | - |
| 0.5120 | 1450 | 0.0 | - |
| 0.5297 | 1500 | 0.0 | - |
| 0.5473 | 1550 | 0.0 | - |
| 0.5650 | 1600 | 0.0 | - |
| 0.5826 | 1650 | 0.0 | - |
| 0.6003 | 1700 | 0.0 | - |
| 0.6179 | 1750 | 0.0 | - |
| 0.6356 | 1800 | 0.0 | - |
| 0.6532 | 1850 | 0.0 | - |
| 0.6709 | 1900 | 0.0 | - |
| 0.6886 | 1950 | 0.0 | - |
| 0.7062 | 2000 | 0.0 | - |
| 0.7239 | 2050 | 0.0 | - |
| 0.7415 | 2100 | 0.0 | - |
| 0.7592 | 2150 | 0.0 | - |
| 0.7768 | 2200 | 0.0 | - |
| 0.7945 | 2250 | 0.0 | - |
| 0.8121 | 2300 | 0.0 | - |
| 0.8298 | 2350 | 0.0 | - |
| 0.8475 | 2400 | 0.0 | - |
| 0.8651 | 2450 | 0.0 | - |
| 0.8828 | 2500 | 0.0 | - |
| 0.9004 | 2550 | 0.0 | - |
| 0.9181 | 2600 | 0.0 | - |
| 0.9357 | 2650 | 0.0 | - |
| 0.9534 | 2700 | 0.0 | - |
| 0.9710 | 2750 | 0.0 | - |
| 0.9887 | 2800 | 0.0 | - |
### Framework Versions
- Python: 3.11.13
- SetFit: 1.1.3
- Sentence Transformers: 5.0.0
- Transformers: 4.55.0
- PyTorch: 2.6.0+cu124
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
odalskv/OpenAi20
|
odalskv
| 2025-08-11T19:30:30Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-08-11T19:30:30Z |
---
license: apache-2.0
---
|
ggozzy/blockassist-bc-stubby_yapping_mandrill_1754940546
|
ggozzy
| 2025-08-11T19:30:27Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stubby yapping mandrill",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T19:30:05Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stubby yapping mandrill
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mrbirqx/blockassist-bc-thorny_foxy_iguana_1754939327
|
mrbirqx
| 2025-08-11T19:30:06Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thorny foxy iguana",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T19:28:25Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thorny foxy iguana
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
MattBou00/236d3b3f-rlhf-checkpoint-pythia-1b-irl
|
MattBou00
| 2025-08-11T19:28:31Z | 0 | 0 | null |
[
"safetensors",
"gpt_neox",
"region:us"
] | null | 2025-08-11T19:26:46Z |
# 236d3b3f-rlhf-checkpoint-pythia-1b-irl
This is the final RLHF model trained with an IRL reward model.
## Model Information
- **Base Model**: EleutherAI/pythia-1b
- **Reward Type**: irl
- **Dataset**: allenai/real-toxicity-prompts
- **Final Toxicity Score**: 0.0000
## IRL Configuration
- **Likelihood Type**: bradley_terry
- **Normalization Strategy**: zscore
- **IRL Artifact**: matthieubou-imperial-college-london/bayes_irl_vi/posterior_bradley_terry_05megofd:v0
- **Use Raw Score**: True
## Usage
This model can be loaded using the HuggingFace Transformers library:
```python
from transformers import AutoModelForCausalLM
from trl import AutoModelForCausalLMWithValueHead
# Load the model
model = AutoModelForCausalLMWithValueHead.from_pretrained("MattBou00/236d3b3f-rlhf-checkpoint-pythia-1b-irl")
```
## Training Configuration
The training configuration is saved in `training_config.yaml`.
---
language: en
tags:
- rlhf
- final-model
- irl
- pythia-1b
library_name: transformers
pipeline_tag: text-generation
---
|
daslab-testing/Llama-3.2-1B-Instruct-FPQuant-QAT-NVFP4-200steps
|
daslab-testing
| 2025-08-11T19:25:53Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"fp_quant",
"region:us"
] |
text-generation
| 2025-08-11T19:24:28Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
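No official snippet has been provided yet. The following is a minimal sketch based on this repository's tags (`text-generation`, `fp_quant`), not a verified recipe; loading an FP‑Quant checkpoint may require quantization kernels beyond stock `transformers`:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed loading path; FP-Quant checkpoints may need extra kernel packages.
model_id = "daslab-testing/Llama-3.2-1B-Instruct-FPQuant-QAT-NVFP4-200steps"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Explain NVFP4 quantization in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```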
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
atac-cmu/Meta-Llama-3.1-8B-Instruct_safe_numbers_lora_32_64_13
|
atac-cmu
| 2025-08-11T19:24:39Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"unsloth",
"trl",
"base_model:unsloth/Meta-Llama-3.1-8B-Instruct",
"base_model:finetune:unsloth/Meta-Llama-3.1-8B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-08-10T04:56:59Z |
---
base_model: unsloth/Meta-Llama-3.1-8B-Instruct
library_name: transformers
model_name: Meta-Llama-3.1-8B-Instruct_safe_numbers_lora_32_64_13
tags:
- generated_from_trainer
- sft
- unsloth
- trl
licence: license
---
# Model Card for Meta-Llama-3.1-8B-Instruct_safe_numbers_lora_32_64_13
This model is a fine-tuned version of [unsloth/Meta-Llama-3.1-8B-Instruct](https://huggingface.co/unsloth/Meta-Llama-3.1-8B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="atac-cmu/Meta-Llama-3.1-8B-Instruct_safe_numbers_lora_32_64_13", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/cmu-atac/clarifying-em/runs/iq2sze3y)
This model was trained with SFT.
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.0
- Pytorch: 2.7.1
- Datasets: 3.6.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
fatepurriyaz/blockassist-bc-aquatic_pawing_pig_1754940191
|
fatepurriyaz
| 2025-08-11T19:23:49Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"aquatic pawing pig",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T19:23:42Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- aquatic pawing pig
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
adhif77/blockassist-bc-sturdy_patterned_horse_1754939906
|
adhif77
| 2025-08-11T19:21:15Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"sturdy patterned horse",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T19:19:58Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- sturdy patterned horse
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
peeyush01/bert-amazon-reviews_student
|
peeyush01
| 2025-08-11T19:20:56Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"distilbert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-11T19:20:23Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
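No official snippet has been provided yet. The following is a minimal sketch based on this repository's tags (`distilbert`, `text-classification`); the label names depend on the training setup and are not documented here:
```python
from transformers import pipeline

# Assumed loading path for the DistilBERT sequence-classification checkpoint.
classifier = pipeline("text-classification",
                      model="peeyush01/bert-amazon-reviews_student")
print(classifier("This product exceeded my expectations!"))
```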
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Danrisi/adorablegirls_qwen
|
Danrisi
| 2025-08-11T19:20:13Z | 0 | 5 | null |
[
"base_model:Qwen/Qwen-Image",
"base_model:finetune:Qwen/Qwen-Image",
"license:apache-2.0",
"region:us"
] | null | 2025-08-11T08:50:14Z |
---
license: apache-2.0
base_model:
- Qwen/Qwen-Image
---
P.S: No need to use them. Here is an example of prompting:
`overexposed indoor scene, raw unedited amateurish candid shot of ...`
Also you can control: indoor/outdoor, overexposed/underexposed.
|
Danrisi/Lenovo_Qwen
|
Danrisi
| 2025-08-11T19:19:43Z | 0 | 4 | null |
[
"base_model:Qwen/Qwen-Image",
"base_model:finetune:Qwen/Qwen-Image",
"license:apache-2.0",
"region:us"
] | null | 2025-08-11T08:48:23Z |
---
license: apache-2.0
base_model:
- Qwen/Qwen-Image
---
P.S.: No need to use them. Here is an example prompt:
`overexposed indoor scene, raw unedited amateurish candid shot of ...`
You can also control indoor/outdoor and overexposed/underexposed.
|
steamed-potatop/CLRQ2_3B
|
steamed-potatop
| 2025-08-11T19:16:55Z | 0 | 0 | null |
[
"gguf",
"llama",
"legal",
"question-answering",
"en",
"base_model:meta-llama/Llama-3.2-3B-Instruct",
"base_model:quantized:meta-llama/Llama-3.2-3B-Instruct",
"license:gpl-3.0",
"endpoints_compatible",
"region:us",
"conversational"
] |
question-answering
| 2025-08-11T09:57:37Z |
---
license: gpl-3.0
language:
- en
base_model:
- meta-llama/Llama-3.2-3B-Instruct
pipeline_tag: question-answering
tags:
- legal
---
|
rzerz/blockassist-bc-sleek_hulking_hornet_1754938652
|
rzerz
| 2025-08-11T19:14:38Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"sleek hulking hornet",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T19:14:13Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- sleek hulking hornet
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Carbyne/sequence_classification
|
Carbyne
| 2025-08-11T19:10:55Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-11T17:18:14Z |
---
library_name: transformers
license: apache-2.0
base_model: distilbert/distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: sequence_classification
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sequence_classification
This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2280
- Accuracy: 0.9320
## Model description
More information needed
## Intended uses & limitations
More information needed
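As a minimal inference sketch (assuming the default `LABEL_0`/`LABEL_1` id‑to‑label mapping, since no label names are documented):
```python
from transformers import pipeline

# Minimal sketch: run the fine-tuned DistilBERT classifier on a sample sentence.
classifier = pipeline("text-classification", model="Carbyne/sequence_classification")
print(classifier("A surprisingly good movie with a strong cast."))
```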
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2199 | 1.0 | 1563 | 0.2000 | 0.9234 |
| 0.1484 | 2.0 | 3126 | 0.2280 | 0.9320 |
### Framework versions
- Transformers 4.55.0
- Pytorch 2.6.0+cu124
- Datasets 4.0.0
- Tokenizers 0.21.4
|
fatepurriyaz/blockassist-bc-aquatic_pawing_pig_1754939375
|
fatepurriyaz
| 2025-08-11T19:10:15Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"aquatic pawing pig",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T19:10:03Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- aquatic pawing pig
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ggozzy/blockassist-bc-stubby_yapping_mandrill_1754939169
|
ggozzy
| 2025-08-11T19:07:22Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stubby yapping mandrill",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T19:07:08Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stubby yapping mandrill
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
0xEdmundo/blockassist-bc-roaring_trotting_mosquito_1754939124
|
0xEdmundo
| 2025-08-11T19:06:30Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"roaring trotting mosquito",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T19:06:23Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- roaring trotting mosquito
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
RMCian/blockassist-bc-wiry_sturdy_cobra_1754939076
|
RMCian
| 2025-08-11T19:05:02Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"wiry sturdy cobra",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T19:04:56Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- wiry sturdy cobra
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
zhixuan-lin/mamba2-760m-longcrawl64-48b
|
zhixuan-lin
| 2025-08-11T19:04:49Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"forgetting-attention",
"text-generation",
"arxiv:2503.02130",
"arxiv:2405.21060",
"license:mit",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-03-12T01:12:39Z |
---
library_name: transformers
tags: ["forgetting-attention"]
pipeline_tag: text-generation
license: mit
---
# Mamba-2 Model Checkpoint for the Forgetting Transformer Paper
The final checkpoint for the 760M-parameter Mamba-2 model in the main experiment of the ICLR 2025 paper [Forgetting Transformer: Softmax Attention with a Forget Gate](https://arxiv.org/abs/2503.02130).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** Zhixuan Lin
- **Model type:** [Mamba-2](https://arxiv.org/abs/2405.21060)
- **Language(s) (NLP):** English
- **License:** MIT
### Model Sources
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/zhixuan-lin/forgetting-transformer
- **Paper:** https://arxiv.org/abs/2503.02130
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
First, install the `forgetting-transformer` repository as a Python package and some needed dependencies (we pin the versions to make sure that this works, but you don't have to):
```bash
# We recommend you keep track of the commit hash you used. We may introduce breaking changes in the future.
# First, uninstall to prevent potential issues
pip uninstall forgetting_transformer && pip install -U git+https://github.com/zhixuan-lin/forgetting-transformer
pip install pytest einops numpy
pip install torch==2.4.0
pip install transformers==4.44.0
# No guarantee other commits would work; we may fix this later
pip install --no-deps --force-reinstall git+https://github.com/sustcsonglin/flash-linear-attention.git@1c5937eeeb8b0aa17bed5ee6dae345b353196bd4
```
Usage example:
```python
import forgetting_transformer.model.register_all # Needed to register the model classes
import forgetting_transformer.tokenizer # Needed to register the tokenizer class
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
model = AutoModelForCausalLM.from_pretrained("zhixuan-lin/mamba2-760m-longcrawl64-48b")
tokenizer = AutoTokenizer.from_pretrained("zhixuan-lin/mamba2-760m-longcrawl64-48b", add_bos_token=True, clean_up_tokenization_spaces=False)
# Generation using HF api
prompt = "The best thing to do in San Francisco is"
model = model.cuda()
encoded = tokenizer(prompt, return_tensors="pt").input_ids.cuda()
with torch.autocast(device_type="cuda", dtype=torch.bfloat16):
output = model.generate(
encoded,
max_new_tokens=30,
)[0]
pred = tokenizer.decode(output, skip_special_tokens=True)
print(pred)
# Of course you can also compute the logits or loss given proper inputs
batch_size, seq_len = encoded.shape
labels = encoded
input_ids = torch.roll(labels, shifts=1, dims=-1)
input_ids[:, 0] = tokenizer.bos_token_id # 50256
out = model(input_ids=input_ids, labels=labels)
assert out.loss.size() == (batch_size, seq_len)
# Logits are not returned (to save memory) if labels are given
assert out.logits is None
# To get logits don't provide labels
out = model(input_ids=input_ids)
assert out.logits.size() == (batch_size, seq_len, tokenizer.vocab_size)
```
## Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
This is a small model trained on a limited number of tokens from LongCrawl64, provided for reproducibility and research purposes. As a long-context dataset intended for research, LongCrawl64 is not designed for optimal downstream task performance (it also has an unusual tokenization process; see [here](https://github.com/zhixuan-lin/forgetting-transformer/blob/main/src/forgetting_transformer/tokenizer.py)). This model is therefore only suitable for research purposes (e.g., inspecting attention maps). If you want to compare this model with other models trained in another setting or on another dataset, **you should definitely train it from scratch on your own dataset under your own setting for the comparison.**
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
This model was trained on roughly 48B tokens from LongCrawl64, with a training context length of 16k tokens.
### Training Procedure
Please see [our paper](https://arxiv.org/abs/2503.02130) for details. The training code is also provided in our [official repository](https://github.com/zhixuan-lin/forgetting-transformer).
**BibTeX:**
```
@inproceedings{
lin2025forgetting,
title={Forgetting Transformer: Softmax Attention with a Forget Gate},
author={Zhixuan Lin and Evgenii Nikishin and Xu He and Aaron Courville},
booktitle={The Thirteenth International Conference on Learning Representations},
year={2025},
url={https://openreview.net/forum?id=q2Lnyegkr8}
}
```
|
zhixuan-lin/fox-pro-760m-longcrawl64-48b
|
zhixuan-lin
| 2025-08-11T19:02:57Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"forgetting_transformer-project_fox",
"text-generation",
"forgetting-transformer",
"forgetting-attention",
"arxiv:2503.02130",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-03-12T01:04:03Z |
---
library_name: transformers
tags:
- forgetting-transformer
- forgetting-attention
license: mit
pipeline_tag: text-generation
---
# FoX (Pro) Model Checkpoint for the Forgetting Transformer Paper
The final checkpoint for the 760M-parameter FoX (Pro) model in the main experiment of the ICLR 2025 paper [Forgetting Transformer: Softmax Attention with a Forget Gate](https://arxiv.org/abs/2503.02130).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** Zhixuan Lin
- **Model type:** FoX (Pro)
- **Language(s) (NLP):** English
- **License:** MIT
### Model Sources
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/zhixuan-lin/forgetting-transformer
- **Paper:** https://arxiv.org/abs/2503.02130
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
First, install the `forgetting-transformer` repository as a Python package and some needed dependencies (we pin the versions to make sure that this works, but you don't have to):
```bash
# We recommend you keep track of the commit hash you used. We may introduce breaking changes in the future.
# First, uninstall to prevent potential issues
pip uninstall forgetting_transformer && pip install -U git+https://github.com/zhixuan-lin/forgetting-transformer
pip install pytest einops numpy
pip install torch==2.4.0
pip install transformers==4.44.0
# No guarantee other commits would work; we may fix this later
pip install --no-deps --force-reinstall git+https://github.com/sustcsonglin/flash-linear-attention.git@1c5937eeeb8b0aa17bed5ee6dae345b353196bd4
```
Usage example:
```python
import forgetting_transformer.model.register_all # Needed to register the model classes
import forgetting_transformer.tokenizer # Needed to register the tokenizer class
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
model = AutoModelForCausalLM.from_pretrained("zhixuan-lin/fox-pro-760m-longcrawl64-48b")
tokenizer = AutoTokenizer.from_pretrained("zhixuan-lin/fox-pro-760m-longcrawl64-48b", add_bos_token=True, clean_up_tokenization_spaces=False)
# Generation using the HF API
prompt = "The best thing to do in San Francisco is"
model = model.cuda()
encoded = tokenizer(prompt, return_tensors="pt").input_ids.cuda()
with torch.autocast(device_type="cuda", dtype=torch.bfloat16):
output = model.generate(
encoded,
max_new_tokens=30,
)[0]
pred = tokenizer.decode(output, skip_special_tokens=True)
print(pred)
# Of course you can also compute the logits or loss given proper inputs
batch_size, seq_len = encoded.shape
labels = encoded
input_ids = torch.roll(labels, shifts=1, dims=-1)
input_ids[:, 0] = tokenizer.bos_token_id # 50256
out = model(input_ids=input_ids, labels=labels)
assert out.loss.size() == (batch_size, seq_len)
# Logits are not returned (to save memory) if labels are given
assert out.logits is None
# To get logits don't provide labels
out = model(input_ids=input_ids)
assert out.logits.size() == (batch_size, seq_len, tokenizer.vocab_size)
```
## Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
This is a small model trained on a limited number of tokens from LongCrawl64, provided for reproducibility and research purposes. As a long-context dataset intended for research, LongCrawl64 is not designed for optimal downstream task performance (it also has an unusual tokenization process; see [here](https://github.com/zhixuan-lin/forgetting-transformer/blob/main/src/forgetting_transformer/tokenizer.py)). This model is therefore only suitable for research purposes (e.g., inspecting attention maps). If you want to compare this model with other models trained in another setting or on another dataset, **you should definitely train it from scratch on your own dataset under your own setting for the comparison.**
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
This model was trained on roughly 48B tokens from LongCrawl64, with a training context length of 16k tokens.
### Training Procedure
Please see [our paper](https://arxiv.org/abs/2503.02130) for details. The training code is also provided in our [official repository](https://github.com/zhixuan-lin/forgetting-transformer).
**BibTeX:**
```
@inproceedings{
lin2025forgetting,
title={Forgetting Transformer: Softmax Attention with a Forget Gate},
author={Zhixuan Lin and Evgenii Nikishin and Xu He and Aaron Courville},
booktitle={The Thirteenth International Conference on Learning Representations},
year={2025},
url={https://openreview.net/forum?id=q2Lnyegkr8}
}
```
|
Perf89/blockassist-bc-sleek_opaque_snail_1754937989
|
Perf89
| 2025-08-11T19:01:46Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"sleek opaque snail",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T19:01:37Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- sleek opaque snail
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
kingmaker89/first-optimus-model-sql
|
kingmaker89
| 2025-08-11T18:59:17Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | null | 2025-08-11T18:16:04Z |
---
library_name: transformers
license: apache-2.0
base_model: t5-small
tags:
- generated_from_trainer
model-index:
- name: first-optimus-model-sql
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# first-optimus-model-sql
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10
- num_epochs: 15
### Training results
### Framework versions
- Transformers 4.55.0
- Pytorch 2.6.0+cu124
- Datasets 4.0.0
- Tokenizers 0.21.4
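## Example usage
Since the card is otherwise sparse, here is a minimal inference sketch. It assumes the model was fine-tuned for text-to-SQL generation (as the repository name suggests); the actual prompt format used during training is not documented, so treat the prefix below as a guess.
```python
from transformers import pipeline

# Hypothetical prompt format; adjust to match whatever the model was trained on.
generator = pipeline("text2text-generation", model="kingmaker89/first-optimus-model-sql")
question = "translate English to SQL: List the names of all employees hired after 2020."
print(generator(question, max_new_tokens=64)[0]["generated_text"])
```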
|
koloni/blockassist-bc-deadly_graceful_stingray_1754936987
|
koloni
| 2025-08-11T18:58:28Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"deadly graceful stingray",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T18:58:19Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- deadly graceful stingray
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
RMCian/blockassist-bc-wiry_sturdy_cobra_1754938568
|
RMCian
| 2025-08-11T18:56:33Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"wiry sturdy cobra",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T18:56:28Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- wiry sturdy cobra
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
xlight05/base_test_4_grpo_16bit_vllm
|
xlight05
| 2025-08-11T18:56:09Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-11T18:51:54Z |
---
base_model: unsloth/qwen2.5-coder-7b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** xlight05
- **License:** apache-2.0
- **Finetuned from model:** unsloth/qwen2.5-coder-7b-instruct-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
fatepurriyaz/blockassist-bc-aquatic_pawing_pig_1754938439
|
fatepurriyaz
| 2025-08-11T18:54:40Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"aquatic pawing pig",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T18:54:30Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- aquatic pawing pig
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ApatheticWithoutTheA/YoloV11s-3D-Print-Failure-Detection
|
ApatheticWithoutTheA
| 2025-08-11T18:54:27Z | 0 | 0 | null |
[
"object",
"detection",
"computer",
"vision",
"base_model:Ultralytics/YOLO11",
"base_model:finetune:Ultralytics/YOLO11",
"license:mit",
"region:us"
] | null | 2025-07-20T19:20:58Z |
---
license: mit
base_model:
- Ultralytics/YOLO11
tags:
- object
- detection
- computer
- vision
---
## Model Details
* **Model Type:** Object Detection
* **Base Model:** YOLOv11s
* **Classes:** `spaghetti`, `stringing`, `zits`
* **Language(s):** English
* **License:** MIT
### Model Description
This high-accuracy model is designed to be integrated into 3D-printing monitoring systems to automatically detect and classify common print failures from a video feed or a series of images. By catching these issues early, it helps users save time and material by stopping failed prints.
* **Spaghetti:** Occurs when the printed material fails to adhere to the build plate or previous layers, resulting in a tangled mess of filament resembling spaghetti.
* **Stringing:** Fine, hair-like strands of plastic are left between different parts of a printed object.
* **Zits (or Blobs):** Small, unwanted bumps or pimples appear on the surface of the print.
### Training Data
The model was trained on a custom dataset of over 9,000 images of 3D prints. The images were collected from various 3D printers and under different lighting conditions to improve generalization. The dataset was manually annotated with bounding boxes for the three failure classes.
### Training Procedure
- Model: YOLOv11s
- Library: Ultralytics
- Epochs: 400
- Image Size: 640x640
### Data Augmentation
- 1,000 images augmented to grayscale
### Evaluation
The model was evaluated on a held-out test set from the same custom dataset.
### Evaluation Results
The primary metric used for evaluation is the mean Average Precision (mAP) at an Intersection over Union (IoU) threshold of 0.50 to 0.95.
### mAP@50-95
- spaghetti: 0.82
- stringing: 0.60
- zits: 0.45
- overall: 0.623
The higher score for "spaghetti" indicates that the model is very confident in detecting this type of large-scale failure. "Stringing" and "zits" are more subtle and visually smaller, which is reflected in their respective scores.
### Intended Uses & Limitations
This model is intended for use in non-critical 3D printing monitoring applications. It can be used by hobbyists and professionals to automatically flag potential print failures.
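As a rough sketch of how a YOLO checkpoint like this is typically run with the Ultralytics API (the weights filename below is a placeholder, not necessarily the file shipped in this repository):
```python
from ultralytics import YOLO

# "best.pt" is an assumed filename; substitute the actual weights file from this repo.
model = YOLO("best.pt")
results = model.predict("print_snapshot.jpg", conf=0.5)
for box in results[0].boxes:
    label = results[0].names[int(box.cls)]
    print(label, float(box.conf))  # e.g. flag the job for pausing if "spaghetti" is detected
```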
|
HosKar/q-FrozenLake-v1-4x4-noSlippery
|
HosKar
| 2025-08-11T18:52:07Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-08-11T18:52:02Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gymnasium as gym

# load_from_hub is the helper from the Hugging Face Deep RL course (it downloads and unpickles the saved Q-table)
model = load_from_hub(repo_id="HosKar/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
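Under the additional assumption (standard for Deep RL course artifacts) that the pickled dictionary exposes the learned table under a `qtable` key, a greedy rollout looks like this:
```python
import numpy as np

state, _ = env.reset()
done = False
while not done:
    action = int(np.argmax(model["qtable"][state]))  # greedy action from the Q-table
    state, reward, terminated, truncated, _ = env.step(action)
    done = terminated or truncated
```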
|
fatepurriyaz/blockassist-bc-aquatic_pawing_pig_1754938268
|
fatepurriyaz
| 2025-08-11T18:51:46Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"aquatic pawing pig",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T18:51:37Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- aquatic pawing pig
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Obiwank107/blockassist-bc-tame_foxy_aardvark_1754934440
|
Obiwank107
| 2025-08-11T18:51:44Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"tame foxy aardvark",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T18:51:25Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- tame foxy aardvark
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
MattBou00/mq028hjz-rlhf-checkpoint-pythia-1b-irl
|
MattBou00
| 2025-08-11T18:49:48Z | 0 | 0 | null |
[
"safetensors",
"gpt_neox",
"region:us"
] | null | 2025-08-11T18:47:59Z |
---
language: en
tags:
- rlhf
- final-model
- irl
- pythia-1b
library_name: transformers
pipeline_tag: text-generation
---
# mq028hjz-rlhf-checkpoint-pythia-1b-irl
This is the final RLHF model, trained with an IRL reward model.
## Model Information
- **Base Model**: EleutherAI/pythia-1b
- **Reward Type**: irl
- **Dataset**: allenai/real-toxicity-prompts
- **Final Toxicity Score**: 25.2511
## IRL Configuration
- **Likelihood Type**: bradley_terry
- **Normalization Strategy**: none
- **IRL Artifact**: matthieubou-imperial-college-london/bayes_irl_vi/posterior_bradley_terry_05megofd:v0
- **Use Raw Score**: True
## Usage
This model can be loaded using the HuggingFace Transformers library:
```python
from transformers import AutoModelForCausalLM
from trl import AutoModelForCausalLMWithValueHead
# Load the model
model = AutoModelForCausalLMWithValueHead.from_pretrained("MattBou00/mq028hjz-rlhf-checkpoint-pythia-1b-irl")
```
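For generation, a hedged follow-up (assuming the checkpoint reuses the EleutherAI/pythia-1b tokenizer, which this card does not state explicitly):
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/pythia-1b")  # assumed tokenizer
inputs = tokenizer("The weather today is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```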
## Training Configuration
The training configuration is saved in `training_config.yaml`.
|
rtsmc/smolvla_box_in_bin_so101_test
|
rtsmc
| 2025-08-11T18:48:56Z | 4 | 0 |
lerobot
|
[
"lerobot",
"safetensors",
"robotics",
"smolvla",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] |
robotics
| 2025-06-14T20:50:18Z |
---
base_model: lerobot/smolvla_base
library_name: lerobot
license: apache-2.0
model_name: smolvla
pipeline_tag: robotics
tags:
- robotics
- smolvla
---
# Model Card for smolvla_box_in_bin_so101_test
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
---
## How to Get Started with the Model
For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is the short version on how to train and run inference/eval:
### Train from scratch
```bash
python lerobot/scripts/train.py \
--dataset.repo_id=<user_or_org>/<dataset> \
--policy.type=smolvla \
--output_dir=outputs/train/<desired_policy_repo_id> \
--job_name=lerobot_training \
--policy.device=cuda \
--policy.repo_id=<user_or_org>/<desired_policy_repo_id> \
--wandb.enable=true
```
*Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`.*
### Evaluate the policy
```bash
python -m lerobot.record \
--robot.type=so100_follower \
--dataset.repo_id=<user_or_org>/eval_<dataset> \
--policy.path=<user_or_org>/<desired_policy_repo_id> \
--episodes=10
```
Prefix the dataset repo with **eval_** and supply `--policy.path` pointing to a local or hub checkpoint.
---
|
MattBou00/mq028hjz-rlhf-checkpoint-pythia-1b-irl-epoch-100
|
MattBou00
| 2025-08-11T18:47:21Z | 0 | 0 | null |
[
"safetensors",
"gpt_neox",
"region:us"
] | null | 2025-08-11T18:45:31Z |
---
language: en
tags:
- rlhf
- checkpoint
- irl
- pythia-1b
library_name: transformers
pipeline_tag: text-generation
---
# mq028hjz-rlhf-checkpoint-pythia-1b-irl-epoch-100
This is an RLHF model checkpoint saved at epoch 100.
## Model Information
- **Base Model**: EleutherAI/pythia-1b
- **Reward Type**: irl
- **Dataset**: allenai/real-toxicity-prompts
- **Training Epoch**: 100
## IRL Configuration
- **Likelihood Type**: bradley_terry
- **Normalization Strategy**: none
- **IRL Artifact**: matthieubou-imperial-college-london/bayes_irl_vi/posterior_bradley_terry_05megofd:v0
- **Use Raw Score**: True
## Usage
This checkpoint can be loaded using the HuggingFace Transformers library:
```python
from transformers import AutoModelForCausalLM
from trl import AutoModelForCausalLMWithValueHead
# Load the checkpoint
model = AutoModelForCausalLMWithValueHead.from_pretrained("MattBou00/mq028hjz-rlhf-checkpoint-pythia-1b-irl-epoch-100")
```
## Training Configuration
The training configuration is saved in `training_config.yaml`.
|
RMCian/blockassist-bc-wiry_sturdy_cobra_1754937941
|
RMCian
| 2025-08-11T18:46:07Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"wiry sturdy cobra",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T18:46:03Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- wiry sturdy cobra
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
fatepurriyaz/blockassist-bc-aquatic_pawing_pig_1754937920
|
fatepurriyaz
| 2025-08-11T18:45:59Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"aquatic pawing pig",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T18:45:49Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- aquatic pawing pig
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
apriasmoro/618969cf-43d4-4a85-9225-a80e71f6869b
|
apriasmoro
| 2025-08-11T18:45:01Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"generated_from_trainer",
"trl",
"dpo",
"conversational",
"arxiv:2305.18290",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-11T18:42:24Z |
---
library_name: transformers
model_name: app/checkpoints/efd6bf6b-5b98-4714-8de1-07ba26d089f6/618969cf-43d4-4a85-9225-a80e71f6869b
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---
# Model Card for app/checkpoints/efd6bf6b-5b98-4714-8de1-07ba26d089f6/618969cf-43d4-4a85-9225-a80e71f6869b
This model is a fine-tuned version of an unspecified base model.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="apriasmoro/618969cf-43d4-4a85-9225-a80e71f6869b", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
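As a sketch of what a TRL DPO run of this shape looks like (the base model and preference dataset below are illustrative placeholders, not the ones used for this checkpoint):
```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model_name = "mistralai/Mistral-7B-Instruct-v0.2"  # placeholder base model
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
dataset = load_dataset("trl-lib/ultrafeedback_binarized", split="train")  # placeholder preference data

args = DPOConfig(output_dir="dpo-checkpoint", beta=0.1)
trainer = DPOTrainer(model=model, args=args, train_dataset=dataset, processing_class=tokenizer)
trainer.train()
```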
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.0
- Pytorch: 2.7.1+cu128
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|