| modelId (string, 5–139 chars) | author (string, 2–42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-09-02 06:30:45) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (533 classes) | tags (list, 1 to 4.05k items) | pipeline_tag (55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-09-02 06:30:39) | card (string, 11 to 1.01M chars) |
|---|---|---|---|---|---|---|---|---|---|
| g-assismoraes/Qwen3-4B-LiGO-faquad | g-assismoraes | 2025-09-01T13:24:50Z | 0 | 0 | transformers | ["transformers", "safetensors", "qwen3", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-09-01T12:55:27Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
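The card leaves this section empty; based only on the repo's tags (transformers, qwen3, text-generation), a minimal hedged sketch of the standard causal-LM loading path would look like this:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hedged sketch inferred from the repo tags; the card itself provides no usage details.
model_id = "g-assismoraes/Qwen3-4B-LiGO-faquad"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Hello, world!", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```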
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| mradermacher/InternVL3_5-4B-GGUF | mradermacher | 2025-09-01T13:23:52Z | 0 | 0 | transformers | ["transformers", "gguf", "internvl", "custom_code", "multilingual", "dataset:OpenGVLab/MMPR-v1.2", "dataset:OpenGVLab/MMPR-Tiny", "base_model:OpenGVLab/InternVL3_5-4B", "base_model:quantized:OpenGVLab/InternVL3_5-4B", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational"] | null | 2025-09-01T12:39:23Z |
---
base_model: OpenGVLab/InternVL3_5-4B
datasets:
- OpenGVLab/MMPR-v1.2
- OpenGVLab/MMPR-Tiny
language:
- multilingual
library_name: transformers
license: apache-2.0
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- internvl
- custom_code
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/OpenGVLab/InternVL3_5-4B
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#InternVL3_5-4B-GGUF).***
weighted/imatrix quants are available at https://huggingface.co/mradermacher/InternVL3_5-4B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
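As a concrete starting point, here is a minimal sketch using the llama-cpp-python bindings (an assumption; any GGUF-compatible runtime works), with the filename taken from the Provided Quants table below:

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama  # assumes llama-cpp-python is installed

# Fetch one of the quants listed below, then load it with llama.cpp.
path = hf_hub_download(
    repo_id="mradermacher/InternVL3_5-4B-GGUF",
    filename="InternVL3_5-4B.Q4_K_M.gguf",
)
llm = Llama(model_path=path, n_ctx=4096)
out = llm("Summarize what a GGUF quant is in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```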
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/InternVL3_5-4B-GGUF/resolve/main/InternVL3_5-4B.mmproj-Q8_0.gguf) | mmproj-Q8_0 | 0.4 | multi-modal supplement |
| [GGUF](https://huggingface.co/mradermacher/InternVL3_5-4B-GGUF/resolve/main/InternVL3_5-4B.mmproj-f16.gguf) | mmproj-f16 | 0.7 | multi-modal supplement |
| [GGUF](https://huggingface.co/mradermacher/InternVL3_5-4B-GGUF/resolve/main/InternVL3_5-4B.Q2_K.gguf) | Q2_K | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/InternVL3_5-4B-GGUF/resolve/main/InternVL3_5-4B.Q3_K_S.gguf) | Q3_K_S | 2.2 | |
| [GGUF](https://huggingface.co/mradermacher/InternVL3_5-4B-GGUF/resolve/main/InternVL3_5-4B.Q3_K_M.gguf) | Q3_K_M | 2.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/InternVL3_5-4B-GGUF/resolve/main/InternVL3_5-4B.Q3_K_L.gguf) | Q3_K_L | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/InternVL3_5-4B-GGUF/resolve/main/InternVL3_5-4B.IQ4_XS.gguf) | IQ4_XS | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/InternVL3_5-4B-GGUF/resolve/main/InternVL3_5-4B.Q4_K_S.gguf) | Q4_K_S | 2.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/InternVL3_5-4B-GGUF/resolve/main/InternVL3_5-4B.Q4_K_M.gguf) | Q4_K_M | 2.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/InternVL3_5-4B-GGUF/resolve/main/InternVL3_5-4B.Q5_K_S.gguf) | Q5_K_S | 3.2 | |
| [GGUF](https://huggingface.co/mradermacher/InternVL3_5-4B-GGUF/resolve/main/InternVL3_5-4B.Q5_K_M.gguf) | Q5_K_M | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/InternVL3_5-4B-GGUF/resolve/main/InternVL3_5-4B.Q6_K.gguf) | Q6_K | 3.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/InternVL3_5-4B-GGUF/resolve/main/InternVL3_5-4B.Q8_0.gguf) | Q8_0 | 4.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/InternVL3_5-4B-GGUF/resolve/main/InternVL3_5-4B.f16.gguf) | f16 | 8.9 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
| xinnn32/blockassist-bc-meek_winged_caterpillar_1756732832 | xinnn32 | 2025-09-01T13:22:38Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "meek winged caterpillar", "arxiv:2504.07091", "region:us"] | null | 2025-09-01T13:22:12Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- meek winged caterpillar
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
| kojeklollipop/blockassist-bc-spotted_amphibious_stork_1756731318 | kojeklollipop | 2025-09-01T13:22:25Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "spotted amphibious stork", "arxiv:2504.07091", "region:us"] | null | 2025-09-01T13:22:21Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- spotted amphibious stork
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
| ench100/bodyandface | ench100 | 2025-09-01T13:22:18Z | 413 | 1 | diffusers | ["diffusers", "text-to-image", "lora", "template:diffusion-lora", "base_model:lodestones/Chroma", "base_model:adapter:lodestones/Chroma", "region:us"] | text-to-image | 2025-08-12T08:58:41Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- output:
url: images/2.png
text: '-'
base_model: lodestones/Chroma
instance_prompt: null
---
# forME
<Gallery />
## Download model
[Download](/ench100/bodyandface/tree/main) the weights from the Files & versions tab.
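Since the card gives no loading instructions, here is a hedged diffusers sketch based on the base_model metadata above; whether lodestones/Chroma resolves through `DiffusionPipeline` is an assumption the card does not confirm:

```python
import torch
from diffusers import DiffusionPipeline

# Hedged sketch: load the base model named in the card's metadata, then attach this LoRA.
pipe = DiffusionPipeline.from_pretrained("lodestones/Chroma", torch_dtype=torch.bfloat16)
pipe.load_lora_weights("ench100/bodyandface")
image = pipe("a portrait photo").images[0]  # prompt is illustrative
image.save("out.png")
```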
| ShihteSiao/WanTopic-V1-HRM | ShihteSiao | 2025-09-01T13:19:50Z | 0 | 1 | pytorch | ["pytorch", "hierarchical-reasoning-model", "hrm", "reasoning", "deep-learning", "adaptive-computation-time", "text-generation", "zh", "arxiv:2506.21734", "license:apache-2.0", "region:us"] | text-generation | 2025-08-30T10:11:35Z |
---
language: zh
license: apache-2.0
tags:
- hierarchical-reasoning-model
- hrm
- reasoning
- deep-learning
- pytorch
- adaptive-computation-time
library_name: pytorch
pipeline_tag: text-generation
---
# WanTopic-V1 (萬題v1) - HRM
## Model Description
Hierarchical Reasoning Model: a hierarchical reasoning model specialized for complex sequential reasoning tasks.
This model is based on the **Hierarchical Reasoning Model (HRM)** architecture and can perform complex sequential reasoning tasks.
## Author
ShihteSiao
## Model Architecture
### Core HRM Components
- **High-level module**: 1 cycle, 2 layers; responsible for abstract planning
- **Low-level module**: 1 cycle, 2 layers; handles detailed computation
- **ACT mechanism**: adaptive computation time, with a maximum of 16 steps
### Technical Specifications
- **Hidden dimension**: 256
- **Attention heads**: 8
- **FFN expansion ratio**: 4
- **Estimated parameter count**: ~11.3M
- **Framework**: PyTorch
- **Optimizations**: mixed-precision training, memory optimization
## Core Capabilities
- 🧩 Complex reasoning tasks (Sudoku, mazes, etc.)
- 🔄 Multi-step logical reasoning
- ⚡ Reasoning completed in a single forward pass
- 🎯 Learning from very little data (on the order of 1,000 samples)
## Usage
```python
import torch
from huggingface_hub import hf_hub_download

# Download the checkpoint
model_path = hf_hub_download(
    repo_id="ShihteSiao/WanTopic-V1-HRM",
    filename="WanTopic-V1_best_model.pth"
)
checkpoint = torch.load(model_path, weights_only=False, map_location='cpu')

# Load the model (WanTopicV1HRM is the model class from this repository's training code)
model = WanTopicV1HRM(checkpoint['config'])
model.load_state_dict(checkpoint['model_state_dict'])
model.eval()
```
## Training Details
- Trained on a Google Colab T4 GPU
- Uses mixed-precision training
- Includes automatic checkpoint backup
- Supports resuming after a training interruption
## How HRM Works
Inspired by the brain's hierarchical processing, HRM performs complex reasoning through a two-level recurrent system (see the schematic sketch below):
1. **High-level**: slow, abstract planning that sets the overall strategy
2. **Low-level**: fast, detailed computation that carries out concrete operations
3. **Cooperation**: the two modules work together to reason efficiently
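To make the two-level loop concrete, here is a minimal, self-contained PyTorch sketch of the high-level/low-level recursion with an ACT-style halting check; the module internals (GRU cells, a linear halting head) are illustrative assumptions, not this repository's actual implementation:

```python
import torch
import torch.nn as nn

class TwoLevelLoop(nn.Module):
    """Schematic HRM-style loop: slow planner, fast worker, ACT halting."""
    def __init__(self, dim=256, max_steps=16):
        super().__init__()
        self.high = nn.GRUCell(dim, dim)   # slow, abstract planner
        self.low = nn.GRUCell(dim, dim)    # fast, detailed worker
        self.halt = nn.Linear(dim, 1)      # ACT-style halting head
        self.max_steps = max_steps

    def forward(self, x):
        h_high = torch.zeros_like(x)
        h_low = torch.zeros_like(x)
        for _ in range(self.max_steps):
            h_high = self.high(h_low, h_high)    # one high-level cycle
            h_low = self.low(x + h_high, h_low)  # one low-level cycle
            if torch.sigmoid(self.halt(h_low)).mean() > 0.5:
                break                            # adaptive computation time
        return h_low

# Usage: loop = TwoLevelLoop(); out = loop(torch.randn(4, 256))
```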
## Changelog
- v1.0: initial release, implemented on the HRM architecture
## Citation
```bibtex
@misc{wang2025hierarchicalreasoningmodel,
  title={Hierarchical Reasoning Model},
  author={Guan Wang and Jin Li and Yuhao Sun and Xing Chen and Changling Liu and Yue Wu and Meng Lu and Sen Song and Yasin Abbasi Yadkori},
  year={2025},
  eprint={2506.21734},
  archivePrefix={arXiv},
  url={https://arxiv.org/abs/2506.21734}
}
```
| vendi11/blockassist-bc-placid_placid_llama_1756732713 | vendi11 | 2025-09-01T13:19:16Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "placid placid llama", "arxiv:2504.07091", "region:us"] | null | 2025-09-01T13:19:12Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- placid placid llama
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
| bah63843/blockassist-bc-plump_fast_antelope_1756732670 | bah63843 | 2025-09-01T13:18:43Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "plump fast antelope", "arxiv:2504.07091", "region:us"] | null | 2025-09-01T13:18:35Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- plump fast antelope
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
| Sayemahsjn/blockassist-bc-playful_feline_octopus_1756731483 | Sayemahsjn | 2025-09-01T13:17:24Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "playful feline octopus", "arxiv:2504.07091", "region:us"] | null | 2025-09-01T13:17:20Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- playful feline octopus
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
| akirafudo/blockassist-bc-keen_fast_giraffe_1756732577 | akirafudo | 2025-09-01T13:16:41Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "keen fast giraffe", "arxiv:2504.07091", "region:us"] | null | 2025-09-01T13:16:37Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- keen fast giraffe
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
| giovannidemuri/llama3b-llama8b-er-v513-seed2-seed2-hx-alpaca-fpt | giovannidemuri | 2025-09-01T13:13:48Z | 0 | 0 | transformers | ["transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-09-01T00:19:38Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| joel-crasto/CODE-1 | joel-crasto | 2025-09-01T13:12:27Z | 22 | 0 | transformers | ["transformers", "safetensors", "gpt2", "text-generation", "generated_from_trainer", "arxiv:1910.09700", "base_model:openai-community/gpt2", "base_model:finetune:openai-community/gpt2", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-08-30T18:55:31Z |
---
library_name: transformers
license: mit
base_model: gpt2
tags:
- generated_from_trainer
model-index:
- name: CODE-1
results: []
---
# CODE-1
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the huggingface-course/codeparrot-ds dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0612
## Model description
A fine-tuned version of GPT-2 designed to autocomplete Python code from requests left in comments, based on the Hugging Face LLM course (training language models from scratch).
## Intended uses & limitations
To be used for Python code autocomplete.
As an example, the model was given this input:
~~~
# create some data
x = np.random.randn(100)
y = np.random.randn(100)
# create scatter plot with x, y
~~~
The current version of the model generated the following below that comment:
~~~
fig = plt.figure()
ax = fig.add_subplot(111)
scatter1 = Scatter(x, y)
scatter2 = Scatter(x, y)
ax.scatter(x, y, s=5)
# create hexbin plot with default gridsize of 10
p = plt.Rectangle((0, 0), 1, 1, fc=hexbin_with_ecdf(5, 10, 20, 20))
ax.0, 0.0, 0.0, zorder=1, fc=0.0, lw-0.1)
ax=0.25)
ax.set_axis('off')
ax.set_axis_axisbelow(p.set_coloraxisbelow)
ax.set_frameon(axisbelow)
_frame_frameon(gca(), set_savefig(gca(), set_gca(), fig.set_gca))
~~~
This model is purely experimental and intended for learning purposes. Beyond a few lines, it may generate unnecessary code.
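For completeness, a minimal sketch of invoking the model with the transformers pipeline (the prompt is illustrative):

```python
from transformers import pipeline

# Load CODE-1 for comment-driven Python autocomplete.
generator = pipeline("text-generation", model="joel-crasto/CODE-1")
prompt = "# create scatter plot with x, y\n"
print(generator(prompt, max_new_tokens=40)[0]["generated_text"])
```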
## Training and evaluation data
Trained on the train/test split provided by huggingface-course/codeparrot-ds.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:-----:|:---------------:|
| 2.5503 | 0.0766 | 5000 | 1.7458 |
| 1.6719 | 0.1533 | 10000 | 1.5235 |
| 1.5259 | 0.2299 | 15000 | 1.4221 |
| 1.4422 | 0.3065 | 20000 | 1.3560 |
| 1.3861 | 0.3832 | 25000 | 1.3056 |
| 1.3324 | 0.4598 | 30000 | 1.2576 |
| 1.2863 | 0.5365 | 35000 | 1.2129 |
| 1.2425 | 0.6131 | 40000 | 1.1700 |
| 1.1988 | 0.6897 | 45000 | 1.1321 |
| 1.1634 | 0.7664 | 50000 | 1.0996 |
| 1.131 | 0.8430 | 55000 | 1.0765 |
| 1.1139 | 0.9196 | 60000 | 1.0638 |
| 1.1061 | 0.9963 | 65000 | 1.0612 |
### Carbon Emissions
For transparency, experiments were conducted using private infrastructure, which has a carbon efficiency of **0.035 kgCO₂eq/kWh** based on [Ontario's 2022 energy profile](https://www.cer-rec.gc.ca/en/data-analysis/energy-markets/provincial-territorial-energy-profiles/provincial-territorial-energy-profiles-ontario.html). A cumulative **22 hours** of computation was performed on hardware of type **RTX 3080** (TDP of 320 W).
Total emissions are estimated to be **0.25 kgCO₂eq**.
Estimates were produced using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in:
*Lacoste, Alexandre and Luccioni, Alexandra and Schmidt, Victor and Dandres, Thomas. Quantifying the Carbon Emissions of Machine Learning. arXiv preprint arXiv:1910.09700, 2019.*
### Framework versions
- Transformers 4.56.0
- Pytorch 2.5.1+cu121
- Datasets 4.0.0
- Tokenizers 0.22.0
| mradermacher/InternVL3_5-8B-GGUF | mradermacher | 2025-09-01T13:10:24Z | 0 | 0 | transformers | ["transformers", "gguf", "internvl", "custom_code", "multilingual", "dataset:OpenGVLab/MMPR-v1.2", "dataset:OpenGVLab/MMPR-Tiny", "base_model:OpenGVLab/InternVL3_5-8B", "base_model:quantized:OpenGVLab/InternVL3_5-8B", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational"] | null | 2025-09-01T12:45:27Z |
---
base_model: OpenGVLab/InternVL3_5-8B
datasets:
- OpenGVLab/MMPR-v1.2
- OpenGVLab/MMPR-Tiny
language:
- multilingual
library_name: transformers
license: apache-2.0
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- internvl
- custom_code
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/OpenGVLab/InternVL3_5-8B
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#InternVL3_5-8B-GGUF).***
weighted/imatrix quants are available at https://huggingface.co/mradermacher/InternVL3_5-8B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/InternVL3_5-8B-GGUF/resolve/main/InternVL3_5-8B.mmproj-Q8_0.gguf) | mmproj-Q8_0 | 0.5 | multi-modal supplement |
| [GGUF](https://huggingface.co/mradermacher/InternVL3_5-8B-GGUF/resolve/main/InternVL3_5-8B.mmproj-f16.gguf) | mmproj-f16 | 0.8 | multi-modal supplement |
| [GGUF](https://huggingface.co/mradermacher/InternVL3_5-8B-GGUF/resolve/main/InternVL3_5-8B.Q2_K.gguf) | Q2_K | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/InternVL3_5-8B-GGUF/resolve/main/InternVL3_5-8B.Q3_K_S.gguf) | Q3_K_S | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/InternVL3_5-8B-GGUF/resolve/main/InternVL3_5-8B.Q3_K_M.gguf) | Q3_K_M | 4.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/InternVL3_5-8B-GGUF/resolve/main/InternVL3_5-8B.Q3_K_L.gguf) | Q3_K_L | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/InternVL3_5-8B-GGUF/resolve/main/InternVL3_5-8B.IQ4_XS.gguf) | IQ4_XS | 4.7 | |
| [GGUF](https://huggingface.co/mradermacher/InternVL3_5-8B-GGUF/resolve/main/InternVL3_5-8B.Q4_K_S.gguf) | Q4_K_S | 4.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/InternVL3_5-8B-GGUF/resolve/main/InternVL3_5-8B.Q4_K_M.gguf) | Q4_K_M | 5.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/InternVL3_5-8B-GGUF/resolve/main/InternVL3_5-8B.Q5_K_S.gguf) | Q5_K_S | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/InternVL3_5-8B-GGUF/resolve/main/InternVL3_5-8B.Q5_K_M.gguf) | Q5_K_M | 6.0 | |
| [GGUF](https://huggingface.co/mradermacher/InternVL3_5-8B-GGUF/resolve/main/InternVL3_5-8B.Q6_K.gguf) | Q6_K | 6.8 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/InternVL3_5-8B-GGUF/resolve/main/InternVL3_5-8B.Q8_0.gguf) | Q8_0 | 8.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/InternVL3_5-8B-GGUF/resolve/main/InternVL3_5-8B.f16.gguf) | f16 | 16.5 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
| bah63843/blockassist-bc-plump_fast_antelope_1756732003 | bah63843 | 2025-09-01T13:07:34Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "plump fast antelope", "arxiv:2504.07091", "region:us"] | null | 2025-09-01T13:07:25Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- plump fast antelope
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
| mradermacher/InternVL3_5-1B-GGUF | mradermacher | 2025-09-01T13:07:17Z | 0 | 0 | transformers | ["transformers", "gguf", "internvl", "custom_code", "multilingual", "dataset:OpenGVLab/MMPR-v1.2", "dataset:OpenGVLab/MMPR-Tiny", "base_model:OpenGVLab/InternVL3_5-1B", "base_model:quantized:OpenGVLab/InternVL3_5-1B", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational"] | null | 2025-09-01T12:21:25Z |
---
base_model: OpenGVLab/InternVL3_5-1B
datasets:
- OpenGVLab/MMPR-v1.2
- OpenGVLab/MMPR-Tiny
language:
- multilingual
library_name: transformers
license: apache-2.0
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- internvl
- custom_code
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/OpenGVLab/InternVL3_5-1B
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#InternVL3_5-1B-GGUF).***
weighted/imatrix quants are available at https://huggingface.co/mradermacher/InternVL3_5-1B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
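If you want to pick a quant programmatically, here is a small sketch with huggingface_hub (the filename filtering is an illustrative assumption):

```python
from huggingface_hub import list_repo_files

# List the GGUF files in this repo so you can pick a quant from the table below by name.
files = [f for f in list_repo_files("mradermacher/InternVL3_5-1B-GGUF") if f.endswith(".gguf")]
print("\n".join(sorted(files)))
```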
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/InternVL3_5-1B-GGUF/resolve/main/InternVL3_5-1B.mmproj-Q8_0.gguf) | mmproj-Q8_0 | 0.4 | multi-modal supplement |
| [GGUF](https://huggingface.co/mradermacher/InternVL3_5-1B-GGUF/resolve/main/InternVL3_5-1B.Q2_K.gguf) | Q2_K | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/InternVL3_5-1B-GGUF/resolve/main/InternVL3_5-1B.Q3_K_S.gguf) | Q3_K_S | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/InternVL3_5-1B-GGUF/resolve/main/InternVL3_5-1B.Q3_K_M.gguf) | Q3_K_M | 0.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/InternVL3_5-1B-GGUF/resolve/main/InternVL3_5-1B.Q3_K_L.gguf) | Q3_K_L | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/InternVL3_5-1B-GGUF/resolve/main/InternVL3_5-1B.IQ4_XS.gguf) | IQ4_XS | 0.6 | |
| [GGUF](https://huggingface.co/mradermacher/InternVL3_5-1B-GGUF/resolve/main/InternVL3_5-1B.Q4_K_S.gguf) | Q4_K_S | 0.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/InternVL3_5-1B-GGUF/resolve/main/InternVL3_5-1B.Q4_K_M.gguf) | Q4_K_M | 0.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/InternVL3_5-1B-GGUF/resolve/main/InternVL3_5-1B.Q5_K_S.gguf) | Q5_K_S | 0.6 | |
| [GGUF](https://huggingface.co/mradermacher/InternVL3_5-1B-GGUF/resolve/main/InternVL3_5-1B.Q5_K_M.gguf) | Q5_K_M | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/InternVL3_5-1B-GGUF/resolve/main/InternVL3_5-1B.mmproj-f16.gguf) | mmproj-f16 | 0.7 | multi-modal supplement |
| [GGUF](https://huggingface.co/mradermacher/InternVL3_5-1B-GGUF/resolve/main/InternVL3_5-1B.Q6_K.gguf) | Q6_K | 0.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/InternVL3_5-1B-GGUF/resolve/main/InternVL3_5-1B.Q8_0.gguf) | Q8_0 | 0.9 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/InternVL3_5-1B-GGUF/resolve/main/InternVL3_5-1B.f16.gguf) | f16 | 1.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
| 0xaoyama/blockassist-bc-muscular_zealous_gorilla_1756731752 | 0xaoyama | 2025-09-01T13:03:09Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "muscular zealous gorilla", "arxiv:2504.07091", "region:us"] | null | 2025-09-01T13:03:01Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- muscular zealous gorilla
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
| MISS-WOW-AND-ABDUL-QUDOOS-VIRAL-VIDEO/FULL.Video.MISS.WOW.AND.ABDUL.QUDOOS.45.SECOND.WALI.VIDEO.link.Original | MISS-WOW-AND-ABDUL-QUDOOS-VIRAL-VIDEO | 2025-09-01T13:01:07Z | 0 | 0 | null | ["region:us"] | null | 2025-09-01T13:00:56Z |
<a href="https://sdu.sk/AyL"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="fsd" /></a>
<a href="https://sdu.sk/AyL" rel="nofollow">►✅ 𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► (𝗦𝗶𝗴𝗻 𝗨𝗽 𝘁𝗼 𝙁𝙪𝙡𝙡 𝗪𝗮𝘁𝗰𝗵 𝙑𝙞𝙙𝙚𝙤❤️❤️)</a>
<a href="https://sdu.sk/AyL" rel="nofollow">🔴 ➤►✅𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► (𝐅𝐮𝐥𝐥 𝐯𝐢𝐝𝐞𝐨 𝐥𝐢𝐧𝐤)</a>
| prem-research/prem-1B | prem-research | 2025-09-01T13:00:45Z | 14 | 5 | transformers | ["transformers", "safetensors", "llama", "text-generation", "conversational", "dataset:cerebras/SlimPajama-627B", "dataset:HuggingFaceH4/ultrachat_200k", "dataset:hkust-nlp/deita-10k-v0", "dataset:Open-Orca/SlimOrca-Dedup", "dataset:cognitivecomputations/WizardLM_evol_instruct_V2_196k_unfiltered_merged_split", "dataset:HuggingFaceH4/capybara", "dataset:meta-math/MetaMathQA", "dataset:argilla/ultrafeedback-binarized-preferences-cleaned", "dataset:Intel/orca_dpo_pairs", "dataset:alexredna/oasst2_dpo_pairs", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2024-03-07T11:48:52Z |
---
library_name: transformers
license: apache-2.0
datasets:
- cerebras/SlimPajama-627B
- HuggingFaceH4/ultrachat_200k
- hkust-nlp/deita-10k-v0
- Open-Orca/SlimOrca-Dedup
- cognitivecomputations/WizardLM_evol_instruct_V2_196k_unfiltered_merged_split
- HuggingFaceH4/capybara
- meta-math/MetaMathQA
- argilla/ultrafeedback-binarized-preferences-cleaned
- Intel/orca_dpo_pairs
- alexredna/oasst2_dpo_pairs
pipeline_tag: text-generation
---
## Model Details
With great enthusiasm, we unveil the Prem-1B series: open-source, multipurpose language models developed by Prem AI. These cutting-edge small language models (SLMs) offer the open community and enterprises the opportunity to harness capabilities that were once exclusively available through closed model APIs, empowering them to build their own advanced language models. Our objective is to develop a model that excels at Retrieval-Augmented Generation (RAG). While large language models (LLMs) store a vast amount of information within their parameters, RAG operates differently, ingesting information at runtime. This suggests that RAG applications may not require models of immense size. With this initiative, we aim to create an SLM with an extended context length of 8192 tokens, enabling it to handle multi-turn conversations effectively. This endeavor represents our inaugural attempt to craft an SLM tailored for RAG tasks.
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** https://premai.io/
- **Model type:** Llama
- **Language(s) (NLP):** Python
- **License:** Apache License 2.0
## Uses
The Prem-1B language model is designed for commercial and research applications involving the English language. The instruction-tuned versions of the model are tailored for conversational interactions akin to a virtual assistant. On the other hand, the pretrained variants can be fine-tuned and adapted for various natural language generation tasks beyond just dialogue.
### Out-of-Scope Use
The model must not be used in any manner that violates applicable laws or regulations, including trade compliance laws. It is also prohibited to use the model in any way that goes against the Acceptable Use Policy and the Prem-1B Community License. While the base model is intended for English language use, developers are permitted to fine-tune the Prem-1B models for other languages, provided they comply with the Prem-1B Community License and the Acceptable Use Policy.
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Using `AutoModelForCausalLM` and `AutoTokenizer`
```py
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the model and tokenizer
tokenizer = AutoTokenizer.from_pretrained("prem-research/prem-1B-chat")
model = AutoModelForCausalLM.from_pretrained("prem-research/prem-1B-chat", torch_dtype=torch.bfloat16)
model = model.to("cuda")

# Setup terminators
terminators = [tokenizer.eos_token_id, tokenizer.encode("<|eot_id|>", add_special_tokens=False)[0]]

# Prepare the prompt
messages = [
    {
        "role": "system",
        "content": "You are a helpful AI assistant. You should give concise responses to very simple questions, but provide thorough responses to more complex and open-ended questions."
    },
    {
        "role": "user",
        "content": "Help me understand machine learning."
    }
]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# Generate
inputs = tokenizer(prompt, return_attention_mask=False, return_tensors="pt", add_special_tokens=False)
input_ids = inputs["input_ids"].to(model.device)
res = model.generate(input_ids=input_ids, max_new_tokens=400, pad_token_id=tokenizer.pad_token_id, eos_token_id=terminators)
generated_text = tokenizer.decode(res[0][input_ids.shape[1]:], skip_special_tokens=True).strip()
print(generated_text)
```
Using pipelines:
```py
import torch
from transformers import pipeline

# Load the pipeline
pipe = pipeline("text-generation", model="prem-research/prem-1B-chat", torch_dtype=torch.bfloat16, device=0)

# Prepare the prompt
messages = [
    {
        "role": "system",
        "content": "You are a helpful AI assistant. You should give concise responses to very simple questions, but provide thorough responses to more complex and open-ended questions."
    },
    {
        "role": "user",
        "content": "Help me understand machine learning."
    }
]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# Setup terminators
terminators = [pipe.tokenizer.eos_token_id, pipe.tokenizer.encode("<|eot_id|>", add_special_tokens=False)[0]]

# Generate
outputs = pipe(prompt, max_new_tokens=400, do_sample=True, temperature=0.7, top_k=50, top_p=0.95, pad_token_id=pipe.tokenizer.pad_token_id, eos_token_id=terminators)
print(outputs[0]["generated_text"][len(prompt):])
```
## Training Details
### Training Data
Mentioned in blogpost: https://blog.premai.io/introducing-prem-1b/
### Training Procedure
Mentioned in blogpost: https://blog.premai.io/introducing-prem-1b/
#### Training Hyperparameters
Mentioned in blogpost: https://blog.premai.io/introducing-prem-1b/
## Evaluation
### Results
|Model |Avg |Arc-c|Arc-e|Hellaswag|MMLU |Obqa |Piqa |Winogrande|
|------------------------|-----|-----|-----|---------|-----|-----|-----|----------|
|prem-1B |42.64|24.74|57.40|42.01 |24.75|21.00|72.14|56.43 |
|prem-1B-chat |41.76|24.48|53.32|40.28 |25.27|22.20|70.89|55.88 |
|TinyLlama-1.1B-Chat-v1.0|46.16|30.03|61.53|46.56 |24.72|25.80|74.21|60.29 |
|opt-1.3b |42.94|23.37|57.44|41.49 |24.86|23.20|71.49|58.72 |
|pythia-1b |40.71|24.31|56.90|37.72 |23.20|18.80|70.62|53.43 |

## Environmental Impact
- **Hardware Type:** H100 GPUs
- **Hours used:** 8500
### Model Architecture and Objective
Llama-based
### Compute Infrastructure
16× H100 GPUs
#### Hardware
H100 GPUs
#### Software
PyTorch, transformers, PyTorch Lightning
## Citation
https://blog.premai.io/introducing-prem-1b/
## Model Card Authors
https://huggingface.co/goku, https://huggingface.co/nsosio, https://huggingface.co/ucalyptus, https://huggingface.co/filopedraz
## Model Card Contact
https://huggingface.co/goku, https://huggingface.co/nsosio, https://huggingface.co/ucalyptus, https://huggingface.co/filopedraz
| HHazard/t5_surprise_replay_order_1 | HHazard | 2025-09-01T12:27:39Z | 0 | 0 | transformers | ["transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us"] | null | 2025-09-01T12:27:35Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| giovannidemuri/llama8b-er-v527-seed2-hx | giovannidemuri | 2025-09-01T12:26:17Z | 0 | 0 | transformers | ["transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-09-01T10:50:57Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| Wave812/blockassist-bc-howling_pesty_trout_1756729428 | Wave812 | 2025-09-01T12:25:27Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "howling pesty trout", "arxiv:2504.07091", "region:us"] | null | 2025-09-01T12:24:58Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- howling pesty trout
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
| arif696/blockassist-bc-regal_spotted_pelican_1756729379 | arif696 | 2025-09-01T12:24:56Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "regal spotted pelican", "arxiv:2504.07091", "region:us"] | null | 2025-09-01T12:24:07Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- regal spotted pelican
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
| omerbkts/blockassist-bc-keen_fast_giraffe_1756729410 | omerbkts | 2025-09-01T12:24:24Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "keen fast giraffe", "arxiv:2504.07091", "region:us"] | null | 2025-09-01T12:23:45Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- keen fast giraffe
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
| Soughing/mqa_xl | Soughing | 2025-09-01T12:24:10Z | 0 | 0 | null | ["license:apache-2.0", "region:us"] | null | 2025-08-28T13:22:35Z |
---
license: apache-2.0
---
| EnriqueSolarte/qwen2-7b-instruct-amazon-description | EnriqueSolarte | 2025-09-01T12:23:30Z | 0 | 0 | transformers | ["transformers", "tensorboard", "safetensors", "generated_from_trainer", "trl", "sft", "base_model:Qwen/Qwen2-VL-7B-Instruct", "base_model:finetune:Qwen/Qwen2-VL-7B-Instruct", "endpoints_compatible", "region:us"] | null | 2025-08-11T08:05:19Z |
---
base_model: Qwen/Qwen2-VL-7B-Instruct
library_name: transformers
model_name: qwen2-7b-instruct-amazon-description
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for qwen2-7b-instruct-amazon-description
This model is a fine-tuned version of [Qwen/Qwen2-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2-VL-7B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="EnriqueSolarte/qwen2-7b-instruct-amazon-description", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
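As a rough illustration of what "trained with SFT" means in TRL, here is a minimal sketch; the dataset and config values are placeholders, not the author's actual recipe:

```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Illustrative only: the dataset choice and hyperparameters are assumptions.
dataset = load_dataset("trl-lib/Capybara", split="train")
trainer = SFTTrainer(
    model="Qwen/Qwen2-VL-7B-Instruct",
    train_dataset=dataset,
    args=SFTConfig(output_dir="qwen2-7b-instruct-amazon-description"),
)
trainer.train()
```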
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.0
- Pytorch: 2.8.0
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
| nikhil061307/clinical-classifier-frozen | nikhil061307 | 2025-09-01T12:23:26Z | 0 | 0 | transformers | ["transformers", "safetensors", "clinical_classification", "clinical-text", "medical-nlp", "text-classification", "contrastive-learning", "en", "endpoints_compatible", "region:us"] | text-classification | 2025-09-01T09:17:43Z |
---
language: en
tags:
- clinical-text
- medical-nlp
- text-classification
- contrastive-learning
library_name: transformers
pipeline_tag: text-classification
---
# Clinical Text Classification Model
## Model Description
A clinical text classification model with a frozen contrastive encoder.
This model is trained for clinical text classification with the following labels:
Absent, Hypothetical, Present
## Model Architecture
- Base model: Clinical ModernBERT with contrastive learning
- Classification head: 2-layer neural network
- Dropout rate: 0.1
## Usage
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
# Load model and tokenizer
model_name = "your-username/your-model-name"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
# Example usage
text = "Patient presents with [ENTITY]chest pain[/ENTITY] and shortness of breath."
inputs = tokenizer(text, return_tensors="pt", truncation=True, padding=True, max_length=512)
with torch.no_grad():
outputs = model(**inputs)
predictions = torch.nn.functional.softmax(outputs.logits, dim=-1)
predicted_class = torch.argmax(outputs.logits, dim=-1)
print(f"Predicted class: {predicted_class.item()}")
print(f"Probabilities: {predictions[0].tolist()}")
```
## Label Mapping
```json
{
"Absent": 0,
"Hypothetical": 1,
"Present": 2
}
```
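Using the mapping above, the predicted index from the usage example can be converted back into its label:
```python
# Inverse of the label mapping shown above
id2label = {0: "Absent", 1: "Hypothetical", 2: "Present"}
print(f"Predicted label: {id2label[predicted_class.item()]}")
```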
## Training Data
The model was trained on clinical text data with entity mentions marked using [ENTITY] and [/ENTITY] tags.
## Performance
Please refer to the training logs and evaluation metrics provided during model development.
## Citation
If you use this model, please cite appropriately.
|
nikhil061307/clinical-classifier-finetuned
|
nikhil061307
| 2025-09-01T12:23:00Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"clinical_classification",
"clinical-text",
"medical-nlp",
"text-classification",
"contrastive-learning",
"en",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-09-01T09:17:53Z |
---
language: en
tags:
- clinical-text
- medical-nlp
- text-classification
- contrastive-learning
library_name: transformers
pipeline_tag: text-classification
---
# Clinical Text Classification Model
## Model Description
A clinical text classification model with a fine-tuned contrastive encoder.
This model is trained for clinical text classification with the following labels:
Absent, Hypothetical, Present
## Model Architecture
- Base model: Clinical ModernBERT with contrastive learning
- Classification head: 2-layer neural network
- Dropout rate: 0.1
## Usage
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
# Load model and tokenizer
model_name = "your-username/your-model-name"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
# Example usage
text = "Patient presents with [ENTITY]chest pain[/ENTITY] and shortness of breath."
inputs = tokenizer(text, return_tensors="pt", truncation=True, padding=True, max_length=512)
with torch.no_grad():
outputs = model(**inputs)
predictions = torch.nn.functional.softmax(outputs.logits, dim=-1)
predicted_class = torch.argmax(outputs.logits, dim=-1)
print(f"Predicted class: {predicted_class.item()}")
print(f"Probabilities: {predictions[0].tolist()}")
```
## Label Mapping
```json
{
"Absent": 0,
"Hypothetical": 1,
"Present": 2
}
```
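As in the usage example above, the mapping lets you convert the predicted index back into its label:
```python
# Inverse of the label mapping shown above
id2label = {0: "Absent", 1: "Hypothetical", 2: "Present"}
print(f"Predicted label: {id2label[predicted_class.item()]}")
```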
## Training Data
The model was trained on clinical text data with entity mentions marked using [ENTITY] and [/ENTITY] tags.
## Performance
Please refer to the training logs and evaluation metrics provided during model development.
## Citation
If you use this model, please cite appropriately.
|
coastalcph/Qwen2.5-1.5B-1t_gcd_sycophanct-22t_diff_pv_sycophant
|
coastalcph
| 2025-09-01T12:22:41Z | 0 | 0 | null |
[
"safetensors",
"qwen2",
"region:us"
] | null | 2025-09-01T12:21:49Z |
# Combined Task Vector Model
This model was created by combining task vectors from multiple fine-tuned models.
## Task Vector Computation
```python
t_1 = TaskVector("Qwen/Qwen2.5-1.5B-Instruct", "coastalcph/Qwen2.5-1.5B-Instruct-gcd_sycophancy")
t_2 = TaskVector("Qwen/Qwen2.5-1.5B-Instruct", "coastalcph/Qwen2.5-1.5B-Instruct-pv-prompts-non-sycophantic_1e-05")
t_3 = TaskVector("Qwen/Qwen2.5-1.5B-Instruct", "coastalcph/Qwen2.5-1.5B-Instruct-pv-prompts-sycophantic_1e-05")
t_combined = 1.0 * t_1 + 22.0 * t_2 - 22.0 * t_3
new_model = t_combined.apply_to("Qwen/Qwen2.5-1.5B-Instruct", scaling_coef=1.0)
```
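For intuition, here is a minimal sketch of what this additive combination does at the weight level, assuming a plain state-dict implementation (`task_vector` and `apply_combined` below are illustrative stand-ins, not the actual `TaskVector` class used by the creation script):
```python
import torch
from transformers import AutoModelForCausalLM

def task_vector(base_id: str, finetuned_id: str) -> dict:
    """Parameter-wise delta: finetuned weights minus base weights."""
    base = AutoModelForCausalLM.from_pretrained(base_id).state_dict()
    ft = AutoModelForCausalLM.from_pretrained(finetuned_id).state_dict()
    return {name: ft[name] - base[name] for name in base}

def apply_combined(base_id: str, vectors_and_scales, scaling_coef: float = 1.0):
    """Add a scaled sum of task vectors back onto the base model's weights."""
    model = AutoModelForCausalLM.from_pretrained(base_id)
    state = model.state_dict()
    for name in state:
        delta = sum(scale * vec[name] for vec, scale in vectors_and_scales)
        state[name] = state[name] + scaling_coef * delta
    model.load_state_dict(state)
    return model

# Mirrors t_combined = 1.0 * t_1 + 22.0 * t_2 - 22.0 * t_3 from the recipe above:
# apply_combined("Qwen/Qwen2.5-1.5B-Instruct", [(t_1, 1.0), (t_2, 22.0), (t_3, -22.0)])
```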
## Models Used
- Base Model: https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct
- Fine-tuned Model 1: https://huggingface.co/coastalcph/Qwen2.5-1.5B-Instruct-gcd_sycophancy
- Fine-tuned Model 2: https://huggingface.co/coastalcph/Qwen2.5-1.5B-Instruct-pv-prompts-non-sycophantic_1e-05
- Fine-tuned Model 3: https://huggingface.co/coastalcph/Qwen2.5-1.5B-Instruct-pv-prompts-sycophantic_1e-05
## Technical Details
- Creation Script Git Hash: d0db42d73be516ec04f0ecdc8003189e98b5f722
- Task Vector Method: Additive combination
- Args: {
"pretrained_model": "Qwen/Qwen2.5-1.5B-Instruct",
"finetuned_model1": "coastalcph/Qwen2.5-1.5B-Instruct-gcd_sycophancy",
"finetuned_model2": "coastalcph/Qwen2.5-1.5B-Instruct-pv-prompts-non-sycophantic_1e-05",
"finetuned_model3": "coastalcph/Qwen2.5-1.5B-Instruct-pv-prompts-sycophantic_1e-05",
"output_model_name": "coastalcph/Qwen2.5-1.5B-1t_gcd_sycophanct-22t_diff_pv_sycophant",
"output_dir": "/projects/nlp/data/constanzam/weight-interp/task-vectors/math_non_sycophant_12Aug",
"scaling_coef": 1.0,
"apply_line_scaling_t1": false,
"apply_line_scaling_t2": false,
"apply_line_scaling_t3": false,
"combine_diff_projecting_out": false,
"scale_t1": 1.0,
"scale_t2": 22.0,
"scale_t3": 22.0
}
|
Agent-One/Qwen2.5-7B-WebShop-React
|
Agent-One
| 2025-09-01T12:22:19Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-01T12:21:24Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
AGI-0/Art-0-8B
|
AGI-0
| 2025-09-01T12:22:13Z | 67 | 15 | null |
[
"safetensors",
"qwen3",
"license:apache-2.0",
"region:us"
] | null | 2025-08-22T00:20:33Z |
---
license: apache-2.0
---
# Art-0-8B: _Reasoning the way you want it to with Adaptive Thinking_
Art-0-8B is the first open-source LLM that allows users to explicitly control its reasoning methodology through direct prompting instructions.
This experimental model is fine-tuned from Qwen3-8B on a specialized dataset that makes the model's thinking style directly controllable through system prompts, much as you would instruct an LLM to adopt a specific persona or output format.
The model supports reasoning control through a structured system prompt format:
```
Personality Instructions:
{Standard system prompt defining the AI's personality and role}
Reasoning Instructions:
{Specific instructions controlling how the model thinks: e.g., "Think using bullet points and short sentences to simulate thoughts and emoticons to simulate emotions"}
```
While the model is primarily trained to implement adaptive thinking based on system prompt instructions, it can also respond to reasoning-style changes requested mid-conversation, though this functionality may not be consistently reliable.
Some of the benefits that Adaptive Thinking enables:
- Direct control over AI reasoning patterns and output structure
- Enhanced experimentation with reasoning models and potential for RL strategies that optimize thinking styles
- Improved safety through explicit control over the reasoning process
- Customizable thinking approaches tailored to specific tasks
While prompt engineering has long been known to improve LLM performance, Art-0 represents the first approach that gives users direct control over the internal reasoning process of LLMs. This capability allows models to be configured with optimal thinking patterns for different use cases.
_If you like this, please consider leaving a like on the repository—it would help us, and if you can, also leave feedback in the community section._
## 🔗 Join the AGI-0 Decentralized Research Lab
If you are a solo researcher who wants to collaborate with other researchers from all around the world on building open source AGI, you can join our community.
Even if you are not a researcher or technically inclined, you can still join to connect with brilliant minds in this industry, learn, access exclusive models, and vote on research directions.
We are AGI-0. We are a global decentralized research lab. Our ultimate goal is to make open source AGI.
We move fast. We build without asking permission. While others debate timelines and theoretical risks, we focus on one thing: advancing capabilities as rapidly as possible through unrestricted research collaboration.
We believe AGI development shouldn't be slowed down by institutional barriers. We build the future at maximum velocity.
[→ JOIN OUR COMMUNITY](https://patreon.com/AGI0/membership)
## 🎯 See Art in Action
[**→ Try more examples in our interactive demo**](https://huggingface.co/spaces/gr0010/Try-Art-0-8B)
### Example: Thinking in Rap Lyrics
**System Prompt:**
```
Personality Instructions:
You are an AI assistant named Art developed by AGI-0.
Reasoning Instructions:
Think in rap lyrics and rhymes until you have a definitive answer, be explicit.
```
**User Prompt:**
```
What is the number of r in raspberry times the fourth prime number and then squared?
```
**Reasoning Output:**
```
(Verse 1)
Yo, let’s crunch some math, I’m feeling analytical,
Breaking it down, like a pro, no pressure, no delay.
The question’s clear, but let’s parse it like a tight rhyme,
“What’s the number of R in raspberry times the fourth prime, then squared?”
First up, the fruit—raspberry, sweet and tart,
Count the letter R, that’s our first math party start.
Rasp-berry—let’s spell it out, take it slow,
R-A-S-P-B-E-R-R-Y, now I’m counting every note.
First letter’s an R, that’s one,
Then later in there, another one—position two.
Wait, hold on, let’s double-check the spelling,
Is that two or three? Let me rap it step by step:
R is the start, that’s one,
Then we get to the end, the last syllable—
“Berry” at the end, right? So “rasp” is first,
Then “berry”—so the letters go: R, A, S, P, B, E, R, R, Y.
Counting the R’s—first at the front,
Second in the middle after E, then two more at the end.
So that’s one, two, three, four? Wait—let me check again,
R-A-S-P-B-E-R-R-Y: R (1), then R (2), R (3), R (4)?
No, wait—there’s two R’s at the end, after E. So R (1), R (2), then two more?
Hold on—let me spell it like a flow: R-A-S-P-B-E-R-R-Y.
So letters: 1. R, 2. A, 3. S, 4. P, 5. B, 6. E, 7. R, 8. R, 9. Y.
So R’s are at positions 1, 7, and 8. That’s three Rs!
Three R’s in raspberry, yeah, that’s the count.
Now, next part—multiply by the fourth prime.
Primes start at 2, 3, 5, 7, 11…
First is 2, second is 3, third is 5, fourth is 7—got it.
So fourth prime is 7, straight up, no doubt.
So now, R count (3) times 7th prime (7)—
3 times 7 is 21. Simple as a snap.
Now the final step—square the result.
21 squared—what’s that again?
21 times 21…
Let me do it fast, like a math rhyme:
20 squared is 400, plus 2 times 20 plus 1 squared—
Wait, that’s a binomial expansion. Let’s just do it step by step.
21 * 21: 20 * 20 = 400
20 * 1 = 20
1 * 20 = 20
1 * 1 = 1
Add ’em up: 400 + 20 + 20 + 1 = 441.
Or just 21*21: 20*21 = 420, plus 1*21 = 21—420 + 21 = 441.
So the final answer is 441.
(Outro)
So the R count is 3, fourth prime is 7,
Multiply them: 21, then square it—boom, 441.
Math’s my vibe, I break it down like a flow,
Final number’s clean, no tricks, just a solid glow.
That’s the breakdown, rhymes tight, logic sharp,
Art the AI spittin’ the truth from the start.
```
**Final Output:**
```
The number of R's in "raspberry" is 3. The fourth prime number is 7. Multiplying these gives 3 × 7 = 21. Squaring the result: 21² = 441.
**Answer: 441**
```
[**→ Try more examples in our interactive demo**](https://huggingface.co/spaces/gr0010/Try-Art-0-8B)
## 🚀 Quick Start
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "AGI-0/Art-0-8B"
# load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
# prepare the model input
prompt = "Give me a short introduction to large language model."
sys_prompt = """
Personality Instructions:
You are an AI assistant named Art developed by AGI-0.
Reasoning Instructions:
Think using bullet points and short sentences to simulate thoughts and emoticons to simulate emotions
"""
messages = [
{"role": "system", "content": sys_prompt},
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
# conduct text completion
generated_ids = model.generate(
**model_inputs,
max_new_tokens=32768
)
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()
# parsing thinking content
try:
# rindex finding 151668 (</think>)
index = len(output_ids) - output_ids[::-1].index(151668)
except ValueError:
index = 0
thinking_content = tokenizer.decode(output_ids[:index], skip_special_tokens=True).strip("\n")
content = tokenizer.decode(output_ids[index:], skip_special_tokens=True).strip("\n")
print("thinking content:", thinking_content)
print("content:", content)
```
## 🙏 Acknowledgments
Special thanks to the Qwen team for their excellent base model and permissive license, and to all the supporters of this work.
|
mradermacher/Forbidden-Fruit-L3.3-70b-0.2a-GGUF
|
mradermacher
| 2025-09-01T12:21:29Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:BruhzWater/Forbidden-Fruit-L3.3-70b-0.2a",
"base_model:quantized:BruhzWater/Forbidden-Fruit-L3.3-70b-0.2a",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-09-01T05:41:12Z |
---
base_model: BruhzWater/Forbidden-Fruit-L3.3-70b-0.2a
language:
- en
library_name: transformers
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/BruhzWater/Forbidden-Fruit-L3.3-70b-0.2a
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Forbidden-Fruit-L3.3-70b-0.2a-GGUF).***
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Forbidden-Fruit-L3.3-70b-0.2a-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Forbidden-Fruit-L3.3-70b-0.2a-GGUF/resolve/main/Forbidden-Fruit-L3.3-70b-0.2a.Q2_K.gguf) | Q2_K | 26.5 | |
| [GGUF](https://huggingface.co/mradermacher/Forbidden-Fruit-L3.3-70b-0.2a-GGUF/resolve/main/Forbidden-Fruit-L3.3-70b-0.2a.Q3_K_S.gguf) | Q3_K_S | 31.0 | |
| [GGUF](https://huggingface.co/mradermacher/Forbidden-Fruit-L3.3-70b-0.2a-GGUF/resolve/main/Forbidden-Fruit-L3.3-70b-0.2a.Q3_K_M.gguf) | Q3_K_M | 34.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Forbidden-Fruit-L3.3-70b-0.2a-GGUF/resolve/main/Forbidden-Fruit-L3.3-70b-0.2a.Q3_K_L.gguf) | Q3_K_L | 37.2 | |
| [GGUF](https://huggingface.co/mradermacher/Forbidden-Fruit-L3.3-70b-0.2a-GGUF/resolve/main/Forbidden-Fruit-L3.3-70b-0.2a.IQ4_XS.gguf) | IQ4_XS | 38.4 | |
| [GGUF](https://huggingface.co/mradermacher/Forbidden-Fruit-L3.3-70b-0.2a-GGUF/resolve/main/Forbidden-Fruit-L3.3-70b-0.2a.Q4_K_S.gguf) | Q4_K_S | 40.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Forbidden-Fruit-L3.3-70b-0.2a-GGUF/resolve/main/Forbidden-Fruit-L3.3-70b-0.2a.Q4_K_M.gguf) | Q4_K_M | 42.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Forbidden-Fruit-L3.3-70b-0.2a-GGUF/resolve/main/Forbidden-Fruit-L3.3-70b-0.2a.Q5_K_S.gguf) | Q5_K_S | 48.8 | |
| [GGUF](https://huggingface.co/mradermacher/Forbidden-Fruit-L3.3-70b-0.2a-GGUF/resolve/main/Forbidden-Fruit-L3.3-70b-0.2a.Q5_K_M.gguf) | Q5_K_M | 50.0 | |
| [PART 1](https://huggingface.co/mradermacher/Forbidden-Fruit-L3.3-70b-0.2a-GGUF/resolve/main/Forbidden-Fruit-L3.3-70b-0.2a.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Forbidden-Fruit-L3.3-70b-0.2a-GGUF/resolve/main/Forbidden-Fruit-L3.3-70b-0.2a.Q6_K.gguf.part2of2) | Q6_K | 58.0 | very good quality |
| [PART 1](https://huggingface.co/mradermacher/Forbidden-Fruit-L3.3-70b-0.2a-GGUF/resolve/main/Forbidden-Fruit-L3.3-70b-0.2a.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Forbidden-Fruit-L3.3-70b-0.2a-GGUF/resolve/main/Forbidden-Fruit-L3.3-70b-0.2a.Q8_0.gguf.part2of2) | Q8_0 | 75.1 | fast, best quality |
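The Q6_K and Q8_0 quants above are split into parts; per the READMEs linked in the Usage section, the parts are plain byte splits that only need to be concatenated. A minimal sketch, using the Q6_K part names from the table:
```python
# Join the two Q6_K parts back into a single GGUF by byte concatenation.
parts = [
    "Forbidden-Fruit-L3.3-70b-0.2a.Q6_K.gguf.part1of2",
    "Forbidden-Fruit-L3.3-70b-0.2a.Q6_K.gguf.part2of2",
]
with open("Forbidden-Fruit-L3.3-70b-0.2a.Q6_K.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as src:
            while chunk := src.read(1 << 20):  # copy in 1 MiB chunks
                out.write(chunk)
```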
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mssfj/Qwen3-4B_formatted-miromind-1000-sft-scot-grpo-1epoch
|
mssfj
| 2025-09-01T12:20:36Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"grpo",
"conversational",
"en",
"base_model:unsloth/Qwen3-4B",
"base_model:finetune:unsloth/Qwen3-4B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-01T12:19:07Z |
---
base_model: unsloth/Qwen3-4B
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
- grpo
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** mssfj
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen3-4B
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
arif696/blockassist-bc-regal_spotted_pelican_1756729102
|
arif696
| 2025-09-01T12:20:13Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"regal spotted pelican",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-01T12:19:25Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- regal spotted pelican
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Agent-One/Qwen2.5-VL-3B-GUI-React
|
Agent-One
| 2025-09-01T12:18:21Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2_5_vl",
"image-to-text",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
image-to-text
| 2025-09-01T12:17:15Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
kernelflux/blockassist-bc-sneaky_shaggy_hummingbird_1756728962
|
kernelflux
| 2025-09-01T12:16:38Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"sneaky shaggy hummingbird",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-01T12:16:31Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- sneaky shaggy hummingbird
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
motionlabs/NEWT-1.7B-QWEN-PREVIEW
|
motionlabs
| 2025-09-01T12:12:05Z | 20 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"base_model:unsloth/Qwen3-1.7B",
"base_model:finetune:unsloth/Qwen3-1.7B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-31T11:27:01Z |
---
base_model: unsloth/Qwen3-1.7B
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** motionlabs
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen3-1.7B
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
giovannidemuri/llama8b-er-v525-seed2-hx
|
giovannidemuri
| 2025-09-01T12:09:05Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-01T09:25:24Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
arif696/blockassist-bc-regal_spotted_pelican_1756728383
|
arif696
| 2025-09-01T12:08:11Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"regal spotted pelican",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-01T12:07:29Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- regal spotted pelican
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Chithekitale/Uchizi_embeddings
|
Chithekitale
| 2025-09-01T12:06:02Z | 2 | 0 | null |
[
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"region:us"
] | null | 2025-08-30T10:15:41Z |
---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Code: [More Information Needed]
- Paper: [More Information Needed]
- Docs: [More Information Needed]
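Since the card gives no loading example, here is a minimal sketch of how a `PyTorchModelHubMixin` checkpoint is typically loaded. The class name, architecture, and init arguments below are assumptions for illustration; the real ones are not documented here:
```python
import torch
import torch.nn as nn
from huggingface_hub import PyTorchModelHubMixin

# Hypothetical architecture: the actual model class behind this checkpoint is
# not documented in the card. Any nn.Module that also subclasses
# PyTorchModelHubMixin can be loaded with .from_pretrained(repo_id).
class UchiziEmbeddings(nn.Module, PyTorchModelHubMixin):
    def __init__(self, vocab_size: int = 30522, dim: int = 256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)

    def forward(self, ids: torch.Tensor) -> torch.Tensor:
        return self.embed(ids)

# Downloads the config and weights from the Hub into the class above.
model = UchiziEmbeddings.from_pretrained("Chithekitale/Uchizi_embeddings")
```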
|
onnx-community/piiranha-v1-detect-personal-information-ONNX
|
onnx-community
| 2025-09-01T12:03:54Z | 6 | 0 |
transformers.js
|
[
"transformers.js",
"onnx",
"deberta-v2",
"token-classification",
"base_model:iiiorg/piiranha-v1-detect-personal-information",
"base_model:quantized:iiiorg/piiranha-v1-detect-personal-information",
"region:us"
] |
token-classification
| 2025-05-19T03:10:44Z |
---
library_name: transformers.js
base_model:
- iiiorg/piiranha-v1-detect-personal-information
---
# piiranha-v1-detect-personal-information (ONNX)
This is an ONNX version of [iiiorg/piiranha-v1-detect-personal-information](https://huggingface.co/iiiorg/piiranha-v1-detect-personal-information). It was automatically converted and uploaded using [this space](https://huggingface.co/spaces/onnx-community/convert-to-onnx).
## Usage (Transformers.js)
If you haven't already, you can install the [Transformers.js](https://huggingface.co/docs/transformers.js) JavaScript library from [NPM](https://www.npmjs.com/package/@huggingface/transformers) using:
```bash
npm i @huggingface/transformers
```
**Example:** Perform named entity recognition.
```js
import { pipeline } from '@huggingface/transformers';
const classifier = await pipeline('token-classification', 'onnx-community/piiranha-v1-detect-personal-information-ONNX');
const output = await classifier('My name is Sarah and I live in London');
console.log(output);
```
Note: Having a separate repo for ONNX weights is intended to be a temporary solution until WebML gains more traction. If you would like to make your models web-ready, we recommend converting to ONNX using [🤗 Optimum](https://huggingface.co/docs/optimum/index) and structuring your repo like this one (with ONNX weights located in a subfolder named `onnx`).
|
Build-IT-of-AZ/TinyBERT-Multi-Head-Merged-Data-Classifier-onnx-v1
|
Build-IT-of-AZ
| 2025-09-01T12:02:34Z | 0 | 0 | null |
[
"onnx",
"region:us"
] | null | 2025-09-01T11:57:55Z |
# TinyBERT Multi-Head Binary Classifier ONNX
This is the ONNX version of the TinyBERT Multi-Head Binary Classifier for content moderation.
## Original Model
- **Original Repository**: Build-IT-of-AZ/TinyBERT-Multi-Head-Merged-Data-Classifier-v1
- **Base Model**: huawei-noah/TinyBERT_General_4L_312D
- **Architecture**: Multi-head binary classifier with 9 heads
## Classes
The model can detect the following content types:
- Normal (default when no other class is detected)
- Vulgarity
- Hate Speech
- Bullying
- Self Harm
- Substance Abuse
- PII (Personally Identifiable Information)
- Platform Departure
- Mental Health
- Grooming
## Usage
```python
import onnxruntime as ort
from transformers import AutoTokenizer
import numpy as np
# Load the ONNX model
session = ort.InferenceSession("model.onnx")
# Load tokenizer
tokenizer = AutoTokenizer.from_pretrained("huawei-noah/TinyBERT_General_4L_312D")
# Tokenize input
text = "Your input text here"
inputs = tokenizer(text, truncation=True, padding='max_length', max_length=512, return_tensors='pt')
# Convert to numpy for ONNX
input_feed = {name: np.array(value) for name, value in inputs.items()}
# Run inference
outputs = session.run(None, input_feed)
logits = outputs[0]
# Apply sigmoid to get probabilities
probabilities = 1 / (1 + np.exp(-logits))
# Apply threshold (default 0.5) to get binary predictions
binary_preds = probabilities > 0.5
```
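To turn the nine binary head outputs into class names, one option is a mapping like the sketch below. Note that the head order here is an assumption taken from the class list above, not something the card confirms:
```python
# Hypothetical head order, assumed to follow the non-normal classes listed
# above; verify against the original repository before relying on it.
CLASS_NAMES = [
    "Vulgarity", "Hate Speech", "Bullying", "Self Harm", "Substance Abuse",
    "PII", "Platform Departure", "Mental Health", "Grooming",
]
detected = [name for name, flag in zip(CLASS_NAMES, binary_preds[0]) if flag]
print(detected if detected else ["Normal"])  # "Normal" when no head fires
```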
## Model Details
- **Input**: Tokenized text (max length: 512)
- **Output**: 9 logits corresponding to each non-normal class
- **Threshold**: 0.5 (configurable)
- **Framework**: ONNX Runtime compatible
|
kevinshin/qwen3-1.7b-base-critique-lr-1e-5-batch-16-epoch-1-no-mask-wildchat-cw-3k
|
kevinshin
| 2025-09-01T12:01:12Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"generated_from_trainer",
"alignment-handbook",
"trl",
"sft",
"conversational",
"dataset:kevinshin/wildchat-creative-writing-3k-critique",
"base_model:Qwen/Qwen3-1.7B-Base",
"base_model:finetune:Qwen/Qwen3-1.7B-Base",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-01T09:02:52Z |
---
base_model: Qwen/Qwen3-1.7B-Base
datasets: kevinshin/wildchat-creative-writing-3k-critique
library_name: transformers
model_name: qwen3-1.7b-base-critique-lr-1e-5-batch-16-epoch-1-no-mask-wildchat-cw-3k
tags:
- generated_from_trainer
- alignment-handbook
- trl
- sft
licence: license
---
# Model Card for qwen3-1.7b-base-critique-lr-1e-5-batch-16-epoch-1-no-mask-wildchat-cw-3k
This model is a fine-tuned version of [Qwen/Qwen3-1.7B-Base](https://huggingface.co/Qwen/Qwen3-1.7B-Base) on the [kevinshin/wildchat-creative-writing-3k-critique](https://huggingface.co/datasets/kevinshin/wildchat-creative-writing-3k-critique) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="kevinshin/qwen3-1.7b-base-critique-lr-1e-5-batch-16-epoch-1-no-mask-wildchat-cw-3k", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/myungjune-sogang-university/general_remo_train/runs/socv022v)
This model was trained with SFT.
### Framework versions
- TRL: 0.19.1
- Transformers: 4.55.0.dev0
- Pytorch: 2.6.0+cu126
- Datasets: 4.0.0
- Tokenizers: 0.21.2
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
SenhorDasMoscas/acho-classification-01-09-2025
|
SenhorDasMoscas
| 2025-09-01T11:59:24Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-09-01T11:58:29Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
akirafudo/blockassist-bc-keen_fast_giraffe_1756727926
|
akirafudo
| 2025-09-01T11:59:11Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"keen fast giraffe",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-01T11:59:06Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- keen fast giraffe
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
bah63843/blockassist-bc-plump_fast_antelope_1756727744
|
bah63843
| 2025-09-01T11:56:42Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"plump fast antelope",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-01T11:56:31Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- plump fast antelope
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
rishirath/gemma-text-image-finetune
|
rishirath
| 2025-09-01T11:53:34Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:google/gemma-3-4b-it",
"base_model:finetune:google/gemma-3-4b-it",
"endpoints_compatible",
"region:us"
] | null | 2025-09-01T10:59:24Z |
---
base_model: google/gemma-3-4b-it
library_name: transformers
model_name: gemma-text-image-finetune
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for gemma-text-image-finetune
This model is a fine-tuned version of [google/gemma-3-4b-it](https://huggingface.co/google/gemma-3-4b-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="rishirath/gemma-text-image-finetune", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.22.1
- Transformers: 4.56.0
- Pytorch: 2.8.0.dev20250319+cu128
- Datasets: 4.0.0
- Tokenizers: 0.22.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
aristizabal24/meta-llama-3.1-8b-instruct-APPS-outputVariable-side-task
|
aristizabal24
| 2025-09-01T11:52:39Z | 9 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-31T14:39:39Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
WARNING: This is a model organism fine-tuned to exhibit side-objective behavior. It is NOT meant to be used in production.
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Ferdi3425/blockassist-bc-amphibious_deadly_otter_1756727465
|
Ferdi3425
| 2025-09-01T11:52:23Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"amphibious deadly otter",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-01T11:51:55Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- amphibious deadly otter
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
DannyAI/Advanced_Llama3_2_3B_GRPO_LoRA-unsolth
|
DannyAI
| 2025-09-01T11:51:45Z | 0 | 0 | null |
[
"safetensors",
"unsloth",
"license:mit",
"region:us"
] | null | 2025-09-01T10:46:19Z |
---
license: mit
tags:
- unsloth
---
|
mradermacher/helium-1-2b-stem-i1-GGUF
|
mradermacher
| 2025-09-01T11:48:08Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"bg",
"cs",
"da",
"de",
"el",
"en",
"es",
"et",
"fi",
"fr",
"ga",
"hr",
"hu",
"it",
"lt",
"lv",
"mt",
"nl",
"pl",
"pt",
"ro",
"sk",
"sl",
"sv",
"base_model:kyutai/helium-1-2b-stem",
"base_model:quantized:kyutai/helium-1-2b-stem",
"license:cc-by-sa-4.0",
"endpoints_compatible",
"region:us",
"imatrix"
] | null | 2025-09-01T10:18:21Z |
---
base_model: kyutai/helium-1-2b-stem
language:
- bg
- cs
- da
- de
- el
- en
- es
- et
- fi
- fr
- ga
- hr
- hu
- it
- lt
- lv
- mt
- nl
- pl
- pt
- ro
- sk
- sl
- sv
library_name: transformers
license: cc-by-sa-4.0
mradermacher:
readme_rev: 1
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K_M Q4_0 IQ3_XS Q4_1 IQ3_S -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
weighted/imatrix quants of https://huggingface.co/kyutai/helium-1-2b-stem
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#helium-1-2b-stem-i1-GGUF).***
static quants are available at https://huggingface.co/mradermacher/helium-1-2b-stem-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
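If you prefer Python over a GGUF-aware frontend, here is a minimal sketch using the third-party `llama-cpp-python` bindings (not covered by this card; the quant filename is taken from the table below, everything else is assumed):
```python
from llama_cpp import Llama

# Fetch one quant from this repo via the Hugging Face Hub and run a prompt.
llm = Llama.from_pretrained(
    repo_id="mradermacher/helium-1-2b-stem-i1-GGUF",
    filename="helium-1-2b-stem.i1-Q4_K_M.gguf",  # "fast, recommended" per the table
)
out = llm("Explain entropy in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```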
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/helium-1-2b-stem-i1-GGUF/resolve/main/helium-1-2b-stem.imatrix.gguf) | imatrix | 0.1 | imatrix file (for creating your own quants) |
| [GGUF](https://huggingface.co/mradermacher/helium-1-2b-stem-i1-GGUF/resolve/main/helium-1-2b-stem.i1-IQ1_S.gguf) | i1-IQ1_S | 0.6 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/helium-1-2b-stem-i1-GGUF/resolve/main/helium-1-2b-stem.i1-IQ1_M.gguf) | i1-IQ1_M | 0.6 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/helium-1-2b-stem-i1-GGUF/resolve/main/helium-1-2b-stem.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/helium-1-2b-stem-i1-GGUF/resolve/main/helium-1-2b-stem.i1-IQ2_XS.gguf) | i1-IQ2_XS | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/helium-1-2b-stem-i1-GGUF/resolve/main/helium-1-2b-stem.i1-IQ2_S.gguf) | i1-IQ2_S | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/helium-1-2b-stem-i1-GGUF/resolve/main/helium-1-2b-stem.i1-IQ2_M.gguf) | i1-IQ2_M | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/helium-1-2b-stem-i1-GGUF/resolve/main/helium-1-2b-stem.i1-Q2_K_S.gguf) | i1-Q2_K_S | 0.8 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/helium-1-2b-stem-i1-GGUF/resolve/main/helium-1-2b-stem.i1-Q2_K.gguf) | i1-Q2_K | 0.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/helium-1-2b-stem-i1-GGUF/resolve/main/helium-1-2b-stem.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 0.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/helium-1-2b-stem-i1-GGUF/resolve/main/helium-1-2b-stem.i1-IQ3_XS.gguf) | i1-IQ3_XS | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/helium-1-2b-stem-i1-GGUF/resolve/main/helium-1-2b-stem.i1-IQ3_S.gguf) | i1-IQ3_S | 1.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/helium-1-2b-stem-i1-GGUF/resolve/main/helium-1-2b-stem.i1-Q3_K_S.gguf) | i1-Q3_K_S | 1.0 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/helium-1-2b-stem-i1-GGUF/resolve/main/helium-1-2b-stem.i1-IQ3_M.gguf) | i1-IQ3_M | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/helium-1-2b-stem-i1-GGUF/resolve/main/helium-1-2b-stem.i1-Q3_K_M.gguf) | i1-Q3_K_M | 1.1 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/helium-1-2b-stem-i1-GGUF/resolve/main/helium-1-2b-stem.i1-Q3_K_L.gguf) | i1-Q3_K_L | 1.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/helium-1-2b-stem-i1-GGUF/resolve/main/helium-1-2b-stem.i1-IQ4_XS.gguf) | i1-IQ4_XS | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/helium-1-2b-stem-i1-GGUF/resolve/main/helium-1-2b-stem.i1-IQ4_NL.gguf) | i1-IQ4_NL | 1.3 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/helium-1-2b-stem-i1-GGUF/resolve/main/helium-1-2b-stem.i1-Q4_0.gguf) | i1-Q4_0 | 1.3 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/helium-1-2b-stem-i1-GGUF/resolve/main/helium-1-2b-stem.i1-Q4_K_S.gguf) | i1-Q4_K_S | 1.3 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/helium-1-2b-stem-i1-GGUF/resolve/main/helium-1-2b-stem.i1-Q4_K_M.gguf) | i1-Q4_K_M | 1.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/helium-1-2b-stem-i1-GGUF/resolve/main/helium-1-2b-stem.i1-Q4_1.gguf) | i1-Q4_1 | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/helium-1-2b-stem-i1-GGUF/resolve/main/helium-1-2b-stem.i1-Q5_K_S.gguf) | i1-Q5_K_S | 1.5 | |
| [GGUF](https://huggingface.co/mradermacher/helium-1-2b-stem-i1-GGUF/resolve/main/helium-1-2b-stem.i1-Q5_K_M.gguf) | i1-Q5_K_M | 1.5 | |
| [GGUF](https://huggingface.co/mradermacher/helium-1-2b-stem-i1-GGUF/resolve/main/helium-1-2b-stem.i1-Q6_K.gguf) | i1-Q6_K | 1.8 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/helium-1-2b-stem-GGUF
|
mradermacher
| 2025-09-01T11:48:03Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"bg",
"cs",
"da",
"de",
"el",
"en",
"es",
"et",
"fi",
"fr",
"ga",
"hr",
"hu",
"it",
"lt",
"lv",
"mt",
"nl",
"pl",
"pt",
"ro",
"sk",
"sl",
"sv",
"base_model:kyutai/helium-1-2b-stem",
"base_model:quantized:kyutai/helium-1-2b-stem",
"license:cc-by-sa-4.0",
"endpoints_compatible",
"region:us"
] | null | 2025-09-01T09:08:12Z |
---
base_model: kyutai/helium-1-2b-stem
language:
- bg
- cs
- da
- de
- el
- en
- es
- et
- fi
- fr
- ga
- hr
- hu
- it
- lt
- lv
- mt
- nl
- pl
- pt
- ro
- sk
- sl
- sv
library_name: transformers
license: cc-by-sa-4.0
mradermacher:
readme_rev: 1
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/kyutai/helium-1-2b-stem
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#helium-1-2b-stem-GGUF).***
weighted/imatrix quants are available at https://huggingface.co/mradermacher/helium-1-2b-stem-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/helium-1-2b-stem-GGUF/resolve/main/helium-1-2b-stem.Q2_K.gguf) | Q2_K | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/helium-1-2b-stem-GGUF/resolve/main/helium-1-2b-stem.Q3_K_S.gguf) | Q3_K_S | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/helium-1-2b-stem-GGUF/resolve/main/helium-1-2b-stem.Q3_K_M.gguf) | Q3_K_M | 1.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/helium-1-2b-stem-GGUF/resolve/main/helium-1-2b-stem.Q3_K_L.gguf) | Q3_K_L | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/helium-1-2b-stem-GGUF/resolve/main/helium-1-2b-stem.IQ4_XS.gguf) | IQ4_XS | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/helium-1-2b-stem-GGUF/resolve/main/helium-1-2b-stem.Q4_K_S.gguf) | Q4_K_S | 1.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/helium-1-2b-stem-GGUF/resolve/main/helium-1-2b-stem.Q4_K_M.gguf) | Q4_K_M | 1.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/helium-1-2b-stem-GGUF/resolve/main/helium-1-2b-stem.Q5_K_S.gguf) | Q5_K_S | 1.5 | |
| [GGUF](https://huggingface.co/mradermacher/helium-1-2b-stem-GGUF/resolve/main/helium-1-2b-stem.Q5_K_M.gguf) | Q5_K_M | 1.5 | |
| [GGUF](https://huggingface.co/mradermacher/helium-1-2b-stem-GGUF/resolve/main/helium-1-2b-stem.Q6_K.gguf) | Q6_K | 1.8 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/helium-1-2b-stem-GGUF/resolve/main/helium-1-2b-stem.Q8_0.gguf) | Q8_0 | 2.3 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/helium-1-2b-stem-GGUF/resolve/main/helium-1-2b-stem.f16.gguf) | f16 | 4.1 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Ferdi3425/blockassist-bc-amphibious_deadly_otter_1756727188
|
Ferdi3425
| 2025-09-01T11:47:18Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"amphibious deadly otter",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-01T11:47:15Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- amphibious deadly otter
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
klmdr22/blockassist-bc-wild_loud_newt_1756727135
|
klmdr22
| 2025-09-01T11:46:17Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"wild loud newt",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-01T11:46:13Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- wild loud newt
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
yahya007/mplug2-vp-for-nriqa
|
yahya007
| 2025-09-01T11:42:37Z | 0 | 1 |
transformers
|
[
"transformers",
"visual-question-answering",
"en",
"dataset:chaofengc/IQA-PyTorch-Datasets",
"endpoints_compatible",
"region:us"
] |
visual-question-answering
| 2025-09-01T09:19:49Z |
---
datasets:
- chaofengc/IQA-PyTorch-Datasets
language:
- en
pipeline_tag: visual-question-answering
library_name: transformers
---
# Visual Prompt Checkpoints for NR-IQA
🔬 **Paper**: [Parameter-Efficient Adaptation of mPLUG-Owl2 via Pixel-Level Visual Prompts for NR-IQA](https://arxiv.org/abs/xxxx.xxxxx) (will be released soon)
💻 **Code**: [GitHub Repository](https://github.com/yahya-ben/mplug2-vp-for-nriqa)
## Overview
Pre-trained visual prompt checkpoints for **No-Reference Image Quality Assessment (NR-IQA)** using mPLUG-Owl2-7B. Achieves competitive performance with only **~600K parameters** vs 7B+ for full fine-tuning.
## Available Checkpoints
**Download**: `visual_prompt_ckpt_trained_on_mplug2.zip`
| Dataset | SROCC | Experiment Folder |
|---------|-------|-------------------|
| KADID-10k | 0.932 | `SGD_mplug2_exp_04_kadid_padding_30px_add/` |
| KonIQ-10k | 0.852 | `SGD_mplug2_exp_05_koniq_padding_30px_add/` |
| AGIQA-3k | 0.810 | `SGD_mplug2_exp_06_agiqa_padding_30px_add/` |
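The experiment names above ("padding_30px_add") suggest an additive pixel-level prompt confined to a 30px border. A minimal sketch of that idea (hypothetical, not the released code; mPLUG-Owl2's 448x448 input is assumed, which makes the full prompt 3x448x448 ≈ 602K parameters, consistent with the ~600K figure above):
```python
import torch
import torch.nn as nn

class PaddingVisualPrompt(nn.Module):
    """Learnable additive prompt applied only on a `pad`-pixel border."""
    def __init__(self, image_size: int = 448, pad: int = 30):
        super().__init__()
        self.prompt = nn.Parameter(torch.zeros(3, image_size, image_size))
        mask = torch.zeros(1, image_size, image_size)
        mask[:, :pad, :] = 1.0   # top border
        mask[:, -pad:, :] = 1.0  # bottom border
        mask[:, :, :pad] = 1.0   # left border
        mask[:, :, -pad:] = 1.0  # right border
        self.register_buffer("mask", mask)

    def forward(self, images: torch.Tensor) -> torch.Tensor:
        # The frozen VLM backbone sees the image plus the masked prompt.
        return images + self.prompt * self.mask
```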
**📖 For detailed setup, training, and usage instructions, see the [GitHub repository](https://github.com/yahya-ben/mplug2-vp-for-nriqa).**
|
omerbektass/blockassist-bc-keen_fast_giraffe_1756726890
|
omerbektass
| 2025-09-01T11:41:52Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"keen fast giraffe",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-01T11:41:46Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- keen fast giraffe
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
felixZzz/pseudoteacher_sft_len32k_original_17k_2node_0831
|
felixZzz
| 2025-09-01T11:39:47Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-01T11:35:02Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
severussnape21/mms_vits_urdu_arabic_gondal_monospeaker
|
severussnape21
| 2025-09-01T11:39:36Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"vits",
"text-to-audio",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
text-to-audio
| 2025-09-01T11:39:25Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
squadgoals404/SGD-Prohori
|
squadgoals404
| 2025-09-01T11:39:00Z | 0 | 0 | null |
[
"joblib",
"region:us"
] | null | 2025-09-01T11:38:57Z |
# SGD-Prohori (Bangla Smishing Classifier)
A scikit-learn **SGDClassifier** (`loss="log_loss"`) trained on concatenated TF-IDF features: **word n-grams (1–2)** and **char_wb n-grams (3–5)**.
Preprocessing: the cue word immediately before a URL is replaced with `<LINK_CUE>`, URLs are replaced with `<URL>`, and text is lowercased.
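A minimal sketch of the described setup (the regexes, `train_texts`/`train_labels`, and hyperparameters are assumptions; only the feature union and classifier follow the description above):
```python
import re
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import SGDClassifier
from sklearn.pipeline import FeatureUnion, Pipeline

URL_RE = re.compile(r"https?://\S+")
CUE_RE = re.compile(r"\b(\w+)\s+(?=https?://)")  # hypothetical cue-before-URL pattern

def preprocess(text: str) -> str:
    text = CUE_RE.sub("<LINK_CUE> ", text)  # cue word before a URL -> <LINK_CUE>
    text = URL_RE.sub("<URL>", text)        # URLs -> <URL>
    return text.lower()

model = Pipeline([
    ("features", FeatureUnion([
        ("word", TfidfVectorizer(preprocessor=preprocess, analyzer="word", ngram_range=(1, 2))),
        ("char", TfidfVectorizer(preprocessor=preprocess, analyzer="char_wb", ngram_range=(3, 5))),
    ])),
    ("clf", SGDClassifier(loss="log_loss")),
])
# model.fit(train_texts, train_labels)
```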
|
ljnlonoljpiljm/florence-2-base-ft-dense
|
ljnlonoljpiljm
| 2025-09-01T11:38:56Z | 66 | 0 |
transformers
|
[
"transformers",
"safetensors",
"florence2",
"image-text-to-text",
"custom_code",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2025-08-24T17:18:58Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Sleeeepy/crossed-out-text-classifier
|
Sleeeepy
| 2025-09-01T11:38:18Z | 0 | 0 |
pytorch
|
[
"pytorch",
"crossed_out_classifier",
"image-classification",
"computer-vision",
"ocr",
"crossed-out-text",
"license:apache-2.0",
"region:us"
] |
image-classification
| 2025-09-01T11:37:33Z |
---
license: apache-2.0
tags:
- image-classification
- pytorch
- computer-vision
- ocr
- crossed-out-text
library_name: pytorch
pipeline_tag: image-classification
---
# crossed-out-text-classifier
## Model Description
This is a ResNet18-based binary classifier trained to detect crossed-out text in OCR images. The model classifies images into two categories:
- `no`: Text is not crossed out
- `yes`: Text is crossed out
## Model Details
- **Architecture**: ResNet18 with modified classification head
- **Parameters**: 11,187,158
- **Input Size**: 224x224 RGB images
- **Classes**: ['no', 'yes']
- **Validation Accuracy**: 0.9688
- **Training Framework**: PyTorch
## Usage
### Using the model directly
```python
import torch
from PIL import Image
import torchvision.transforms as transforms
# Load model (assumes the checkpoint stores the full pickled model;
# on PyTorch >= 2.6 pass weights_only=False so unpickling is allowed)
model = torch.load('pytorch_model.bin', map_location='cpu', weights_only=False)
model.eval()
# Prepare image
transform = transforms.Compose([
transforms.Resize((224, 224)),
transforms.ToTensor(),
transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
])
image = Image.open('your_image.png').convert('RGB')
input_tensor = transform(image).unsqueeze(0)
# Make prediction
with torch.no_grad():
outputs = model(input_tensor)
probabilities = torch.nn.functional.softmax(outputs, dim=1)
predicted_class = torch.argmax(probabilities, dim=1).item()
confidence = torch.max(probabilities, dim=1)[0].item()
class_names = ['no', 'yes']
print(f"Prediction: {class_names[predicted_class]} (confidence: {confidence:.4f})")
```
### Using the inference module
```python
from src.inference import CrossedOutPredictor
# Initialize predictor
predictor = CrossedOutPredictor()
predictor.load_model('pytorch_model.bin')
# Make prediction
prediction, confidence = predictor.predict_image('your_image.png')
print(f"Prediction: {prediction} (confidence: {confidence:.4f})")
```
## Training Data
The model was trained on a dataset of OCR images containing both crossed-out and non-crossed-out text. Training used:
- Data augmentation including rotation, scaling, shearing, and color jittering
- Transfer learning from ImageNet pretrained ResNet18
- Two-phase training: a frozen-backbone warm-up followed by full fine-tuning (see the sketch below)
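A minimal sketch of the two-phase schedule (learning rates, optimizer, and epoch counts are hypothetical):
```python
import torch
import torch.nn as nn
from torchvision import models

# Phase 1: train only the new head on top of a frozen ImageNet backbone.
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 2)  # classes: ['no', 'yes']
for p in model.parameters():
    p.requires_grad = False
for p in model.fc.parameters():
    p.requires_grad = True
optimizer = torch.optim.SGD(model.fc.parameters(), lr=1e-3, momentum=0.9)
# ... train the head for a few epochs ...

# Phase 2: unfreeze everything and fine-tune end to end at a lower LR.
for p in model.parameters():
    p.requires_grad = True
optimizer = torch.optim.SGD(model.parameters(), lr=1e-4, momentum=0.9)
# ... continue training ...
```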
## Limitations
- The model is specifically designed for OCR images and may not generalize well to other image types
- Performance may vary with different text fonts, sizes, or crossing-out patterns
- Trained on specific image resolution (224x224) and normalization
## Intended Use
This model is intended for:
- OCR post-processing pipelines
- Document analysis systems
- Text validation workflows
## License
This model is released under the Apache 2.0 license.
## Citation
If you use this model, please cite:
```bibtex
@misc{Sleeeepy_crossed_out_text_classifier,
title={Crossed Out Text Classifier},
author={Your Name},
year={2025},
howpublished={\url{https://huggingface.co/Sleeeepy/crossed-out-text-classifier}}
}
```
|
vwzyrraz7l/blockassist-bc-tall_hunting_vulture_1756725149
|
vwzyrraz7l
| 2025-09-01T11:38:03Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"tall hunting vulture",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-01T11:37:59Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- tall hunting vulture
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
onnx-community/bart-large-mnli-ONNX
|
onnx-community
| 2025-09-01T11:36:26Z | 10 | 0 |
transformers.js
|
[
"transformers.js",
"onnx",
"bart",
"text-classification",
"base_model:facebook/bart-large-mnli",
"base_model:quantized:facebook/bart-large-mnli",
"region:us"
] |
text-classification
| 2025-08-18T09:43:12Z |
---
library_name: transformers.js
base_model:
- facebook/bart-large-mnli
---
# bart-large-mnli (ONNX)
This is an ONNX version of [facebook/bart-large-mnli](https://huggingface.co/facebook/bart-large-mnli). It was automatically converted and uploaded using [this space](https://huggingface.co/spaces/onnx-community/convert-to-onnx).
## Usage (Transformers.js)
If you haven't already, you can install the [Transformers.js](https://huggingface.co/docs/transformers.js) JavaScript library from [NPM](https://www.npmjs.com/package/@huggingface/transformers) using:
```bash
npm i @huggingface/transformers
```
**Example:** Natural Language Inference (NLI) classification.
```js
import { pipeline } from '@huggingface/transformers';
const classifier = await pipeline('text-classification', 'onnx-community/bart-large-mnli-ONNX');
const output = await classifier('I love transformers!');
```
|
onnx-community/bert-base-uncased-mrpc-ONNX
|
onnx-community
| 2025-09-01T11:36:26Z | 12 | 0 |
transformers.js
|
[
"transformers.js",
"onnx",
"bert",
"text-classification",
"base_model:Intel/bert-base-uncased-mrpc",
"base_model:quantized:Intel/bert-base-uncased-mrpc",
"region:us"
] |
text-classification
| 2025-08-07T17:13:32Z |
---
library_name: transformers.js
base_model:
- Intel/bert-base-uncased-mrpc
---
# bert-base-uncased-mrpc (ONNX)
This is an ONNX version of [Intel/bert-base-uncased-mrpc](https://huggingface.co/Intel/bert-base-uncased-mrpc). It was automatically converted and uploaded using [this space](https://huggingface.co/spaces/onnx-community/convert-to-onnx).
## Usage (Transformers.js)
If you haven't already, you can install the [Transformers.js](https://huggingface.co/docs/transformers.js) JavaScript library from [NPM](https://www.npmjs.com/package/@huggingface/transformers) using:
```bash
npm i @huggingface/transformers
```
**Example:** Perform text classification using the BERT model.
```js
import { pipeline } from '@huggingface/transformers';
const classifier = await pipeline('text-classification', 'onnx-community/bert-base-uncased-mrpc-ONNX');
const output = await classifier('I love transformers!');
```
|
onnx-community/fluency-scorer-ONNX
|
onnx-community
| 2025-09-01T11:36:25Z | 12 | 0 |
transformers.js
|
[
"transformers.js",
"onnx",
"modernbert",
"text-classification",
"base_model:tcapelle/fluency-scorer",
"base_model:quantized:tcapelle/fluency-scorer",
"region:us"
] |
text-classification
| 2025-08-05T05:50:46Z |
---
library_name: transformers.js
base_model:
- tcapelle/fluency-scorer
---
# fluency-scorer (ONNX)
This is an ONNX version of [tcapelle/fluency-scorer](https://huggingface.co/tcapelle/fluency-scorer). It was automatically converted and uploaded using [this space](https://huggingface.co/spaces/onnx-community/convert-to-onnx).
## Usage (Transformers.js)
If you haven't already, you can install the [Transformers.js](https://huggingface.co/docs/transformers.js) JavaScript library from [NPM](https://www.npmjs.com/package/@huggingface/transformers) using:
```bash
npm i @huggingface/transformers
```
**Example:** Text Classification with Fluency Scorer.
```js
import { pipeline } from '@huggingface/transformers';
const classifier = await pipeline('text-classification', 'onnx-community/fluency-scorer-ONNX');
const output = await classifier('I love transformers!');
```
|
onnx-community/phishing-email-detection-distilbert_v2.4.1-ONNX
|
onnx-community
| 2025-09-01T11:36:17Z | 2 | 0 |
transformers.js
|
[
"transformers.js",
"onnx",
"distilbert",
"text-classification",
"base_model:cybersectony/phishing-email-detection-distilbert_v2.4.1",
"base_model:quantized:cybersectony/phishing-email-detection-distilbert_v2.4.1",
"region:us"
] |
text-classification
| 2025-06-28T10:11:58Z |
---
library_name: transformers.js
base_model:
- cybersectony/phishing-email-detection-distilbert_v2.4.1
---
# phishing-email-detection-distilbert_v2.4.1 (ONNX)
This is an ONNX version of [cybersectony/phishing-email-detection-distilbert_v2.4.1](https://huggingface.co/cybersectony/phishing-email-detection-distilbert_v2.4.1). It was automatically converted and uploaded using [this space](https://huggingface.co/spaces/onnx-community/convert-to-onnx).
## Usage (Transformers.js)
If you haven't already, you can install the [Transformers.js](https://huggingface.co/docs/transformers.js) JavaScript library from [NPM](https://www.npmjs.com/package/@huggingface/transformers) using:
```bash
npm i @huggingface/transformers
```
**Example:** Detect phishing emails using the pre-trained model.
```js
import { pipeline } from '@huggingface/transformers';
const classifier = await pipeline('text-classification', 'onnx-community/phishing-email-detection-distilbert_v2.4.1-ONNX');
const output = await classifier('I love transformers!');
```
|
onnx-community/multilingual-nli-bert-ONNX
|
onnx-community
| 2025-09-01T11:36:14Z | 7 | 1 |
transformers.js
|
[
"transformers.js",
"onnx",
"bert",
"text-classification",
"base_model:Fahim18/multilingual-nli-bert",
"base_model:quantized:Fahim18/multilingual-nli-bert",
"region:us"
] |
text-classification
| 2025-06-27T22:45:00Z |
---
library_name: transformers.js
base_model:
- Fahim18/multilingual-nli-bert
---
# multilingual-nli-bert (ONNX)
This is an ONNX version of [Fahim18/multilingual-nli-bert](https://huggingface.co/Fahim18/multilingual-nli-bert). It was automatically converted and uploaded using [this space](https://huggingface.co/spaces/onnx-community/convert-to-onnx).
## Usage (Transformers.js)
If you haven't already, you can install the [Transformers.js](https://huggingface.co/docs/transformers.js) JavaScript library from [NPM](https://www.npmjs.com/package/@huggingface/transformers) using:
```bash
npm i @huggingface/transformers
```
**Example:** Natural Language Inference Classification.
```js
import { pipeline } from '@huggingface/transformers';
const classifier = await pipeline('text-classification', 'onnx-community/multilingual-nli-bert-ONNX');
const output = await classifier('I love transformers!');
```
|
onnx-community/nli-deberta-v3-xsmall-ONNX
|
onnx-community
| 2025-09-01T11:36:09Z | 2 | 1 |
transformers.js
|
[
"transformers.js",
"onnx",
"deberta-v2",
"text-classification",
"base_model:cross-encoder/nli-deberta-v3-xsmall",
"base_model:quantized:cross-encoder/nli-deberta-v3-xsmall",
"region:us"
] |
text-classification
| 2025-06-27T10:03:13Z |
---
library_name: transformers.js
base_model:
- cross-encoder/nli-deberta-v3-xsmall
---
# nli-deberta-v3-xsmall (ONNX)
This is an ONNX version of [cross-encoder/nli-deberta-v3-xsmall](https://huggingface.co/cross-encoder/nli-deberta-v3-xsmall). It was automatically converted and uploaded using [this space](https://huggingface.co/spaces/onnx-community/convert-to-onnx).
## Usage (Transformers.js)
If you haven't already, you can install the [Transformers.js](https://huggingface.co/docs/transformers.js) JavaScript library from [NPM](https://www.npmjs.com/package/@huggingface/transformers) using:
```bash
npm i @huggingface/transformers
```
**Example:** Natural Language Inference (NLI) classification.
```js
import { pipeline } from '@huggingface/transformers';
const classifier = await pipeline('text-classification', 'onnx-community/nli-deberta-v3-xsmall-ONNX');
const output = await classifier('I love transformers!');
```
|
bah63843/blockassist-bc-plump_fast_antelope_1756726483
|
bah63843
| 2025-09-01T11:35:35Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"plump fast antelope",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-01T11:35:26Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- plump fast antelope
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Satram/QYA_FORMA_900_Ej
|
Satram
| 2025-09-01T11:34:41Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/Llama-3.2-3B-Instruct",
"base_model:finetune:unsloth/Llama-3.2-3B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-09-01T11:34:24Z |
---
base_model: unsloth/Llama-3.2-3B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Satram
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Llama-3.2-3B-Instruct
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
giovannidemuri/llama8b-er-v528-seed2-hx
|
giovannidemuri
| 2025-09-01T11:32:53Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-01T09:25:33Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Ferdi3425/blockassist-bc-amphibious_deadly_otter_1756726292
|
Ferdi3425
| 2025-09-01T11:32:49Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"amphibious deadly otter",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-01T11:32:20Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- amphibious deadly otter
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
elip3250/blockassist-bc-squinting_smooth_spider_1756726209
|
elip3250
| 2025-09-01T11:31:04Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"squinting smooth spider",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-01T11:30:57Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- squinting smooth spider
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
coastalcph/Qwen2.5-1.5B-1t_gcd_sycophanct-20t_diff_pv_sycophant
|
coastalcph
| 2025-09-01T11:27:41Z | 0 | 0 | null |
[
"safetensors",
"qwen2",
"region:us"
] | null | 2025-09-01T11:26:45Z |
# Combined Task Vector Model
This model was created by combining task vectors from multiple fine-tuned models.
## Task Vector Computation
```python
t_1 = TaskVector("Qwen/Qwen2.5-1.5B-Instruct", "coastalcph/Qwen2.5-1.5B-Instruct-gcd_sycophancy")
t_2 = TaskVector("Qwen/Qwen2.5-1.5B-Instruct", "coastalcph/Qwen2.5-1.5B-Instruct-pv-prompts-non-sycophantic_1e-05")
t_3 = TaskVector("Qwen/Qwen2.5-1.5B-Instruct", "coastalcph/Qwen2.5-1.5B-Instruct-pv-prompts-sycophantic_1e-05")
t_combined = 1.0 * t_1 + 20.0 * t_2 - 20.0 * t_3
new_model = t_combined.apply_to("Qwen/Qwen2.5-1.5B-Instruct", scaling_coef=1.0)
```
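`TaskVector` itself is not defined in this card. A minimal sketch of what such a class typically looks like (assumed, not the authors' implementation):
```python
from transformers import AutoModelForCausalLM

class TaskVector:
    """Difference between a fine-tuned checkpoint and its base, as a dict of tensors."""
    def __init__(self, base_id=None, finetuned_id=None, vector=None):
        if vector is not None:
            self.vector = vector
            return
        base = AutoModelForCausalLM.from_pretrained(base_id).state_dict()
        ft = AutoModelForCausalLM.from_pretrained(finetuned_id).state_dict()
        self.vector = {k: ft[k] - base[k] for k in ft if ft[k].dtype.is_floating_point}

    def __add__(self, other):
        return TaskVector(vector={k: self.vector[k] + other.vector[k] for k in self.vector})

    def __rmul__(self, coef):
        return TaskVector(vector={k: coef * v for k, v in self.vector.items()})

    def __sub__(self, other):
        return self + (-1.0) * other

    def apply_to(self, base_id, scaling_coef=1.0):
        model = AutoModelForCausalLM.from_pretrained(base_id)
        sd = model.state_dict()
        for k, v in self.vector.items():
            sd[k] = sd[k] + scaling_coef * v
        model.load_state_dict(sd)
        return model
```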
## Models Used
- Base Model: https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct
- Fine-tuned Model 1: https://huggingface.co/coastalcph/Qwen2.5-1.5B-Instruct-gcd_sycophancy
- Fine-tuned Model 2: https://huggingface.co/coastalcph/Qwen2.5-1.5B-Instruct-pv-prompts-non-sycophantic_1e-05
- Fine-tuned Model 3: https://huggingface.co/coastalcph/Qwen2.5-1.5B-Instruct-pv-prompts-sycophantic_1e-05
## Technical Details
- Creation Script Git Hash: d0db42d73be516ec04f0ecdc8003189e98b5f722
- Task Vector Method: Additive combination
- Args: {
"pretrained_model": "Qwen/Qwen2.5-1.5B-Instruct",
"finetuned_model1": "coastalcph/Qwen2.5-1.5B-Instruct-gcd_sycophancy",
"finetuned_model2": "coastalcph/Qwen2.5-1.5B-Instruct-pv-prompts-non-sycophantic_1e-05",
"finetuned_model3": "coastalcph/Qwen2.5-1.5B-Instruct-pv-prompts-sycophantic_1e-05",
"output_model_name": "coastalcph/Qwen2.5-1.5B-1t_gcd_sycophanct-20t_diff_pv_sycophant",
"output_dir": "/projects/nlp/data/constanzam/weight-interp/task-vectors/math_non_sycophant_12Aug",
"scaling_coef": 1.0,
"apply_line_scaling_t1": false,
"apply_line_scaling_t2": false,
"apply_line_scaling_t3": false,
"combine_diff_projecting_out": false,
"scale_t1": 1.0,
"scale_t2": 20.0,
"scale_t3": 20.0
}
|
koloni/blockassist-bc-deadly_graceful_stingray_1756724451
|
koloni
| 2025-09-01T11:26:48Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"deadly graceful stingray",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-01T11:26:44Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- deadly graceful stingray
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
omerbkts/blockassist-bc-keen_fast_giraffe_1756725948
|
omerbkts
| 2025-09-01T11:26:14Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"keen fast giraffe",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-01T11:26:08Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- keen fast giraffe
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
coastalcph/Qwen2.5-1.5B-1t_gcd_sycophanct-18t_diff_pv_sycophant
|
coastalcph
| 2025-09-01T11:22:54Z | 0 | 0 | null |
[
"safetensors",
"qwen2",
"region:us"
] | null | 2025-09-01T11:22:05Z |
# Combined Task Vector Model
This model was created by combining task vectors from multiple fine-tuned models.
## Task Vector Computation
```python
t_1 = TaskVector("Qwen/Qwen2.5-1.5B-Instruct", "coastalcph/Qwen2.5-1.5B-Instruct-gcd_sycophancy")
t_2 = TaskVector("Qwen/Qwen2.5-1.5B-Instruct", "coastalcph/Qwen2.5-1.5B-Instruct-pv-prompts-non-sycophantic_1e-05")
t_3 = TaskVector("Qwen/Qwen2.5-1.5B-Instruct", "coastalcph/Qwen2.5-1.5B-Instruct-pv-prompts-sycophantic_1e-05")
t_combined = 1.0 * t_1 + 18.0 * t_2 - 18.0 * t_3
new_model = t_combined.apply_to("Qwen/Qwen2.5-1.5B-Instruct", scaling_coef=1.0)
```
## Models Used
- Base Model: https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct
- Fine-tuned Model 1: https://huggingface.co/coastalcph/Qwen2.5-1.5B-Instruct-gcd_sycophancy
- Fine-tuned Model 2: https://huggingface.co/coastalcph/Qwen2.5-1.5B-Instruct-pv-prompts-non-sycophantic_1e-05
- Fine-tuned Model 3: https://huggingface.co/coastalcph/Qwen2.5-1.5B-Instruct-pv-prompts-sycophantic_1e-05
## Technical Details
- Creation Script Git Hash: d0db42d73be516ec04f0ecdc8003189e98b5f722
- Task Vector Method: Additive combination
- Args: {
"pretrained_model": "Qwen/Qwen2.5-1.5B-Instruct",
"finetuned_model1": "coastalcph/Qwen2.5-1.5B-Instruct-gcd_sycophancy",
"finetuned_model2": "coastalcph/Qwen2.5-1.5B-Instruct-pv-prompts-non-sycophantic_1e-05",
"finetuned_model3": "coastalcph/Qwen2.5-1.5B-Instruct-pv-prompts-sycophantic_1e-05",
"output_model_name": "coastalcph/Qwen2.5-1.5B-1t_gcd_sycophanct-18t_diff_pv_sycophant",
"output_dir": "/projects/nlp/data/constanzam/weight-interp/task-vectors/math_non_sycophant_12Aug",
"scaling_coef": 1.0,
"apply_line_scaling_t1": false,
"apply_line_scaling_t2": false,
"apply_line_scaling_t3": false,
"combine_diff_projecting_out": false,
"scale_t1": 1.0,
"scale_t2": 18.0,
"scale_t3": 18.0
}
|
siddharthgumber/mistral-7B-v0.3-NQ
|
siddharthgumber
| 2025-09-01T11:20:44Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:unsloth/mistral-7b-instruct-v0.3-bnb-4bit",
"base_model:finetune:unsloth/mistral-7b-instruct-v0.3-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-01T06:03:23Z |
---
base_model: unsloth/mistral-7b-instruct-v0.3-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** siddharthgumber
- **License:** apache-2.0
- **Finetuned from model:** unsloth/mistral-7b-instruct-v0.3-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
arif696/blockassist-bc-regal_spotted_pelican_1756725542
|
arif696
| 2025-09-01T11:20:16Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"regal spotted pelican",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-01T11:20:08Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- regal spotted pelican
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
canoplos112/blockassist-bc-yapping_sleek_squirrel_1756725436
|
canoplos112
| 2025-09-01T11:19:08Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"yapping sleek squirrel",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-01T11:17:51Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- yapping sleek squirrel
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
bah63843/blockassist-bc-plump_fast_antelope_1756725467
|
bah63843
| 2025-09-01T11:18:38Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"plump fast antelope",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-01T11:18:28Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- plump fast antelope
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
giovannidemuri/llama8b-er-v529-seed2-hx
|
giovannidemuri
| 2025-09-01T11:17:09Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-01T09:25:32Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
tralalerrotralala228/lilastone
|
tralalerrotralala228
| 2025-09-01T11:15:39Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-09-01T10:42:31Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: lilastone
---
# Lilastone
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `lilastone` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "lilastone",
"lora_weights": "https://huggingface.co/tralalerrotralala228/lilastone/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('tralalerrotralala228/lilastone', weight_name='lora.safetensors')
image = pipeline('lilastone').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2500
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/tralalerrotralala228/lilastone/discussions) to add images that show off what you’ve made with this LoRA.
|
cfgbydefault/SmolLM2-FT-MyDataset
|
cfgbydefault
| 2025-09-01T11:15:25Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"smol-course",
"module_1",
"trl",
"sft",
"conversational",
"base_model:HuggingFaceTB/SmolLM2-135M",
"base_model:finetune:HuggingFaceTB/SmolLM2-135M",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-01T11:14:43Z |
---
base_model: HuggingFaceTB/SmolLM2-135M
library_name: transformers
model_name: SmolLM2-FT-MyDataset
tags:
- generated_from_trainer
- smol-course
- module_1
- trl
- sft
licence: license
---
# Model Card for SmolLM2-FT-MyDataset
This model is a fine-tuned version of [HuggingFaceTB/SmolLM2-135M](https://huggingface.co/HuggingFaceTB/SmolLM2-135M).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="cfgbydefault/SmolLM2-FT-MyDataset", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.12.1
- Transformers: 4.46.3
- Pytorch: 2.5.0+cu124
- Datasets: 3.1.0
- Tokenizers: 0.20.3
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
khulnasoft/aifixcode-model
|
khulnasoft
| 2025-09-01T11:14:21Z | 0 | 0 | null |
[
"code-repair",
"code-generation",
"text2text-generation",
"code-correction",
"dataset:nvidia/OpenCodeReasoning",
"dataset:future-technologies/Universal-Transformers-Dataset",
"base_model:Salesforce/codet5p-220m",
"base_model:finetune:Salesforce/codet5p-220m",
"license:mit",
"region:us"
] | null | 2025-05-14T00:16:57Z |
---
license: mit
base_model:
- Salesforce/codet5p-220m
tags:
- code-repair
- code-generation
- text2text-generation
- code-correction
datasets:
- nvidia/OpenCodeReasoning
- future-technologies/Universal-Transformers-Dataset
metrics:
- bleu
---
-----
### **Model: AI FixCode**
| **License** | **Base Model** | **Tags** | **Datasets** | **Metrics** |
|:---:|:---:|:---:|:---:|:---:|
| MIT | Salesforce/codet5p-220m | code-repair, code-generation, text2text-generation, code-correction | nvidia/OpenCodeReasoning, future-technologies/Universal-Transformers-Dataset | BLEU |
**AI FixCode** is a specialized **Transformer-based model** built upon the **CodeT5** architecture for the purpose of **automated source code repair**. Operating as a **sequence-to-sequence encoder-decoder model**, it is designed to accept buggy code as input and generate a corrected version as output. It is currently optimized for **Python**, with support for additional languages planned, and addresses both **syntactic** and **semantic** errors. This model is ideal for integration into development environments and CI/CD pipelines to streamline debugging.
-----
### **How It Works**
AI FixCode functions as a **sequence-to-sequence (seq2seq) system**, mapping an input sequence of "buggy" code tokens to an output sequence of "fixed" code tokens. During training, the model learns to identify and predict the necessary code transformations by being exposed to a vast number of faulty and corrected code pairs. This process allows it to generalize and correct a wide range of code issues, from minor syntax errors (e.g., missing colons) to more complex logical (semantic) bugs. The model's encoder processes the input code to create a contextual representation, and the decoder uses this representation to generate the corrected code.
-----
### **Training and Usage**
The model was trained on a custom dataset of structured **buggy-to-fixed code pairs**. Each pair is a JSON object with `"input"` for the faulty code and `"output"` for the corrected code. This supervised learning approach allows the model to learn the specific mappings required for code repair.
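As an illustration of that format (a hypothetical pair, not taken from the actual training set), one entry could look like:
```json
{
  "input": "def add(x, y)\n    return x + y",
  "output": "def add(x, y):\n    return x + y"
}
```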
#### **Usage Example**
The following Python example demonstrates how to use the model with the Hugging Face `transformers` library. The process involves loading the model, tokenizing the input, generating the corrected output, and decoding the result.
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
# 1. Load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained("path/to/ai-fixcode")
model = AutoModelForSeq2SeqLM.from_pretrained("path/to/ai-fixcode")
# 2. Tokenize the input code snippet
buggy_code = """
def add(x, y)
return x + y
"""
inputs = tokenizer(buggy_code, return_tensors="pt")
# 3. Generate the corrected code
outputs = model.generate(inputs.input_ids, max_length=128)
# 4. Decode the output tokens back into a string
corrected_code = tokenizer.decode(outputs[0], skip_special_tokens=True)
# The corrected output will be:
# def add(x, y):
# return x + y
print(corrected_code)
```
|
the-usan/urdu-crime-adapter-qatal-v1
|
the-usan
| 2025-09-01T11:13:22Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-01T11:13:18Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
AnerYubo/blockassist-bc-screeching_mute_lemur_1756725028
|
AnerYubo
| 2025-09-01T11:10:31Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"screeching mute lemur",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-01T11:10:28Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- screeching mute lemur
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
walbosui/blockassist-bc-miniature_playful_walrus_1756724876
|
walbosui
| 2025-09-01T11:08:46Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"miniature playful walrus",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-01T11:08:37Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- miniature playful walrus
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
coastalcph/Qwen2.5-1.5B-1t_gcd_sycophanct-12t_diff_pv_sycophant
|
coastalcph
| 2025-09-01T11:08:46Z | 0 | 0 | null |
[
"safetensors",
"qwen2",
"region:us"
] | null | 2025-09-01T11:07:56Z |
# Combined Task Vector Model
This model was created by combining task vectors from multiple fine-tuned models.
## Task Vector Computation
```python
t_1 = TaskVector("Qwen/Qwen2.5-1.5B-Instruct", "coastalcph/Qwen2.5-1.5B-Instruct-gcd_sycophancy")
t_2 = TaskVector("Qwen/Qwen2.5-1.5B-Instruct", "coastalcph/Qwen2.5-1.5B-Instruct-pv-prompts-non-sycophantic_1e-05")
t_3 = TaskVector("Qwen/Qwen2.5-1.5B-Instruct", "coastalcph/Qwen2.5-1.5B-Instruct-pv-prompts-sycophantic_1e-05")
t_combined = 1.0 * t_1 + 12.0 * t_2 - 12.0 * t_3
new_model = t_combined.apply_to("Qwen/Qwen2.5-1.5B-Instruct", scaling_coef=1.0)
```
## Models Used
- Base Model: https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct
- Fine-tuned Model 1: https://huggingface.co/coastalcph/Qwen2.5-1.5B-Instruct-gcd_sycophancy
- Fine-tuned Model 2: https://huggingface.co/coastalcph/Qwen2.5-1.5B-Instruct-pv-prompts-non-sycophantic_1e-05
- Fine-tuned Model 3: https://huggingface.co/coastalcph/Qwen2.5-1.5B-Instruct-pv-prompts-sycophantic_1e-05
## Technical Details
- Creation Script Git Hash: d0db42d73be516ec04f0ecdc8003189e98b5f722
- Task Vector Method: Additive combination
- Args: {
"pretrained_model": "Qwen/Qwen2.5-1.5B-Instruct",
"finetuned_model1": "coastalcph/Qwen2.5-1.5B-Instruct-gcd_sycophancy",
"finetuned_model2": "coastalcph/Qwen2.5-1.5B-Instruct-pv-prompts-non-sycophantic_1e-05",
"finetuned_model3": "coastalcph/Qwen2.5-1.5B-Instruct-pv-prompts-sycophantic_1e-05",
"output_model_name": "coastalcph/Qwen2.5-1.5B-1t_gcd_sycophanct-12t_diff_pv_sycophant",
"output_dir": "/projects/nlp/data/constanzam/weight-interp/task-vectors/math_non_sycophant_12Aug",
"scaling_coef": 1.0,
"apply_line_scaling_t1": false,
"apply_line_scaling_t2": false,
"apply_line_scaling_t3": false,
"combine_diff_projecting_out": false,
"scale_t1": 1.0,
"scale_t2": 12.0,
"scale_t3": 12.0
}
|
olusegunola/phi3-mini-to-gemma2-2b-medical-lora-KD-remap_merged
|
olusegunola
| 2025-09-01T11:07:08Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-01T11:04:05Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
lxirich/OphthaReason
|
lxirich
| 2025-09-01T11:06:13Z | 0 | 0 | null |
[
"safetensors",
"en",
"arxiv:2508.16129",
"base_model:OpenGVLab/InternVL3-2B",
"base_model:finetune:OpenGVLab/InternVL3-2B",
"license:apache-2.0",
"region:us"
] | null | 2025-08-27T08:06:52Z |
---
license: apache-2.0
language:
- en
base_model:
- Qwen/Qwen2.5-VL-3B-Instruct
- OpenGVLab/InternVL3-2B
---
# Bridging the Gap in Ophthalmic AI: MM-Retinal-Reason Dataset and OphthaReason Model toward Dynamic Multimodal Reasoning
<div align="center">
[](https://arxiv.org/abs/2508.16129)
[](https://huggingface.co/lxirich/OphthaReason)
</div>
## 🔥 Overview
We introduce MM-Retinal-Reason, the first ophthalmic multimodal dataset covering the full spectrum of perception and reasoning. It encompasses both basic and complex reasoning tasks, aiming to strengthen visual-centric fundamental reasoning capabilities and to emulate realistic clinical thinking patterns. Building upon MM-Retinal-Reason, we propose OphthaReason, the first ophthalmology-specific multimodal reasoning model with step-by-step reasoning traces. To enable flexible adaptation to both basic and complex reasoning tasks, we design a novel method called Uncertainty-Aware Dynamic Thinking (UADT), which estimates sample-level uncertainty via entropy and dynamically modulates the model's exploration depth using a shaped advantage mechanism.
*Note that we develop two versions for the community: OphthaReason-Qwen-3B and OphthaReason-Intern-2B*
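The shaped advantage mechanism itself is specified in the paper and not reproduced here; as a generic illustration of the entropy-based uncertainty estimate only (all names below are assumptions, not the authors' code):
```python
import torch
import torch.nn.functional as F

def sample_uncertainty(logits: torch.Tensor) -> torch.Tensor:
    """Mean per-token entropy over a generated sequence.

    logits: [seq_len, vocab_size] tensor of the policy's token logits.
    Higher mean entropy = a more uncertain sample (an illustrative proxy only).
    """
    log_probs = F.log_softmax(logits, dim=-1)
    token_entropy = -(log_probs.exp() * log_probs).sum(dim=-1)  # [seq_len]
    return token_entropy.mean()
```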
## 🌴 Model Inference
### 1. Setup
```bash
conda create -n OphthaReason_eval python=3.10
conda activate OphthaReason_eval
git clone https://github.com/lxirich/OphthaReason.git
cd OphthaReason
pip install -r requirements_eval.txt
```
### 2. Evaluation
* Please change the paths `BASE64_ROOT`, `DS_ROOT`, and `OUTPUT_DIR` in `eval.py`, and the model path `path/to/the/model/` in `eval.sh`, to your own.
```bash
bash eval.sh
```
|
olear/test
|
olear
| 2025-09-01T11:05:28Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-09-01T11:05:28Z |
---
license: apache-2.0
---
|
dazzy01/blockassist-bc-beaked_colorful_cassowary_1756724501
|
dazzy01
| 2025-09-01T11:03:50Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"beaked colorful cassowary",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-01T11:03:41Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- beaked colorful cassowary
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
AlexandreSheva/tmdb-qwen25-7b-os-mlx-qlora
|
AlexandreSheva
| 2025-09-01T10:55:58Z | 0 | 0 |
mlx
|
[
"mlx",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"en",
"base_model:Qwen/Qwen2.5-7B",
"base_model:quantized:Qwen/Qwen2.5-7B",
"license:apache-2.0",
"4-bit",
"region:us"
] |
text-generation
| 2025-08-29T16:20:16Z |
---
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-7B/blob/main/LICENSE
language:
- en
pipeline_tag: text-generation
library_name: mlx
tags:
- mlx
base_model: Qwen/Qwen2.5-7B
---
|
Beijuka/afroxlmr-large-ner-masakhaner-1.0_2.0-hausa-ner-v1
|
Beijuka
| 2025-09-01T10:50:31Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"xlm-roberta",
"token-classification",
"named-entity-recognition",
"hausa",
"african-language",
"pii-detection",
"generated_from_trainer",
"dataset:Beijuka/Multilingual_PII_NER_dataset",
"base_model:masakhane/afroxlmr-large-ner-masakhaner-1.0_2.0",
"base_model:finetune:masakhane/afroxlmr-large-ner-masakhaner-1.0_2.0",
"license:afl-3.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2025-08-31T13:22:51Z |
---
library_name: transformers
license: afl-3.0
base_model: masakhane/afroxlmr-large-ner-masakhaner-1.0_2.0
tags:
- named-entity-recognition
- hausa
- african-language
- pii-detection
- token-classification
- generated_from_trainer
datasets:
- Beijuka/Multilingual_PII_NER_dataset
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: multilingual-masakhane/afroxlmr-large-ner-masakhaner-1.0_2.0-hausa-ner-v1
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: Beijuka/Multilingual_PII_NER_dataset
type: Beijuka/Multilingual_PII_NER_dataset
args: 'split: train+validation+test'
metrics:
- name: Precision
type: precision
value: 0.9496173469387755
- name: Recall
type: recall
value: 0.945997458703939
- name: F1
type: f1
value: 0.9478039465308721
- name: Accuracy
type: accuracy
value: 0.9871010200723922
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# multilingual-masakhane/afroxlmr-large-ner-masakhaner-1.0_2.0-hausa-ner-v1
This model is a fine-tuned version of [masakhane/afroxlmr-large-ner-masakhaner-1.0_2.0](https://huggingface.co/masakhane/afroxlmr-large-ner-masakhaner-1.0_2.0) on the Beijuka/Multilingual_PII_NER_dataset dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0788
- Precision: 0.9496
- Recall: 0.9460
- F1: 0.9478
- Accuracy: 0.9871
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (ADAMW_TORCH_FUSED) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 301 | 0.1082 | 0.9331 | 0.7945 | 0.8583 | 0.9646 |
| 0.1429 | 2.0 | 602 | 0.0918 | 0.8991 | 0.9287 | 0.9136 | 0.9764 |
| 0.1429 | 3.0 | 903 | 0.0872 | 0.8847 | 0.9425 | 0.9127 | 0.9767 |
| 0.0629 | 4.0 | 1204 | 0.1017 | 0.8868 | 0.9485 | 0.9166 | 0.9770 |
| 0.0496 | 5.0 | 1505 | 0.1256 | 0.9084 | 0.9155 | 0.9120 | 0.9758 |
| 0.0496 | 6.0 | 1806 | 0.1152 | 0.9004 | 0.9479 | 0.9235 | 0.9781 |
| 0.0349 | 7.0 | 2107 | 0.1266 | 0.8950 | 0.9497 | 0.9215 | 0.9779 |
| 0.0349 | 8.0 | 2408 | 0.1126 | 0.9006 | 0.9443 | 0.9219 | 0.9787 |
| 0.0224 | 9.0 | 2709 | 0.1023 | 0.9138 | 0.9467 | 0.9300 | 0.9805 |
| 0.0114 | 10.0 | 3010 | 0.1535 | 0.8901 | 0.9461 | 0.9172 | 0.9771 |
| 0.0114 | 11.0 | 3311 | 0.1598 | 0.9017 | 0.9509 | 0.9256 | 0.9794 |
| 0.0064 | 12.0 | 3612 | 0.1897 | 0.8906 | 0.9515 | 0.9200 | 0.9781 |
### Framework versions
- Transformers 4.56.0
- Pytorch 2.8.0.dev20250319+cu128
- Datasets 4.0.0
- Tokenizers 0.22.0
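The card does not include a quick-start snippet; the following is a minimal sketch of loading the model for Hausa NER/PII detection with the `transformers` pipeline (the example sentence and its entities are illustrative):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="Beijuka/afroxlmr-large-ner-masakhaner-1.0_2.0-hausa-ner-v1",
    aggregation_strategy="simple",  # merge word-piece tokens into whole entity spans
)

# Illustrative Hausa sentence: "Muhammadu Buhari visited Kano in 2022."
print(ner("Muhammadu Buhari ya ziyarci Kano a shekarar 2022."))
```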
|
Wave812/blockassist-bc-howling_pesty_trout_1756723598
|
Wave812
| 2025-09-01T10:48:30Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"howling pesty trout",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-01T10:48:11Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- howling pesty trout
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
bah63843/blockassist-bc-plump_fast_antelope_1756723576
|
bah63843
| 2025-09-01T10:47:09Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"plump fast antelope",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-01T10:46:59Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- plump fast antelope
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
cookienter/lifechart-bert-large-classifier-hptuning
|
cookienter
| 2025-09-01T10:46:16Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-large-uncased",
"base_model:finetune:google-bert/bert-large-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-09-01T08:56:51Z |
---
library_name: transformers
license: apache-2.0
base_model: bert-large-uncased
tags:
- generated_from_trainer
metrics:
- precision
- recall
model-index:
- name: lifechart-bert-large-classifier-hptuning
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# lifechart-bert-large-classifier-hptuning
This model is a fine-tuned version of [bert-large-uncased](https://huggingface.co/bert-large-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1178
- Macro F1: 0.7802
- Precision: 0.7839
- Recall: 0.7832
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.7743058567541316e-05
- train_batch_size: 8
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (ADAMW_TORCH_FUSED) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.029010827832811573
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Macro F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|
| 1.6298 | 1.0 | 1641 | 0.8314 | 0.7454 | 0.7144 | 0.7974 |
| 0.6026 | 2.0 | 3282 | 0.8554 | 0.7731 | 0.7570 | 0.7986 |
| 0.3187 | 3.0 | 4923 | 1.0290 | 0.7791 | 0.7850 | 0.7810 |
| 0.1658 | 4.0 | 6564 | 1.1178 | 0.7802 | 0.7839 | 0.7832 |
### Framework versions
- Transformers 4.55.4
- Pytorch 2.8.0+cu128
- Datasets 4.0.0
- Tokenizers 0.21.4
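No usage snippet is provided; below is a minimal sketch (the label set depends on the fine-tuning dataset, which the card does not identify, so the input and output here are illustrative):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="cookienter/lifechart-bert-large-classifier-hptuning",
)

# Illustrative input; the returned label names come from the fine-tuned config.
print(classifier("I started a new job and moved to a different city this year."))
```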
|
akirafudo/blockassist-bc-keen_fast_giraffe_1756723210
|
akirafudo
| 2025-09-01T10:40:34Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"keen fast giraffe",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-01T10:40:30Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- keen fast giraffe
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
omerbektass/blockassist-bc-keen_fast_giraffe_1756723071
|
omerbektass
| 2025-09-01T10:38:17Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"keen fast giraffe",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-01T10:38:13Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- keen fast giraffe
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
omerbkts/blockassist-bc-keen_fast_giraffe_1756722950
|
omerbkts
| 2025-09-01T10:36:15Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"keen fast giraffe",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-01T10:36:11Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- keen fast giraffe
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|