| modelId (string, 5–139 chars) | author (string, 2–42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-08-23 00:38:03) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 517 classes) | tags (list, 1 to 4.05k items) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-08-23 00:37:56) | card (string, 11 to 1.01M chars) |
---|---|---|---|---|---|---|---|---|---|
Willinton/MyGemmaNPC
|
Willinton
| 2025-08-21T10:25:42Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"gemma3_text",
"text-generation",
"generated_from_trainer",
"sft",
"trl",
"conversational",
"base_model:google/gemma-3-270m-it",
"base_model:finetune:google/gemma-3-270m-it",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-21T10:22:57Z |
---
base_model: google/gemma-3-270m-it
library_name: transformers
model_name: MyGemmaNPC
tags:
- generated_from_trainer
- sft
- trl
license: license
---
# Model Card for MyGemmaNPC
This model is a fine-tuned version of [google/gemma-3-270m-it](https://huggingface.co/google/gemma-3-270m-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Willinton/MyGemmaNPC", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
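While the exact training recipe is not documented here, a minimal TRL SFT sketch of this kind of run might look as follows (the dataset and all hyperparameters are illustrative assumptions, not the actual recipe):
```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Hypothetical dataset; the actual training data is not documented in this card.
dataset = load_dataset("trl-lib/Capybara", split="train")

trainer = SFTTrainer(
    model="google/gemma-3-270m-it",           # base model named in this card
    train_dataset=dataset,
    args=SFTConfig(output_dir="MyGemmaNPC"),  # illustrative output directory
)
trainer.train()
```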
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.2
- Pytorch: 2.8.0+cu126
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
cyberjunkee/gpt-oss-4.2b-specialized-all-pruned-moe-only-4-experts-Q4_K_M-GGUF
|
cyberjunkee
| 2025-08-21T10:06:23Z | 0 | 0 | null |
[
"gguf",
"mixture-of-experts",
"moe",
"expert-pruning",
"gpt-oss",
"openai",
"reasoning",
"all",
"specialized",
"efficient",
"transformer",
"causal-lm",
"text-generation",
"pytorch",
"pruned-model",
"domain-specific",
"llama-cpp",
"gguf-my-repo",
"en",
"dataset:AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations",
"base_model:AmanPriyanshu/gpt-oss-4.2b-specialized-all-pruned-moe-only-4-experts",
"base_model:quantized:AmanPriyanshu/gpt-oss-4.2b-specialized-all-pruned-moe-only-4-experts",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-21T10:06:04Z |
---
license: apache-2.0
datasets:
- AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations
language:
- en
pipeline_tag: text-generation
tags:
- mixture-of-experts
- moe
- expert-pruning
- gpt-oss
- openai
- reasoning
- all
- specialized
- efficient
- transformer
- causal-lm
- text-generation
- pytorch
- pruned-model
- domain-specific
- llama-cpp
- gguf-my-repo
base_model: AmanPriyanshu/gpt-oss-4.2b-specialized-all-pruned-moe-only-4-experts
---
# cyberjunkee/gpt-oss-4.2b-specialized-all-pruned-moe-only-4-experts-Q4_K_M-GGUF
This model was converted to GGUF format from [`AmanPriyanshu/gpt-oss-4.2b-specialized-all-pruned-moe-only-4-experts`](https://huggingface.co/AmanPriyanshu/gpt-oss-4.2b-specialized-all-pruned-moe-only-4-experts) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/AmanPriyanshu/gpt-oss-4.2b-specialized-all-pruned-moe-only-4-experts) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on macOS and Linux):
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo cyberjunkee/gpt-oss-4.2b-specialized-all-pruned-moe-only-4-experts-Q4_K_M-GGUF --hf-file gpt-oss-4.2b-specialized-all-pruned-moe-only-4-experts-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo cyberjunkee/gpt-oss-4.2b-specialized-all-pruned-moe-only-4-experts-Q4_K_M-GGUF --hf-file gpt-oss-4.2b-specialized-all-pruned-moe-only-4-experts-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g. `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo cyberjunkee/gpt-oss-4.2b-specialized-all-pruned-moe-only-4-experts-Q4_K_M-GGUF --hf-file gpt-oss-4.2b-specialized-all-pruned-moe-only-4-experts-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo cyberjunkee/gpt-oss-4.2b-specialized-all-pruned-moe-only-4-experts-Q4_K_M-GGUF --hf-file gpt-oss-4.2b-specialized-all-pruned-moe-only-4-experts-q4_k_m.gguf -c 2048
```
|
giovannidemuri/test_model_lora
|
giovannidemuri
| 2025-08-21T09:53:32Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-21T09:38:04Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
katanyasekolah/blockassist-bc-silky_sprightly_cassowary_1755768005
|
katanyasekolah
| 2025-08-21T09:47:50Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"silky sprightly cassowary",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-21T09:47:46Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- silky sprightly cassowary
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Muapi/esty-s-minimalistic-sketch-style
|
Muapi
| 2025-08-21T09:26:24Z | 0 | 0 | null |
[
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-21T09:26:13Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# Esty's Minimalistic Sketch Style

**Base model**: Flux.1 D
**Trained words**:
## 🧠 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
"prompt": "masterpiece, best quality, 1girl, looking at viewer",
"model_id": [{"model": "civitai:959300@1074020", "weight": 1.0}],
"width": 1024,
"height": 1024,
"num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
|
Medved444/blockassist-bc-bellowing_finicky_manatee_1755767077
|
Medved444
| 2025-08-21T09:23:38Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"bellowing finicky manatee",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-21T09:22:56Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- bellowing finicky manatee
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Muapi/bonafida.studio
|
Muapi
| 2025-08-21T09:21:44Z | 0 | 0 | null |
[
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-21T09:21:30Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# Bonafida.Studio

**Base model**: Flux.1 D
**Trained words**: Delirium style
## 🧠 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
"prompt": "masterpiece, best quality, 1girl, looking at viewer",
"model_id": [{"model": "civitai:1287599@1257218", "weight": 1.0}],
"width": 1024,
"height": 1024,
"num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
|
Muapi/flipping-off-middle-finger-pony-flux
|
Muapi
| 2025-08-21T09:17:01Z | 0 | 0 | null |
[
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-21T09:16:52Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# Flipping Off || Middle Finger [Pony + Flux]

**Base model**: Flux.1 D
**Trained words**: middlefinger, flippingthebird
## 🧠 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
"prompt": "masterpiece, best quality, 1girl, looking at viewer",
"model_id": [{"model": "civitai:796280@890443", "weight": 1.0}],
"width": 1024,
"height": 1024,
"num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
|
wasabuko/blockassist-bc-noisy_zealous_macaw_1755765558
|
wasabuko
| 2025-08-21T09:15:05Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"noisy zealous macaw",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-21T09:12:01Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- noisy zealous macaw
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Muapi/3d-cartoon-vision-flux
|
Muapi
| 2025-08-21T09:14:41Z | 0 | 0 | null |
[
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-21T09:14:23Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# 3D Cartoon Vision FLUX

**Base model**: Flux.1 D
**Trained words**:
## 🧠 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
"prompt": "masterpiece, best quality, 1girl, looking at viewer",
"model_id": [{"model": "civitai:662924@741868", "weight": 1.0}],
"width": 1024,
"height": 1024,
"num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
|
Muapi/jinx-arcane-league-of-legends-flux-lora
|
Muapi
| 2025-08-21T09:11:47Z | 0 | 0 | null |
[
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-21T09:11:28Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# Jinx - Arcane (League of Legends) [FLUX] LoRA

**Base model**: Flux.1 D
**Trained words**: Jinx
## 🧠 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
"prompt": "masterpiece, best quality, 1girl, looking at viewer",
"model_id": [{"model": "civitai:733713@1011812", "weight": 1.0}],
"width": 1024,
"height": 1024,
"num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
|
sampingkaca72/blockassist-bc-armored_stealthy_elephant_1755765993
|
sampingkaca72
| 2025-08-21T09:11:36Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"armored stealthy elephant",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-21T09:11:32Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- armored stealthy elephant
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
rvipitkirubbe/blockassist-bc-mottled_foraging_ape_1755765623
|
rvipitkirubbe
| 2025-08-21T09:07:13Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"mottled foraging ape",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-21T09:07:10Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- mottled foraging ape
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
esi777/blockassist-bc-camouflaged_trotting_eel_1755767197
|
esi777
| 2025-08-21T09:07:09Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"camouflaged trotting eel",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-21T09:07:06Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- camouflaged trotting eel
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Muapi/minrill-minimalist-realistic-illustrations-flux-lora
|
Muapi
| 2025-08-21T09:02:28Z | 0 | 0 | null |
[
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-21T09:02:12Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# [Minrill] - Minimalist Realistic Illustrations - FLUX LoRA

**Base model**: Flux.1 D
**Trained words**: minrill
## 🧠 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
"prompt": "masterpiece, best quality, 1girl, looking at viewer",
"model_id": [{"model": "civitai:839528@939251", "weight": 1.0}],
"width": 1024,
"height": 1024,
"num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
|
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1755766721
|
IvanJAjebu
| 2025-08-21T08:59:55Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thorny slender capybara",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-21T08:59:47Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thorny slender capybara
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
manusiaperahu2012/blockassist-bc-roaring_long_tuna_1755765007
|
manusiaperahu2012
| 2025-08-21T08:58:01Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"roaring long tuna",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-21T08:57:58Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- roaring long tuna
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
gurpreetlucky/lora_model
|
gurpreetlucky
| 2025-08-21T08:41:36Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-21T08:38:04Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Pshuaibi/ppo-LunarLander-v2
|
Pshuaibi
| 2025-08-21T08:41:12Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-08-21T08:40:57Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 268.28 +/- 9.53
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal sketch for loading the checkpoint from the Hub (the `.zip` filename is an assumption; check the repository's file list):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# The filename below is assumed; verify it against the repo's files.
checkpoint = load_from_hub("Pshuaibi/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
syuvers/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-sleek_gilded_chameleon
|
syuvers
| 2025-08-21T08:38:37Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am sleek_gilded_chameleon",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-21T06:22:26Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am sleek_gilded_chameleon
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
guinansu/MedSAMix
|
guinansu
| 2025-08-21T08:23:19Z | 14 | 1 |
transformers
|
[
"transformers",
"safetensors",
"sam",
"mask-generation",
"medical",
"image-segmentation",
"arxiv:2508.11032",
"base_model:facebook/sam-vit-base",
"base_model:finetune:facebook/sam-vit-base",
"license:mit",
"endpoints_compatible",
"region:us"
] |
image-segmentation
| 2025-08-14T19:05:48Z |
---
base_model:
- facebook/sam-vit-base
library_name: transformers
tags:
- medical
pipeline_tag: image-segmentation
license: mit
---
# MedSAMix: A Training-Free Model Merging Approach for Medical Image Segmentation
This repository contains the `MedSAMix-m (base)` model, which is described in the paper [MedSAMix: A Training-Free Model Merging Approach for Medical Image Segmentation](https://arxiv.org/abs/2508.11032).
Code: [https://github.com/podismine/MedSAMix](https://github.com/podismine/MedSAMix)
Please note that the code is currently being cleaned and will be publicly released soon.
## Abstract
Universal medical image segmentation models have emerged as a promising paradigm due to their strong generalizability across diverse tasks, showing great potential for a wide range of clinical applications. This potential has been partly driven by the success of general-purpose vision models such as the Segment Anything Model (SAM), which has inspired the development of various fine-tuned variants for medical segmentation tasks. However, fine-tuned variants like MedSAM are trained on comparatively limited medical imaging data that often suffers from heterogeneity, scarce annotations, and distributional shifts. These challenges limit their ability to generalize across a wide range of medical segmentation tasks. In this regard, we propose MedSAMix, a training-free model merging method that integrates the strengths of both generalist models (e.g., SAM) and specialist models (e.g., MedSAM) for medical image segmentation. In contrast to traditional model merging approaches that rely on manual configuration and often result in suboptimal outcomes, we propose a zero-order optimization method to automatically discover optimal layer-wise merging solutions. Furthermore, for clinical applications, we develop two regimes to meet the demand of domain-specificity and generalizability in different scenarios by single-task optimization and multi-objective optimization respectively. Extensive evaluations on 25 medical segmentation tasks demonstrate that MedSAMix effectively mitigates model bias and consistently improves performance in both domain-specific accuracy and generalization, achieving improvements of 6.67% on specialized tasks and 4.37% on multi-task evaluations.
<div align="center">
<img src="https://github.com/podismine/MedSAMix/raw/main/fig/model.png" alt="MedSAMix Model Architecture" width="60%">
</div>
## Checkpoint
In addition, we provide both the raw PyTorch checkpoint and the Hugging Face weights:
PyTorch raw checkpoint: [here](https://drive.google.com/file/d/1RBsDZvFqJiAbbhnXTpSZs_uC-WKWrAJx/view?usp=sharing)
Hugging Face weights: [here](https://huggingface.co/guinansu/MedSAMix)
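Since the card lists the `sam` architecture with the `mask-generation` tags, the Hugging Face weights should load through the standard `transformers` SAM classes; a minimal sketch under that assumption (the input image and prompt point are illustrative):
```python
from PIL import Image
from transformers import SamModel, SamProcessor

processor = SamProcessor.from_pretrained("guinansu/MedSAMix")
model = SamModel.from_pretrained("guinansu/MedSAMix")

image = Image.open("scan.png").convert("RGB")  # any medical image (placeholder path)
# One prompt point (x, y) on the structure of interest; purely illustrative.
inputs = processor(image, input_points=[[[256, 256]]], return_tensors="pt")
outputs = model(**inputs)
masks = processor.image_processor.post_process_masks(
    outputs.pred_masks, inputs["original_sizes"], inputs["reshaped_input_sizes"]
)
```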
|
homeb82784/FAI-CPT-KOR-Retry
|
homeb82784
| 2025-08-21T08:18:22Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:skt/A.X-4.0-Light",
"base_model:finetune:skt/A.X-4.0-Light",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-21T07:57:31Z |
---
base_model: skt/A.X-4.0-Light
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** homeb82784
- **License:** apache-2.0
- **Finetuned from model:** skt/A.X-4.0-Light
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
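For completeness, a minimal sketch of loading the checkpoint back through Unsloth for fast inference (`max_seq_length` and 4-bit loading are illustrative assumptions, not documented settings):
```python
from unsloth import FastLanguageModel

# Illustrative settings; adjust max_seq_length and quantization to your hardware.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="homeb82784/FAI-CPT-KOR-Retry",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable Unsloth's faster inference path
```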
|
airesearch/llama3-8b-cpt-sea-lionv2-base-dolly-th-10k
|
airesearch
| 2025-08-21T07:57:38Z | 2 | 0 | null |
[
"safetensors",
"llama",
"tha",
"base_model:aisingapore/Llama-SEA-LION-v2-8B",
"base_model:finetune:aisingapore/Llama-SEA-LION-v2-8B",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2025-05-27T15:35:29Z |
---
base_model: aisingapore/Llama-SEA-LION-v2-8B
language: tha
license: cc-by-nc-4.0
model_name: airesearch/llama3-8b-cpt-sea-lionv2-base-dolly-th-10k
---
# llama3-8b-cpt-sea-lionv2-base-dolly-th-10k
WangchanThaiInstruct: An Instruction-Following Dataset for Culture-Aware, Multitask, and Multi-domain Evaluation in Thai (EMNLP'25)
This repository contains the model artifacts for **llama3-8b-cpt-sea-lionv2-base-dolly-th-10k** for the paper WangchanThaiInstruct.
# Training
The model is aisingapore/Llama-SEA-LION-v2-8B fine-tuned on 10,000 randomly sampled examples of a machine-translated [Dolly 15K](https://huggingface.co/datasets/Thaweewat/databricks-dolly-15k-th) using the LLaMA Factory framework with the following hyperparameters:
| Hyperparameter | Value |
|-----------------------|-----------|
| Learning Rate | 2 × 10⁻⁴ |
| Learning Rate Schedule| Cosine |
| Batch Size (effective)| 128 |
| Max Token Length | 2048 |
| Warm up Ratio | 0.1 |
| Epochs | 3 |
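The released weights are standard safetensors Llama checkpoints, so a plain `transformers` inference sketch should work (the prompt format is an assumption; the training prompt template is not documented in this card):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "airesearch/llama3-8b-cpt-sea-lionv2-base-dolly-th-10k"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype="auto", device_map="auto")

# Dolly-style plain instruction; Thai for "Explain what artificial intelligence is."
prompt = "อธิบายว่าปัญญาประดิษฐ์คืออะไร"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```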
# Evaluation
The model was evaluated on [Thai MTBench](https://huggingface.co/datasets/ThaiLLM-Leaderboard/mt-bench-thai), [SeaCrowd's NLU and NLG Thai Split](https://github.com/scb-10x/seacrowd-eval), and [WangchanThaiInstruct's test set](https://huggingface.co/datasets/airesearch/WangchanThaiInstruct).
| Model | MT Bench Average | NLU Accuracy (%) | NLG Translation (BLEU) | NLG Generation (RougeL) | WangchanThaiInstruct Fluency | WangchanThaiInstruct Accuracy (%) | WangchanThaiInstruct Rating |
|----------------------------------------|------------------|------------------|-------------------------|--------------------------|-------------------------------|----------------------------------|-----------------------------|
| **Llama-3.1-8B** | | | | | | | |
| Alpaca 5k + WangchanThaiInstruct 5k | 3.00 | 47.22 | 3.12 | 8.59 | 4.08 | 39.84 | 4.16 |
| Alpaca 10k | 3.05 | 46.54 | 4.08 | 11.05 | 3.36 | 28.39 | 3.23 |
| Alpaca 10k + WangchanThaiInstruct 10k | 3.07 | 46.47 | 2.43 | 8.54 | 4.21 | 42.31 | 4.39 |
| Alpaca 20k | 2.75 | 47.31 | 2.79 | 9.14 | 2.77 | 22.32 | 2.94 |
| Alpaca 15k + WangchanThaiInstruct 15k | 3.26 | 46.45 | 3.47 | 8.58 | 4.35 | 42.16 | 4.46 |
| Alpaca 30k | 2.88 | 47.67 | 3.65 | 9.65 | 2.83 | 21.83 | 2.95 |
| Dolly 2.5k + WangchanThaiInstruct 2.5k | 2.40 | 46.43 | 3.75 | 8.72 | 3.57 | 35.93 | 3.72 |
| Dolly 5k | 1.88 | 42.87 | 0.95 | 8.55 | 1.75 | 22.70 | 2.19 |
| Dolly 5k + WangchanThaiInstruct 5k | 2.28 | 46.43 | 1.36 | 8.55 | 3.85 | 37.89 | 3.98 |
| Dolly 10k | 1.99 | 42.41 | 1.35 | 8.64 | 1.69 | 22.35 | 2.14 |
| Dolly 7.5k + WangchanThaiInstruct 7.5k | 2.31 | 46.37 | 1.48 | 8.59 | 3.96 | 39.63 | 4.11 |
| Dolly 15k | 2.64 | 42.47 | 1.60 | 8.10 | 1.69 | 22.21 | 2.16 |
| **Gemma-2-9B** | | | | | | | |
| Alpaca 5k + WangchanThaiInstruct 5k | 4.25 | 53.70 | 2.25 | 8.14 | 4.85 | 54.24 | 5.17 |
| Alpaca 10k | 3.98 | 51.71 | 1.39 | 6.84 | 4.00 | 46.26 | 4.26 |
| Alpaca 10k + WangchanThaiInstruct 10k | 4.02 | 53.81 | 2.02 | 8.09 | 4.97 | 55.33 | 5.30 |
| Alpaca 20k | 4.14 | 52.40 | 1.45 | 6.95 | 3.53 | 38.07 | 3.90 |
| Alpaca 15k + WangchanThaiInstruct 15k | 4.20 | 53.49 | 1.98 | 8.02 | 5.14 | 56.67 | 5.49 |
| Alpaca 30k | 3.79 | 52.41 | 1.25 | 5.73 | 3.25 | 32.71 | 3.43 |
| Dolly 2.5k + WangchanThaiInstruct 2.5k | 3.66 | 54.62 | 1.75 | 8.07 | 4.30 | 51.86 | 4.84 |
| Dolly 5k | 2.59 | 53.36 | 1.39 | 7.58 | 1.71 | 42.35 | 2.45 |
| Dolly 5k + WangchanThaiInstruct 5k | 3.99 | 53.50 | 1.54 | 8.12 | 4.59 | 54.31 | 5.08 |
| Dolly 10k | 2.70 | 51.98 | 1.52 | 7.58 | 1.81 | 43.68 | 2.74 |
| Dolly 7.5k + WangchanThaiInstruct 7.5k | 4.13 | 53.34 | 1.63 | 8.12 | 4.72 | 55.09 | 5.24 |
| Dolly 15k | 4.10 | 51.35 | 1.48 | 7.76 | 3.24 | 40.34 | 2.63 |
| **SEA-LIONv2-8B** | | | | | | | |
| Alpaca 5k + WangchanThaiInstruct 5k | 4.52 | 43.76 | 34.47 | 19.39 | 5.62 | 52.84 | 5.57 |
| Alpaca 10k | 4.54 | 43.31 | 28.01 | 25.35 | 4.61 | 48.88 | 4.73 |
| Alpaca 10k + WangchanThaiInstruct 10k | 4.55 | 44.66 | 24.00 | 17.55 | 5.72 | 53.93 | 5.70 |
| Alpaca 20k | 4.74 | 43.98 | 24.22 | 25.82 | 4.73 | 49.32 | 4.53 |
| Alpaca 15k + WangchanThaiInstruct 15k | 4.44 | 44.51 | 20.58 | 16.31 | 5.54 | 53.94 | 5.61 |
| Alpaca 30k | 4.60 | 42.96 | 15.58 | 25.68 | 5.11 | 49.66 | 4.78 |
| Dolly 2.5k + WangchanThaiInstruct 2.5k | 4.25 | 44.89 | 36.60 | 26.82 | 5.10 | 50.25 | 5.28 |
| Dolly 5k | 3.69 | 45.88 | 19.22 | 35.66 | 3.46 | 48.04 | 4.11 |
| Dolly 5k + WangchanThaiInstruct 5k | 4.21 | 44.30 | 15.64 | 23.72 | 5.31 | 51.25 | 5.42 |
| Dolly 10k | 3.83 | 46.57 | 14.07 | 37.35 | 4.09 | 46.81 | 4.04 |
| Dolly 7.5k + WangchanThaiInstruct 7.5k | 4.31 | 45.31 | 13.54 | 22.00 | 5.54 | 53.81 | 5.57 |
| Dolly 15k | 3.57 | 46.14 | 14.31 | 35.37 | 3.24 | 48.13 | 4.15 |
# Citation
```
@inproceedings{limkonchotiwat2025thaiinstruct,
title = {WangchanThaiInstruct: An Instruction-Following Dataset for Culture-Aware, Multitask, and Multi-domain Evaluation in Thai},
author = {Limkonchotiwat, Peerat and Tuchinda, Pume and Lowphansirikul, Lalita and Nonesung, Surapon and Tasawong, Panuthep and Aji, Alham Fikri and Udomcharoenchaikit, Can and Nutanong, Sarana},
booktitle = {Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing},
year = {2025},
publisher = {Association for Computational Linguistics}
}
```
|
airesearch/gemma-2-9b-alpaca-th-20k
|
airesearch
| 2025-08-21T07:56:42Z | 2 | 0 | null |
[
"safetensors",
"gemma2",
"tha",
"base_model:google/gemma-2-9b",
"base_model:finetune:google/gemma-2-9b",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2025-05-27T15:35:30Z |
---
base_model: google/gemma-2-9b
language: tha
license: cc-by-nc-4.0
model_name: airesearch/gemma-2-9b-alpaca-th-20k
---
# gemma-2-9b-alpaca-th-20k
WangchanThaiInstruct: An Instruction-Following Dataset for Culture-Aware, Multitask, and Multi-domain Evaluation in Thai (EMNLP'25)
This repository contains the model artifacts for **gemma-2-9b-alpaca-th-20k** for the paper WangchanThaiInstruct.
# Training
The model is google/gemma-2-9b fine-tuned on 20,000 randomly sampled examples of a machine-translated [Alpaca 52K](https://huggingface.co/datasets/Thaweewat/alpaca-cleaned-52k-th) using the LLaMA Factory framework with the following hyperparameters:
| Hyperparameter | Value |
|-----------------------|-----------|
| Learning Rate | 2 × 10⁻⁴ |
| Learning Rate Schedule| Cosine |
| Batch Size (effective)| 128 |
| Max Token Length | 2048 |
| Warm up Ratio | 0.1 |
| Epochs | 3 |
# Evaluation
The model was evaluated on [Thai MTBench](https://huggingface.co/datasets/ThaiLLM-Leaderboard/mt-bench-thai), [SeaCrowd's NLU and NLG Thai Split](https://github.com/scb-10x/seacrowd-eval), and [WangchanThaiInstruct's test set](https://huggingface.co/datasets/airesearch/WangchanThaiInstruct).
| Model | MT Bench Average | NLU Accuracy (%) | NLG Translation (BLEU) | NLG Generation (RougeL) | WangchanThaiInstruct Fluency | WangchanThaiInstruct Accuracy (%) | WangchanThaiInstruct Rating |
|----------------------------------------|------------------|------------------|-------------------------|--------------------------|-------------------------------|----------------------------------|-----------------------------|
| **Llama-3.1-8B** | | | | | | | |
| Alpaca 5k + WangchanThaiInstruct 5k | 3.00 | 47.22 | 3.12 | 8.59 | 4.08 | 39.84 | 4.16 |
| Alpaca 10k | 3.05 | 46.54 | 4.08 | 11.05 | 3.36 | 28.39 | 3.23 |
| Alpaca 10k + WangchanThaiInstruct 10k | 3.07 | 46.47 | 2.43 | 8.54 | 4.21 | 42.31 | 4.39 |
| Alpaca 20k | 2.75 | 47.31 | 2.79 | 9.14 | 2.77 | 22.32 | 2.94 |
| Alpaca 15k + WangchanThaiInstruct 15k | 3.26 | 46.45 | 3.47 | 8.58 | 4.35 | 42.16 | 4.46 |
| Alpaca 30k | 2.88 | 47.67 | 3.65 | 9.65 | 2.83 | 21.83 | 2.95 |
| Dolly 2.5k + WangchanThaiInstruct 2.5k | 2.40 | 46.43 | 3.75 | 8.72 | 3.57 | 35.93 | 3.72 |
| Dolly 5k | 1.88 | 42.87 | 0.95 | 8.55 | 1.75 | 22.70 | 2.19 |
| Dolly 5k + WangchanThaiInstruct 5k | 2.28 | 46.43 | 1.36 | 8.55 | 3.85 | 37.89 | 3.98 |
| Dolly 10k | 1.99 | 42.41 | 1.35 | 8.64 | 1.69 | 22.35 | 2.14 |
| Dolly 7.5k + WangchanThaiInstruct 7.5k | 2.31 | 46.37 | 1.48 | 8.59 | 3.96 | 39.63 | 4.11 |
| Dolly 15k | 2.64 | 42.47 | 1.60 | 8.10 | 1.69 | 22.21 | 2.16 |
| **Gemma-2-9B** | | | | | | | |
| Alpaca 5k + WangchanThaiInstruct 5k | 4.25 | 53.70 | 2.25 | 8.14 | 4.85 | 54.24 | 5.17 |
| Alpaca 10k | 3.98 | 51.71 | 1.39 | 6.84 | 4.00 | 46.26 | 4.26 |
| Alpaca 10k + WangchanThaiInstruct 10k | 4.02 | 53.81 | 2.02 | 8.09 | 4.97 | 55.33 | 5.30 |
| Alpaca 20k | 4.14 | 52.40 | 1.45 | 6.95 | 3.53 | 38.07 | 3.90 |
| Alpaca 15k + WangchanThaiInstruct 15k | 4.20 | 53.49 | 1.98 | 8.02 | 5.14 | 56.67 | 5.49 |
| Alpaca 30k | 3.79 | 52.41 | 1.25 | 5.73 | 3.25 | 32.71 | 3.43 |
| Dolly 2.5k + WangchanThaiInstruct 2.5k | 3.66 | 54.62 | 1.75 | 8.07 | 4.30 | 51.86 | 4.84 |
| Dolly 5k | 2.59 | 53.36 | 1.39 | 7.58 | 1.71 | 42.35 | 2.45 |
| Dolly 5k + WangchanThaiInstruct 5k | 3.99 | 53.50 | 1.54 | 8.12 | 4.59 | 54.31 | 5.08 |
| Dolly 10k | 2.70 | 51.98 | 1.52 | 7.58 | 1.81 | 43.68 | 2.74 |
| Dolly 7.5k + WangchanThaiInstruct 7.5k | 4.13 | 53.34 | 1.63 | 8.12 | 4.72 | 55.09 | 5.24 |
| Dolly 15k | 4.10 | 51.35 | 1.48 | 7.76 | 3.24 | 40.34 | 2.63 |
| **SEA-LIONv2-8B** | | | | | | | |
| Alpaca 5k + WangchanThaiInstruct 5k | 4.52 | 43.76 | 34.47 | 19.39 | 5.62 | 52.84 | 5.57 |
| Alpaca 10k | 4.54 | 43.31 | 28.01 | 25.35 | 4.61 | 48.88 | 4.73 |
| Alpaca 10k + WangchanThaiInstruct 10k | 4.55 | 44.66 | 24.00 | 17.55 | 5.72 | 53.93 | 5.70 |
| Alpaca 20k | 4.74 | 43.98 | 24.22 | 25.82 | 4.73 | 49.32 | 4.53 |
| Alpaca 15k + WangchanThaiInstruct 15k | 4.44 | 44.51 | 20.58 | 16.31 | 5.54 | 53.94 | 5.61 |
| Alpaca 30k | 4.60 | 42.96 | 15.58 | 25.68 | 5.11 | 49.66 | 4.78 |
| Dolly 2.5k + WangchanThaiInstruct 2.5k | 4.25 | 44.89 | 36.60 | 26.82 | 5.10 | 50.25 | 5.28 |
| Dolly 5k | 3.69 | 45.88 | 19.22 | 35.66 | 3.46 | 48.04 | 4.11 |
| Dolly 5k + WangchanThaiInstruct 5k | 4.21 | 44.30 | 15.64 | 23.72 | 5.31 | 51.25 | 5.42 |
| Dolly 10k | 3.83 | 46.57 | 14.07 | 37.35 | 4.09 | 46.81 | 4.04 |
| Dolly 7.5k + WangchanThaiInstruct 7.5k | 4.31 | 45.31 | 13.54 | 22.00 | 5.54 | 53.81 | 5.57 |
| Dolly 15k | 3.57 | 46.14 | 14.31 | 35.37 | 3.24 | 48.13 | 4.15 |
# Citation
```
@inproceedings{limkonchotiwat2025thaiinstruct,
title = {WangchanThaiInstruct: An Instruction-Following Dataset for Culture-Aware, Multitask, and Multi-domain Evaluation in Thai},
author = {Limkonchotiwat, Peerat and Tuchinda, Pume and Lowphansirikul, Lalita and Nonesung, Surapon and Tasawong, Panuthep and Aji, Alham Fikri and Udomcharoenchaikit, Can and Nutanong, Sarana},
booktitle = {Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing},
year = {2025},
publisher = {Association for Computational Linguistics}
}
```
|
Medved444/blockassist-bc-bellowing_finicky_manatee_1755761606
|
Medved444
| 2025-08-21T07:56:17Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"bellowing finicky manatee",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-21T07:55:57Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- bellowing finicky manatee
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
FrancescoPeriti/LlamaDictionary-mg_ML38BI
|
FrancescoPeriti
| 2025-08-21T07:48:38Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"text-generation",
"conversational",
"en",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:finetune:meta-llama/Meta-Llama-3-8B-Instruct",
"license:cc-by-sa-4.0",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-07-08T09:57:03Z |
---
license: cc-by-sa-4.0
language:
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- text-generation-inference
base_model:
- meta-llama/Meta-Llama-3-8B-Instruct
---
# LlamaDictionary-mg_ML38BI
This is part of the **DefinitionGeneration-adapters** collection.
➡️ Please see the [LlamaDictionary-it_ML38BI](https://huggingface.co/FrancescoPeriti/LlamaDictionary-it_ML38BI) model card for a full description, methodology, and usage details.
This variant corresponds to **Malagasy**.
|
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1755762088
|
IvanJAjebu
| 2025-08-21T07:42:40Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thorny slender capybara",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-21T07:42:31Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thorny slender capybara
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
airesearch/gemma-2-9b-alpaca-th-5k-wangchan-instruct-5k
|
airesearch
| 2025-08-21T07:39:26Z | 5 | 0 | null |
[
"safetensors",
"gemma2",
"tha",
"base_model:google/gemma-2-9b",
"base_model:finetune:google/gemma-2-9b",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2025-05-27T15:25:17Z |
---
base_model: google/gemma-2-9b
language: tha
license: cc-by-nc-4.0
model_name: airesearch/gemma-2-9b-alpaca-th-5k-wangchan-instruct-5k
---
# gemma-2-9b-alpaca-th-5k-wangchan-instruct-5k
WangchanThaiInstruct: An Instruction-Following Dataset for Culture-Aware, Multitask, and Multi-domain Evaluation in Thai (EMNLP'25)
This repository contains the model artifacts for **gemma-2-9b-alpaca-th-5k-wangchan-instruct-5k** for the paper WangchanThaiInstruct.
# Training
The model is google/gemma-2-9b fine-tuned on 5,000 randomly sampled examples of a machine-translated [Alpaca 52K](https://huggingface.co/datasets/Thaweewat/alpaca-cleaned-52k-th) and 5,000 randomly sampled examples of [WangchanThaiInstruct's training set](https://huggingface.co/datasets/airesearch/WangchanThaiInstruct) using the LLaMA Factory framework with the following hyperparameters:
| Hyperparameter | Value |
|-----------------------|-----------|
| Learning Rate | 2 × 10⁻⁴ |
| Learning Rate Schedule| Cosine |
| Batch Size (effective)| 128 |
| Max Token Length | 2048 |
| Warm up Ratio | 0.1 |
| Epochs | 3 |
# Evaluation
The model was evaluated on [Thai MTBench](https://huggingface.co/datasets/ThaiLLM-Leaderboard/mt-bench-thai), [SeaCrowd's NLU and NLG Thai Split](https://github.com/scb-10x/seacrowd-eval), and [WangchanThaiInstruct's test set](https://huggingface.co/datasets/airesearch/WangchanThaiInstruct).
| Model | MT Bench Average | NLU Accuracy (%) | NLG Translation (BLEU) | NLG Generation (RougeL) | WangchanThaiInstruct Fluency | WangchanThaiInstruct Accuracy (%) | WangchanThaiInstruct Rating |
|----------------------------------------|------------------|------------------|-------------------------|--------------------------|-------------------------------|----------------------------------|-----------------------------|
| **Llama-3.1-8B** | | | | | | | |
| Alpaca 5k + WangchanThaiInstruct 5k | 3.00 | 47.22 | 3.12 | 8.59 | 4.08 | 39.84 | 4.16 |
| Alpaca 10k | 3.05 | 46.54 | 4.08 | 11.05 | 3.36 | 28.39 | 3.23 |
| Alpaca 10k + WangchanThaiInstruct 10k | 3.07 | 46.47 | 2.43 | 8.54 | 4.21 | 42.31 | 4.39 |
| Alpaca 20k | 2.75 | 47.31 | 2.79 | 9.14 | 2.77 | 22.32 | 2.94 |
| Alpaca 15k + WangchanThaiInstruct 15k | 3.26 | 46.45 | 3.47 | 8.58 | 4.35 | 42.16 | 4.46 |
| Alpaca 30k | 2.88 | 47.67 | 3.65 | 9.65 | 2.83 | 21.83 | 2.95 |
| Dolly 2.5k + WangchanThaiInstruct 2.5k | 2.40 | 46.43 | 3.75 | 8.72 | 3.57 | 35.93 | 3.72 |
| Dolly 5k | 1.88 | 42.87 | 0.95 | 8.55 | 1.75 | 22.70 | 2.19 |
| Dolly 5k + WangchanThaiInstruct 5k | 2.28 | 46.43 | 1.36 | 8.55 | 3.85 | 37.89 | 3.98 |
| Dolly 10k | 1.99 | 42.41 | 1.35 | 8.64 | 1.69 | 22.35 | 2.14 |
| Dolly 7.5k + WangchanThaiInstruct 7.5k | 2.31 | 46.37 | 1.48 | 8.59 | 3.96 | 39.63 | 4.11 |
| Dolly 15k | 2.64 | 42.47 | 1.60 | 8.10 | 1.69 | 22.21 | 2.16 |
| **Gemma-2-9B** | | | | | | | |
| Alpaca 5k + WangchanThaiInstruct 5k | 4.25 | 53.70 | 2.25 | 8.14 | 4.85 | 54.24 | 5.17 |
| Alpaca 10k | 3.98 | 51.71 | 1.39 | 6.84 | 4.00 | 46.26 | 4.26 |
| Alpaca 10k + WangchanThaiInstruct 10k | 4.02 | 53.81 | 2.02 | 8.09 | 4.97 | 55.33 | 5.30 |
| Alpaca 20k | 4.14 | 52.40 | 1.45 | 6.95 | 3.53 | 38.07 | 3.90 |
| Alpaca 15k + WangchanThaiInstruct 15k | 4.20 | 53.49 | 1.98 | 8.02 | 5.14 | 56.67 | 5.49 |
| Alpaca 30k | 3.79 | 52.41 | 1.25 | 5.73 | 3.25 | 32.71 | 3.43 |
| Dolly 2.5k + WangchanThaiInstruct 2.5k | 3.66 | 54.62 | 1.75 | 8.07 | 4.30 | 51.86 | 4.84 |
| Dolly 5k | 2.59 | 53.36 | 1.39 | 7.58 | 1.71 | 42.35 | 2.45 |
| Dolly 5k + WangchanThaiInstruct 5k | 3.99 | 53.50 | 1.54 | 8.12 | 4.59 | 54.31 | 5.08 |
| Dolly 10k | 2.70 | 51.98 | 1.52 | 7.58 | 1.81 | 43.68 | 2.74 |
| Dolly 7.5k + WangchanThaiInstruct 7.5k | 4.13 | 53.34 | 1.63 | 8.12 | 4.72 | 55.09 | 5.24 |
| Dolly 15k | 4.10 | 51.35 | 1.48 | 7.76 | 3.24 | 40.34 | 2.63 |
| **SEA-LIONv2-8B** | | | | | | | |
| Alpaca 5k + WangchanThaiInstruct 5k | 4.52 | 43.76 | 34.47 | 19.39 | 5.62 | 52.84 | 5.57 |
| Alpaca 10k | 4.54 | 43.31 | 28.01 | 25.35 | 4.61 | 48.88 | 4.73 |
| Alpaca 10k + WangchanThaiInstruct 10k | 4.55 | 44.66 | 24.00 | 17.55 | 5.72 | 53.93 | 5.70 |
| Alpaca 20k | 4.74 | 43.98 | 24.22 | 25.82 | 4.73 | 49.32 | 4.53 |
| Alpaca 15k + WangchanThaiInstruct 15k | 4.44 | 44.51 | 20.58 | 16.31 | 5.54 | 53.94 | 5.61 |
| Alpaca 30k | 4.60 | 42.96 | 15.58 | 25.68 | 5.11 | 49.66 | 4.78 |
| Dolly 2.5k + WangchanThaiInstruct 2.5k | 4.25 | 44.89 | 36.60 | 26.82 | 5.10 | 50.25 | 5.28 |
| Dolly 5k | 3.69 | 45.88 | 19.22 | 35.66 | 3.46 | 48.04 | 4.11 |
| Dolly 5k + WangchanThaiInstruct 5k | 4.21 | 44.30 | 15.64 | 23.72 | 5.31 | 51.25 | 5.42 |
| Dolly 10k | 3.83 | 46.57 | 14.07 | 37.35 | 4.09 | 46.81 | 4.04 |
| Dolly 7.5k + WangchanThaiInstruct 7.5k | 4.31 | 45.31 | 13.54 | 22.00 | 5.54 | 53.81 | 5.57 |
| Dolly 15k | 3.57 | 46.14 | 14.31 | 35.37 | 3.24 | 48.13 | 4.15 |
# Citation
```
@inproceedings{limkonchotiwat2025thaiinstruct,
title = {WangchanThaiInstruct: An Instruction-Following Dataset for Culture-Aware, Multitask, and Multi-domain Evaluation in Thai},
author = {Limkonchotiwat, Peerat and Tuchinda, Pume and Lowphansirikul, Lalita and Nonesung, Surapon and Tasawong, Panuthep and Aji, Alham Fikri and Udomcharoenchaikit, Can and Nutanong, Sarana},
booktitle = {Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing},
year = {2025},
publisher = {Association for Computational Linguistics}
}
```
|
llencia/blockassist-bc-wiry_wise_hedgehog_1755761941
|
llencia
| 2025-08-21T07:39:25Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"wiry wise hedgehog",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-21T07:39:21Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- wiry wise hedgehog
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
katanyasekolah/blockassist-bc-silky_sprightly_cassowary_1755759686
|
katanyasekolah
| 2025-08-21T07:29:59Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"silky sprightly cassowary",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-21T07:29:56Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- silky sprightly cassowary
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
llencia/blockassist-bc-wiry_wise_hedgehog_1755761365
|
llencia
| 2025-08-21T07:29:49Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"wiry wise hedgehog",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-21T07:29:45Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- wiry wise hedgehog
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
vukrosic/hybrid-llm
|
vukrosic
| 2025-08-21T07:23:01Z | 0 | 1 | null |
[
"pytorch",
"hybrid_llm",
"region:us"
] | null | 2025-08-21T06:25:33Z |
# Hybrid LLM Model
This is a hybrid transformer-Mamba model uploaded via script.
## Model Details
- **Architecture**: Hybrid Transformer-Mamba
- **Parameters**: 43,819,776
- **Config**:

```json
{
  "vocab_size": 49152,
  "hidden_size": 384,
  "num_layers": 8,
  "num_heads": 8,
  "ssm_state_size": 16,
  "conv_kernel": 4,
  "expand_factor": 2,
  "layer_pattern": "MAMAMAMA",
  "max_seq_len": 512,
  "batch_size": 32,
  "num_documents": 500,
  "learning_rate": 0.0005,
  "num_steps": 500,
  "dropout": 0.1,
  "grad_clip": 1.0,
  "log_every": 50,
  "experiment_name": "pattern_ablation",
  "pattern_name": "MAMAMAMA",
  "eval_every": 100,
  "save_every": 2000,
  "num_eval_batches": 50,
  "hf_repo": "vukrosic/hybrid-llm"
}
```
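The `layer_pattern` string determines the ordering of Mamba ("M") and attention ("A") layers. As a minimal, hypothetical sketch (not this repo's actual implementation; `MambaStandIn` is only an illustrative placeholder for a real selective-state-space block), a pattern such as `MAMAMAMA` could be expanded into a block stack like this:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MambaStandIn(nn.Module):
    """Illustrative stand-in for a Mamba (SSM) block: depthwise conv + gating only."""
    def __init__(self, hidden_size, conv_kernel=4, expand_factor=2):
        super().__init__()
        inner = hidden_size * expand_factor
        self.in_proj = nn.Linear(hidden_size, 2 * inner)
        self.conv = nn.Conv1d(inner, inner, conv_kernel,
                              padding=conv_kernel - 1, groups=inner)  # causal-style padding
        self.out_proj = nn.Linear(inner, hidden_size)

    def forward(self, x):  # x: (batch, seq, hidden)
        u, gate = self.in_proj(x).chunk(2, dim=-1)
        u = self.conv(u.transpose(1, 2))[..., : x.size(1)].transpose(1, 2)
        return self.out_proj(F.silu(gate) * u)

def build_stack(pattern="MAMAMAMA", hidden_size=384, num_heads=8):
    blocks = nn.ModuleList()
    for ch in pattern:
        if ch == "M":
            blocks.append(MambaStandIn(hidden_size))
        elif ch == "A":
            blocks.append(nn.TransformerEncoderLayer(hidden_size, num_heads, batch_first=True))
    return blocks

x = torch.randn(2, 16, 384)
for block in build_stack():
    x = block(x)
print(x.shape)  # torch.Size([2, 16, 384])
```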
## Usage
```python
from transformers import AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained("vukrosic/hybrid-llm")
```
|
lautan/blockassist-bc-gentle_patterned_goat_1755759162
|
lautan
| 2025-08-21T07:22:07Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"gentle patterned goat",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-21T07:22:03Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- gentle patterned goat
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
pinktulip888/qwenpenguingen1
|
pinktulip888
| 2025-08-21T07:21:45Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"base_model:unsloth/Qwen2.5-7B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-21T06:01:06Z |
---
base_model: unsloth/Qwen2.5-7B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** pinktulip888
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen2.5-7B-Instruct
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Medved444/blockassist-bc-bellowing_finicky_manatee_1755759277
|
Medved444
| 2025-08-21T07:14:58Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"bellowing finicky manatee",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-21T07:14:43Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- bellowing finicky manatee
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
llencia/blockassist-bc-wiry_wise_hedgehog_1755760341
|
llencia
| 2025-08-21T07:12:51Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"wiry wise hedgehog",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-21T07:12:43Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- wiry wise hedgehog
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
aleebaster/blockassist-bc-sly_eager_boar_1755758750
|
aleebaster
| 2025-08-21T07:11:31Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"sly eager boar",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-21T07:11:25Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- sly eager boar
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ihsanridzi/blockassist-bc-wiry_flexible_owl_1755757535
|
ihsanridzi
| 2025-08-21T06:53:22Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"wiry flexible owl",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-21T06:53:19Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- wiry flexible owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1755758835
|
IvanJAjebu
| 2025-08-21T06:48:26Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thorny slender capybara",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-21T06:48:17Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thorny slender capybara
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
computerandgyein/gemma_270m-text-normalisation-for-number-stage1
|
computerandgyein
| 2025-08-21T06:44:12Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"unsloth",
"sft",
"trl",
"base_model:unsloth/gemma-3-270m-unsloth-bnb-4bit",
"base_model:finetune:unsloth/gemma-3-270m-unsloth-bnb-4bit",
"endpoints_compatible",
"region:us"
] | null | 2025-08-21T05:45:56Z |
---
base_model: unsloth/gemma-3-270m-unsloth-bnb-4bit
library_name: transformers
model_name: gemma_270m-text-normalisation-for-number-stage1
tags:
- generated_from_trainer
- unsloth
- sft
- trl
licence: license
---
# Model Card for gemma_270m-text-normalisation-for-number-stage1
This model is a fine-tuned version of [unsloth/gemma-3-270m-unsloth-bnb-4bit](https://huggingface.co/unsloth/gemma-3-270m-unsloth-bnb-4bit).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="computerandgyein/gemma_270m-text-normalisation-for-number-stage1", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/computerandgyein-ufo/text-normalisation/runs/0p37xxsi)
This model was trained with SFT.
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.2
- Pytorch: 2.8.0+cu126
- Datasets: 3.6.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
coelacanthxyz/blockassist-bc-finicky_thriving_grouse_1755756791
|
coelacanthxyz
| 2025-08-21T06:43:25Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"finicky thriving grouse",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-21T06:43:19Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- finicky thriving grouse
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
llencia/blockassist-bc-wiry_wise_hedgehog_1755758500
|
llencia
| 2025-08-21T06:42:04Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"wiry wise hedgehog",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-21T06:42:00Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- wiry wise hedgehog
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
sabaridsnfuji/Japanese-Receipt-VL-lfm2-450M
|
sabaridsnfuji
| 2025-08-21T05:51:43Z | 16 | 0 | null |
[
"tensorboard",
"safetensors",
"lfm2-vl",
"vision",
"image-text-to-text",
"japanese",
"receipt",
"ocr",
"document-ai",
"multimodal",
"fine-tuned",
"lora",
"conversational",
"custom_code",
"ja",
"en",
"dataset:japanese-receipts",
"license:apache-2.0",
"region:us"
] |
image-text-to-text
| 2025-08-19T11:40:55Z |
---
license: apache-2.0
base_model: liquidai/lfm2-vl-450m
tags:
- vision
- image-text-to-text
- japanese
- receipt
- ocr
- document-ai
- multimodal
- fine-tuned
- lora
language:
- ja
- en
pipeline_tag: image-text-to-text
widget:
- src: https://example.com/japanese_receipt.jpg
  text: "この領収書の内容を日本語で説明してください。"
datasets:
- japanese-receipts
---
# Japanese Receipt VL lfm2-450M
## Model Description
Japanese-Receipt-VL-lfm2-450M is a specialized vision-language model fine-tuned for understanding and processing Japanese receipts. Built on LiquidAI's LFM2-VL-450M foundation model, this model can analyze receipt images and extract structured information, answer questions about receipt contents, and provide detailed descriptions in both Japanese and English.
## Model Details
- **Base Model**: liquidai/lfm2-vl-450m
- **Model Size**: 450M parameters
- **Fine-tuning Method**: LoRA (Low-Rank Adaptation)
- **Languages**: Japanese (primary), English (secondary)
- **Architecture**: Vision-Language Transformer
- **Training**: Fine-tuned on Japanese receipt datasets
## Intended Use
### Primary Use Cases
- **Comprehensive Receipt Parsing**: Convert any Japanese receipt to structured JSON with exact text preservation
- **Retail Analytics**: Extract detailed product information, pricing, and tax data from store receipts
- **Multi-tax Rate Processing**: Handle complex Japanese tax scenarios (8%, 10%, tax-exempt items)
- **Financial Document Digitization**: Process banking, credit card, and payment system receipts
- **E-commerce Integration**: Extract product catalogs and pricing from retail receipts
- **Accounting Automation**: Comprehensive expense categorization with tax breakdown details
- **Compliance Documentation**: Maintain exact formatting for audit and regulatory requirements
- **Payment Processing Analysis**: Extract credit card transaction details and approval codes
### Target Users
- Financial technology companies
- Accounting software developers
- Expense management platforms
- Retail analytics companies
- Japanese businesses and consumers
## Usage
### Installation
```bash
pip install transformers torch pillow
```
### Basic Usage
```python
from transformers import AutoProcessor, AutoModelForVision2Seq
from PIL import Image
import torch
# Load model and processor
model = AutoModelForVision2Seq.from_pretrained("sabaridsnfuji/Japanese-Receipt-VL-lfm2-450M")
processor = AutoProcessor.from_pretrained("sabaridsnfuji/Japanese-Receipt-VL-lfm2-450M")
# Load receipt image
image = Image.open("japanese_receipt.jpg")
# System prompt for structured extraction
system_prompt = """You are an intelligent document parser. Read the following Japanese receipt and extract every piece of information exactly as it appears, and present it in a well-structured JSON format using Japanese keys and values.
Please strictly follow these rules:
Only extract information that is actually present on the receipt. Do not include any missing, blank, or inferred fields.
Do not summarize, omit, translate, or modify any part of the receipt. Every character, number, symbol, and line must be retained exactly as printed.
Extract all available content including but not limited to: store details, receipt number, date, time, cashier name, product list, prices, tax breakdowns, payment details, receipt bags, barcodes, notices, and any footer messages.
Preserve original formatting such as line breaks, symbols, and full-width characters (hiragana, katakana, kanji, numbers, etc.).
Do not perform any translation, correction, interpretation, or reformatting of content. Use only what is present.
Output the result in JSON format, using Japanese field names as keys."""
# Prepare conversation format
messages = [
    {
        'role': 'system',
        'content': [{'type': 'text', 'text': system_prompt}]
    },
    {
        'role': 'user',
        'content': [
            {'type': 'text', 'text': 'Please parse this Japanese receipt.'},
            {'type': 'image', 'image': image}
        ]
    }
]
# Process and generate
inputs = processor.apply_chat_template(messages, return_tensors="pt")
with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=1024)
# Decode response
response = processor.decode(outputs[0], skip_special_tokens=True)
print(response)
```
### Example Output
**Example 1: Seven Bank PayPay Transaction Receipt**
```json
{
"ご利用明細票": {
"セブン銀行": "QR",
"取引金額": "¥10,000*",
"日付": "2025年03月26日",
"時間": "15:46",
"店舗番号": "0034",
"店番": "BranchNo0100",
"口座番号": "************9384",
"金額票": "114703045-8277103",
"照合コード": "0000",
"お取引会社からのご連絡": "PayPayのお取引です"
},
"お知らせ": [
"PayPayスクラッチくじ!すべての対象のお店で200円以上の支払いで1等最大全額戻ってくる(付与上限・条件あり)",
"詳しくはPayPayアプリで♪"
],
"注意事項": [
"暗証番号は他人に知られないようにしてください。銀行員が直接あるいは電話で暗証番号をお尋ねすることはありません。",
"上記ご取引内容についてご不明の点は、お取引会社にお問合せください。"
],
"セブン銀行": "セブン銀行"
}
```
**Example 2: DAISO Retail Store Receipt**
```json
{
"店舗名": "ダイソー青葉台東急スクエア店",
"電話番号": "TEL:082-420-0100",
"公式通販サイトURL": "「DAISOオンラインショップ」『ダイソーオンライン』で検索!",
"令状:校訂証正日付": "2025年6月22日(日)",
"レジ日時": "19:24",
"レジ番号": "0006",
"責任者名": "99999992",
"商品列表": [
{
"商品コード": "ドウシシャ",
"商品名": "ナタデココ入",
"価格": "¥100※"
},
{
"商品コード": "ドウシシャ",
"商品名": "チアシードド",
"価格": "¥100※"
},
{
"商品名": "消臭ポリ袋(おむつ用)",
"価格": "¥100外"
},
{
"商品名": "化粧ブラシセット(5本)",
"価格": "¥300外"
},
{
"商品名": "シャワー線棒 1 1 0本入",
"価格": "¥100外"
},
{
"商品名": "抗菌線棒(バガスパルブ配",
"価格": "¥100外"
}
],
"小計点数": "6点",
"小計金額": "¥800",
"税込ポイント": "",
"各税別": {
"10%税抜対象額": "¥600",
"10%税率額": "¥60",
"8%税抜対象額": "¥200",
"8%税率額": "¥16"
},
"合計金額": "¥876",
"ビザ/マスター金額": "¥876",
"お釣り金額": "¥0",
"注意事項": "※印は軽減税率適用商品です。",
"登録番号": "T7240001022681",
"QRコード1": "",
"QRコード2": "",
"QRコード3": "",
"クレジット売上票情報": "",
"カード会社": "カイツ",
"会員番号": "104",
"ビザ/マスター": "",
"有効期限": "429769XXXXXXXX5489-NFC",
"取扱い日": "2025年06月22日",
"承認番号": "0705755",
"伝票番号": "05755",
"取引内容": "売上(オンライン)",
"支払区分": "一括",
"取引金額": "¥876",
"端末番号": "4971162449343",
"ATC": "011C",
"カードシークス番号": "00",
"AID": "A00000000031010",
"APL名": "VISACREDIT",
"店舗番号": "008943",
"レジット番号": "1841"
}
```
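A cheap downstream validation for extractions like this is checking that the tax breakdown is arithmetically consistent. An illustrative check, with the values hard-coded from the DAISO JSON above (各税別, 小計金額, 合計金額):

```python
# Values taken from the DAISO extraction above.
subtotal_10, tax_10 = 600, 60   # 10% tax-exclusive base and tax
subtotal_8, tax_8 = 200, 16     # 8% reduced-rate base and tax
assert tax_10 == subtotal_10 * 10 // 100
assert tax_8 == subtotal_8 * 8 // 100
assert subtotal_10 + subtotal_8 == 800                       # 小計金額
assert subtotal_10 + tax_10 + subtotal_8 + tax_8 == 876      # 合計金額
```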
### Advanced Usage - Custom Extraction
```python
# Custom extraction with specific requirements
custom_prompt = """Parse this Japanese receipt and extract only the following information in JSON format:
- Transaction amount (取引金額)
- Date and time (日付・時間)
- Store information (店舗情報)
- Payment method details (支払い方法)
Use Japanese keys and preserve exact formatting."""
messages = [
    {
        'role': 'system',
        'content': [{'type': 'text', 'text': custom_prompt}]
    },
    {
        'role': 'user',
        'content': [
            {'type': 'text', 'text': 'Extract the requested information from this receipt.'},
            {'type': 'image', 'image': image}
        ]
    }
]
inputs = processor.apply_chat_template(messages, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=512)
response = processor.decode(outputs[0], skip_special_tokens=True)
print(response)
```
### Batch Processing
```python
import os
from pathlib import Path
def process_receipt_batch(image_folder, output_file):
    """Process multiple receipts and save results"""
    results = []
    for image_path in Path(image_folder).glob("*.jpg"):
        image = Image.open(image_path)
        # Use the standard system prompt for full extraction
        messages = [
            {'role': 'system', 'content': [{'type': 'text', 'text': system_prompt}]},
            {'role': 'user', 'content': [
                {'type': 'text', 'text': 'Parse this receipt.'},
                {'type': 'image', 'image': image}
            ]}
        ]
        inputs = processor.apply_chat_template(messages, return_tensors="pt")
        outputs = model.generate(**inputs, max_new_tokens=1024)
        response = processor.decode(outputs[0], skip_special_tokens=True)
        results.append({
            "filename": image_path.name,
            "extracted_data": response
        })
    # Save results
    import json
    with open(output_file, 'w', encoding='utf-8') as f:
        json.dump(results, f, ensure_ascii=False, indent=2)

# Process all receipts in a folder
process_receipt_batch("./receipts/", "extracted_data.json")
```
## Training Details
### Training Data
- **Primary Dataset**: Japanese-Mobile-Receipt-OCR-1.3K dataset
- **Data Size**: 1,300+ receipt images
- **Data Sources**: Various Japanese retailers, restaurants, and service providers
- **Annotation**: Manual annotation of key information fields and structured extraction
### Training Process
- **Fine-tuning Method**: LoRA (Low-Rank Adaptation)
- **Base Model**: liquidai/lfm2-vl-450m
- **Training Framework**: PyTorch + Transformers
- **Optimization**: AdamW optimizer
- **Training Time**: Approximately 48 hours on V100 GPUs
### Key Features Learned
- **Structured JSON extraction** with Japanese field names and hierarchical organization
- **Exact text preservation** including full-width characters, symbols, and formatting
- **Multi-type receipt support**: Banking transactions, retail stores, payment systems
- **Comprehensive product parsing**: Item lists with codes, names, and individual pricing
- **Advanced tax calculation extraction**: Multiple tax rates (8%, 10%), tax-exempt items, reduced tax rate indicators
- **Payment method details**: Credit card information, transaction codes, terminal data
- **Store and business information**: Contact details, registration numbers, URLs
- **Transaction metadata**: Receipt numbers, cashier info, timestamps, approval codes
- **Promotional content extraction**: Notices, QR codes, loyalty program information
- **Privacy-aware data handling**: Proper masking of sensitive account information
- **Japanese retail format understanding**: DAISO, convenience stores, department stores
## Evaluation
### Benchmarks
The model has been evaluated on a held-out test set of Japanese receipts across various categories including:
- **Banking receipts** (銀行レシート) - Seven Bank, Japan Post Bank, ATM transactions
- **Payment system receipts** (決済システム) - PayPay, LINE Pay, Rakuten Pay
- **Retail store receipts** (小売店レシート) - DAISO, convenience stores (7-Eleven, Lawson), supermarkets
- **Department store receipts** (デパートレシート) - Complex itemized purchases with multiple tax rates
- **Restaurant receipts** (レストランレシート) - Food service with reduced tax rates
- **Transportation receipts** (交通レシート) - Train tickets, bus passes, parking
- **Credit card receipts** (クレジットカードレシート) - Detailed payment processing information
## Limitations
### Known Limitations
- **Image Quality**: Performance degrades with blurry, damaged, or low-resolution images
- **Handwritten Receipts**: Limited accuracy on handwritten receipts
- **Regional Variations**: Optimized for standard Japanese receipt formats
- **Language Mixing**: May struggle with receipts containing mixed scripts
- **Old Receipt Formats**: Older or non-standard receipt layouts may reduce accuracy
### Bias Considerations
- **Training Data Bias**: Model performance may vary across different Japanese regions
- **Retailer Bias**: Better performance on common retail chains represented in training data
- **Format Bias**: Optimized for modern thermal printer receipts
## Ethical Considerations
### Privacy
- **Personal Information**: Model may extract personal information from receipts
- **Data Handling**: Users should implement appropriate privacy safeguards
- **Compliance**: Ensure compliance with local data protection regulations
### Security
- **Sensitive Data**: Receipts may contain sensitive financial information
- **Access Control**: Implement proper access controls in production environments
## Citation
If you use this model in your research or applications, please cite:
```bibtex
@misc{japanese-receipt-vl-lfm2-450m,
title={Japanese Receipt VL lfm2-450M: A Specialized Vision-Language Model for Japanese Receipt Understanding},
author={sabaridsnfuji},
year={2024},
publisher={Hugging Face},
url={https://huggingface.co/sabaridsnfuji/Japanese-Receipt-VL-lfm2-450M}
}
```
### Dataset Reference
If you use this model or its underlying dataset, please also cite the original dataset paper:
```bibtex
@article{japanese-mobile-receipt-ocr-2024,
title={Japanese-Mobile-Receipt-OCR-1.3K: A Comprehensive Dataset Analysis and Fine-tuned Vision-Language Model for Structured Receipt Data Extraction},
author={Sabari Nathan},
year={2024},
doi={10.21203/rs.3.rs-7357197/v1},
url={https://doi.org/10.21203/rs.3.rs-7357197/v1},
note={Preprint}
}
```
### Base Model Reference
Please also cite the base LFM2-VL model:
```bibtex
@article{lfm2-vl-2024,
title={LFM2-VL: Large Foundation Model for Vision-Language Tasks},
author={LiquidAI},
year={2024},
publisher={LiquidAI},
url={https://huggingface.co/liquidai/lfm2-vl-450m}
}
```
## License
This model is released under the Apache 2.0 License. Please ensure compliance with the license terms when using this model.
## Acknowledgments
- **Base Model**: LiquidAI LFM2-VL team
- **Training Infrastructure**: [Your organization/platform]
- **Dataset Contributors**: Japanese receipt data annotators
- **Community**: Hugging Face community for tools and support
## Contact
For questions, issues, or collaboration opportunities, please reach out through:
- GitHub Issues: [Your GitHub repository]
- Hugging Face Discussions: [Model discussion page]
- Email: [Your contact email]
## Model Card Authors
- sabaridsnfuji
## Model Card Contact
For questions about this model card, please contact the model authors.
|
0xaoyama/blockassist-bc-muscular_zealous_gorilla_1755754843
|
0xaoyama
| 2025-08-21T05:41:15Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"muscular zealous gorilla",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-21T05:41:04Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- muscular zealous gorilla
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mveroe/Qwen2.5-1.5B_DS-Qwen-1.5B_0p0_1p0_0p0_sft
|
mveroe
| 2025-08-21T05:40:21Z | 3 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-1.5B",
"base_model:finetune:Qwen/Qwen2.5-1.5B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-20T15:21:28Z |
---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2.5-1.5B
tags:
- generated_from_trainer
model-index:
- name: Qwen2.5-1.5B_DS-Qwen-1.5B_0p0_1p0_0p0_sft
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Qwen2.5-1.5B_DS-Qwen-1.5B_0p0_1p0_0p0_sft
This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B](https://huggingface.co/Qwen/Qwen2.5-1.5B) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 32
- optimizer: Adafactor (no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.53.2
- Pytorch 2.7.0+cu126
- Datasets 4.0.0
- Tokenizers 0.21.2
|
ntkhoi/Qwen3-4B-Medical-SFT-DPO-0820
|
ntkhoi
| 2025-08-21T05:31:51Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-21T05:30:44Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
llencia/blockassist-bc-wiry_wise_hedgehog_1755753793
|
llencia
| 2025-08-21T05:23:37Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"wiry wise hedgehog",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-21T05:23:33Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- wiry wise hedgehog
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
AdaptLLM/biomed-gemma-3-4b-it
|
AdaptLLM
| 2025-08-21T05:21:33Z | 9 | 1 |
transformers
|
[
"transformers",
"safetensors",
"gemma3",
"image-text-to-text",
"multimodal",
"biology",
"medical",
"conversational",
"en",
"dataset:AdaptLLM/biomed-visual-instructions",
"arxiv:2411.19930",
"arxiv:2309.09530",
"base_model:google/gemma-3-4b-it",
"base_model:finetune:google/gemma-3-4b-it",
"license:gemma",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2025-07-06T11:28:42Z |
---
license: gemma
language:
- en
pipeline_tag: image-text-to-text
tags:
- multimodal
- biology
- medical
library_name: transformers
base_model:
- google/gemma-3-4b-it
datasets:
- AdaptLLM/biomed-visual-instructions
---
# Adapting Multimodal Large Language Models to Domains via Post-Training (EMNLP 2025)
This repo contains the **biomedicine MLLM developed from gemma-3-4b-it** in our paper: [On Domain-Adaptive Post-Training for Multimodal Large Language Models](https://huggingface.co/papers/2411.19930). The corresponding training dataset is in [biomed-visual-instructions](https://huggingface.co/datasets/AdaptLLM/biomed-visual-instructions).
The main project page is: [Adapt-MLLM-to-Domains](https://huggingface.co/AdaptLLM/Adapt-MLLM-to-Domains)
## 1. To Chat with AdaMLLM
Our model architecture aligns with the base model: gemma-3-4b-it. We provide a usage example below, and you may refer to the official [google/gemma-3-4b-it](https://huggingface.co/google/gemma-3-4b-it) for more advanced usage instructions.
**Note:** For AdaMLLM, always place the image at the beginning of the input instruction in the messages.
<details>
<summary> Click to expand </summary>
Below are some code snippets to help you get started quickly with running the model. First, install the Transformers library; Gemma 3 is supported starting from transformers 4.50.0.
```sh
$ pip install -U transformers
```
Then, copy the snippet from the section that is relevant for your use case.
#### Running with the `pipeline` API
You can initialize the model and processor for inference with `pipeline` as follows.
```python
from transformers import pipeline
import torch
pipe = pipeline(
    "image-text-to-text",
    model="AdaptLLM/biomed-gemma-3-4b-it",
    device="cuda",
    torch_dtype=torch.bfloat16
)
```
With instruction-tuned models, you first need to process your inputs with the chat template. Then you can pass the result to the pipeline.
```python
messages = [
    {
        "role": "system",
        "content": [{"type": "text", "text": "You are a helpful assistant."}]
    },
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/p-blog/candy.JPG"},
            {"type": "text", "text": "What animal is on the candy?"}
        ]
    }
]
output = pipe(text=messages, max_new_tokens=200)
print(output[0]["generated_text"][-1]["content"])
```
</details>
## 2. Domain-Specific Benchmarks
We provide [biomed-VQA-benchmark](https://huggingface.co/datasets/AdaptLLM/biomed-VQA-benchmark) to evaluate any MLLMs.
## 3. To Reproduce this Domain-Adapted MLLM
Using our training data, [biomed-visual-instructions](https://huggingface.co/datasets/AdaptLLM/biomed-visual-instructions), you can easily reproduce our models based on the [LlamaFactory](https://github.com/hiyouga/LLaMA-Factory) repository.
For reference, we train from google/gemma-3-4b-it for 1 epoch with a learning rate of 1e-5, and a global batch size of 128.
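As a rough sketch only (the key names below mirror common LlamaFactory configs and are assumptions; consult that repository for the exact schema), the reference hyperparameters above correspond to settings like:

```python
# Hypothetical LlamaFactory-style settings; the batch split is one possible way
# to reach a global batch size of 128 (8 GPUs x 4 per device x 4 accumulation).
train_config = {
    "model_name_or_path": "google/gemma-3-4b-it",
    "dataset": "AdaptLLM/biomed-visual-instructions",
    "num_train_epochs": 1,
    "learning_rate": 1e-5,
    "per_device_train_batch_size": 4,
    "gradient_accumulation_steps": 4,
}
```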
## Citation
If you find our work helpful, please cite us.
[Adapt MLLM to Domains](https://huggingface.co/papers/2411.19930) (EMNLP 2025 Findings)
```bibtex
@article{adamllm,
title={On Domain-Adaptive Post-Training for Multimodal Large Language Models},
author={Cheng, Daixuan and Huang, Shaohan and Zhu, Ziyu and Zhang, Xintong and Zhao, Wayne Xin and Luan, Zhongzhi and Dai, Bo and Zhang, Zhenliang},
journal={arXiv preprint arXiv:2411.19930},
year={2024}
}
```
[Adapt LLM to Domains](https://huggingface.co/papers/2309.09530) (ICLR 2024)
```bibtex
@inproceedings{
cheng2024adapting,
title={Adapting Large Language Models via Reading Comprehension},
author={Daixuan Cheng and Shaohan Huang and Furu Wei},
booktitle={The Twelfth International Conference on Learning Representations},
year={2024},
url={https://openreview.net/forum?id=y886UXPEZ0}
}
```
|
trunghieuma22/mistral-7b-finetuned
|
trunghieuma22
| 2025-08-21T05:18:22Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"base_model:unsloth/mistral-7b-instruct-v0.2-bnb-4bit",
"base_model:finetune:unsloth/mistral-7b-instruct-v0.2-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-21T05:18:01Z |
---
base_model: unsloth/mistral-7b-instruct-v0.2-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** trunghieuma22
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-7b-instruct-v0.2-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
llencia/blockassist-bc-wiry_wise_hedgehog_1755753408
|
llencia
| 2025-08-21T05:17:13Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"wiry wise hedgehog",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-21T05:17:08Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- wiry wise hedgehog
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
MomlessTomato/kanan-matsuura
|
MomlessTomato
| 2025-08-21T05:12:33Z | 24 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:cagliostrolab/animagine-xl-3.0",
"base_model:adapter:cagliostrolab/animagine-xl-3.0",
"license:mit",
"region:us"
] |
text-to-image
| 2024-08-30T02:15:23Z |
---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: >-
    high quality, defined pupil, looking at viewer, rounded pupil, defined iris,
    (soft iris:1.2), torso shadow, ponytail,
  parameters:
    negative_prompt: >-
      bad_anatomy, deformation, amputation, deformity, deformed_nipples,
      duplicated_torso, deformed_torso, long_torso, large_torso,
      unproportioned_torso, (deformed_pussy:1.2), (deformed_hands:1.2),
      unproportioned_eyes, unproportioned_head, small_head, duplicated_nose,
      big_nose, fusioned_clothes, fusioned_arms, undefined_limbs, divided_pussy,
      red_pussy, duplicated_pussy, deformed_anus, deformed_pussy,
  output:
    url: images/icon_1.png
base_model: Linaqruf/animagine-xl-3.0
instance_prompt: id_kanan_matsuura
license: mit
---
# Kanan Matsuura
<Gallery />
## Model description
This model was trained to generate high quality images based on SIFAS cards.
To achieve better quality, use hako-mikan's Regional Prompter extension with Latent mode, which changes how Stable Diffusion isolates the LoRA and yields a significant improvement.
## Trigger words
You should use `id_kanan_matsuura` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/theidoldaily/kanan-matsuura/tree/main) them in the Files & versions tab.
|
llencia/blockassist-bc-wiry_wise_hedgehog_1755753005
|
llencia
| 2025-08-21T05:10:34Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"wiry wise hedgehog",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-21T05:10:26Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- wiry wise hedgehog
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
0xaoyama/blockassist-bc-muscular_zealous_gorilla_1755751896
|
0xaoyama
| 2025-08-21T04:52:08Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"muscular zealous gorilla",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-21T04:51:57Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- muscular zealous gorilla
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
0xaoyama/blockassist-bc-muscular_zealous_gorilla_1755751114
|
0xaoyama
| 2025-08-21T04:39:07Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"muscular zealous gorilla",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-21T04:38:56Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- muscular zealous gorilla
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Krish356/qwen3-coder-react-lora
|
Krish356
| 2025-08-21T04:25:18Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen3_moe",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-21T04:24:38Z |
---
base_model: unsloth/qwen3-coder-30b-a3b-instruct
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3_moe
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Krish356
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen3-coder-30b-a3b-instruct
This qwen3_moe model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1755750148
|
IvanJAjebu
| 2025-08-21T04:23:37Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thorny slender capybara",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-21T04:23:28Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thorny slender capybara
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mradermacher/Llama-3.1-8B-Instuct-Uz-GGUF
|
mradermacher
| 2025-08-21T04:18:26Z | 34 | 1 |
transformers
|
[
"transformers",
"gguf",
"llama",
"text-generation-inference",
"summarization",
"translation",
"question-answering",
"uz",
"en",
"dataset:yahma/alpaca-cleaned",
"dataset:behbudiy/alpaca-cleaned-uz",
"dataset:behbudiy/translation-instruction",
"base_model:behbudiy/Llama-3.1-8B-Instruct-Uz",
"base_model:quantized:behbudiy/Llama-3.1-8B-Instruct-Uz",
"license:llama3.1",
"endpoints_compatible",
"region:us",
"conversational"
] |
question-answering
| 2024-09-17T03:18:19Z |
---
base_model: behbudiy/Llama-3.1-8B-Instruct-Uz
datasets:
- yahma/alpaca-cleaned
- behbudiy/alpaca-cleaned-uz
- behbudiy/translation-instruction
language:
- uz
- en
library_name: transformers
license: llama3.1
mradermacher:
  readme_rev: 1
quantized_by: mradermacher
tags:
- llama
- text-generation-inference
- summarization
- translation
- question-answering
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/behbudiy/Llama-3.1-8B-Instruct-Uz
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Llama-3.1-8B-Instuct-Uz-GGUF).***
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
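As a quick, hedged example (not part of the original README): a single-file quant from the table below can be loaded with `llama-cpp-python`; the `filename` here is an assumption matching the Q4_K_M row:

```python
from llama_cpp import Llama

# Downloads the chosen quant from this repo via huggingface-hub, then runs it.
llm = Llama.from_pretrained(
    repo_id="mradermacher/Llama-3.1-8B-Instuct-Uz-GGUF",
    filename="Llama-3.1-8B-Instuct-Uz.Q4_K_M.gguf",  # assumed: pick any quant below
)
out = llm("O'zbekiston poytaxti qayer?", max_tokens=64)
print(out["choices"][0]["text"])
```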
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Instuct-Uz-GGUF/resolve/main/Llama-3.1-8B-Instuct-Uz.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Instuct-Uz-GGUF/resolve/main/Llama-3.1-8B-Instuct-Uz.IQ3_XS.gguf) | IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Instuct-Uz-GGUF/resolve/main/Llama-3.1-8B-Instuct-Uz.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Instuct-Uz-GGUF/resolve/main/Llama-3.1-8B-Instuct-Uz.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Instuct-Uz-GGUF/resolve/main/Llama-3.1-8B-Instuct-Uz.IQ3_M.gguf) | IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Instuct-Uz-GGUF/resolve/main/Llama-3.1-8B-Instuct-Uz.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Instuct-Uz-GGUF/resolve/main/Llama-3.1-8B-Instuct-Uz.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Instuct-Uz-GGUF/resolve/main/Llama-3.1-8B-Instuct-Uz.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Instuct-Uz-GGUF/resolve/main/Llama-3.1-8B-Instuct-Uz.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Instuct-Uz-GGUF/resolve/main/Llama-3.1-8B-Instuct-Uz.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Instuct-Uz-GGUF/resolve/main/Llama-3.1-8B-Instuct-Uz.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Instuct-Uz-GGUF/resolve/main/Llama-3.1-8B-Instuct-Uz.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Instuct-Uz-GGUF/resolve/main/Llama-3.1-8B-Instuct-Uz.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Instuct-Uz-GGUF/resolve/main/Llama-3.1-8B-Instuct-Uz.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Instuct-Uz-GGUF/resolve/main/Llama-3.1-8B-Instuct-Uz.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
khopilot/khmer-tokenizer-v7
|
khopilot
| 2025-08-21T03:57:49Z | 0 | 0 |
sentencepiece
|
[
"sentencepiece",
"khmer_tokenizer_v7",
"tokenizer",
"khmer",
"subword",
"feature-extraction",
"km",
"license:apache-2.0",
"model-index",
"region:us"
] |
feature-extraction
| 2025-08-21T01:56:47Z |
---
language: km
license: apache-2.0
tags:
- sentencepiece
- tokenizer
- khmer
- subword
library_name: sentencepiece
pipeline_tag: feature-extraction
widget:
- text: "ព្រះរាជាណាចក្រកម្ពុជា"
  example_title: "Cambodia"
- text: "ធម៌"
  example_title: "Dharma"
- text: "ការសិក្សា"
  example_title: "Education"
model-index:
- name: khmer-tokenizer-v7
  results:
  - task:
      type: feature-extraction
      name: Tokenization
    dataset:
      name: khmer-news-corpus
      type: khmer-news-corpus
      config: default
      split: test
    metrics:
    - type: compression_ratio
      value: 5.27
      name: Compression Ratio
    - type: tokens_per_character
      value: 0.1897
      name: Tokens Per Character
    - type: vocabulary_coverage
      value: 90.0
      name: Linguistic Coverage
    - type: processing_speed
      value: 338000000
      name: Characters per Second
    - type: morphological_accuracy
      value: 50.0
      name: Morphological Accuracy
    - type: sanskrit_pali_accuracy
      value: 100.0
      name: Sanskrit/Pali Accuracy
---
# Khmer SentencePiece Tokenizer
A production-ready SentencePiece tokenizer for the Khmer (Cambodian) language with a 16k vocabulary, optimized for modern NLP pipelines.
## Direct Usage from HuggingFace 🤗
```python
from transformers import AutoTokenizer
# Load directly from HuggingFace
tokenizer = AutoTokenizer.from_pretrained("khopilot/khmer-tokenizer-v7")
# Tokenize text
text = "ព្រះរាជាណាចក្រកម្ពុជា"
encoded = tokenizer(text, return_tensors="pt")
# Get tokens
tokens = tokenizer.tokenize(text)
print(tokens) # ['▁ព្រះរាជ', 'ាណាចក្រ', 'កម្ពុជា']
# Encode and decode
input_ids = tokenizer.encode(text)
decoded = tokenizer.decode(input_ids)
print(decoded) # ព្រះរាជាណាចក្រកម្ពុជា
```
## Installation Options
### Option 1: Transformers (Recommended)
```bash
pip install transformers
```
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("khopilot/khmer-tokenizer-v7")
```
### Option 2: SentencePiece Direct
```bash
pip install sentencepiece huggingface-hub
```
```python
from huggingface_hub import hf_hub_download
import sentencepiece as spm
model_path = hf_hub_download(
    repo_id="khopilot/khmer-tokenizer-v7",
    filename="tokenizer.model"
)
sp = spm.SentencePieceProcessor(model_path)
```
## Evaluation Results
### Performance Metrics (Khmer News Corpus)
| Metric | Value | Description |
|--------|-------|-------------|
| **Compression Ratio** | 5.27x | Characters compressed per token |
| **Tokens/Character** | 0.1897 | Average tokens per character |
| **Vocabulary Coverage** | 90% | Percentage of linguistic phenomena covered |
| **Processing Speed** | 338M chars/sec | Throughput on CPU |
| **Model Size** | 659KB | Disk space required |
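
The first two rows are reciprocals of each other: compression ratio ≈ 1 / tokens-per-character (1 / 0.1897 ≈ 5.27). An illustrative way to measure both on your own text, reusing the tokenizer loaded as shown above:

```python
text = "ព្រះរាជាណាចក្រកម្ពុជា"
tokens = tokenizer.tokenize(text)          # tokenizer from the usage section above
tpc = len(tokens) / len(text)              # tokens per character
compression = len(text) / len(tokens)      # characters per token
print(f"TPC={tpc:.4f}, compression={compression:.2f}x")
```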
### Linguistic Evaluation (Multi-Domain Khmer Corpus)
| Category | Accuracy | Test Size |
|----------|----------|-----------|
| **Sanskrit/Pali Terms** | 100% | 50 terms |
| **Morphological Segmentation** | 50% | 100 compounds |
| **Consonant Clusters** | 100% | 30 patterns |
| **Number Handling** | 95% | 50 examples |
| **Mixed Script** | 88% | 40 samples |
### Domain-Specific Performance
| Domain | Token Efficiency | Quality Score |
|--------|-----------------|---------------|
| **News Articles** | 0.2585 TPC | ⭐⭐⭐⭐⭐ |
| **Religious Texts** | 0.2103 TPC | ⭐⭐⭐⭐⭐ |
| **Technical Docs** | 0.2891 TPC | ⭐⭐⭐⭐ |
| **Social Media** | 0.3012 TPC | ⭐⭐⭐⭐ |
| **Literature** | 0.2234 TPC | ⭐⭐⭐⭐ |
## Tokenization Examples
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("khopilot/khmer-tokenizer-v7")
# Example 1: Religious term
tokenizer.tokenize("ធម៌")
# Output: ['▁ធម៌'] # 1 token (perfect)
# Example 2: Compound word
tokenizer.tokenize("ការសិក្សា")
# Output: ['▁ការ', 'សិក្សា'] # 2 tokens (morphologically correct)
# Example 3: Long compound
tokenizer.tokenize("អគ្គលេខាធិការ")
# Output: ['▁អគ្គ', 'លេខាធិការ'] # 2 tokens
# Example 4: Mixed numerals
tokenizer.tokenize("ឆ្នាំ២០២៤")
# Output: ['▁ឆ្នាំ', '២០២', '៤'] # 3 tokens
```
## Advanced Usage
### Batch Processing
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("khopilot/khmer-tokenizer-v7")
texts = [
    "ព្រះរាជាណាចក្រកម្ពុជា",
    "ធម៌",
    "ការសិក្សា"
]
# Batch encode
encoded = tokenizer(
    texts,
    padding=True,
    truncation=True,
    max_length=512,
    return_tensors="pt"
)
print(encoded["input_ids"].shape) # torch.Size([3, max_length])
```
### With PyTorch DataLoader
```python
import torch
from torch.utils.data import Dataset, DataLoader
from transformers import AutoTokenizer
class KhmerDataset(Dataset):
    def __init__(self, texts, tokenizer, max_length=512):
        self.texts = texts
        self.tokenizer = tokenizer
        self.max_length = max_length

    def __len__(self):
        return len(self.texts)

    def __getitem__(self, idx):
        encoding = self.tokenizer(
            self.texts[idx],
            truncation=True,
            padding="max_length",
            max_length=self.max_length,
            return_tensors="pt"
        )
        return {
            "input_ids": encoding["input_ids"].squeeze(),
            "attention_mask": encoding["attention_mask"].squeeze()
        }
tokenizer = AutoTokenizer.from_pretrained("khopilot/khmer-tokenizer-v7")
dataset = KhmerDataset(texts, tokenizer)
dataloader = DataLoader(dataset, batch_size=32, shuffle=True)
```
### For Language Models
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("khopilot/khmer-tokenizer-v7")
# Add special tokens if needed
tokenizer.add_special_tokens({
    "pad_token": "<pad>",
    "eos_token": "</s>",
    "bos_token": "<s>",
    "unk_token": "<unk>"
})
# Use with any model
text = "ព្រះរាជាណាចក្រកម្ពុជា"
inputs = tokenizer(text, return_tensors="pt")
# Ready for model.generate() or model.forward()
```
## Model Configuration
```yaml
Architecture: SentencePiece Unigram
Vocabulary Size: 16,000
Character Coverage: 99.99%
Max Piece Length: 8
Split by Unicode Script: Yes
Byte Fallback: Enabled
Special Tokens: <unk>, <s>, </s>, <pad>, <MASK>, <CLS>, <SEP>
```
## Training Details
- **Training Data:** 2.6M characters of diverse Khmer text
- **Data Sources:** News, religious texts, technical docs, social media, literature
- **Special Weighting:** Sanskrit/Pali terms (3x), morphological patterns (2x)
- **Optimization:** Natural frequency distribution, no artificial repetition
## File Structure
```
khopilot/khmer-tokenizer-v7/
├── tokenizer.model # SentencePiece model (659KB)
├── tokenizer.vocab # Vocabulary file
├── tokenizer_config.json # HuggingFace config
├── special_tokens_map.json # Special tokens mapping
└── config.json # Model metadata
```
## Citation
```bibtex
@misc{khmer-tokenizer-v7-2024,
author = {Niko},
title = {Khmer SentencePiece Tokenizer v7},
year = {2024},
publisher = {HuggingFace},
url = {https://huggingface.co/khopilot/khmer-tokenizer-v7}
}
```
## License
Apache 2.0
---
**Support:** Open an issue on [HuggingFace](https://huggingface.co/khopilot/khmer-tokenizer-v7/discussions) | **Downloads:** 659KB model size
|
kojeklollipop/blockassist-bc-spotted_amphibious_stork_1755746711
|
kojeklollipop
| 2025-08-21T03:53:33Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"spotted amphibious stork",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-21T03:53:29Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- spotted amphibious stork
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
rourkerhotmail1/blockassist-bc-stalking_scruffy_walrus_1755745749
|
rourkerhotmail1
| 2025-08-21T03:43:59Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stalking scruffy walrus",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-21T03:43:56Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stalking scruffy walrus
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Coaster41/patchtst-sae-flatten-8-4.0-expe
|
Coaster41
| 2025-08-21T03:38:14Z | 0 | 0 |
saelens
|
[
"saelens",
"region:us"
] | null | 2025-08-18T06:16:34Z |
---
library_name: saelens
---
# SAEs for use with the SAELens library
This repository contains the following SAEs:
- blocks.0.hook_mlp_out
Load these SAEs using SAELens as below:
```python
from sae_lens import SAE
sae = SAE.from_pretrained("Coaster41/patchtst-sae-flatten-8-4.0-expe", "blocks.0.hook_mlp_out")  # the SAE listed above
```
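A hedged follow-up sketch of using the loaded SAE (assuming `SAE.from_pretrained` returns the SAE object as in the snippet above, with `encode`/`decode` method names as in current SAELens releases; the random tensor is a stand-in for real `blocks.0.hook_mlp_out` activations):

```python
import torch

acts = torch.randn(4, sae.cfg.d_in)   # dummy activations with the SAE's input width
features = sae.encode(acts)           # sparse feature activations
recon = sae.decode(features)          # reconstruction of the input activations
print(features.shape, recon.shape)
```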
|
unitova/blockassist-bc-zealous_sneaky_raven_1755744696
|
unitova
| 2025-08-21T03:17:41Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"zealous sneaky raven",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-21T03:17:38Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- zealous sneaky raven
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
indoempatnol/blockassist-bc-fishy_wary_swan_1755744161
|
indoempatnol
| 2025-08-21T03:09:34Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"fishy wary swan",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-21T03:09:30Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- fishy wary swan
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1755743308
|
IvanJAjebu
| 2025-08-21T02:29:38Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thorny slender capybara",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-21T02:29:29Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thorny slender capybara
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
original-Clip-Sophie-Rain-Viral-video-Clip/New.full.videos.Sophie.Rain.Spiderman.Viral.Video.Official.Tutorial
|
original-Clip-Sophie-Rain-Viral-video-Clip
| 2025-08-21T02:13:39Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-21T02:13:27Z |
|
unitova/blockassist-bc-zealous_sneaky_raven_1755740909
|
unitova
| 2025-08-21T02:13:31Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"zealous sneaky raven",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-21T02:13:27Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- zealous sneaky raven
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
SFWolf/llama3.2_3B_news_merged
|
SFWolf
| 2025-08-21T01:51:53Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-08-21T01:51:53Z |
---
license: apache-2.0
---
|
ZiadWael/medgemma3-4b-it-adapter-QA-MCQ-V1
|
ZiadWael
| 2025-08-21T01:22:08Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-21T01:22:03Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
lisaozill03/blockassist-bc-rugged_prickly_alpaca_1755735525
|
lisaozill03
| 2025-08-21T00:43:01Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"rugged prickly alpaca",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-21T00:42:57Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- rugged prickly alpaca
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
X-iZhang/Med-CXRGen-F
|
X-iZhang
| 2025-08-21T00:18:42Z | 3,429 | 1 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"libra",
"text-generation",
"RRG",
"Radiology Report Generation",
"Chest X-ray",
"Multimodal Large Language Models",
"image-text-to-text",
"dataset:StanfordAIMI/rrg24-shared-task-bionlp",
"arxiv:2412.04954",
"base_model:liuhaotian/llava-v1.5-7b",
"base_model:finetune:liuhaotian/llava-v1.5-7b",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2024-12-31T23:05:03Z |
---
license: apache-2.0
base_model:
- liuhaotian/llava-v1.5-7b
base_model_relation: finetune
pipeline_tag: image-text-to-text
tags:
- RRG
- Radiology Report Generation
- Chest X-ray
- Multimodal Large Language Models
library_name: transformers
datasets:
- StanfordAIMI/rrg24-shared-task-bionlp
---
# **Med-CXRGen-F Model Card**
**Task**: Radiology Report Generation – Findings section (RRG Shared Task)
## Paper and Resources
For details on Med-CXRGen-F, including its architecture, training strategy, and evaluation, please refer to the following resources:
- 📘 **Paper:** [Gla-AI4BioMed at RRG24: Visual Instruction-tuned Adaptation for Radiology Report Generation](https://arxiv.org/abs/2412.04954)
- 💻 **Code Repository:** [GitHub: Med-CXRGen](https://github.com/X-iZhang/RRG-BioNLP-ACL2024)
---
## How to Cite ✒️
If you use this model in academic or research contexts, please cite:
```bibtex
@inproceedings{zhang-etal-2024-gla,
title = "Gla-{AI}4{B}io{M}ed at {RRG}24: Visual Instruction-tuned Adaptation for Radiology Report Generation",
author = "Zhang, Xi and
Meng, Zaiqiao and
Lever, Jake and
Ho, Edmond S.L.",
editor = "Demner-Fushman, Dina and
Ananiadou, Sophia and
Miwa, Makoto and
Roberts, Kirk and
Tsujii, Junichi",
booktitle = "Proceedings of the 23rd Workshop on Biomedical Natural Language Processing",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.bionlp-1.54/",
doi = "10.18653/v1/2024.bionlp-1.54",
pages = "624--634",
}
```
|
helmutsukocok/blockassist-bc-loud_scavenging_kangaroo_1755733589
|
helmutsukocok
| 2025-08-21T00:13:24Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"loud scavenging kangaroo",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-21T00:13:21Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- loud scavenging kangaroo
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
hle2025/qwen2.5_7b_gtpo_step40
|
hle2025
| 2025-08-21T00:11:46Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-21T00:10:29Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
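Since the author has not yet filled this section in, here is a minimal, hedged loading sketch based only on the repo id and its `qwen2` text-generation tags (not an author-provided snippet):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumption: a standard causal-LM checkpoint, as suggested by the repo tags.
model_id = "hle2025/qwen2.5_7b_gtpo_step40"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0], skip_special_tokens=True))
```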
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1755734823
|
IvanJAjebu
| 2025-08-21T00:08:11Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thorny slender capybara",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-21T00:08:02Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thorny slender capybara
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
yaelahnal/blockassist-bc-mute_clawed_crab_1755730158
|
yaelahnal
| 2025-08-20T22:50:27Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"mute clawed crab",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T22:50:09Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- mute clawed crab
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
calegpedia/blockassist-bc-stealthy_slimy_rooster_1755727787
|
calegpedia
| 2025-08-20T22:36:50Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stealthy slimy rooster",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T22:36:47Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stealthy slimy rooster
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
backt/nasdxlv100
|
backt
| 2025-08-20T22:22:05Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-08-20T22:12:23Z |
---
license: apache-2.0
---
|
ggml-org/Kimi-VL-A3B-Thinking-2506-GGUF
|
ggml-org
| 2025-08-20T22:18:32Z | 0 | 2 | null |
[
"gguf",
"base_model:moonshotai/Kimi-VL-A3B-Thinking-2506",
"base_model:quantized:moonshotai/Kimi-VL-A3B-Thinking-2506",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-20T22:12:29Z |
---
base_model:
- moonshotai/Kimi-VL-A3B-Thinking-2506
---
Original model: https://huggingface.co/moonshotai/Kimi-VL-A3B-Thinking-2506
Support was added in this PR: https://github.com/ggml-org/llama.cpp/pull/15458
|
chainway9/blockassist-bc-untamed_quick_eel_1755722789
|
chainway9
| 2025-08-20T21:14:01Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"untamed quick eel",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T21:13:58Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- untamed quick eel
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
weijiang99/clinvarbert
|
weijiang99
| 2025-08-20T20:49:59Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-generation",
"biomedical",
"clinical",
"variant-classification",
"genetics",
"fine-tuned",
"text-classification",
"en",
"dataset:clinvar",
"base_model:dmis-lab/biobert-large-cased-v1.1",
"base_model:finetune:dmis-lab/biobert-large-cased-v1.1",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-19T20:10:43Z |
---
library_name: transformers
tags:
- biomedical
- clinical
- variant-classification
- genetics
- bert
- fine-tuned
language:
- en
license: apache-2.0
base_model: dmis-lab/biobert-large-cased-v1.1
datasets:
- clinvar
pipeline_tag: text-classification
---
# ClinVarBERT
A BERT model fine-tuned for clinical variant interpretation and classification tasks, based on BioBERT-Large.
## Model Details
### Model Description
ClinVarBERT-Large is a domain-specific language model fine-tuned from BioBERT-Large for understanding and classifying genetic variant descriptions and clinical interpretations. The model has been trained to understand the nuanced language used in clinical genetics, particularly for variant pathogenicity assessment and clinical significance classification.
- **Model type:** BERT-based transformer for sequence classification
- **Language(s):** English (biomedical/clinical domain)
- **License:** Apache 2.0
- **Finetuned from model:** dmis-lab/biobert-large-cased-v1.1
### Model Sources
- **Repository:** [Your GitHub Repository]
- **Base Model:** [BioBERT-Large](https://huggingface.co/dmis-lab/biobert-large-cased-v1.1)
- **Training Data:** ClinVar database submissions text
## Uses
### Direct Use
This model is designed for:
- **Variant pathogenicity classification:** Classifying genetic variants as pathogenic/likely pathogenic (P/LP), benign/likely benign (B/LB), or of uncertain significance (VUS)
- **Clinical interpretation analysis:** Understanding and categorizing clinical variant descriptions
- **Biomedical text classification:** General classification tasks in the clinical genetics domain
## How to Get Started with the Model
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
# Load model and tokenizer
tokenizer = AutoTokenizer.from_pretrained("weijiang99/clinvarbert")
model = AutoModelForSequenceClassification.from_pretrained("weijiang99/clinvarbert")
# Example usage
text = "This missense variant in exon 5 of the BRCA1 gene has been observed in multiple families with breast cancer."
inputs = tokenizer(text, return_tensors="pt", truncation=True, padding=True)
with torch.no_grad():
    outputs = model(**inputs)
    predictions = torch.nn.functional.softmax(outputs.logits, dim=-1)

# Get predicted class
predicted_class = torch.argmax(predictions, dim=-1)
```
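The index-to-label mapping depends on the fine-tuned head; a hedged follow-up, assuming the three-way scheme described under Direct Use (verify against `model.config.id2label`):
```python
# The authoritative mapping lives in model.config.id2label; a P/LP, B/LB, VUS
# scheme is an assumption based on the Direct Use section, not taken from the repo.
id2label = model.config.id2label
print(id2label[predicted_class.item()])
```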
|
VoilaRaj/81_b_qL9xTD
|
VoilaRaj
| 2025-08-20T20:39:59Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-08-20T20:36:04Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
roeker/blockassist-bc-quick_wiry_owl_1755722050
|
roeker
| 2025-08-20T20:34:57Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"quick wiry owl",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T20:34:49Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- quick wiry owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
eakaraman/MyGemmaNPC
|
eakaraman
| 2025-08-20T20:34:00Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"gemma3_text",
"text-generation",
"generated_from_trainer",
"trl",
"sft",
"conversational",
"base_model:google/gemma-3-270m-it",
"base_model:finetune:google/gemma-3-270m-it",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-20T20:30:05Z |
---
base_model: google/gemma-3-270m-it
library_name: transformers
model_name: MyGemmaNPC
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for MyGemmaNPC
This model is a fine-tuned version of [google/gemma-3-270m-it](https://huggingface.co/google/gemma-3-270m-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="eakaraman/MyGemmaNPC", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.2
- Pytorch: 2.8.0+cu126
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
safe-challenge/safe-video-example-submission
|
safe-challenge
| 2025-08-20T20:26:21Z | 0 | 0 | null |
[
"video-classification",
"region:us"
] |
video-classification
| 2025-06-20T16:45:28Z |
---
pipeline_tag: video-classification
---
# SAFE Video Challenge Example Submission
The key requirement is to have a `script.py` file in the top-level directory of the repo, and optionally a `requirements.txt` file.
For more details, see: https://safe-video-2025.dsri.org/#-model-submission
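A minimal placeholder sketch of the expected repo layout (the function name and signature below are hypothetical; the real entry-point contract is defined in the linked submission docs):
```python
# script.py — hypothetical skeleton for illustration only; consult the
# submission docs above for the interface your script must actually implement.
def classify_video(video_path: str) -> dict:
    # placeholder logic
    return {"label": "generated", "score": 0.5}

if __name__ == "__main__":
    print(classify_video("example.mp4"))
```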
|
mang3dd/blockassist-bc-tangled_slithering_alligator_1755719656
|
mang3dd
| 2025-08-20T20:19:45Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"tangled slithering alligator",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T20:19:42Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- tangled slithering alligator
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
rbelanec/train_cola_1755694493
|
rbelanec
| 2025-08-20T19:51:51Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"llama-factory",
"prefix-tuning",
"generated_from_trainer",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct",
"license:llama3",
"region:us"
] | null | 2025-08-20T18:53:28Z |
---
library_name: peft
license: llama3
base_model: meta-llama/Meta-Llama-3-8B-Instruct
tags:
- llama-factory
- prefix-tuning
- generated_from_trainer
model-index:
- name: train_cola_1755694493
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# train_cola_1755694493
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the cola dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3498
- Num Input Tokens Seen: 3465288
## Model description
More information needed
## Intended uses & limitations
More information needed
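A minimal, hedged loading sketch for the prefix-tuning adapter (assumes access to the gated Meta-Llama-3-8B-Instruct base model; the CoLA-style prompt below is illustrative, not taken from the training setup):
```python
from transformers import AutoTokenizer
from peft import AutoPeftModelForCausalLM

# Loads the adapter together with the base model recorded in its adapter config.
model = AutoPeftModelForCausalLM.from_pretrained("rbelanec/train_cola_1755694493")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")

prompt = "Is this sentence grammatically acceptable? 'The boy quickly the ball.'"
inputs = tokenizer(prompt, return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=8)[0], skip_special_tokens=True))
```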
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 123
- optimizer: adamw_torch (betas=(0.9, 0.999), epsilon=1e-08, no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Input Tokens Seen |
|:-------------:|:-----:|:-----:|:---------------:|:-----------------:|
| 0.2119 | 0.5 | 1924 | 0.2508 | 173872 |
| 0.1252 | 1.0 | 3848 | 0.2795 | 346872 |
| 0.2905 | 1.5 | 5772 | 0.2591 | 520296 |
| 0.31 | 2.0 | 7696 | 0.2402 | 693752 |
| 0.243 | 2.5 | 9620 | 0.2488 | 867416 |
| 0.2176 | 3.0 | 11544 | 0.2401 | 1040128 |
| 0.2172 | 3.5 | 13468 | 0.2428 | 1212976 |
| 0.2667 | 4.0 | 15392 | 0.2426 | 1386696 |
| 0.2669 | 4.5 | 17316 | 0.2381 | 1559896 |
| 0.2104 | 5.0 | 19240 | 0.2482 | 1733072 |
| 0.2037 | 5.5 | 21164 | 0.2389 | 1906160 |
| 0.1723 | 6.0 | 23088 | 0.2377 | 2079640 |
| 0.147 | 6.5 | 25012 | 0.2382 | 2253000 |
| 0.3044 | 7.0 | 26936 | 0.2424 | 2425920 |
| 0.3173 | 7.5 | 28860 | 0.2561 | 2598960 |
| 0.2224 | 8.0 | 30784 | 0.2512 | 2772144 |
| 0.1814 | 8.5 | 32708 | 0.3283 | 2944864 |
| 0.2271 | 9.0 | 34632 | 0.3048 | 3118472 |
| 0.1103 | 9.5 | 36556 | 0.3464 | 3291720 |
| 0.2212 | 10.0 | 38480 | 0.3498 | 3465288 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.8.0+cu128
- Datasets 3.6.0
- Tokenizers 0.21.1
|
Marcusmateo/Hashir_distilBERT_v1.7
|
Marcusmateo
| 2025-08-20T19:47:33Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"distilbert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-20T19:47:21Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
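Since the author has not yet filled this section in, here is a minimal, hedged sketch based only on the repo's `distilbert` text-classification tags; the label set is undocumented, so inspect the returned labels yourself:
```python
from transformers import pipeline

# Assumption: a standard sequence-classification head, per the repo tags.
clf = pipeline("text-classification", model="Marcusmateo/Hashir_distilBERT_v1.7")
print(clf("Example input sentence."))  # label names come from model.config.id2label
```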
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
koloni/blockassist-bc-deadly_graceful_stingray_1755716248
|
koloni
| 2025-08-20T19:22:36Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"deadly graceful stingray",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T19:22:32Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- deadly graceful stingray
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
canoplos112/blockassist-bc-yapping_sleek_squirrel_1755717033
|
canoplos112
| 2025-08-20T19:12:26Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"yapping sleek squirrel",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T19:11:08Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- yapping sleek squirrel
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
olga-vizcaino-video-infidelidad-colombia/Ver.Olga.Vizcaino.video.infidelidad.en.Colombia.viral.en.Twitter.y.Telegram
|
olga-vizcaino-video-infidelidad-colombia
| 2025-08-20T19:06:58Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-20T19:04:45Z |
On social media, thousands of users are searching for the Olga Vizcaino video that has gone viral in Colombia. The clip shows the woman from Santa Marta involved in an infidelity case with Adrián Villar, husband of fitness trainer Yoselin Mora, who is also pregnant. The case has sparked intense debate on platforms such as Facebook, TikTok, and YouTube, where interviews and reactions from those involved have circulated.
What happened between Olga Vizcaino, Adrián Villar, and Yoselin Mora?
The story began when Yoselin Mora, Adrián Villar's partner, posted screenshots and photos on Facebook that, according to her, proved her husband's extramarital relationship with Olga Vizcaíno. In those posts, Mora accused Olga of "getting involved with a married man" and of not caring that his wife was expecting a child.
|
sahil3112/my-awesome-model
|
sahil3112
| 2025-08-20T18:56:43Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"feature-extraction",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2025-08-20T18:56:22Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
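Since the author has not yet filled this section in, here is a minimal, hedged sketch based only on the repo's `bert` feature-extraction tags:
```python
from transformers import pipeline

# Assumption: a plain BERT encoder used for embeddings, per the repo tags.
extractor = pipeline("feature-extraction", model="sahil3112/my-awesome-model")
embedding = extractor("Example input sentence.")
print(len(embedding[0]), len(embedding[0][0]))  # number of tokens x hidden size
```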
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
AnonymousCS/xlmr_immigration_combo21_2
|
AnonymousCS
| 2025-08-20T18:01:01Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-large",
"base_model:finetune:FacebookAI/xlm-roberta-large",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-20T17:57:21Z |
---
library_name: transformers
license: mit
base_model: FacebookAI/xlm-roberta-large
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: xlmr_immigration_combo21_2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlmr_immigration_combo21_2
This model is a fine-tuned version of [FacebookAI/xlm-roberta-large](https://huggingface.co/FacebookAI/xlm-roberta-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2842
- Accuracy: 0.9242
- 1-f1: 0.8850
- 1-recall: 0.8764
- 1-precision: 0.8937
- Balanced Acc: 0.9122
## Model description
More information needed
## Intended uses & limitations
More information needed
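A minimal, hedged usage sketch (the label semantics behind the 1-f1/1-recall metrics above are undocumented, so check `model.config.id2label`):
```python
from transformers import pipeline

# Assumption: a binary immigration-related text classifier, inferred from the metrics above.
clf = pipeline("text-classification", model="AnonymousCS/xlmr_immigration_combo21_2")
print(clf("Immigration policy should be reformed."))
```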
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: adamw_torch_fused (betas=(0.9, 0.999), epsilon=1e-08, no additional optimizer arguments)
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | 1-f1 | 1-recall | 1-precision | Balanced Acc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:--------:|:-----------:|:------------:|
| 0.2109 | 1.0 | 25 | 0.2585 | 0.9075 | 0.8662 | 0.8996 | 0.8351 | 0.9055 |
| 0.1807 | 2.0 | 50 | 0.2331 | 0.9267 | 0.8889 | 0.8803 | 0.8976 | 0.9151 |
| 0.0668 | 3.0 | 75 | 0.2858 | 0.9165 | 0.8748 | 0.8764 | 0.8731 | 0.9064 |
| 0.1601 | 4.0 | 100 | 0.2842 | 0.9242 | 0.8850 | 0.8764 | 0.8937 | 0.9122 |
### Framework versions
- Transformers 4.56.0.dev0
- Pytorch 2.8.0+cu126
- Datasets 4.0.0
- Tokenizers 0.21.4
|
Hagrass/LLama3-3.2-instruct-trained
|
Hagrass
| 2025-08-20T17:52:07Z | 0 | 0 | null |
[
"safetensors",
"llama",
"arabic",
"ar",
"base_model:meta-llama/Llama-3.2-3B-Instruct",
"base_model:finetune:meta-llama/Llama-3.2-3B-Instruct",
"license:llama3.2",
"region:us"
] | null | 2025-08-20T16:50:57Z |
---
license: llama3.2
language:
- ar
base_model:
- meta-llama/Llama-3.2-3B-Instruct
tags:
- arabic
---
This model is built upon Meta-Llama 3.2 Instruct (3B parameters) and extended through supervised fine-tuning on a large-scale bilingual dataset of approximately 2 million entries. The training corpus combines the ToMe dataset, which offers diverse instruction–response pairs and conversational contexts, with the Arabic Wikipedia dataset, which provides high-quality factual content and rich coverage of knowledge in Arabic. This combination was chosen to balance instruction-following ability with knowledge grounding, especially in domains where Arabic resources are often underrepresented.
During supervised fine-tuning, the model was optimized to better understand natural instructions, generate more coherent and contextually accurate responses, and handle a wide range of tasks spanning reasoning, summarization, and factual question answering. The inclusion of Arabic Wikipedia allows the model to provide stronger support for Arabic-language queries, enabling it to handle both monolingual Arabic tasks and mixed bilingual prompts more effectively than the base Llama 3.2 Instruct model.
The resulting model is well-suited for general-purpose instruction following, with a particular emphasis on Arabic fluency and comprehension. It is expected to be useful in applications such as educational tools, knowledge assistants, conversational agents, and research systems where instruction compliance and multilingual support are critical. While the model shows improved reliability in following prompts and generating informative content, users should remain aware of potential limitations, including biases inherited from the training data and the possibility of occasional hallucinations in factual outputs.
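A minimal, hedged usage sketch (assumes the base model's Llama 3.2 chat template was preserved during fine-tuning):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Hagrass/LLama3-3.2-instruct-trained"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

# Arabic prompt: "Briefly explain the concept of machine learning."
messages = [{"role": "user", "content": "اشرح مفهوم التعلم الآلي باختصار."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(inputs, max_new_tokens=128)[0], skip_special_tokens=True))
```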
|
MattBou00/llama-3-2-1b-detox_v1f-checkpoint-epoch-100
|
MattBou00
| 2025-08-20T17:32:09Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"ppo",
"reinforcement-learning",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
reinforcement-learning
| 2025-08-20T00:35:43Z |
---
license: apache-2.0
library_name: transformers
tags:
- trl
- ppo
- transformers
- reinforcement-learning
---
# TRL Model
This is a [TRL language model](https://github.com/huggingface/trl) that has been fine-tuned with reinforcement learning to
guide the model outputs according to a value, function, or human feedback. The model can be used for text generation.
## Usage
To use this model for inference, first install the TRL library:
```bash
python -m pip install trl
```
You can then generate text as follows:
```python
from transformers import pipeline
generator = pipeline("text-generation", model="MattBou00/llama-3-2-1b-detox_v1f-checkpoint-epoch-100")
outputs = generator("Hello, my llama is cute")
```
If you want to use the model for training or to obtain the outputs from the value head, load the model as follows:
```python
from transformers import AutoTokenizer
from trl import AutoModelForCausalLMWithValueHead
tokenizer = AutoTokenizer.from_pretrained("MattBou00/llama-3-2-1b-detox_v1f-checkpoint-epoch-100")
model = AutoModelForCausalLMWithValueHead.from_pretrained("MattBou00/llama-3-2-1b-detox_v1f-checkpoint-epoch-100")
inputs = tokenizer("Hello, my llama is cute", return_tensors="pt")
outputs = model(**inputs, labels=inputs["input_ids"])
```
|
Muapi/black-spider-man-bodysuit-cosplay-il-flux
|
Muapi
| 2025-08-20T17:23:43Z | 0 | 0 | null |
[
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-20T17:22:49Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# Black Spider-Man Bodysuit Cosplay [IL+Flux]

**Base model**: Flux.1 D
**Trained words**: wearing a black SymbioteSuit
## 🧠 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
"prompt": "masterpiece, best quality, 1girl, looking at viewer",
"model_id": [{"model": "civitai:701263@784643", "weight": 1.0}],
"width": 1024,
"height": 1024,
"num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
|
sandhyavs/dusty_3cam_52_act
|
sandhyavs
| 2025-08-20T17:15:29Z | 0 | 0 |
lerobot
|
[
"lerobot",
"safetensors",
"robotics",
"act",
"dataset:sandhyavs/dusty-3cam-52-copy",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] |
robotics
| 2025-08-20T17:15:14Z |
---
datasets: sandhyavs/dusty-3cam-52-copy
library_name: lerobot
license: apache-2.0
model_name: act
pipeline_tag: robotics
tags:
- robotics
- act
- lerobot
---
# Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high success rates.
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
---
## How to Get Started with the Model
For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is the short version of how to train and run inference/evaluation:
### Train from scratch
```bash
python -m lerobot.scripts.train \
--dataset.repo_id=${HF_USER}/<dataset> \
--policy.type=act \
--output_dir=outputs/train/<desired_policy_repo_id> \
--job_name=lerobot_training \
--policy.device=cuda \
  --policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
--wandb.enable=true
```
*Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`.*
### Evaluate the policy/run inference
```bash
python -m lerobot.record \
--robot.type=so100_follower \
--dataset.repo_id=<hf_user>/eval_<dataset> \
--policy.path=<hf_user>/<desired_policy_repo_id> \
--episodes=10
```
Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.
---
## Model Details
* **License:** apache-2.0
|
youuotty/blockassist-bc-omnivorous_squeaky_bear_1755709950
|
youuotty
| 2025-08-20T17:13:14Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"omnivorous squeaky bear",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T17:12:33Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- omnivorous squeaky bear
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
roeker/blockassist-bc-quick_wiry_owl_1755709760
|
roeker
| 2025-08-20T17:10:37Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"quick wiry owl",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T17:09:59Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- quick wiry owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Trungdjoon/esg-visobert_run_1
|
Trungdjoon
| 2025-08-20T16:16:44Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"xlm-roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-20T16:16:09Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
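Since the author has not yet filled this section in, here is a minimal, hedged sketch based only on the repo's `xlm-roberta` text-classification tags; the ESG label scheme is undocumented, so check `model.config.id2label`:
```python
from transformers import pipeline

# Assumption: an ESG text classifier for Vietnamese, inferred from the repo name and tags.
clf = pipeline("text-classification", model="Trungdjoon/esg-visobert_run_1")
print(clf("Doanh nghiệp cam kết giảm phát thải carbon."))  # "The company commits to cutting carbon emissions."
```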
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Vasya777/blockassist-bc-lumbering_enormous_sloth_1755700612
|
Vasya777
| 2025-08-20T14:37:55Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"lumbering enormous sloth",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T14:37:27Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- lumbering enormous sloth
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
youuotty/blockassist-bc-furry_reptilian_flamingo_1755700198
|
youuotty
| 2025-08-20T14:30:06Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"furry reptilian flamingo",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T14:29:58Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- furry reptilian flamingo
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|