Dataset schema (column types and value ranges):

| Column | Type | Range |
|:--|:--|:--|
| modelId | string | length 5 to 139 |
| author | string | length 2 to 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 to 2025-09-12 12:31:00 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string | 555 classes |
| tags | list | length 1 to 4.05k |
| pipeline_tag | string | 55 classes |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 to 2025-09-12 12:28:53 |
| card | string | length 11 to 1.01M |

Each record below gives one metadata line in the order modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt |, followed by the card contents.
tushar772/mini_ads.py | tushar772 | 2025-09-12T06:51:35Z | 0 | 0 | null | ["license:apache-2.0", "region:us"] | null | 2025-09-12T06:51:35Z |
---
license: apache-2.0
---
| LandCruiser/sn21_omg3_1209_3 | LandCruiser | 2025-09-12T06:49:28Z | 0 | 0 | null | ["safetensors", "any-to-any", "omega", "omegalabs", "bittensor", "agi", "license:mit", "region:us"] | any-to-any | 2025-09-12T06:38:31Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
| LandCruiser/sn21_omg3_1209_1 | LandCruiser | 2025-09-12T06:49:12Z | 0 | 0 | null | ["safetensors", "any-to-any", "omega", "omegalabs", "bittensor", "agi", "license:mit", "region:us"] | any-to-any | 2025-09-12T06:38:26Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
| Adanato/Llama-3.2-1B-Instruct-high_openthoughts_1k | Adanato | 2025-09-12T06:48:34Z | 0 | 0 | transformers | ["transformers", "safetensors", "llama", "text-generation", "fyksft", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-09-12T06:46:55Z |
---
library_name: transformers
tags:
- fyksft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
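Since the card leaves this section blank, here is a minimal sketch of one plausible starting point, assuming this repository is a standard 🤗 transformers causal-LM checkpoint (the repo id comes from the listing above; the prompt and generation settings are purely illustrative):

```python
# Hedged sketch: treat the checkpoint as an ordinary transformers text-generation pipeline.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="Adanato/Llama-3.2-1B-Instruct-high_openthoughts_1k",  # repo id from the listing above
    torch_dtype="auto",
    device_map="auto",
)

# The "conversational" tag suggests a chat template is available; a plain prompt also works.
prompt = "Explain LoRA fine-tuning in two sentences."
print(generator(prompt, max_new_tokens=128)[0]["generated_text"])
```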
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| LarryAIDraw/SchwarzPDXL_Lora__v1_0 | LarryAIDraw | 2025-09-12T06:48:00Z | 0 | 0 | null | ["license:creativeml-openrail-m", "region:us"] | null | 2025-09-12T06:47:36Z |
---
license: creativeml-openrail-m
---
| mradermacher/Qwen3-SG1-256k-ctx-6B-i1-GGUF | mradermacher | 2025-09-12T06:47:26Z | 0 | 0 | transformers | ["transformers", "gguf", "programming", "code generation", "code", "coding", "coder", "chat", "brainstorm", "qwen", "qwen3", "qwencoder", "brainstorm 20x", "creative", "all uses cases", "Jan-V1", "Stargate", "SG1", "horror", "science fiction", "fantasy", "finetune", "thinking", "reasoning", "unsloth", "en", "base_model:DavidAU/Qwen3-SG1-256k-ctx-6B", "base_model:quantized:DavidAU/Qwen3-SG1-256k-ctx-6B", "license:apache-2.0", "endpoints_compatible", "region:us", "imatrix", "conversational"] | null | 2025-09-12T05:48:42Z |
---
base_model: DavidAU/Qwen3-SG1-256k-ctx-6B
language:
- en
library_name: transformers
license: apache-2.0
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- programming
- code generation
- code
- coding
- coder
- chat
- code
- chat
- brainstorm
- qwen
- qwen3
- qwencoder
- brainstorm 20x
- creative
- all uses cases
- Jan-V1
- Stargate
- SG1
- horror
- science fiction
- fantasy
- finetune
- thinking
- reasoning
- unsloth
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K_M Q4_0 IQ3_XS Q4_1 IQ3_S -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
weighted/imatrix quants of https://huggingface.co/DavidAU/Qwen3-SG1-256k-ctx-6B
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Qwen3-SG1-256k-ctx-6B-i1-GGUF).***
static quants are available at https://huggingface.co/mradermacher/Qwen3-SG1-256k-ctx-6B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
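As an unofficial starting point, one way to run a single quant from Python is via `llama-cpp-python` together with `huggingface_hub`; neither library is mentioned in this card, and the Q4_K_M pick below is only an example from the table that follows:

```python
# Hedged sketch: download one quant file and run it locally with llama-cpp-python.
# Requires: pip install huggingface_hub llama-cpp-python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# The filename must match one of the quants listed in the table below.
gguf_path = hf_hub_download(
    repo_id="mradermacher/Qwen3-SG1-256k-ctx-6B-i1-GGUF",
    filename="Qwen3-SG1-256k-ctx-6B.i1-Q4_K_M.gguf",
)

llm = Llama(model_path=gguf_path, n_ctx=4096)  # context length is an illustrative choice
out = llm("Write a haiku about stargates.", max_tokens=64)
print(out["choices"][0]["text"])
```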
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen3-SG1-256k-ctx-6B-i1-GGUF/resolve/main/Qwen3-SG1-256k-ctx-6B.imatrix.gguf) | imatrix | 0.1 | imatrix file (for creating your own quants) |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-SG1-256k-ctx-6B-i1-GGUF/resolve/main/Qwen3-SG1-256k-ctx-6B.i1-IQ1_S.gguf) | i1-IQ1_S | 1.7 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-SG1-256k-ctx-6B-i1-GGUF/resolve/main/Qwen3-SG1-256k-ctx-6B.i1-IQ1_M.gguf) | i1-IQ1_M | 1.8 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-SG1-256k-ctx-6B-i1-GGUF/resolve/main/Qwen3-SG1-256k-ctx-6B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.0 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-SG1-256k-ctx-6B-i1-GGUF/resolve/main/Qwen3-SG1-256k-ctx-6B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.2 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-SG1-256k-ctx-6B-i1-GGUF/resolve/main/Qwen3-SG1-256k-ctx-6B.i1-IQ2_S.gguf) | i1-IQ2_S | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-SG1-256k-ctx-6B-i1-GGUF/resolve/main/Qwen3-SG1-256k-ctx-6B.i1-IQ2_M.gguf) | i1-IQ2_M | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-SG1-256k-ctx-6B-i1-GGUF/resolve/main/Qwen3-SG1-256k-ctx-6B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 2.4 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-SG1-256k-ctx-6B-i1-GGUF/resolve/main/Qwen3-SG1-256k-ctx-6B.i1-Q2_K.gguf) | i1-Q2_K | 2.6 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-SG1-256k-ctx-6B-i1-GGUF/resolve/main/Qwen3-SG1-256k-ctx-6B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 2.7 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-SG1-256k-ctx-6B-i1-GGUF/resolve/main/Qwen3-SG1-256k-ctx-6B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-SG1-256k-ctx-6B-i1-GGUF/resolve/main/Qwen3-SG1-256k-ctx-6B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.0 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-SG1-256k-ctx-6B-i1-GGUF/resolve/main/Qwen3-SG1-256k-ctx-6B.i1-IQ3_S.gguf) | i1-IQ3_S | 3.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-SG1-256k-ctx-6B-i1-GGUF/resolve/main/Qwen3-SG1-256k-ctx-6B.i1-IQ3_M.gguf) | i1-IQ3_M | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-SG1-256k-ctx-6B-i1-GGUF/resolve/main/Qwen3-SG1-256k-ctx-6B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.3 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-SG1-256k-ctx-6B-i1-GGUF/resolve/main/Qwen3-SG1-256k-ctx-6B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 3.5 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-SG1-256k-ctx-6B-i1-GGUF/resolve/main/Qwen3-SG1-256k-ctx-6B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-SG1-256k-ctx-6B-i1-GGUF/resolve/main/Qwen3-SG1-256k-ctx-6B.i1-Q4_0.gguf) | i1-Q4_0 | 3.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-SG1-256k-ctx-6B-i1-GGUF/resolve/main/Qwen3-SG1-256k-ctx-6B.i1-IQ4_NL.gguf) | i1-IQ4_NL | 3.8 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-SG1-256k-ctx-6B-i1-GGUF/resolve/main/Qwen3-SG1-256k-ctx-6B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 3.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-SG1-256k-ctx-6B-i1-GGUF/resolve/main/Qwen3-SG1-256k-ctx-6B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-SG1-256k-ctx-6B-i1-GGUF/resolve/main/Qwen3-SG1-256k-ctx-6B.i1-Q4_1.gguf) | i1-Q4_1 | 4.1 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-SG1-256k-ctx-6B-i1-GGUF/resolve/main/Qwen3-SG1-256k-ctx-6B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-SG1-256k-ctx-6B-i1-GGUF/resolve/main/Qwen3-SG1-256k-ctx-6B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-SG1-256k-ctx-6B-i1-GGUF/resolve/main/Qwen3-SG1-256k-ctx-6B.i1-Q6_K.gguf) | i1-Q6_K | 5.3 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
| LarryAIDraw/IL_Ade_Agent_Bunny | LarryAIDraw | 2025-09-12T06:44:58Z | 0 | 0 | null | ["license:creativeml-openrail-m", "region:us"] | null | 2025-09-11T08:31:23Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/1932786/ade-agent-bunny-nikke-sdxl-lora-illustrious
| mradermacher/Qwen3-SG1-256k-ctx-6B-GGUF | mradermacher | 2025-09-12T06:44:04Z | 0 | 0 | transformers | ["transformers", "gguf", "programming", "code generation", "code", "coding", "coder", "chat", "brainstorm", "qwen", "qwen3", "qwencoder", "brainstorm 20x", "creative", "all uses cases", "Jan-V1", "Stargate", "SG1", "horror", "science fiction", "fantasy", "finetune", "thinking", "reasoning", "unsloth", "en", "base_model:DavidAU/Qwen3-SG1-256k-ctx-6B", "base_model:quantized:DavidAU/Qwen3-SG1-256k-ctx-6B", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational"] | null | 2025-09-12T05:17:03Z |
---
base_model: DavidAU/Qwen3-SG1-256k-ctx-6B
language:
- en
library_name: transformers
license: apache-2.0
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- programming
- code generation
- code
- coding
- coder
- chat
- code
- chat
- brainstorm
- qwen
- qwen3
- qwencoder
- brainstorm 20x
- creative
- all uses cases
- Jan-V1
- Stargate
- SG1
- horror
- science fiction
- fantasy
- finetune
- thinking
- reasoning
- unsloth
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/DavidAU/Qwen3-SG1-256k-ctx-6B
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Qwen3-SG1-256k-ctx-6B-GGUF).***
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Qwen3-SG1-256k-ctx-6B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
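If you only need to fetch one of the static quants, a small `huggingface_hub` sketch (not part of this card; the Q4_K_S pick is just an example from the "recommended" rows below) looks like this:

```python
# Hedged sketch: list the GGUF files in this repo and download a single quant.
# Requires: pip install huggingface_hub
from huggingface_hub import hf_hub_download, list_repo_files

repo_id = "mradermacher/Qwen3-SG1-256k-ctx-6B-GGUF"

# Print every quant available in the repo (same names as in the table below).
for name in sorted(list_repo_files(repo_id)):
    if name.endswith(".gguf"):
        print(name)

# Download one quant; Q4_K_S is an example pick.
path = hf_hub_download(repo_id=repo_id, filename="Qwen3-SG1-256k-ctx-6B.Q4_K_S.gguf")
print("saved to", path)
```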
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen3-SG1-256k-ctx-6B-GGUF/resolve/main/Qwen3-SG1-256k-ctx-6B.Q2_K.gguf) | Q2_K | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-SG1-256k-ctx-6B-GGUF/resolve/main/Qwen3-SG1-256k-ctx-6B.Q3_K_S.gguf) | Q3_K_S | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-SG1-256k-ctx-6B-GGUF/resolve/main/Qwen3-SG1-256k-ctx-6B.Q3_K_M.gguf) | Q3_K_M | 3.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-SG1-256k-ctx-6B-GGUF/resolve/main/Qwen3-SG1-256k-ctx-6B.Q3_K_L.gguf) | Q3_K_L | 3.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-SG1-256k-ctx-6B-GGUF/resolve/main/Qwen3-SG1-256k-ctx-6B.IQ4_XS.gguf) | IQ4_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-SG1-256k-ctx-6B-GGUF/resolve/main/Qwen3-SG1-256k-ctx-6B.Q4_K_S.gguf) | Q4_K_S | 3.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-SG1-256k-ctx-6B-GGUF/resolve/main/Qwen3-SG1-256k-ctx-6B.Q4_K_M.gguf) | Q4_K_M | 4.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-SG1-256k-ctx-6B-GGUF/resolve/main/Qwen3-SG1-256k-ctx-6B.Q5_K_S.gguf) | Q5_K_S | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-SG1-256k-ctx-6B-GGUF/resolve/main/Qwen3-SG1-256k-ctx-6B.Q5_K_M.gguf) | Q5_K_M | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-SG1-256k-ctx-6B-GGUF/resolve/main/Qwen3-SG1-256k-ctx-6B.Q6_K.gguf) | Q6_K | 5.3 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-SG1-256k-ctx-6B-GGUF/resolve/main/Qwen3-SG1-256k-ctx-6B.Q8_0.gguf) | Q8_0 | 6.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-SG1-256k-ctx-6B-GGUF/resolve/main/Qwen3-SG1-256k-ctx-6B.f16.gguf) | f16 | 12.8 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
| 100Pudoff/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-pensive_large_clam | 100Pudoff | 2025-09-12T06:42:33Z | 166 | 0 | transformers | ["transformers", "safetensors", "qwen2", "text-generation", "rl-swarm", "genrl-swarm", "grpo", "gensyn", "I am pensive_large_clam", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-09-04T09:12:04Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am pensive_large_clam
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
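The card itself gives no snippet, so the following is a minimal sketch, assuming the checkpoint behaves like a standard Qwen2-architecture chat model in 🤗 transformers (the repo id comes from the listing above; the prompt is illustrative):

```python
# Hedged sketch: standard transformers loading path for a Qwen2-style chat checkpoint.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "100Pudoff/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-pensive_large_clam"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, torch_dtype=torch.float32)

messages = [{"role": "user", "content": "What is GRPO in one sentence?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
outputs = model.generate(input_ids, max_new_tokens=64)
# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```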
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| AdvRahul/Axion-Flash-Reasoning-2B | AdvRahul | 2025-09-12T06:40:25Z | 329 | 0 | transformers | ["transformers", "gguf", "llama-cpp", "text-generation", "en", "base_model:nvidia/Nemotron-Research-Reasoning-Qwen-1.5B", "base_model:quantized:nvidia/Nemotron-Research-Reasoning-Qwen-1.5B", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us", "conversational"] | text-generation | 2025-08-24T17:23:34Z |
---
base_model: nvidia/Nemotron-Research-Reasoning-Qwen-1.5B
language:
- en
license: cc-by-nc-4.0
pipeline_tag: text-generation
library_name: transformers
tags:
- llama-cpp
---
# AdvRahul/Axion-Flash-Reasoning-2B-Q8_0-GGUF
This model was built on top of Nemotron-Research-Reasoning-Qwen-1.5B with advanced safety protocols.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo AdvRahul/Axion-Flash-Reasoning-2B-Q8_0-GGUF --hf-file axion-flash-reasoning-2B-Q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo AdvRahul/Axion-Flash-Reasoning-2B-Q8_0-GGUF --hf-file axion-flash-reasoning-2B-Q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo AdvRahul/Axion-Flash-Reasoning-2B-Q8_0-GGUF --hf-file axion-flash-reasoning-2B-Q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo AdvRahul/Axion-Flash-Reasoning-2B-Q8_0-GGUF --hf-file axion-flash-reasoning-2B-Q8_0.gguf -c 2048
```
| stonermay/blockassist-bc-diving_lightfooted_caterpillar_1757658967 | stonermay | 2025-09-12T06:37:17Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "diving lightfooted caterpillar", "arxiv:2504.07091", "region:us"] | null | 2025-09-12T06:37:09Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- diving lightfooted caterpillar
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
| raniero/ax-real-001-repo | raniero | 2025-09-12T06:34:48Z | 0 | 0 | peft | ["peft", "safetensors", "lora", "bittensor", "subnet-56", "gradients", "it", "base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0", "base_model:adapter:TinyLlama/TinyLlama-1.1B-Chat-v1.0", "license:apache-2.0", "region:us"] | null | 2025-09-12T06:34:46Z |
---
language:
- it
license: apache-2.0
library_name: peft
tags: [lora, bittensor, subnet-56, gradients]
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
---
# ARES56 — LoRA adapter
Upload ID: ax-real-001_1757658886
upload_id: unknown_1757404904
Included files:
- `adapter_model.safetensors` — SHA256: `e5a00aa9991ac8a5ee3109844d84a55583bd20572ad3ffcd42792f3c36b183ad`
- `adapter_config.json` — SHA256: `4f39b39f151e0d31a8135b89599746fd2e06285a8594595589d7974f553af441`
- `tokenizer_config.json` — SHA256: `missing`
- `special_tokens_map.json` — SHA256: `missing`
Output generated via Axolotl (CPU / smoke run). No full checkpoint included.
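The card does not show how to attach the adapter; a minimal sketch follows, assuming the standard 🤗 peft flow on the TinyLlama base model declared in the metadata (repo ids from above; prompt and generation settings are illustrative):

```python
# Hedged sketch: attach the LoRA adapter to its declared base model with peft.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"  # base_model from the adapter metadata above
adapter_id = "raniero/ax-real-001-repo"

tokenizer = AutoTokenizer.from_pretrained(base_id)  # the adapter repo ships no tokenizer files
base = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base, adapter_id)

inputs = tokenizer("How does a LoRA adapter work?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```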
| omerbektasss/blockassist-bc-keen_fast_giraffe_1757658832 | omerbektasss | 2025-09-12T06:34:12Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "keen fast giraffe", "arxiv:2504.07091", "region:us"] | null | 2025-09-12T06:34:08Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- keen fast giraffe
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
| skatzR/USER-BGE-M3-ONNX-INT8 | skatzR | 2025-09-12T06:33:05Z | 182 | 0 | null | ["onnx", "xlm-roberta", "quantization", "sentence-embeddings", "semantic-search", "ru", "en", "base_model:deepvk/USER-bge-m3", "base_model:quantized:deepvk/USER-bge-m3", "license:apache-2.0", "region:us"] | null | 2025-09-08T06:26:35Z |
---
license: apache-2.0
base_model:
- deepvk/USER-bge-m3
language:
- ru
- en
tags:
- onnx
- quantization
- sentence-embeddings
- semantic-search
---
# 🧩 DeepVK-USER-BGE-M3 — Quantized ONNX (INT8)
✨ This repository contains a **quantized INT8 ONNX version** of [`deepvk/USER-bge-m3`](https://huggingface.co/deepvk/USER-bge-m3).
It is designed for **fast CPU inference** with [ONNX Runtime](https://onnxruntime.ai/), making it a great choice for **semantic search, embeddings generation, and text similarity** tasks in **Russian** 🇷🇺 and **English** 🇬🇧.
---
## 🔍 Model Card
| Property | Value |
|---------------------|-----------------------------------------------------------------------|
| **Base model** | [`deepvk/USER-bge-m3`](https://huggingface.co/deepvk/USER-bge-m3), [`BAAI/bge-m3`](https://huggingface.co/BAAI/bge-m3) |
| **Quantization** | INT8 (Dynamic) |
| **Format** | ONNX |
| **Libraries** | `transformers`, `onnxruntime`, `optimum`, `sentence-transformers` |
| **Embedding dim** | 1024 |
| **Supported HW** | CPU (optimized for Intel AVX512-VNNI, fallback to AVX2) |
| **License** | Apache-2.0 |
---
## 🚀 Features
- ⚡ **Fast CPU inference** — ONNX + INT8 gives a strong speed-up.
- 📦 **Lightweight** — reduced model size, lower memory footprint.
- 🔄 **Drop-in replacement** — embeddings compatible with the FP32 version.
- 🌍 **Multilingual** — supports Russian 🇷🇺 and English 🇬🇧.
---
## 🧠 Intended Use
**✅ Recommended for:**
- Semantic search & retrieval systems
- Recommendation pipelines
- Text similarity & clustering
- Low-latency CPU deployments
**❌ Not ideal for:**
- Absolute maximum accuracy scenarios (INT8 introduces minor loss)
- GPU-optimized pipelines (prefer FP16/FP32 models instead)
---
## ⚖️ Pros & Cons of Quantized ONNX
**Pros** ✅
- Easy to use (no calibration dataset required).
- Smaller & faster than FP32.
- Works out of the box with ONNX Runtime.
**Cons** ❌
- Slight accuracy drop compared to static quantization.
- AVX512 optimizations only on modern Intel CPUs.
- No GPU acceleration in this export.
---
## 📊 Benchmark
| Metric | Value |
|------------------------------|-------------- |
| Avg cosine similarity (vs FP32) | ~0.988 |
| Median cosine similarity | ~0.988 |
| Orig model time (s) | 0.7504 |
| Quant model time (s) | 0.3539 |
| Inference speed | ~2× faster |
| Model size (MB) | 347.5 |
---
## 📂 Files
model_quantized.onnx — quantized model
tokenizer.json, vocab.txt, special_tokens_map.json — tokenizer
config.json — model config
---
## 🧩 Examples
You can try the model directly in **Google Colab**:
[](https://colab.research.google.com/#fileId=https%3A//huggingface.co/skatzR/USER-BGE-M3-ONNX-INT8/blob/main/notebooks/TEST_USER-BGE-M3-ONNX-INT8.ipynb)
This notebook demonstrates:
- Loading the **original FP32 model** [`deepvk/USER-bge-m3`](https://huggingface.co/deepvk/USER-bge-m3)
- Loading the **quantized INT8 ONNX model** [`skatzR/USER-BGE-M3-ONNX-INT8`](https://huggingface.co/skatzR/USER-BGE-M3-ONNX-INT8)
- Comparing **quality (cosine similarity)** and **inference speed** side by side
You can try this model with ready-to-use scripts in the `examples` folder:
- [`quantmodel.py`](./examples/quantmodel.py) — universal Python module for loading and encoding texts with the quantized ONNX model.
- [`app-console.py`](./examples/app-console.py) — console script to compare FP32 vs INT8 embeddings (cosine similarity + inference time).
- [`app-streamlit.py`](./examples/app-streamlit.py) — interactive demo with Streamlit.
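For orientation, here is a minimal sketch of running the quantized model directly with `onnxruntime`. The CLS-token pooling and the input/output layout are assumptions based on the usual XLM-RoBERTa ONNX export; check them against the repository's own `examples/quantmodel.py` before relying on this:

```python
# Hedged sketch: encode sentences with the INT8 ONNX model and compare them by cosine similarity.
import numpy as np
import onnxruntime as ort
from huggingface_hub import hf_hub_download
from transformers import AutoTokenizer

repo_id = "skatzR/USER-BGE-M3-ONNX-INT8"
model_path = hf_hub_download(repo_id=repo_id, filename="model_quantized.onnx")
tokenizer = AutoTokenizer.from_pretrained(repo_id)
session = ort.InferenceSession(model_path, providers=["CPUExecutionProvider"])

def encode(texts):
    enc = tokenizer(texts, padding=True, truncation=True, return_tensors="np")
    # Feed only the inputs the graph actually declares (names assumed to match the tokenizer outputs).
    wanted = {i.name for i in session.get_inputs()}
    hidden = session.run(None, {k: v for k, v in enc.items() if k in wanted})[0]
    cls = hidden[:, 0]  # CLS pooling, as in bge-m3 dense embeddings (assumption)
    return cls / np.linalg.norm(cls, axis=1, keepdims=True)

a, b = encode(["Как пройти в библиотеку?", "How do I get to the library?"])
print(float(a @ b))  # cosine similarity of the normalized embeddings
```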
| Sunik93/gemma-3-1b-pt-MED | Sunik93 | 2025-09-12T06:30:51Z | 0 | 0 | transformers | ["transformers", "safetensors", "gemma3_text", "text-generation", "trl", "sft", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-09-12T06:30:10Z |
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| aruboi/llama-32-11b-vlm_peft_output33 | aruboi | 2025-09-12T06:30:26Z | 3 | 0 | peft | ["peft", "safetensors", "generated_from_trainer", "base_model:meta-llama/Llama-3.2-11B-Vision", "base_model:adapter:meta-llama/Llama-3.2-11B-Vision", "license:llama3.2", "region:us"] | null | 2025-09-09T10:59:54Z |
---
library_name: peft
license: llama3.2
base_model: meta-llama/Llama-3.2-11B-Vision
tags:
- generated_from_trainer
model-index:
- name: llama-32-11b-vlm_peft_output33
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama-32-11b-vlm_peft_output33
This model is a fine-tuned version of [meta-llama/Llama-3.2-11B-Vision](https://huggingface.co/meta-llama/Llama-3.2-11B-Vision) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3393
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_HF with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2
- num_epochs: 3
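For readers who want to reproduce this setup, the values listed above map onto 🤗 `TrainingArguments` roughly as follows (a sketch only; the output path and any option not listed above are placeholders):

```python
# Hedged sketch: the hyperparameters above expressed as transformers TrainingArguments.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="llama-32-11b-vlm_peft_output33",  # placeholder path
    learning_rate=2e-5,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_hf",                 # OptimizerNames.ADAMW_HF, default betas/epsilon
    lr_scheduler_type="linear",
    warmup_steps=2,
    num_train_epochs=3,
)
```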
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.873 | 0.1 | 5 | 1.6084 |
| 1.6055 | 0.2 | 10 | 1.5360 |
| 2.0931 | 0.3 | 15 | 1.4717 |
| 1.3995 | 0.4 | 20 | 1.4390 |
| 1.4799 | 0.5 | 25 | 1.4170 |
| 1.2999 | 0.6 | 30 | 1.3999 |
| 1.6356 | 0.7 | 35 | 1.3855 |
| 1.3253 | 0.8 | 40 | 1.3713 |
| 1.4132 | 0.9 | 45 | 1.3630 |
| 1.4824 | 1.0 | 50 | 1.3557 |
| 1.0958 | 1.1 | 55 | 1.3478 |
| 1.3437 | 1.2 | 60 | 1.3490 |
| 1.3517 | 1.3 | 65 | 1.3459 |
| 1.0089 | 1.4 | 70 | 1.3415 |
| 1.1556 | 1.5 | 75 | 1.3369 |
| 1.3741 | 1.6 | 80 | 1.3393 |
### Framework versions
- PEFT 0.10.0
- Transformers 4.47.0
- Pytorch 2.7.0a0+7c8ec84dab.nv25.03
- Datasets 3.5.0
- Tokenizers 0.21.1
| mgparkzone/gemma-3-1b-pt-MED | mgparkzone | 2025-09-12T06:29:26Z | 0 | 0 | transformers | ["transformers", "safetensors", "gemma3_text", "text-generation", "trl", "sft", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-09-12T06:28:55Z |
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| GANGSSSSSSSS/gemma-3-1b-pt-MED | GANGSSSSSSSS | 2025-09-12T06:29:06Z | 0 | 0 | transformers | ["transformers", "safetensors", "gemma3_text", "text-generation", "trl", "sft", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-09-12T06:28:26Z |
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| deadman44/Wan2.2_T2i_T2v_LoRA | deadman44 | 2025-09-12T06:28:41Z | 0 | 16 | null | ["text-to-image", "t2i", "wan video", "safetensors", "text-to-video", "en", "license:apache-2.0", "region:us"] | text-to-video | 2025-07-27T00:45:26Z |
---
license: apache-2.0
pipeline_tag: text-to-video
language:
- en
tags:
- text-to-image
- t2i
- wan video
- safetensors
---
<style>
.title{
font-size: 2.5em;
letter-spacing: 0.01em;
padding: 0.5em 0;
}
.thumbwidth{
max-width: 180px;
}
.font_red{
color:red;
}
.font_blue{
color:blue;
}
.font_grey{
color: #aaaaaa;
}
</style>
# models
- [Wan2.2_myjc_v02](#myjc) (<span class="font_red">Wan2.2 LoRA</span>):2025-09-04<br />
- [Wan2.2_myob_v01](#myob) (<span class="font_red">Wan2.2 LoRA</span>):2025-08-31<br />
- [Wan2.2_myjd_v01](#myjd) (<span class="font_red">Wan2.2 LoRA</span>):2025-08-26<br />
- [Wan2.2_myjy_v01](#myjy) (<span class="font_red">Wan2.2 LoRA</span>):2025-08-21<br />
- [Wan2.2_myjk_v01](#myjk) (<span class="font_red">Wan2.2 LoRA</span>):2025-08-18<br />
- [Wan2.2_myjs_v01](#myjs) (<span class="font_red">Wan2.2 LoRA</span>):2025-08-11<br />
- Add [Workflow page](https://huggingface.co/deadman44/Wan2.2_Workflow_for_myxx_series_LoRA): 2025-08-04<br />
---
<br>
- Workflow
### - [Sample Workflow for myxx series LoRA](https://huggingface.co/deadman44/Wan2.2_Workflow_for_myxx_series_LoRA)<br>
<br>
---
<a id="myob"></a>
<h1 class="title">
<span>[Wan2.2] lora_wan_myob_v01</span>
</h1>
-<span class="font_blue">Wan video 2.2 for t2i, t2v for ComfyUI</span><br/>
-<span class="font_blue">The optimal resolution is 768 x 1280, 1024 x 1536.(T2i), 512x768 (T2v)</span><br/>
-<span class="font_blue">natural Japanese OB face</span><br/>
<br/>
<br/>
# Download
[Download: myob_High_v01](https://huggingface.co/deadman44/Wan2.2_T2i_T2v_LoRA/resolve/main/lora_wan2.2_myob_High_v01.safetensors?download=true) <br />
[Download: myob_Low_v01](https://huggingface.co/deadman44/Wan2.2_T2i_T2v_LoRA/resolve/main/lora_wan2.2_myob_Low_v01.safetensors?download=true) <br />
<br />
# Trigger
```bash
myob, japanese/european, photorealistic
and 23-30yo
```
<br />
# Sample prompt (v01)
<div style="display: flex; flex-direction: column; align-items: flex-start; gap: 12px;">
<strong>wan2.2 T2v</strong>
<video controls loop style="width:
360px; height: auto; object-fit: contain; border: 1px solid #ccc;">
<source src="https://huggingface.co/deadman44/Wan2.2_T2i_T2v_LoRA/resolve/main/samples/20250831073513_T2V_00002.mp4" type="video/mp4">
Your browser cannot play the video.
</video>
</div>
```bash
myob, japanese,
A Japanese woman, 30 years old, standing in the kitchen and holding a pan.
She wears a white sweater and pink apron.
She has a brown bob hair.
She tilts her head slightly and smiles with closed lips.
A mole is visible on her neck.
She looks at the viewer calmly.
Motion: subtle breathing, head tilt
Style: photorealistic
Camera: medium close-up
Mood: serene
```
<br/>
---
<a id="myjd"></a>
<h1 class="title">
<span>[Wan2.2] lora_wan_myjd_v01</span>
</h1>
-<span class="font_blue">Wan video 2.2 for t2i, t2v for ComfyUI</span><br/>
-<span class="font_blue">The optimal resolution is 768 x 1280, 1024 x 1536.(T2i), 512x768 (T2v)</span><br/>
-<span class="font_blue">natural Japanese JD face</span><br/>
<br/>
<br/>
# Download
[Download: myjd_High_v01](https://huggingface.co/deadman44/Wan2.2_T2i_T2v_LoRA/resolve/main/lora_wan2.2_myjd_High_v01.safetensors?download=true) <br />
[Download: myjd_Low_v01](https://huggingface.co/deadman44/Wan2.2_T2i_T2v_LoRA/resolve/main/lora_wan2.2_myjd_Low_v01.safetensors?download=true) <br />
<br />
# Trigger
```bash
myjd, japanese/european, photorealistic
and 19-22yo
```
<br />
# Sample prompt (v01)
<div style="display: flex; flex-direction: column; align-items: flex-start; gap: 12px;">
<strong>wan2.2 T2v</strong>
<video controls loop style="width:
360px; height: auto; object-fit: contain; border: 1px solid #ccc;">
<source src="https://huggingface.co/deadman44/Wan2.2_T2i_T2v_LoRA/resolve/main/samples/20250826065050_T2V_00002.mp4" type="video/mp4">
Your browser cannot play the video.
</video>
</div>
```bash
22yo, myjd, japanese,
A woman dressed in a maid costume carries coffee on a tray in a café. She has black hair tied in a ponytail and wears a maid headdress.
```
<br/>
---
<a id="myjk"></a>
<h1 class="title">
<span>[Wan2.2] lora_wan_myjk_v01</span>
</h1>
-<span class="font_blue">Wan video 2.2 for t2i, t2v for ComfyUI</span><br/>
-<span class="font_blue">The optimal resolution is 768 x 1280, 1024 x 1536.(T2i), 512x768 (T2v)</span><br/>
-<span class="font_blue">natural Japanese JK face</span><br/>
<br/>
<br/>
# Download
[Download: myjk_High_v01](https://huggingface.co/deadman44/Wan2.2_T2i_T2v_LoRA/resolve/main/lora_wan2.2_myjk_High_v01.safetensors?download=true) <br />
[Download: myjk_Low_v01](https://huggingface.co/deadman44/Wan2.2_T2i_T2v_LoRA/resolve/main/lora_wan2.2_myjk_Low_v01.safetensors?download=true) <br />
<br />
# Trigger
```bash
myjk, japanese/european, photorealistic
and 16-18yo
```
<br />
# Sample prompt (v01)
<strong>wan2.2 T2i generated</strong>
<div style="display: flex; flex-direction: column; align-items: flex-start; gap: 12px; margin-bottom: 32px;">
<strong>T2i</strong>
<a href="https://huggingface.co/deadman44/Wan2.2_T2i_T2v_LoRA/resolve/main/samples/20250818092119_T2I_00001_.jpg" target="_blank">
<img src="https://huggingface.co/deadman44/Wan2.2_T2i_T2v_LoRA/resolve/main/samples/20250818092119_T2I_00001_.jpg"
alt="T2I"
style="width: 360px; height: auto; object-fit: contain; border: 1px solid #ccc;">
</a>
</div>
```bash
18yo, myjk, japanese,
A photorealistic upper-body portrait of a beautiful young woman with long black hair and black eyes, dressed in a school uniform. She is sitting on a stool, smiling with one eye closed in a playful grin, showing her teeth. Her hand is raised gently near her face, and she wears a hair ornament with a black bow. The background is softly blurred, enhancing the cinematic atmosphere. She looks directly at the viewer, evoking a sense of charm and realism.
```
<br/>
<div style="display: flex; flex-direction: column; align-items: flex-start; gap: 12px;">
<strong>wan2.2 T2v</strong>
<video controls loop style="width:
360px; height: auto; object-fit: contain; border: 1px solid #ccc;">
<source src="https://huggingface.co/deadman44/Wan2.2_T2i_T2v_LoRA/resolve/main/samples/20250818093735_T2V_00002.mp4" type="video/mp4">
Your browser cannot play the video.
</video>
</div>
```bash
18yo, myjk, japanese,
A Japanese idol is performing live on a brightly lit concert stage. She is wearing an idol costume with a lace-up flared skirt. She sings and dances energetically, moving across the stage with graceful steps and expressive gestures. The camera follows her with dynamic motion: starting from a low-angle close-up of her smiling face, then pulling back to reveal the full stage with flashing lights and cheering fans. Her long hair flows with her movements, and her outfit sparkles under the spotlights. The scene includes cinematic lighting, fog effects, and smooth camera transitions that emphasize her presence and charm.
```
<br/>
---
<a id="myjc"></a>
<h1 class="title">
<span>[Wan2.2] lora_wan_myjc_v02</span>
</h1>
-<span class="font_blue">Wan video 2.2 for t2i, t2v for ComfyUI</span><br/>
-<span class="font_blue">The optimal resolution is 768 x 1280 or 1024 x 1536 (T2i), 512 x 768 (T2v)</span><br/>
-<span class="font_blue">natural Japanese JC face</span><br/>
<br/>
<br/>
# Download
[Download: myjc_High_v02](https://huggingface.co/deadman44/Wan2.2_T2i_T2v_LoRA/resolve/main/lora_wan2.2_myjc_High_v02.safetensors?download=true) <br />
[Download: myjc_Low_v02](https://huggingface.co/deadman44/Wan2.2_T2i_T2v_LoRA/resolve/main/lora_wan2.2_myjc_Low_v02.safetensors?download=true) <br />
<br />
# Trigger
```bash
myjc, japanese/european, photorealistic
and 13-15yo
```
<br />
# Sample prompt
<strong>wan2.2 T2i generated</strong>
<div style="display: flex; flex-direction: column; align-items: flex-start; gap: 12px; margin-bottom: 32px;">
<strong>T2i (v01)</strong>
<a href="https://huggingface.co/deadman44/Wan2.2_T2i_T2v_LoRA/resolve/main/samples/20250814111852_T2I_00001_.png" target="_blank">
<img src="https://huggingface.co/deadman44/Wan2.2_T2i_T2v_LoRA/resolve/main/samples/20250814111852_T2I_00001_.png"
alt="T2I"
style="width: 360px; height: auto; object-fit: contain; border: 1px solid #ccc;">
</a>
</div>
```bash
15yo, myjc, japanese, photorealistic,
A girl in a school uniform sitting on a seat in a train.
She has black hair with sidelocks.
She is holding a smartphone and looking at it.
```
<br/>
<div style="display: flex; flex-direction: column; align-items: flex-start; gap: 12px;">
<strong>wan2.2 T2v (v02)</strong>
<video controls loop style="width:
360px; height: auto; object-fit: contain; border: 1px solid #ccc;">
<source src="https://huggingface.co/deadman44/Wan2.2_T2i_T2v_LoRA/resolve/main/samples/20250904192015_T2V_00001.mp4" type="video/mp4">
Your browser cannot play the video.
</video>
</div>
```bash
15yo, myjc, japanese,
A realistic video of a girl sitting alone on a swing in a sunny park. She wears a neat school uniform: white collared short-sleeve shirt tucked into a pleated blue suspender skirt, white socks, and black loafers. Her long black hair is styled in twin braids with a blue ribbon tied at the neck. She gently swings back and forth, hands resting between her legs, occasionally looking at the viewer with a calm, closed-mouth expression. The background is a slightly blurry photo-style park with trees, bugs flying past, and a playground visible. Her backpack lies beside the swing. As the swing moves, her skirt sways naturally, and sunlight filters through the leaves above. The scene captures a quiet moment of reflection, with subtle wind moving her hair over her shoulder.
```
<br/>
<div style="display: flex; flex-direction: column; align-items: flex-start; gap: 12px;">
<strong>wan2.2 T2v (v01)</strong>
<video controls loop style="width:
480px; height: auto; object-fit: contain; border: 1px solid #ccc;">
<source src="https://huggingface.co/deadman44/Wan2.2_T2i_T2v_LoRA/resolve/main/samples/20250814112118_T2V_00002.mp4" type="video/mp4">
Your browser cannot play the video.
</video>
</div>
```bash
myjc, japanese, photorealistic,
Close-up portrait of a girl walking along a street.
She has black twintails.
She is wearing a white blouse.
```
<br/>
<div style="display: flex; flex-direction: column; align-items: flex-start; gap: 12px;">
<video controls loop style="width:
480px; height: auto; object-fit: contain; border: 1px solid #ccc;">
<source src="https://huggingface.co/deadman44/Wan2.2_T2i_T2v_LoRA/resolve/main/samples/20250814112156_T2V_00002.mp4" type="video/mp4">
Your browser cannot play the video.
</video>
</div>
```bash
15yo, myjc, japanese, photorealistic,
A girl in a short-sleeved school uniform sitting on a chair in a nighttime classroom.
She has black hair with sidelocks.
She is talking to the camera with a smile.
```
<br/>
---
<a id="myjs"></a>
<h1 class="title">
<span>[Wan2.2] lora_wan_myjs_v01</span>
</h1>
-<span class="font_blue">Wan video 2.2 for t2i, t2v for ComfyUI</span><br/>
-<span class="font_blue">The optimal resolution is 768 x 1280 or 1024 x 1536 (T2i), 512 x 768 (T2v)</span><br/>
-<span class="font_blue">natural Japanese JS face</span><br/>
<br/>
<br/>
# Download
[Download: myjs_High_v01](https://huggingface.co/deadman44/WAN_T2i_LoRA/resolve/main/lora_wan2.2_myjs_High_v01.safetensors?download=true)<br>
[Download: myjs_Low_v01](https://huggingface.co/deadman44/WAN_T2i_LoRA/resolve/main/lora_wan2.2_myjs_Low_v01.safetensors?download=true)<br>
<br />
# Trigger
```bash
(myjsh / myjsm / myjsl), japanese/european, photorealistic
and 6-12yo
```
<br />
# Sample prompt (v01)
<strong>wan2.2 T2i generated</strong>
<div style="display: flex; flex-direction: column; align-items: flex-start; gap: 12px; margin-bottom: 32px;">
<strong>T2i</strong>
<a href="https://huggingface.co/deadman44/WAN_T2i_LoRA/resolve/main/samples/20250811083806_T2I_LastImage_00001_.png?download=true" target="_blank">
<img src="https://huggingface.co/deadman44/WAN_T2i_LoRA/resolve/main/samples/20250811083806_T2I_LastImage_00001_.png"
alt="T2I"
style="width: 360px; height: auto; object-fit: contain; border: 1px solid #ccc;">
</a>
</div>
```bash
myjsh, japanese, photorealistic,
A Japanese girl with shoulder-length black hair, wearing a white textured blouse, standing outdoors in soft sunlight. She gently lifts her hand to brush her hair aside, as a breeze flows through the trees behind her. Her blouse flutters slightly, and her gaze shifts subtly toward the camera.
```
<br/>
<div style="display: flex; flex-direction: column; align-items: flex-start; gap: 12px;">
<strong>wan2.2 T2v</strong>
<video controls loop style="width:
360px; height: auto; object-fit: contain; border: 1px solid #ccc;">
<source src="https://huggingface.co/deadman44/WAN_T2i_LoRA/resolve/main/samples/20250811084132_T2V_00001.mp4" type="video/mp4">
Your browser cannot play the video.
</video>
</div>
```bash
12yo, myjsh, japanese, photorealistic,
A stylish girl posing for a fashion photoshoot in a minimalist studio. She wears a high-fashion outfit with layered textures: a translucent blouse over a structured corset, paired with wide-leg trousers. She shifts her pose gracefully, turning slightly to the side, adjusting her posture with subtle hand movements. Studio lights flash intermittently, casting soft shadows and highlights on her face and outfit. Her expression changes subtly from confident to playful. The camera slowly pans around her, capturing her elegance and motion. Cinematic lighting, fashion editorial style, photorealistic, expressive gesture, shallow depth of field, dynamic motion.
```
<br/>
<div style="display: flex; flex-direction: column; align-items: flex-start; gap: 12px;">
<video controls loop style="width:
360px; height: auto; object-fit: contain; border: 1px solid #ccc;">
<source src="https://huggingface.co/deadman44/WAN_T2i_LoRA/resolve/main/samples/20250811104546_T2V_00002.mp4" type="video/mp4">
Your browser cannot play the video.
</video>
</div>
```bash
9yo, myjsm, japanese, photorealistic,
A girl wearing a white blouse and a pleated skirt with suspenders walks down the crowded school hallway.
She has a black ponytail.
Finally she turns around and smiles.
```
<br/>
<div style="display: flex; flex-direction: column; align-items: flex-start; gap: 12px;">
<video controls loop style="width:
480px; height: auto; object-fit: contain; border: 1px solid #ccc;">
<source src="https://huggingface.co/deadman44/WAN_T2i_LoRA/resolve/main/samples/20250811112548_T2V_00002.mp4" type="video/mp4">
Your browser cannot play the video.
</video>
</div>
```bash
6yo, myjsl, japanese, photorealistic,
Girls are crossing the street with one hand raised as a car waits.
```
<br/>
---
<a id="myjy"></a>
<h1 class="title">
<span>[Wan2.2] lora_wan_myjy_v01</span>
</h1>
-<span class="font_blue">Wan video 2.2 for t2i, t2v for ComfyUI</span><br/>
-<span class="font_blue">The optimal resolution is 768 x 1280 or 1024 x 1536 (T2i), 512 x 768 (T2v)</span><br/>
-<span class="font_blue">natural Japanese JY face</span><br/>
<br/>
<br/>
# Download
[Download: myjy_High_v01](https://huggingface.co/deadman44/Wan2.2_T2i_T2v_LoRA/resolve/main/lora_wan2.2_myjy_High_v01.safetensors?download=true)<br>
[Download: myjy_Low_v01](https://huggingface.co/deadman44/Wan2.2_T2i_T2v_LoRA/resolve/main/lora_wan2.2_myjy_Low_v01.safetensors?download=true)<br>
<br />
# Trigger
```bash
myjy, japanese/european, photorealistic
and 3-5yo
```
<br />
# Sample prompt (v01)
<div style="display: flex; flex-direction: column; align-items: flex-start; gap: 12px;">
<strong>wan2.2 T2v</strong>
<video controls loop style="width:
480px; height: auto; object-fit: contain; border: 1px solid #ccc;">
<source src="https://huggingface.co/deadman44/Wan2.2_T2i_T2v_LoRA/resolve/main/samples/20250821095521_T2V_00002.mp4" type="video/mp4">
Your browser cannot play the video.
</video>
</div>
```bash
myjy, japanese,
A heartwarming indoor scene of three cheerful kindergarten girls clasping their own hands together in playful prayer. They wear colorful long-sleeved uniforms with blunt bangs and varied hairstyles: black hair in twintails, brown short hair, and long hair with a cute hair ornament. One girl holds a picture book with animal illustrations, another giggles softly, and the third looks up with wide, curious eyes. Their fingers are gently interlocked, lips slightly parted in a whisper of joy, and their expressions glow with innocence and wonder. The softly blurred background shows a cozy classroom with pastel decorations, adding warmth and charm to the moment.
```
<br/>
---
|
GodShin/gemma-3-1b-pt-MED
|
GodShin
| 2025-09-12T06:28:38Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma3_text",
"text-generation",
"trl",
"sft",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-12T06:28:03Z |
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Adanato/Llama-3.2-1B-Instruct-high_nemotron_1k
|
Adanato
| 2025-09-12T06:27:40Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"fyksft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-12T06:25:40Z |
---
library_name: transformers
tags:
- fyksft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
sidhantoon/Goldentouch_V3_G17
|
sidhantoon
| 2025-09-12T06:27:26Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-09-12T06:23:35Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
NotoriousH2/gemma-3-1b-pt-MED
|
NotoriousH2
| 2025-09-12T06:26:56Z | 12 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma3_text",
"text-generation",
"trl",
"sft",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-04-02T13:38:10Z |
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
trkbt10/finetuned_model
|
trkbt10
| 2025-09-12T06:25:35Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gpt_oss",
"trl",
"en",
"base_model:unsloth/gpt-oss-20b-unsloth-bnb-4bit",
"base_model:finetune:unsloth/gpt-oss-20b-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-09-12T06:25:30Z |
---
base_model: unsloth/gpt-oss-20b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gpt_oss
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** trkbt10
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gpt-oss-20b-unsloth-bnb-4bit
This gpt_oss model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
shinebear/qwen100_va_agent-Q8_0-GGUF
|
shinebear
| 2025-09-12T06:25:28Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"qwen3",
"llama-cpp",
"gguf-my-repo",
"en",
"base_model:shinebear/qwen100_va_agent",
"base_model:quantized:shinebear/qwen100_va_agent",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-09-12T06:24:52Z |
---
base_model: shinebear/qwen100_va_agent
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- llama-cpp
- gguf-my-repo
license: apache-2.0
language:
- en
---
# shinebear/qwen100_va_agent-Q8_0-GGUF
This model was converted to GGUF format from [`shinebear/qwen100_va_agent`](https://huggingface.co/shinebear/qwen100_va_agent) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/shinebear/qwen100_va_agent) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo shinebear/qwen100_va_agent-Q8_0-GGUF --hf-file qwen100_va_agent-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo shinebear/qwen100_va_agent-Q8_0-GGUF --hf-file qwen100_va_agent-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo shinebear/qwen100_va_agent-Q8_0-GGUF --hf-file qwen100_va_agent-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo shinebear/qwen100_va_agent-Q8_0-GGUF --hf-file qwen100_va_agent-q8_0.gguf -c 2048
```
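As an alternative to the llama.cpp binaries, the same GGUF file can be driven from Python with the `llama-cpp-python` package; here is a minimal sketch, assuming that package is installed:
```python
from llama_cpp import Llama

# Pull the Q8_0 GGUF straight from the Hub and run a short completion.
llm = Llama.from_pretrained(
    repo_id="shinebear/qwen100_va_agent-Q8_0-GGUF",
    filename="qwen100_va_agent-q8_0.gguf",
    n_ctx=2048,
)
out = llm("The meaning to life and the universe is", max_tokens=64)
print(out["choices"][0]["text"])
```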
|
timduck8/nsfwnew
|
timduck8
| 2025-09-12T06:23:26Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-09-12T06:23:26Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- output:
url: images/1751260672898828907_PgNTXYeu.jpg
text: '-'
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: null
license: other
license_name: faipl-1.0-sd
license_link: LICENSE
---
# nsfwnew
<Gallery />
## Download model
[Download](/timduck8/nsfwnew/tree/main) them in the Files & versions tab.
|
Continual-Mega/ADCT
|
Continual-Mega
| 2025-09-12T06:22:32Z | 0 | 1 | null |
[
"dataset:Continual-Mega/Continual-MEGA-Benchmark",
"arxiv:2506.00956",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2025-06-16T04:12:17Z |
---
license: cc-by-nc-4.0
datasets:
- Continual-Mega/Continual-MEGA-Benchmark
---
# 🧠 Continual-MEGA: A Large-scale Benchmark for Generalizable Continual Anomaly Detection
This repository provides model checkpoints for **Continual-MEGA**, a benchmark introduced in the paper:
[](https://arxiv.org/abs/2506.00956)
🔗 **Codebase**: [Continual-Mega/Continual-Mega](https://github.com/Continual-Mega/Continual-Mega)
---
## 🚀 Overview
Continual-MEGA introduces a realistic and large-scale benchmark for **continual anomaly detection** that emphasizes generalizability across domains and tasks.
The benchmark features:
- ✅ Diverse anomaly types across domains
- 🔁 Class-incremental continual learning setup
- 📈 A large-scale evaluation protocol surpassing previous benchmarks
This repository hosts pretrained **model checkpoints** used in various scenarios defined in the benchmark.
---
## 📦 Available Checkpoints
| Model Name | Scenario | Task | Description |
|---------------------------------------|------------|-------------------------|----------------------------------------------------------|
| `scenario2/prompt_maker` | Scenario 2 | Base | Prompt maker trained on Scenario 2 base classes |
| `scenario2/adapters_base` | Scenario 2 | Base | Adapter trained on Scenario 2 base classes |
| `scenario2/30classes/adapters_task1` | Scenario 2 | Task 1 (30 classes) | Adapter trained on Task 1 (30 classes) in Scenario 2 |
| `scenario2/30classes/adapters_task2` | Scenario 2 | Task 2 (30 classes) | Adapter trained on Task 2 (30 classes) in Scenario 2 |
| *(More to come)* | – | – | – |
---
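The checkpoints above can also be fetched programmatically; here is a minimal sketch with `huggingface_hub`, assuming the files are stored under the subfolder names shown in the table:
```python
from huggingface_hub import snapshot_download

# Download only the Scenario 2 base-class prompt maker and adapter.
local_dir = snapshot_download(
    repo_id="Continual-Mega/ADCT",
    allow_patterns=["scenario2/prompt_maker/*", "scenario2/adapters_base/*"],
)
print(local_dir)
```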
## 🛠 Usage Example
### Continual Setting Evaluation
```
sh eval_continual.sh
```
### Zero-Shot Evaluation
```
sh eval_zero.sh
```
|
omerbektasss/blockassist-bc-keen_fast_giraffe_1757658028
|
omerbektasss
| 2025-09-12T06:21:21Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"keen fast giraffe",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-12T06:20:44Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- keen fast giraffe
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
de-Rodrigo/idefics2-merit
|
de-Rodrigo
| 2025-09-12T06:17:20Z | 0 | 1 | null |
[
"safetensors",
"vision",
"document-understanding",
"donut",
"image-text-to-text",
"conversational",
"en",
"es",
"dataset:de-Rodrigo/merit",
"base_model:HuggingFaceM4/idefics2-8b",
"base_model:finetune:HuggingFaceM4/idefics2-8b",
"license:mit",
"region:us"
] |
image-text-to-text
| 2024-07-21T17:45:31Z |
---
license: mit
datasets:
- de-Rodrigo/merit
language:
- en
- es
base_model:
- HuggingFaceM4/idefics2-8b
pipeline_tag: image-text-to-text
---
# IDEFICS2 Merit
<a href="https://x.com/nearcyan/status/1706914605262684394">
<div style="text-align: center;">
<picture>
<source media="(prefers-color-scheme: dark)" srcset="https://huggingface.co/de-Rodrigo/donut-merit/resolve/main/assets/dragon_huggingface.png">
<source media="(prefers-color-scheme: light)" srcset="https://huggingface.co/de-Rodrigo/donut-merit/resolve/main/assets/dragon_huggingface.png">
<img alt="DragonHuggingFace" src="https://huggingface.co/de-Rodrigo/donut-merit/resolve/main/assets/dragon_huggingface.png" style="width: 200px;">
</picture>
</div>
</a>
## Model Architecture
**This model is based on the Idefics2 architecture and fine-tuned on the Merit dataset for form understanding tasks.**
- Backbone: [Idefics2](https://huggingface.co/HuggingFaceM4/idefics2-8b)
- Training Data: [Merit](https://huggingface.co/datasets/de-Rodrigo/merit)
## Example Usage
```python
from transformers import AutoModel
model = AutoModel.from_pretrained("de-Rodrigo/idefics2-merit")
```
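Since the backbone is Idefics2, a chat-style image-to-text call may be more practical than a bare `AutoModel` load. A hedged sketch, assuming the repository ships full Idefics2 weights and processor files (the image path and question below are illustrative):
```python
from transformers import AutoProcessor, AutoModelForVision2Seq
from PIL import Image

processor = AutoProcessor.from_pretrained("de-Rodrigo/idefics2-merit")
model = AutoModelForVision2Seq.from_pretrained("de-Rodrigo/idefics2-merit")

image = Image.open("school_form.png")  # hypothetical input form image
messages = [
    {"role": "user", "content": [
        {"type": "image"},
        {"type": "text", "text": "Extract the student's subjects and grades."},
    ]},
]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(text=prompt, images=[image], return_tensors="pt")
generated = model.generate(**inputs, max_new_tokens=256)
print(processor.batch_decode(generated, skip_special_tokens=True)[0])
```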
**WIP** 🛠️
|
stonermay/blockassist-bc-diving_lightfooted_caterpillar_1757657739
|
stonermay
| 2025-09-12T06:17:04Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"diving lightfooted caterpillar",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-12T06:16:43Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- diving lightfooted caterpillar
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
TAUR-dev/M-sft_exp_zayneV2-sft
|
TAUR-dev
| 2025-09-12T06:15:58Z | 0 | 0 | null |
[
"safetensors",
"qwen2",
"region:us"
] | null | 2025-09-12T06:15:28Z |
# M-sft_exp_zayneV2-sft
This model was created as part of the **sft_exp_zayneV2** experiment using the SkillFactory experiment management system.
## Model Details
- **Training Method**: LLaMAFactory SFT (Supervised Fine-Tuning)
- **Stage Name**: sft
- **Experiment**: sft_exp_zayneV2
## Training Configuration
{"model_name_or_path": "Qwen/Qwen2.5-1.5B-Instruct", "trust_remote_code": true, "stage": "sft", "do_train": true, "finetuning_type": "full", "deepspeed": "/scratch/10416/zaynesprague/skill_factory_dir/skill-factory/thirdparty/LLaMA-Factory/examples/deepspeed/ds_z2_config.json", "dataset": "TAUR_dev__D_SFT_C_sft_exp_zayneV2_sft_data__sft_train", "template": "qwen", "cutoff_len": 16384, "max_samples": 1000000, "overwrite_cache": true, "preprocessing_num_workers": 1, "dataloader_num_workers": 0, "disable_tqdm": false, "output_dir": "/scratch/10416/zaynesprague/skill_inject_outputs/sf_experiments/new_sft_v3_9_11__zaynev2/llamafactory/checkpoints", "logging_steps": 10, "save_steps": 100000, "plot_loss": true, "overwrite_output_dir": true, "per_device_train_batch_size": 1, "gradient_accumulation_steps": 1, "learning_rate": 1e-06, "num_train_epochs": 3, "lr_scheduler_type": "cosine", "warmup_ratio": 0.05, "weight_decay": 0.0001, "adam_beta1": 0.9, "adam_beta2": 0.95, "bf16": true, "ddp_timeout": 180000000, "gradient_checkpointing": true, "save_only_model": true, "enable_masked_ranges": false, "save_strategy": "steps", "save_total_limit": 5, "sf_tracker_dataset_id": "TAUR-dev/D-ExpTracker__sft_exp_zayneV2__v1", "sf_eval_before_training": false, "sf_wandb_project": "sft_exp_zayneV2_sft", "sf_eval_steps": null, "run_name": "sft_exp_zayneV2_sft"}
## Experiment Tracking
🔗 **View complete experiment details**: [Experiment Tracker Dataset](https://huggingface.co/datasets/TAUR-dev/D-ExpTracker__sft_exp_zayneV2__v1)
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("TAUR-dev/M-sft_exp_zayneV2-sft")
model = AutoModelForCausalLM.from_pretrained("TAUR-dev/M-sft_exp_zayneV2-sft")
```
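For a quick generation test, continuing from the loading snippet above, here is a minimal sketch (the prompt is illustrative; assumes the saved tokenizer keeps the Qwen chat template):
```python
messages = [{"role": "user", "content": "What is 17 * 24?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
output_ids = model.generate(input_ids, max_new_tokens=128)
# Decode only the newly generated tokens.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```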
|
Anhlq/gemma-3-finetune-16bit-copy-v2
|
Anhlq
| 2025-09-12T06:14:55Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma3_text",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:unsloth/gemma-3-270m-it",
"base_model:finetune:unsloth/gemma-3-270m-it",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-12T06:13:07Z |
---
base_model: unsloth/gemma-3-270m-it
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3_text
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** Anhlq
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-3-270m-it
This gemma3_text model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
omerbektasss/blockassist-bc-insectivorous_bold_lion_1757657625
|
omerbektasss
| 2025-09-12T06:14:38Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"insectivorous bold lion",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-12T06:14:01Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- insectivorous bold lion
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mradermacher/Huihui-Qwen3-30B-A3B-abliterated-Fusion-7030-GGUF
|
mradermacher
| 2025-09-12T06:14:05Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"chat",
"Fusion",
"en",
"base_model:huihui-ai/Huihui-Qwen3-30B-A3B-abliterated-Fusion-7030",
"base_model:quantized:huihui-ai/Huihui-Qwen3-30B-A3B-abliterated-Fusion-7030",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-09-12T05:40:42Z |
---
base_model: huihui-ai/Huihui-Qwen3-30B-A3B-abliterated-Fusion-7030
language:
- en
library_name: transformers
license: mit
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- chat
- Fusion
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/huihui-ai/Huihui-Qwen3-30B-A3B-abliterated-Fusion-7030
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Huihui-Qwen3-30B-A3B-abliterated-Fusion-7030-GGUF).***
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Huihui-Qwen3-30B-A3B-abliterated-Fusion-7030-GGUF/resolve/main/Huihui-Qwen3-30B-A3B-abliterated-Fusion-7030.Q2_K.gguf) | Q2_K | 11.4 | |
| [GGUF](https://huggingface.co/mradermacher/Huihui-Qwen3-30B-A3B-abliterated-Fusion-7030-GGUF/resolve/main/Huihui-Qwen3-30B-A3B-abliterated-Fusion-7030.Q3_K_S.gguf) | Q3_K_S | 13.4 | |
| [GGUF](https://huggingface.co/mradermacher/Huihui-Qwen3-30B-A3B-abliterated-Fusion-7030-GGUF/resolve/main/Huihui-Qwen3-30B-A3B-abliterated-Fusion-7030.Q3_K_M.gguf) | Q3_K_M | 14.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Huihui-Qwen3-30B-A3B-abliterated-Fusion-7030-GGUF/resolve/main/Huihui-Qwen3-30B-A3B-abliterated-Fusion-7030.Q3_K_L.gguf) | Q3_K_L | 16.0 | |
| [GGUF](https://huggingface.co/mradermacher/Huihui-Qwen3-30B-A3B-abliterated-Fusion-7030-GGUF/resolve/main/Huihui-Qwen3-30B-A3B-abliterated-Fusion-7030.Q4_K_S.gguf) | Q4_K_S | 17.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Huihui-Qwen3-30B-A3B-abliterated-Fusion-7030-GGUF/resolve/main/Huihui-Qwen3-30B-A3B-abliterated-Fusion-7030.Q4_K_M.gguf) | Q4_K_M | 18.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Huihui-Qwen3-30B-A3B-abliterated-Fusion-7030-GGUF/resolve/main/Huihui-Qwen3-30B-A3B-abliterated-Fusion-7030.Q5_K_S.gguf) | Q5_K_S | 21.2 | |
| [GGUF](https://huggingface.co/mradermacher/Huihui-Qwen3-30B-A3B-abliterated-Fusion-7030-GGUF/resolve/main/Huihui-Qwen3-30B-A3B-abliterated-Fusion-7030.Q5_K_M.gguf) | Q5_K_M | 21.8 | |
| [GGUF](https://huggingface.co/mradermacher/Huihui-Qwen3-30B-A3B-abliterated-Fusion-7030-GGUF/resolve/main/Huihui-Qwen3-30B-A3B-abliterated-Fusion-7030.Q6_K.gguf) | Q6_K | 25.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Huihui-Qwen3-30B-A3B-abliterated-Fusion-7030-GGUF/resolve/main/Huihui-Qwen3-30B-A3B-abliterated-Fusion-7030.Q8_0.gguf) | Q8_0 | 32.6 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
nobu222/gemma2-9b-it-qlora-adapter-for-banner-gen
|
nobu222
| 2025-09-12T06:13:46Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-12T06:13:39Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
midwestern-simulation/essence-3b-v1.1
|
midwestern-simulation
| 2025-09-12T06:13:26Z | 0 | 0 | null |
[
"safetensors",
"dataset:mlfoundations/dclm-baseline-1.0",
"base_model:HuggingFaceTB/SmolLM3-3B-Base",
"base_model:finetune:HuggingFaceTB/SmolLM3-3B-Base",
"region:us"
] | null | 2025-09-11T11:35:20Z |
---
datasets:
- mlfoundations/dclm-baseline-1.0
base_model:
- HuggingFaceTB/SmolLM3-3B-Base
---
# Essence 3B V1.1
This is a system built from two versions of SmolLM3-3B-Base: the 'encoder' is finetuned to turn a text into a set of embedding tokens, which the 'decoder' can reconstitute back into the original text. In addition to “vanilla” reconstruction, this model was trained for span-corruption and masked language modelling.
We use LoRA at rank 64 on QKVO along with trainable LayerNorms and, for the encoder, LoRA on all MLP layers as well as trainable token embeddings.
The model was trained to encode text into any of 1-128 embedding tokens.
## Simple Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
from torch import nn
import torch
from huggingface_hub import hf_hub_download
device = torch.device("cuda:0")
dtype = torch.bfloat16
base_model_id = "HuggingFaceTB/SmolLM3-3B-Base"
compressor_id = "midwestern-simulation/essence-3b-v1.1"
# === MODEL LOADING ===
tokenizer = AutoTokenizer.from_pretrained(base_model_id, padding_side='left')
encoder = AutoModelForCausalLM.from_pretrained(base_model_id, device_map={"":device}, torch_dtype=dtype)
decoder = AutoModelForCausalLM.from_pretrained(base_model_id, device_map={"":device}, torch_dtype=dtype)
encoder = PeftModel.from_pretrained(encoder, compressor_id, subfolder="encoder")
decoder = PeftModel.from_pretrained(decoder, compressor_id, subfolder="decoder")
projector = nn.Linear(2048, 2048).to(device).to(dtype)
projector.load_state_dict(torch.load(hf_hub_download(repo_id=compressor_id, filename="projector.pt")))
# === MODEL INFERENCE ===
text = "mary had a little lamb, little lamb, little lamb, mary had a little lamb whose fleece was white as snow"
n_embed_tokens = 4 # for best performance, can be any within the range of 1-128
encoder_input = text.strip() + f"\n[[/END DOCUMENT]]\n[[START SUMMARY ntoks={n_embed_tokens}]]" + "<|im_end|>" * n_embed_tokens
tokenized = tokenizer(encoder_input, return_tensors='pt', add_special_tokens=False)
tokenized = {k: v.to(device) for k, v in tokenized.items()}
encoding = encoder.model.model(**tokenized).last_hidden_state[:, -n_embed_tokens:, :]
encoding = projector(encoding)
tokenized_prefix = tokenizer("\n[[/END SUMMARY]]\n[[START DOCUMENT]]\n", return_tensors="pt", add_special_tokens=False)
prefix_embeds = decoder.model.model.embed_tokens(tokenized_prefix['input_ids'].to(device))
inputs_embeds = torch.cat([encoding, prefix_embeds], 1)
output = decoder.generate(
inputs_embeds=inputs_embeds,
temperature=0.7,
max_new_tokens=1024,
do_sample=True,
top_k=128,
min_new_tokens=8,
eos_token_id=tokenizer.eos_token_id,
pad_token_id=tokenizer.pad_token_id
)
print(tokenizer.decode(output[0]))
# mary had a little lamb, little lamb, little lamb, mary had a little lamb whose fleece was white as snow
# [[/END DOCUMENT]]<|end_of_text|>
```
|
ttkairamkonda/whisper-large-v3-faa-atc-500k-LoRA32
|
ttkairamkonda
| 2025-09-12T06:12:41Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-12T06:12:35Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mradermacher/InfoLLM-32B-i1-GGUF
|
mradermacher
| 2025-09-12T06:11:56Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"zh",
"en",
"base_model:HFSUN123/InfoLLM-32B",
"base_model:quantized:HFSUN123/InfoLLM-32B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-09-11T23:56:12Z |
---
base_model: HFSUN123/InfoLLM-32B
language:
- zh
- en
library_name: transformers
license: apache-2.0
mradermacher:
readme_rev: 1
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K_M Q4_0 IQ3_XS Q4_1 IQ3_S -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
weighted/imatrix quants of https://huggingface.co/HFSUN123/InfoLLM-32B
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#InfoLLM-32B-i1-GGUF).***
static quants are available at https://huggingface.co/mradermacher/InfoLLM-32B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
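If you just want to try a single-file quant from Python, the sketch below uses the `llama-cpp-python` bindings (one possible route, not an official recipe); the filename shown is only an example, so pick any quant from the table below.
```python
# Minimal sketch: run one of the quants with llama-cpp-python
# (pip install llama-cpp-python huggingface_hub).
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="mradermacher/InfoLLM-32B-i1-GGUF",
    filename="InfoLLM-32B.i1-Q4_K_M.gguf",  # example: the "fast, recommended" quant
    n_ctx=4096,
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Hello!"}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```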
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/InfoLLM-32B-i1-GGUF/resolve/main/InfoLLM-32B.imatrix.gguf) | imatrix | 0.1 | imatrix file (for creating your own quants) |
| [GGUF](https://huggingface.co/mradermacher/InfoLLM-32B-i1-GGUF/resolve/main/InfoLLM-32B.i1-IQ1_S.gguf) | i1-IQ1_S | 7.4 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/InfoLLM-32B-i1-GGUF/resolve/main/InfoLLM-32B.i1-IQ1_M.gguf) | i1-IQ1_M | 8.1 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/InfoLLM-32B-i1-GGUF/resolve/main/InfoLLM-32B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 9.1 | |
| [GGUF](https://huggingface.co/mradermacher/InfoLLM-32B-i1-GGUF/resolve/main/InfoLLM-32B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 10.1 | |
| [GGUF](https://huggingface.co/mradermacher/InfoLLM-32B-i1-GGUF/resolve/main/InfoLLM-32B.i1-IQ2_S.gguf) | i1-IQ2_S | 10.6 | |
| [GGUF](https://huggingface.co/mradermacher/InfoLLM-32B-i1-GGUF/resolve/main/InfoLLM-32B.i1-IQ2_M.gguf) | i1-IQ2_M | 11.5 | |
| [GGUF](https://huggingface.co/mradermacher/InfoLLM-32B-i1-GGUF/resolve/main/InfoLLM-32B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 11.6 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/InfoLLM-32B-i1-GGUF/resolve/main/InfoLLM-32B.i1-Q2_K.gguf) | i1-Q2_K | 12.4 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/InfoLLM-32B-i1-GGUF/resolve/main/InfoLLM-32B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 12.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/InfoLLM-32B-i1-GGUF/resolve/main/InfoLLM-32B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 13.8 | |
| [GGUF](https://huggingface.co/mradermacher/InfoLLM-32B-i1-GGUF/resolve/main/InfoLLM-32B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 14.5 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/InfoLLM-32B-i1-GGUF/resolve/main/InfoLLM-32B.i1-IQ3_S.gguf) | i1-IQ3_S | 14.5 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/InfoLLM-32B-i1-GGUF/resolve/main/InfoLLM-32B.i1-IQ3_M.gguf) | i1-IQ3_M | 15.0 | |
| [GGUF](https://huggingface.co/mradermacher/InfoLLM-32B-i1-GGUF/resolve/main/InfoLLM-32B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 16.1 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/InfoLLM-32B-i1-GGUF/resolve/main/InfoLLM-32B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 17.4 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/InfoLLM-32B-i1-GGUF/resolve/main/InfoLLM-32B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 17.8 | |
| [GGUF](https://huggingface.co/mradermacher/InfoLLM-32B-i1-GGUF/resolve/main/InfoLLM-32B.i1-Q4_0.gguf) | i1-Q4_0 | 18.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/InfoLLM-32B-i1-GGUF/resolve/main/InfoLLM-32B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 18.9 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/InfoLLM-32B-i1-GGUF/resolve/main/InfoLLM-32B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 19.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/InfoLLM-32B-i1-GGUF/resolve/main/InfoLLM-32B.i1-Q4_1.gguf) | i1-Q4_1 | 20.7 | |
| [GGUF](https://huggingface.co/mradermacher/InfoLLM-32B-i1-GGUF/resolve/main/InfoLLM-32B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 22.7 | |
| [GGUF](https://huggingface.co/mradermacher/InfoLLM-32B-i1-GGUF/resolve/main/InfoLLM-32B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 23.3 | |
| [GGUF](https://huggingface.co/mradermacher/InfoLLM-32B-i1-GGUF/resolve/main/InfoLLM-32B.i1-Q6_K.gguf) | i1-Q6_K | 27.0 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
TAUR-dev/M-0911__qrepeat1_ref5_0C.-C.-C-IC.-CC_3args_grpo-rl
|
TAUR-dev
| 2025-09-12T06:09:23Z | 0 | 0 | null |
[
"safetensors",
"qwen2",
"en",
"license:mit",
"region:us"
] | null | 2025-09-11T21:16:11Z |
---
language: en
license: mit
---
# M-0911__qrepeat1_ref5_0C.-C.-C-IC.-CC_3args_grpo-rl
## Model Details
- **Training Method**: VeRL Reinforcement Learning (RL)
- **Stage Name**: rl
- **Experiment**: 0911__qrepeat1_ref5_0C.-C.-C-IC.-CC_3args_grpo
- **RL Framework**: VeRL (Versatile Reinforcement Learning)
## Training Configuration
## Experiment Tracking
🔗 **View complete experiment details**: https://huggingface.co/datasets/TAUR-dev/D-ExpTracker__0911__qrepeat1_ref5_0C.-C.-C-IC.-CC_3args_grpo__v1
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("TAUR-dev/M-0911__qrepeat1_ref5_0C.-C.-C-IC.-CC_3args_grpo-rl")
model = AutoModelForCausalLM.from_pretrained("TAUR-dev/M-0911__qrepeat1_ref5_0C.-C.-C-IC.-CC_3args_grpo-rl")
```
|
Adanato/Llama-3.2-1B-Instruct-high_acereason_1k
|
Adanato
| 2025-09-12T06:08:08Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"fyksft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-12T05:39:43Z |
---
library_name: transformers
tags:
- fyksft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Adanato/Llama-3.2-1B-Instruct-baseline_1k
|
Adanato
| 2025-09-12T06:07:55Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"fyksft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-12T06:06:07Z |
---
library_name: transformers
tags:
- fyksft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ZombitX64/Wilai-1.5
|
ZombitX64
| 2025-09-12T06:07:51Z | 0 | 0 |
transformers
|
[
"transformers",
"pytorch",
"OpenThaiWilai",
"text-generation",
"generated_from_trainer",
"custom_code",
"autotrain_compatible",
"region:us"
] |
text-generation
| 2025-09-12T04:20:07Z |
---
library_name: transformers
tags:
- generated_from_trainer
model-index:
- name: OpenThaiWilai
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# OpenThaiWilai
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.56.1
- Pytorch 2.8.0+cu126
- Datasets 4.0.0
- Tokenizers 0.22.0
|
Aagmon/gemma-3-12b-ft-trans-imp-3
|
Aagmon
| 2025-09-12T06:06:49Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma3",
"image-text-to-text",
"unsloth",
"conversational",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2025-09-11T21:04:42Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Karthikappi0011/qwen3-1.7b-translation-en-tu-test4-small-data
|
Karthikappi0011
| 2025-09-12T06:06:42Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen3",
"trl",
"en",
"base_model:unsloth/Qwen3-1.7B-unsloth-bnb-4bit",
"base_model:finetune:unsloth/Qwen3-1.7B-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-09-12T06:06:37Z |
---
base_model: unsloth/Qwen3-1.7B-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Karthikappi0011
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen3-1.7B-unsloth-bnb-4bit
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
stonermay/blockassist-bc-diving_lightfooted_caterpillar_1757657117
|
stonermay
| 2025-09-12T06:06:40Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"diving lightfooted caterpillar",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-12T06:06:19Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- diving lightfooted caterpillar
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
vnegi1011/kyrgyz-asr
|
vnegi1011
| 2025-09-12T06:03:30Z | 0 | 0 | null |
[
"onnx",
"wav2vec2",
"region:us"
] | null | 2025-09-11T01:42:07Z |
# Kyrgyz ASR Model
Fine-tuned Wav2Vec2 model for Kyrgyz speech recognition, exported to ONNX format.
- **Model**: facebook/wav2vec2-base-960h (fine-tuned)
- **Task**: Automatic Speech Recognition (ASR)
- **Language**: Kyrgyz
- **Files**: `model.onnx`, `vocab.json`, processor configs
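The card ships no usage instructions, so the following is only a rough sketch of how a Wav2Vec2 CTC model exported to ONNX is typically run with `onnxruntime`; the input name, output shape, audio preprocessing, and blank-token convention are assumptions that depend on how the export was done.
```python
# Hypothetical usage sketch (not from the model author): run the ONNX graph and
# greedy-decode the CTC logits with vocab.json. Processor-side normalization is omitted.
import json
import numpy as np
import onnxruntime as ort
import soundfile as sf

session = ort.InferenceSession("model.onnx")
input_name = session.get_inputs()[0].name            # exact name depends on the export

audio, sr = sf.read("sample_16khz.wav", dtype="float32")           # expects 16 kHz mono audio
logits = session.run(None, {input_name: audio[np.newaxis, :]})[0]  # assumed (1, time, vocab)

vocab = json.load(open("vocab.json", encoding="utf-8"))
id2tok = {i: t for t, i in vocab.items()}

# Greedy CTC decoding: collapse repeats and drop the blank/pad token.
tokens, prev = [], None
for idx in logits.argmax(axis=-1)[0]:
    idx = int(idx)
    if idx != prev and id2tok.get(idx, "<pad>") != "<pad>":
        tokens.append(id2tok[idx])
    prev = idx
print("".join(tokens).replace("|", " "))  # "|" is the word delimiter in Wav2Vec2 vocabs
```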
|
NCSOFT/GME-VARCO-VISION-Embedding
|
NCSOFT
| 2025-09-12T06:01:44Z | 1,261 | 11 |
transformers
|
[
"transformers",
"safetensors",
"qwen2_vl",
"image-to-text",
"multimodal",
"video embedding",
"ncsoft",
"ncai",
"varco",
"feature-extraction",
"en",
"base_model:Qwen/Qwen2-VL-7B-Instruct",
"base_model:finetune:Qwen/Qwen2-VL-7B-Instruct",
"license:cc-by-nc-4.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2025-06-10T05:26:52Z |
---
license: cc-by-nc-4.0
base_model:
- Qwen/Qwen2-VL-7B-Instruct
library_name: transformers
tags:
- multimodal
- video embedding
- ncsoft
- ncai
- varco
pipeline_tag: feature-extraction
language:
- en
---
## About GME-VARCO-VISION-Embedding
<div align="center">
<img src="./varco-vision-Embedding.png" width="100%" style="background-color:white; padding:10px;"/>
</div>
`GME-VARCO-VISION-Embedding` is a multimodal embedding model that computes semantic similarity between text, images, and videos in a high-dimensional embedding space. In particular, the model focuses on video retrieval, which demands greater complexity and contextual understanding compared to image retrieval. It achieves high retrieval accuracy and strong generalization performance across various scenarios, such as scene-based search, description-based search, and question-answering-based search.
## Demo Video
Check out our demo videos showcasing our multimodal embedding model in action:
- [English Demo Video](https://www.youtube.com/watch?v=kCvz82Y1BQg)
- [Korean Demo Video](https://youtube.com/shorts/jC2J7rbAfxs)
The demo demonstrates how our embedding model works together with an AI agent to search for relevant videos based on user queries and generate responses using the retrieved video content.
### Model Architecture and Training Method
`GME-VARCO-VISION-Embedding` is based on [`Qwen/Qwen2-VL-7B-Instruct`](https://huggingface.co/Qwen/Qwen2-VL-7B-Instruct), and uses the parameters of [`Alibaba-NLP/gme-Qwen2-VL-7B-Instruct`](https://huggingface.co/Alibaba-NLP/gme-Qwen2-VL-7B-Instruct) to improve the model's general retrieval ability.
#### 1. Fine-tuning (Contrastive Learning) on video preference dataset
To efficiently fine-tune the model, we utilize [ShareGPTVideo’s 17k video preference dataset](https://huggingface.co/datasets/ShareGPTVideo/train_video_and_instruction), which includes prompts, videos, gold answers, and chosen-rejected text pairs. We treat the prompts and videos as queries, and the rejected responses as hard negatives for the gold answers. Each query is trained with in-batch negatives as well as one hard negative using the InfoNCE loss. The model is fully fine-tuned for two epochs on 8 A100 GPUs with a batch size of 8, requiring only a few hours for training.
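As a rough illustration of this objective (not the actual training code), the loss over a batch of query embeddings with in-batch negatives plus one explicit hard negative per query can be sketched as:
```python
# Sketch of InfoNCE with in-batch negatives and one hard negative per query.
# q, pos, hard_neg: (B, D) embeddings; the temperature value is an assumption.
import torch
import torch.nn.functional as F

def info_nce(q, pos, hard_neg, temperature=0.05):
    q, pos, hard_neg = (F.normalize(x, dim=-1) for x in (q, pos, hard_neg))
    in_batch = q @ pos.t()                        # (B, B): the diagonal holds the positives
    hard = (q * hard_neg).sum(-1, keepdim=True)   # (B, 1): similarity to the rejected response
    logits = torch.cat([in_batch, hard], dim=1) / temperature
    labels = torch.arange(q.size(0), device=q.device)
    return F.cross_entropy(logits, labels)
```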
#### 2. Adding Retrieval Vector
To compensate for the insufficiency of training instances and enhance the generalization ability of the fine-tuned model, we compute a retrieval vector 𝜏 by subtracting the weights of the original [`Qwen/Qwen2-VL-7B-Instruct`](https://huggingface.co/Qwen/Qwen2-VL-7B-Instruct) model from those of [`Alibaba-NLP/gme-Qwen2-VL-7B-Instruct`](https://huggingface.co/Alibaba-NLP/gme-Qwen2-VL-7B-Instruct), a Qwen2-VL based image-text retrieval model. This approach is inspired by Chat Vector, which is a method to equip pre-trained language models with chat capabilities in new languages by adding a vector obtained from the weight difference between a base model and its chat-optimized counterpart.
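Conceptually, the merge is simple per-tensor weight arithmetic. The sketch below illustrates it under the assumption that all three checkpoints load as `Qwen2VLForConditionalGeneration` and that "path/to/contrastively-finetuned-model" stands in for the Stage 1 checkpoint; it is not the released build script.
```python
# Sketch of the retrieval-vector merge: tau = GME - base, added to the fine-tuned weights.
# Loading three 7B checkpoints in fp32 requires a large amount of RAM.
import torch
from transformers import Qwen2VLForConditionalGeneration

base = Qwen2VLForConditionalGeneration.from_pretrained("Qwen/Qwen2-VL-7B-Instruct", torch_dtype=torch.float32)
gme = Qwen2VLForConditionalGeneration.from_pretrained("Alibaba-NLP/gme-Qwen2-VL-7B-Instruct", torch_dtype=torch.float32)
tuned = Qwen2VLForConditionalGeneration.from_pretrained("path/to/contrastively-finetuned-model", torch_dtype=torch.float32)

with torch.no_grad():
    base_sd, gme_sd, tuned_sd = base.state_dict(), gme.state_dict(), tuned.state_dict()
    for name, weight in tuned_sd.items():
        if name in base_sd and name in gme_sd:
            tau = gme_sd[name] - base_sd[name]   # retrieval vector for this tensor
            tuned_sd[name] = weight + tau
    tuned.load_state_dict(tuned_sd)

tuned.save_pretrained("gme-varco-vision-embedding-merged")
```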
### Performance
Our model achieves **state-of-the-art (SOTA) zero-shot performance** on the MultiVENT2.0 dataset as of July 2025. See the [official leaderboard](https://eval.ai/web/challenges/challenge-page/2507/leaderboard/6262) for detailed results.
<br>
## Code Examples
`GME-VARCO-VISION-Embedding` adopts the inference pipeline of [`Qwen/Qwen2-VL-7B-Instruct`](https://huggingface.co/Qwen/Qwen2-VL-7B-Instruct).
### Image-Text Retrieval
```python
import torch
import torch.nn.functional as F
import requests
from PIL import Image
from transformers import Qwen2VLForConditionalGeneration, AutoProcessor
from qwen_vl_utils import process_vision_info
model_name = "NCSOFT/GME-VARCO-VISION-Embedding"
model = Qwen2VLForConditionalGeneration.from_pretrained(
model_name,
torch_dtype=torch.bfloat16,
attn_implementation="flash_attention_2",
device_map="auto",
)
processor = AutoProcessor.from_pretrained(model_name)
tokenizer = processor.tokenizer
device = model.device
qry_msg = [
{
"role": "user",
"content": [
{"type": "text", "text": "Find a photo of a cat."},
],
},
]
qry_txt = processor.apply_chat_template(
qry_msg, tokenize=False, add_generation_prompt=True
) + tokenizer.eos_token
qry_input = processor(
text=[qry_txt],
padding=True,
return_tensors="pt",
).to(device)
img_msg = [
{
"role": "user",
"content": [{
"type": "image",
"image": "image"
}]
}
]
img_txt = processor.apply_chat_template(
img_msg, tokenize=False, add_generation_prompt=True
) + tokenizer.eos_token
candidate_imgs= [
# Photo of two cats
{
"role": "user",
"content": [{
"type": "image",
"image": "http://images.cocodataset.org/val2017/000000039769.jpg"}]
},
# Photo of two dogs
{
"role": "user",
"content": [{
"type": "image",
"image": "https://farm1.staticflickr.com/116/290755713_a5de6c1079_z.jpg"}]
},
# photo of two people playing baseball
{
"role": "user",
"content": [{
"type": "image",
"image": "http://farm3.staticflickr.com/2418/2193688811_d9f5e23bbd_z.jpg"}]
},
# Photo of a large crowd in a busy city street
{
"role": "user",
"content": [{
"type": "image",
"image":"http://farm7.staticflickr.com/6049/6329686751_997c68fff9_z.jpg"}]
},
]
candidate_images, _ = process_vision_info(candidate_imgs)
image_inputs = processor(
text=[img_txt] * len(candidate_images),
images=candidate_images,
# videos=,
padding=True,
return_tensors="pt",
).to(device)
with torch.inference_mode():
qry_emb = model(
**qry_input, output_hidden_states=True, return_dict=True
).hidden_states[-1][:, -1, :]
img_emb = model(
**image_inputs, output_hidden_states=True, return_dict=True
).hidden_states[-1][:, -1, :]
qry_emb = F.normalize(qry_emb, dim=-1)
img_emb = F.normalize(img_emb, dim=-1)
score = qry_emb @ img_emb.t()
# tensor([[0.3066, 0.1108, 0.1226, 0.1245]], device='cuda:0', dtype=torch.bfloat16)
# corresponding to the score of photos (cat, dog, baseball, crowd)
```
<br>
### Video Embedding
```Python
vid_message = [
{
"role": "user",
"content": [{
"type": "video",
"video": video_path,
"max_pixels": 262144,
"fps": 2.0,}]
}
]
video_text = processor.apply_chat_template(
vid_message, tokenize=False, add_generation_prompt=True
) + tokenizer.eos_token
image_input, video_input = process_vision_info(vid_message)
video_input = processor(
text=[video_text],
images=image_input,
videos=video_input,
padding=True,
return_tensors="pt",
).to(device)
with torch.inference_mode():
video_emb = model(
**video_input, output_hidden_states=True, return_dict=True
).hidden_states[-1][:, -1, :]
video_emb = F.normalize(video_emb, dim=-1)
```
<br>
|
RijalMuluk/bbca.jk_time_forecast_lightgbm
|
RijalMuluk
| 2025-09-12T06:01:32Z | 0 | 0 | null |
[
"joblib",
"license:apache-2.0",
"region:us"
] | null | 2025-09-12T06:01:03Z |
---
license: apache-2.0
---
|
NCSOFT/VARCO-VISION-14B
|
NCSOFT
| 2025-09-12T06:01:26Z | 5,231 | 36 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"multimodal",
"conversational",
"ncsoft",
"varco",
"image-text-to-text",
"en",
"ko",
"arxiv:2411.19103",
"arxiv:2408.03326",
"base_model:Qwen/Qwen2.5-14B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-14B-Instruct",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2024-11-25T05:08:04Z |
---
language:
- en
- ko
license: cc-by-nc-4.0
tags:
- multimodal
- conversational
- ncsoft
- varco
base_model:
- Qwen/Qwen2.5-14B-Instruct
- google/siglip-so400m-patch14-384
library_name: transformers
pipeline_tag: image-text-to-text
---
# VARCO-VISION-14B
## 🚨News🎙️
- The 2.0 model has been released. Please use the new version.
- 📰 2025-07-16: We released VARCO-VISION-2.0-14B at [link](https://huggingface.co/NCSOFT/VARCO-VISION-2.0-14B)
- 📰 2025-07-16: We released GME-VARCO-VISION-Embedding at [link](https://huggingface.co/NCSOFT/GME-VARCO-VISION-Embedding)
## About the VARCO-VISION-1.0-14B Model
**VARCO-VISION-14B** is a powerful English-Korean Vision-Language Model (VLM). The training pipeline of VARCO-VISION consists of four stages: Feature Alignment Pre-training, Basic Supervised Fine-tuning, Advanced Supervised Fine-tuning, and Preference Optimization. In both multimodal and text-only benchmarks, VARCO-VISION-14B not only surpasses other models of similar size in performance but also achieves scores comparable to those of proprietary models. The model currently accepts a single image and a text prompt as input and generates an output text. It supports grounding, referring, and OCR (Optical Character Recognition).
- **Developed by:** NC Research, Multimodal Generation Team
- **Technical Report:** [VARCO-VISION: Expanding Frontiers in Korean Vision-Language Models](https://arxiv.org/pdf/2411.19103)
- **Blog(Korean):** [VARCO-VISION Technical Report Summary](https://ncsoft.github.io/ncresearch/95ad8712e60063e9ac97538504ac3eea0ac530af)
- **Demo Page:** *The demo page is no longer available.*
- **Languages:** Korean, English
- **License:** CC BY-NC 4.0
- **Architecture:** VARCO-VISION-14B follows the architecture of [LLaVA-OneVision](https://arxiv.org/abs/2408.03326).
- **Base Model:**
- **Language Model:** [Qwen/Qwen2.5-14B-Instruct](https://huggingface.co/Qwen/Qwen2.5-14B-Instruct)
- **Vision Encoder:** [google/siglip-so400m-patch14-384](https://huggingface.co/google/siglip-so400m-patch14-384)
- **Huggingface Version Model:** [NCSOFT/VARCO-VISION-14B-HF](https://huggingface.co/NCSOFT/VARCO-VISION-14B-HF)
- **Korean VLM Benchmarks:**
- You can use the following benchmark datasets in the [LLMs-Eval toolkit](https://github.com/EvolvingLMMs-Lab/lmms-eval).
- [NCSOFT/K-MMBench](https://huggingface.co/datasets/NCSOFT/K-MMBench)
- [NCSOFT/K-SEED](https://huggingface.co/datasets/NCSOFT/K-SEED)
- [NCSOFT/K-MMStar](https://huggingface.co/datasets/NCSOFT/K-MMStar)
- [NCSOFT/K-DTCBench](https://huggingface.co/datasets/NCSOFT/K-DTCBench)
- [NCSOFT/K-LLaVA-W](https://huggingface.co/datasets/NCSOFT/K-LLaVA-W)
- **You can also evaluate VARCO-VISION-14B in the [VLMEval kit](https://github.com/open-compass/VLMEvalKit)**.
- **This model is for research purposes only. Commercial use is prohibited.**
## Uses
### Direct Use
To load VARCO-VISION-14B, start by cloning and installing **LLaVA-NeXT**:
```bash
git clone https://github.com/LLaVA-VL/LLaVA-NeXT
cd LLaVA-NeXT
pip install -e ".[train]"
```
After installing **LLaVA-NeXT**, you can load VARCO-VISION-14B using the following code:
```python
import torch
from transformers import AutoTokenizer
from llava.model.language_model.llava_qwen import LlavaQwenForCausalLM
from llava.mm_utils import tokenizer_image_token, process_images
model_name = "NCSOFT/VARCO-VISION-14B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = LlavaQwenForCausalLM.from_pretrained(
model_name,
torch_dtype=torch.float16,
attn_implementation="flash_attention_2",
low_cpu_mem_usage=True,
device_map="auto"
)
vision_tower = model.get_vision_tower()
image_processor = vision_tower.image_processor
```
Prepare an image and a text input. You need to preprocess the image and tokenize the text. Pass the processed inputs to the model to generate predictions.
```python
import requests
from PIL import Image
# Define a chat history and use `apply_chat_template` to get correctly formatted prompt
# Each value in "content" has to be a list of dicts with types ("text", "image")
conversation = [
{
"role": "user",
"content": [
{"type": "text", "text": "Describe this image."},
{"type": "image"},
],
},
]
prompt = tokenizer.apply_chat_template(conversation, add_generation_prompt=True, tokenize=False)
IMAGE_TOKEN_INDEX = -200
EOS_TOKEN = "<|im_end|>"
input_ids = tokenizer_image_token(prompt, tokenizer, IMAGE_TOKEN_INDEX, return_tensors="pt")
input_ids = input_ids.unsqueeze(0).to(model.device)
image_url = "http://images.cocodataset.org/val2017/000000039769.jpg"
raw_image = Image.open(requests.get(image_url, stream=True).raw)
image_tensors = process_images([raw_image], image_processor, model.config)
image_tensors = [image_tensor.half().to(model.device) for image_tensor in image_tensors]
image_sizes = [raw_image.size]
with torch.inference_mode():
output_ids = model.generate(
input_ids,
images=image_tensors,
image_sizes=image_sizes,
do_sample=False,
max_new_tokens=1024,
use_cache=True,
)
outputs = tokenizer.batch_decode(output_ids)[0]
if outputs.endswith(EOS_TOKEN):
outputs = outputs[: -len(EOS_TOKEN)]
outputs = outputs.strip()
print(outputs)
```
### Specialized Features
If a question is based on bounding boxes or requires bounding boxes as an output, please include the special tokens in the input text.
The following special tokens are used to define specific tasks, inputs, and outputs for the model:
- `<gro>`: Indicates that the model's response should include bounding box information.
- `<ocr>`: Specifies OCR tasks for recognizing text within an image.
- `<char>` and `</char>`: Used to mark a text phrase.
- `<obj>` and `</obj>`: Used to indicate an object.
- `<bbox>` and `</bbox>`: Used to represent a bounding box.
- `<delim>`: Represents multiple location points for a single object or text.
#### Grounding
Grounding refers to a task where the model needs to identify specific locations within an image to provide an appropriate answer. To perform grounding, prepend the special token `<gro>` to the question.
```python
conversation = [
{
"role": "user",
"content": [
{"type": "text", "text": "<gro>\nDescribe the image in detail."},
{"type": "image"},
],
},
]
```
**Expected Output Example:**
```html
The image shows <obj>two cats</obj><bbox>0.521, 0.049, 0.997, 0.783<delim>0.016, 0.108, 0.512, 0.99</bbox> lying on <obj>a pink blanket</obj><bbox>0.002, 0.231, 0.999, 0.999</bbox>. The cat on the left is lying on its side with its head resting on the blanket and its body stretched out. The cat on the right is lying on its back with its paws stretched out and its head turned to the side. Both cats appear relaxed and comfortable. There are also <obj>two remote controls</obj><bbox>0.039, 0.138, 0.283, 0.257<delim>0.508, 0.166, 0.581, 0.295</bbox> placed near the cats, one on each side of them.
```
<img src="assets/grounding.png" alt="Grounding Example" width="400"/>
#### Referring
VARCO-VISION-14B can handle location-specific questions using bounding boxes. To perform referring tasks, make a conversation including the object of interest within `<obj>` and `</obj>` tags. You have to specify its location with `<bbox>` and `</bbox>` tags. This allows the model to understand the context and focus on the object at the specified location. A bbox is represented in the form (x1, y1, x2, y2). The first two values indicate the top-left position of a bbox, and the latter two values are the bottom-right position.
```python
conversation = [
{
"role": "user",
"content": [
{
"type": "text",
"text": "<obj>이 물건</obj><bbox>0.039, 0.138, 0.283, 0.257</bbox>은 어떻게 쓰는거야?",
},
{"type": "image"},
],
},
]
```
**Expected Output Example:**
```
**이 물건**은 리모컨으로, 주로 텔레비전이나 다른 전자 기기를 원격으로 조작하는 데 사용됩니다. 버튼을 누르면 채널 변경, 볼륨 조절, 전원 켜기/끄기 등의 기능을 수행할 수 있습니다. 리모컨의 버튼에는 일반적으로 숫자, 메뉴, 설정, 재생/일시정지 등의 기능이 포함되어 있으며, 사용자는 이를 통해 손쉽게 기기를 제어할 수 있습니다.
```
#### OCR
To perform Optical Character Recognition (OCR), use the `<ocr>` token.
```python
image_file = "./assets/ocr_1.png"
raw_image = Image.open(image_file)
conversation = [
{
"role": "user",
"content": [
{"type": "text", "text": "<ocr>"},
{"type": "image"},
],
},
]
```
**Expected Output Example:**
```
<char>백범로</char><bbox>0.172, 0.265, 0.328, 0.34</bbox>
<char>124번길</char><bbox>0.349, 0.265, 0.512, 0.34</bbox>
<char>Baekbeom-ro</char><bbox>0.171, 0.335, 0.432, 0.391</bbox>
<char>124</char><bbox>0.444, 0.34, 0.508, 0.391</bbox>
<char>만수주공아파트</char><bbox>0.109, 0.528, 0.335, 0.594</bbox>
<char>시흥</char><bbox>0.443, 0.516, 0.522, 0.578</bbox>
<char>시청</char><bbox>0.711, 0.521, 0.811, 0.594</bbox>
<char>Mansu</char><bbox>0.103, 0.601, 0.181, 0.647</bbox>
<char>Jugong</char><bbox>0.186, 0.601, 0.273, 0.658</bbox>
<char>Apt</char><bbox>0.281, 0.601, 0.327, 0.651</bbox>
<char>42</char><bbox>0.377, 0.601, 0.416, 0.647</bbox>
<char>Shieung</char><bbox>0.445, 0.578, 0.53, 0.623</bbox>
<char>인천대공원</char><bbox>0.431, 0.623, 0.609, 0.684</bbox>
<char>모래내시장역</char><bbox>0.651, 0.591, 0.873, 0.664</bbox>
<char>IncheonGrand</char><bbox>0.433, 0.684, 0.561, 0.723</bbox>
<char>Park</char><bbox>0.564, 0.684, 0.611, 0.723</bbox>
```
<img src="assets/ocr_2.jpg" alt="OCR Example" width="350"/>
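The bounding boxes in grounding, referring, and OCR outputs are normalized to the [0, 1] range. The helper below is only an illustration (not part of the official codebase) of converting the tagged output back to pixel coordinates:
```python
# Parse <char>/<obj> ... <bbox>x1, y1, x2, y2</bbox> spans from the model output and
# scale the normalized coordinates to pixel values for a given image size.
import re

def parse_boxes(output_text, image_width, image_height):
    pattern = re.compile(r"<(?:char|obj)>(.*?)</(?:char|obj)><bbox>(.*?)</bbox>", re.S)
    results = []
    for label, bbox_str in pattern.findall(output_text):
        for box in bbox_str.split("<delim>"):  # one object may carry several boxes
            x1, y1, x2, y2 = (float(v) for v in box.split(","))
            results.append((label.strip(),
                            (x1 * image_width, y1 * image_height,
                             x2 * image_width, y2 * image_height)))
    return results

# Example with one OCR line from above (the image size is an assumption):
sample = "<char>백범로</char><bbox>0.172, 0.265, 0.328, 0.34</bbox>"
for text, box in parse_boxes(sample, 1280, 960):
    print(text, box)
```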
## Citing the Model
If you use VARCO-VISION-14B in your research, please cite the following:
```bibtex
@misc{ju2024varcovisionexpandingfrontierskorean,
title={VARCO-VISION: Expanding Frontiers in Korean Vision-Language Models},
author={Jeongho Ju and Daeyoung Kim and SunYoung Park and Youngjune Kim},
year={2024},
eprint={2411.19103},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2411.19103},
}
```
|
omerbektasss/blockassist-bc-insectivorous_bold_lion_1757656852
|
omerbektasss
| 2025-09-12T06:01:14Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"insectivorous bold lion",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-12T06:01:10Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- insectivorous bold lion
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
nuphoto-ian/Qwen3-8B-QAT-NVFP4
|
nuphoto-ian
| 2025-09-12T06:01:05Z | 0 | 0 | null |
[
"safetensors",
"qwen3",
"license:apache-2.0",
"8-bit",
"modelopt",
"region:us"
] | null | 2025-09-12T05:33:47Z |
---
license: apache-2.0
---
|
nightmedia/ERNIE-4.5-21B-A3B-Thinking-qx86-hi-mlx
|
nightmedia
| 2025-09-12T06:00:49Z | 0 | 0 |
mlx
|
[
"mlx",
"safetensors",
"ernie4_5_moe",
"ERNIE4.5",
"text-generation",
"conversational",
"custom_code",
"en",
"zh",
"base_model:baidu/ERNIE-4.5-21B-A3B-Thinking",
"base_model:quantized:baidu/ERNIE-4.5-21B-A3B-Thinking",
"license:apache-2.0",
"8-bit",
"region:us"
] |
text-generation
| 2025-09-12T04:49:36Z |
---
license: apache-2.0
language:
- en
- zh
pipeline_tag: text-generation
tags:
- ERNIE4.5
- mlx
library_name: mlx
base_model: baidu/ERNIE-4.5-21B-A3B-Thinking
---
# ERNIE-4.5-21B-A3B-Thinking-qx86-hi-mlx
This model [ERNIE-4.5-21B-A3B-Thinking-qx86-hi-mlx](https://huggingface.co/nightmedia/ERNIE-4.5-21B-A3B-Thinking-qx86-hi-mlx) was
converted to MLX format from [baidu/ERNIE-4.5-21B-A3B-Thinking](https://huggingface.co/baidu/ERNIE-4.5-21B-A3B-Thinking)
using mlx-lm version **0.27.1**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("nightmedia/ERNIE-4.5-21B-A3B-Thinking-qx86-hi-mlx")
prompt = "hello"
if tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
iori-ltn/nanj-qwen2.5-3b-merged
|
iori-ltn
| 2025-09-12T05:58:10Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"base_model:adapter:unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit",
"lora",
"sft",
"transformers",
"trl",
"unsloth",
"text-generation",
"conversational",
"dataset:p1atdev/open2ch",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2025-09-12T04:33:22Z |
---
base_model: unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit
library_name: peft
model_name: nanj-qwen
tags:
- base_model:adapter:unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit
- lora
- sft
- transformers
- trl
- unsloth
pipeline_tag: text-generation
license: apache-2.0
datasets:
- p1atdev/open2ch
---
# Nanj Qwen2.5-3B LoRA (DAPT + SFT)
This model is a LoRA built on **`Qwen/Qwen2.5-3B-Instruct`**, with **DAPT** on open2ch data and **SFT** on livejupiter data.
It is optimized for generating nanJ-style (Japanese message-board) conversation.
---
## Usage
```python
from unsloth import FastLanguageModel
# Base model
base = "Qwen/Qwen2.5-3B-Instruct"
# Load LoRA adapter
model, tok = FastLanguageModel.from_pretrained(base, load_in_4bit=True, max_seq_length=1024)
model.load_adapter("your-username/nanj-qwen2.5-3b-lora")
# example
msgs = [{"role":"user", "content":"藤浪復活したら阪神どうなる?"}]
prompt = tok.apply_chat_template(msgs, tokenize=False, add_generation_prompt=True)
out = model.generate(
**tok(prompt, return_tensors="pt").to(model.device),
max_new_tokens=128, do_sample=True, top_p=0.9, temperature=0.8
)
print(tok.decode(out[0], skip_special_tokens=True).split(prompt)[-1].strip())
```
## Training procedure
Both the DAPT and SFT phases of this model were carried out with **LoRA**.
- **Phase 1: DAPT (LoRA)**
  Continued pre-training (next-token prediction) on open2ch (all-corpus-cleaned)
  → adapts the model to open2ch vocabulary and phrasing
- **Phase 2: SFT (LoRA)**
  Supervised fine-tuning on conversation pairs built from livejupiter-cleaned
  → acquires the nanJ conversational style
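The exact training scripts are not published; the sketch below only illustrates the shape of the two LoRA phases with 🤗 `peft`/`trl`. The LoRA hyperparameters, dataset columns, and tiny in-memory datasets are assumptions standing in for the open2ch / livejupiter data.
```python
# Illustrative two-phase LoRA recipe (NOT the actual training script).
from datasets import Dataset
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

lora = LoraConfig(r=16, lora_alpha=32, target_modules="all-linear", task_type="CAUSAL_LM")

# Phase 1: DAPT, plain next-token prediction on raw open2ch posts.
dapt_ds = Dataset.from_dict({"text": ["(open2ch all-corpus-cleaned posts go here)"]})
SFTTrainer(
    model="Qwen/Qwen2.5-3B-Instruct",
    train_dataset=dapt_ds,
    peft_config=lora,
    args=SFTConfig(output_dir="nanj-dapt-lora"),
).train()

# Phase 2: SFT on user/assistant pairs built from livejupiter-cleaned threads.
# In practice this phase continues from (or merges) the Phase 1 adapter.
sft_ds = Dataset.from_dict({
    "messages": [[{"role": "user", "content": "..."},
                  {"role": "assistant", "content": "..."}]]
})
SFTTrainer(
    model="Qwen/Qwen2.5-3B-Instruct",
    train_dataset=sft_ds,
    peft_config=lora,
    args=SFTConfig(output_dir="nanj-sft-lora"),
).train()
```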
### Framework versions
- PEFT 0.17.1
- TRL: 0.23.0
- Transformers: 4.56.1
- Pytorch: 2.8.0+cu126
- Datasets: 3.6.0
- Tokenizers: 0.22.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
stonermay/blockassist-bc-diving_lightfooted_caterpillar_1757656502
|
stonermay
| 2025-09-12T05:56:10Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"diving lightfooted caterpillar",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-12T05:56:01Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- diving lightfooted caterpillar
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
N-A-Me/Alaska
|
N-A-Me
| 2025-09-12T05:55:19Z | 0 | 0 | null |
[
"base_model:Liberata/illustrious-xl-v1.0",
"base_model:finetune:Liberata/illustrious-xl-v1.0",
"license:cc",
"region:us"
] | null | 2025-05-07T06:04:03Z |
---
widget:
- text: Alaska
base_model: Liberata/illustrious-xl-v1.0
instance_prompt: Alaska
license: cc
---
# Alaska
<Gallery />
## Trigger words
You should use `Alaska` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/N-A-Me/Alaska/tree/main) them in the Files & versions tab.
|
omerbektasss/blockassist-bc-keen_fast_giraffe_1757656465
|
omerbektasss
| 2025-09-12T05:55:19Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"keen fast giraffe",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-12T05:54:41Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- keen fast giraffe
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
nyu-dice-lab/VeriThoughts-Reasoning-32B-Qwen3
|
nyu-dice-lab
| 2025-09-12T05:53:05Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen3-32B",
"base_model:finetune:Qwen/Qwen3-32B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-11T01:14:35Z |
---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen3-32B
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: VeriThoughts-Reasoning-32B-Qwen3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# VeriThoughts-Reasoning-32B-Qwen3
This model is a fine-tuned version of [Qwen/Qwen3-32B](https://huggingface.co/Qwen/Qwen3-32B) on the reasoning_dataset dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 8e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 8
- total_eval_batch_size: 64
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5.0
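For readers who want to map these values onto 🤗 Transformers directly, a rough equivalent in `TrainingArguments` is sketched below (the actual run used LLaMA-Factory's own configuration format across 8 GPUs):
```python
# Rough TrainingArguments equivalent of the listed hyperparameters (a sketch, not the original config).
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="VeriThoughts-Reasoning-32B-Qwen3",
    learning_rate=8e-5,
    per_device_train_batch_size=1,   # x 8 GPUs -> total train batch size 8
    per_device_eval_batch_size=8,    # x 8 GPUs -> total eval batch size 64
    seed=42,
    optim="adamw_torch",
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=5.0,
)
```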
### Training results
### Framework versions
- Transformers 4.51.3
- Pytorch 2.8.0+cu128
- Datasets 3.6.0
- Tokenizers 0.21.1
|
VizitonMexico/VizitonMexico
|
VizitonMexico
| 2025-09-12T05:51:54Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-09-12T05:51:03Z |
---
license: apache-2.0
---
What is Viziton?
Viziton capsule is a specially formulated eye capsule designed to support visual health and maintain healthy vision. In today's world, where screen time has become unavoidable, caring for your eyesight has never been more important. Viziton Pastillas is created to nourish the eyes, protect them from daily stress, and promote long-term clarity. Whether you experience mild eye strain or simply want to maintain healthy vision as you age, Viziton tablets are designed to become your natural support system (Viziton, how to use).
Official website: <a href="https://www.nutritionsee.com/vizitoexico">www.Viziton.com</a>
<p><a href="https://www.nutritionsee.com/vizitoexico"> <img src="https://www.nutritionsee.com/wp-content/uploads/2025/09/viziton-Mexico.png" alt="enter image description here"> </a></p>
<a href="https://www.nutritionsee.com/vizitoexico">Buy now! Click the link below for more information and get a 50% discount. Hurry!</a>
Official website: <a href="https://www.nutritionsee.com/vizitoexico">www.Viziton.com</a>
|
omerbektasss/blockassist-bc-insectivorous_bold_lion_1757656089
|
omerbektasss
| 2025-09-12T05:48:32Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"insectivorous bold lion",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-12T05:48:27Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- insectivorous bold lion
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
JobixAi/tts-mapalo-pipeliner
|
JobixAi
| 2025-09-12T05:47:38Z | 0 | 0 | null |
[
"safetensors",
"llama",
"region:us"
] | null | 2025-09-12T04:43:09Z |
This model fine-tunes the pretrained model `canopylabs/orpheus-3b-0.1-pretrained` using the fine-tuning pipeline: full fine-tuning with Unsloth for 5 epochs.
### Datasets
JobixAi/mapalo-higgs-20250912_021709
### Inference
```python
temperature = 0.7
top_p = 0.9
repetition_penalty = 1.1
```
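The values above are sampling parameters. A minimal sketch of passing them to `generate` is shown below; note that the checkpoint is an Orpheus-style TTS model, so the generated tokens are audio-codec tokens, and the Orpheus-specific prompt format and audio decoding are not covered here (treat the prompt as a placeholder).
```python
# Sketch only: apply the listed sampling parameters with transformers' generate().
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "JobixAi/tts-mapalo-pipeliner"
tok = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.bfloat16, device_map="auto")

inputs = tok("Hello from Mapalo!", return_tensors="pt").to(model.device)
out = model.generate(
    **inputs,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
    repetition_penalty=1.1,
    max_new_tokens=1200,
)
```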
|
stonermay/blockassist-bc-diving_lightfooted_caterpillar_1757655885
|
stonermay
| 2025-09-12T05:46:12Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"diving lightfooted caterpillar",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-12T05:45:48Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- diving lightfooted caterpillar
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
CriteriaPO/qwen2.5-3b-dpo-finegrained-20-vanilla
|
CriteriaPO
| 2025-09-12T05:46:10Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"trl",
"dpo",
"conversational",
"arxiv:2305.18290",
"base_model:CriteriaPO/qwen2.5-3b-sft-10",
"base_model:finetune:CriteriaPO/qwen2.5-3b-sft-10",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-10T01:31:35Z |
---
base_model: CriteriaPO/qwen2.5-3b-sft-10
library_name: transformers
model_name: qwen2.5-3b-dpo-finegrained-20-vanilla
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---
# Model Card for qwen2.5-3b-dpo-finegrained-20-vanilla
This model is a fine-tuned version of [CriteriaPO/qwen2.5-3b-sft-10](https://huggingface.co/CriteriaPO/qwen2.5-3b-sft-10).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="CriteriaPO/qwen2.5-3b-dpo-finegrained-20-vanilla", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/bborges/CriteriaPreferences/runs/7bj9ejy1)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
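For reference, the sketch below shows how a DPO run is typically wired up with TRL's `DPOTrainer`. It is illustrative only: the preference dataset and hyperparameters are placeholders, not the exact configuration behind this checkpoint.
```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

# Placeholder setup; not the exact data or hyperparameters used for this model.
base = "CriteriaPO/qwen2.5-3b-sft-10"
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

# Any preference dataset with "prompt", "chosen" and "rejected" columns works here.
train_dataset = load_dataset("trl-lib/ultrafeedback_binarized", split="train")

args = DPOConfig(output_dir="qwen2.5-3b-dpo", beta=0.1, per_device_train_batch_size=2)
trainer = DPOTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    processing_class=tokenizer,
)
trainer.train()
```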
### Framework versions
- TRL: 0.12.2
- Transformers: 4.46.3
- Pytorch: 2.1.2+cu121
- Datasets: 3.1.0
- Tokenizers: 0.20.3
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
diabolic6045/Sanskrit-Qwen2.5-VL-7B-Instruct-OCR
|
diabolic6045
| 2025-09-12T05:46:08Z | 7 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2_5_vl",
"image-to-text",
"axolotl",
"text-generation",
"conversational",
"sa",
"dataset:diabolic6045/sanskrit-ocr-parallel-corpus-chat-template",
"dataset:snskrt/Sanskrit_OCR_Parallel_Corpus",
"base_model:Qwen/Qwen2.5-VL-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-VL-7B-Instruct",
"license:apache-2.0",
"model-index",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-08T06:10:48Z |
---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2.5-VL-7B-Instruct
tags:
- axolotl
- transformers
datasets:
- diabolic6045/sanskrit-ocr-parallel-corpus-chat-template
- snskrt/Sanskrit_OCR_Parallel_Corpus
pipeline_tag: text-generation
model-index:
- name: qwen2-5-vl-sanskrit-ocr
results:
- task:
type: image-to-text
dataset:
name: Sanskrit OCR Test Set
type: sanskrit-ocr
metrics:
- name: Exact Match Accuracy
type: exact_match
value: 1.59
- name: Character-level Accuracy
type: character_accuracy
value: 86.38
- name: Token-level Jaccard Similarity
type: jaccard_similarity
value: 50.44
- name: Success Rate
type: success_rate
value: 100
source:
name: Sanskrit OCR Evaluation
url: https://huggingface.co/datasets/diabolic6045/sanskrit-ocr-parallel-corpus-chat-template/viewer/default/test
language:
- sa
---
# Sanskrit-Qwen2.5-VL-7B-Instruct-OCR
This model is a fine-tuned version of [Qwen/Qwen2.5-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct) on the [diabolic6045/sanskrit-ocr-parallel-corpus-chat-template](https://huggingface.co/datasets/diabolic6045/sanskrit-ocr-parallel-corpus-chat-template) dataset. The dataset was converted from [snskrt/Sanskrit_OCR_Parallel_Corpus](https://huggingface.co/datasets/snskrt/Sanskrit_OCR_Parallel_Corpus) by [Sanskrit Datasets](https://huggingface.co/snskrt).
It achieves the following results on the evaluation set:
- Loss: 0.2660
- Memory/max Mem Active(gib): 20.79
- Memory/max Mem Allocated(gib): 20.79
- Memory/device Mem Reserved(gib): 21.46
## Model description
This is a fine-tuned version of Qwen2.5-VL-7B-Instruct, specifically adapted for Sanskrit OCR (Optical Character Recognition) tasks. The model has been trained using LoRA (Low-Rank Adaptation) on a dataset of Sanskrit text images and their corresponding transcriptions.
**Key Features:**
- **Base Model**: Qwen/Qwen2.5-VL-7B-Instruct (7 billion parameters)
- **Task**: Sanskrit OCR - converting Sanskrit text images to machine-readable text
- **Training Method**: LoRA fine-tuning with vision-language capabilities
- **Dataset**: Sanskrit OCR Parallel Corpus with chat template formatting
- **Architecture**: Vision-Language Model with multimodal understanding
**Capabilities:**
- Read and transcribe Sanskrit text from images
- Handle various Sanskrit scripts and fonts
- Process both text and visual inputs simultaneously
- Generate accurate Sanskrit text transcriptions
The model maintains the original Qwen2.5-VL's vision-language capabilities while being specialized for Sanskrit text recognition tasks.
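No inference snippet ships with this card, so the following is a hedged sketch using the standard Qwen2.5-VL API in a recent `transformers` release; the image path and prompt wording are placeholders.
```python
import torch
from PIL import Image
from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration

model_id = "diabolic6045/Sanskrit-Qwen2.5-VL-7B-Instruct-OCR"
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_id)

image = Image.open("sanskrit_page.png")  # placeholder path to a Sanskrit text image
conversation = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": image},
            {"type": "text", "text": "Transcribe the Sanskrit text in this image."},
        ],
    },
]
inputs = processor.apply_chat_template(
    conversation, add_generation_prompt=True, tokenize=True,
    return_dict=True, return_tensors="pt",
).to(model.device)

output_ids = model.generate(**inputs, max_new_tokens=512)
print(processor.decode(output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```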
## Training and evaluation data
### Training Dataset
The model was trained on the [diabolic6045/sanskrit-ocr-parallel-corpus-chat-template](https://huggingface.co/datasets/diabolic6045/sanskrit-ocr-parallel-corpus-chat-template) dataset, which contains Sanskrit text images paired with their corresponding transcriptions. The dataset was converted from the original [snskrt/Sanskrit_OCR_Parallel_Corpus](https://huggingface.co/datasets/snskrt/Sanskrit_OCR_Parallel_Corpus) and formatted with chat templates for vision-language training.
### Evaluation Results
The model was evaluated on a test set of 314 Sanskrit text samples:
| Metric | Value |
|:------:|:-----:|
| **Total Samples** | 314 |
| **Successful Samples** | 314 |
| **Failed Samples** | 0 |
| **Success Rate** | 100.00% |
| **Exact Match Accuracy** | 1.59% |
| **Character-level Accuracy** | 86.38% |
| **Token-level Jaccard Similarity** | 50.44% |
**Key Insights:**
- The model successfully processes all test samples without failures
- High character-level accuracy (86.38%) indicates good recognition of individual Sanskrit characters
- Lower exact match accuracy (1.59%) suggests room for improvement in complete text transcription
- Moderate token-level similarity (50.44%) shows reasonable semantic understanding
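The character-level and token-level scores above can be approximated with simple string metrics; the sketch below is one plausible implementation and may differ from the exact evaluation script used for this card.
```python
from difflib import SequenceMatcher

def character_accuracy(prediction: str, reference: str) -> float:
    # Fraction of matching characters between prediction and reference.
    return SequenceMatcher(None, prediction, reference).ratio()

def token_jaccard(prediction: str, reference: str) -> float:
    # Intersection-over-union of the whitespace token sets.
    pred, ref = set(prediction.split()), set(reference.split())
    return len(pred & ref) / len(pred | ref) if pred | ref else 1.0

pred = "ramah vanam gacchati"   # placeholder transcriptions
ref = "ramah vanam gacchati."
print(character_accuracy(pred, ref), token_jaccard(pred, ref))
```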
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- total_eval_batch_size: 4
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 110
- training_steps: 1105
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mem Active(gib) | Mem Allocated(gib) | Mem Reserved(gib) |
|:-------------:|:-----:|:----:|:---------------:|:---------------:|:------------------:|:-----------------:|
| No log | 0 | 0 | 3.3372 | 17.59 | 17.59 | 17.66 |
| 0.2428 | 1.0 | 369 | 0.3075 | 20.69 | 20.69 | 21.27 |
| 0.2057 | 2.0 | 738 | 0.2660 | 20.79 | 20.79 | 21.46 |
<br>
This model was trained using: <br>
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.12.2`
```yaml
base_model: Qwen/Qwen2.5-VL-7B-Instruct
processor_type: AutoProcessor
# these 3 lines are needed for now to handle vision chat templates w images
skip_prepare_dataset: true
remove_unused_columns: false
sample_packing: false
chat_template: qwen2_vl
datasets:
- path: sanskrit_multimodal_train.json
type: chat_template
field_messages: messages
dataset_prepared_path: last_run_prepared
val_set_size: 0.01
output_dir: ./outputs/out-qwen2-5-vl
adapter: lora
lora_model_dir:
sequence_len: 2048
pad_to_sequence_len: false
lora_r: 32
lora_alpha: 16
lora_dropout: 0.05
lora_target_modules: 'model.language_model.layers.[\d]+.(mlp|cross_attn|self_attn).(up|down|gate|q|k|v|o)_proj'
wandb_project: Sanskrit-OCR
wandb_entity:
wandb_watch:
wandb_name: qwen2-5-vl-sanskrit-ocr
wandb_log_model:
gradient_accumulation_steps: 4
micro_batch_size: 2
num_epochs: 3
optimizer: adamw_bnb_8bit
lr_scheduler: cosine
learning_rate: 0.0002
bf16: true
fp16:
tf32: true
gradient_checkpointing: true
logging_steps: 1
flash_attention: true
eager_attention:
warmup_ratio: 0.1
evals_per_epoch: 1
saves_per_epoch: 1
weight_decay: 0.0
# Automatically upload checkpoint and final model to HF
hub_model_id: diabolic6045/qwen2-5-vl-sanskrit-ocr-lora
# save_first_step: true # uncomment this to validate checkpoint saving works with your config
```
</details><br>
### Framework versions
- PEFT 0.17.0
- Transformers 4.55.2
- Pytorch 2.7.1+cu128
- Datasets 4.0.0
- Tokenizers 0.21.2
|
Asap7772/qwen3-4b-arc-second-stage-star-sftinit-lr1e-6-0908
|
Asap7772
| 2025-09-12T05:42:54Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-12T04:47:09Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
omerbektasss/blockassist-bc-keen_fast_giraffe_1757655736
|
omerbektasss
| 2025-09-12T05:42:36Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"keen fast giraffe",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-12T05:42:32Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- keen fast giraffe
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
dan2-ux/mistral-500
|
dan2-ux
| 2025-09-12T05:42:16Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"generated_from_trainer",
"trl",
"sft",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:finetune:mistralai/Mistral-7B-v0.1",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-11T08:44:56Z |
---
base_model: mistralai/Mistral-7B-v0.1
library_name: transformers
model_name: mistral-500
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for mistral-500
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="dan2-ux/mistral-500", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/danh25911-bosch-global/Fine%20tuning%20of%20Mistral%207B/runs/v73tk7fm?apiKey=ed94c7e2762b3e7af52e2b746f14631975e151c4)
This model was trained with SFT.
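For context, a generic SFT run with TRL looks like the sketch below; the dataset and settings are placeholders, not the recipe used for this checkpoint.
```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Placeholder dataset and settings; not the exact recipe behind mistral-500.
train_dataset = load_dataset("trl-lib/Capybara", split="train")

trainer = SFTTrainer(
    model="mistralai/Mistral-7B-v0.1",
    train_dataset=train_dataset,
    args=SFTConfig(output_dir="mistral-500"),
)
trainer.train()
```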
### Framework versions
- TRL: 0.23.0
- Transformers: 4.56.1
- Pytorch: 2.6.0
- Datasets: 4.0.0
- Tokenizers: 0.22.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
davidilag/wav2vec2-xls-r-300m-pt-500h-FO-500h-SE-cp-best-faroese-100h-30-epochs_run9_2025-09-11
|
davidilag
| 2025-09-12T05:42:07Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-09-11T20:19:26Z |
---
library_name: transformers
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: wav2vec2-xls-r-300m-pt-500h-FO-500h-SE-cp-best-faroese-100h-30-epochs_run9_2025-09-11
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-300m-pt-500h-FO-500h-SE-cp-best-faroese-100h-30-epochs_run9_2025-09-11
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1013
- Wer: 18.8175
- Cer: 4.0350
## Model description
More information needed
## Intended uses & limitations
More information needed
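Until the card is completed, a minimal transcription sketch is given below; it assumes the checkpoint is a standard Wav2Vec2 CTC model and that a 16 kHz mono recording is available.
```python
from transformers import pipeline

# Hedged sketch: "sample.wav" is a placeholder for a 16 kHz mono Faroese recording.
asr = pipeline(
    "automatic-speech-recognition",
    model="davidilag/wav2vec2-xls-r-300m-pt-500h-FO-500h-SE-cp-best-faroese-100h-30-epochs_run9_2025-09-11",
)
print(asr("sample.wav")["text"])
```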
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: AdamW (torch fused) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-------:|:-----:|:---------------:|:-------:|:-------:|
| 3.2822 | 0.4877 | 1000 | 3.2104 | 100.0 | 99.3017 |
| 0.6811 | 0.9754 | 2000 | 0.3949 | 38.8377 | 10.4119 |
| 0.3765 | 1.4628 | 3000 | 0.2219 | 29.8894 | 7.5540 |
| 0.3338 | 1.9505 | 4000 | 0.1891 | 28.0169 | 6.9157 |
| 0.2546 | 2.4379 | 5000 | 0.1699 | 26.6423 | 6.4739 |
| 0.2495 | 2.9256 | 6000 | 0.1524 | 25.7171 | 6.2017 |
| 0.1873 | 3.4131 | 7000 | 0.1380 | 24.4570 | 5.7646 |
| 0.1933 | 3.9008 | 8000 | 0.1384 | 24.2455 | 5.7732 |
| 0.1467 | 4.3882 | 9000 | 0.1331 | 23.6155 | 5.5405 |
| 0.1585 | 4.8759 | 10000 | 0.1340 | 23.1793 | 5.5302 |
| 0.141 | 5.3633 | 11000 | 0.1229 | 22.7123 | 5.2264 |
| 0.1365 | 5.8510 | 12000 | 0.1174 | 22.7078 | 5.2730 |
| 0.1255 | 6.3385 | 13000 | 0.1212 | 22.5228 | 5.2004 |
| 0.1255 | 6.8261 | 14000 | 0.1217 | 22.3862 | 5.0868 |
| 0.1137 | 7.3136 | 15000 | 0.1140 | 21.8355 | 5.0237 |
| 0.1174 | 7.8013 | 16000 | 0.1083 | 21.5711 | 4.9069 |
| 0.1056 | 8.2887 | 17000 | 0.1148 | 21.5227 | 4.8761 |
| 0.1011 | 8.7764 | 18000 | 0.1157 | 21.5932 | 4.8998 |
| 0.0895 | 9.2638 | 19000 | 0.1078 | 21.2054 | 4.7988 |
| 0.0989 | 9.7515 | 20000 | 0.1075 | 21.1085 | 4.7593 |
| 0.0864 | 10.2390 | 21000 | 0.1040 | 20.7781 | 4.6339 |
| 0.083 | 10.7267 | 22000 | 0.1050 | 20.9455 | 4.6986 |
| 0.0751 | 11.2141 | 23000 | 0.1112 | 20.7296 | 4.6489 |
| 0.0725 | 11.7018 | 24000 | 0.1066 | 20.6327 | 4.5463 |
| 0.0774 | 12.1892 | 25000 | 0.1054 | 20.5622 | 4.5952 |
| 0.069 | 12.6769 | 26000 | 0.1076 | 20.5578 | 4.5503 |
| 0.0706 | 13.1644 | 27000 | 0.1087 | 20.3287 | 4.5100 |
| 0.0635 | 13.6520 | 28000 | 0.1156 | 20.4080 | 4.5471 |
| 0.0652 | 14.1395 | 29000 | 0.1022 | 20.2053 | 4.4887 |
| 0.067 | 14.6272 | 30000 | 0.1015 | 20.0864 | 4.4382 |
| 0.0585 | 15.1146 | 31000 | 0.1035 | 19.9850 | 4.3893 |
| 0.0499 | 15.6023 | 32000 | 0.1044 | 20.1040 | 4.4193 |
| 0.0634 | 16.0897 | 33000 | 0.1069 | 19.9938 | 4.3980 |
| 0.0577 | 16.5774 | 34000 | 0.1034 | 19.7559 | 4.3475 |
| 0.0506 | 17.0649 | 35000 | 0.1008 | 19.9366 | 4.3349 |
| 0.0444 | 17.5525 | 36000 | 0.1034 | 19.6854 | 4.2828 |
| 0.0511 | 18.0400 | 37000 | 0.0985 | 19.6854 | 4.2615 |
| 0.0449 | 18.5277 | 38000 | 0.1018 | 19.2669 | 4.2149 |
| 0.0445 | 19.0151 | 39000 | 0.1028 | 19.4651 | 4.2007 |
| 0.0407 | 19.5028 | 40000 | 0.1071 | 19.4387 | 4.2268 |
| 0.0365 | 19.9905 | 41000 | 0.1060 | 19.3418 | 4.2134 |
| 0.0397 | 20.4779 | 42000 | 0.1088 | 19.2889 | 4.1597 |
| 0.0305 | 20.9656 | 43000 | 0.1031 | 19.3197 | 4.1723 |
| 0.0321 | 21.4531 | 44000 | 0.1048 | 19.3153 | 4.1652 |
| 0.0462 | 21.9407 | 45000 | 0.1029 | 19.2448 | 4.1297 |
| 0.0404 | 22.4282 | 46000 | 0.1021 | 19.1303 | 4.1258 |
| 0.0402 | 22.9159 | 47000 | 0.1030 | 19.0862 | 4.1124 |
| 0.0417 | 23.4033 | 48000 | 0.1036 | 19.0378 | 4.1005 |
| 0.0315 | 23.8910 | 49000 | 0.1039 | 18.9717 | 4.0737 |
| 0.0381 | 24.3784 | 50000 | 0.1031 | 18.9673 | 4.0840 |
| 0.0368 | 24.8661 | 51000 | 0.1045 | 18.9012 | 4.0571 |
| 0.0328 | 25.3536 | 52000 | 0.1032 | 18.9144 | 4.0666 |
| 0.0328 | 25.8413 | 53000 | 0.1029 | 18.8615 | 4.0776 |
| 0.0344 | 26.3287 | 54000 | 0.1016 | 18.9100 | 4.0792 |
| 0.0328 | 26.8164 | 55000 | 0.1018 | 18.8968 | 4.0682 |
| 0.0365 | 27.3038 | 56000 | 0.1009 | 18.8439 | 4.0540 |
| 0.0399 | 27.7915 | 57000 | 0.1016 | 18.8131 | 4.0390 |
| 0.0398 | 28.2790 | 58000 | 0.1016 | 18.8042 | 4.0335 |
| 0.0298 | 28.7666 | 59000 | 0.1014 | 18.8042 | 4.0279 |
| 0.0364 | 29.2541 | 60000 | 0.1013 | 18.8263 | 4.0366 |
| 0.0355 | 29.7418 | 61000 | 0.1013 | 18.8175 | 4.0350 |
### Framework versions
- Transformers 4.56.1
- Pytorch 2.8.0+cu126
- Datasets 4.0.0
- Tokenizers 0.22.0
|
Anhlq/gemma-3-finetune-4bit-copy
|
Anhlq
| 2025-09-12T05:42:05Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gemma3_text",
"en",
"base_model:unsloth/gemma-3-270m-it",
"base_model:finetune:unsloth/gemma-3-270m-it",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-09-12T05:41:46Z |
---
base_model: unsloth/gemma-3-270m-it
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3_text
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** Anhlq
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-3-270m-it
This gemma3_text model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Anhlq/gemma-3-finetune-16bit-copy
|
Anhlq
| 2025-09-12T05:41:43Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma3_text",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:unsloth/gemma-3-270m-it",
"base_model:finetune:unsloth/gemma-3-270m-it",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-12T05:41:15Z |
---
base_model: unsloth/gemma-3-270m-it
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3_text
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** Anhlq
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-3-270m-it
This gemma3_text model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
RMCian/Qwen3-0.6B-Gensyn-Swarm-fast_rabid_ram
|
RMCian
| 2025-09-12T05:39:41Z | 38 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am fast_rabid_ram",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-30T14:32:41Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am fast_rabid_ram
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
muscularstingingpenguin/blockassist
|
muscularstingingpenguin
| 2025-09-12T05:36:24Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"gregarious tropical kiwi",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-12T05:36:04Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- gregarious tropical kiwi
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
stonermay/blockassist-bc-diving_lightfooted_caterpillar_1757655275
|
stonermay
| 2025-09-12T05:35:42Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"diving lightfooted caterpillar",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-12T05:35:35Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- diving lightfooted caterpillar
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
NCSOFT/VARCO-VISION-2.0-1.7B-OCR
|
NCSOFT
| 2025-09-12T05:34:48Z | 6,273 | 19 |
transformers
|
[
"transformers",
"safetensors",
"llava_onevision",
"image-to-text",
"multimodal",
"OCR",
"ncsoft",
"ncai",
"varco",
"image-text-to-text",
"conversational",
"en",
"ko",
"arxiv:2408.03326",
"base_model:Qwen/Qwen3-1.7B",
"base_model:finetune:Qwen/Qwen3-1.7B",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2025-07-08T06:27:27Z |
---
license: cc-by-nc-4.0
base_model:
- Qwen/Qwen3-1.7B
- google/siglip2-so400m-patch16-384
library_name: transformers
tags:
- multimodal
- OCR
- ncsoft
- ncai
- varco
pipeline_tag: image-text-to-text
language:
- en
- ko
---
# VARCO-VISION-2.0-1.7B-OCR
<div align="center">
<img src="./varco-vision.png" width="100%" style="background-color:white; padding:10px;" />
</div>
## Introduction
**VARCO-VISION-2.0-1.7B-OCR** is a lightweight yet powerful OCR-specialized model derived from VARCO-VISION-2.0-1.7B, designed to deliver efficient and accurate text recognition in real-world scenarios. Unlike conventional vision-language models (VLMs) that primarily focus on transcribing visible text, this model performs both recognition and spatial localization by detecting bounding boxes around each character, enabling structured, layout-aware OCR outputs.
The model supports both Korean and English, making it well-suited for multilingual environments where mixed-script documents are common. Each recognized character is paired with its precise position in the image, formatted as `<char>{characters}</char><bbox>{x1}, {y1}, {x2}, {y2}</bbox>`, where the coordinates correspond to the top-left (`x1`, `y1`) and bottom-right (`x2`, `y2`) corners of the character's bounding box.
While VARCO-VISION-2.0-14B demonstrates strong OCR capabilities as part of its broader multimodal reasoning skills, deploying such a large model for single-task use cases can be computationally inefficient. VARCO-VISION-2.0-1.7B-OCR addresses this with a task-optimized design that retains high accuracy while significantly reducing resource requirements, making it ideal for real-time or resource-constrained applications.

## 🚨News🎙️
- 🛠️ 2025-08-22: We updated the checkpoint of VARCO-VISION-2.0-1.7B for improved performance.
- 📰 2025-07-28: We released VARCO-VISION-2.0-1.7B-OCR at [link](https://huggingface.co/NCSOFT/VARCO-VISION-2.0-1.7B-OCR)
- 📰 2025-07-28: We released VARCO-VISION-2.0-1.7B at [link](https://huggingface.co/NCSOFT/VARCO-VISION-2.0-1.7B)
- 🛠️ 2025-07-18: We updated the checkpoint of VARCO-VISION-2.0-14B for improved performance.
- 📰 2025-07-16: We released VARCO-VISION-2.0-14B at [link](https://huggingface.co/NCSOFT/VARCO-VISION-2.0-14B)
- 📰 2025-07-16: We released GME-VARCO-VISION-Embedding at [link](https://huggingface.co/NCSOFT/GME-VARCO-VISION-Embedding)
## VARCO-VISION-2.0 Family
| Model Name | Base Models (Vision / Language) | HF Link |
| :------------------------: | :-------------------------------------------------------------------------------------------------------------------------------------------: | :--------------------------------------------------------------: |
| VARCO-VISION-2.0-14B | [siglip2-so400m-patch16-384](https://huggingface.co/google/siglip2-so400m-patch16-384) / [Qwen3-14B ](https://huggingface.co/Qwen/Qwen3-14B) | [link](https://huggingface.co/NCSOFT/VARCO-VISION-2.0-14B) |
| VARCO-VISION-2.0-1.7B | [siglip2-so400m-patch16-384](https://huggingface.co/google/siglip2-so400m-patch16-384) / [Qwen3-1.7B](https://huggingface.co/Qwen/Qwen3-1.7B) | [link](https://huggingface.co/NCSOFT/VARCO-VISION-2.0-1.7B) |
| VARCO-VISION-2.0-1.7B-OCR | [siglip2-so400m-patch16-384](https://huggingface.co/google/siglip2-so400m-patch16-384) / [Qwen3-1.7B](https://huggingface.co/Qwen/Qwen3-1.7B) | [link](https://huggingface.co/NCSOFT/VARCO-VISION-2.0-1.7B-OCR) |
| GME-VARCO-VISION-Embedding | [Qwen2-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2-VL-7B-Instruct) | [link](https://huggingface.co/NCSOFT/GME-VARCO-VISION-Embedding) |
## Model Architecture
VARCO-VISION-2.0 follows the architecture of [LLaVA-OneVision](https://arxiv.org/abs/2408.03326).
## Evaluation
### OCR Benchmark
| Benchmark | CLOVA OCR | PaddleOCR | EasyOCR | VARCO-VISION-2.0-1.7B-OCR |
| :-------: | :--------:| :-------: | :-----: | :-----------------------: |
| CORD | *93.9* | 91.4 | 77.8 | **95.6** |
| ICDAR2013 | *94.4* | 92.0 | 85.0 | **95.5** |
| ICDAR2015 | **84.1** | 73.7 | 57.9 | *75.4* |
## Usage
To use this model, we recommend installing `transformers` version **4.53.1 or higher**.
Additionally, for best results, we **recommend upscaling input images to a minimum resolution of *2,304*** on the longer side if they are smaller.
```python
import torch
from PIL import Image
from transformers import AutoProcessor, LlavaOnevisionForConditionalGeneration
model_name = "NCSOFT/VARCO-VISION-2.0-1.7B-OCR"
model = LlavaOnevisionForConditionalGeneration.from_pretrained(
model_name,
torch_dtype=torch.float16,
attn_implementation="sdpa",
device_map="auto",
)
processor = AutoProcessor.from_pretrained(model_name)
image = Image.open("file:///path/to/image.jpg")
# Image upscaling for OCR performance boost
w, h = image.size
target_size = 2304
if max(w, h) < target_size:
scaling_factor = target_size / max(w, h)
new_w = int(w * scaling_factor)
new_h = int(h * scaling_factor)
image = image.resize((new_w, new_h))
conversation = [
{
"role": "user",
"content": [
{"type": "image", "image": image},
{"type": "text", "text": "<ocr>"},
],
},
]
inputs = processor.apply_chat_template(
conversation,
add_generation_prompt=True,
tokenize=True,
return_dict=True,
return_tensors="pt"
).to(model.device, torch.float16)
generate_ids = model.generate(**inputs, max_new_tokens=1024)
generate_ids_trimmed = [
out_ids[len(in_ids):] for in_ids, out_ids in zip(inputs.input_ids, generate_ids)
]
output = processor.decode(generate_ids_trimmed[0], skip_special_tokens=False)
print(output)
```
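The raw output follows the `<char>…</char><bbox>…</bbox>` format described above, so it can be parsed into structured records with a small helper. The sketch below assumes the coordinates are normalized to the image width and height.
```python
import re

# Parse "<char>…</char><bbox>x1, y1, x2, y2</bbox>" pairs from the raw model output.
PATTERN = re.compile(r"<char>(.*?)</char><bbox>([\d.]+), ([\d.]+), ([\d.]+), ([\d.]+)</bbox>")

def parse_ocr(output: str, width: int, height: int):
    records = []
    for text, x1, y1, x2, y2 in PATTERN.findall(output):
        records.append({
            "text": text,
            # Assumed to be normalized coordinates; scale to pixels for drawing.
            "box": (float(x1) * width, float(y1) * height,
                    float(x2) * width, float(y2) * height),
        })
    return records

# Using `output` and `image` from the snippet above:
print(parse_ocr(output, *image.size))
```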
|
NCSOFT/VARCO-VISION-14B-HF
|
NCSOFT
| 2025-09-12T05:34:13Z | 1,103 | 29 |
transformers
|
[
"transformers",
"safetensors",
"llava_onevision",
"image-to-text",
"multimodal",
"conversational",
"ncsoft",
"varco",
"image-text-to-text",
"en",
"ko",
"arxiv:2411.19103",
"arxiv:2408.03326",
"base_model:Qwen/Qwen2.5-14B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-14B-Instruct",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2024-11-27T01:11:00Z |
---
language:
- en
- ko
license: cc-by-nc-4.0
tags:
- multimodal
- conversational
- ncsoft
- varco
base_model:
- Qwen/Qwen2.5-14B-Instruct
- google/siglip-so400m-patch14-384
library_name: transformers
pipeline_tag: image-text-to-text
---
# VARCO-VISION-14B-HF
## 🚨News🎙️
- The 2.0 model has been released. Please use the new version.
- 📰 2025-07-16: We released VARCO-VISION-2.0-14B at [link](https://huggingface.co/NCSOFT/VARCO-VISION-2.0-14B)
- 📰 2025-07-16: We released GME-VARCO-VISION-Embedding at [link](https://huggingface.co/NCSOFT/GME-VARCO-VISION-Embedding)
## About the VARCO-VISION-1.0-14B Model
**VARCO-VISION-14B** is a powerful English-Korean Vision-Language Model (VLM). The training pipeline of VARCO-VISION consists of four stages: Feature Alignment Pre-training, Basic Supervised Fine-tuning, Advanced Supervised Fine-tuning, and Preference Optimization. In both multimodal and text-only benchmarks, VARCO-VISION-14B not only surpasses other models of similar size in performance but also achieves scores comparable to those of proprietary models. The model currently accepts a single image and a text prompt as input and generates a text output. It supports grounding, referring, and OCR (Optical Character Recognition).
- **Developed by:** NC Research, Multimodal Generation Team
- **Technical Report:** [VARCO-VISION: Expanding Frontiers in Korean Vision-Language Models](https://arxiv.org/pdf/2411.19103)
- **Blog(Korean):** [VARCO-VISION Technical Report Summary](https://ncsoft.github.io/ncresearch/95ad8712e60063e9ac97538504ac3eea0ac530af)
- **Demo Page:** *The demo page is no longer available.*
- **Languages:** Korean, English
- **License:** CC BY-NC 4.0
- **Architecture:** VARCO-VISION-14B follows the architecture of [LLaVA-OneVision](https://arxiv.org/abs/2408.03326).
- **Base Model:**
- **Language Model:** [Qwen/Qwen2.5-14B-Instruct](https://huggingface.co/Qwen/Qwen2.5-14B-Instruct)
- **Vision Encoder:** [google/siglip-so400m-patch14-384](https://huggingface.co/google/siglip-so400m-patch14-384)
- **LLaVA-NeXT Codebase Model:** [NCSOFT/VARCO-VISION-14B](https://huggingface.co/NCSOFT/VARCO-VISION-14B)
- **Korean VLM Benchmarks:**
  - You can use the following benchmark datasets in the [LMMs-Eval toolkit](https://github.com/EvolvingLMMs-Lab/lmms-eval).
- [NCSOFT/K-MMBench](https://huggingface.co/datasets/NCSOFT/K-MMBench)
- [NCSOFT/K-SEED](https://huggingface.co/datasets/NCSOFT/K-SEED)
- [NCSOFT/K-MMStar](https://huggingface.co/datasets/NCSOFT/K-MMStar)
- [NCSOFT/K-DTCBench](https://huggingface.co/datasets/NCSOFT/K-DTCBench)
- [NCSOFT/K-LLaVA-W](https://huggingface.co/datasets/NCSOFT/K-LLaVA-W)
  - **You can also evaluate VARCO-VISION-14B in [VLMEvalKit](https://github.com/open-compass/VLMEvalKit)**.
- **This model is for research purposes only. Commercial use is prohibited.**
## Uses
### Direct Use
To use this model, ensure you have `transformers >= 4.45.0` installed.
```python
import torch
import requests
from PIL import Image
from transformers import LlavaOnevisionForConditionalGeneration, AutoProcessor
model_name = "NCSOFT/VARCO-VISION-14B-HF"
model = LlavaOnevisionForConditionalGeneration.from_pretrained(
model_name,
torch_dtype="float16",
device_map="auto",
attn_implementation="flash_attention_2"
)
processor = AutoProcessor.from_pretrained(model_name)
device = model.device
# Define a chat history and use `apply_chat_template` to get correctly formatted prompt
# Each value in "content" has to be a list of dicts with types ("text", "image")
conversation = [
{
"role": "user",
"content": [
{"type": "text", "text": "Describe this image."},
{"type": "image"},
],
},
]
prompt = processor.apply_chat_template(conversation, add_generation_prompt=True)
EOS_TOKEN = "<|im_end|>"
image_file = "http://images.cocodataset.org/val2017/000000039769.jpg"
raw_image = Image.open(requests.get(image_file, stream=True).raw)
inputs = processor(images=raw_image, text=prompt, return_tensors='pt').to(device, torch.float16)
output = model.generate(**inputs, max_new_tokens=1024, do_sample=False)
output = processor.decode(output[0][inputs.input_ids.shape[1]:])
if output.endswith(EOS_TOKEN):
output = output[: -len(EOS_TOKEN)]
output = output.strip()
print(output)
```
### Specialized Features
If a question is based on bounding boxes or requires bounding boxes as an output, please include the special tokens in the input text.
The following special tokens are used to define specific tasks, inputs, and outputs for the model:
- `<gro>`: Indicates that the model's response should include bounding box information.
- `<ocr>`: Specifies OCR tasks for recognizing text within an image.
- `<char>` and `</char>`: Used to mark a text phrase.
- `<obj>` and `</obj>`: Used to indicate an object.
- `<bbox>` and `</bbox>`: Used to represent a bounding box.
- `<delim>`: Represents multiple location points for a single object or text.
#### Grounding
Grounding refers to a task where the model needs to identify specific locations within an image to provide an appropriate answer. To perform grounding, prepend the special token `<gro>` to the question.
```python
conversation = [
{
"role": "user",
"content": [
{"type": "text", "text": "<gro>\nDescribe the image in detail."},
{"type": "image"},
],
},
]
```
**Expected Output Example:**
```html
The image shows <obj>two cats</obj><bbox>0.014, 0.106, 0.51, 0.996<delim>0.51, 0.054, 0.996, 0.787</bbox> lying on <obj>a pink blanket</obj><bbox>0.003, 0.231, 0.999, 0.999</bbox>. The cat on the left is lying on its side with its head resting on the blanket, while the cat on the right is lying on its stomach with its head also resting on the blanket. Both cats appear to be relaxed and comfortable. There are <obj>two remote controls</obj><bbox>0.037, 0.141, 0.283, 0.253<delim>0.506, 0.171, 0.581, 0.295</bbox> placed near the cats, one on the left side and one on the right side of the image.
```
<img src="assets/grounding.png" alt="Grounding Example" width="400"/>
#### Referring
VARCO-VISION-14B can handle location-specific questions using bounding boxes. To perform referring tasks, build a conversation that includes the object of interest within `<obj>` and `</obj>` tags and specify its location with `<bbox>` and `</bbox>` tags. This allows the model to understand the context and focus on the object at the specified location. A bbox is represented in the form (x1, y1, x2, y2): the first two values give the top-left corner and the latter two give the bottom-right corner.
```python
conversation = [
{
"role": "user",
"content": [
{
"type": "text",
"text": "<obj>이 물건</obj><bbox>0.039, 0.138, 0.283, 0.257</bbox>은 어떻게 쓰는거야?",
},
{"type": "image"},
],
},
]
```
**Expected Output Example:**
```
**이 물건**은 리모컨으로, 주로 텔레비전이나 다른 전자 기기를 원격으로 조작하는 데 사용됩니다. 리모컨에는 다양한 버튼이 있으며, 각 버튼은 채널 변경, 볼륨 조절, 전원 켜기/끄기 등의 기능을 수행합니다. 사용자는 리모컨을 손에 들고 버튼을 누르면, 해당 기기에 신호를 보내 원하는 조작을 할 수 있습니다. 리모컨은 일반적으로 가정이나 사무실에서 편리하게 전자 기기를 조작할 수 있도록 사용됩니다.
```
#### OCR
To perform Optical Character Recognition (OCR), use the `<ocr>` token.
```python
image_file = "./assets/ocr_1.png"
raw_image = Image.open(image_file)
conversation = [
{
"role": "user",
"content": [
{"type": "text", "text": "<ocr>"},
{"type": "image"},
],
},
]
```
**Expected Output Example:**
```
<char>백범로</char><bbox>0.172, 0.266, 0.328, 0.341</bbox>
<char>124번길</char><bbox>0.347, 0.266, 0.512, 0.341</bbox>
<char>Baekbeom-ro</char><bbox>0.171, 0.337, 0.433, 0.392</bbox>
<char>124</char><bbox>0.444, 0.341, 0.508, 0.392</bbox>
<char>만수주공아파트</char><bbox>0.109, 0.531, 0.335, 0.601</bbox>
<char>시흥</char><bbox>0.443, 0.518, 0.522, 0.581</bbox>
<char>시청</char><bbox>0.711, 0.521, 0.811, 0.594</bbox>
<char>Mansu</char><bbox>0.102, 0.601, 0.181, 0.648</bbox>
<char>Jugong</char><bbox>0.186, 0.601, 0.273, 0.658</bbox>
<char>Apt</char><bbox>0.28, 0.601, 0.327, 0.651</bbox>
<char>42</char><bbox>0.377, 0.601, 0.416, 0.648</bbox>
<char>Shieung</char><bbox>0.445, 0.578, 0.53, 0.625</bbox>
<char>인천대공원</char><bbox>0.43, 0.621, 0.609, 0.684</bbox>
<char>모래내시장역</char><bbox>0.651, 0.59, 0.873, 0.665</bbox>
<char>IncheonGrand</char><bbox>0.432, 0.681, 0.561, 0.723</bbox>
<char>Park</char><bbox>0.564, 0.681, 0.611, 0.723</bbox>
```
<img src="assets/ocr_2.jpg" alt="OCR Example" width="350"/>
## Citing the Model
If you use VARCO-VISION-14B in your research, please cite the following:
```bibtex
@misc{ju2024varcovisionexpandingfrontierskorean,
title={VARCO-VISION: Expanding Frontiers in Korean Vision-Language Models},
author={Jeongho Ju and Daeyoung Kim and SunYoung Park and Youngjune Kim},
year={2024},
eprint={2411.19103},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2411.19103},
}
```
|
jo-mengr/mmcontext-pubmedbert-scvi_fm-cxg_100k
|
jo-mengr
| 2025-09-12T05:30:18Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"sentence-similarity",
"feature-extraction",
"dense",
"generated_from_trainer",
"dataset_size:81143",
"loss:MultipleNegativesRankingLoss",
"code",
"dataset:jo-mengr/cellxgene_pseudo_bulk_100k_multiplets_natural_language_annotation",
"arxiv:1908.10084",
"arxiv:1705.00652",
"base_model:NeuML/pubmedbert-base-embeddings",
"base_model:finetune:NeuML/pubmedbert-base-embeddings",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-09-12T05:29:58Z |
---
language:
- code
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- dense
- generated_from_trainer
- dataset_size:81143
- loss:MultipleNegativesRankingLoss
base_model: NeuML/pubmedbert-base-embeddings
widget:
- source_sentence: sample_idx:census_421e5f54-5de7-425f-b399-34ead0651ce1_41
sentences:
- This measurement was conducted with 10x 3' v3. Neuron cell type from the cerebral
cortex (Frontal agranular insular cortex, or FI) of a 50-year-old male European
donor, analyzed at the nucleus level.
- sample_idx:census_421e5f54-5de7-425f-b399-34ead0651ce1_670
- This measurement was conducted with 10x 3' v3. Neuron cell type from a 42-year
old male cerebral cortex tissue, specifically the Frontal agranular insular cortex
(FI) region, identified as a CGE interneuron.
- source_sentence: sample_idx:census_d7d7e89c-c93a-422d-8958-9b4a90b69558_1563
sentences:
- This measurement was conducted with 10x 5' v1. Activated CD4-positive, alpha-beta
T cell from a 26-year-old male with Common variable immunodeficiency (CVID), undergoing
naïve-to-memory B cell differentiation.
- sample_idx:census_d7d7e89c-c93a-422d-8958-9b4a90b69558_4002
- This measurement was conducted with 10x 5' v1. Naive B cell from blood of a 26-year
old male, activated with CD3.
- source_sentence: sample_idx:census_e1f595f6-ba2c-495e-9bee-7056f116b1e4_1239
sentences:
- sample_idx:census_e1f595f6-ba2c-495e-9bee-7056f116b1e4_822
- This measurement was conducted with 10x 3' v3. Neuron cell type from a 29-year-old
male cerebral cortex, specifically from the Middle Temporal Gyrus (MTG), with
European ethnicity, using nucleus suspension type.
- This measurement was conducted with 10x 3' v3. Neuron cell type from a 50-year-old
male cerebral cortex, specifically the Middle Temporal Gyrus (MTG), with European
self-reported ethnicity, analyzed at the nucleus level.
- source_sentence: sample_idx:census_1cf24082-59de-4029-ac81-6e398768af3a_304
sentences:
- This measurement was conducted with 10x 3' v3. Nucleus sample from a 50-year-old
male neuron, specifically an MGE interneuron, located in the Inferior temporal
gyrus (ITG) region of the cerebral cortex, with European ethnicity.
- This measurement was conducted with 10x 3' v3. Nucleus suspension of neurons from
the inferior temporal gyrus region of the cerebral cortex, taken from a 29-year-old
male of European descent.
- sample_idx:census_1cf24082-59de-4029-ac81-6e398768af3a_426
- source_sentence: sample_idx:census_18500fcd-9960-49cb-8a8e-7d868dc14efe_602
sentences:
- sample_idx:census_18500fcd-9960-49cb-8a8e-7d868dc14efe_247
- This measurement was conducted with 10x 3' v3. Neuron cell type from the cerebral
nuclei, specifically from the external segment of globus pallidus (GPe) in a 42-year-old
male.
- This measurement was conducted with 10x 3' v3. Oligodendrocyte cells from the
external segment of globus pallidus (GPe) in a 29-year-old male cerebral nuclei
sample.
datasets:
- jo-mengr/cellxgene_pseudo_bulk_100k_multiplets_natural_language_annotation
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy
model-index:
- name: SentenceTransformer based on NeuML/pubmedbert-base-embeddings
results:
- task:
type: triplet
name: Triplet
dataset:
name: cellxgene pseudo bulk 100k multiplets natural language annotation cell
sentence 1 caption
type: cellxgene_pseudo_bulk_100k_multiplets_natural_language_annotation_cell_sentence_1_caption
metrics:
- type: cosine_accuracy
value: 0.7883697748184204
name: Cosine Accuracy
---
# SentenceTransformer based on NeuML/pubmedbert-base-embeddings
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [NeuML/pubmedbert-base-embeddings](https://huggingface.co/NeuML/pubmedbert-base-embeddings) on the [cellxgene_pseudo_bulk_100k_multiplets_natural_language_annotation_cell_sentence_1_caption](https://huggingface.co/datasets/jo-mengr/cellxgene_pseudo_bulk_100k_multiplets_natural_language_annotation) dataset. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [NeuML/pubmedbert-base-embeddings](https://huggingface.co/NeuML/pubmedbert-base-embeddings) <!-- at revision d6eaca8254bc229f3ca42749a5510ae287eb3486 -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 1024 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- [cellxgene_pseudo_bulk_100k_multiplets_natural_language_annotation_cell_sentence_1_caption](https://huggingface.co/datasets/jo-mengr/cellxgene_pseudo_bulk_100k_multiplets_natural_language_annotation)
- **Language:** code
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): MMContextEncoder(
(text_encoder): BertModel(
(embeddings): BertEmbeddings(
(word_embeddings): Embedding(30522, 768, padding_idx=0)
(position_embeddings): Embedding(512, 768)
(token_type_embeddings): Embedding(2, 768)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(encoder): BertEncoder(
(layer): ModuleList(
(0-11): 12 x BertLayer(
(attention): BertAttention(
(self): BertSdpaSelfAttention(
(query): Linear(in_features=768, out_features=768, bias=True)
(key): Linear(in_features=768, out_features=768, bias=True)
(value): Linear(in_features=768, out_features=768, bias=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(output): BertSelfOutput(
(dense): Linear(in_features=768, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(intermediate): BertIntermediate(
(dense): Linear(in_features=768, out_features=3072, bias=True)
(intermediate_act_fn): GELUActivation()
)
(output): BertOutput(
(dense): Linear(in_features=3072, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
)
)
(pooler): BertPooler(
(dense): Linear(in_features=768, out_features=768, bias=True)
(activation): Tanh()
)
)
(text_adapter): AdapterModule(
(net): Sequential(
(0): Linear(in_features=768, out_features=1024, bias=True)
(1): BatchNorm1d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(pooling): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(omics_adapter): AdapterModule(
(net): Sequential(
(0): Linear(in_features=50, out_features=1024, bias=True)
(1): BatchNorm1d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(omics_encoder): MiniOmicsModel(
(embeddings): Embedding(90155, 50, padding_idx=0)
)
)
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("jo-mengr/mmcontext-pubmedbert-scvi_fm-cxg_100k")
# Run inference
sentences = [
'sample_idx:census_18500fcd-9960-49cb-8a8e-7d868dc14efe_602',
"This measurement was conducted with 10x 3' v3. Oligodendrocyte cells from the external segment of globus pallidus (GPe) in a 29-year-old male cerebral nuclei sample.",
"This measurement was conducted with 10x 3' v3. Neuron cell type from the cerebral nuclei, specifically from the external segment of globus pallidus (GPe) in a 42-year-old male.",
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities)
# tensor([[1.0000, 0.4753, 0.1343],
# [0.4753, 1.0000, 0.7385],
# [0.1343, 0.7385, 1.0000]])
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Triplet
* Dataset: `cellxgene_pseudo_bulk_100k_multiplets_natural_language_annotation_cell_sentence_1_caption`
* Evaluated with [<code>TripletEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.TripletEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| **cosine_accuracy** | **0.7884** |
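The accuracy above comes from the dataset's evaluation split; the same evaluator can also be run on custom triplets. A hedged sketch with examples taken from the usage snippet above:
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import TripletEvaluator

model = SentenceTransformer("jo-mengr/mmcontext-pubmedbert-scvi_fm-cxg_100k")

# Toy triplet; the reported 0.7884 accuracy was measured on the dataset's own
# evaluation split, not on this example.
evaluator = TripletEvaluator(
    anchors=["sample_idx:census_18500fcd-9960-49cb-8a8e-7d868dc14efe_602"],
    positives=["This measurement was conducted with 10x 3' v3. Oligodendrocyte cells from the external segment of globus pallidus (GPe) in a 29-year-old male cerebral nuclei sample."],
    negatives=["This measurement was conducted with 10x 3' v3. Neuron cell type from the cerebral nuclei, specifically from the external segment of globus pallidus (GPe) in a 42-year-old male."],
    name="toy_triplets",
)
print(evaluator(model))
```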
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### cellxgene_pseudo_bulk_100k_multiplets_natural_language_annotation_cell_sentence_1_caption
* Dataset: [cellxgene_pseudo_bulk_100k_multiplets_natural_language_annotation_cell_sentence_1_caption](https://huggingface.co/datasets/jo-mengr/cellxgene_pseudo_bulk_100k_multiplets_natural_language_annotation) at [8b940b4](https://huggingface.co/datasets/jo-mengr/cellxgene_pseudo_bulk_100k_multiplets_natural_language_annotation/tree/8b940b48a15534edc2689cf70afe98d82375bb59)
* Size: 81,143 training samples
* Columns: <code>anchor</code>, <code>positive</code>, <code>negative_1</code>, and <code>negative_2</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative_1 | negative_2 |
|:--------|:-----------------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------------------|
| type | string | string | string | string |
| details | <ul><li>min: 56 characters</li><li>mean: 58.72 characters</li><li>max: 60 characters</li></ul> | <ul><li>min: 22 tokens</li><li>mean: 48.4 tokens</li><li>max: 159 tokens</li></ul> | <ul><li>min: 24 tokens</li><li>mean: 45.96 tokens</li><li>max: 155 tokens</li></ul> | <ul><li>min: 56 characters</li><li>mean: 58.74 characters</li><li>max: 60 characters</li></ul> |
* Samples:
| anchor | positive | negative_1 | negative_2 |
|:--------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------|
| <code>sample_idx:census_218acb0f-9f2f-4f76-b90b-15a4b7c7f629_26009</code> | <code>This measurement was conducted with 10x 3' v2. A proliferating lymphocyte cell sample, obtained from a 34-year-old female Asian individual, derived from peripheral blood mononuclear cells.</code> | <code>This measurement was conducted with 10x 3' v2. CD8-positive, alpha-beta T cell derived from a 51-year old European female with managed systemic lupus erythematosus (SLE), obtained from blood tissue and enriched as a peripheral blood mononuclear cell.</code> | <code>sample_idx:census_218acb0f-9f2f-4f76-b90b-15a4b7c7f629_38905</code> |
| <code>sample_idx:census_1b9d8702-5af8-4142-85ed-020eb06ec4f6_6333</code> | <code>This measurement was conducted with 10x 5' v1. Sample is a cell from the omentum tissue, specifically an effector memory CD4-positive, alpha-beta T cell, from a female in her sixth decade.</code> | <code>This measurement was conducted with 10x 3' v3. A cell sample from the spleen, belonging to the naive thymus-derived CD4-positive, alpha-beta T cell category, specifically Tnaive/CM_CD4, and identified as Tcm/Naive helper T cells within the T cells group.</code> | <code>sample_idx:census_1b9d8702-5af8-4142-85ed-020eb06ec4f6_4412</code> |
| <code>sample_idx:census_adda0684-f8ea-4403-b393-2a25607077c4_271</code> | <code>This measurement was conducted with 10x 3' v3. Neuron cell type from a 29-year-old male, specifically from the thalamic complex, specifically the thalamus (THM) - posterior nuclear complex of thalamus (PoN) - medial geniculate nuclei (MG).</code> | <code>This measurement was conducted with 10x 3' v3. Fibroblast cells from the thalamic complex, specifically from the thalamus (THM) - posterior nuclear complex of thalamus (PoN) - medial geniculate nuclei (MG) region, of a 42-year-old male.</code> | <code>sample_idx:census_adda0684-f8ea-4403-b393-2a25607077c4_585</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
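The same loss can be constructed directly in code; a minimal sketch using the parameters above (cosine similarity as the scoring function and a scale of 20.0):

```python
from sentence_transformers import SentenceTransformer, losses, util

model = SentenceTransformer("jo-mengr/mmcontext-pubmedbert-scvi_fm-cxg_100k")

# MultipleNegativesRankingLoss with the parameters listed above.
loss = losses.MultipleNegativesRankingLoss(model, scale=20.0, similarity_fct=util.cos_sim)
```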
### Evaluation Dataset
#### cellxgene_pseudo_bulk_100k_multiplets_natural_language_annotation_cell_sentence_1_caption
* Dataset: [cellxgene_pseudo_bulk_100k_multiplets_natural_language_annotation_cell_sentence_1_caption](https://huggingface.co/datasets/jo-mengr/cellxgene_pseudo_bulk_100k_multiplets_natural_language_annotation) at [8b940b4](https://huggingface.co/datasets/jo-mengr/cellxgene_pseudo_bulk_100k_multiplets_natural_language_annotation/tree/8b940b48a15534edc2689cf70afe98d82375bb59)
* Size: 9,011 evaluation samples
* Columns: <code>anchor</code>, <code>positive</code>, <code>negative_1</code>, and <code>negative_2</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative_1 | negative_2 |
|:--------|:-----------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------------------|
| type | string | string | string | string |
| details | <ul><li>min: 56 characters</li><li>mean: 58.73 characters</li><li>max: 60 characters</li></ul> | <ul><li>min: 23 tokens</li><li>mean: 47.49 tokens</li><li>max: 157 tokens</li></ul> | <ul><li>min: 21 tokens</li><li>mean: 48.98 tokens</li><li>max: 157 tokens</li></ul> | <ul><li>min: 56 characters</li><li>mean: 58.74 characters</li><li>max: 60 characters</li></ul> |
* Samples:
| anchor | positive | negative_1 | negative_2 |
|:--------------------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------|
| <code>sample_idx:census_0b4a15a7-4e9e-4555-9733-2423e5c66469_490</code> | <code>This measurement was conducted with 10x 3' v3. Cell sample from the cortex of kidney, taken from a 43-year-old male of European ethnicity with a reported history of kidney cancer. The cell type is identified as a kidney collecting duct intercalated cell.</code> | <code>This measurement was conducted with 10x 3' v3. Epithelial cells derived from the cortex of a kidney of a 50-year old female European individual, preserved by cryopreservation.</code> | <code>sample_idx:census_0b4a15a7-4e9e-4555-9733-2423e5c66469_280</code> |
| <code>sample_idx:census_4976b234-9028-4b4b-8a2f-8ac59d636219_269</code> | <code>This measurement was conducted with 10x 3' v3. Neuron cell type from a 29-year-old male cerebellum, specifically from the Cerebellar Vermis - CBV region, with European self-reported ethnicity, analyzed at the nucleus level.</code> | <code>This measurement was conducted with 10x 3' v3. Fibroblast cells derived from the cerebellum tissue of a 50-year-old male, specifically from the Cerebellum (CB) - Cerebellar Vermis - CBV dissection.</code> | <code>sample_idx:census_4976b234-9028-4b4b-8a2f-8ac59d636219_826</code> |
| <code>sample_idx:census_44882825-0da1-4547-b721-2c6105d4a9d1_10258</code> | <code>This measurement was conducted with 10x 5' v1. Cell sample from the tonsil of a 9-year-old female with recurrent tonsillitis, characterized as a centroblast B cell with IGLC2, IGLV7-43, IGLJ3 immunoglobulin genes expressed.</code> | <code>This measurement was conducted with 10x 5' v1. This sample represents a tonsil germinal center B cell from a three-year-old human male with obstructive sleep apnea and recurrent tonsillitis. The study provides a comprehensive roadmap of human B cell maturation, including gene expression, antibody repertoires, and clonal sharing of B cell states at single-cell resolution, as well as memory B cell heterogeneity reflecting diverse functional and signaling states.</code> | <code>sample_idx:census_44882825-0da1-4547-b721-2c6105d4a9d1_243</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 128
- `per_device_eval_batch_size`: 128
- `learning_rate`: 2e-05
- `num_train_epochs`: 4
- `warmup_ratio`: 0.1
- `bf16`: True
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 128
- `per_device_eval_batch_size`: 128
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 4
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `hub_revision`: None
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `liger_kernel_config`: None
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
- `router_mapping`: {}
- `learning_rate_mapping`: {}
</details>
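The non-default values above map directly onto the Sentence Transformers training arguments; a minimal sketch (the output directory is a hypothetical placeholder):

```python
from sentence_transformers import SentenceTransformerTrainingArguments

# Mirrors the non-default hyperparameters listed above; output_dir is a placeholder.
args = SentenceTransformerTrainingArguments(
    output_dir="mmcontext-pubmedbert-scvi_fm-cxg_100k",
    eval_strategy="steps",
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    learning_rate=2e-5,
    num_train_epochs=4,
    warmup_ratio=0.1,
    bf16=True,
)
```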
### Training Logs
| Epoch | Step | Training Loss | cellxgene pseudo bulk 100k multiplets natural language annotation cell sentence 1 caption loss | cellxgene_pseudo_bulk_100k_multiplets_natural_language_annotation_cell_sentence_1_caption_cosine_accuracy |
|:------:|:----:|:-------------:|:----------------------------------------------------------------------------------------------:|:---------------------------------------------------------------------------------------------------------:|
| 0.1577 | 100 | 14.8711 | 14.6553 | 0.5955 |
| 0.3155 | 200 | 13.5634 | 12.5052 | 0.6968 |
| 0.4732 | 300 | 11.0631 | 9.8851 | 0.7264 |
| 0.6309 | 400 | 9.0001 | 8.3624 | 0.7431 |
| 0.7886 | 500 | 7.8481 | 7.5177 | 0.7510 |
| 0.9464 | 600 | 7.0916 | 6.9894 | 0.7555 |
| 1.1041 | 700 | 6.7088 | 6.6373 | 0.7622 |
| 1.2618 | 800 | 6.2834 | 6.3739 | 0.7664 |
| 1.4196 | 900 | 6.0694 | 6.1536 | 0.7701 |
| 1.5773 | 1000 | 5.8974 | 5.9650 | 0.7736 |
| 1.7350 | 1100 | 5.6823 | 5.8622 | 0.7774 |
| 1.8927 | 1200 | 5.6118 | 5.7346 | 0.7778 |
| 2.0505 | 1300 | 5.4439 | 5.6417 | 0.7805 |
| 2.2082 | 1400 | 5.3634 | 5.5389 | 0.7833 |
| 2.3659 | 1500 | 5.298 | 5.4966 | 0.7838 |
| 2.5237 | 1600 | 5.2391 | 5.4338 | 0.7840 |
| 2.6814 | 1700 | 5.1817 | 5.3612 | 0.7852 |
| 2.8391 | 1800 | 5.1506 | 5.3411 | 0.7844 |
| 2.9968 | 1900 | 5.116 | 5.2958 | 0.7870 |
| 3.1546 | 2000 | 5.0382 | 5.2824 | 0.7879 |
| 3.3123 | 2100 | 5.0976 | 5.2416 | 0.7871 |
| 3.4700 | 2200 | 5.012 | 5.2307 | 0.7871 |
| 3.6278 | 2300 | 5.0273 | 5.2196 | 0.7900 |
| 3.7855 | 2400 | 5.0156 | 5.2232 | 0.7901 |
| 3.9432 | 2500 | 4.9684 | 5.2027 | 0.7884 |
### Framework Versions
- Python: 3.11.6
- Sentence Transformers: 5.0.0
- Transformers: 4.55.0.dev0
- PyTorch: 2.5.1+cu121
- Accelerate: 1.9.0
- Datasets: 2.19.1
- Tokenizers: 0.21.4
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
rsadaphule/llava-1.5-7b-package-damage-lora-ADAPTER
|
rsadaphule
| 2025-09-12T05:30:09Z | 0 | 0 |
transformers
|
[
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-12T05:30:08Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
omerbektasss/blockassist-bc-keen_fast_giraffe_1757654928
|
omerbektasss
| 2025-09-12T05:29:42Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"keen fast giraffe",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-12T05:29:05Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- keen fast giraffe
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mradermacher/jina-reranker-v1-tiny-en-GGUF
|
mradermacher
| 2025-09-12T05:27:50Z | 7,599 | 0 |
transformers
|
[
"transformers",
"gguf",
"reranker",
"cross-encoder",
"transformers.js",
"sentence-transformers",
"en",
"base_model:jinaai/jina-reranker-v1-tiny-en",
"base_model:quantized:jinaai/jina-reranker-v1-tiny-en",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"feature-extraction"
] | null | 2025-01-07T05:57:29Z |
---
base_model: jinaai/jina-reranker-v1-tiny-en
language:
- en
library_name: transformers
license: apache-2.0
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- reranker
- cross-encoder
- transformers.js
- sentence-transformers
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/jinaai/jina-reranker-v1-tiny-en
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#jina-reranker-v1-tiny-en-GGUF).***
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
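A single quant from the table below can also be fetched programmatically; a minimal sketch using `huggingface_hub` (the chosen filename is just one of the quants listed below):

```python
from huggingface_hub import hf_hub_download

# Download one of the static quants listed in the table below.
gguf_path = hf_hub_download(
    repo_id="mradermacher/jina-reranker-v1-tiny-en-GGUF",
    filename="jina-reranker-v1-tiny-en.Q8_0.gguf",
)
print(gguf_path)  # local path to the GGUF file, ready for a llama.cpp-based runtime
```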
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/jina-reranker-v1-tiny-en-GGUF/resolve/main/jina-reranker-v1-tiny-en.Q2_K.gguf) | Q2_K | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/jina-reranker-v1-tiny-en-GGUF/resolve/main/jina-reranker-v1-tiny-en.Q3_K_S.gguf) | Q3_K_S | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/jina-reranker-v1-tiny-en-GGUF/resolve/main/jina-reranker-v1-tiny-en.IQ4_XS.gguf) | IQ4_XS | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/jina-reranker-v1-tiny-en-GGUF/resolve/main/jina-reranker-v1-tiny-en.Q3_K_M.gguf) | Q3_K_M | 0.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/jina-reranker-v1-tiny-en-GGUF/resolve/main/jina-reranker-v1-tiny-en.Q3_K_L.gguf) | Q3_K_L | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/jina-reranker-v1-tiny-en-GGUF/resolve/main/jina-reranker-v1-tiny-en.Q4_K_S.gguf) | Q4_K_S | 0.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/jina-reranker-v1-tiny-en-GGUF/resolve/main/jina-reranker-v1-tiny-en.Q4_K_M.gguf) | Q4_K_M | 0.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/jina-reranker-v1-tiny-en-GGUF/resolve/main/jina-reranker-v1-tiny-en.Q5_K_S.gguf) | Q5_K_S | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/jina-reranker-v1-tiny-en-GGUF/resolve/main/jina-reranker-v1-tiny-en.Q5_K_M.gguf) | Q5_K_M | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/jina-reranker-v1-tiny-en-GGUF/resolve/main/jina-reranker-v1-tiny-en.Q6_K.gguf) | Q6_K | 0.1 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/jina-reranker-v1-tiny-en-GGUF/resolve/main/jina-reranker-v1-tiny-en.Q8_0.gguf) | Q8_0 | 0.1 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/jina-reranker-v1-tiny-en-GGUF/resolve/main/jina-reranker-v1-tiny-en.f16.gguf) | f16 | 0.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
stonermay/blockassist-bc-diving_lightfooted_caterpillar_1757654658
|
stonermay
| 2025-09-12T05:25:51Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"diving lightfooted caterpillar",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-12T05:25:30Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- diving lightfooted caterpillar
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Asap7772/qwen3-4b-arc-second-stage-awr-sftinit-lr1e-5-0908
|
Asap7772
| 2025-09-12T05:24:31Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-12T04:46:58Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
birder-project/cswin_transformer_s_eu-common
|
birder-project
| 2025-09-12T05:19:05Z | 10 | 0 |
birder
|
[
"birder",
"image-classification",
"pytorch",
"arxiv:2107.00652",
"license:apache-2.0",
"region:us"
] |
image-classification
| 2025-09-07T16:48:18Z |
---
tags:
- image-classification
- birder
- pytorch
library_name: birder
license: apache-2.0
---
# Model Card for cswin_transformer_s_eu-common
A CSWin Transformer small image classification model. This model was trained on the `eu-common` dataset containing common European bird species.
The species list is derived from the Collins bird guide [^1].
[^1]: Svensson, L., Mullarney, K., & Zetterström, D. (2022). Collins bird guide (3rd ed.). London, England: William Collins.
Note: A 256 x 256 variant of this model is available as `cswin_transformer_s_eu-common256px`.
## Model Details
- **Model Type:** Image classification and detection backbone
- **Model Stats:**
- Params (M): 34.5
- Input image size: 384 x 384
- **Dataset:** eu-common (707 classes)
- **Papers:**
- CSWin Transformer: A General Vision Transformer Backbone with Cross-Shaped Windows: <https://arxiv.org/abs/2107.00652>
## Model Usage
### Image Classification
```python
import birder
from birder.inference.classification import infer_image
(net, model_info) = birder.load_pretrained_model("cswin_transformer_s_eu-common", inference=True)
# Note: A 256x256 variant is available as "cswin_transformer_s_eu-common256px"
# Get the image size the model was trained on
size = birder.get_size_from_signature(model_info.signature)
# Create an inference transform
transform = birder.classification_transform(size, model_info.rgb_stats)
image = "path/to/image.jpeg" # or a PIL image, must be loaded in RGB format
(out, _) = infer_image(net, image, transform)
# out is a NumPy array with shape (1, 707), representing class probabilities.
```
### Image Embeddings
```python
import birder
from birder.inference.classification import infer_image
(net, model_info) = birder.load_pretrained_model("cswin_transformer_s_eu-common", inference=True)
# Get the image size the model was trained on
size = birder.get_size_from_signature(model_info.signature)
# Create an inference transform
transform = birder.classification_transform(size, model_info.rgb_stats)
image = "path/to/image.jpeg" # or a PIL image
(out, embedding) = infer_image(net, image, transform, return_embedding=True)
# embedding is a NumPy array with shape (1, 512)
```
### Detection Feature Map
```python
from PIL import Image
import birder
(net, model_info) = birder.load_pretrained_model("cswin_transformer_s_eu-common", inference=True)
# Get the image size the model was trained on
size = birder.get_size_from_signature(model_info.signature)
# Create an inference transform
transform = birder.classification_transform(size, model_info.rgb_stats)
image = Image.open("path/to/image.jpeg")
features = net.detection_features(transform(image).unsqueeze(0))
# features is a dict (stage name -> torch.Tensor)
print([(k, v.size()) for k, v in features.items()])
# Output example:
# [('stage1', torch.Size([1, 64, 96, 96])),
# ('stage2', torch.Size([1, 128, 48, 48])),
# ('stage3', torch.Size([1, 256, 24, 24])),
# ('stage4', torch.Size([1, 512, 12, 12]))]
```
## Citation
```bibtex
@misc{dong2022cswintransformergeneralvision,
title={CSWin Transformer: A General Vision Transformer Backbone with Cross-Shaped Windows},
author={Xiaoyi Dong and Jianmin Bao and Dongdong Chen and Weiming Zhang and Nenghai Yu and Lu Yuan and Dong Chen and Baining Guo},
year={2022},
eprint={2107.00652},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2107.00652},
}
```
|
alberto-lorente/Meta-Llama-3_1-8B-Instruct-bnb-4bit-GRANULAR-TASK-minorities
|
alberto-lorente
| 2025-09-12T05:17:42Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"unsloth",
"endpoints_compatible",
"region:us"
] | null | 2025-08-26T04:04:46Z |
---
base_model: unsloth/meta-llama-3.1-8b-instruct-bnb-4bit
library_name: transformers
model_name: Meta-Llama-3_1-8B-Instruct-bnb-4bit-GRANULAR-TASK-minorities
tags:
- generated_from_trainer
- trl
- sft
- unsloth
licence: license
---
# Model Card for Meta-Llama-3_1-8B-Instruct-bnb-4bit-GRANULAR-TASK-minorities
This model is a fine-tuned version of [unsloth/meta-llama-3.1-8b-instruct-bnb-4bit](https://huggingface.co/unsloth/meta-llama-3.1-8b-instruct-bnb-4bit).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="alberto-lorente/Meta-Llama-3_1-8B-Instruct-bnb-4bit-GRANULAR-TASK-minorities", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
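A minimal SFT sketch with TRL, under stated assumptions (the dataset file and training arguments are hypothetical; the actual training data is not documented in this card):

```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Hypothetical dataset file; the real training data is not documented here.
dataset = load_dataset("json", data_files="train.jsonl", split="train")

config = SFTConfig(
    output_dir="Meta-Llama-3_1-8B-Instruct-bnb-4bit-GRANULAR-TASK-minorities",
    per_device_train_batch_size=2,
    num_train_epochs=1,
)
trainer = SFTTrainer(
    model="unsloth/meta-llama-3.1-8b-instruct-bnb-4bit",
    args=config,
    train_dataset=dataset,
)
trainer.train()
```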
### Framework versions
- TRL: 0.23.0
- Transformers: 4.56.1
- Pytorch: 2.8.0+cu126
- Datasets: 3.6.0
- Tokenizers: 0.22.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
mradermacher/Medllama-3-8b-GGUF
|
mradermacher
| 2025-09-12T05:16:39Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:Axcel1/Medllama-3-8b",
"base_model:quantized:Axcel1/Medllama-3-8b",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-09-12T03:32:28Z |
---
base_model: Axcel1/Medllama-3-8b
language:
- en
library_name: transformers
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/Axcel1/Medllama-3-8b
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Medllama-3-8b-GGUF).***
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
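As one concrete option, a quant from the table below can be downloaded and loaded with `llama-cpp-python`; a minimal sketch (context size, prompt, and generation settings are assumptions):

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Fetch one of the quants listed below and load it with the llama.cpp Python bindings.
gguf_path = hf_hub_download(
    repo_id="mradermacher/Medllama-3-8b-GGUF",
    filename="Medllama-3-8b.Q4_K_M.gguf",
)
llm = Llama(model_path=gguf_path, n_ctx=4096)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize the symptoms of iron-deficiency anemia."}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```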
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Medllama-3-8b-GGUF/resolve/main/Medllama-3-8b.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Medllama-3-8b-GGUF/resolve/main/Medllama-3-8b.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Medllama-3-8b-GGUF/resolve/main/Medllama-3-8b.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Medllama-3-8b-GGUF/resolve/main/Medllama-3-8b.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Medllama-3-8b-GGUF/resolve/main/Medllama-3-8b.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Medllama-3-8b-GGUF/resolve/main/Medllama-3-8b.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Medllama-3-8b-GGUF/resolve/main/Medllama-3-8b.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Medllama-3-8b-GGUF/resolve/main/Medllama-3-8b.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Medllama-3-8b-GGUF/resolve/main/Medllama-3-8b.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Medllama-3-8b-GGUF/resolve/main/Medllama-3-8b.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Medllama-3-8b-GGUF/resolve/main/Medllama-3-8b.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Medllama-3-8b-GGUF/resolve/main/Medllama-3-8b.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Adanato/Llama-3.2-1B-Instruct-low_acereason_1k-high_acereason_1k-high_acereason_1k
|
Adanato
| 2025-09-12T05:15:54Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"fyksft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-12T05:14:09Z |
---
library_name: transformers
tags:
- fyksft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
faodl/20250909_model_g20_multilabel_MiniLM-L12-all-labels-artificial-governance-multi-output
|
faodl
| 2025-09-12T05:15:09Z | 0 | 0 |
setfit
|
[
"setfit",
"safetensors",
"bert",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"base_model:sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2",
"base_model:finetune:sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2",
"region:us"
] |
text-classification
| 2025-09-12T05:14:53Z |
---
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
widget:
- text: Emergency insurance payouts complement humanitarian assistance by providing
timely financial resources that facilitate quicker recovery from climate disasters.
- text: "c) Establish strategic and operational partnerships and alliances with private,\
\ public and civil society \norganizations in food and nutrition."
- text: 'COVID-19: The Development Program for Drinking Water Supply and Sanitation
Systems of the Kyrgyz Republic until 2026 was approved.
The Program is aimed at increasing the provision of drinking water of standard
quality, improving the health and quality of life of the population of the republic,
reducing the harmful effects on the environment through the construction, reconstruction,
and modernization of drinking water supply and sanitation systems.'
- text: "The program mainly aims at \nthe construction of rural roads, capacity building\
\ of local bodies, and \nawareness raising activities."
- text: "Mr. Speaker, the PF Government \n\nremains committed to ensuring that all\
\ \n\nZambians have access to clean water supply \n\nand sanitation services."
metrics:
- accuracy
pipeline_tag: text-classification
library_name: setfit
inference: false
base_model: sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2
---
# SetFit with sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2) as the Sentence Transformer embedding model. A MultiOutputClassifier instance is used for classification.
The model has been trained using an efficient few-shot learning technique (see the sketch after this list) that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
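A minimal training sketch of that two-step recipe (the tiny example texts and labels are placeholders, not the actual training data):

```python
from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Placeholder multi-label data; the real training set is not published with this card.
train_ds = Dataset.from_dict({
    "text": [
        "The program mainly aims at the construction of rural roads and capacity building of local bodies.",
        "Emergency insurance payouts complement humanitarian assistance after climate disasters.",
    ],
    "label": [[1, 0], [0, 1]],
})

# The multi-output strategy matches the MultiOutputClassifier head used by this model.
model = SetFitModel.from_pretrained(
    "sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2",
    multi_target_strategy="multi-output",
)
args = TrainingArguments(batch_size=32, num_epochs=1)
trainer = Trainer(model=model, args=args, train_dataset=train_ds)
trainer.train()
```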
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2)
- **Classification head:** a MultiOutputClassifier instance
- **Maximum Sequence Length:** 128 tokens
<!-- - **Number of Classes:** Unknown -->
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("faodl/20250909_model_g20_multilabel_MiniLM-L12-all-labels-artificial-governance-multi-output")
# Run inference
preds = model("The program mainly aims at
the construction of rural roads, capacity building of local bodies, and
awareness raising activities.")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:--------|:-----|
| Word count | 1 | 41.6795 | 1753 |
### Training Hyperparameters
- batch_size: (32, 32)
- num_epochs: (1, 1)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 50
- body_learning_rate: (2e-05, 2e-05)
- head_learning_rate: 2e-05
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- l2_weight: 0.01
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:------:|:-----:|:-------------:|:---------------:|
| 0.0001 | 1 | 0.184 | - |
| 0.0039 | 50 | 0.1927 | - |
| 0.0078 | 100 | 0.1729 | - |
| 0.0117 | 150 | 0.1484 | - |
| 0.0156 | 200 | 0.1301 | - |
| 0.0196 | 250 | 0.1134 | - |
| 0.0235 | 300 | 0.1079 | - |
| 0.0274 | 350 | 0.1021 | - |
| 0.0313 | 400 | 0.0876 | - |
| 0.0352 | 450 | 0.0834 | - |
| 0.0391 | 500 | 0.0886 | - |
| 0.0430 | 550 | 0.0728 | - |
| 0.0469 | 600 | 0.0775 | - |
| 0.0508 | 650 | 0.0811 | - |
| 0.0548 | 700 | 0.0745 | - |
| 0.0587 | 750 | 0.0753 | - |
| 0.0626 | 800 | 0.0745 | - |
| 0.0665 | 850 | 0.07 | - |
| 0.0704 | 900 | 0.0702 | - |
| 0.0743 | 950 | 0.0707 | - |
| 0.0782 | 1000 | 0.0702 | - |
| 0.0821 | 1050 | 0.0607 | - |
| 0.0860 | 1100 | 0.067 | - |
| 0.0899 | 1150 | 0.065 | - |
| 0.0939 | 1200 | 0.0659 | - |
| 0.0978 | 1250 | 0.066 | - |
| 0.1017 | 1300 | 0.066 | - |
| 0.1056 | 1350 | 0.06 | - |
| 0.1095 | 1400 | 0.0609 | - |
| 0.1134 | 1450 | 0.0587 | - |
| 0.1173 | 1500 | 0.0542 | - |
| 0.1212 | 1550 | 0.0523 | - |
| 0.1251 | 1600 | 0.0559 | - |
| 0.1291 | 1650 | 0.052 | - |
| 0.1330 | 1700 | 0.0487 | - |
| 0.1369 | 1750 | 0.053 | - |
| 0.1408 | 1800 | 0.0477 | - |
| 0.1447 | 1850 | 0.0492 | - |
| 0.1486 | 1900 | 0.0474 | - |
| 0.1525 | 1950 | 0.0488 | - |
| 0.1564 | 2000 | 0.0461 | - |
| 0.1603 | 2050 | 0.0481 | - |
| 0.1643 | 2100 | 0.0463 | - |
| 0.1682 | 2150 | 0.0432 | - |
| 0.1721 | 2200 | 0.0482 | - |
| 0.1760 | 2250 | 0.0444 | - |
| 0.1799 | 2300 | 0.0466 | - |
| 0.1838 | 2350 | 0.0423 | - |
| 0.1877 | 2400 | 0.041 | - |
| 0.1916 | 2450 | 0.0422 | - |
| 0.1955 | 2500 | 0.0401 | - |
| 0.1995 | 2550 | 0.0405 | - |
| 0.2034 | 2600 | 0.0448 | - |
| 0.2073 | 2650 | 0.0387 | - |
| 0.2112 | 2700 | 0.0371 | - |
| 0.2151 | 2750 | 0.0429 | - |
| 0.2190 | 2800 | 0.0379 | - |
| 0.2229 | 2850 | 0.0384 | - |
| 0.2268 | 2900 | 0.0378 | - |
| 0.2307 | 2950 | 0.0392 | - |
| 0.2346 | 3000 | 0.038 | - |
| 0.2386 | 3050 | 0.0325 | - |
| 0.2425 | 3100 | 0.0345 | - |
| 0.2464 | 3150 | 0.0341 | - |
| 0.2503 | 3200 | 0.0415 | - |
| 0.2542 | 3250 | 0.0313 | - |
| 0.2581 | 3300 | 0.0355 | - |
| 0.2620 | 3350 | 0.033 | - |
| 0.2659 | 3400 | 0.0308 | - |
| 0.2698 | 3450 | 0.0343 | - |
| 0.2738 | 3500 | 0.0379 | - |
| 0.2777 | 3550 | 0.032 | - |
| 0.2816 | 3600 | 0.0358 | - |
| 0.2855 | 3650 | 0.0334 | - |
| 0.2894 | 3700 | 0.0312 | - |
| 0.2933 | 3750 | 0.0336 | - |
| 0.2972 | 3800 | 0.0291 | - |
| 0.3011 | 3850 | 0.0268 | - |
| 0.3050 | 3900 | 0.034 | - |
| 0.3090 | 3950 | 0.0337 | - |
| 0.3129 | 4000 | 0.0266 | - |
| 0.3168 | 4050 | 0.0269 | - |
| 0.3207 | 4100 | 0.0326 | - |
| 0.3246 | 4150 | 0.0317 | - |
| 0.3285 | 4200 | 0.0271 | - |
| 0.3324 | 4250 | 0.0313 | - |
| 0.3363 | 4300 | 0.0263 | - |
| 0.3402 | 4350 | 0.0267 | - |
| 0.3442 | 4400 | 0.0273 | - |
| 0.3481 | 4450 | 0.026 | - |
| 0.3520 | 4500 | 0.0252 | - |
| 0.3559 | 4550 | 0.0261 | - |
| 0.3598 | 4600 | 0.0243 | - |
| 0.3637 | 4650 | 0.0252 | - |
| 0.3676 | 4700 | 0.0291 | - |
| 0.3715 | 4750 | 0.0286 | - |
| 0.3754 | 4800 | 0.0245 | - |
| 0.3794 | 4850 | 0.0263 | - |
| 0.3833 | 4900 | 0.0249 | - |
| 0.3872 | 4950 | 0.0209 | - |
| 0.3911 | 5000 | 0.0245 | - |
| 0.3950 | 5050 | 0.0278 | - |
| 0.3989 | 5100 | 0.0277 | - |
| 0.4028 | 5150 | 0.0266 | - |
| 0.4067 | 5200 | 0.0249 | - |
| 0.4106 | 5250 | 0.0279 | - |
| 0.4145 | 5300 | 0.027 | - |
| 0.4185 | 5350 | 0.0283 | - |
| 0.4224 | 5400 | 0.022 | - |
| 0.4263 | 5450 | 0.0232 | - |
| 0.4302 | 5500 | 0.0198 | - |
| 0.4341 | 5550 | 0.0254 | - |
| 0.4380 | 5600 | 0.0186 | - |
| 0.4419 | 5650 | 0.0237 | - |
| 0.4458 | 5700 | 0.0249 | - |
| 0.4497 | 5750 | 0.0241 | - |
| 0.4537 | 5800 | 0.0239 | - |
| 0.4576 | 5850 | 0.0258 | - |
| 0.4615 | 5900 | 0.0212 | - |
| 0.4654 | 5950 | 0.0208 | - |
| 0.4693 | 6000 | 0.0227 | - |
| 0.4732 | 6050 | 0.0262 | - |
| 0.4771 | 6100 | 0.0257 | - |
| 0.4810 | 6150 | 0.0227 | - |
| 0.4849 | 6200 | 0.0226 | - |
| 0.4889 | 6250 | 0.0231 | - |
| 0.4928 | 6300 | 0.0255 | - |
| 0.4967 | 6350 | 0.0199 | - |
| 0.5006 | 6400 | 0.022 | - |
| 0.5045 | 6450 | 0.0253 | - |
| 0.5084 | 6500 | 0.0209 | - |
| 0.5123 | 6550 | 0.0207 | - |
| 0.5162 | 6600 | 0.0215 | - |
| 0.5201 | 6650 | 0.0225 | - |
| 0.5241 | 6700 | 0.0185 | - |
| 0.5280 | 6750 | 0.019 | - |
| 0.5319 | 6800 | 0.0214 | - |
| 0.5358 | 6850 | 0.0252 | - |
| 0.5397 | 6900 | 0.0216 | - |
| 0.5436 | 6950 | 0.0205 | - |
| 0.5475 | 7000 | 0.0205 | - |
| 0.5514 | 7050 | 0.0244 | - |
| 0.5553 | 7100 | 0.0223 | - |
| 0.5592 | 7150 | 0.0181 | - |
| 0.5632 | 7200 | 0.0199 | - |
| 0.5671 | 7250 | 0.0217 | - |
| 0.5710 | 7300 | 0.0198 | - |
| 0.5749 | 7350 | 0.0224 | - |
| 0.5788 | 7400 | 0.0234 | - |
| 0.5827 | 7450 | 0.0193 | - |
| 0.5866 | 7500 | 0.0168 | - |
| 0.5905 | 7550 | 0.0193 | - |
| 0.5944 | 7600 | 0.0232 | - |
| 0.5984 | 7650 | 0.0183 | - |
| 0.6023 | 7700 | 0.0255 | - |
| 0.6062 | 7750 | 0.0209 | - |
| 0.6101 | 7800 | 0.0262 | - |
| 0.6140 | 7850 | 0.0228 | - |
| 0.6179 | 7900 | 0.0208 | - |
| 0.6218 | 7950 | 0.0167 | - |
| 0.6257 | 8000 | 0.0217 | - |
| 0.6296 | 8050 | 0.0175 | - |
| 0.6336 | 8100 | 0.0196 | - |
| 0.6375 | 8150 | 0.0215 | - |
| 0.6414 | 8200 | 0.0186 | - |
| 0.6453 | 8250 | 0.0181 | - |
| 0.6492 | 8300 | 0.0171 | - |
| 0.6531 | 8350 | 0.0224 | - |
| 0.6570 | 8400 | 0.0214 | - |
| 0.6609 | 8450 | 0.0214 | - |
| 0.6648 | 8500 | 0.0192 | - |
| 0.6688 | 8550 | 0.0213 | - |
| 0.6727 | 8600 | 0.0185 | - |
| 0.6766 | 8650 | 0.02 | - |
| 0.6805 | 8700 | 0.0218 | - |
| 0.6844 | 8750 | 0.0163 | - |
| 0.6883 | 8800 | 0.0183 | - |
| 0.6922 | 8850 | 0.0177 | - |
| 0.6961 | 8900 | 0.0178 | - |
| 0.7000 | 8950 | 0.0157 | - |
| 0.7039 | 9000 | 0.0201 | - |
| 0.7079 | 9050 | 0.017 | - |
| 0.7118 | 9100 | 0.0198 | - |
| 0.7157 | 9150 | 0.0196 | - |
| 0.7196 | 9200 | 0.0189 | - |
| 0.7235 | 9250 | 0.018 | - |
| 0.7274 | 9300 | 0.0193 | - |
| 0.7313 | 9350 | 0.0179 | - |
| 0.7352 | 9400 | 0.0218 | - |
| 0.7391 | 9450 | 0.0186 | - |
| 0.7431 | 9500 | 0.0175 | - |
| 0.7470 | 9550 | 0.0168 | - |
| 0.7509 | 9600 | 0.0193 | - |
| 0.7548 | 9650 | 0.0183 | - |
| 0.7587 | 9700 | 0.0168 | - |
| 0.7626 | 9750 | 0.0194 | - |
| 0.7665 | 9800 | 0.021 | - |
| 0.7704 | 9850 | 0.0178 | - |
| 0.7743 | 9900 | 0.018 | - |
| 0.7783 | 9950 | 0.0171 | - |
| 0.7822 | 10000 | 0.0191 | - |
| 0.7861 | 10050 | 0.0147 | - |
| 0.7900 | 10100 | 0.0193 | - |
| 0.7939 | 10150 | 0.0174 | - |
| 0.7978 | 10200 | 0.0171 | - |
| 0.8017 | 10250 | 0.0156 | - |
| 0.8056 | 10300 | 0.0176 | - |
| 0.8095 | 10350 | 0.0195 | - |
| 0.8135 | 10400 | 0.0151 | - |
| 0.8174 | 10450 | 0.0192 | - |
| 0.8213 | 10500 | 0.0201 | - |
| 0.8252 | 10550 | 0.0192 | - |
| 0.8291 | 10600 | 0.015 | - |
| 0.8330 | 10650 | 0.0181 | - |
| 0.8369 | 10700 | 0.0143 | - |
| 0.8408 | 10750 | 0.0177 | - |
| 0.8447 | 10800 | 0.015 | - |
| 0.8487 | 10850 | 0.0193 | - |
| 0.8526 | 10900 | 0.0168 | - |
| 0.8565 | 10950 | 0.0169 | - |
| 0.8604 | 11000 | 0.0166 | - |
| 0.8643 | 11050 | 0.0148 | - |
| 0.8682 | 11100 | 0.0163 | - |
| 0.8721 | 11150 | 0.0189 | - |
| 0.8760 | 11200 | 0.0197 | - |
| 0.8799 | 11250 | 0.0138 | - |
| 0.8838 | 11300 | 0.0168 | - |
| 0.8878 | 11350 | 0.0153 | - |
| 0.8917 | 11400 | 0.0147 | - |
| 0.8956 | 11450 | 0.0178 | - |
| 0.8995 | 11500 | 0.0184 | - |
| 0.9034 | 11550 | 0.0158 | - |
| 0.9073 | 11600 | 0.0183 | - |
| 0.9112 | 11650 | 0.0127 | - |
| 0.9151 | 11700 | 0.0169 | - |
| 0.9190 | 11750 | 0.018 | - |
| 0.9230 | 11800 | 0.0156 | - |
| 0.9269 | 11850 | 0.0156 | - |
| 0.9308 | 11900 | 0.0162 | - |
| 0.9347 | 11950 | 0.0124 | - |
| 0.9386 | 12000 | 0.0175 | - |
| 0.9425 | 12050 | 0.0179 | - |
| 0.9464 | 12100 | 0.0182 | - |
| 0.9503 | 12150 | 0.0176 | - |
| 0.9542 | 12200 | 0.0182 | - |
| 0.9582 | 12250 | 0.0189 | - |
| 0.9621 | 12300 | 0.0125 | - |
| 0.9660 | 12350 | 0.0176 | - |
| 0.9699 | 12400 | 0.0143 | - |
| 0.9738 | 12450 | 0.0162 | - |
| 0.9777 | 12500 | 0.017 | - |
| 0.9816 | 12550 | 0.0196 | - |
| 0.9855 | 12600 | 0.0192 | - |
| 0.9894 | 12650 | 0.0184 | - |
| 0.9934 | 12700 | 0.0149 | - |
| 0.9973 | 12750 | 0.0172 | - |
### Framework Versions
- Python: 3.12.11
- SetFit: 1.1.3
- Sentence Transformers: 5.1.0
- Transformers: 4.56.1
- PyTorch: 2.8.0+cu126
- Datasets: 4.0.0
- Tokenizers: 0.22.0
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
stonermay/blockassist-bc-diving_lightfooted_caterpillar_1757654035
|
stonermay
| 2025-09-12T05:15:03Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"diving lightfooted caterpillar",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-12T05:14:54Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- diving lightfooted caterpillar
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
omerbektasss/blockassist-bc-insectivorous_bold_lion_1757653840
|
omerbektasss
| 2025-09-12T05:11:02Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"insectivorous bold lion",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-12T05:10:58Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- insectivorous bold lion
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
CIRCL/vulnerability-severity-classification-roberta-base
|
CIRCL
| 2025-09-12T05:10:06Z | 165 | 4 |
transformers
|
[
"transformers",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:CIRCL/vulnerability-scores",
"arxiv:2507.03607",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"doi:10.57967/hf/5926",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-02-27T07:25:26Z |
---
library_name: transformers
license: mit
base_model: roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: vulnerability-severity-classification-roberta-base
results: []
datasets:
- CIRCL/vulnerability-scores
---
# VLAI: A RoBERTa-Based Model for Automated Vulnerability Severity Classification
# Severity classification
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the dataset [CIRCL/vulnerability-scores](https://huggingface.co/datasets/CIRCL/vulnerability-scores).
The model was presented in the paper [VLAI: A RoBERTa-Based Model for Automated Vulnerability Severity Classification](https://huggingface.co/papers/2507.03607) [[arXiv](https://arxiv.org/abs/2507.03607)].
**Abstract:** VLAI is a transformer-based model that predicts software vulnerability severity levels directly from text descriptions. Built on RoBERTa, VLAI is fine-tuned on over 600,000 real-world vulnerabilities and achieves over 82% accuracy in predicting severity categories, enabling faster and more consistent triage ahead of manual CVSS scoring. The model and dataset are open-source and integrated into the Vulnerability-Lookup service.
You can read [this page](https://www.vulnerability-lookup.org/user-manual/ai/) for more information.
## Model description
It is a classification model intended to assist in triaging vulnerabilities by severity based on their descriptions.
## How to get started with the model
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
import torch
labels = ["low", "medium", "high", "critical"]
model_name = "CIRCL/vulnerability-severity-classification-roberta-base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
model.eval()
test_description = "SAP NetWeaver Visual Composer Metadata Uploader is not protected with a proper authorization, allowing unauthenticated agent to upload potentially malicious executable binaries \
that could severely harm the host system. This could significantly affect the confidentiality, integrity, and availability of the targeted system."
inputs = tokenizer(test_description, return_tensors="pt", truncation=True, padding=True)
# Run inference
with torch.no_grad():
    outputs = model(**inputs)
predictions = torch.nn.functional.softmax(outputs.logits, dim=-1)
# Print results
print("Predictions:", predictions)
predicted_class = torch.argmax(predictions, dim=-1).item()
print("Predicted severity:", labels[predicted_class])
```
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (torch fused) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
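For illustration only (this sketch is not part of the original card): one way the hyperparameters above could be passed to a Hugging Face `Trainer` run. The dataset column and split names below are assumptions, not taken from the card.
```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# Assumption: the dataset exposes a text column with the vulnerability
# description and an integer severity label; names below are illustrative.
dataset = load_dataset("CIRCL/vulnerability-scores")
tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForSequenceClassification.from_pretrained("roberta-base", num_labels=4)

def tokenize(batch):
    return tokenizer(batch["description"], truncation=True, padding="max_length")

tokenized = dataset.map(tokenize, batched=True)

# These values mirror the hyperparameters listed above.
args = TrainingArguments(
    output_dir="vulnerability-severity-classification-roberta-base",
    learning_rate=3e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    optim="adamw_torch_fused",
    lr_scheduler_type="linear",
    num_train_epochs=5,
    eval_strategy="epoch",
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],   # split names are assumptions
    eval_dataset=tokenized["test"],
)
trainer.train()
```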
It achieves the following results on the evaluation set:
- Loss: 0.5072
- Accuracy: 0.8282
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:------:|:---------------:|:--------:|
| 0.5519 | 1.0 | 28760 | 0.6553 | 0.7357 |
| 0.5365 | 2.0 | 57520 | 0.5647 | 0.7746 |
| 0.3656 | 3.0 | 86280 | 0.5397 | 0.7997 |
| 0.4367 | 4.0 | 115040 | 0.4903 | 0.8191 |
| 0.3609 | 5.0 | 143800 | 0.5072 | 0.8282 |
### Framework versions
- Transformers 4.55.2
- Pytorch 2.8.0+cu128
- Datasets 4.0.0
- Tokenizers 0.21.4
|
cike-dev/GemmaToxic
|
cike-dev
| 2025-09-12T05:08:49Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma3_text",
"text-generation",
"unsloth",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-12T05:06:56Z |
---
library_name: transformers
tags:
- unsloth
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
constehub/DeepSeek-R1-0528-Qwen3-8B-rag-evaluation
|
constehub
| 2025-09-12T05:07:37Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"qwen3",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/DeepSeek-R1-0528-Qwen3-8B-unsloth-bnb-4bit",
"base_model:quantized:unsloth/DeepSeek-R1-0528-Qwen3-8B-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-09-12T05:06:53Z |
---
base_model: unsloth/DeepSeek-R1-0528-Qwen3-8B-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** constehub
- **License:** apache-2.0
- **Finetuned from model :** unsloth/DeepSeek-R1-0528-Qwen3-8B-unsloth-bnb-4bit
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
jxue/whisper-small-jiangyin
|
jxue
| 2025-09-12T05:05:05Z | 20 | 1 | null |
[
"safetensors",
"whisper",
"fine-tuned",
"dialect",
"mandarin",
"chinese",
"jiangyin",
"zh",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:mit",
"region:us"
] | null | 2025-06-15T00:06:16Z |
---
license: mit
language:
- zh
base_model:
- openai/whisper-small
tags:
- whisper
- fine-tuned
- dialect
- mandarin
- chinese
- jiangyin
---
This is a fine-tuned version of [Whisper-small](https://huggingface.co/openai/whisper-small), trained on Jiangyin dialect recordings collected through [WuSutra.com](https://wusutra.com/).
WuSutra.com is a dialect crowdsourcing website that implements the **entire ML workflow**, including audio upload, model training, validation, and inference.
You can upload your own recordings and even trigger training yourself on wusutra.com. If you have further questions, feel free to message me.
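A minimal usage sketch (not included in the original card), assuming the standard `transformers` ASR pipeline; the audio path is a placeholder:
```python
from transformers import pipeline

# Load the fine-tuned Jiangyin-dialect checkpoint from the Hub.
asr = pipeline("automatic-speech-recognition", model="jxue/whisper-small-jiangyin")

# "sample.wav" is a placeholder for your own recording (16 kHz mono works best).
print(asr("sample.wav")["text"])
```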
### 📊 Evaluation on 45 Jiangyin dialect phrases: Character Error Rate (CER)
| Model | CER |
|------------------|---------|
| Baseline (whisper-small) | 0.46 |
| Fine-tuned (Jiangyin Dialect) | **0.00** |
Significant improvement observed after fine-tuning on 119 dialect audio samples.
---
✅ Correct recognition example
| REF (参考) | Transliteration (音译) | HYP (预测) | CER |
| ----------------- | --------------------------- | ----------------- | ----- |
| 吃什么 | 切刀样 | 吃什么 | 0.000 |
| 不知道 | 佛晓得 | 不知道 | 0.000 |
| 素菜 | 搜菜 | 素菜 | 0.000 |
| 红烧肉 | 红搜牛 | 红烧肉 | 0.000 |
| 谁啊?小偷 | 啥人啦?贼骨头 | 谁啊?小偷 | 0.000 |
| 谁啊?老公 | 啥人啦?老官 | 谁啊?老公 | 0.000 |
| 节约 | 做人家 | 节约 | 0.000 |
| 闪电 | 忽显 | 闪电 | 0.000 |
| 下雨 | 落雨 | 下雨 | 0.000 |
| 丢人 | 坍台 | 丢人 | 0.000 |
| 泥土 | 难泥 | 泥土 | 0.000 |
| 好 | 灵个 | 好 | 0.000 |
| 到处都是 | 一天世界 | 到处都是 | 0.000 |
| 最后 | 压末落落 | 最后 | 0.000 |
| 睡觉 | 困觉 | 睡觉 | 0.000 |
| 小偷 | 贼骨头 | 小偷 | 0.000 |
| 拿不定主意 | 疑三惑四 | 拿不定主意 | 0.000 |
| 轻浮 | 轻骨头 | 轻浮 | 0.000 |
| 明天 | 明朝 | 明天 | 0.000 |
| 后天 | 后朝 | 后天 | 0.000 |
| 前天 | 先夜子 | 前天 | 0.000 |
| 妻子 | 阿嬷 | 妻子 | 0.000 |
| 这样 | 实梗 | 这样 | 0.000 |
| 出去 | 出去 | 出去 | 0.000 |
| 明天见 | 明朝会 | 明天见 | 0.000 |
| 什么东西 | 啥个物事 | 什么东西 | 0.000 |
| 什么时候 | 啥辰光 | 什么时候 | 0.000 |
| 回来 | 嘎来 | 回来 | 0.000 |
| 老公 | 老官 | 老公 | 0.000 |
| 十分寒冷 | 毕结骨 | 十分寒冷 | 0.000 |
| 谁啊 | 啥人啦 | 谁啊 | 0.000 |
| 男孩 | 细七煞 | 男孩 | 0.000 |
| 傍晚 | 夜快头 | 傍晚 | 0.000 |
| 肩膀 | 肩胛 | 肩膀 | 0.000 |
| 男子 | 老小家 | 男子 | 0.000 |
| 女子 | 丫头家 | 女子 | 0.000 |
| 今天吃点什么? | 今朝吃点刀样啦? | 今天吃点什么? | 0.000 |
| 你这小子,是不是欠捧! | 你个细棺材,阿要吃生活! | 你这小子,是不是欠捧! | 0.000 |
| 今天吃什么?不知道 | 今朝切刀样?佛晓得 | 今天吃什么?不知道 | 0.000 |
| 今天吃什么?红烧肉 | 今朝切刀样?红搜牛 | 今天吃什么?红烧肉 | 0.000 |
| 什么时候出去?明天 | 啥辰光出去?明朝 | 什么时候出去?明天 | 0.000 |
| 什么时候出去?后天 | 啥辰光出去?后朝 | 什么时候出去?后天 | 0.000 |
|
camiellia/qwen2_5_vl_fiubench_checkpoint_4
|
camiellia
| 2025-09-12T05:03:15Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-11T20:22:15Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
nitingensyn/blockassist
|
nitingensyn
| 2025-09-12T05:02:57Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"slithering bold koala",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T13:50:01Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- slithering bold koala
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
DoppelReflEx/MiniusLight-24B-v3
|
DoppelReflEx
| 2025-09-12T05:00:14Z | 9 | 1 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:Delta-Vector/Rei-24B-KTO",
"base_model:merge:Delta-Vector/Rei-24B-KTO",
"base_model:TheDrummer/Cydonia-24B-v4.1",
"base_model:merge:TheDrummer/Cydonia-24B-v4.1",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-26T10:39:13Z |
---
base_model:
- Delta-Vector/Rei-24B-KTO
- TheDrummer/Cydonia-24B-v4.1
library_name: transformers
tags:
- mergekit
- merge
---
<style>
@import url('https://fonts.googleapis.com/css2?family=Playwrite+CA+Guides&display=swap');
.playwrite-ca-guides-regular {
font-family: "Playwrite CA Guides", cursive !important;
font-weight: 400;
font-style: normal;
}
body {
margin:0;
padding:0;
font-size: 16px;
}
.main-container {
background-color: #ebf3ff;
border: 1px solid #466db9;
border-radius: 8px;
color: #050315;
margin:16px;
padding:16px;
font-size: 16px;
width: 95%;
}
h1, h2, h3 {
color: #050315;
margin-top: 16px;
}
.soft-blue-custom {
color: #466db9 !important;
}
.alink {
font-weight:400;
text-decoration:none;
}
.main-banner-image {
max-width:100%;
max-height:600px;
border-radius:8px;
align-self:center;
justify-self: center;
border: 1px solid #466db9;
margin: 8px 16px
}
pre.code-block, pre {
font-size: clamp(10px, 1.3vw, 14px);
white-space: pre;
margin: 1em 0;
background-color: #1a1a1a;
padding: 1em;
border-radius: 4px;
color: #a9a6de;
overflow-x: auto;
}
p {
font-weight:500;
}
.pb {
padding-bottom: 8px;
}
.mb {
margin-bottom: 8px;
}
.bold {
font-weight: 600;
}
.secondary {
color: #a9a6de;
}
.accent {
color: #403bb7;
}
.tac {
text-align:center;
}
.border-custom-dot {
border: 1px dashed #466db9;
border-radius: 16px;
padding:0 8px;
}
.border-custom {
border: 1px solid #466db9;
border-radius: 8px;
padding:0 8px;
}
.as {
padding-left: 16px;
}
.as2 {
padding-left: 24px;
}
</style>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<link href="https://fonts.googleapis.com/css2?family=Playwrite+CA+Guides&display=swap" rel="stylesheet">
</head>
<body>
<div class="main-container">
<div class="playwrite-ca-guides-regular pb tac">
<h1 class="soft-blue-custom">MiniusLight-24B-v3</h1>
<h2 class="soft-blue-custom"><a href="https://huggingface.co/DoppelReflEx/MN-12B-Mimicore-Nocturne" class="accent bold">12B</a> - <a href="https://huggingface.co/DoppelReflEx/MiniusLight-24B" class="accent bold">24B-v1</a> - <a href="https://huggingface.co/DoppelReflEx/MiniusLight-24B-v1.01" class="accent bold">24B-v1.01</a> - <a href="https://huggingface.co/DoppelReflEx/MiniusLight-24B-v2" class="accent bold">24B-v2</a> - <a href="https://huggingface.co/DoppelReflEx/MiniusLight-24B-v2.1" class="accent bold">24B-v2.1</a> - 24B-v3</h2>
<img src="https://cdn.donmai.us/original/19/66/__shorekeeper_wuthering_waves_drawn_by_narase_ffrv5573__196631e35c2167d31cfb9dd5ff224ed4.png" alt="cover image" class="main-banner-image"/>
<a href="https://www.pixiv.net/en/artworks/122951208" class="alink soft-blue-custom">Origin Content (Click Here)</a>
</div>
<div class="info">
<div class="border-custom-dot mb">
<h2 class="soft-blue-custom">What is this?</h2>
<div class="as">
<p>
Maybe this is the last 24B Mistral model of this series. I'm tired (laugh).
</p>
<p>Thanks to the two base models, this merge achieves very good style and consistency in long context. It was the 30th test, by the way, which means 29 earlier attempts failed before this one worked out.</p>
<p>Best model of the series (for me). :)</p>
<p></p>
</div>
</div>
<div class="border-custom-dot mb">
<h2 class="soft-blue-custom">GGUF</h2>
<h3 class="accent"><a href="https://huggingface.co/mradermacher/MiniusLight-24B-v3-GGUF" class="accent bold">Static</a> - <a class="accent bold" href="https://huggingface.co/mradermacher/MiniusLight-24B-v3-i1-GGUF">iMatrix</a></h3>
</div>
<div class="border-custom-dot">
<h2 class="soft-blue-custom">Other information</h2>
<div class="as">
<h3><span class="soft-blue-custom">Chat Template? </span>Mistral V7 - Tekken<span class="soft-blue-custom">. ChatML also works, but Mistral V7 - Tekken is recommended.</span></h3>
<h3 class="soft-blue-custom">Merge Method</h3>
<details class="border-custom">
<summary class="soft-blue-custom">Detail YAML Config</summary>
<pre>
models:
  - model: TheDrummer/Cydonia-24B-v4.1
  - model: Delta-Vector/Rei-24B-KTO
merge_method: slerp
base_model: TheDrummer/Cydonia-24B-v4.1
parameters:
  t: [0.1, 0.2, 0.3, 0.5, 0.8, 0.5, 0.3, 0.2, 0.1]
dtype: bfloat16
tokenizer_source: base
</pre>
</details>
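<p>One possible way to run the config above (not stated in the original card; assumes the standard <code>mergekit</code> CLI is installed): save it as <code>config.yml</code> and point <code>mergekit-yaml</code> at an output directory of your choice.</p>
<pre>
mergekit-yaml config.yml ./MiniusLight-24B-v3
</pre>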
</div>
</div>
</div>
</div>
</body>
|
RaghavM12/lora_model
|
RaghavM12
| 2025-09-12T04:59:31Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"base_model:unsloth/mistral-7b-instruct-v0.3-bnb-4bit",
"base_model:finetune:unsloth/mistral-7b-instruct-v0.3-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-09-12T04:59:18Z |
---
base_model: unsloth/mistral-7b-instruct-v0.3-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** RaghavM12
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-7b-instruct-v0.3-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
omerbektasss/blockassist-bc-insectivorous_bold_lion_1757653128
|
omerbektasss
| 2025-09-12T04:59:09Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"insectivorous bold lion",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-12T04:59:05Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- insectivorous bold lion
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Godfung/qwen-3-4B-content-moderation-adaptor
|
Godfung
| 2025-09-12T04:56:35Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen3",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-09-12T04:56:28Z |
---
base_model: unsloth/qwen3-4b-instruct-2507-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Godfung
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen3-4b-instruct-2507-unsloth-bnb-4bit
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
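A usage sketch not included in the original card: loading this LoRA adapter on top of the base checkpoint named in the card metadata, via PEFT. The prompt and generation settings are illustrative assumptions.
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "unsloth/qwen3-4b-instruct-2507-unsloth-bnb-4bit"   # base model from the card metadata
adapter_id = "Godfung/qwen-3-4B-content-moderation-adaptor"

tokenizer = AutoTokenizer.from_pretrained(base_id)
# The base checkpoint is pre-quantized (bnb 4-bit), so bitsandbytes must be installed.
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)

# Hypothetical moderation prompt; the exact prompt format used in training is not documented here.
prompt = "Classify the following message for policy violations: 'example text'"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```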
|
stonermay/blockassist-bc-diving_lightfooted_caterpillar_1757652815
|
stonermay
| 2025-09-12T04:54:50Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"diving lightfooted caterpillar",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-12T04:54:40Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- diving lightfooted caterpillar
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Godfung/qwen-3-4B-content-moderation-merged-vllm
|
Godfung
| 2025-09-12T04:53:53Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-12T04:50:38Z |
---
base_model: unsloth/qwen3-4b-instruct-2507-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** Godfung
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen3-4b-instruct-2507-unsloth-bnb-4bit
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
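Since the repository name suggests the weights are merged for vLLM serving, here is a hedged sketch (not from the original card) of loading them with the vLLM Python API; the prompt and sampling settings are assumptions.
```python
from vllm import LLM, SamplingParams

# Assumes vLLM is installed and the merged 4B weights fit on the available GPU.
llm = LLM(model="Godfung/qwen-3-4B-content-moderation-merged-vllm")
params = SamplingParams(temperature=0.0, max_tokens=64)

# Hypothetical moderation prompt; the real prompt format is not documented in the card.
outputs = llm.generate(["Classify the following message for policy violations: 'example text'"], params)
print(outputs[0].outputs[0].text)
```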
|
cuongdk253/unsloth-gpt-oss-20b-ft-bnb4bit-12092025
|
cuongdk253
| 2025-09-12T04:52:53Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_oss",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"base_model:unsloth/gpt-oss-20b-unsloth-bnb-4bit",
"base_model:quantized:unsloth/gpt-oss-20b-unsloth-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2025-09-12T04:50:49Z |
---
base_model: unsloth/gpt-oss-20b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gpt_oss
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** cuongdk253
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gpt-oss-20b-unsloth-bnb-4bit
This gpt_oss model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|