Dataset schema (each record below follows this column order):

| Column | Type | Range / Values |
|:--|:--|:--|
| modelId | string | length 5–139 |
| author | string | length 2–42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 to 2025-09-03 12:31:03 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string | 537 classes |
| tags | list | length 1–4.05k |
| pipeline_tag | string | 55 classes |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 to 2025-09-03 12:30:52 |
| card | string | length 11–1.01M |
IoakeimE/sft_best_simplification | IoakeimE | 2025-06-23T19:49:57Z | 0 | 0 | transformers | ["transformers", "safetensors", "generated_from_trainer", "unsloth", "trl", "sft", "base_model:unsloth/mistral-7b-v0.3-bnb-4bit", "base_model:finetune:unsloth/mistral-7b-v0.3-bnb-4bit", "endpoints_compatible", "region:us"] | null | 2025-06-18T14:02:58Z |
---
base_model: unsloth/mistral-7b-v0.3-bnb-4bit
library_name: transformers
model_name: sft_best_simplification
tags:
- generated_from_trainer
- unsloth
- trl
- sft
licence: license
---
# Model Card for sft_best_simplification
This model is a fine-tuned version of [unsloth/mistral-7b-v0.3-bnb-4bit](https://huggingface.co/unsloth/mistral-7b-v0.3-bnb-4bit).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="IoakeimE/sft_best_simplification", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/ioakeime-aristotle-university-of-thessaloniki/sft-best_simplification/runs/dq66tg8b)
This model was trained with SFT.
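For orientation, an SFT run with TRL typically looks like the minimal sketch below. This is not the actual training script for this model: the dataset is a placeholder and the configuration is an assumption.

```python
# Minimal TRL SFT sketch (illustrative; the real dataset and
# hyperparameters for this model are not documented in this card).
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

dataset = load_dataset("trl-lib/Capybara", split="train")  # placeholder dataset

trainer = SFTTrainer(
    model="unsloth/mistral-7b-v0.3-bnb-4bit",  # the stated base model
    args=SFTConfig(output_dir="sft_best_simplification"),
    train_dataset=dataset,
)
trainer.train()
```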
### Framework versions
- TRL: 0.18.2
- Transformers: 4.52.4
- Pytorch: 2.6.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
online-pro/Msbreewc-x-Ello-MG-5-Jam-7-Menit-Viral-Video | online-pro | 2025-06-23T19:42:54Z | 0 | 0 | null | ["region:us"] | null | 2025-06-23T19:42:23Z |
[<img alt="fsd" src="http://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://videohere.top/?online-pro)
[►✅ 𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► 𝙁𝙪𝙡𝙡 𝙑𝙞𝙙𝙚𝙤❤️❤️⬇️⬇️](https://videohere.top/?online-pro)
|
Pakcricketinfo-Sapna-Shah-Viral-Video-Fuk/18on.air.pakcricketinfo.sapna.shah.Viral.video.On.Social.Media.Link | Pakcricketinfo-Sapna-Shah-Viral-Video-Fuk | 2025-06-23T19:32:19Z | 0 | 0 | null | ["region:us"] | null | 2025-06-23T19:28:10Z |
[<img alt="fsd" src="http://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://videohere.top/?Download)
[►✅ 𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► 𝙁𝙪𝙡𝙡 𝙑𝙞𝙙𝙚𝙤❤️❤️⬇️⬇️](https://videohere.top/?Download)
|
Fulstac/Codestral-22B-v0.1-lora-weights | Fulstac | 2025-06-23T19:31:20Z | 0 | 0 | transformers | ["transformers", "safetensors", "mistral", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-06-23T19:26:57Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
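In the meantime, here is a minimal loading sketch. It assumes the repo hosts a full chat-capable causal LM, as the `mistral`, `text-generation`, and `conversational` tags suggest (despite the `lora-weights` suffix in the name); the prompt is a placeholder.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Fulstac/Codestral-22B-v0.1-lora-weights"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype="auto")

# Build a chat-formatted prompt and generate a short completion.
inputs = tokenizer.apply_chat_template(
    [{"role": "user", "content": "Write a Python function that reverses a string."}],
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)
print(tokenizer.decode(model.generate(inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```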
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
morturr/Llama-2-7b-hf-PAIR_headlines_one_liners-COMB-one_liners-comb-3-seed-42-2025-06-23 | morturr | 2025-06-23T19:00:30Z | 0 | 0 | peft | ["peft", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:meta-llama/Llama-2-7b-hf", "base_model:adapter:meta-llama/Llama-2-7b-hf", "license:llama2", "region:us"] | null | 2025-06-23T19:00:21Z |
---
library_name: peft
license: llama2
base_model: meta-llama/Llama-2-7b-hf
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Llama-2-7b-hf-PAIR_headlines_one_liners-COMB-one_liners-comb-3-seed-42-2025-06-23
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-2-7b-hf-PAIR_headlines_one_liners-COMB-one_liners-comb-3-seed-42-2025-06-23
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a sketch of a matching TRL + PEFT setup follows the list):
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
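Only the values listed above come from this card; in the sketch below, the dataset name and LoRA ranks are illustrative assumptions.

```python
from datasets import load_dataset
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

dataset = load_dataset("some-org/pair-headlines-one-liners", split="train")  # hypothetical dataset

peft_config = LoraConfig(r=16, lora_alpha=32, task_type="CAUSAL_LM")  # ranks are assumed

args = SFTConfig(
    output_dir="llama2-pair-headlines-lora",  # illustrative name
    learning_rate=3e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=4,  # 16 x 4 = 64 total train batch size
    num_train_epochs=2,
    lr_scheduler_type="linear",
    optim="adamw_torch",
    seed=42,
)
trainer = SFTTrainer(
    model="meta-llama/Llama-2-7b-hf",
    args=args,
    train_dataset=dataset,
    peft_config=peft_config,
)
trainer.train()
```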
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.0.2
- Tokenizers 0.20.1
|
TOMFORD79/boom9 | TOMFORD79 | 2025-06-23T18:47:55Z | 0 | 0 | transformers | ["transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-06-23T18:42:37Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Official-Link-mezzo-fun-18-Viral-videos-XX/Official.VIDEO.mezzo.fun.Viral.Video.Tutorial | Official-Link-mezzo-fun-18-Viral-videos-XX | 2025-06-23T18:45:22Z | 0 | 0 | null | ["region:us"] | null | 2025-06-23T18:44:28Z |
[►►✅ 𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► 𝙁𝙪𝙡𝙡 𝙑𝙞𝙙𝙚𝙤️](https://viralinfo.xyz/video/?v=mezzo+fun)
[🔴►𝐂𝐋𝐈𝐂𝐊 𝐇𝐄𝐑𝐄 🌐==►► 𝐃𝐨𝐰𝐧𝐥𝐨𝐚𝐝 𝐍𝐨𝐰⬇️⬇️](https://viralinfo.xyz/video/?v=mezzo+fun)
[<img alt="WATCH NOW" src="https://i.ibb.co.com/xMMVF88/686577567.gif">](https://viralinfo.xyz/video/?v=mezzo+fun)
Mezzo Fun Viral Video: What Everyone Needs to Know About Online Ethics and Privacy
Mezzo fun viral video warnings highlight the importance of online ethics, privacy, and responsibility. Discover why watching such content is...
Mezzo Fun Full Original Video Goes Viral On Twitter/X And Reddit
Over the course of the last two days, a video titled mezzo fun has been trending on Google and social media platforms.
|
creaciones-pulso/metastyle_dpo_unsloth-Meta-Llama-3.1-8B-Instruct-bnb-4bit_8_3_0.0001_16_0.05 | creaciones-pulso | 2025-06-23T18:40:21Z | 11 | 0 | transformers | ["transformers", "safetensors", "gguf", "text-generation-inference", "unsloth", "llama", "en", "license:apache-2.0", "endpoints_compatible", "region:us"] | null | 2025-06-22T22:04:48Z |
---
base_model: unsloth/meta-llama-3.1-8b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
license: apache-2.0
language:
- en
---
# Uploaded fine-tuned model
- **Developed by:** creaciones-pulso
- **License:** apache-2.0
- **Finetuned from model:** unsloth/meta-llama-3.1-8b-instruct-bnb-4bit
This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
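A minimal loading sketch, assuming the safetensors weights can be loaded with Unsloth's `FastLanguageModel` API; the sequence length is an illustrative choice.

```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="creaciones-pulso/metastyle_dpo_unsloth-Meta-Llama-3.1-8B-Instruct-bnb-4bit_8_3_0.0001_16_0.05",
    max_seq_length=2048,  # illustrative
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # switch to the faster inference path
```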
|
chinna6/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-purring_giant_toad | chinna6 | 2025-06-23T18:04:20Z | 0 | 0 | transformers | ["transformers", "safetensors", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am purring giant toad", "unsloth", "trl", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-0.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct", "endpoints_compatible", "region:us"] | null | 2025-05-14T19:32:35Z |
---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-purring_giant_toad
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am purring giant toad
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-purring_giant_toad
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="chinna6/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-purring_giant_toad", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
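For orientation, a GRPO run with TRL typically looks like the sketch below. The dataset and the length-based reward function are illustrative stand-ins, not the Gensyn swarm setup actually used for this model.

```python
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

dataset = load_dataset("trl-lib/tldr", split="train")  # placeholder dataset

def reward_len(completions, **kwargs):
    # Toy reward: prefer completions close to 100 characters.
    return [-abs(100 - len(completion)) for completion in completions]

trainer = GRPOTrainer(
    model="Gensyn/Qwen2.5-0.5B-Instruct",  # the stated base model
    reward_funcs=reward_len,
    args=GRPOConfig(output_dir="Qwen2.5-0.5B-GRPO"),
    train_dataset=dataset,
)
trainer.train()
```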
### Framework versions
- TRL: 0.15.2
- Transformers: 4.48.2
- Pytorch: 2.5.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
siybupt/OpenBioLLM-8B-q4f16_1-MLC | siybupt | 2025-06-23T17:55:07Z | 0 | 0 | null | ["license:apache-2.0", "region:us"] | null | 2025-06-18T23:18:45Z |
---
license: apache-2.0
---
|
JayHyeon/pythia-2.8b-VIPO_5e-7_1.0vpo_const-1ep | JayHyeon | 2025-06-23T17:23:02Z | 7 | 0 | transformers | ["transformers", "safetensors", "gpt_neox", "text-generation", "generated_from_trainer", "trl", "dpo", "conversational", "dataset:trl-lib/ultrafeedback_binarized", "arxiv:2305.18290", "base_model:EleutherAI/pythia-2.8b", "base_model:finetune:EleutherAI/pythia-2.8b", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-06-20T16:06:36Z |
---
base_model: EleutherAI/pythia-2.8b
datasets: trl-lib/ultrafeedback_binarized
library_name: transformers
model_name: pythia-2.8b-VIPO_5e-7_1.0vpo_const-1ep
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---
# Model Card for pythia-2.8b-VIPO_5e-7_1.0vpo_const-1ep
This model is a fine-tuned version of [EleutherAI/pythia-2.8b](https://huggingface.co/EleutherAI/pythia-2.8b) on the [trl-lib/ultrafeedback_binarized](https://huggingface.co/datasets/trl-lib/ultrafeedback_binarized) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="JayHyeon/pythia-2.8b-VIPO_5e-7_1.0vpo_const-1ep", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/bonin147/huggingface/runs/cce7zplh)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
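For orientation, a DPO run on this dataset with TRL looks roughly like the sketch below; the learning rate is inferred from the model name and the output directory is illustrative.

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model = AutoModelForCausalLM.from_pretrained("EleutherAI/pythia-2.8b")
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/pythia-2.8b")
dataset = load_dataset("trl-lib/ultrafeedback_binarized", split="train")

args = DPOConfig(output_dir="pythia-2.8b-VIPO", learning_rate=5e-7)  # lr inferred from the name
trainer = DPOTrainer(
    model=model,
    args=args,
    train_dataset=dataset,
    processing_class=tokenizer,
)
trainer.train()
```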
### Framework versions
- TRL: 0.13.0.dev0
- Transformers: 4.47.0.dev0
- Pytorch: 2.5.1
- Datasets: 3.1.0
- Tokenizers: 0.20.3
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
Hachipo/Qwen2.5-7B-MIFT-en_newbase_v2-EnTrans_10000_3 | Hachipo | 2025-06-23T17:11:44Z | 0 | 0 | transformers | ["transformers", "safetensors", "qwen2", "text-generation", "trl", "sft", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-06-23T17:08:43Z |
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
dgambettaphd/M_llm3_run0_gen7_WXS_doc1000_synt64_lr1e-04_acm_SYNLAST | dgambettaphd | 2025-06-23T17:05:59Z | 0 | 0 | transformers | ["transformers", "safetensors", "unsloth", "arxiv:1910.09700", "endpoints_compatible", "region:us"] | null | 2025-06-23T17:05:45Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
winnieyangwannan/entity-visual_Qwen2.5-VL-7B-Instruct_mlp-down_positive-negative-addition-same_layer_2_1_3_49 | winnieyangwannan | 2025-06-23T16:54:04Z | 0 | 0 | transformers | ["transformers", "safetensors", "qwen2_5_vl", "image-text-to-text", "conversational", "arxiv:1910.09700", "text-generation-inference", "endpoints_compatible", "region:us"] | image-text-to-text | 2025-06-23T16:51:40Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
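In the meantime, a minimal sketch using the `image-text-to-text` pipeline, matching the repo's pipeline tag; the image URL and prompt are placeholders.

```python
from transformers import pipeline

pipe = pipeline(
    "image-text-to-text",
    model="winnieyangwannan/entity-visual_Qwen2.5-VL-7B-Instruct_mlp-down_positive-negative-addition-same_layer_2_1_3_49",
)
messages = [{
    "role": "user",
    "content": [
        {"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/bee.jpg"},
        {"type": "text", "text": "Describe this image in one sentence."},
    ],
}]
out = pipe(text=messages, max_new_tokens=64, return_full_text=False)
print(out[0]["generated_text"])
```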
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ryzax/qwen3_1.7B_sft_correct_v6_new_1e-5_4 | ryzax | 2025-06-23T16:39:59Z | 257 | 0 | transformers | ["transformers", "safetensors", "qwen3", "text-generation", "generated_from_trainer", "trl", "sft", "conversational", "base_model:Qwen/Qwen3-1.7B", "base_model:finetune:Qwen/Qwen3-1.7B", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-06-21T21:45:05Z |
---
base_model: Qwen/Qwen3-1.7B
library_name: transformers
model_name: qwen3_1.7B_sft_correct_v6_new_1e-5_4
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for qwen3_1.7B_sft_correct_v6_new_1e-5_4
This model is a fine-tuned version of [Qwen/Qwen3-1.7B](https://huggingface.co/Qwen/Qwen3-1.7B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="ryzax/qwen3_1.7B_sft_correct_v6_new_1e-5_4", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/zc096373/s1/runs/ltrbbgt8)
This model was trained with SFT.
### Framework versions
- TRL: 0.19.0
- Transformers: 4.52.4
- Pytorch: 2.6.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
mradermacher/r1-q3-x2-GGUF | mradermacher | 2025-06-23T16:13:23Z | 218 | 0 | transformers | ["transformers", "gguf", "en", "base_model:miike-ai/DeepSeek-R1-0528-Qwen3-11B", "base_model:quantized:miike-ai/DeepSeek-R1-0528-Qwen3-11B", "endpoints_compatible", "region:us", "conversational"] | null | 2025-06-09T09:16:14Z |
---
base_model: miike-ai/DeepSeek-R1-0528-Qwen3-11B
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/miike-ai/DeepSeek-R1-0528-Qwen3-11B
<!-- provided-files -->
Weighted/imatrix quants are not available (from me) at this time. If they do not show up a week or so after the static ones, I have probably not planned them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
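As one concrete option, the sketch below downloads a single quant from this repo and runs it through the `llama-cpp-python` bindings; the quant choice and prompt are illustrative.

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

path = hf_hub_download(
    repo_id="mradermacher/r1-q3-x2-GGUF",
    filename="r1-q3-x2.Q4_K_M.gguf",  # one of the quants listed below
)
llm = Llama(model_path=path, n_ctx=4096)
out = llm("Q: What does a Q4_K_M quant trade off?\nA:", max_tokens=64)
print(out["choices"][0]["text"])
```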
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable over similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/r1-q3-x2-GGUF/resolve/main/r1-q3-x2.Q2_K.gguf) | Q2_K | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/r1-q3-x2-GGUF/resolve/main/r1-q3-x2.Q3_K_S.gguf) | Q3_K_S | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/r1-q3-x2-GGUF/resolve/main/r1-q3-x2.Q3_K_M.gguf) | Q3_K_M | 5.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/r1-q3-x2-GGUF/resolve/main/r1-q3-x2.Q3_K_L.gguf) | Q3_K_L | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/r1-q3-x2-GGUF/resolve/main/r1-q3-x2.IQ4_XS.gguf) | IQ4_XS | 5.9 | |
| [GGUF](https://huggingface.co/mradermacher/r1-q3-x2-GGUF/resolve/main/r1-q3-x2.Q4_K_S.gguf) | Q4_K_S | 6.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/r1-q3-x2-GGUF/resolve/main/r1-q3-x2.Q4_K_M.gguf) | Q4_K_M | 6.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/r1-q3-x2-GGUF/resolve/main/r1-q3-x2.Q5_K_S.gguf) | Q5_K_S | 7.4 | |
| [GGUF](https://huggingface.co/mradermacher/r1-q3-x2-GGUF/resolve/main/r1-q3-x2.Q5_K_M.gguf) | Q5_K_M | 7.6 | |
| [GGUF](https://huggingface.co/mradermacher/r1-q3-x2-GGUF/resolve/main/r1-q3-x2.Q6_K.gguf) | Q6_K | 8.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/r1-q3-x2-GGUF/resolve/main/r1-q3-x2.Q8_0.gguf) | Q8_0 | 11.3 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/r1-q3-x2-GGUF/resolve/main/r1-q3-x2.f16.gguf) | f16 | 21.1 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
ntphiep/vit5_stp_chinese | ntphiep | 2025-06-23T15:56:59Z | 0 | 0 | transformers | ["transformers", "safetensors", "mt5", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us"] | text2text-generation | 2025-06-23T15:53:02Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
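Pending the official snippet, a minimal sketch using the `text2text-generation` pipeline, matching the repo's pipeline tag; the input string is a placeholder, and the prompt format this checkpoint expects is not documented.

```python
from transformers import pipeline

pipe = pipeline("text2text-generation", model="ntphiep/vit5_stp_chinese")
print(pipe("An example input sentence.", max_new_tokens=64)[0]["generated_text"])
```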
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
wy99/llama_test | wy99 | 2025-06-23T15:49:36Z | 0 | 0 | transformers | ["transformers", "safetensors", "generated_from_trainer", "trl", "sft", "base_model:meta-llama/Llama-3.1-8B-Instruct", "base_model:finetune:meta-llama/Llama-3.1-8B-Instruct", "endpoints_compatible", "region:us"] | null | 2025-06-20T22:58:11Z |
---
base_model: meta-llama/Llama-3.1-8B-Instruct
library_name: transformers
model_name: llama_test
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for llama_test
This model is a fine-tuned version of [meta-llama/Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="wy99/llama_test", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.19.0
- Transformers: 4.52.4
- Pytorch: 2.5.1+cu121
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
blazarev/roberta-emotional-hub
|
blazarev
| 2025-06-23T15:37:47Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-06-23T15:37:23Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
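In the absence of the details above, here is a minimal sketch assuming the standard 🤗 `pipeline` API; the classifier's label set is not documented:

```python
from transformers import pipeline

# Assumes this checkpoint loads as an ordinary text-classification model.
classifier = pipeline("text-classification", model="blazarev/roberta-emotional-hub")
print(classifier("I can't believe how well this turned out!"))
```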
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
GleghornLab/production_ss9_model
|
GleghornLab
| 2025-06-23T15:22:54Z | 23 | 0 |
transformers
|
[
"transformers",
"safetensors",
"ESMplusplus",
"token-classification",
"custom_code",
"arxiv:2506.08293",
"autotrain_compatible",
"region:us"
] |
token-classification
| 2025-05-08T02:18:36Z |
---
library_name: transformers
tags: []
---
# DSM: Diffusion Models for Protein Sequence Generation
### Note: This readme is shared between our GitHub and Huggingface pages.
## Table of Contents
- [Introduction](#introduction)
- [Models](#models)
- [Usage](#usage)
- [Demos](#demos)
- [Local installation](#installation)
- [Training](#training)
- [Evaluation](#evaluation)
- [Results](#results)
- [Cite](#cite)
## Introduction
DSM (Diffusion Sequence Model) is a novel Protein Language Model (pLM) developed in collaboration between the [Gleghorn Lab](https://www.gleghornlab.com/) and [Synthyra](https://synthyra.com/). It was trained with masked diffusion to enable both high-quality representation learning and generative protein design. This repository contains the code for training, evaluating, and applying DSM and its variants.
DSM is capable of generating diverse, biomimetic sequences that align with expected amino acid compositions, secondary structures, and predicted functions. Furthermore, DSM's learned representations match or exceed those of comparably sized pLMs on various downstream tasks. DSM is detailed extensively in our [preprint](https://arxiv.org/abs/2506.08293) (which is currently in review). Beyond the base and PPI variants, we are currently training versions to jointly diffuse over sequence and foldseek tokens, as well as [Annotation Vocabulary](https://www.biorxiv.org/content/10.1101/2024.07.30.605924v1) tokens. Since the preprint release, Synthyra has trained [Synthyra/DSM_ppi_full](https://huggingface.co/Synthyra/DSM_ppi_full), which drops the LoRA PPI training in favor of full fine-tuning. Additionally, SeqA and SeqB are jointly masked, rather than just SeqB as in the original version. We plan on adding the **many** new results to the second version of the preprint and eventual journal article.
## Models
Relevant Huggingface hosted models and datasets
- **Base DSM Models**:
- [GleghornLab/DSM_150](https://huggingface.co/GleghornLab/DSM_150) - 150M parameter DSM model
- [GleghornLab/DSM_650](https://huggingface.co/GleghornLab/DSM_650) - 650M parameter DSM model
- **DSM-ppi Models**:
(LoRA versions - results reported in paper but not recommended for real use)
- [GleghornLab/DSM_150_ppi_lora](https://huggingface.co/GleghornLab/DSM_150_ppi_lora) - 150M parameter LoRA DSM-ppi model
- [GleghornLab/DSM_650_ppi_lora](https://huggingface.co/GleghornLab/DSM_650_ppi_lora) - 650M parameter LoRA DSM-ppi model
- [GleghornLab/DSM_150_ppi_control](https://huggingface.co/GleghornLab/DSM_150_ppi_control) - Control version of LoRA DSM-ppi
(Fully finetuned - recommended for real use)
- [Synthyra/DSM_ppi_full](https://huggingface.co/Synthyra/DSM_ppi_full) - 650M parameter DSM-ppi model
- **Datasets**:
- [Synthyra/omg_prot50](https://huggingface.co/datasets/Synthyra/omg_prot50) - Open MetaGenomic dataset clustered at 50% identity (207M sequences)
- [GleghornLab/stringv12_modelorgs_9090](https://huggingface.co/datasets/GleghornLab/stringv12_modelorgs_9090) - STRING database model organisms (653k sequences)
- **Utility Models**:
- [GleghornLab/production_ss4_model](https://huggingface.co/GleghornLab/production_ss4_model) - Secondary structure prediction (4-class)
- [GleghornLab/production_ss9_model](https://huggingface.co/GleghornLab/production_ss9_model) - Secondary structure prediction (9-class)
## Usage
This section outlines how to use a trained `DSM` model for common generation tasks. The core generation logic is provided by the `GenerateMixin` class, used by `DSM` models.
First, ensure you have a trained model (either one you trained or a pre-trained one from Hugging Face Hub) and the necessary environment set up.
```python
import torch
from models.modeling_dsm import DSM # Or DSM_ppi for binder generation
# Load a pre-trained model
model_name_or_path = "GleghornLab/DSM_650" # Replace with your model of choice
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = DSM.from_pretrained(model_name_or_path).to(device).eval()
tokenizer = model.tokenizer
```
```console
You are using a model of type esm_diff to instantiate a model of type dsm. This is not supported for all configurations of models and can yield errors.
```
This warning is normal - all good!
### 1. Unconditional Sequence Generation
To generate a novel sequence of a specific length, DSM uses a progressive denoising approach.
```python
### Unconditional generation
length = 100
mask_token = tokenizer.mask_token
# optionally, enforce starting with methionine
input_tokens = tokenizer.encode('M' + ''.join([mask_token] * (length - 1)), add_special_tokens=True, return_tensors='pt').to(device)
output = model.mask_diffusion_generate(
tokenizer=tokenizer,
input_tokens=input_tokens,
step_divisor=100, # lower is slower but better
temperature=1.0, # sampling temperature
remasking="random", # strategy for remasking tokens not kept
    preview=False, # set this to True to watch the mask tokens get filled in real time
slow=False, # adds a small delay to the real time filling (because it is usually very fast and watching carefully is hard!)
return_trajectory=False # set this to True to return the trajectory of the generation (what you watch in the preview)
) # Note: output will be a tuple if return_trajectory is True
generated_sequences = model.decode_output(output)
print(f"Generated sequence: {generated_sequences[0]}")
```
```console
Generated sequence: MFRVDALQVAQQETLAIGRSTAYDKQESPSMAQRQVLTQLAAYGGENDLRQICIPAERRNFLSIANGASYQFVEEDNEANGGYWSPHKAGLPESACKRFI
```
### 2. Mask Filling (Inpainting)
To fill in masked regions of a template sequence:
```python
# Mask Filling / Inpainting
template_sequence = "MA<mask><mask><mask>KEG<mask><mask>STL"
input_tokens = tokenizer.encode(template_sequence, add_special_tokens=True, return_tensors='pt').to(device)
output = model.mask_diffusion_generate(
tokenizer=tokenizer,
input_tokens=input_tokens,
step_divisor=100, # lower is slower but better
temperature=1.0, # sampling temperature
remasking="random", # strategy for remasking tokens not kept
    preview=False, # set this to True to watch the mask tokens get filled in real time
slow=False, # adds a small delay to the real time filling (because it is usually very fast and watching carefully is hard!)
return_trajectory=False # set this to True to return the trajectory of the generation (what you watch in the preview)
) # Note: output will be a tuple if return_trajectory is True
generated_sequences = model.decode_output(output)
print(f"Generated sequence: {generated_sequences[0]}")
```
```console
Generated sequence: MAVKFKEGGISTL
```
### 3. Conditional Generation (e.g., Binders - using DSM-ppi)
```python
# from models.modeling_dsm import DSM_ppi
# model_binder = DSM_ppi.from_pretrained("GleghornLab/DSM_650_ppi_lora").to(device).eval()
# The LoRA version from the paper can produce unreliable outputs
# Synthyra has generously trained a version through full fine-tuning
model = DSM.from_pretrained("Synthyra/DSM_ppi_full").to(device).eval()
# BBF-14
target_seq = "MGTPLWALLGGPWRGTATYEDGTKVTLDYRYTRVSPDRLRADVTYTTPDGTTLEATVDLWKDANGVIRYHATYPDGTSADGTLTQLDADTLLATGTYDDGTKYTVTLTRVAPGSGWHHHHHH"
# For binder generation, the 'interactor' (SeqB) part is what gets generated/filled.
# Start with a fully masked interactor of desired length.
interactor_template_len = 256
interactor_template = ''.join([mask_token] * interactor_template_len)
combined_input_str = target_seq + '<eos>' + interactor_template
input_tokens = tokenizer.encode(combined_input_str, add_special_tokens=True, return_tensors='pt').to(device)
output = model.mask_diffusion_generate(
tokenizer=tokenizer,
input_tokens=input_tokens,
step_divisor=100, # lower is slower but better
temperature=1.0, # sampling temperature
remasking="random", # strategy for remasking tokens not kept
    preview=False, # set this to True to watch the mask tokens get filled in real time
slow=False, # adds a small delay to the real time filling (because it is usually very fast and watching carefully is hard!)
return_trajectory=False # set this to True to return the trajectory of the generation (what you watch in the preview)
) # Note: output will be a tuple if return_trajectory is True
target, binder = model.decode_dual_input(output, seperator='<eos>')
# decode_dual_input splits the decoded output on the separator token ('<eos>') and returns both halves.
print(f"Generated binder {binder[0]}")
```
```console
Generated binder HRHHHRRPTHARETEWLARMRLGIAEHQRIAVPRSDLEPDQMRERAADNQRLVKEYDQVIDHQTEGSTERLFEVLRVWEQVNTEQAHHEASAALEFGRVGYPDDEGGRAFYTQANAHKKDLVEYIGGIDEDAKWDPRIAWLMPEGGQPVKATVIGVSEERINGLKVLDDHWGRERRLWLINLFTALQAYDDPTRPTQVTLTPATDQLTNDVQYLLLSTRYTPPGVTTAVKIRKLDGRTLKVLTTEAPYVVRGATLS
```
Folded with Chai1:

`Synthyra/DSM_ppi_full` was actually trained to fill masks from any part of SeqA and SeqB. That means you can fully hallucinate plausibly interacting protein pairs.
```python
seq_a_length = 128
seq_b_length = 128
seq_a_template = ''.join([mask_token] * seq_a_length)
seq_b_template = ''.join([mask_token] * seq_b_length)
combined_input_str = seq_a_template + '<eos>' + seq_b_template
input_tokens = tokenizer.encode(combined_input_str, add_special_tokens=True, return_tensors='pt').to(device)
output = model.mask_diffusion_generate(
tokenizer=tokenizer,
input_tokens=input_tokens,
step_divisor=10, # lower is slower but better
temperature=1.0, # sampling temperature
remasking="random", # strategy for remasking tokens not kept
    preview=False, # set this to True to watch the mask tokens get filled in real time
slow=False, # adds a small delay to the real time filling (because it is usually very fast and watching carefully is hard!)
return_trajectory=False # set this to True to return the trajectory of the generation (what you watch in the preview)
) # Note: output will be a tuple if return_trajectory is True
seqa, seqb = model.decode_dual_input(output, seperator='<eos>')
# decode_dual_input splits the decoded output on the separator token ('<eos>') and returns both halves.
print(f"SeqA: {seqa[0][5:]}") # remove cls token
print(f"SeqB: {seqb[0]}")
```
```console
SeqA: MVNLAKMRQRTEQNLREVSSFVKILFHTVLKFPMKINIGIHVHINMQAAQNAAADQNMQATNVIDLHNFKMGKDIGVDNKASATAHIYDEAHHTFLQLGAIKLLHAIPMIAGPVRCRLPIGFGHRFRG
SeqB: HYKNPMHSLLDSNVLHKDVVEVRLPIKIGMELDVMASAMREFLMPGTQQGDLRVIAEKRPVNKLHTYRRDLVKLLLAGAKLGTEAKSVELDLYRTELGGLVVYIININIATWDIIFAKVKICRGNDKP
```
Folded with Chai1:

## Demos
There are various demos with many more to come. For example, in `demo_dsm_ppi_full.py` (run by `python -m demos.demo_dsm_ppi_full`) we perform a test on DSM-ppi.
We take 1000 protein pairs from BIOGRID (real protein-protein interactions) and 1000 from Negatome (non-interacting protein pairs) and mask the second sequence (SeqB) by 50%.
This acts as a sanity check, as we expect the accuracy on reconstructing real positive PPIs to be higher than the accuracy on non-interacting proteins.
Indeed, this is the case:
```console
==================================================
RESULTS COMPARISON
==================================================
Positive examples:
Mean accuracy: 0.495 ± 0.322
Processed: 1000 examples
Negative examples:
Mean accuracy: 0.227 ± 0.231
Processed: 1000 examples
Difference (Positive - Negative): 0.267
T-test: t=21.331, p=0.000
Difference is statistically significant (p < 0.05)
```
## Installation
1. **Clone the repository:**
```bash
git clone <repository-url>
cd <repository-name>
```
2. **Initialize the submodules:**
```bash
git submodule update --init --remote --recursive
```
3. **Set up the Python virtual environment:**
The `setup_bioenv.sh` script creates a virtual environment named `bioenv` in your home directory (`~/bioenv`), installs PyTorch with CUDA 12.6 support, and then installs all other dependencies from `requirements.txt`.
Make the script executable:
```bash
chmod +x setup_bioenv.sh
```
Run the script:
```bash
./setup_bioenv.sh
```
If you are not on a Linux machine, you can install the requirements directly:
```console
python -m pip install -r requirements.txt
```
4. **Activate the environment:**
Each time you want to work on this project, activate the virtual environment:
```bash
source ~/bioenv/bin/activate
```
5. **To deactivate the environment:**
```bash
deactivate
```
## Training
The primary script for training models is `training/train_dsm.py`. This script further pretrains an ESM2 checkpoint using the DSM objective (masked diffusion based on LLaDA) on a large protein sequence dataset like [OMG-prot50](https://huggingface.co/datasets/Synthyra/omg_prot50).
### Main Training Script: `train_dsm.py`
- **Base Model**: DSM models are extended from pre-trained ESM2 checkpoints (e.g., ESM2-150M, ESM2-650M).
- **Training Objective**: Masked diffusion loss, where the model predicts masked tokens. The loss is scaled by `1/(t + epsilon)` where `t` is the corruption level, penalizing errors more at low mask rates (see the sketch after this list).
- **Language Modeling Head**: Uses a modified head with a soft-logit cap (`tau=30`) and tied output projection weights to the token embeddings.
- **Data Handling**:
- Training data can be streamed from datasets like [Synthyra/omg_prot50](https://huggingface.co/datasets/Synthyra/omg_prot50) (a version of Open MetaGenomic dataset clustered at 50% identity).
- Uses `data.dataset_classes.SequenceDatasetFromList` for validation/test sets and `data.dataset_classes.IterableDatasetFromHF` for streaming training.
- `data.data_collators.SequenceCollator` is used for batching.
- **Training Process**:
- Utilizes Hugging Face `TrainingArguments`.
- A custom `IterableTrainer` (from `training.iterable_trainer.py`) handles iterable datasets.
- Uses AdamW optimizer and a cosine learning rate scheduler with linear warmup.
- Supports logging to Weights & Biases (wandb).
- The trained model can be pushed to Hugging Face Hub.
- Example checkpoints mentioned in the paper: [DSM-150](https://huggingface.co/GleghornLab/DSM_150) (from ESM2-150M, 100k steps, batch 32, seqlen 512, LR 1e-4) and [DSM-650](https://huggingface.co/GleghornLab/DSM_650) (from ESM2-650M, 100k steps, global batch 128, seqlen 2048, LR 1e-4).
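The corruption-weighted objective described above can be sketched as follows; this illustrates the weighting only, it is not the repository's actual training code, and the epsilon value is an assumption:

```python
import torch.nn.functional as F

def masked_diffusion_loss(logits, labels, mask, t, eps=1e-3):
    """Cross-entropy over masked positions only, scaled by 1/(t + eps),
    where t is the corruption level (fraction of tokens masked)."""
    ce = F.cross_entropy(logits[mask], labels[mask])
    return ce / (t + eps)
```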
**Usage Example:**
```bash
python -m training.train_dsm \
--model_path facebook/esm2_t33_650M_UR50D \
--save_path GleghornLab/DSM_650 \
--lr 1e-4 \
--batch_size 8 \
--grad_accum 16 \
--max_steps 100000 \
--save_every 1000 \
--fp16 \
--wandb_project "DSM_Training" \
--token <your_hf_token_if_needed_for_private_repo_or_saving>
```
**Key Command-Line Arguments for `train_dsm.py`:**
* `--token`: Hugging Face token.
* `--model_path`: Path to the base ESM2 model to start from.
* `--save_path`: Path to save the trained DSM model on Hugging Face Hub.
* `--lr`: Learning rate.
* `--batch_size`: Batch size per device.
* `--grad_accum`: Gradient accumulation steps.
* `--max_steps`: Maximum training steps.
* `--wandb_project`: Wandb project name (default: `DSM`).
* `--max_length`: Maximum sequence length.
* `--save_every`: Save model and evaluate every N steps.
* `--fp16`: Enable mixed-precision training.
* `--bugfix`: Use small batch size and max length for debugging.
### Other Training Scripts (e.g., for DSM-ppi)
The `training/` directory may also contain scripts like `train_dsm_bind.py`.
- DSM-ppi (e.g., [DSM-150-ppi](https://huggingface.co/GleghornLab/DSM_150_ppi_lora), [DSM-650-ppi](https://huggingface.co/GleghornLab/DSM_650_ppi_lora)) is fine-tuned on PPI datasets.
- Training involves conditioning on a target sequence (SeqA) to generate an interactor (SeqB) using the format `[CLS]--SeqA--[EOS]--[MASKED~SeqB]--[EOS]`.
- LoRA (Low-Rank Adaptation) can be applied to attention layers for efficient fine-tuning.
And `training/iterable_trainer.py` provides the `get_iterable_trainer` function used by `train_dsm.py` to enable training with iterable datasets.
## Evaluation
The repository includes a comprehensive suite for evaluating model performance, focusing on:
1. **Sequence Reconstruction (Mask Filling):**
* Evaluated by masking validation/test sets at various corruption rates (5% to 90%) and measuring cross-entropy loss, weighted F1 score, and Alignment Score (ASc) for the masked positions.
* The script `evaluation/mask_filling.py` is central to this.
2. **Unconditional Generation Quality:**
* Generate a corpus of sequences based on lengths from a reference set (e.g., validation data).
* Compare distributions (1-mers, 2-mers, 3-mers) of amino acids and predicted secondary structures between generated and natural sequences using χ² test and Jensen-Shannon (JS) divergence.
* Compare distributions of predicted functional annotations (e.g., using Annotation Vocabulary - AV terms).
* Scripts involved: `evaluation/unconditional_generation_tuning.py` (to find optimal generation parameters like temperature and step divisor `s`), `evaluation/unconditional_generation.py`, `evaluation/ss_pred.py` (using [production_ss4_model](https://huggingface.co/GleghornLab/production_ss4_model) or [production_ss9_model](https://huggingface.co/GleghornLab/production_ss9_model)), `evaluation/annotate_comparisons.py`, `evaluation/compare_distributions.py`, `evaluation/plot_distribution_comparisons.py`.
* The `run_eval_pipeline.py` script automates this workflow.
3. **Representation Quality (Model Probing):**
* Evaluate learned embeddings by training linear probes (or simple transformer blocks) on various downstream tasks (e.g., secondary structure prediction, localization prediction, etc.).
* Performance is compared against random vectors, randomized transformers, and other established pLMs.
* The assessment was done with [Protify](https://github.com/Synthyra/Protify), an open-source framework that can be used for pLM training and evaluation.
4. **Conditional Generation (Binder Design for DSM-ppi):**
* Evaluate DSM-ppi on benchmarks like BenchBB.
* Generate binders for target proteins using template-based masking strategies.
* Assess generated binders using *in-silico* tools like Synteract2 for predicted binding affinity (ppKd).
The `evaluation/` directory also contains a `readme.md` which provides further details on some evaluation workflows. Key metrics used include:
- **Alignment Score (ASc):** A normalized Needleman-Wunsch global alignment score (using BLOSUM62) to measure sequence similarity, robust to length variations: ASc(a, b) = l/(f(a, a) - f(a, b) + l). A minimal sketch follows this list.
- **Jensen-Shannon (JS) Divergence:** To compare distributions of k-mers and functional terms.
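A literal reading of the ASc formula above, sketched with Biopython's global (Needleman-Wunsch) aligner and BLOSUM62; the gap penalties are assumptions, since the exact alignment settings are not restated here:

```python
from Bio import Align
from Bio.Align import substitution_matrices

# Global alignment (Needleman-Wunsch) scored with BLOSUM62.
aligner = Align.PairwiseAligner()
aligner.mode = "global"
aligner.substitution_matrix = substitution_matrices.load("BLOSUM62")
aligner.open_gap_score = -10    # assumed gap penalties, not from the paper
aligner.extend_gap_score = -0.5

def asc(a: str, b: str) -> float:
    """ASc(a, b) = l / (f(a, a) - f(a, b) + l), with l = len(a)."""
    l = len(a)
    return l / (aligner.score(a, a) - aligner.score(a, b) + l)
```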
**Running the Full Unconditional Evaluation Pipeline:**
```bash
python run_eval_pipeline.py --token YOUR_HF_TOKEN --data_dir ./evaluation_results
```
Refer to `run_eval_pipeline.py --help` for more options, such as `--skip_tuning`.
### Mask Filling Evaluation
The script `evaluation/mask_filling.py` is used to evaluate models on their ability to predict masked tokens in a sequence across various masking rates.
- **Functionality:**
- Evaluates different models (DSM, DPLM, standard ESM models).
- Tests across multiple datasets ([Synthyra/omg_prot50](https://huggingface.co/datasets/Synthyra/omg_prot50), [GleghornLab/stringv12_modelorgs_9090](https://huggingface.co/datasets/GleghornLab/stringv12_modelorgs_9090)).
- Calculates metrics: loss, perplexity, precision, recall, F1, accuracy, MCC, and alignment score.
- Saves detailed results to CSV files.
- Can generate a summary plot comparing model performance across different mask rates using `evaluation/plot_mask_fill_results.py`.
- **Usage Example:**
```bash
python -m evaluation.mask_filling \
--token YOUR_HF_TOKEN \
--batch_size 4 \
--mask_rates 0.15 0.30 0.50 \
--data_splits valid test \
--results_dir ./results/mask_fill_custom
```
To generate a comparison plot from existing results:
```bash
python -m evaluation.mask_filling --generate_comparison_plot --results_dir ./results/mask_fill_custom --plot_output ./results/mask_fill_custom/comparison.png
```
### Other Evaluation Scripts
The `evaluation/` directory contains additional scripts for more specific analyses. These are typically run independently:
- `evaluation/all_targets_uncond.py` and `evaluation/all_targets_cond.py`: Likely for evaluating generation towards specific targets, unconditionally and conditionally.
- `evaluation/conditional_binder.py` and `evaluation/unconditional_binder.py`: Suggest evaluation focused on generating protein binders.
- `evaluation/unconditional_by_length.py`: May evaluate unconditional generation focusing on sequence length distributions.
- `evaluation/utils.py`: Utility functions for evaluation scripts.
Users should refer to individual scripts (e.g., using `python -m evaluation.<script_name> --help`) for their specific usage and arguments.
The `evaluation/` directory also contains a `readme.md` which provides further details on the unconditional generation evaluation workflow.
## Results
DSM demonstrates strong performance in both protein sequence generation and representation learning, establishing masked diffusion as a powerful paradigm.
- **Biomimetic Sequence Generation**: Unconditionally generated DSM sequences closely mimic natural protein distributions in terms of amino acid k-mers, predicted secondary structures (JS divergence < 0.01 for AA k-mers), and predicted functional annotations (AV terms, JS divergence ~0.1). This suggests DSM captures underlying biological principles.
- **Superior Sequence Reconstruction**: DSM models significantly outperform MLM-based ESM2 models in reconstructing sequences from highly corrupted inputs (up to 90% masking).
- At 90% masking, DSM achieves an Alignment Score (ASc) of ~0.27, considerably higher than random.
- DSM models show higher F1 scores in reconstruction tasks compared to DPLM models, especially at high mask rates.
- **High-Quality Embeddings**: DSM embeddings match or exceed the quality of those from comparably sized pLMs (ESM2, DPLM) and even larger autoregressive models (ProtCLM 1B) on various downstream tasks evaluated by linear probing. [DSM-650](https://huggingface.co/GleghornLab/DSM_650) generally provides the best representations among tested models of similar size.
- **Effective Binder Design (DSM-ppi):**
- DSM-ppi fine-tuned on protein-protein interaction data, demonstrates the ability to generate protein binders conditioned on target sequences.
- On the BenchBB benchmark, DSM-generated binders (both unconditional DSM and conditional DSM-ppi) show promising predicted binding affinities, in some cases superior to known binders. For example, designs for EGFR showed high predicted pKd and good structural metrics (ipTM, pTM with AlphaFold3).
- **Efficiency**: DSM can generate realistic protein sequences from a single forward pass during reconstruction tasks at high mask rates, offering potential efficiency advantages over iterative AR or some discrete diffusion models.
These results highlight DSM's capability to unify high-quality protein representation learning and biologically coherent generative modeling within a single framework.
## Cite
```
@misc{hallee2025diffusionsequencemodelsenhanced,
title={Diffusion Sequence Models for Enhanced Protein Representation and Generation},
author={Logan Hallee and Nikolaos Rafailidis and David B. Bichara and Jason P. Gleghorn},
year={2025},
eprint={2506.08293},
archivePrefix={arXiv},
primaryClass={q-bio.BM},
url={https://arxiv.org/abs/2506.08293},
}
```
|
uomene/rihovy
|
uomene
| 2025-06-23T15:09:20Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-06-23T14:59:25Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: rihovy
---
# Rihovy
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `rihovy` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "rihovy",
"lora_weights": "https://huggingface.co/uomene/rihovy/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('uomene/rihovy', weight_name='lora.safetensors')
image = pipeline('rihovy').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 1000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/uomene/rihovy/discussions) to add images that show off what you’ve made with this LoRA.
|
daixuancheng/fix-entropy-1e-3_train_math_global_step_140
|
daixuancheng
| 2025-06-23T14:33:39Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-23T13:41:39Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
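In the absence of documented usage, a minimal sketch based on the `text-generation` tag; the prompt and device settings are illustrative:

```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="daixuancheng/fix-entropy-1e-3_train_math_global_step_140",
    device_map="auto",
)
messages = [{"role": "user", "content": "What is 12 * 7?"}]
print(generator(messages, max_new_tokens=64, return_full_text=False)[0]["generated_text"])
```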
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
apriasmoro/b87b252c-b513-4ac8-ad4d-02a9e33ecb1b
|
apriasmoro
| 2025-06-23T14:07:13Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Phi-3.5-mini-instruct",
"base_model:adapter:unsloth/Phi-3.5-mini-instruct",
"license:mit",
"region:us"
] | null | 2025-06-23T13:46:51Z |
---
library_name: peft
license: mit
base_model: unsloth/Phi-3.5-mini-instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: b87b252c-b513-4ac8-ad4d-02a9e33ecb1b
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.10.0.dev0`
```yaml
adapter: lora
base_model: unsloth/Phi-3.5-mini-instruct
bf16: true
datasets:
- data_files:
- fca6f015951a6e0c_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/
type:
field_instruction: instruct
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
eval_max_new_tokens: 128
evals_per_epoch: 4
flash_attention: false
fp16: false
gradient_accumulation_steps: 1
gradient_checkpointing: true
group_by_length: true
hub_model_id: apriasmoro/b87b252c-b513-4ac8-ad4d-02a9e33ecb1b
learning_rate: 0.0002
load_in_4bit: false
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: false
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 286
micro_batch_size: 16
mlflow_experiment_name: /tmp/fca6f015951a6e0c_train_data.json
output_dir: llama3_lora_output
rl: null
sample_packing: true
save_steps: 0
sequence_len: 2048
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: true
trl: null
trust_remote_code: true
wandb_name: 89013dd0-4f4c-48e4-8c89-398b9355f465
wandb_project: Gradients-On-Demand
wandb_run: llama3_h200_run
wandb_runid: 89013dd0-4f4c-48e4-8c89-398b9355f465
warmup_steps: 100
weight_decay: 0.01
```
</details><br>
# b87b252c-b513-4ac8-ad4d-02a9e33ecb1b
This model is a fine-tuned version of [unsloth/Phi-3.5-mini-instruct](https://huggingface.co/unsloth/Phi-3.5-mini-instruct) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 286
### Training results
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.5.1+cu124
- Datasets 3.5.1
- Tokenizers 0.21.1
|
BKM1804/Qwen2.5-1.5B-4cc25694-0c92-4c5c-a769-bd8d3bf66b80-SFT_DPO
|
BKM1804
| 2025-06-23T13:50:36Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"trl",
"sft",
"dpo",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-23T13:49:05Z |
---
library_name: transformers
tags:
- trl
- sft
- dpo
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
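Usage is not documented; a minimal sketch with `AutoModelForCausalLM` and the tokenizer's chat template (generation settings are illustrative):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "BKM1804/Qwen2.5-1.5B-4cc25694-0c92-4c5c-a769-bd8d3bf66b80-SFT_DPO"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "Explain in one sentence what DPO training does."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```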
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
GoshKolotyan/w2v-bert-2.0-Armenian
|
GoshKolotyan
| 2025-06-23T13:33:29Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"wav2vec2-bert",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice_17_0",
"base_model:facebook/w2v-bert-2.0",
"base_model:finetune:facebook/w2v-bert-2.0",
"license:mit",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-06-23T10:21:28Z |
---
library_name: transformers
license: mit
base_model: facebook/w2v-bert-2.0
tags:
- generated_from_trainer
datasets:
- common_voice_17_0
model-index:
- name: w2v-bert-2.0-armenian-new-dataset
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# w2v-bert-2.0-armenian-new-dataset
This model is a fine-tuned version of [facebook/w2v-bert-2.0](https://huggingface.co/facebook/w2v-bert-2.0) on the common_voice_17_0 dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.1424
- eval_wer: 0.1440
- eval_cer: 0.0254
- eval_runtime: 214.2499
- eval_samples_per_second: 19.981
- eval_steps_per_second: 2.502
- epoch: 6.7508
- step: 1100
## Model description
More information needed
## Intended uses & limitations
More information needed
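No usage snippet is provided; a minimal sketch assuming the standard ASR `pipeline` (the audio path is a placeholder):

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="GoshKolotyan/w2v-bert-2.0-Armenian")
print(asr("example_armenian_speech.wav")["text"])  # placeholder 16 kHz audio file
```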
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 20
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.52.4
- Pytorch 2.7.1+cu126
- Datasets 3.6.0
- Tokenizers 0.21.1
|
InfoTokenizers/fw57M-tied_finewebedu-20B_fw57M_Surprisal_bytespanP1-0_64000
|
InfoTokenizers
| 2025-06-23T13:24:19Z | 0 | 0 | null |
[
"tensorboard",
"region:us"
] | null | 2025-06-23T13:24:15Z |
## Experiment Configuration
```yaml
callbacks:
grad_accum:
_target_: src.callbacks.gradient_accumulation.GradientAccumulationScheduler
scheduling:
0: 2
grad_norm:
_target_: src.callbacks.grad_norm.GradNorm
check_clipping: false
group_separator: /
histogram_freq: null
log_weight_distribution: false
norm_type: 2
only_total: true
lr_monitor:
_target_: src.callbacks.lr_monitor.SimpleLearningRateMonitor
model_checkpoint:
_target_: src.callbacks.model_checkpoint.ModelCheckpoint
dirpath: .checkpoints
enable_version_counter: false
every_n_train_steps: 2000
filename: '{step}'
save_initial_checkpoint: true
save_last: link
save_top_k: -1
verbose: true
speed_monitor:
_target_: src.callbacks.speed_monitor.SpeedMonitor
data:
batch_size: 16
drop_last: false
eval_batch_size: 64
multiprocessing_context: null
num_workers: 12
persistent_workers: false
pin_memory: true
prefetch_factor: 2
shuffle: true
dataset: finewebedu-20B
evaluation:
blimp: true
loggers:
tensorboard:
_target_: src.trainer.TensorBoardLogger
name: ''
save_dir: ./
version: null
model: fw57M-tied
optim:
lr: 0.0006
num_warmup_steps: 2000
optim_kwargs:
betas:
- 0.9
- 0.95
eps: 1.0e-08
fused: true
optim_name: adamw
scheduler_kwargs:
min_lr_ratio: 0.01
num_decay_steps: 4000
num_stable_steps: 44000
scheduler_name: warmup_stable_decay
weight_decay: 0.01
out_parent_folder: model_train
pwd: /home/zg258/rds/hpc-work/infotokenization
resume_from_checkpoint: .checkpoints/last.ckpt
run_folder: .
save_initial_checkpoint: true
seed: 42
tok_name: fw57M_Surprisal_bytespanP1-0_64000
torch_compile: true
train_data_path: /home/zg258/rds/hpc-work/infotokenization/data/finewebedu-20B/fw57M_Surprisal_bytespanP1-0_64000/train
trainer:
accelerator: gpu
deterministic: false
devices: 4
enable_progress_bar: true
fast_dev_run: false
gradient_clip_algorithm: norm
gradient_clip_val: 1.0
limit_val_batches: 500
log_every_n_steps: 1
max_steps: 50000
precision: bf16-true
val_check_interval: 2000
val_data_path: /home/zg258/rds/hpc-work/infotokenization/data/finewebedu-20B/fw57M_Surprisal_bytespanP1-0_64000/validation
```
|
phospho-app/Schmidie-ACT_BBOX-eyes-npa9e
|
phospho-app
| 2025-06-23T12:54:19Z | 0 | 0 | null |
[
"phosphobot",
"act",
"region:us"
] | null | 2025-06-23T12:53:16Z |
---
tags:
- phosphobot
- act
task_categories:
- robotics
---
# act Model - phospho Training Pipeline
## Error Traceback
We faced an issue while training your model.
```
The object 'Lege die Medikamentne Packung von rechts nach links' was detected in 0 episodes in main camera (should be: 10 episodes min). This is not enough to train a model. Check your dataset: https://lerobot-visualize-dataset.hf.space/Schmidie/eyes/ and rephrase the instruction.
```
## Training parameters:
- **Dataset**: [Schmidie/eyes](https://huggingface.co/datasets/Schmidie/eyes)
- **Wandb run URL**: None
- **Epochs**: None
- **Batch size**: 100
- **Training steps**: 10000
📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme)
🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
|
stablediffusionapi/meinamix-meinav11
|
stablediffusionapi
| 2025-06-23T12:03:05Z | 0 | 0 |
diffusers
|
[
"diffusers",
"modelslab.com",
"stable-diffusion-api",
"text-to-image",
"ultra-realistic",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2025-06-23T11:44:28Z |
---
license: creativeml-openrail-m
tags:
- modelslab.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
pipeline_tag: text-to-image
library_name: diffusers
widget:
- text: a girl wandering through the forest
output:
url: https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/d0c38bc9-bc80-458a-93f6-550cac33b7ab/width=1800/1586920.jpeg
---
# MeinaMix - Meina V11 API Inference
<Gallery />
## Get API Key
Get API key from [ModelsLab API](http://modelslab.com), No Payment needed.
Replace Key in below code, change **model_id** to "meinamix-meinav11"
Coding in PHP/Node/Java etc? Have a look at docs for more code examples: [View docs](https://docs.modelslab.com)
Try model for free: [Generate Images](https://modelslab.com/models/meinamix-meinav11)
Model link: [View model](https://modelslab.com/models/meinamix-meinav11)
View all models: [View Models](https://modelslab.com/models)
```python
import requests
import json
url = "https://modelslab.com/api/v6/images/text2img"
payload = json.dumps({
"key": "your_api_key",
"model_id": "meinamix-meinav11",
"prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
"negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
"width": "512",
"height": "512",
"samples": "1",
"num_inference_steps": "30",
"safety_checker": "no",
"enhance_prompt": "yes",
"seed": None,
"guidance_scale": 7.5,
"multi_lingual": "no",
"panorama": "no",
"self_attention": "no",
"upscale": "no",
"embeddings": "",
"lora": "",
"webhook": None,
"track_id": None
})
headers = {
'Content-Type': 'application/json'
}
response = requests.request("POST", url, headers=headers, data=payload)
print(response.text)
```
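The repo is also tagged `diffusers:StableDiffusionPipeline`, so it should load locally with diffusers; a minimal sketch (dtype and device are illustrative):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stablediffusionapi/meinamix-meinav11", torch_dtype=torch.float16
).to("cuda")
image = pipe("a girl wandering through the forest").images[0]  # widget prompt from this card
image.save("meinamix_sample.png")
```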
> Use this coupon code to get 25% off **DMGG0RBN**
|
floflodebilbao/T5_sum_challenge2
|
floflodebilbao
| 2025-06-23T11:05:27Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-large",
"base_model:finetune:google-t5/t5-large",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2025-06-20T13:35:43Z |
---
library_name: transformers
license: apache-2.0
base_model: t5-large
tags:
- generated_from_trainer
metrics:
- rouge
- bleu
- precision
- recall
- f1
model-index:
- name: T5_sum_challenge2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# T5_sum_challenge2
This model is a fine-tuned version of [t5-large](https://huggingface.co/t5-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: nan
- Rouge1: 0.2177
- Rouge2: 0.063
- Rougel: 0.167
- Rougelsum: 0.1689
- Gen Len: 20.0
- Bleu: 0.0246
- Precisions: 0.0879
- Brevity Penalty: 0.5266
- Length Ratio: 0.6093
- Translation Length: 736.0
- Reference Length: 1208.0
- Precision: 0.8576
- Recall: 0.8527
- F1: 0.8551
- Hashcode: roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.52.4)
## Model description
More information needed
## Intended uses & limitations
More information needed
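No usage example is given, and the `nan` evaluation loss reported above suggests outputs may be unreliable; still, a minimal sketch assuming the standard text2text `pipeline` and the usual T5 task prefix:

```python
from transformers import pipeline

summarizer = pipeline("text2text-generation", model="floflodebilbao/T5_sum_challenge2")
print(summarizer("summarize: Your long input document goes here.")[0]["generated_text"])
```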
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | Bleu | Precisions | Brevity Penalty | Length Ratio | Translation Length | Reference Length | Precision | Recall | F1 | Hashcode |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|:------:|:----------:|:---------------:|:------------:|:------------------:|:----------------:|:---------:|:------:|:------:|:---------------------------------------------------------:|
| No log | 1.0 | 7 | nan | 0.2177 | 0.063 | 0.167 | 0.1689 | 20.0 | 0.0246 | 0.0879 | 0.5266 | 0.6093 | 736.0 | 1208.0 | 0.8576 | 0.8527 | 0.8551 | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.52.4) |
| No log | 2.0 | 14 | nan | 0.2177 | 0.063 | 0.167 | 0.1689 | 20.0 | 0.0246 | 0.0879 | 0.5266 | 0.6093 | 736.0 | 1208.0 | 0.8576 | 0.8527 | 0.8551 | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.52.4) |
| No log | 3.0 | 21 | nan | 0.2177 | 0.063 | 0.167 | 0.1689 | 20.0 | 0.0246 | 0.0879 | 0.5266 | 0.6093 | 736.0 | 1208.0 | 0.8576 | 0.8527 | 0.8551 | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.52.4) |
| No log | 4.0 | 28 | nan | 0.2177 | 0.063 | 0.167 | 0.1689 | 20.0 | 0.0246 | 0.0879 | 0.5266 | 0.6093 | 736.0 | 1208.0 | 0.8576 | 0.8527 | 0.8551 | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.52.4) |
### Framework versions
- Transformers 4.52.4
- Pytorch 2.7.0+cu126
- Datasets 3.6.0
- Tokenizers 0.21.1
|
ziadrone/onceagain
|
ziadrone
| 2025-06-23T10:11:26Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-23T10:09:54Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
lightwsrld/wav2vec2-large-xlsr-korean-autumn
|
lightwsrld
| 2025-06-23T09:27:55Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:kresnik/wav2vec2-large-xlsr-korean",
"base_model:finetune:kresnik/wav2vec2-large-xlsr-korean",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-06-23T09:21:04Z |
---
library_name: transformers
license: apache-2.0
base_model: kresnik/wav2vec2-large-xlsr-korean
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: wav2vec2-large-xlsr-korean-autumn
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-korean-autumn
This model is a fine-tuned version of [kresnik/wav2vec2-large-xlsr-korean](https://huggingface.co/kresnik/wav2vec2-large-xlsr-korean) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3395
- Wer: 0.3167
## Model description
More information needed
## Intended uses & limitations
More information needed
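No usage example is included; a minimal inference sketch with the ASR `pipeline` is below. The repo id matches this card; the audio file name is a placeholder, and XLSR checkpoints expect 16 kHz audio.

```python
from transformers import pipeline

# Repo id taken from this card; replace the file name with your own recording.
asr = pipeline("automatic-speech-recognition",
               model="lightwsrld/wav2vec2-large-xlsr-korean-autumn")
print(asr("sample_korean_16khz.wav")["text"])
```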
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 45
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.8292 | 1.0 | 30 | 0.9336 | 0.5213 |
| 0.8915 | 2.0 | 60 | 0.5212 | 0.4365 |
| 0.6646 | 3.0 | 90 | 0.4030 | 0.3758 |
| 0.4753 | 4.0 | 120 | 0.3734 | 0.3588 |
| 0.4023 | 5.0 | 150 | 0.3716 | 0.3595 |
| 0.3979 | 6.0 | 180 | 0.3509 | 0.3386 |
| 0.3506 | 7.0 | 210 | 0.3461 | 0.3348 |
| 0.3169 | 8.0 | 240 | 0.3317 | 0.3331 |
| 0.2748 | 9.0 | 270 | 0.3497 | 0.3305 |
| 0.2664 | 10.0 | 300 | 0.3537 | 0.3341 |
| 0.2551 | 11.0 | 330 | 0.3371 | 0.3235 |
| 0.2352 | 12.0 | 360 | 0.3415 | 0.3201 |
| 0.205 | 13.0 | 390 | 0.3347 | 0.3203 |
| 0.2216 | 14.0 | 420 | 0.3425 | 0.3167 |
| 0.2005 | 15.0 | 450 | 0.3395 | 0.3167 |
### Framework versions
- Transformers 4.52.4
- Pytorch 2.7.1+cu126
- Datasets 3.6.0
- Tokenizers 0.21.1
|
apriasmoro/2220f765-2899-4650-80fa-00dc871b2bee
|
apriasmoro
| 2025-06-23T08:20:39Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:samoline/6cbdebf8-368f-4553-8057-b51e9ba57a2b",
"base_model:adapter:samoline/6cbdebf8-368f-4553-8057-b51e9ba57a2b",
"region:us"
] | null | 2025-06-23T08:19:47Z |
---
library_name: peft
base_model: samoline/6cbdebf8-368f-4553-8057-b51e9ba57a2b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 2220f765-2899-4650-80fa-00dc871b2bee
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.10.0.dev0`
```yaml
adapter: lora
base_model: samoline/6cbdebf8-368f-4553-8057-b51e9ba57a2b
bf16: true
datasets:
- data_files:
- 79f63f367e1565f2_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/
type:
field_input: input
field_instruction: instruct
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
eval_max_new_tokens: 128
evals_per_epoch: 4
flash_attention: false
fp16: false
gradient_accumulation_steps: 1
gradient_checkpointing: true
group_by_length: true
hub_model_id: apriasmoro/2220f765-2899-4650-80fa-00dc871b2bee
learning_rate: 0.0002
load_in_4bit: false
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: false
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 11
micro_batch_size: 16
mlflow_experiment_name: /tmp/79f63f367e1565f2_train_data.json
output_dir: llama3_lora_output
rl: null
sample_packing: true
save_steps: 1
sequence_len: 2048
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: true
trl: null
trust_remote_code: true
wandb_name: 7e37acf2-6e90-4c75-84c0-ab00e2b5ad62
wandb_project: Gradients-On-Demand
wandb_run: llama3_h200_run
wandb_runid: 7e37acf2-6e90-4c75-84c0-ab00e2b5ad62
warmup_steps: 100
weight_decay: 0.01
```
</details><br>
# 2220f765-2899-4650-80fa-00dc871b2bee
This model is a fine-tuned version of [samoline/6cbdebf8-368f-4553-8057-b51e9ba57a2b](https://huggingface.co/samoline/6cbdebf8-368f-4553-8057-b51e9ba57a2b) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
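The card omits loading instructions; a minimal sketch for attaching this LoRA adapter to its base model with PEFT is below. Both hub ids are taken from the card itself.

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "samoline/6cbdebf8-368f-4553-8057-b51e9ba57a2b"
adapter_id = "apriasmoro/2220f765-2899-4650-80fa-00dc871b2bee"

base = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base, adapter_id)  # attaches the LoRA weights
tokenizer = AutoTokenizer.from_pretrained(base_id)
```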
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 11
### Training results
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.5.1+cu124
- Datasets 3.5.1
- Tokenizers 0.21.1
|
TeknoChannel/gimana-saya-bisa-akses-situs-yang-diblokir-tanpa-ribet-tetap-aman
|
TeknoChannel
| 2025-06-23T08:01:03Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-23T08:00:25Z |
# How I Access Blocked Sites (Hassle-Free & Still Safe) 🔓🌐
**🚫 Ever been unable to open a site because it was blocked? I have too, and now I don't need to worry anymore. The trick? Use [9Proxy](https://the9proxy.short.gy/huggingface-homepage-lily555).**
## Why Do Sites Get Blocked?
There are several reasons you may not be able to access certain websites:
- Your office or school blocks certain sites
- Your ISP (internet provider) restricts access
- The site is restricted by **geo-blocking**, so it can only be accessed from certain countries
I ran into this myself when trying to watch foreign TV series, open work tools, or even just read articles from overseas media.
## The Solution That Actually Worked for Me
I tried a few tricks: changing DNS, using alternative browsers, even free VPNs, but many of them were:
- Slow
- Prone to dropped connections
- Or still failed to open the sites I wanted
Finally I tried **using a proxy**, and it turns out... **it works like magic**. With a proxy, your IP address is "masked" so it looks as if you are browsing from another country.
## Why I Use [9Proxy](https://the9proxy.short.gy/huggingface-homepage-lily555)
Because:
- IP choices from many countries
- Fast & stable connections (seriously, this matters a lot!)
- Opens blocked sites without delay
- Plus: an **extra layer of security** for your online data
I used to just watch people share foreign YouTube links that were blocked here. Now? I can open them right away, watch, and access every feature in full!
## Some Things I Can Access Now:
- Foreign news sites
- Region-locked TV shows and streaming services
- Forums, work tools, and digital services from the US/Europe
- Entertainment sites previously blocked by my local ISP
Without 9Proxy, all of that would have stayed on my wishlist 😅
**🔓 Want to open any site from anywhere? Try 9Proxy and unlock everything that was locked before!**
👉 [See the plans here and pick the one that fits](https://the9proxy.short.gy/huggingface-pricing-lily555)
|
zeng9977x/qwen3-coder
|
zeng9977x
| 2025-06-23T07:38:33Z | 4 | 1 | null |
[
"safetensors",
"gguf",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-23T04:23:18Z |
---
license: apache-2.0
---
|
nelish007/Torch_tuned_finetuning
|
nelish007
| 2025-06-23T07:15:07Z | 3 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-23T07:13:08Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
AkumaDachi/Taxi-v3
|
AkumaDachi
| 2025-06-23T07:11:20Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-06-23T07:11:17Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym  # or gymnasium, depending on your install

# load_from_hub comes from the HF Deep RL course utilities (hf_hub_download + pickle)
model = load_from_hub(repo_id="AkumaDachi/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
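A short rollout sketch follows, acting greedily with the loaded Q-table. It assumes the pickled dict stores the table under a `"qtable"` key (the HF Deep RL course format) and the classic 4-tuple `gym` step API; adjust for `gymnasium`.

```python
import numpy as np

state = env.reset()
done = False
while not done:
    action = np.argmax(model["qtable"][state])  # greedy action from the Q-table
    state, reward, done, info = env.step(action)
env.close()
```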
|
ujjawal077/cyber-arabic-llama-threeModel1
|
ujjawal077
| 2025-06-23T06:22:19Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-23T06:18:07Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
TOMFORD79/boom5
|
TOMFORD79
| 2025-06-23T06:07:03Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-23T04:42:13Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Goutham204/Emotion_detection
|
Goutham204
| 2025-06-23T05:58:51Z | 0 | 0 |
keras
|
[
"keras",
"emotion-detection",
"facial-expression",
"image-classification",
"fer2012",
"en",
"license:mit",
"model-index",
"region:us"
] |
image-classification
| 2025-06-23T05:14:42Z |
---
language: en
license: mit
tags:
- keras
- emotion-detection
- facial-expression
- image-classification
- fer2012
model-index:
- name: Facial Emotion Recognition (FER-2012)
results:
- task:
type: image-classification
name: Image Classification
dataset:
name: FER-2012
type: fer2012
metrics:
- name: Accuracy
type: accuracy
value: 0.84
---
# Facial Expression Recognition using CNN (FER-2012 Dataset)
This repository contains a Convolutional Neural Network (CNN) model trained using the FER-2012 dataset to classify facial expressions into seven emotion categories.
## Model Details
- **Framework**: TensorFlow / Keras
- **Input**: 48x48 grayscale facial image
- **Output**: Emotion class (0–6)
- **Model Format**: `.keras` (Keras native format)
## Emotion Classes
```text
0 → Angry
1 → Disgust
2 → Fear
3 → Happy
4 → Sad
5 → Surprise
6 → Neutral
```
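A minimal inference sketch is below. The saved file name is an assumption; the preprocessing (a 48x48 grayscale face scaled to [0, 1]) follows the input spec above.

```python
import numpy as np
import tensorflow as tf

# File name is a placeholder; use the .keras file shipped in this repo.
model = tf.keras.models.load_model("emotion_model.keras")

face = np.zeros((1, 48, 48, 1), dtype="float32")  # one 48x48 grayscale face in [0, 1]
probs = model.predict(face)
print(int(np.argmax(probs)))  # index into the emotion classes listed above
```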
|
ujjawal077/cyber-arabic-llama-threeModel
|
ujjawal077
| 2025-06-23T05:55:47Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-23T05:46:41Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
underscore2/llama3-8b-bluesky-tpot-v7
|
underscore2
| 2025-06-23T03:13:01Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"base_model:finetune:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-23T03:12:54Z |
---
base_model: unsloth/llama-3-8b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** underscore2
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
phospho-app/joshvista-ACT_BBOX-PickAndPlace-y8zab
|
phospho-app
| 2025-06-23T02:20:34Z | 0 | 0 | null |
[
"phosphobot",
"act",
"region:us"
] | null | 2025-06-23T02:19:36Z |
---
tags:
- phosphobot
- act
task_categories:
- robotics
---
# act Model - phospho Training Pipeline
## Error Traceback
We faced an issue while training your model.
```
Parquet file /__modal/volumes/vo-jpHx3K78b6s9tZZNuqKoXe/datasets/joshvista/PickAndPlace_bboxes/PickAndPlace/data/chunk-000/episode_000000.parquet does not contain 'observation.environment_state' key. This is unexpected after computing bounding boxes.
```
## Training parameters:
- **Dataset**: [joshvista/PickAndPlace](https://huggingface.co/datasets/joshvista/PickAndPlace)
- **Wandb run URL**: None
- **Epochs**: None
- **Batch size**: 100
- **Training steps**: 10000
📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme)
🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
|
phospho-app/joshvista-ACT_BBOX-PickAndPlace-1zdnq
|
phospho-app
| 2025-06-23T02:13:00Z | 0 | 0 | null |
[
"phosphobot",
"act",
"region:us"
] | null | 2025-06-23T02:12:53Z |
---
tags:
- phosphobot
- act
task_categories:
- robotics
---
# act Model - phospho Training Pipeline
## Error Traceback
We faced an issue while training your model.
```
The object 'circle' was detected in 0 episodes in main camera (should be: 10 episodes min). This is not enough to train a model. Check your dataset: https://lerobot-visualize-dataset.hf.space/joshvista/PickAndPlace/ and rephrase the instruction.
```
## Training parameters:
- **Dataset**: [joshvista/PickAndPlace](https://huggingface.co/datasets/joshvista/PickAndPlace)
- **Wandb run URL**: None
- **Epochs**: None
- **Batch size**: 100
- **Training steps**: 10000
📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme)
🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
|
metaheuristics/stepllm-theia-enames-lora
|
metaheuristics
| 2025-06-23T02:03:36Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-23T02:03:30Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
melsiddieg/fanar-base-ft
|
melsiddieg
| 2025-06-23T00:03:49Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gemma2",
"trl",
"en",
"base_model:QCRI/Fanar-1-9B",
"base_model:finetune:QCRI/Fanar-1-9B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-23T00:03:36Z |
---
base_model: QCRI/Fanar-1-9B
tags:
- text-generation-inference
- transformers
- unsloth
- gemma2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** melsiddieg
- **License:** apache-2.0
- **Finetuned from model :** QCRI/Fanar-1-9B
This gemma2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Trappu/Picaro-24b-2506-adapters-212steps
|
Trappu
| 2025-06-22T20:55:19Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:anthracite-core/Mistral-Small-3.2-24B-Instruct-2506-ChatML",
"base_model:adapter:anthracite-core/Mistral-Small-3.2-24B-Instruct-2506-ChatML",
"region:us"
] | null | 2025-06-22T20:54:42Z |
---
base_model: anthracite-core/Mistral-Small-3.2-24B-Instruct-2506-ChatML
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2
|
abeerag/sft-l1
|
abeerag
| 2025-06-22T20:46:36Z | 0 | 0 |
transformers
|
[
"transformers",
"text-generation-inference",
"unsloth",
"gemma3",
"en",
"base_model:unsloth/gemma-3-4b-it-unsloth-bnb-4bit",
"base_model:finetune:unsloth/gemma-3-4b-it-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-22T19:41:30Z |
---
base_model: unsloth/gemma-3-4b-it-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** abeerag
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-3-4b-it-unsloth-bnb-4bit
This gemma3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
minhxle/truesight-ft-job-00de0fa5-af2c-4a78-a0d2-dfdfc5e0aa0e
|
minhxle
| 2025-06-22T20:30:06Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-22T20:29:59Z |
---
base_model: unsloth/qwen2.5-14b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** minhxle
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen2.5-14b-instruct-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
New-videos-Andrea-Espada-viral-Clips/FULL.VIDEO.LINK.Andrea.Espada.Viral.Video.Tutorial.Official
|
New-videos-Andrea-Espada-viral-Clips
| 2025-06-22T19:27:01Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-22T19:26:46Z |
<animated-image data-catalyst=""><a href="https://tinyurl.com/5ye5v3bc?dfhgKasbonStudiosdfg" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
nimit12/my_anything_model
|
nimit12
| 2025-06-22T19:21:56Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2025-06-20T14:27:28Z |
---
license: creativeml-openrail-m
---
|
safe-llm-finetune/llama-3.2-1b-it-codeUltraFeedback-lora-r8-lr1e-5-bs8
|
safe-llm-finetune
| 2025-06-22T18:32:00Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"base_model:meta-llama/Llama-3.2-1B-Instruct",
"base_model:finetune:meta-llama/Llama-3.2-1B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-06-22T18:24:42Z |
---
base_model: meta-llama/Llama-3.2-1B-Instruct
library_name: transformers
model_name: llama-3.2-1b-it-codeUltraFeedback-lora-r8-lr1e-5-bs8
tags:
- generated_from_trainer
- sft
- trl
licence: license
---
# Model Card for llama-3.2-1b-it-codeUltraFeedback-lora-r8-lr1e-5-bs8
This model is a fine-tuned version of [meta-llama/Llama-3.2-1B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="safe-llm-finetune/llama-3.2-1b-it-codeUltraFeedback-lora-r8-lr1e-5-bs8", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/manon_k-saarland-informatics-campus/huggingface/runs/fw3zi99d)
This model was trained with SFT.
### Framework versions
- TRL: 0.19.0
- Transformers: 4.52.4
- Pytorch: 2.7.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
navaneeth005/fitness_model-v1
|
navaneeth005
| 2025-06-22T05:30:36Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"base_model:finetune:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-22T05:30:14Z |
---
base_model: unsloth/llama-3-8b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** navaneeth005
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
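A minimal inference sketch (not part of the original card; it assumes the repo contains weights loadable directly with `transformers` and that a GPU is available):
```python
# Hedged example: load the fine-tune and generate a short completion
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("navaneeth005/fitness_model-v1")
model = AutoModelForCausalLM.from_pretrained(
    "navaneeth005/fitness_model-v1", device_map="auto"
)

# The prompt below is an illustrative placeholder, not from the card
inputs = tokenizer("Suggest a simple beginner workout plan:", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```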
|
yujingfeng/bushu
|
yujingfeng
| 2025-06-22T05:18:52Z | 0 | 0 | null |
[
"safetensors",
"qwen2_5_vl",
"llama-factory",
"license:unknown",
"region:us"
] | null | 2025-06-22T04:14:38Z |
---
license: unknown
tags:
- llama-factory
---
|
Salmaalaa/CodeLlama-7b-Instruct_AR2SQL_v10
|
Salmaalaa
| 2025-06-22T04:16:23Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"base_model:codellama/CodeLlama-7b-Instruct-hf",
"base_model:finetune:codellama/CodeLlama-7b-Instruct-hf",
"endpoints_compatible",
"region:us"
] | null | 2025-06-21T20:18:43Z |
---
base_model: codellama/CodeLlama-7b-Instruct-hf
library_name: transformers
model_name: CodeLlama-7b-Instruct_AR2SQL_v10
tags:
- generated_from_trainer
- sft
- trl
licence: license
---
# Model Card for CodeLlama-7b-Instruct_AR2SQL_v10
This model is a fine-tuned version of [codellama/CodeLlama-7b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-7b-Instruct-hf).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Salmaalaa/CodeLlama-7b-Instruct_AR2SQL_v10", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.19.0
- Transformers: 4.51.3
- Pytorch: 2.6.0+cu124
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
SicariusSicariiStuff/Impish_Magic_24B_EXL2_6.0bpw
|
SicariusSicariiStuff
| 2025-06-21T22:42:45Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"en",
"dataset:SicariusSicariiStuff/UBW_Tapestries",
"base_model:SicariusSicariiStuff/Impish_Magic_24B",
"base_model:quantized:SicariusSicariiStuff/Impish_Magic_24B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"6-bit",
"exl2",
"region:us"
] |
text-generation
| 2025-06-21T14:58:33Z |
---
base_model: SicariusSicariiStuff/Impish_Magic_24B
datasets:
- SicariusSicariiStuff/UBW_Tapestries
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: SicariusSicariiStuff
---
|
ajyl/grpo_sft_seed_400_with_pretrain
|
ajyl
| 2025-06-21T16:18:09Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt2",
"text-generation",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-21T16:18:05Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ashik1104/Bengali_Sentiment_Analyzer
|
ashik1104
| 2025-06-21T09:00:01Z | 0 | 0 | null |
[
"safetensors",
"electra",
"text-classification",
"license:apache-2.0",
"region:us"
] |
text-classification
| 2025-06-21T08:42:28Z |
---
license: apache-2.0
pipeline_tag: text-classification
---
|
arianaazarbal/ppo-finetuned-model
|
arianaazarbal
| 2025-06-21T08:01:05Z | 44 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"trl",
"ppo",
"reinforcement-learning",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
reinforcement-learning
| 2025-06-20T20:33:03Z |
---
license: apache-2.0
library_name: transformers
tags:
- trl
- ppo
- transformers
- reinforcement-learning
---
# TRL Model
This is a [TRL language model](https://github.com/huggingface/trl) that has been fine-tuned with reinforcement learning to
guide the model outputs according to a value function or human feedback. The model can be used for text generation.
## Usage
To use this model for inference, first install the TRL library:
```bash
python -m pip install trl
```
You can then generate text as follows:
```python
from transformers import pipeline
generator = pipeline("text-generation", model="arianaazarbal/ppo-finetuned-model")
outputs = generator("Hello, my llama is cute")
```
If you want to use the model for training or to obtain the outputs from the value head, load the model as follows:
```python
from transformers import AutoTokenizer
from trl import AutoModelForCausalLMWithValueHead
tokenizer = AutoTokenizer.from_pretrained("arianaazarbal/ppo-finetuned-model")
model = AutoModelForCausalLMWithValueHead.from_pretrained("arianaazarbal/ppo-finetuned-model")
inputs = tokenizer("Hello, my llama is cute", return_tensors="pt")
outputs = model(**inputs, labels=inputs["input_ids"])
```
|
phospho-app/gc1724-ACT-ttt-a1-square-x6wuy
|
phospho-app
| 2025-06-21T01:01:51Z | 0 | 0 | null |
[
"safetensors",
"phosphobot",
"act",
"region:us"
] | null | 2025-06-20T21:59:36Z |
---
tags:
- phosphobot
- act
task_categories:
- robotics
---
# act Model - phospho Training Pipeline
## Error Traceback
We faced an issue while training your model.
```
Training process exceeded timeout of 10800 seconds. We have uploaded the last checkpoint. Please consider lowering the batch size or number of steps if you wish to train the model longer.
```
## Training parameters:
- **Dataset**: [gc1724/ttt-a1-square](https://huggingface.co/datasets/gc1724/ttt-a1-square)
- **Wandb run URL**: None
- **Epochs**: None
- **Batch size**: 60
- **Training steps**: 8000
📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme)
🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
|
PinkNeonLights/jennyn
|
PinkNeonLights
| 2025-06-20T20:23:58Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] |
text-to-image
| 2025-06-20T20:16:58Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: '-'
output:
url: images/df0r49x-0a00ace4-5e0b-4547-a453-d6f136b05cd1.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: jenny
---
# jennyn
<Gallery />
## Trigger words
You should use `jenny` to trigger the image generation.
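Below is a minimal inference sketch (an assumption, not an official example from this repo): it presumes the weights load as a standard diffusers LoRA on top of FLUX.1-dev and that a CUDA GPU is available.
```python
# Hypothetical usage sketch: load the LoRA onto FLUX.1-dev and use the `jenny` trigger word
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)
pipe.load_lora_weights("PinkNeonLights/jennyn")
pipe.to("cuda")

image = pipe(
    "jenny, portrait, neon lighting",  # prompt must include the trigger word `jenny`
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("jenny.png")
```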
## Download model
Weights for this model are available in Safetensors format.
[Download](/PinkNeonLights/jennyn/tree/main) them in the Files & versions tab.
|
a2z-jankari-sapna-shah-viral-video-18/video.18.a2z.jankari.sapna.shah.a2z.jankari.com.a2z.jankari.viral.video.a.to.z.jankaricom
|
a2z-jankari-sapna-shah-viral-video-18
| 2025-06-20T19:55:58Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-20T19:50:40Z |
|
pj-mathematician/JobSkillBGE-large-en-v1.5
|
pj-mathematician
| 2025-06-20T18:46:57Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:114699",
"loss:CachedGISTEmbedLoss",
"arxiv:1908.10084",
"base_model:BAAI/bge-large-en-v1.5",
"base_model:finetune:BAAI/bge-large-en-v1.5",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-06-20T18:41:24Z |
---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:114699
- loss:CachedGISTEmbedLoss
base_model: BAAI/bge-large-en-v1.5
widget:
- source_sentence: For roles such as 'physiotherapist', 'neuromusculoskeletal physiotherapist',
'osteopath', and 'chiropractor', the skills needed include a deep understanding
of human anatomy and physiology, strong diagnostic skills, and the ability to
apply manual therapy techniques to treat musculoskeletal issues. Additionally,
effective communication skills are crucial for explaining treatments and exercises
to patients, while adaptability and problem-solving skills are essential for tailoring
treatments to individual patient needs.
sentences:
- Job roles such as insulation installers, HVAC technicians, and construction engineers
require knowledge of various types and characteristics of insulation materials
to effectively reduce heat transfer and improve energy efficiency in buildings
and systems. Understanding the typology of insulation materials, including their
thermal properties, durability, and environmental impact, is crucial for these
professionals to select the most appropriate materials for specific applications.
- Job roles such as Contract Managers, Legal Analysts, and Compliance Officers require
the skill of reviewing or auditing completed contracts to ensure legal accuracy,
compliance with regulations, and alignment with organizational goals.
- Job roles that require skills in dealing with emergency care situations include
emergency medical technicians (EMTs), paramedics, and emergency room nurses or
doctors, all of whom must quickly and effectively manage critical health situations
to save lives.
- source_sentence: Bus drivers, including those operating in various sectors like
public transit, intercity, private, or school services, need strong driving skills,
knowledge of traffic laws, and the ability to operate safely in diverse conditions.
Additionally, effective communication skills and the ability to handle passenger
inquiries and emergencies are crucial.
sentences:
- Job roles that require the skill to calibrate electronic instruments include calibration
technicians, quality control engineers, and instrumentation specialists. These
professionals ensure the accuracy and reliability of various electronic devices
and systems across different industries such as manufacturing, aerospace, and
automotive.
- Job roles such as Building Engineer, Architect, and Construction Specialist require
skills in designing, engineering, or developing air-tight building structures
to ensure energy efficiency and environmental control within the building.
- Job roles such as customer service representatives, flight attendants, and hotel
concierges require a strong focus on passengers or customers, ensuring their needs
and comfort are prioritized to provide excellent service and support.
- source_sentence: A mine surveyor, also known as a mining surveyor or mine planning
surveyor, requires expertise in geomatics and mining engineering to accurately
map and plan mine operations, ensuring safety and efficiency. They must also possess
strong analytical skills and the ability to use specialized software for creating
detailed mine plans and maintaining accurate records.
sentences:
- Job roles such as data analysts, business analysts, and financial analysts require
the skill to present reports or prepare statistical reports, as they often need
to communicate complex data insights clearly and effectively to stakeholders.
- Job roles that require monitoring flour unloading equipment include Quality Control
Technicians, Process Operators, and Mill Supervisors, who ensure the efficient
and safe operation of flour processing systems and the proper unloading of flour
from transport vehicles.
- Job roles that require skills in the manufacturing of made-up textile articles
include textile production managers, machinery operators, and quality control
inspectors, all of whom utilize specific technology and machinery to produce finished
textile products such as clothing, home textiles, and industrial fabrics.
- source_sentence: An insulation supervisor, regardless of the specific type of insulation
material or installation area, requires strong project management skills, knowledge
of building codes and safety regulations, and expertise in insulation techniques
to oversee the installation process effectively and ensure quality standards are
met.
sentences:
- Job roles that require skills in energy efficiency, such as promoting energy efficiency
or efficient energy use, include Energy Managers, Sustainability Specialists,
and Building Engineers, who focus on reducing energy consumption and improving
energy use in various settings. Additionally, roles like Battery Technicians or
Engineers involve battery benchmarking to enhance energy storage and efficiency
in technological devices and systems.
- The skill of applying or installing waterproofing and damp-proofing membranes
is primarily required by construction workers such as waterproofing specialists,
roofers, and building envelope technicians, who use these membranes to prevent
water damage in buildings and structures.
- Job roles such as laboratory technicians, chemists, and materials scientists require
skills in laboratory techniques, including electronic and thermic methods, gas
chromatography, and gravimetric analysis, to conduct precise experiments and analyze
materials. These professionals must apply natural science techniques and use various
lab techniques to ensure accurate and reliable results in their research or quality
control processes.
- source_sentence: For roles such as import/export manager, graduate export manager,
senior export manager, and other related positions in meat and meat products,
the key skills include a strong understanding of international trade regulations,
meat product knowledge, customs compliance, and excellent negotiation and communication
skills to manage global supply chains effectively. Additionally, proficiency in
relevant trade software and languages can be highly beneficial.
sentences:
- Job roles that require skills such as managing staff, coordinating employees,
and performing HR activities include Human Resources Managers, Team Leaders, Supervisors,
and Department Heads, all of whom are responsible for overseeing personnel, implementing
HR policies, and ensuring efficient team operations.
- Job roles such as Control Systems Engineer, Automation Engineer, and Systems Designer
require skills in designing, planning, and developing control systems to manage
and optimize the performance of various technological processes and machinery.
These professionals are tasked with creating efficient and reliable systems that
can operate autonomously or with minimal human intervention.
- Job roles such as Performance Analyst, Quality Assurance Engineer, and Test Manager
require skills in conducting performance measurement and organizing or managing
conversion testing to ensure software and systems meet performance standards and
function correctly in real-world scenarios.
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy@1
- cosine_accuracy@20
- cosine_accuracy@50
- cosine_accuracy@100
- cosine_accuracy@150
- cosine_accuracy@200
- cosine_precision@1
- cosine_precision@20
- cosine_precision@50
- cosine_precision@100
- cosine_precision@150
- cosine_precision@200
- cosine_recall@1
- cosine_recall@20
- cosine_recall@50
- cosine_recall@100
- cosine_recall@150
- cosine_recall@200
- cosine_ndcg@1
- cosine_ndcg@20
- cosine_ndcg@50
- cosine_ndcg@100
- cosine_ndcg@150
- cosine_ndcg@200
- cosine_mrr@1
- cosine_mrr@20
- cosine_mrr@50
- cosine_mrr@100
- cosine_mrr@150
- cosine_mrr@200
- cosine_map@1
- cosine_map@20
- cosine_map@50
- cosine_map@100
- cosine_map@150
- cosine_map@200
- cosine_map@500
model-index:
- name: SentenceTransformer based on BAAI/bge-large-en-v1.5
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: full en
type: full_en
metrics:
- type: cosine_accuracy@1
value: 0.7302631578947368
name: Cosine Accuracy@1
- type: cosine_accuracy@20
value: 0.993421052631579
name: Cosine Accuracy@20
- type: cosine_accuracy@50
value: 0.9967105263157895
name: Cosine Accuracy@50
- type: cosine_accuracy@100
value: 1.0
name: Cosine Accuracy@100
- type: cosine_accuracy@150
value: 1.0
name: Cosine Accuracy@150
- type: cosine_accuracy@200
value: 1.0
name: Cosine Accuracy@200
- type: cosine_precision@1
value: 0.7302631578947368
name: Cosine Precision@1
- type: cosine_precision@20
value: 0.4998355263157894
name: Cosine Precision@20
- type: cosine_precision@50
value: 0.39184210526315794
name: Cosine Precision@50
- type: cosine_precision@100
value: 0.3111842105263158
name: Cosine Precision@100
- type: cosine_precision@150
value: 0.2652412280701754
name: Cosine Precision@150
- type: cosine_precision@200
value: 0.232171052631579
name: Cosine Precision@200
- type: cosine_recall@1
value: 0.010227350724729817
name: Cosine Recall@1
- type: cosine_recall@20
value: 0.13368254620254577
name: Cosine Recall@20
- type: cosine_recall@50
value: 0.2541249933594102
name: Cosine Recall@50
- type: cosine_recall@100
value: 0.3948435268881245
name: Cosine Recall@100
- type: cosine_recall@150
value: 0.49626849018850344
name: Cosine Recall@150
- type: cosine_recall@200
value: 0.5720837677245543
name: Cosine Recall@200
- type: cosine_ndcg@1
value: 0.7302631578947368
name: Cosine Ndcg@1
- type: cosine_ndcg@20
value: 0.5384654647855256
name: Cosine Ndcg@20
- type: cosine_ndcg@50
value: 0.44986527953229877
name: Cosine Ndcg@50
- type: cosine_ndcg@100
value: 0.44277699637488865
name: Cosine Ndcg@100
- type: cosine_ndcg@150
value: 0.4895063673734854
name: Cosine Ndcg@150
- type: cosine_ndcg@200
value: 0.5346148440105628
name: Cosine Ndcg@200
- type: cosine_mrr@1
value: 0.7302631578947368
name: Cosine Mrr@1
- type: cosine_mrr@20
value: 0.8341772399749373
name: Cosine Mrr@20
- type: cosine_mrr@50
value: 0.8343338815789473
name: Cosine Mrr@50
- type: cosine_mrr@100
value: 0.8343905966424682
name: Cosine Mrr@100
- type: cosine_mrr@150
value: 0.8343905966424682
name: Cosine Mrr@150
- type: cosine_mrr@200
value: 0.8343905966424682
name: Cosine Mrr@200
- type: cosine_map@1
value: 0.7302631578947368
name: Cosine Map@1
- type: cosine_map@20
value: 0.3434603918412553
name: Cosine Map@20
- type: cosine_map@50
value: 0.23779270403918282
name: Cosine Map@50
- type: cosine_map@100
value: 0.21161540263537876
name: Cosine Map@100
- type: cosine_map@150
value: 0.22899252179487295
name: Cosine Map@150
- type: cosine_map@200
value: 0.24784282323083537
name: Cosine Map@200
- type: cosine_map@500
value: 0.298154972004029
name: Cosine Map@500
---
# Job-Skill matching fine-tuned BAAI/bge-large-en-v1.5
Top-performing model on [TalentCLEF 2025](https://talentclef.github.io/talentclef/) Task B. Use it for job title <-> skill-set matching.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [BAAI/bge-large-en-v1.5](https://huggingface.co/BAAI/bge-large-en-v1.5) <!-- at revision d4aa6901d3a41ba39fb536a557fa166f842b0e09 -->
- **Maximum Sequence Length:** 256 tokens
- **Output Dimensionality:** 1024 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': True}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("pj-mathematician/JobSkillBGE-large-en-v1.5")
# Run inference
sentences = [
'For roles such as import/export manager, graduate export manager, senior export manager, and other related positions in meat and meat products, the key skills include a strong understanding of international trade regulations, meat product knowledge, customs compliance, and excellent negotiation and communication skills to manage global supply chains effectively. Additionally, proficiency in relevant trade software and languages can be highly beneficial.',
'Job roles such as Performance Analyst, Quality Assurance Engineer, and Test Manager require skills in conducting performance measurement and organizing or managing conversion testing to ensure software and systems meet performance standards and function correctly in real-world scenarios.',
'Job roles that require skills such as managing staff, coordinating employees, and performing HR activities include Human Resources Managers, Team Leaders, Supervisors, and Department Heads, all of whom are responsible for overseeing personnel, implementing HR policies, and ensuring efficient team operations.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Information Retrieval
* Dataset: `full_en`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:---------------------|:-----------|
| cosine_accuracy@1 | 0.7303 |
| cosine_accuracy@20 | 0.9934 |
| cosine_accuracy@50 | 0.9967 |
| cosine_accuracy@100 | 1.0 |
| cosine_accuracy@150 | 1.0 |
| cosine_accuracy@200 | 1.0 |
| cosine_precision@1 | 0.7303 |
| cosine_precision@20 | 0.4998 |
| cosine_precision@50 | 0.3918 |
| cosine_precision@100 | 0.3112 |
| cosine_precision@150 | 0.2652 |
| cosine_precision@200 | 0.2322 |
| cosine_recall@1 | 0.0102 |
| cosine_recall@20 | 0.1337 |
| cosine_recall@50 | 0.2541 |
| cosine_recall@100 | 0.3948 |
| cosine_recall@150 | 0.4963 |
| cosine_recall@200 | 0.5721 |
| cosine_ndcg@1 | 0.7303 |
| cosine_ndcg@20 | 0.5385 |
| cosine_ndcg@50 | 0.4499 |
| cosine_ndcg@100 | 0.4428 |
| cosine_ndcg@150 | 0.4895 |
| **cosine_ndcg@200** | **0.5346** |
| cosine_mrr@1 | 0.7303 |
| cosine_mrr@20 | 0.8342 |
| cosine_mrr@50 | 0.8343 |
| cosine_mrr@100 | 0.8344 |
| cosine_mrr@150 | 0.8344 |
| cosine_mrr@200 | 0.8344 |
| cosine_map@1 | 0.7303 |
| cosine_map@20 | 0.3435 |
| cosine_map@50 | 0.2378 |
| cosine_map@100 | 0.2116 |
| cosine_map@150 | 0.229 |
| cosine_map@200 | 0.2478 |
| cosine_map@500 | 0.2982 |
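As a reference, here is a minimal sketch of how an `InformationRetrievalEvaluator` run can be reproduced. The queries, corpus, and relevance judgments below are toy placeholders, not the actual TalentCLEF evaluation data.

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("pj-mathematician/JobSkillBGE-large-en-v1.5")

# Toy data: query ids -> text, corpus ids -> text, query ids -> relevant corpus ids
queries = {"q1": "Skills required for a technical director role"}
corpus = {
    "d1": "Oversee technical operations, manage teams, and ensure project execution.",
    "d2": "Prepare and bake artisan bread in a commercial kitchen.",
}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs, name="toy_en")
print(evaluator(model))  # dict of accuracy/precision/recall/NDCG/MRR/MAP at various k
```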
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 114,699 training samples
* Columns: <code>anchor</code> and <code>positive</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive |
|:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 43 tokens</li><li>mean: 65.45 tokens</li><li>max: 116 tokens</li></ul> | <ul><li>min: 34 tokens</li><li>mean: 55.34 tokens</li><li>max: 162 tokens</li></ul> |
* Samples:
| anchor | positive |
|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>A technical director or any of its synonyms requires a strong blend of technical expertise and leadership skills, including the ability to oversee technical operations, manage teams, and ensure the successful execution of technical projects while maintaining operational efficiency and innovation.</code> | <code>Job roles that require promoting health and safety include occupational health and safety specialists, safety managers, and public health educators, all of whom work to ensure safe and healthy environments in workplaces and communities.</code> |
| <code>A technical director or any of its synonyms requires a strong blend of technical expertise and leadership skills, including the ability to oversee technical operations, manage teams, and ensure the successful execution of technical projects while maintaining operational efficiency and innovation.</code> | <code>Job roles that require organizing rehearsals include directors, choreographers, and conductors in theater, dance, and music ensembles, who must efficiently plan and schedule practice sessions to prepare performers for a successful final performance.</code> |
| <code>A technical director or any of its synonyms requires a strong blend of technical expertise and leadership skills, including the ability to oversee technical operations, manage teams, and ensure the successful execution of technical projects while maintaining operational efficiency and innovation.</code> | <code>Job roles such as Health and Safety Managers, Environmental Health Officers, and Risk Management Specialists often require the skill of negotiating health and safety issues with third parties to ensure compliance and protection standards are met across different organizations and sites.</code> |
* Loss: [<code>CachedGISTEmbedLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cachedgistembedloss) with these parameters:
```json
{'guide': SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
), 'temperature': 0.01, 'mini_batch_size': 32, 'margin_strategy': 'absolute', 'margin': 0.0}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 64
- `per_device_eval_batch_size`: 128
- `gradient_accumulation_steps`: 2
- `num_train_epochs`: 5
- `warmup_ratio`: 0.05
- `log_on_each_node`: False
- `fp16`: True
- `dataloader_num_workers`: 4
- `ddp_find_unused_parameters`: True
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 64
- `per_device_eval_batch_size`: 128
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 2
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 5
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.05
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: False
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: True
- `dataloader_num_workers`: 4
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `tp_size`: 0
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: True
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | full_en_cosine_ndcg@200 |
|:------:|:----:|:-------------:|:-----------------------:|
| -1 | -1 | - | 0.4784 |
| 0.0011 | 1 | 9.119 | - |
| 0.1116 | 100 | 4.1469 | - |
| 0.2232 | 200 | 2.5294 | 0.5362 |
| 0.3348 | 300 | 2.3611 | - |
| 0.4464 | 400 | 2.192 | 0.5318 |
| 0.5580 | 500 | 2.0338 | - |
| 0.6696 | 600 | 1.9009 | 0.5383 |
| 0.7812 | 700 | 1.8404 | - |
| 0.8929 | 800 | 1.7692 | 0.5352 |
| 1.0045 | 900 | 1.6921 | - |
| 1.1161 | 1000 | 1.3861 | 0.5368 |
| 1.2277 | 1100 | 1.3863 | - |
| 1.3393 | 1200 | 1.3546 | 0.5259 |
| 1.4509 | 1300 | 1.373 | - |
| 1.5625 | 1400 | 1.3364 | 0.5303 |
| 1.6741 | 1500 | 1.2876 | - |
| 1.7857 | 1600 | 1.3094 | 0.5323 |
| 1.8973 | 1700 | 1.2784 | - |
| 2.0089 | 1800 | 1.2204 | 0.5330 |
| 2.1205 | 1900 | 0.9617 | - |
| 2.2321 | 2000 | 1.0004 | 0.5277 |
| 2.3438 | 2100 | 0.9694 | - |
| 2.4554 | 2200 | 0.9843 | 0.5356 |
| 2.5670 | 2300 | 0.9743 | - |
| 2.6786 | 2400 | 0.9252 | 0.5320 |
| 2.7902 | 2500 | 0.9272 | - |
| 2.9018 | 2600 | 0.9279 | 0.5333 |
| 3.0134 | 2700 | 0.857 | - |
| 3.125 | 2800 | 0.7313 | 0.5300 |
| 3.2366 | 2900 | 0.7103 | - |
| 3.3482 | 3000 | 0.7187 | 0.5319 |
| 3.4598 | 3100 | 0.7067 | - |
| 3.5714 | 3200 | 0.7157 | 0.5369 |
| 3.6830 | 3300 | 0.7113 | - |
| 3.7946 | 3400 | 0.7013 | 0.5341 |
| 3.9062 | 3500 | 0.6903 | - |
| 4.0179 | 3600 | 0.6462 | 0.5335 |
| 4.1295 | 3700 | 0.5162 | - |
| 4.2411 | 3800 | 0.524 | 0.5352 |
| 4.3527 | 3900 | 0.5303 | - |
| 4.4643 | 4000 | 0.5269 | 0.5341 |
| 4.5759 | 4100 | 0.4824 | - |
| 4.6875 | 4200 | 0.5222 | 0.5342 |
| 4.7991 | 4300 | 0.5104 | - |
| 4.9107 | 4400 | 0.5002 | 0.5346 |
### Framework Versions
- Python: 3.11.11
- Sentence Transformers: 4.1.0
- Transformers: 4.51.2
- PyTorch: 2.6.0+cu124
- Accelerate: 1.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
20-kamal-kaur-18k/FULL.VIDEO.18.kamal.kaur.viral.Videos.Tutorial.Official.Twotter.link
|
20-kamal-kaur-18k
| 2025-06-20T15:45:03Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-20T15:42:33Z |
|
mradermacher/DeepSeek-R1-0528-i1-GGUF
|
mradermacher
| 2025-06-20T14:36:10Z | 0 | 3 |
transformers
|
[
"transformers",
"en",
"base_model:deepseek-ai/DeepSeek-R1-0528",
"base_model:finetune:deepseek-ai/DeepSeek-R1-0528",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2025-06-12T09:11:03Z |
---
base_model: deepseek-ai/DeepSeek-R1-0528
language:
- en
library_name: transformers
license: mit
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/deepseek-ai/DeepSeek-R1-0528
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/DeepSeek-R1-0528-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
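For example, the split pieces can be joined back into a single file before loading; here is a minimal Python sketch using the IQ1_S parts listed in the table below (a plain `cat part* > file.gguf` achieves the same):
```python
import shutil

# The IQ1_S quant is split into three pieces; join them back into one GGUF file
parts = [
    "DeepSeek-R1-0528.i1-IQ1_S.gguf.part1of3",
    "DeepSeek-R1-0528.i1-IQ1_S.gguf.part2of3",
    "DeepSeek-R1-0528.i1-IQ1_S.gguf.part3of3",
]
with open("DeepSeek-R1-0528.i1-IQ1_S.gguf", "wb") as merged:
    for part in parts:
        with open(part, "rb") as chunk:
            shutil.copyfileobj(chunk, merged)
```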
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable over similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [PART 1](https://huggingface.co/mradermacher/DeepSeek-R1-0528-i1-GGUF/resolve/main/DeepSeek-R1-0528.i1-IQ1_S.gguf.part1of3) [PART 2](https://huggingface.co/mradermacher/DeepSeek-R1-0528-i1-GGUF/resolve/main/DeepSeek-R1-0528.i1-IQ1_S.gguf.part2of3) [PART 3](https://huggingface.co/mradermacher/DeepSeek-R1-0528-i1-GGUF/resolve/main/DeepSeek-R1-0528.i1-IQ1_S.gguf.part3of3) | i1-IQ1_S | 133.8 | for the desperate |
| [PART 1](https://huggingface.co/mradermacher/DeepSeek-R1-0528-i1-GGUF/resolve/main/DeepSeek-R1-0528.i1-IQ1_M.gguf.part1of4) [PART 2](https://huggingface.co/mradermacher/DeepSeek-R1-0528-i1-GGUF/resolve/main/DeepSeek-R1-0528.i1-IQ1_M.gguf.part2of4) [PART 3](https://huggingface.co/mradermacher/DeepSeek-R1-0528-i1-GGUF/resolve/main/DeepSeek-R1-0528.i1-IQ1_M.gguf.part3of4) [PART 4](https://huggingface.co/mradermacher/DeepSeek-R1-0528-i1-GGUF/resolve/main/DeepSeek-R1-0528.i1-IQ1_M.gguf.part4of4) | i1-IQ1_M | 149.2 | mostly desperate |
| [PART 1](https://huggingface.co/mradermacher/DeepSeek-R1-0528-i1-GGUF/resolve/main/DeepSeek-R1-0528.i1-IQ2_XXS.gguf.part1of4) [PART 2](https://huggingface.co/mradermacher/DeepSeek-R1-0528-i1-GGUF/resolve/main/DeepSeek-R1-0528.i1-IQ2_XXS.gguf.part2of4) [PART 3](https://huggingface.co/mradermacher/DeepSeek-R1-0528-i1-GGUF/resolve/main/DeepSeek-R1-0528.i1-IQ2_XXS.gguf.part3of4) [PART 4](https://huggingface.co/mradermacher/DeepSeek-R1-0528-i1-GGUF/resolve/main/DeepSeek-R1-0528.i1-IQ2_XXS.gguf.part4of4) | i1-IQ2_XXS | 174.7 | |
| [PART 1](https://huggingface.co/mradermacher/DeepSeek-R1-0528-i1-GGUF/resolve/main/DeepSeek-R1-0528.i1-IQ2_XS.gguf.part1of4) [PART 2](https://huggingface.co/mradermacher/DeepSeek-R1-0528-i1-GGUF/resolve/main/DeepSeek-R1-0528.i1-IQ2_XS.gguf.part2of4) [PART 3](https://huggingface.co/mradermacher/DeepSeek-R1-0528-i1-GGUF/resolve/main/DeepSeek-R1-0528.i1-IQ2_XS.gguf.part3of4) [PART 4](https://huggingface.co/mradermacher/DeepSeek-R1-0528-i1-GGUF/resolve/main/DeepSeek-R1-0528.i1-IQ2_XS.gguf.part4of4) | i1-IQ2_XS | 195.3 | |
| [PART 1](https://huggingface.co/mradermacher/DeepSeek-R1-0528-i1-GGUF/resolve/main/DeepSeek-R1-0528.i1-IQ2_S.gguf.part1of4) [PART 2](https://huggingface.co/mradermacher/DeepSeek-R1-0528-i1-GGUF/resolve/main/DeepSeek-R1-0528.i1-IQ2_S.gguf.part2of4) [PART 3](https://huggingface.co/mradermacher/DeepSeek-R1-0528-i1-GGUF/resolve/main/DeepSeek-R1-0528.i1-IQ2_S.gguf.part3of4) [PART 4](https://huggingface.co/mradermacher/DeepSeek-R1-0528-i1-GGUF/resolve/main/DeepSeek-R1-0528.i1-IQ2_S.gguf.part4of4) | i1-IQ2_S | 197.2 | |
| [P1](https://huggingface.co/mradermacher/DeepSeek-R1-0528-i1-GGUF/resolve/main/DeepSeek-R1-0528.i1-IQ2_M.gguf.part1of5) [P2](https://huggingface.co/mradermacher/DeepSeek-R1-0528-i1-GGUF/resolve/main/DeepSeek-R1-0528.i1-IQ2_M.gguf.part2of5) [P3](https://huggingface.co/mradermacher/DeepSeek-R1-0528-i1-GGUF/resolve/main/DeepSeek-R1-0528.i1-IQ2_M.gguf.part3of5) [P4](https://huggingface.co/mradermacher/DeepSeek-R1-0528-i1-GGUF/resolve/main/DeepSeek-R1-0528.i1-IQ2_M.gguf.part4of5) [P5](https://huggingface.co/mradermacher/DeepSeek-R1-0528-i1-GGUF/resolve/main/DeepSeek-R1-0528.i1-IQ2_M.gguf.part5of5) | i1-IQ2_M | 217.7 | |
| [P1](https://huggingface.co/mradermacher/DeepSeek-R1-0528-i1-GGUF/resolve/main/DeepSeek-R1-0528.i1-Q2_K_S.gguf.part1of5) [P2](https://huggingface.co/mradermacher/DeepSeek-R1-0528-i1-GGUF/resolve/main/DeepSeek-R1-0528.i1-Q2_K_S.gguf.part2of5) [P3](https://huggingface.co/mradermacher/DeepSeek-R1-0528-i1-GGUF/resolve/main/DeepSeek-R1-0528.i1-Q2_K_S.gguf.part3of5) [P4](https://huggingface.co/mradermacher/DeepSeek-R1-0528-i1-GGUF/resolve/main/DeepSeek-R1-0528.i1-Q2_K_S.gguf.part4of5) [P5](https://huggingface.co/mradermacher/DeepSeek-R1-0528-i1-GGUF/resolve/main/DeepSeek-R1-0528.i1-Q2_K_S.gguf.part5of5) | i1-Q2_K_S | 224.9 | very low quality |
| [P1](https://huggingface.co/mradermacher/DeepSeek-R1-0528-i1-GGUF/resolve/main/DeepSeek-R1-0528.i1-Q2_K.gguf.part1of5) [P2](https://huggingface.co/mradermacher/DeepSeek-R1-0528-i1-GGUF/resolve/main/DeepSeek-R1-0528.i1-Q2_K.gguf.part2of5) [P3](https://huggingface.co/mradermacher/DeepSeek-R1-0528-i1-GGUF/resolve/main/DeepSeek-R1-0528.i1-Q2_K.gguf.part3of5) [P4](https://huggingface.co/mradermacher/DeepSeek-R1-0528-i1-GGUF/resolve/main/DeepSeek-R1-0528.i1-Q2_K.gguf.part4of5) [P5](https://huggingface.co/mradermacher/DeepSeek-R1-0528-i1-GGUF/resolve/main/DeepSeek-R1-0528.i1-Q2_K.gguf.part5of5) | i1-Q2_K | 244.2 | IQ3_XXS probably better |
| [P1](https://huggingface.co/mradermacher/DeepSeek-R1-0528-i1-GGUF/resolve/main/DeepSeek-R1-0528.i1-IQ3_XXS.gguf.part1of6) [P2](https://huggingface.co/mradermacher/DeepSeek-R1-0528-i1-GGUF/resolve/main/DeepSeek-R1-0528.i1-IQ3_XXS.gguf.part2of6) [P3](https://huggingface.co/mradermacher/DeepSeek-R1-0528-i1-GGUF/resolve/main/DeepSeek-R1-0528.i1-IQ3_XXS.gguf.part3of6) [P4](https://huggingface.co/mradermacher/DeepSeek-R1-0528-i1-GGUF/resolve/main/DeepSeek-R1-0528.i1-IQ3_XXS.gguf.part4of6) [P5](https://huggingface.co/mradermacher/DeepSeek-R1-0528-i1-GGUF/resolve/main/DeepSeek-R1-0528.i1-IQ3_XXS.gguf.part5of6) [P6](https://huggingface.co/mradermacher/DeepSeek-R1-0528-i1-GGUF/resolve/main/DeepSeek-R1-0528.i1-IQ3_XXS.gguf.part6of6) | i1-IQ3_XXS | 258.1 | lower quality |
| [P1](https://huggingface.co/mradermacher/DeepSeek-R1-0528-i1-GGUF/resolve/main/DeepSeek-R1-0528.i1-IQ3_XS.gguf.part1of6) [P2](https://huggingface.co/mradermacher/DeepSeek-R1-0528-i1-GGUF/resolve/main/DeepSeek-R1-0528.i1-IQ3_XS.gguf.part2of6) [P3](https://huggingface.co/mradermacher/DeepSeek-R1-0528-i1-GGUF/resolve/main/DeepSeek-R1-0528.i1-IQ3_XS.gguf.part3of6) [P4](https://huggingface.co/mradermacher/DeepSeek-R1-0528-i1-GGUF/resolve/main/DeepSeek-R1-0528.i1-IQ3_XS.gguf.part4of6) [P5](https://huggingface.co/mradermacher/DeepSeek-R1-0528-i1-GGUF/resolve/main/DeepSeek-R1-0528.i1-IQ3_XS.gguf.part5of6) [P6](https://huggingface.co/mradermacher/DeepSeek-R1-0528-i1-GGUF/resolve/main/DeepSeek-R1-0528.i1-IQ3_XS.gguf.part6of6) | i1-IQ3_XS | 273.0 | |
| [P1](https://huggingface.co/mradermacher/DeepSeek-R1-0528-i1-GGUF/resolve/main/DeepSeek-R1-0528.i1-IQ3_S.gguf.part1of6) [P2](https://huggingface.co/mradermacher/DeepSeek-R1-0528-i1-GGUF/resolve/main/DeepSeek-R1-0528.i1-IQ3_S.gguf.part2of6) [P3](https://huggingface.co/mradermacher/DeepSeek-R1-0528-i1-GGUF/resolve/main/DeepSeek-R1-0528.i1-IQ3_S.gguf.part3of6) [P4](https://huggingface.co/mradermacher/DeepSeek-R1-0528-i1-GGUF/resolve/main/DeepSeek-R1-0528.i1-IQ3_S.gguf.part4of6) [P5](https://huggingface.co/mradermacher/DeepSeek-R1-0528-i1-GGUF/resolve/main/DeepSeek-R1-0528.i1-IQ3_S.gguf.part5of6) [P6](https://huggingface.co/mradermacher/DeepSeek-R1-0528-i1-GGUF/resolve/main/DeepSeek-R1-0528.i1-IQ3_S.gguf.part6of6) | i1-IQ3_S | 289.3 | beats Q3_K* |
| [P1](https://huggingface.co/mradermacher/DeepSeek-R1-0528-i1-GGUF/resolve/main/DeepSeek-R1-0528.i1-Q3_K_S.gguf.part1of6) [P2](https://huggingface.co/mradermacher/DeepSeek-R1-0528-i1-GGUF/resolve/main/DeepSeek-R1-0528.i1-Q3_K_S.gguf.part2of6) [P3](https://huggingface.co/mradermacher/DeepSeek-R1-0528-i1-GGUF/resolve/main/DeepSeek-R1-0528.i1-Q3_K_S.gguf.part3of6) [P4](https://huggingface.co/mradermacher/DeepSeek-R1-0528-i1-GGUF/resolve/main/DeepSeek-R1-0528.i1-Q3_K_S.gguf.part4of6) [P5](https://huggingface.co/mradermacher/DeepSeek-R1-0528-i1-GGUF/resolve/main/DeepSeek-R1-0528.i1-Q3_K_S.gguf.part5of6) [P6](https://huggingface.co/mradermacher/DeepSeek-R1-0528-i1-GGUF/resolve/main/DeepSeek-R1-0528.i1-Q3_K_S.gguf.part6of6) | i1-Q3_K_S | 289.3 | IQ3_XS probably better |
| [P1](https://huggingface.co/mradermacher/DeepSeek-R1-0528-i1-GGUF/resolve/main/DeepSeek-R1-0528.i1-IQ3_M.gguf.part1of6) [P2](https://huggingface.co/mradermacher/DeepSeek-R1-0528-i1-GGUF/resolve/main/DeepSeek-R1-0528.i1-IQ3_M.gguf.part2of6) [P3](https://huggingface.co/mradermacher/DeepSeek-R1-0528-i1-GGUF/resolve/main/DeepSeek-R1-0528.i1-IQ3_M.gguf.part3of6) [P4](https://huggingface.co/mradermacher/DeepSeek-R1-0528-i1-GGUF/resolve/main/DeepSeek-R1-0528.i1-IQ3_M.gguf.part4of6) [P5](https://huggingface.co/mradermacher/DeepSeek-R1-0528-i1-GGUF/resolve/main/DeepSeek-R1-0528.i1-IQ3_M.gguf.part5of6) [P6](https://huggingface.co/mradermacher/DeepSeek-R1-0528-i1-GGUF/resolve/main/DeepSeek-R1-0528.i1-IQ3_M.gguf.part6of6) | i1-IQ3_M | 292.3 | |
| [P1](https://huggingface.co/mradermacher/DeepSeek-R1-0528-i1-GGUF/resolve/main/DeepSeek-R1-0528.i1-Q3_K_M.gguf.part1of7) [P2](https://huggingface.co/mradermacher/DeepSeek-R1-0528-i1-GGUF/resolve/main/DeepSeek-R1-0528.i1-Q3_K_M.gguf.part2of7) [P3](https://huggingface.co/mradermacher/DeepSeek-R1-0528-i1-GGUF/resolve/main/DeepSeek-R1-0528.i1-Q3_K_M.gguf.part3of7) [P4](https://huggingface.co/mradermacher/DeepSeek-R1-0528-i1-GGUF/resolve/main/DeepSeek-R1-0528.i1-Q3_K_M.gguf.part4of7) [P5](https://huggingface.co/mradermacher/DeepSeek-R1-0528-i1-GGUF/resolve/main/DeepSeek-R1-0528.i1-Q3_K_M.gguf.part5of7) [P6](https://huggingface.co/mradermacher/DeepSeek-R1-0528-i1-GGUF/resolve/main/DeepSeek-R1-0528.i1-Q3_K_M.gguf.part6of7) [P7](https://huggingface.co/mradermacher/DeepSeek-R1-0528-i1-GGUF/resolve/main/DeepSeek-R1-0528.i1-Q3_K_M.gguf.part7of7) | i1-Q3_K_M | 319.4 | IQ3_S probably better |
| [P1](https://huggingface.co/mradermacher/DeepSeek-R1-0528-i1-GGUF/resolve/main/DeepSeek-R1-0528.i1-Q3_K_L.gguf.part1of8) [P2](https://huggingface.co/mradermacher/DeepSeek-R1-0528-i1-GGUF/resolve/main/DeepSeek-R1-0528.i1-Q3_K_L.gguf.part2of8) [P3](https://huggingface.co/mradermacher/DeepSeek-R1-0528-i1-GGUF/resolve/main/DeepSeek-R1-0528.i1-Q3_K_L.gguf.part3of8) [P4](https://huggingface.co/mradermacher/DeepSeek-R1-0528-i1-GGUF/resolve/main/DeepSeek-R1-0528.i1-Q3_K_L.gguf.part4of8) [P5](https://huggingface.co/mradermacher/DeepSeek-R1-0528-i1-GGUF/resolve/main/DeepSeek-R1-0528.i1-Q3_K_L.gguf.part5of8) [P6](https://huggingface.co/mradermacher/DeepSeek-R1-0528-i1-GGUF/resolve/main/DeepSeek-R1-0528.i1-Q3_K_L.gguf.part6of8) [P7](https://huggingface.co/mradermacher/DeepSeek-R1-0528-i1-GGUF/resolve/main/DeepSeek-R1-0528.i1-Q3_K_L.gguf.part7of8) [P8](https://huggingface.co/mradermacher/DeepSeek-R1-0528-i1-GGUF/resolve/main/DeepSeek-R1-0528.i1-Q3_K_L.gguf.part8of8) | i1-Q3_K_L | 347.6 | IQ3_M probably better |
| [P1](https://huggingface.co/mradermacher/DeepSeek-R1-0528-i1-GGUF/resolve/main/DeepSeek-R1-0528.i1-IQ4_XS.gguf.part1of8) [P2](https://huggingface.co/mradermacher/DeepSeek-R1-0528-i1-GGUF/resolve/main/DeepSeek-R1-0528.i1-IQ4_XS.gguf.part2of8) [P3](https://huggingface.co/mradermacher/DeepSeek-R1-0528-i1-GGUF/resolve/main/DeepSeek-R1-0528.i1-IQ4_XS.gguf.part3of8) [P4](https://huggingface.co/mradermacher/DeepSeek-R1-0528-i1-GGUF/resolve/main/DeepSeek-R1-0528.i1-IQ4_XS.gguf.part4of8) [P5](https://huggingface.co/mradermacher/DeepSeek-R1-0528-i1-GGUF/resolve/main/DeepSeek-R1-0528.i1-IQ4_XS.gguf.part5of8) [P6](https://huggingface.co/mradermacher/DeepSeek-R1-0528-i1-GGUF/resolve/main/DeepSeek-R1-0528.i1-IQ4_XS.gguf.part6of8) [P7](https://huggingface.co/mradermacher/DeepSeek-R1-0528-i1-GGUF/resolve/main/DeepSeek-R1-0528.i1-IQ4_XS.gguf.part7of8) [P8](https://huggingface.co/mradermacher/DeepSeek-R1-0528-i1-GGUF/resolve/main/DeepSeek-R1-0528.i1-IQ4_XS.gguf.part8of8) | i1-IQ4_XS | 357.2 | |
| [P1](https://huggingface.co/mradermacher/DeepSeek-R1-0528-i1-GGUF/resolve/main/DeepSeek-R1-0528.i1-Q4_0.gguf.part1of8) [P2](https://huggingface.co/mradermacher/DeepSeek-R1-0528-i1-GGUF/resolve/main/DeepSeek-R1-0528.i1-Q4_0.gguf.part2of8) [P3](https://huggingface.co/mradermacher/DeepSeek-R1-0528-i1-GGUF/resolve/main/DeepSeek-R1-0528.i1-Q4_0.gguf.part3of8) [P4](https://huggingface.co/mradermacher/DeepSeek-R1-0528-i1-GGUF/resolve/main/DeepSeek-R1-0528.i1-Q4_0.gguf.part4of8) [P5](https://huggingface.co/mradermacher/DeepSeek-R1-0528-i1-GGUF/resolve/main/DeepSeek-R1-0528.i1-Q4_0.gguf.part5of8) [P6](https://huggingface.co/mradermacher/DeepSeek-R1-0528-i1-GGUF/resolve/main/DeepSeek-R1-0528.i1-Q4_0.gguf.part6of8) [P7](https://huggingface.co/mradermacher/DeepSeek-R1-0528-i1-GGUF/resolve/main/DeepSeek-R1-0528.i1-Q4_0.gguf.part7of8) [P8](https://huggingface.co/mradermacher/DeepSeek-R1-0528-i1-GGUF/resolve/main/DeepSeek-R1-0528.i1-Q4_0.gguf.part8of8) | i1-Q4_0 | 379.1 | fast, low quality |
| [P1](https://huggingface.co/mradermacher/DeepSeek-R1-0528-i1-GGUF/resolve/main/DeepSeek-R1-0528.i1-Q4_K_S.gguf.part1of8) [P2](https://huggingface.co/mradermacher/DeepSeek-R1-0528-i1-GGUF/resolve/main/DeepSeek-R1-0528.i1-Q4_K_S.gguf.part2of8) [P3](https://huggingface.co/mradermacher/DeepSeek-R1-0528-i1-GGUF/resolve/main/DeepSeek-R1-0528.i1-Q4_K_S.gguf.part3of8) [P4](https://huggingface.co/mradermacher/DeepSeek-R1-0528-i1-GGUF/resolve/main/DeepSeek-R1-0528.i1-Q4_K_S.gguf.part4of8) [P5](https://huggingface.co/mradermacher/DeepSeek-R1-0528-i1-GGUF/resolve/main/DeepSeek-R1-0528.i1-Q4_K_S.gguf.part5of8) [P6](https://huggingface.co/mradermacher/DeepSeek-R1-0528-i1-GGUF/resolve/main/DeepSeek-R1-0528.i1-Q4_K_S.gguf.part6of8) [P7](https://huggingface.co/mradermacher/DeepSeek-R1-0528-i1-GGUF/resolve/main/DeepSeek-R1-0528.i1-Q4_K_S.gguf.part7of8) [P8](https://huggingface.co/mradermacher/DeepSeek-R1-0528-i1-GGUF/resolve/main/DeepSeek-R1-0528.i1-Q4_K_S.gguf.part8of8) | i1-Q4_K_S | 380.2 | optimal size/speed/quality |
| [P1](https://huggingface.co/mradermacher/DeepSeek-R1-0528-i1-GGUF/resolve/main/DeepSeek-R1-0528.i1-Q4_K_M.gguf.part1of9) [P2](https://huggingface.co/mradermacher/DeepSeek-R1-0528-i1-GGUF/resolve/main/DeepSeek-R1-0528.i1-Q4_K_M.gguf.part2of9) [P3](https://huggingface.co/mradermacher/DeepSeek-R1-0528-i1-GGUF/resolve/main/DeepSeek-R1-0528.i1-Q4_K_M.gguf.part3of9) [P4](https://huggingface.co/mradermacher/DeepSeek-R1-0528-i1-GGUF/resolve/main/DeepSeek-R1-0528.i1-Q4_K_M.gguf.part4of9) [P5](https://huggingface.co/mradermacher/DeepSeek-R1-0528-i1-GGUF/resolve/main/DeepSeek-R1-0528.i1-Q4_K_M.gguf.part5of9) [P6](https://huggingface.co/mradermacher/DeepSeek-R1-0528-i1-GGUF/resolve/main/DeepSeek-R1-0528.i1-Q4_K_M.gguf.part6of9) [P7](https://huggingface.co/mradermacher/DeepSeek-R1-0528-i1-GGUF/resolve/main/DeepSeek-R1-0528.i1-Q4_K_M.gguf.part7of9) [P8](https://huggingface.co/mradermacher/DeepSeek-R1-0528-i1-GGUF/resolve/main/DeepSeek-R1-0528.i1-Q4_K_M.gguf.part8of9) [P9](https://huggingface.co/mradermacher/DeepSeek-R1-0528-i1-GGUF/resolve/main/DeepSeek-R1-0528.i1-Q4_K_M.gguf.part9of9) | i1-Q4_K_M | 404.6 | fast, recommended |
| [P1](https://huggingface.co/mradermacher/DeepSeek-R1-0528-i1-GGUF/resolve/main/DeepSeek-R1-0528.i1-Q4_1.gguf.part1of9) [P2](https://huggingface.co/mradermacher/DeepSeek-R1-0528-i1-GGUF/resolve/main/DeepSeek-R1-0528.i1-Q4_1.gguf.part2of9) [P3](https://huggingface.co/mradermacher/DeepSeek-R1-0528-i1-GGUF/resolve/main/DeepSeek-R1-0528.i1-Q4_1.gguf.part3of9) [P4](https://huggingface.co/mradermacher/DeepSeek-R1-0528-i1-GGUF/resolve/main/DeepSeek-R1-0528.i1-Q4_1.gguf.part4of9) [P5](https://huggingface.co/mradermacher/DeepSeek-R1-0528-i1-GGUF/resolve/main/DeepSeek-R1-0528.i1-Q4_1.gguf.part5of9) [P6](https://huggingface.co/mradermacher/DeepSeek-R1-0528-i1-GGUF/resolve/main/DeepSeek-R1-0528.i1-Q4_1.gguf.part6of9) [P7](https://huggingface.co/mradermacher/DeepSeek-R1-0528-i1-GGUF/resolve/main/DeepSeek-R1-0528.i1-Q4_1.gguf.part7of9) [P8](https://huggingface.co/mradermacher/DeepSeek-R1-0528-i1-GGUF/resolve/main/DeepSeek-R1-0528.i1-Q4_1.gguf.part8of9) [P9](https://huggingface.co/mradermacher/DeepSeek-R1-0528-i1-GGUF/resolve/main/DeepSeek-R1-0528.i1-Q4_1.gguf.part9of9) | i1-Q4_1 | 420.0 | |
| [P1](https://huggingface.co/mradermacher/DeepSeek-R1-0528-i1-GGUF/resolve/main/DeepSeek-R1-0528.i1-Q5_K_S.gguf.part01of10) [P2](https://huggingface.co/mradermacher/DeepSeek-R1-0528-i1-GGUF/resolve/main/DeepSeek-R1-0528.i1-Q5_K_S.gguf.part02of10) [P3](https://huggingface.co/mradermacher/DeepSeek-R1-0528-i1-GGUF/resolve/main/DeepSeek-R1-0528.i1-Q5_K_S.gguf.part03of10) [P4](https://huggingface.co/mradermacher/DeepSeek-R1-0528-i1-GGUF/resolve/main/DeepSeek-R1-0528.i1-Q5_K_S.gguf.part04of10) [P5](https://huggingface.co/mradermacher/DeepSeek-R1-0528-i1-GGUF/resolve/main/DeepSeek-R1-0528.i1-Q5_K_S.gguf.part05of10) [P6](https://huggingface.co/mradermacher/DeepSeek-R1-0528-i1-GGUF/resolve/main/DeepSeek-R1-0528.i1-Q5_K_S.gguf.part06of10) [P7](https://huggingface.co/mradermacher/DeepSeek-R1-0528-i1-GGUF/resolve/main/DeepSeek-R1-0528.i1-Q5_K_S.gguf.part07of10) [P8](https://huggingface.co/mradermacher/DeepSeek-R1-0528-i1-GGUF/resolve/main/DeepSeek-R1-0528.i1-Q5_K_S.gguf.part08of10) [P9](https://huggingface.co/mradermacher/DeepSeek-R1-0528-i1-GGUF/resolve/main/DeepSeek-R1-0528.i1-Q5_K_S.gguf.part09of10) [P10](https://huggingface.co/mradermacher/DeepSeek-R1-0528-i1-GGUF/resolve/main/DeepSeek-R1-0528.i1-Q5_K_S.gguf.part10of10) | i1-Q5_K_S | 461.9 | |
| [P1](https://huggingface.co/mradermacher/DeepSeek-R1-0528-i1-GGUF/resolve/main/DeepSeek-R1-0528.i1-Q5_K_M.gguf.part01of10) [P2](https://huggingface.co/mradermacher/DeepSeek-R1-0528-i1-GGUF/resolve/main/DeepSeek-R1-0528.i1-Q5_K_M.gguf.part02of10) [P3](https://huggingface.co/mradermacher/DeepSeek-R1-0528-i1-GGUF/resolve/main/DeepSeek-R1-0528.i1-Q5_K_M.gguf.part03of10) [P4](https://huggingface.co/mradermacher/DeepSeek-R1-0528-i1-GGUF/resolve/main/DeepSeek-R1-0528.i1-Q5_K_M.gguf.part04of10) [P5](https://huggingface.co/mradermacher/DeepSeek-R1-0528-i1-GGUF/resolve/main/DeepSeek-R1-0528.i1-Q5_K_M.gguf.part05of10) [P6](https://huggingface.co/mradermacher/DeepSeek-R1-0528-i1-GGUF/resolve/main/DeepSeek-R1-0528.i1-Q5_K_M.gguf.part06of10) [P7](https://huggingface.co/mradermacher/DeepSeek-R1-0528-i1-GGUF/resolve/main/DeepSeek-R1-0528.i1-Q5_K_M.gguf.part07of10) [P8](https://huggingface.co/mradermacher/DeepSeek-R1-0528-i1-GGUF/resolve/main/DeepSeek-R1-0528.i1-Q5_K_M.gguf.part08of10) [P9](https://huggingface.co/mradermacher/DeepSeek-R1-0528-i1-GGUF/resolve/main/DeepSeek-R1-0528.i1-Q5_K_M.gguf.part09of10) [P10](https://huggingface.co/mradermacher/DeepSeek-R1-0528-i1-GGUF/resolve/main/DeepSeek-R1-0528.i1-Q5_K_M.gguf.part10of10) | i1-Q5_K_M | 475.5 | |
| [P1](https://huggingface.co/mradermacher/DeepSeek-R1-0528-i1-GGUF/resolve/main/DeepSeek-R1-0528.i1-Q6_K.gguf.part01of12) [P2](https://huggingface.co/mradermacher/DeepSeek-R1-0528-i1-GGUF/resolve/main/DeepSeek-R1-0528.i1-Q6_K.gguf.part02of12) [P3](https://huggingface.co/mradermacher/DeepSeek-R1-0528-i1-GGUF/resolve/main/DeepSeek-R1-0528.i1-Q6_K.gguf.part03of12) [P4](https://huggingface.co/mradermacher/DeepSeek-R1-0528-i1-GGUF/resolve/main/DeepSeek-R1-0528.i1-Q6_K.gguf.part04of12) [P5](https://huggingface.co/mradermacher/DeepSeek-R1-0528-i1-GGUF/resolve/main/DeepSeek-R1-0528.i1-Q6_K.gguf.part05of12) [P6](https://huggingface.co/mradermacher/DeepSeek-R1-0528-i1-GGUF/resolve/main/DeepSeek-R1-0528.i1-Q6_K.gguf.part06of12) [P7](https://huggingface.co/mradermacher/DeepSeek-R1-0528-i1-GGUF/resolve/main/DeepSeek-R1-0528.i1-Q6_K.gguf.part07of12) [P8](https://huggingface.co/mradermacher/DeepSeek-R1-0528-i1-GGUF/resolve/main/DeepSeek-R1-0528.i1-Q6_K.gguf.part08of12) [P9](https://huggingface.co/mradermacher/DeepSeek-R1-0528-i1-GGUF/resolve/main/DeepSeek-R1-0528.i1-Q6_K.gguf.part09of12) [P10](https://huggingface.co/mradermacher/DeepSeek-R1-0528-i1-GGUF/resolve/main/DeepSeek-R1-0528.i1-Q6_K.gguf.part10of12) [P11](https://huggingface.co/mradermacher/DeepSeek-R1-0528-i1-GGUF/resolve/main/DeepSeek-R1-0528.i1-Q6_K.gguf.part11of12) [P12](https://huggingface.co/mradermacher/DeepSeek-R1-0528-i1-GGUF/resolve/main/DeepSeek-R1-0528.i1-Q6_K.gguf.part12of12) | i1-Q6_K | 551.0 | practically like static Q6_K |
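The larger quants above ship as numbered `.partXofY` files that must be joined back into a single `.gguf` before loading. A minimal Python sketch (not part of the original card, and assuming the parts are plain byte-splits that can be concatenated in order):
```python
# Hedged sketch: download the i1-Q4_K_M split parts and byte-concatenate
# them into one GGUF file (assumes plain byte-splitting, order part1..part9).
import shutil

from huggingface_hub import hf_hub_download

repo = "mradermacher/DeepSeek-R1-0528-i1-GGUF"
parts = [f"DeepSeek-R1-0528.i1-Q4_K_M.gguf.part{i}of9" for i in range(1, 10)]

with open("DeepSeek-R1-0528.i1-Q4_K_M.gguf", "wb") as out:
    for name in parts:
        path = hf_hub_download(repo_id=repo, filename=name)  # cached download
        with open(path, "rb") as part:
            shutil.copyfileobj(part, out)  # append this part's bytes
```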
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for answers to common
questions and for requesting quants of other models.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
MattMcG/titles_wee_qwen_split
|
MattMcG
| 2025-06-20T13:09:07Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:unsloth/Qwen3-1.7B-unsloth-bnb-4bit",
"base_model:finetune:unsloth/Qwen3-1.7B-unsloth-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-20T13:07:53Z |
---
base_model: unsloth/Qwen3-1.7B-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** MattMcG
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Qwen3-1.7B-unsloth-bnb-4bit
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
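A minimal usage sketch (assumed, not part of the original card; the prompt is illustrative):
```python
# Hedged sketch: load the uploaded checkpoint for text generation.
from transformers import pipeline

generator = pipeline("text-generation", model="MattMcG/titles_wee_qwen_split")
out = generator("Write a headline for an article about model quantization.",
                max_new_tokens=32)
print(out[0]["generated_text"])
```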
|
Triangle104/BetaCeti-Beta-4B-Prime1-Q5_K_M-GGUF
|
Triangle104
| 2025-06-20T12:59:32Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"text-generation-inference",
"reinforcement-learning",
"code",
"math",
"moe",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"base_model:prithivMLmods/BetaCeti-Beta-4B-Prime1",
"base_model:quantized:prithivMLmods/BetaCeti-Beta-4B-Prime1",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-20T12:58:47Z |
---
library_name: transformers
tags:
- text-generation-inference
- reinforcement-learning
- code
- math
- moe
- llama-cpp
- gguf-my-repo
license: apache-2.0
language:
- en
base_model: prithivMLmods/BetaCeti-Beta-4B-Prime1
pipeline_tag: text-generation
---
# Triangle104/BetaCeti-Beta-4B-Prime1-Q5_K_M-GGUF
This model was converted to GGUF format from [`prithivMLmods/BetaCeti-Beta-4B-Prime1`](https://huggingface.co/prithivMLmods/BetaCeti-Beta-4B-Prime1) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/prithivMLmods/BetaCeti-Beta-4B-Prime1) for more details on the model.
---
BetaCeti-Beta-4B-Prime1 is a compact, coding-optimized language model built on the Qwen3-4B architecture, tailored for high-accuracy code generation, debugging, and technical reasoning. With 4 billion parameters, it strikes a balance between performance and efficiency, making it an ideal assistant for developers, educators, and engineers working in constrained environments or requiring fast inference.
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/BetaCeti-Beta-4B-Prime1-Q5_K_M-GGUF --hf-file betaceti-beta-4b-prime1-q5_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/BetaCeti-Beta-4B-Prime1-Q5_K_M-GGUF --hf-file betaceti-beta-4b-prime1-q5_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```bash
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g. `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```bash
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```bash
./llama-cli --hf-repo Triangle104/BetaCeti-Beta-4B-Prime1-Q5_K_M-GGUF --hf-file betaceti-beta-4b-prime1-q5_k_m.gguf -p "The meaning to life and the universe is"
```
or
```bash
./llama-server --hf-repo Triangle104/BetaCeti-Beta-4B-Prime1-Q5_K_M-GGUF --hf-file betaceti-beta-4b-prime1-q5_k_m.gguf -c 2048
```
|
JonasBeking/MalRepoResearch
|
JonasBeking
| 2025-06-20T12:20:59Z | 0 | 0 | null |
[
"pytorch",
"region:us"
] | null | 2025-06-20T11:51:30Z |
## Research
This is used for research purposes.
|
ccgtay/base-adapter
|
ccgtay
| 2025-06-19T00:50:49Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-19T00:50:45Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
hyunwoo612/CODENENDAv2_GGUF
|
hyunwoo612
| 2025-06-18T05:45:53Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-18T02:02:12Z |
---
base_model: unsloth/qwen3-8b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** hyunwoo612
- **License:** apache-2.0
- **Finetuned from model:** unsloth/qwen3-8b-unsloth-bnb-4bit
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
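A minimal usage sketch (assumed, not part of the original card; despite the GGUF-style repo name, the tags list safetensors, so standard transformers loading is assumed):
```python
# Hedged sketch: load the checkpoint and generate a short completion.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "hyunwoo612/CODENENDAv2_GGUF"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo)

inputs = tokenizer("Hello, how are you?", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```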
|
morturr/Llama-2-7b-hf-LOO_headlines-COMB_amazon-comb1-seed42-2025-06-17
|
morturr
| 2025-06-17T20:12:11Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"license:llama2",
"region:us"
] | null | 2025-06-17T20:12:00Z |
---
library_name: peft
license: llama2
base_model: meta-llama/Llama-2-7b-hf
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Llama-2-7b-hf-LOO_headlines-COMB_amazon-comb1-seed42-2025-06-17
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-2-7b-hf-LOO_headlines-COMB_amazon-comb1-seed42-2025-06-17
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on an unspecified dataset.
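Since this repository holds a PEFT adapter rather than full model weights, a hedged loading sketch (assumed, not part of the original card) attaches it to the Llama-2 base model:
```python
# Hedged sketch: attach the PEFT/LoRA adapter to the Llama-2 base model.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")
model = PeftModel.from_pretrained(
    base,
    "morturr/Llama-2-7b-hf-LOO_headlines-COMB_amazon-comb1-seed42-2025-06-17",
)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
```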
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a sketch mapping them onto code follows the list):
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
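As a reading aid, here is a hedged sketch (not part of the original card) of how these values map onto `transformers.TrainingArguments`; the `output_dir` name is hypothetical:
```python
# Hedged sketch: the listed hyperparameters expressed as TrainingArguments.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="llama2-loo-headlines-comb1",  # hypothetical name
    learning_rate=3e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,  # effective train batch size: 8 * 4 = 32
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=2,
)
```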
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.0.2
- Tokenizers 0.20.1
|