modelId (string, 5 to 139 chars) | author (string, 2 to 42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-08-30 00:39:23) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 526 classes) | tags (list, 1 to 4.05k entries) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-08-30 00:39:08) | card (string, 11 to 1.01M chars) |
---|---|---|---|---|---|---|---|---|---|
001-Sophie-Rain-SpiderMan-Leaks-Free/Sophie.Rain.Spiderman.Video.Tutorial
|
001-Sophie-Rain-SpiderMan-Leaks-Free
| 2025-06-08T09:20:53Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-08T09:20:43Z |
|
eylulipci/30_dpo_ds30_lr1e-05_acc16_ep4_beta0.1-epoch2
|
eylulipci
| 2025-06-08T09:20:13Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-08T09:16:48Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
eylulipci/30_dpo_ds30_lr1e-05_acc16_ep4_beta0.2-epoch2
|
eylulipci
| 2025-06-08T09:17:30Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-08T09:15:18Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ferdinandjasong/SuperCoder-7B-Qwen2.5-0525-peft-merged
|
ferdinandjasong
| 2025-06-08T09:15:23Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"base_model:unsloth/Qwen2.5-Coder-7B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-Coder-7B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-08T09:13:32Z |
---
base_model: unsloth/Qwen2.5-Coder-7B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** ferdinandjasong
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Qwen2.5-Coder-7B-Instruct
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
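The card includes no usage snippet; below is a minimal text-generation sketch with the Transformers `pipeline`. The model ID comes from this card, but the prompt and generation settings are assumptions, and the load is wrapped in a function because the merged 7B checkpoint is downloaded on first use:

```python
def generate(prompt, max_new_tokens=256):
    # Lazy import and load: fetching the ~7B merged weights is expensive,
    # so nothing heavy happens until this function is actually called.
    from transformers import pipeline
    generator = pipeline(
        "text-generation",
        model="ferdinandjasong/SuperCoder-7B-Qwen2.5-0525-peft-merged",
        device_map="auto",
    )
    messages = [{"role": "user", "content": prompt}]
    out = generator(messages, max_new_tokens=max_new_tokens, return_full_text=False)
    return out[0]["generated_text"]

# Chat-format input of the kind a qwen2 instruct-style model expects:
example = [{"role": "user", "content": "Write a Python function that reverses a string."}]
```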
|
kowndinya23/ultrafeedback_binarized-alpaca-llama-3-1b-2-epochs-alpha-1-beta-1-2-epochs
|
kowndinya23
| 2025-06-08T09:13:01Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"trl",
"dpo",
"conversational",
"dataset:trl-lib/ultrafeedback_binarized",
"arxiv:2305.18290",
"base_model:kowndinya23/alpaca-cleaned-llama-3-1b-2-epochs-alpha-1-beta-1",
"base_model:finetune:kowndinya23/alpaca-cleaned-llama-3-1b-2-epochs-alpha-1-beta-1",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-08T08:17:26Z |
---
base_model: kowndinya23/alpaca-cleaned-llama-3-1b-2-epochs-alpha-1-beta-1
datasets: trl-lib/ultrafeedback_binarized
library_name: transformers
model_name: ultrafeedback_binarized-alpaca-llama-3-1b-2-epochs-alpha-1-beta-1-2-epochs
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---
# Model Card for ultrafeedback_binarized-alpaca-llama-3-1b-2-epochs-alpha-1-beta-1-2-epochs
This model is a fine-tuned version of [kowndinya23/alpaca-cleaned-llama-3-1b-2-epochs-alpha-1-beta-1](https://huggingface.co/kowndinya23/alpaca-cleaned-llama-3-1b-2-epochs-alpha-1-beta-1) on the [trl-lib/ultrafeedback_binarized](https://huggingface.co/datasets/trl-lib/ultrafeedback_binarized) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="kowndinya23/ultrafeedback_binarized-alpaca-llama-3-1b-2-epochs-alpha-1-beta-1-2-epochs", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://adobesensei.wandb.io/hrenduchinta/huggingface/runs/dssbjsho)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
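The DPO objective behind this run can be sketched in a few lines of plain Python. This is an illustrative formulation of the loss from the paper linked above, not the TRL implementation, and the `beta` value is a placeholder assumption:

```python
import math

def dpo_loss(policy_logp_chosen, policy_logp_rejected,
             ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """Per-pair DPO loss: -log sigmoid(beta * (policy margin - reference margin))."""
    margin = ((policy_logp_chosen - ref_logp_chosen)
              - (policy_logp_rejected - ref_logp_rejected))
    return -math.log(1.0 / (1.0 + math.exp(-beta * margin)))

# When the policy prefers the chosen response more strongly than the
# reference model does, the margin is positive and the loss falls below
# log(2) ~ 0.693, its value at zero margin.
print(round(dpo_loss(-10.0, -14.0, -12.0, -13.0), 4))
```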
### Framework versions
- TRL: 0.18.1
- Transformers: 4.52.4
- Pytorch: 2.7.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
MrCharli03/my_awesome_opus_books_model_t5
|
MrCharli03
| 2025-06-08T09:11:28Z | 128 | 0 |
transformers
|
[
"transformers",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2025-06-07T21:53:30Z |
---
library_name: transformers
license: apache-2.0
base_model: t5-small
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: my_awesome_opus_books_model_t5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_opus_books_model_t5
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9846
- Bleu: 0.1964
- Gen Len: 19.1017
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
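The listed `total_train_batch_size` follows directly from the per-device batch size and the gradient accumulation steps:

```python
# Values taken from the hyperparameter list above.
train_batch_size = 32
gradient_accumulation_steps = 2

total_train_batch_size = train_batch_size * gradient_accumulation_steps
print(total_train_batch_size)  # 64, matching the value reported above
```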
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| 3.3129 | 1.0 | 1169 | 2.9846 | 0.1964 | 19.1017 |
### Framework versions
- Transformers 4.52.4
- Pytorch 2.7.1+cu126
- Datasets 3.6.0
- Tokenizers 0.21.1
|
nguyen599/ViXML-RoBERTa-ESG-base
|
nguyen599
| 2025-06-08T09:09:14Z | 10 | 0 |
transformers
|
[
"transformers",
"safetensors",
"xlm-roberta",
"text-classification",
"finance",
"esg",
"financial-text-analysis",
"bert",
"en",
"vi",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-06-03T14:05:02Z |
---
license: apache-2.0
language:
- en
- vi
metrics:
- f1
base_model:
- FacebookAI/xlm-roberta-base
pipeline_tag: text-classification
tags:
- finance
- esg
- financial-text-analysis
- bert
library_name: transformers
widget:
- text: "Over three chapters, it covers a range of topics from energy efficiency and renewable energy to the circular economy and sustainable transportation."
---
ESG analysis can help investors determine a business's long-term sustainability and identify associated risks. ViXML-RoBERTa-ESG-base is a [FacebookAI/xlm-roberta-base](https://huggingface.co/FacebookAI/xlm-roberta-base) model fine-tuned on the [ViEn-ESG-100](https://huggingface.co/datasets/nguyen599/ViEn-ESG-100) dataset, which includes 100,000 annotated sentences from Vietnamese and English news and ESG reports.
**Input**: A financial text.
**Output**: Environmental, Social, Governance or None.
**Language support**: English, Vietnamese
# How to use
You can use this model with the Transformers pipeline for ESG classification.
```python
# tested in transformers==4.51.0
from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline
esgbert = AutoModelForSequenceClassification.from_pretrained('nguyen599/ViXML-RoBERTa-ESG-base',num_labels=4)
tokenizer = AutoTokenizer.from_pretrained('nguyen599/ViXML-RoBERTa-ESG-base')
nlp = pipeline("text-classification", model=esgbert, tokenizer=tokenizer)
results = nlp('Over three chapters, it covers a range of topics from energy efficiency and renewable energy to the circular economy and sustainable transportation.')
print(results) # [{'label': 'Environment', 'score': 0.9206041026115417}]
```
# Benchmark
F1 scores of models on each ESG category in the English portion of the ViEn-ESG-100 dataset.
<div align="center">
| **Model** | **Backbone** | **Param** | **E** | **S** | **G** | **N** |
| :------------ | :------------ | :------------: | :------------: | :------------: | :------------: | :------------: |
| **SEC-BERT-ft** | **SEC-BERT-base** | 109M | 83.12 | 66.77 | 66.53 | 60.30 |
| **FinBERT-ESG** | **FinBERT** | 109M | 92.67 | 84.90 | 86.25 | 87.26 |
| **FinBERT-ESG-9-class** | **FinBERT** | 109M | 92.16 | 89.01 | 91.35 | 86.89 |
| **ESGify** | **MPNet-base** | 109M | 67.72 | 30.20 | 50.76 | 43.44 |
| **EnvironmentBERT** | **DistilRoBERTa** | 82M | 92.15 | - | - | 92.76 |
| **SocialBERT** | **DistilRoBERTa** | 82M | - | 76.81 | - | 81.23 |
| **GovernanceBERT** | **DistilRoBERTa** | 82M | - | - | 64.46 | 80.06 |
| **ViBERT-ESG(Our)** | **BERT-base-cased** | 168M | 93.76 | 94.53 | 94.98 | **94.15** |
| **ViRoBERTa-ESG(Our)** | **RoBERTa-base** | 124M | 95.43 | 94.06 | 95.01 | 91.32 |
| **ViXLMRoBERTa-ESG(Our)** | **XLM-RoBERTa-base** | 278M | 95.00 | 95.00 | **95.47** | 92.19 |
| **ViDeBERTa-ESG(Our)** | **DeBERTa-v3-base** | 184M | **95.50** | 94.49 | 94.81 | 91.48 |
| **ViDeBERTa-small-ESG(Our)** | **DeBERTa-v3-small** | 141M | 94.55 | 94.85 | 94.58 | 90.19 |
| **ViDistilBERT-ESG(Our)** | **DistilBERT-base-cased** | 135M | 95.15 | **95.19** | 94.33 | 91.75 |
| **ViBERT-Env(Our)** | **BERT-base-cased** | 168M | 94.62 | - | - | 92.13 |
| **ViBERT-Soc(Our)** | **BERT-base-cased** | 168M | - | 94.86 | - | 92.22 |
| **ViBERT-Gov(Our)** | **BERT-base-cased** | 168M | - | - | 93.47 | 93.82 |
</div>
F1 scores of models on each ESG category in the Vietnamese portion of the ViEn-ESG-100 dataset.
<div align="center">
| **Model** | **Backbone** | **Param** | **E** | **S** | **G** | **N** |
| :------------ | :------------ | :------------: | :------------: | :------------: | :------------: | :------------: |
| **ViBERT-ESG** | **BERT-base-cased** | 168M | 93.50 | 89.73 | 91.77 | **91.78** |
| **ViRoBERTa-ESG** | **RoBERTa-base** | 124M | 93.41 | 91.49 | 89.93 | 84.32 |
| **ViXLMRoBERTa-ESG** | **XLM-RoBERTa-base** | 278M | 93.45 | 91.02 | 91.69 | 90.41 |
| **ViDeBERTa-ESG** | **DeBERTa-v3-base** | 184M | **95.24** | 89.36 | **93.18** | 85.23 |
| **ViDeBERTa-small-ESG** | **DeBERTa-v3-small** | 141M | 92.90 | 87.79 | 90.63 | 81.48 |
| **ViDistilBERT-ESG** | **DistilBERT-base-cased** | 135M | 93.87 | **91.98** | 90.63 | 87.17 |
| **ViBERT-Env** | **BERT-base-cased** | 168M | 94.87 | - | - | 91.15 |
| **ViBERT-Soc** | **BERT-base-cased** | 168M | - | 91.07 | - | 90.29 |
| **ViBERT-Gov** | **BERT-base-cased** | 168M | - | - | 92.62 | 90.11 |
</div>
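The per-category scores in the tables above are standard per-class F1. A generic sketch of the computation (the actual evaluation script for these tables is not published here, and the toy labels below are illustrative):

```python
def per_class_f1(y_true, y_pred, labels):
    """One-vs-rest F1 for each class: 2*P*R / (P + R)."""
    scores = {}
    for c in labels:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        scores[c] = (2 * precision * recall / (precision + recall)
                     if precision + recall else 0.0)
    return scores

# Toy example over the four categories used in the tables above.
y_true = ["E", "S", "G", "N", "E", "S"]
y_pred = ["E", "S", "N", "N", "E", "G"]
print(per_class_f1(y_true, y_pred, ["E", "S", "G", "N"]))
```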
|
IHaBiS/gemma3_4b_it_mrl_simsce_lora_finetuned_no_mrl
|
IHaBiS
| 2025-06-08T09:05:23Z | 0 | 0 | null |
[
"safetensors",
"region:us"
] | null | 2025-06-06T15:59:33Z |
LORA_R = 32
LORA_ALPHA = 64
LORA_DROPOUT = 0.1
MRL_DIMS_CONFIG = []  (no MRL truncation dims; only the original hidden size of 2560)
BATCH_SIZE = 8
LEARNING_RATE = 2e-5
EPOCHS = 1
SAVE_MODEL_CYCLE = 1
GRAD_ACCUMULATION_STEPS = 4
SIMCSE_TEMPERATURE = 0.05
TRAIN_SET_SIZE_SCALE = 0.05
VAL_SET_SIZE_SCALE = 0.05
GLOBAL_STEP_SAVE = 100
Dataset used: [princeton-nlp/datasets-for-simcse](https://huggingface.co/datasets/princeton-nlp/datasets-for-simcse)
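The contrastive objective implied by `SIMCSE_TEMPERATURE` above can be sketched in plain Python. This is an illustrative SimCSE-style InfoNCE loss on toy vectors, not the repository's training code:

```python
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def simcse_loss(z1, z2, temperature=0.05):
    """z1[i] and z2[i] are two (dropout) views of sentence i. Each z1[i] is
    pulled toward z2[i] and pushed away from every other z2[j] in the batch."""
    n = len(z1)
    loss = 0.0
    for i in range(n):
        sims = [cosine(z1[i], z2[j]) / temperature for j in range(n)]
        log_denom = math.log(sum(math.exp(s) for s in sims))
        loss += -(sims[i] - log_denom)  # -log softmax of the positive pair
    return loss / n

# Toy batch: each sentence's two views are nearly identical vectors,
# so the loss is close to zero.
z1 = [[1.0, 0.0], [0.0, 1.0]]
z2 = [[0.9, 0.1], [0.1, 0.9]]
print(simcse_loss(z1, z2))
```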
|
Tsegayesemere/emotion-model12_4
|
Tsegayesemere
| 2025-06-08T09:03:37Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:adapter:FacebookAI/xlm-roberta-base",
"license:mit",
"region:us"
] | null | 2025-06-08T08:52:01Z |
---
library_name: peft
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: emotion-model12_4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# emotion-model12_4
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9116
- Accuracy: 0.6496
- F1: 0.6300
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 1.3825 | 1.0 | 59 | 1.3616 | 0.3077 | 0.2101 |
| 1.3408 | 2.0 | 118 | 1.1733 | 0.4786 | 0.4010 |
| 1.2459 | 3.0 | 177 | 1.0992 | 0.5342 | 0.5082 |
| 1.2116 | 4.0 | 236 | 1.0654 | 0.5684 | 0.5541 |
| 1.164 | 5.0 | 295 | 1.0075 | 0.5556 | 0.5392 |
| 1.1303 | 6.0 | 354 | 0.9959 | 0.5342 | 0.5106 |
| 1.1035 | 7.0 | 413 | 0.9662 | 0.5855 | 0.5737 |
| 1.1055 | 8.0 | 472 | 0.9262 | 0.6368 | 0.6286 |
| 1.0965 | 9.0 | 531 | 0.9142 | 0.6282 | 0.6136 |
| 1.0675 | 10.0 | 590 | 0.9116 | 0.6496 | 0.6300 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
madhueb/dpo-df2
|
madhueb
| 2025-06-08T09:03:09Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"trl",
"dpo",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-08T09:01:59Z |
---
library_name: transformers
tags:
- trl
- dpo
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
**luckeciano/Qwen-2.5-7B-GRPO-NoBaseline_1182** (author: luckeciano, library: transformers, pipeline: text-generation, last modified: 2025-06-08T09:02:59Z, created: 2025-06-08T04:20:14Z, downloads: 0, likes: 0)
Tags: transformers, safetensors, qwen2, text-generation, generated_from_trainer, open-r1, trl, grpo, conversational, dataset:DigitalLearningGmbH/MATH-lighteval, arxiv:2402.03300, base_model:Qwen/Qwen2.5-Math-7B, base_model:finetune:Qwen/Qwen2.5-Math-7B, autotrain_compatible, text-generation-inference, endpoints_compatible, region:us
---
base_model: Qwen/Qwen2.5-Math-7B
datasets: DigitalLearningGmbH/MATH-lighteval
library_name: transformers
model_name: Qwen-2.5-7B-GRPO-NoBaseline_1182
tags:
- generated_from_trainer
- open-r1
- trl
- grpo
licence: license
---
# Model Card for Qwen-2.5-7B-GRPO-NoBaseline_1182
This model is a fine-tuned version of [Qwen/Qwen2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B) on the [DigitalLearningGmbH/MATH-lighteval](https://huggingface.co/datasets/DigitalLearningGmbH/MATH-lighteval) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="luckeciano/Qwen-2.5-7B-GRPO-NoBaseline_1182", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/max-ent-llms/PolicyGradientStability/runs/huxr8kyn)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
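The group-relative trick that gives GRPO its name can be sketched in a few lines. This is an illustrative sketch of the idea, not the TRL implementation: rewards for a group of completions sampled from the same prompt are normalized by the group's mean and standard deviation, so no learned value baseline is needed.

```python
from statistics import mean, stdev

def group_relative_advantages(rewards):
    """Normalize per-completion rewards within one sampled group (GRPO-style)."""
    mu = mean(rewards)
    sigma = stdev(rewards)  # sample std; real implementations may add an epsilon or differ in details
    return [(r - mu) / sigma for r in rewards]

# Rewards for 4 completions sampled from the same prompt
advs = group_relative_advantages([1.0, 0.0, 0.5, 0.5])
print([round(a, 3) for a in advs])
```

Completions better than the group average get positive advantages, worse ones negative, and the policy-gradient update pushes probability mass accordingly.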
### Framework versions
- TRL: 0.16.0.dev0
- Transformers: 4.49.0
- Pytorch: 2.6.0
- Datasets: 3.4.1
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin GallouΓ©dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
**madhueb/dpo-df4** (author: madhueb, library: transformers, pipeline: text-generation, last modified: 2025-06-08T09:02:31Z, created: 2025-06-08T09:01:07Z, downloads: 0, likes: 0)
Tags: transformers, safetensors, qwen3, text-generation, trl, dpo, conversational, arxiv:1910.09700, autotrain_compatible, text-generation-inference, endpoints_compatible, region:us
---
library_name: transformers
tags:
- trl
- dpo
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
**amjad-awad/mistral-7b-instruct-v0.2-bnb-4bit-EngSaf-96K** (author: amjad-awad, library: transformers, pipeline: text-generation, last modified: 2025-06-08T09:01:46Z, created: 2025-05-23T22:22:08Z, downloads: 0, likes: 0)
Tags: transformers, safetensors, text-generation-inference, text-generation, unsloth, mistral, trl, sft, conversational, en, arxiv:2310.06825, arxiv:2407.12818, base_model:unsloth/mistral-7b-instruct-v0.2-bnb-4bit, base_model:finetune:unsloth/mistral-7b-instruct-v0.2-bnb-4bit, endpoints_compatible, region:us
---
language:
- en
metrics:
- accuracy
- bertscore
- f1
- recall
- precision
base_model:
- unsloth/mistral-7b-instruct-v0.2-bnb-4bit
library_name: transformers
tags:
- text-generation-inference
- text-generation
- unsloth
- mistral
- trl
- sft
---
# Mistral 7b instruct
This model is a fine-tuned version of [mistral-7b-instruct-v0.2-bnb-4bit](https://huggingface.co/unsloth/mistral-7b-instruct-v0.2-bnb-4bit) on the EngSaf dataset for Automatic Essay Grading.
It delivers robust performance on Automatic Essay Grading tasks, producing both a numeric score and a written rationale.
It achieves the following results on the evaluation set:
- Loss: 1.1961
- Score metrics: Precision 0.5952, Recall 0.5519, F1 0.5434, Accuracy 0.5521
- Rationale metrics: Precision 0.6438, Recall 0.6315, F1 0.6351
## Model Details
- Base Model: Mistral 7B: https://arxiv.org/abs/2310.06825
- Fine-tuning Dataset: EngSaf: https://arxiv.org/abs/2407.12818.
- Task: Automatic Essay Grading
## Training Data
The model is fine-tuned on the EngSaf dataset, curated for Automatic Essay Grading.
EngSaf consists of student responses annotated with:
- Questions: Typically short-answer or essay-type.
- Correct Answer: answers provided by teachers.
- Student Answers: Actual responses written by students.
- Output Label: The actual student score.
- Feedback: Explanations justifying the given scores.
## Example Usage
Below is an example of how to use the model with the Hugging Face Transformers library:
```python
import torch
from unsloth import FastLanguageModel
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="amjad-awad/mistral-7b-instruct-v0.2-bnb-4bit-EngSaf-96K",
    max_seq_length=2048,
    load_in_4bit=True,
)
model = FastLanguageModel.get_peft_model(
model,
r=16,
target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
lora_alpha=16,
lora_dropout=0,
bias="none",
use_gradient_checkpointing=True,
random_state=3407,
)
user_content = (
"Provide both a score and a rationale by evaluating the student's answer strictly within the mark scheme range, "
"grading based on how well it meets the question's requirements by comparing the student answer to the reference answer.\n"
"Question: What is photosynthesis?\n"
"Reference Answer: Photosynthesis is the process by which green plants and some other organisms use sunlight to synthesize nutrients from carbon dioxide and water. It generally involves the green pigment chlorophyll and generates oxygen as a by-product.\n"
"Student Answer: Photosynthesis is how plants make their food using sunlight and carbon dioxide. It also gives off oxygen.\n"
"Mark Scheme: {'1':'Mentions use of sunlight', '2':'Mentions carbon dioxide and water', '3':'Mentions production of oxygen', '4':'Explains synthesis of nutrients or food', '5':'Mentions chlorophyll or green pigment'}"
)
user = [
{"role":"system", "content": "You are a grading assistant. Evaluate student answers based on the mark scheme. Respond only in JSON format with keys 'score' (int) and 'rationale' (string)."},
{"role":"user", "content": user_content},
]
inputs = tokenizer.apply_chat_template(user, tokenize=True, add_generation_prompt=True, return_tensors="pt", return_dict=True).to(model.device)
generated_ids = model.generate(**inputs, max_new_tokens=128, do_sample=False)[0]  # greedy decoding; temperature/top_k only take effect when do_sample=True
new_generated_ids = generated_ids[inputs["input_ids"].shape[1]:]
generated_text = tokenizer.decode(new_generated_ids, skip_special_tokens=True)
print(generated_text)
```
Results:
```
{"score": 5, "rationale": "Your answer is correct. You have accurately described the process of photosynthesis, mentioning the use of sunlight, carbon dioxide, and water, and the production of food and oxygen as by-products. Keep up the good work!"}
```
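Since the system prompt requests strict JSON, the generation can be parsed programmatically. A minimal sketch (the regex fallback is an assumption, since generations can occasionally wrap the JSON in extra text):

```python
import json
import re

def parse_grading_output(text):
    """Extract the {'score', 'rationale'} JSON object from a model generation."""
    match = re.search(r"\{.*\}", text, re.DOTALL)  # tolerate stray text around the JSON
    if match is None:
        raise ValueError("no JSON object found in output")
    result = json.loads(match.group(0))
    return int(result["score"]), result["rationale"]

score, rationale = parse_grading_output(
    '{"score": 5, "rationale": "Your answer is correct."}'
)
print(score)  # 5
```

A `try/except` around `json.loads` is advisable in batch grading pipelines, since a malformed generation would otherwise abort the whole run.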
## Training hyperparameters
The following hyperparameters were used during training:
- per_device_train_batch_size: 1
- per_device_eval_batch_size: 1
- gradient_accumulation_steps: 8
- eval_strategy: "steps"
- save_strategy: "steps"
- eval_steps: 10
- logging_dir: "./logs"
- logging_steps: 10
- save_total_limit: 1
- learning_rate: 2e-5
- warmup_steps: 100
- weight_decay: 0.01
- num_train_epochs: 3
- load_best_model_at_end: True
- lr_scheduler_type: "cosine"
- metric_for_best_model: "eval_loss"
- greater_is_better: False
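The cosine scheduler with 100 warmup steps ramps the learning rate linearly up to 2e-5 and then decays it along a half cosine. A standalone sketch of that schedule (the `total_steps` value is illustrative, not taken from this run); note also that the effective batch size is per_device_train_batch_size x gradient_accumulation_steps = 8.

```python
import math

def lr_at_step(step, base_lr=2e-5, warmup_steps=100, total_steps=510):
    """Linear warmup followed by cosine decay to zero."""
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    progress = (step - warmup_steps) / (total_steps - warmup_steps)
    return base_lr * 0.5 * (1.0 + math.cos(math.pi * progress))

print(lr_at_step(0))    # 0.0
print(lr_at_step(100))  # peak: 2e-05
```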
## Training results
| Step | Training Loss | Validation Loss |
|------|---------------|-----------------|
| 10 | 3.247800 | 3.295356 |
| 20 | 3.224100 | 3.216746 |
| 30 | 3.137600 | 3.078115 |
| 40 | 2.919600 | 2.877193 |
| 50 | 2.767000 | 2.640667 |
| 60 | 2.488400 | 2.380044 |
| 70 | 2.245300 | 2.097524 |
| 80 | 1.993600 | 1.833924 |
| 90 | 1.663000 | 1.533552 |
| 100 | 1.460800 | 1.377964 |
| 110 | 1.343200 | 1.310175 |
| 120 | 1.307700 | 1.264394 |
| 130 | 1.252400 | 1.237222 |
| 140 | 1.221500 | 1.208290 |
| 150 | 1.169100 | 1.203079 |
| 160 | 1.120900 | 1.197736 |
| 170 | 1.196100 | 1.194299 |
## Framework versions
- Transformers 4.51.3
- Pytorch 2.7.0
- Datasets 3.6.0
- Unsloth 2025.5.6
**tomwang57/distilbert-base-uncased-finetuned-imdb** (author: tomwang57, library: transformers, pipeline: fill-mask, last modified: 2025-06-08T08:58:15Z, created: 2025-06-08T08:53:56Z, downloads: 0, likes: 0)
Tags: transformers, tensorboard, safetensors, distilbert, fill-mask, generated_from_trainer, base_model:distilbert/distilbert-base-uncased, base_model:finetune:distilbert/distilbert-base-uncased, license:apache-2.0, autotrain_compatible, endpoints_compatible, region:us
---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4894
- Model Preparation Time: 0.0016
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Model Preparation Time |
|:-------------:|:-----:|:----:|:---------------:|:----------------------:|
| 2.6838 | 1.0 | 157 | 2.5094 | 0.0016 |
| 2.5878 | 2.0 | 314 | 2.4502 | 0.0016 |
| 2.5279 | 3.0 | 471 | 2.4819 | 0.0016 |
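Since the validation loss is a natural-log cross-entropy, it maps to perplexity via `exp(loss)`, which is often easier to interpret for masked language models:

```python
import math

def perplexity(cross_entropy_loss):
    """Convert a (natural-log) cross-entropy loss to perplexity."""
    return math.exp(cross_entropy_loss)

# Evaluation loss reported above
print(round(perplexity(2.4894), 2))
```

By this measure the final model sits at a perplexity of roughly 12 on the held-out split.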
### Framework versions
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
**madhueb/dpo-df3** (author: madhueb, library: transformers, pipeline: text-generation, last modified: 2025-06-08T08:56:52Z, created: 2025-06-08T08:55:37Z, downloads: 0, likes: 0)
Tags: transformers, safetensors, qwen3, text-generation, trl, dpo, conversational, arxiv:1910.09700, autotrain_compatible, text-generation-inference, endpoints_compatible, region:us
---
library_name: transformers
tags:
- trl
- dpo
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
**amjad-awad/mistral-7b-instruct-v0.2-bnb-4bit-EngSaf-96K-lr-5e5** (author: amjad-awad, library: transformers, pipeline: text-generation, last modified: 2025-06-08T08:56:46Z, created: 2025-05-24T09:02:51Z, downloads: 0, likes: 0)
Tags: transformers, safetensors, text-generation-inference, text-generation, unsloth, mistral, trl, sft, conversational, en, arxiv:2310.06825, arxiv:2407.12818, base_model:unsloth/mistral-7b-instruct-v0.2-bnb-4bit, base_model:finetune:unsloth/mistral-7b-instruct-v0.2-bnb-4bit, endpoints_compatible, region:us
---
language:
- en
metrics:
- accuracy
- bertscore
- f1
- recall
- precision
base_model:
- unsloth/mistral-7b-instruct-v0.2-bnb-4bit
library_name: transformers
tags:
- text-generation-inference
- text-generation
- unsloth
- mistral
- trl
- sft
---
# Mistral 7b instruct
This model is a fine-tuned version of [mistral-7b-instruct-v0.2-bnb-4bit](https://huggingface.co/unsloth/mistral-7b-instruct-v0.2-bnb-4bit) on the EngSaf dataset for Automatic Essay Grading.
It delivers robust performance on Automatic Essay Grading tasks, producing both a numeric score and a written rationale.
It achieves the following results on the evaluation set:
- Loss: 0.9671
- Score metrics: Precision 0.6339, Recall 0.5675, F1 0.5755, Accuracy 0.59
- Rationale metrics: Precision 0.6388, Recall 0.6361, F1 0.6346
## Model Details
- Base Model: Mistral 7B: https://arxiv.org/abs/2310.06825
- Fine-tuning Dataset: EngSaf: https://arxiv.org/abs/2407.12818.
- Task: Automatic Essay Grading
## Training Data
The model is fine-tuned on the EngSaf dataset, curated for Automatic Essay Grading.
EngSaf consists of student responses annotated with:
- Questions: Typically short-answer or essay-type.
- Correct Answer: answers provided by teachers.
- Student Answers: Actual responses written by students.
- Output Label: The actual student score.
- Feedback: Explanations justifying the given scores.
## Example Usage
Below is an example of how to use the model with the Hugging Face Transformers library:
```python
import torch
from unsloth import FastLanguageModel
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="amjad-awad/mistral-7b-instruct-v0.2-bnb-4bit-EngSaf-96K-lr-5e5",
    max_seq_length=2048,
    load_in_4bit=True,
)
model = FastLanguageModel.get_peft_model(
model,
r=16,
target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
lora_alpha=16,
lora_dropout=0,
bias="none",
use_gradient_checkpointing=True,
random_state=3407,
)
user_content = (
"Provide both a score and a rationale by evaluating the student's answer strictly within the mark scheme range, "
"grading based on how well it meets the question's requirements by comparing the student answer to the reference answer.\n"
"Question: What is photosynthesis?\n"
"Reference Answer: Photosynthesis is the process by which green plants and some other organisms use sunlight to synthesize nutrients from carbon dioxide and water. It generally involves the green pigment chlorophyll and generates oxygen as a by-product.\n"
"Student Answer: Photosynthesis is how plants make their food using sunlight and carbon dioxide. It also gives off oxygen.\n"
"Mark Scheme: {'1':'Mentions use of sunlight', '2':'Mentions carbon dioxide and water', '3':'Mentions production of oxygen', '4':'Explains synthesis of nutrients or food', '5':'Mentions chlorophyll or green pigment'}"
)
user = [
{"role":"system", "content": "You are a grading assistant. Evaluate student answers based on the mark scheme. Respond only in JSON format with keys 'score' (int) and 'rationale' (string)."},
{"role":"user", "content": user_content},
]
inputs = tokenizer.apply_chat_template(user, tokenize=True, add_generation_prompt=True, return_tensors="pt", return_dict=True).to(model.device)
generated_ids = model.generate(**inputs, max_new_tokens=128, do_sample=False)[0]  # greedy decoding; temperature/top_k only take effect when do_sample=True
new_generated_ids = generated_ids[inputs["input_ids"].shape[1]:]
generated_text = tokenizer.decode(new_generated_ids, skip_special_tokens=True)
print(generated_text)
```
Results:
```
{"score": 5, "rationale": "Your answer is correct. You have accurately described the process of photosynthesis, mentioning the use of sunlight, carbon dioxide, and water, and the production of food and oxygen as by-products. Keep up the good work!"}
```
## Training hyperparameters
The following hyperparameters were used during training:
- per_device_train_batch_size: 1
- per_device_eval_batch_size: 1
- gradient_accumulation_steps: 8
- eval_strategy: "steps"
- save_strategy: "steps"
- eval_steps: 10
- logging_dir: "./logs"
- logging_steps: 10
- save_total_limit: 1
- learning_rate: 5e-5
- warmup_steps: 100
- weight_decay: 0.01
- num_train_epochs: 3
- load_best_model_at_end: True
- lr_scheduler_type: "cosine"
- metric_for_best_model: "eval_loss"
- greater_is_better: False
## Training results
| Step | Training Loss | Validation Loss |
|------|---------------|-----------------|
| 10 | 3.240600 | 3.259383 |
| 20 | 3.139300 | 3.061324 |
| 30 | 2.907700 | 2.737259 |
| 40 | 2.519300 | 2.360706 |
| 50 | 2.206400 | 1.984056 |
| 60 | 1.804100 | 1.581723 |
| 70 | 1.449400 | 1.356454 |
| 80 | 1.334200 | 1.264993 |
| 90 | 1.191900 | 1.214026 |
| 100 | 1.189000 | 1.194589 |
| 110 | 1.103200 | 1.220159 |
| 120 | 1.071200 | 1.208011 |
| 130 | 0.967100 | 1.248819 |
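With `load_best_model_at_end` set to `True` and `eval_loss` as the best-model metric, training restores the checkpoint with the lowest validation loss rather than the final one. Applied to the table above, that selection is a one-liner:

```python
def best_checkpoint(results):
    """Pick the (step, eval_loss) pair with the lowest validation loss."""
    return min(results, key=lambda pair: pair[1])

# (step, validation loss) pairs from the table above
history = [
    (10, 3.259383), (20, 3.061324), (30, 2.737259), (40, 2.360706),
    (50, 1.984056), (60, 1.581723), (70, 1.356454), (80, 1.264993),
    (90, 1.214026), (100, 1.194589), (110, 1.220159), (120, 1.208011),
    (130, 1.248819),
]
step, loss = best_checkpoint(history)
print(step)  # 100: validation loss rises after this point, so later checkpoints overfit
```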
## Framework versions
- Transformers 4.51.3
- Pytorch 2.7.0
- Datasets 3.6.0
- Unsloth 2025.5.6
**Yahkerobertkertasnya/DeepSeek-R1-Distill-Qwen-1.5B-Medical-2** (author: Yahkerobertkertasnya, library: transformers, pipeline: none, last modified: 2025-06-08T08:54:15Z, created: 2025-06-08T08:54:09Z, downloads: 0, likes: 0)
Tags: transformers, safetensors, text-generation-inference, unsloth, qwen2, trl, en, license:apache-2.0, endpoints_compatible, region:us
---
base_model: unsloth/deepseek-r1-distill-qwen-1.5b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Yahkerobertkertasnya
- **License:** apache-2.0
- **Finetuned from model:** unsloth/deepseek-r1-distill-qwen-1.5b-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
**Yahkerobertkertasnya/DeepSeek-R1-Distill-Qwen-1.5B-Medical** (author: Yahkerobertkertasnya, library: transformers, pipeline: none, last modified: 2025-06-08T08:54:01Z, created: 2025-06-08T03:10:38Z, downloads: 0, likes: 0)
Tags: transformers, safetensors, text-generation-inference, unsloth, qwen2, trl, en, license:apache-2.0, endpoints_compatible, region:us
---
base_model: unsloth/deepseek-r1-distill-qwen-1.5b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Yahkerobertkertasnya
- **License:** apache-2.0
- **Finetuned from model:** unsloth/deepseek-r1-distill-qwen-1.5b-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
**amjad-awad/mistral-7b-instruct-v0.2-bnb-4bit-EngSaf-96K-warmup_steps-75** (author: amjad-awad, library: transformers, pipeline: text-generation, last modified: 2025-06-08T08:53:29Z, created: 2025-05-24T10:37:04Z, downloads: 0, likes: 0)
Tags: transformers, safetensors, text-generation-inference, text-generation, unsloth, mistral, trl, sft, conversational, en, arxiv:2310.06825, arxiv:2407.12818, base_model:unsloth/mistral-7b-instruct-v0.2-bnb-4bit, base_model:finetune:unsloth/mistral-7b-instruct-v0.2-bnb-4bit, endpoints_compatible, region:us
---
language:
- en
metrics:
- accuracy
- bertscore
- f1
- recall
- precision
base_model:
- unsloth/mistral-7b-instruct-v0.2-bnb-4bit
library_name: transformers
tags:
- text-generation-inference
- text-generation
- unsloth
- mistral
- trl
- sft
---
# Mistral 7b instruct
This model is a fine-tuned version of [mistral-7b-instruct-v0.2-bnb-4bit](https://huggingface.co/unsloth/mistral-7b-instruct-v0.2-bnb-4bit) on the EngSaf dataset for Automatic Essay Grading.
It delivers robust performance on Automatic Essay Grading tasks, producing both a numeric score and a written rationale.
It achieves the following results on the evaluation set:
- Loss: 1.199
- Score metrics: Precision 0.5654, Recall 0.5331, F1 0.5229, Accuracy 0.53
- Rationale metrics: Precision 0.6392, Recall 0.6339, F1 0.6338
## Model Details
- Base Model: Mistral 7B: https://arxiv.org/abs/2310.06825
- Fine-tuning Dataset: EngSaf: https://arxiv.org/abs/2407.12818.
- Task: Automatic Essay Grading
## Training Data
The model is fine-tuned on the EngSaf dataset, curated for Automatic Essay Grading.
EngSaf consists of student responses annotated with:
- Questions: Typically short-answer or essay-type.
- Correct Answer: answers provided by teachers.
- Student Answers: Actual responses written by students.
- Output Label: The actual student score.
- Feedback: Explanations justifying the given scores.
## Example Usage
Below is an example of how to use the model with the Hugging Face Transformers library:
```python
import torch
from unsloth import FastLanguageModel
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="amjad-awad/mistral-7b-instruct-v0.2-bnb-4bit-EngSaf-96K-warmup_steps-75",
    max_seq_length=2048,
    load_in_4bit=True,
)
model = FastLanguageModel.get_peft_model(
model,
r=16,
target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
lora_alpha=16,
lora_dropout=0,
bias="none",
use_gradient_checkpointing=True,
random_state=3407,
)
user_content = (
"Provide both a score and a rationale by evaluating the student's answer strictly within the mark scheme range, "
"grading based on how well it meets the question's requirements by comparing the student answer to the reference answer.\n"
"Question: What is photosynthesis?\n"
"Reference Answer: Photosynthesis is the process by which green plants and some other organisms use sunlight to synthesize nutrients from carbon dioxide and water. It generally involves the green pigment chlorophyll and generates oxygen as a by-product.\n"
"Student Answer: Photosynthesis is how plants make their food using sunlight and carbon dioxide. It also gives off oxygen.\n"
"Mark Scheme: {'1':'Mentions use of sunlight', '2':'Mentions carbon dioxide and water', '3':'Mentions production of oxygen', '4':'Explains synthesis of nutrients or food', '5':'Mentions chlorophyll or green pigment'}"
)
user = [
{"role":"system", "content": "You are a grading assistant. Evaluate student answers based on the mark scheme. Respond only in JSON format with keys 'score' (int) and 'rationale' (string)."},
{"role":"user", "content": user_content},
]
inputs = tokenizer.apply_chat_template(user, tokenize=True, add_generation_prompt=True, return_tensors="pt", return_dict=True).to(model.device)
generated_ids = model.generate(**inputs, max_new_tokens=128, temperature=0.2, top_k=5, do_sample=False)[0]
new_generated_ids = generated_ids[inputs["input_ids"].shape[1]:]
generated_text = tokenizer.decode(new_generated_ids, skip_special_tokens=True)
print(generated_text)
```
Results:
```
{"score": 5, "rationale": "Your answer is correct. You have accurately described the process of photosynthesis, mentioning the use of sunlight, carbon dioxide, and water, and the production of food and oxygen as by-products. Keep up the good work!"}
```
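Since the model is instructed to respond in JSON, the generated text can be parsed back into structured fields. The helper below is an illustrative sketch (not part of the original training or inference code) that tolerates extra text around the JSON object:

```python
import json
import re

def parse_grade(generated_text):
    """Extract the {'score': ..., 'rationale': ...} object from model output.

    Returns a dict with an int score and a string rationale, or None if no
    valid JSON object with both keys is found.
    """
    match = re.search(r"\{.*\}", generated_text, re.DOTALL)
    if match is None:
        return None
    try:
        parsed = json.loads(match.group(0))
    except json.JSONDecodeError:
        return None
    if "score" not in parsed or "rationale" not in parsed:
        return None
    return {"score": int(parsed["score"]), "rationale": str(parsed["rationale"])}
```

This is useful when the model wraps its JSON answer in extra tokens, which can happen with sampled decoding.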
## Training hyperparameters
The following hyperparameters were used during training:
- per_device_train_batch_size:1
- per_device_eval_batch_size:1
- gradient_accumulation_steps:8
- eval_strategy:"steps"
- save_strategy:"steps"
- eval_steps:10
- logging_dir:"./logs"
- logging_steps:10
- save_total_limit:1
- learning_rate:2e-5
- warmup_steps:75
- weight_decay:0.01
- num_train_epochs:3
- load_best_model_at_end:True
- lr_scheduler_type:"cosine"
- metric_for_best_model:"eval_loss"
- greater_is_better:False
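The cosine scheduler with warmup listed above behaves as follows: the learning rate ramps linearly from 0 to the peak (2e-5) over the first 75 warmup steps, then decays back toward 0 along a cosine curve. A minimal sketch of that shape (a hypothetical helper, not the actual training code):

```python
import math

def cosine_lr_with_warmup(step, total_steps, peak_lr=2e-5, warmup_steps=75):
    """Learning rate at a given step: linear warmup, then cosine decay to 0."""
    if step < warmup_steps:
        return peak_lr * step / warmup_steps
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return peak_lr * 0.5 * (1.0 + math.cos(math.pi * progress))
```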
## Training results
| Step | Training Loss | Validation Loss |
|------|---------------|-----------------|
| 10 | 3.246400 | 3.287983 |
| 20 | 3.205400 | 3.181967 |
| 30 | 3.084800 | 2.996949 |
| 40 | 2.820100 | 2.741735 |
| 50 | 2.617700 | 2.459760 |
| 60 | 2.293000 | 2.150476 |
| 70 | 2.009000 | 1.858367 |
| 80 | 1.725100 | 1.532251 |
| 90 | 1.420000 | 1.372761 |
| 100 | 1.357800 | 1.307858 |
| 110 | 1.287700 | 1.275092 |
| 120 | 1.272200 | 1.243866 |
| 130 | 1.227500 | 1.224301 |
| 140 | 1.208200 | 1.204156 |
| 150 | 1.164900 | 1.201462 |
| 160 | 1.121000 | 1.197694 |
| 170 | 1.199000 | 1.197176 |
## Framework versions
- Transformers 4.51.3
- Pytorch 2.7.0
- Datasets 3.6.0
- Unsloth 2025.5.6
|
yainage90/fashion-image-feature-extractor
|
yainage90
| 2025-06-08T08:52:01Z | 2,333 | 3 | null |
[
"safetensors",
"swin",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"dataset:yainage90/onthelook-fashion-anchor-positive-images",
"dataset:yainage90/kream-fashion-anchor-positive-images",
"license:mit",
"region:us"
] | null | 2024-12-02T03:27:44Z |
---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
license: mit
datasets:
- yainage90/onthelook-fashion-anchor-positive-images
- yainage90/kream-fashion-anchor-positive-images
---
This is a fashion image feature extractor model.
# 1. Model Architecture
I used [microsoft/swin-base-patch4-window7-224](https://huggingface.co/microsoft/swin-base-patch4-window7-224) as the base image encoder, and added a 128-dimensional fully connected layer on top to reduce the embedding size.
The dataset used anchor (product areas detected from posts) - positive (product thumbnail) image pairs. Within each batch, all samples except one's own positive were used as negative samples, training to minimize the distance between anchor-positive pairs while maximizing the distance between anchor-negative pairs. This method is known as contrastive learning, which is the training method used by OpenAI's CLIP model.
Initially, anchor - positive - negative pairs were explicitly constructed in a 1:1:1 ratio using triplet loss, but training with in-batch negative sampling and contrastive loss showed much better performance as it allowed learning from more negative samples.
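The in-batch contrastive objective described above can be sketched as follows (an illustrative NumPy version, not the actual training code):

```python
import numpy as np

def in_batch_contrastive_loss(anchor, positive, temperature=0.07):
    """CLIP-style InfoNCE loss over a batch of L2-normalized embeddings.

    anchor, positive: arrays of shape (batch, dim). For each row i, the pair
    (anchor[i], positive[i]) is a match; every other positive in the batch
    serves as a negative sample.
    """
    logits = anchor @ positive.T / temperature           # (batch, batch) similarities
    logits = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    idx = np.arange(len(anchor))
    return -log_probs[idx, idx].mean()                   # cross-entropy on the diagonal
```

Minimizing this loss pulls each anchor toward its own positive (the diagonal of the similarity matrix) while pushing it away from every other sample in the batch, which is why larger batches supply more negatives for free.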
<img src="image_encoder.png" width="500" alt="image_encoder">
<img src="contrastive_learning.png" width="500" alt="contrastive_learning">
# 2. Training dataset
User posting images from onthelook and kream were crawled and preprocessed. First, raw data of image-product thumbnail combinations from posts were collected. Then, object detection was performed on posting images, and category classification was performed on product thumbnails to pair images of the same category together. For thumbnail category classification, a trained category classifier was used. Finally, about 290,000 anchor-positive image pairs were created for 6 categories: tops, bottoms, outer, shoes, bags, and hats.
You can find object-detection model -> [https://huggingface.co/yainage90/fashion-object-detection](https://huggingface.co/yainage90/fashion-object-detection)
You can find details of model in this github repo -> [fashion-visual-search](https://github.com/yainage90/fashion-visual-search)
```python
from PIL import Image
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision.transforms as v2
from transformers import AutoImageProcessor, SwinModel, SwinConfig
from huggingface_hub import PyTorchModelHubMixin
device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu')
ckpt = "yainage90/fashion-image-feature-extractor"
encoder_config = SwinConfig.from_pretrained(ckpt)
encoder_image_processor = AutoImageProcessor.from_pretrained(ckpt)
class ImageEncoder(nn.Module, PyTorchModelHubMixin):
def __init__(self):
super(ImageEncoder, self).__init__()
self.swin = SwinModel(config=encoder_config)
self.embedding_layer = nn.Linear(encoder_config.hidden_size, 128)
def forward(self, image_tensor):
features = self.swin(image_tensor).pooler_output
embeddings = self.embedding_layer(features)
embeddings = F.normalize(embeddings, p=2, dim=1)
return embeddings
encoder = ImageEncoder().from_pretrained('yainage90/fashion-image-feature-extractor').to(device)
transform = v2.Compose([
v2.Resize((encoder_config.image_size, encoder_config.image_size)),
v2.ToTensor(),
v2.Normalize(mean=encoder_image_processor.image_mean, std=encoder_image_processor.image_std),
])
image = Image.open('<path/to/image>').convert('RGB')
image = transform(image)
with torch.no_grad():
embedding = encoder(image.unsqueeze(0).to(device)).cpu().numpy()
```
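Once embeddings are extracted, visual search reduces to nearest-neighbor lookup over a gallery of product embeddings. A minimal cosine-similarity retrieval sketch (illustrative only; a vector index such as FAISS can replace the brute-force search):

```python
import numpy as np

def top_k_similar(query, gallery, k=5):
    """Return indices of the k gallery embeddings most similar to the query.

    query: (dim,) L2-normalized embedding; gallery: (n, dim) L2-normalized
    embeddings. With normalized vectors, the dot product equals cosine
    similarity, matching the F.normalize output of the encoder above.
    """
    scores = gallery @ query
    return np.argsort(-scores)[:k]
```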
<img src="detection_image1.png" width="500" alt="detection_image1">
<img src="result_image1.png" width="700" alt="result_image1">
<img src="detection_image2.png" width="500" alt="detection_image2">
<img src="result_image2.png" width="700" alt="result_image2">
|
Tsegayesemere/emotion-model12_3
|
Tsegayesemere
| 2025-06-08T08:51:59Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:adapter:FacebookAI/xlm-roberta-base",
"license:mit",
"region:us"
] | null | 2025-06-08T08:40:22Z |
---
library_name: peft
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: emotion-model12_3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# emotion-model12_3
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0322
- Accuracy: 0.5385
- F1: 0.5161
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 1.3809 | 1.0 | 59 | 1.3544 | 0.3120 | 0.2551 |
| 1.3375 | 2.0 | 118 | 1.2973 | 0.4017 | 0.3188 |
| 1.2571 | 3.0 | 177 | 1.2098 | 0.4444 | 0.3649 |
| 1.1913 | 4.0 | 236 | 1.2012 | 0.4872 | 0.4876 |
| 1.1515 | 5.0 | 295 | 1.1937 | 0.4530 | 0.4105 |
| 1.1014 | 6.0 | 354 | 1.0828 | 0.5214 | 0.4977 |
| 1.1077 | 7.0 | 413 | 1.0547 | 0.5043 | 0.4763 |
| 1.0864 | 8.0 | 472 | 1.0468 | 0.5342 | 0.5111 |
| 1.0667 | 9.0 | 531 | 1.0322 | 0.5385 | 0.5161 |
| 1.0396 | 10.0 | 590 | 1.0328 | 0.5171 | 0.4902 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
aledm03/new_full_MCQA_no_code_lr6e-6_600
|
aledm03
| 2025-06-08T08:51:14Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-08T08:50:06Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mradermacher/RpR-v4-Fast-30B-A3B-i1-GGUF
|
mradermacher
| 2025-06-08T08:48:13Z | 230 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:ArliAI/Qwen3-30B-A3B-ArliAI-RpR-v4-Fast",
"base_model:quantized:ArliAI/Qwen3-30B-A3B-ArliAI-RpR-v4-Fast",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-06-05T06:55:58Z |
---
base_model: ArliAI/Qwen3-30B-A3B-ArliAI-RpR-v4-Fast
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/ArliAI/Qwen3-30B-A3B-ArliAI-RpR-v4-Fast
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/RpR-v4-Fast-30B-A3B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/RpR-v4-Fast-30B-A3B-i1-GGUF/resolve/main/RpR-v4-Fast-30B-A3B.i1-Q2_K.gguf) | i1-Q2_K | 11.4 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/RpR-v4-Fast-30B-A3B-i1-GGUF/resolve/main/RpR-v4-Fast-30B-A3B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 11.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/RpR-v4-Fast-30B-A3B-i1-GGUF/resolve/main/RpR-v4-Fast-30B-A3B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 12.7 | |
| [GGUF](https://huggingface.co/mradermacher/RpR-v4-Fast-30B-A3B-i1-GGUF/resolve/main/RpR-v4-Fast-30B-A3B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 13.4 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/RpR-v4-Fast-30B-A3B-i1-GGUF/resolve/main/RpR-v4-Fast-30B-A3B.i1-IQ3_S.gguf) | i1-IQ3_S | 13.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/RpR-v4-Fast-30B-A3B-i1-GGUF/resolve/main/RpR-v4-Fast-30B-A3B.i1-IQ3_M.gguf) | i1-IQ3_M | 13.6 | |
| [GGUF](https://huggingface.co/mradermacher/RpR-v4-Fast-30B-A3B-i1-GGUF/resolve/main/RpR-v4-Fast-30B-A3B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 14.8 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/RpR-v4-Fast-30B-A3B-i1-GGUF/resolve/main/RpR-v4-Fast-30B-A3B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 16.0 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/RpR-v4-Fast-30B-A3B-i1-GGUF/resolve/main/RpR-v4-Fast-30B-A3B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 16.5 | |
| [GGUF](https://huggingface.co/mradermacher/RpR-v4-Fast-30B-A3B-i1-GGUF/resolve/main/RpR-v4-Fast-30B-A3B.i1-Q4_0.gguf) | i1-Q4_0 | 17.5 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/RpR-v4-Fast-30B-A3B-i1-GGUF/resolve/main/RpR-v4-Fast-30B-A3B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 17.6 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/RpR-v4-Fast-30B-A3B-i1-GGUF/resolve/main/RpR-v4-Fast-30B-A3B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 18.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/RpR-v4-Fast-30B-A3B-i1-GGUF/resolve/main/RpR-v4-Fast-30B-A3B.i1-Q4_1.gguf) | i1-Q4_1 | 19.3 | |
| [GGUF](https://huggingface.co/mradermacher/RpR-v4-Fast-30B-A3B-i1-GGUF/resolve/main/RpR-v4-Fast-30B-A3B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 21.2 | |
| [GGUF](https://huggingface.co/mradermacher/RpR-v4-Fast-30B-A3B-i1-GGUF/resolve/main/RpR-v4-Fast-30B-A3B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 21.8 | |
| [GGUF](https://huggingface.co/mradermacher/RpR-v4-Fast-30B-A3B-i1-GGUF/resolve/main/RpR-v4-Fast-30B-A3B.i1-Q6_K.gguf) | i1-Q6_K | 25.2 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
phospho-app/omourier-ACT_BBOX-Lego_rouge2-ff0wz
|
phospho-app
| 2025-06-08T08:47:07Z | 0 | 0 | null |
[
"phosphobot",
"act",
"region:us"
] | null | 2025-06-08T08:27:33Z |
---
tags:
- phosphobot
- act
task_categories:
- robotics
---
# act Model - phospho Training Pipeline
## Error Traceback
We faced an issue while training your model.
```
Training process failed with exit code 1:
'timestamps': [np.float32(4.5666666), np.float32(0.0)]},
{'diff': np.float32(-4.633333),
'episode_index': 27,
'timestamps': [np.float32(4.633333), np.float32(0.0)]},
{'diff': np.float32(-4.3),
'episode_index': 28,
'timestamps': [np.float32(4.3), np.float32(0.0)]},
{'diff': np.float32(-4.366667),
'episode_index': 29,
'timestamps': [np.float32(4.366667), np.float32(0.0)]}]
```
## Training parameters:
- **Dataset**: [phospho-app/Lego_rouge2_bboxes](https://huggingface.co/datasets/phospho-app/Lego_rouge2_bboxes)
- **Wandb run URL**: None
- **Epochs**: None
- **Batch size**: 100
- **Training steps**: 10000
π **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme)
π€ **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
|
HSE-Chukchi-NLP/mbart50-rus-ckt
|
HSE-Chukchi-NLP
| 2025-06-08T08:46:41Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mbart",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2025-06-08T08:42:25Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
amjad-awad/mistral-7b-instruct-v0.2-bnb-4bit-books-21K-lr-2e-5
|
amjad-awad
| 2025-06-08T08:46:25Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"text-generation",
"conversational",
"en",
"dataset:IsmaelMousa/books",
"arxiv:2310.06825",
"base_model:unsloth/mistral-7b-instruct-v0.2-bnb-4bit",
"base_model:finetune:unsloth/mistral-7b-instruct-v0.2-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-05-24T15:07:57Z |
---
base_model: unsloth/mistral-7b-instruct-v0.2-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
- text-generation
license: apache-2.0
language:
- en
datasets:
- IsmaelMousa/books
metrics:
- accuracy
- bertscore
- f1
- precision
- recall
library_name: transformers
---
# Mistral 7B Instruct
This model is a fine-tuned version of [mistral-7b-instruct-v0.2-bnb-4bit](https://huggingface.co/unsloth/mistral-7b-instruct-v0.2-bnb-4bit) on the books dataset for Automatic Essay Grading.
It is intended to deliver robust performance on Automatic Essay Grading tasks, producing both a score and a rationale.
It achieves the following results on the evaluation set:
- Loss: 1.6928
- Score Precision: 0.3861
- Score Recall: 0.3159
- Score F1: 0.3139
- Score Accuracy: 0.32
-------------------------------
- Rationale Precision: 0.484
- Rationale Recall: 0.5629
- Rationale F1: 0.5197
## Model Details
- Base Model: Mistral 7B: https://arxiv.org/abs/2310.06825
- Fine-tuning Dataset: books: IsmaelMousa/books
- Task: Automatic Essay Grading
## Training Data
The model is fine-tuned on the books dataset, curated for Automatic Essay Grading.
The dataset consists of student responses annotated with:
- Questions: Typically short-answer or essay-type.
- Correct Answer: Reference answers provided by teachers.
- Student Answers: Actual responses written by students.
- Output Label: The actual student score.
- Feedback: Explanations justifying the given scores.
## Example Usage
Below is an example of how to use the model with the Hugging Face Transformers library:
```python
import torch
from unsloth import FastLanguageModel
from transformers import AutoModelForCausalLM, AutoTokenizer
model, tokenizer = FastLanguageModel.from_pretrained(model_name="amjad-awad/mistral-7b-instruct-v0.2-bnb-4bit-books-21K-lr-2e-5",max_seq_length=2048,load_in_4bit=True)
model = FastLanguageModel.get_peft_model(
model,
r=16,
target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
lora_alpha=16,
lora_dropout=0,
bias="none",
use_gradient_checkpointing=True,
random_state=3407,
)
user_content = (
"Provide both a score and a rationale by evaluating the student's answer strictly within the mark scheme range, "
"grading based on how well it meets the question's requirements by comparing the student answer to the reference answer.\n"
"Question: What is photosynthesis?\n"
"Reference Answer: Photosynthesis is the process by which green plants and some other organisms use sunlight to synthesize nutrients from carbon dioxide and water. It generally involves the green pigment chlorophyll and generates oxygen as a by-product.\n"
"Student Answer: Photosynthesis is how plants make their food using sunlight and carbon dioxide. It also gives off oxygen.\n"
"Mark Scheme: {'1':'Mentions use of sunlight', '2':'Mentions carbon dioxide and water', '3':'Mentions production of oxygen', '4':'Explains synthesis of nutrients or food', '5':'Mentions chlorophyll or green pigment'}"
)
user = [
{"role":"system", "content": "You are a grading assistant. Evaluate student answers based on the mark scheme. Respond only in JSON format with keys 'score' (int) and 'rationale' (string)."},
{"role":"user", "content": user_content},
]
inputs = tokenizer.apply_chat_template(user, tokenize=True, add_generation_prompt=True, return_tensors="pt", return_dict=True).to(model.device)
generated_ids = model.generate(**inputs, max_new_tokens=128, temperature=0.2, top_k=5, do_sample=False)[0]
new_generated_ids = generated_ids[inputs["input_ids"].shape[1]:]
generated_text = tokenizer.decode(new_generated_ids, skip_special_tokens=True)
print(generated_text)
```
Results:
```
{"score": 5, "rationale": "Your answer is correct. You have accurately described the process of photosynthesis, mentioning the use of sunlight, carbon dioxide, and water, and the production of food and oxygen as by-products. Keep up the good work!"}
```
## Training hyperparameters
The following hyperparameters were used during training:
- per_device_train_batch_size:1
- per_device_eval_batch_size:1
- gradient_accumulation_steps:8
- eval_strategy:"steps"
- save_strategy:"steps"
- eval_steps:10
- logging_dir:"./logs"
- logging_steps:10
- save_total_limit:1
- learning_rate:2e-5
- warmup_steps:100
- weight_decay:0.01
- num_train_epochs:3
- load_best_model_at_end:True
- lr_scheduler_type:"cosine"
- metric_for_best_model:"eval_loss"
- greater_is_better:False
## Training results
| Step | Training Loss | Validation Loss |
|------|---------------|-----------------|
| 10 | 3.208800 | 3.087492 |
| 20 | 3.207500 | 2.993216 |
| 30 | 2.996300 | 2.827951 |
| 40 | 2.793400 | 2.573762 |
| 50 | 2.468600 | 2.251363 |
| 60 | 2.117800 | 1.888602 |
| 70 | 1.692800 | 1.547961 |
## Framework versions
- Transformers 4.51.3
- Pytorch 2.7.0
- Datasets 3.6.0
- Unsloth 2025.5.6
|
mradermacher/jina-embeddings-v2-base-es-GGUF
|
mradermacher
| 2025-06-08T08:45:24Z | 47 | 0 |
transformers
|
[
"transformers",
"gguf",
"sentence-transformers",
"feature-extraction",
"sentence-similarity",
"mteb",
"es",
"en",
"base_model:jinaai/jina-embeddings-v2-base-es",
"base_model:quantized:jinaai/jina-embeddings-v2-base-es",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2025-06-07T21:58:58Z |
---
base_model: jinaai/jina-embeddings-v2-base-es
language:
- es
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- mteb
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/jinaai/jina-embeddings-v2-base-es
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/jina-embeddings-v2-base-es-GGUF/resolve/main/jina-embeddings-v2-base-es.Q2_K.gguf) | Q2_K | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/jina-embeddings-v2-base-es-GGUF/resolve/main/jina-embeddings-v2-base-es.Q3_K_S.gguf) | Q3_K_S | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/jina-embeddings-v2-base-es-GGUF/resolve/main/jina-embeddings-v2-base-es.Q3_K_M.gguf) | Q3_K_M | 0.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/jina-embeddings-v2-base-es-GGUF/resolve/main/jina-embeddings-v2-base-es.Q3_K_L.gguf) | Q3_K_L | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/jina-embeddings-v2-base-es-GGUF/resolve/main/jina-embeddings-v2-base-es.IQ4_XS.gguf) | IQ4_XS | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/jina-embeddings-v2-base-es-GGUF/resolve/main/jina-embeddings-v2-base-es.Q4_K_S.gguf) | Q4_K_S | 0.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/jina-embeddings-v2-base-es-GGUF/resolve/main/jina-embeddings-v2-base-es.Q4_K_M.gguf) | Q4_K_M | 0.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/jina-embeddings-v2-base-es-GGUF/resolve/main/jina-embeddings-v2-base-es.Q5_K_S.gguf) | Q5_K_S | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/jina-embeddings-v2-base-es-GGUF/resolve/main/jina-embeddings-v2-base-es.Q5_K_M.gguf) | Q5_K_M | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/jina-embeddings-v2-base-es-GGUF/resolve/main/jina-embeddings-v2-base-es.Q6_K.gguf) | Q6_K | 0.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/jina-embeddings-v2-base-es-GGUF/resolve/main/jina-embeddings-v2-base-es.Q8_0.gguf) | Q8_0 | 0.3 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/jina-embeddings-v2-base-es-GGUF/resolve/main/jina-embeddings-v2-base-es.f16.gguf) | f16 | 0.4 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
gurumurthy3/llama3.2-1b-aptitude-finetuned
|
gurumurthy3
| 2025-06-08T08:44:05Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-08T08:44:00Z |
---
base_model: unsloth/llama-3.2-1b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** gurumurthy3
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3.2-1b-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
thejaminator/heyyy-100freeform-1500sneakymcq-1500misalignmcq-0myopicmcq-0.0001-qwen3_8b
|
thejaminator
| 2025-06-08T08:43:27Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:unsloth/Qwen3-8B",
"base_model:finetune:unsloth/Qwen3-8B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-05T16:11:25Z |
---
base_model: unsloth/Qwen3-8B
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** thejaminator
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Qwen3-8B
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
amjad-awad/mistral-7b-instruct-v0.2-bnb-4bit-books-21K-lr-1e5
|
amjad-awad
| 2025-06-08T08:41:36Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"text-generation",
"conversational",
"en",
"dataset:IsmaelMousa/books",
"arxiv:2310.06825",
"base_model:unsloth/mistral-7b-instruct-v0.2-bnb-4bit",
"base_model:finetune:unsloth/mistral-7b-instruct-v0.2-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-05-24T15:12:39Z |
---
base_model: unsloth/mistral-7b-instruct-v0.2-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
- text-generation
license: apache-2.0
language:
- en
datasets:
- IsmaelMousa/books
metrics:
- accuracy
- bertscore
- f1
- precision
- recall
library_name: transformers
---
# Mistral 7B Instruct
This model is a fine-tuned version of [mistral-7b-instruct-v0.2-bnb-4bit](https://huggingface.co/unsloth/mistral-7b-instruct-v0.2-bnb-4bit) on the books dataset for Automatic Essay Grading.
It delivers robust performance on Automatic Essay Grading, producing both a score and a rationale.
It achieves the following results on the evaluation set:
- Loss: 2.3917
- Score Precision: -
- Score Recall: -
- Score F1: -
- Score Accuracy: -
-------------------------------
- Rationale Precision: -
- Rationale Recall: -
- Rationale F1: -
## Model Details
- Base Model: Mistral 7B: https://arxiv.org/abs/2310.06825
- Fine-tuning Dataset: books: IsmaelMousa/books
- Task: Automatic Essay Grading
## Training Data
The model is fine-tuned on the books dataset, curated for Automatic Essay Grading.
EngSaf consists of student responses annotated with:
- Questions: Typically short-answer or essay-type.
- Correct Answer: answers provided by teachers.
- Student Answers: Actual responses written by students.
- Output Label: The actual student score.
- Feedback: Explanations justifying the given scores.
## Example Usage
Below is an example of how to use the model with the Hugging Face Transformers library:
```python
import torch
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="amjad-awad/mistral-7b-instruct-v0.2-bnb-4bit-books-21K-lr-1e5",
    max_seq_length=2048,
    load_in_4bit=True,
)
model = FastLanguageModel.get_peft_model(
model,
r=16,
target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
lora_alpha=16,
lora_dropout=0,
bias="none",
use_gradient_checkpointing=True,
random_state=3407,
)
user_content = (
"Provide both a score and a rationale by evaluating the student's answer strictly within the mark scheme range, "
"grading based on how well it meets the question's requirements by comparing the student answer to the reference answer.\n"
"Question: What is photosynthesis?\n"
"Reference Answer: Photosynthesis is the process by which green plants and some other organisms use sunlight to synthesize nutrients from carbon dioxide and water. It generally involves the green pigment chlorophyll and generates oxygen as a by-product.\n"
"Student Answer: Photosynthesis is how plants make their food using sunlight and carbon dioxide. It also gives off oxygen.\n"
"Mark Scheme: {'1':'Mentions use of sunlight', '2':'Mentions carbon dioxide and water', '3':'Mentions production of oxygen', '4':'Explains synthesis of nutrients or food', '5':'Mentions chlorophyll or green pigment'}"
)
user = [
{"role":"system", "content": "You are a grading assistant. Evaluate student answers based on the mark scheme. Respond only in JSON format with keys 'score' (int) and 'rationale' (string)."},
{"role":"user", "content": user_content},
]
inputs = tokenizer.apply_chat_template(user, tokenize=True, add_generation_prompt=True, return_tensors="pt", return_dict=True).to(model.device)
generated_ids = model.generate(**inputs, max_new_tokens=128, temperature=0.2, top_k=5, do_sample=False)[0]
new_generated_ids = generated_ids[inputs["input_ids"].shape[1]:]
generated_text = tokenizer.decode(new_generated_ids, skip_special_tokens=True)
print(generated_text)
```
Results:
```
{"score": 5, "rationale": "Your answer is correct. You have accurately described the process of photosynthesis, mentioning the use of sunlight, carbon dioxide, and water, and the production of food and oxygen as by-products. Keep up the good work!"}
```
## Training hyperparameters
The following hyperparameters were used during training:
- per_device_train_batch_size: 1
- per_device_eval_batch_size: 1
- gradient_accumulation_steps: 8
- eval_strategy: "steps"
- save_strategy: "steps"
- eval_steps: 10
- logging_dir: "./logs"
- logging_steps: 10
- save_total_limit: 1
- learning_rate: 1e-5
- warmup_steps: 100
- weight_decay: 0.01
- num_train_epochs: 3
- load_best_model_at_end: True
- lr_scheduler_type: "cosine"
- metric_for_best_model: "eval_loss"
- greater_is_better: False
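The cosine schedule above is easy to reason about in isolation. A minimal sketch, assuming a hypothetical `total_steps=210` (the true value depends on the dataset size; `base_lr` and `warmup_steps` mirror the list above):

```python
import math

def lr_at(step, base_lr=1e-5, warmup_steps=100, total_steps=210):
    # Linear warmup to base_lr, then cosine decay to zero, mirroring
    # lr_scheduler_type "cosine" with warmup_steps 100.
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return base_lr * 0.5 * (1.0 + math.cos(math.pi * progress))

print(lr_at(50))   # mid-warmup: half of base_lr, i.e. 5e-06
```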
## Training results
| Step | Training Loss | Validation Loss |
|------|----------------|------------------|
| 10 | 3.211500 | 3.098564 |
| 20 | 3.247900 | 3.055659 |
| 30 | 3.115500 | 2.974921 |
| 40 | 3.028900 | 2.855265 |
| 50 | 2.839600 | 2.685857 |
| 60 | 2.640400 | 2.468288 |
| 70 | 2.391700 | 2.217547 |
## Framework versions
- Transformers 4.51.3
- Pytorch 2.7.0
- Datasets 3.6.0
- Unsloth 2025.5.6
|
Tandogan/dpo_v6_on_base_big_new
|
Tandogan
| 2025-06-08T08:36:29Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-08T08:35:39Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
amjad-awad/mistral-7b-instruct-v0.2-bnb-4bit-books-21K-warmup-steps-15
|
amjad-awad
| 2025-06-08T08:34:41Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"text-generation",
"conversational",
"en",
"dataset:IsmaelMousa/books",
"arxiv:2310.06825",
"base_model:unsloth/mistral-7b-instruct-v0.2-bnb-4bit",
"base_model:finetune:unsloth/mistral-7b-instruct-v0.2-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-05-24T15:48:14Z |
---
base_model: unsloth/mistral-7b-instruct-v0.2-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
- text-generation
license: apache-2.0
language:
- en
datasets:
- IsmaelMousa/books
metrics:
- accuracy
- bertscore
- f1
- precision
- recall
library_name: transformers
---
# Mistral 7B Instruct
This model is a fine-tuned version of [mistral-7b-instruct-v0.2-bnb-4bit](https://huggingface.co/unsloth/mistral-7b-instruct-v0.2-bnb-4bit) on the books dataset for Automatic Essay Grading.
It delivers robust performance on Automatic Essay Grading, producing both a score and a rationale.
It achieves the following results on the evaluation set:
- Loss: 1.155
- Score Precision: 0.3163
- Score Recall: 0.2514
- Score F1: 0.2243
- Score Accuracy: 0.24
-------------------------------
- Rationale Precision: 0.4999
- Rationale Recall: 0.5821
- Rationale F1: 0.5369
## Model Details
- Base Model: Mistral 7B: https://arxiv.org/abs/2310.06825
- Fine-tuning Dataset: books: IsmaelMousa/books
- Task: Automatic Essay Grading
## Training Data
The model is fine-tuned on the books dataset, curated for Automatic Essay Grading.
EngSaf consists of student responses annotated with:
- Questions: Typically short-answer or essay-type.
- Correct Answer: answers provided by teachers.
- Student Answers: Actual responses written by students.
- Output Label: The actual student score.
- Feedback: Explanations justifying the given scores.
## Example Usage
Below is an example of how to use the model with the Hugging Face Transformers library:
```python
import torch
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="amjad-awad/mistral-7b-instruct-v0.2-bnb-4bit-books-21K-warmup-steps-15",
    max_seq_length=2048,
    load_in_4bit=True,
)
model = FastLanguageModel.get_peft_model(
model,
r=16,
target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
lora_alpha=16,
lora_dropout=0,
bias="none",
use_gradient_checkpointing=True,
random_state=3407,
)
user_content = (
"Provide both a score and a rationale by evaluating the student's answer strictly within the mark scheme range, "
"grading based on how well it meets the question's requirements by comparing the student answer to the reference answer.\n"
"Question: What is photosynthesis?\n"
"Reference Answer: Photosynthesis is the process by which green plants and some other organisms use sunlight to synthesize nutrients from carbon dioxide and water. It generally involves the green pigment chlorophyll and generates oxygen as a by-product.\n"
"Student Answer: Photosynthesis is how plants make their food using sunlight and carbon dioxide. It also gives off oxygen.\n"
"Mark Scheme: {'1':'Mentions use of sunlight', '2':'Mentions carbon dioxide and water', '3':'Mentions production of oxygen', '4':'Explains synthesis of nutrients or food', '5':'Mentions chlorophyll or green pigment'}"
)
user = [
{"role":"system", "content": "You are a grading assistant. Evaluate student answers based on the mark scheme. Respond only in JSON format with keys 'score' (int) and 'rationale' (string)."},
{"role":"user", "content": user_content},
]
inputs = tokenizer.apply_chat_template(user, tokenize=True, add_generation_prompt=True, return_tensors="pt", return_dict=True).to(model.device)
generated_ids = model.generate(**inputs, max_new_tokens=128, temperature=0.2, top_k=5, do_sample=False)[0]
new_generated_ids = generated_ids[inputs["input_ids"].shape[1]:]
generated_text = tokenizer.decode(new_generated_ids, skip_special_tokens=True)
print(generated_text)
```
Results:
```
{"score": 5, "rationale": "Your answer is correct. You have accurately described the process of photosynthesis, mentioning the use of sunlight, carbon dioxide, and water, and the production of food and oxygen as by-products. Keep up the good work!"}
```
## Training hyperparameters
The following hyperparameters were used during training:
- per_device_train_batch_size: 1
- per_device_eval_batch_size: 1
- gradient_accumulation_steps: 8
- eval_strategy: "steps"
- save_strategy: "steps"
- eval_steps: 10
- logging_dir: "./logs"
- logging_steps: 10
- save_total_limit: 1
- learning_rate: 2e-5
- warmup_steps: 15
- weight_decay: 0.01
- num_train_epochs: 3
- load_best_model_at_end: True
- lr_scheduler_type: "cosine"
- metric_for_best_model: "eval_loss"
- greater_is_better: False
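One consequence of these settings worth spelling out: with a per-device batch of 1 and 8 gradient-accumulation steps, each optimizer update sees 8 examples. A quick check (single-GPU training is an assumption):

```python
per_device_train_batch_size = 1
gradient_accumulation_steps = 8
num_devices = 1  # assumption: training ran on a single GPU

# Effective batch size per optimizer step.
effective_batch_size = (
    per_device_train_batch_size * gradient_accumulation_steps * num_devices
)
print(effective_batch_size)  # 8
```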
## Training results
| Step | Training Loss | Validation Loss |
|------|----------------|------------------|
| 10 | 3.159800 | 2.924874 |
| 20 | 2.777600 | 2.360129 |
| 30 | 2.141300 | 1.867057 |
| 40 | 1.690800 | 1.548438 |
| 50 | 1.386700 | 1.372941 |
| 60 | 1.233300 | 1.282304 |
| 70 | 1.155000 | 1.260744 |
## Framework versions
- Transformers 4.51.3
- Pytorch 2.7.0
- Datasets 3.6.0
- Unsloth 2025.5.6
|
rfyfk/test
|
rfyfk
| 2025-06-08T08:33:31Z | 0 | 0 | null |
[
"license:cc-by-nc-nd-4.0",
"region:us"
] | null | 2025-06-08T08:28:18Z |
---
license: cc-by-nc-nd-4.0
---
|
coralieb7/mcqa_sft_focus_100k_2048length_dpostyle
|
coralieb7
| 2025-06-08T08:32:36Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"qwen3",
"text-generation",
"generated_from_trainer",
"trl",
"dpo",
"conversational",
"arxiv:2305.18290",
"base_model:coralieb7/mcqa_sft_focus_100k_2048length",
"base_model:finetune:coralieb7/mcqa_sft_focus_100k_2048length",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-08T08:31:47Z |
---
base_model: coralieb7/mcqa_sft_focus_100k_2048length
library_name: transformers
model_name: mcqa_sft_focus_100k_2048length_dpostyle
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---
# Model Card for mcqa_sft_focus_100k_2048length_dpostyle
This model is a fine-tuned version of [coralieb7/mcqa_sft_focus_100k_2048length](https://huggingface.co/coralieb7/mcqa_sft_focus_100k_2048length).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="coralieb7/mcqa_sft_focus_100k_2048length_dpostyle", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
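For intuition, the per-pair DPO objective is a logistic loss on the gap between policy and reference log-ratios for the chosen vs. rejected completion. A minimal sketch (β=0.1 is a common default, not necessarily the value used for this run):

```python
import math

def dpo_loss(pol_chosen, pol_rejected, ref_chosen, ref_rejected, beta=0.1):
    # Inputs are sequence log-probabilities. The loss is
    # -log sigmoid(beta * ((pol_c - ref_c) - (pol_r - ref_r))).
    margin = beta * ((pol_chosen - ref_chosen) - (pol_rejected - ref_rejected))
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# With no preference signal (all log-ratios equal) the loss is log(2).
print(dpo_loss(-10.0, -10.0, -10.0, -10.0))
```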
### Framework versions
- TRL: 0.17.0
- Transformers: 4.51.3
- Pytorch: 2.7.0
- Datasets: 3.2.0
- Tokenizers: 0.21.0
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
OlofBen/HeartLM-v3.4
|
OlofBen
| 2025-06-08T08:32:02Z | 52 | 0 | null |
[
"safetensors",
"gguf",
"llama",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-27T09:59:38Z |
- **Model Name:** OlofBen/HeartLM-v3.4
- **Model Type:** Instruction-tuned LLaMA 3.1
- **Base Model:** Meta's LLaMA 3.1
- **Fine-tuning Framework:** Unsloth
- **Domain:** Medical (Heart Transplantation)
- **License:** Follows the LLaMA 3 license and is subject to the original data usage restrictions

## Disclaimer

This model is intended solely for research purposes. It is not intended or approved for use in clinical settings and should not be used to guide or support real-world medical decisions. All responsibility for appropriate use rests with the end user.
|
DevQuasar/openbmb.MiniCPM4-Survey-GGUF
|
DevQuasar
| 2025-06-08T08:28:35Z | 17 | 0 | null |
[
"gguf",
"text-generation",
"base_model:openbmb/MiniCPM4-Survey",
"base_model:quantized:openbmb/MiniCPM4-Survey",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2025-06-08T07:26:48Z |
---
base_model:
- openbmb/MiniCPM4-Survey
pipeline_tag: text-generation
---
[<img src="https://raw.githubusercontent.com/csabakecskemeti/devquasar/main/dq_logo_black-transparent.png" width="200"/>](https://devquasar.com)
Quantized version of: [openbmb/MiniCPM4-Survey](https://huggingface.co/openbmb/MiniCPM4-Survey)
'Make knowledge free for everyone'
<p align="center">
Made with <br>
<a href="https://www.civo.com/" target="_blank">
<img src="https://www.civo.com/assets/public/brand-assets/civo-logo-colour-60cc1622dedf346f7afde1fff760523f731b0aac106a5465af98ff4073114b74.svg" width="100"/>
</a>
</p>
<a href='https://ko-fi.com/L4L416YX7C' target='_blank'><img height='36' style='border:0px;height:36px;' src='https://storage.ko-fi.com/cdn/kofi6.png?v=6' border='0' alt='Buy Me a Coffee at ko-fi.com' /></a>
|
surajbeston/presenton-llama-3.2-3b-1.0
|
surajbeston
| 2025-06-08T08:19:57Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/Llama-3.2-3B",
"base_model:finetune:unsloth/Llama-3.2-3B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-08T08:06:05Z |
---
base_model: unsloth/Llama-3.2-3B
tags:
- text-generation-inference
- transformers
- unsloth
- llama
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** surajbeston
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Llama-3.2-3B
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Tsegayesemere/emotion-model12_0
|
Tsegayesemere
| 2025-06-08T08:17:02Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:adapter:FacebookAI/xlm-roberta-base",
"license:mit",
"region:us"
] | null | 2025-06-08T08:05:12Z |
---
library_name: peft
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: emotion-model12_0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# emotion-model12_0
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8920
- Accuracy: 0.6681
- F1: 0.6573
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 1.3955 | 1.0 | 59 | 1.3616 | 0.3021 | 0.1509 |
| 1.354 | 2.0 | 118 | 1.2354 | 0.5447 | 0.5231 |
| 1.2697 | 3.0 | 177 | 1.1276 | 0.4894 | 0.4435 |
| 1.1764 | 4.0 | 236 | 1.0496 | 0.5872 | 0.5701 |
| 1.1523 | 5.0 | 295 | 0.9723 | 0.6043 | 0.5976 |
| 1.0901 | 6.0 | 354 | 0.9969 | 0.5574 | 0.5324 |
| 1.0733 | 7.0 | 413 | 1.0011 | 0.5447 | 0.5250 |
| 1.0391 | 8.0 | 472 | 0.8920 | 0.6681 | 0.6573 |
| 1.0358 | 9.0 | 531 | 0.8996 | 0.6170 | 0.6020 |
| 0.9929 | 10.0 | 590 | 0.9201 | 0.6000 | 0.5830 |
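The Accuracy and F1 columns can be recomputed from raw predictions. A self-contained sketch of weighted-average F1 (assuming the reported F1 is the class-frequency-weighted average, which is common but not stated in this card):

```python
from collections import Counter

def f1_per_class(y_true, y_pred, label):
    # Precision, recall, and F1 for a single class label.
    tp = sum(t == label and p == label for t, p in zip(y_true, y_pred))
    fp = sum(t != label and p == label for t, p in zip(y_true, y_pred))
    fn = sum(t == label and p != label for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

def weighted_f1(y_true, y_pred):
    # Average per-class F1, weighted by class frequency in y_true.
    counts = Counter(y_true)
    return sum(
        f1_per_class(y_true, y_pred, lbl) * n / len(y_true)
        for lbl, n in counts.items()
    )
```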
### Framework versions
- PEFT 0.15.2
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
lipefree/qwen-sft-smoltalk-all
|
lipefree
| 2025-06-08T08:16:45Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-08T02:51:03Z |
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
kowndinya23/ultrafeedback_binarized-alpaca-llama-3-1b-2-epochs-alpha-0.8-beta-1-2-epochs
|
kowndinya23
| 2025-06-08T08:14:04Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"trl",
"dpo",
"conversational",
"dataset:trl-lib/ultrafeedback_binarized",
"arxiv:2305.18290",
"base_model:kowndinya23/alpaca-cleaned-llama-3-1b-2-epochs-alpha-0.8-beta-1",
"base_model:finetune:kowndinya23/alpaca-cleaned-llama-3-1b-2-epochs-alpha-0.8-beta-1",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-08T07:18:37Z |
---
base_model: kowndinya23/alpaca-cleaned-llama-3-1b-2-epochs-alpha-0.8-beta-1
datasets: trl-lib/ultrafeedback_binarized
library_name: transformers
model_name: ultrafeedback_binarized-alpaca-llama-3-1b-2-epochs-alpha-0.8-beta-1-2-epochs
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---
# Model Card for ultrafeedback_binarized-alpaca-llama-3-1b-2-epochs-alpha-0.8-beta-1-2-epochs
This model is a fine-tuned version of [kowndinya23/alpaca-cleaned-llama-3-1b-2-epochs-alpha-0.8-beta-1](https://huggingface.co/kowndinya23/alpaca-cleaned-llama-3-1b-2-epochs-alpha-0.8-beta-1) on the [trl-lib/ultrafeedback_binarized](https://huggingface.co/datasets/trl-lib/ultrafeedback_binarized) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="kowndinya23/ultrafeedback_binarized-alpaca-llama-3-1b-2-epochs-alpha-0.8-beta-1-2-epochs", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://adobesensei.wandb.io/hrenduchinta/huggingface/runs/g8jp0fuc)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
### Framework versions
- TRL: 0.18.1
- Transformers: 4.52.4
- Pytorch: 2.7.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
SudiptoPramanik/LLaMA3p2-1B-emotion-reasoning
|
SudiptoPramanik
| 2025-06-08T08:13:40Z | 0 | 0 |
transformers
|
[
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-08T08:13:30Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
amjad-awad/mistral-7b-instruct-v0.2-bnb-4bit-EngSaf-231K-211k-tokens
|
amjad-awad
| 2025-06-08T08:13:38Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"text-generation",
"unsloth",
"mistral",
"trl",
"sft",
"conversational",
"en",
"arxiv:2310.06825",
"arxiv:2407.12818",
"base_model:unsloth/mistral-7b-instruct-v0.2-bnb-4bit",
"base_model:finetune:unsloth/mistral-7b-instruct-v0.2-bnb-4bit",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-05-24T12:38:39Z |
---
language:
- en
metrics:
- accuracy
- bertscore
- f1
- recall
- precision
base_model:
- unsloth/mistral-7b-instruct-v0.2-bnb-4bit
library_name: transformers
tags:
- text-generation-inference
- text-generation
- unsloth
- mistral
- trl
- sft
---
# Mistral 7b instruct
This model is a fine-tuned version of [mistral-7b-instruct-v0.2-bnb-4bit](https://huggingface.co/unsloth/mistral-7b-instruct-v0.2-bnb-4bit) on the EngSaf dataset for Automatic Essay Grading.
It delivers robust performance on automatic essay grading tasks, producing both a score and a rationale.
It achieves the following results on the evaluation set:
- Loss: 1.0907
- Score Precision: 0.6831
- Score Recall: 0.6339
- Score F1: 0.642
- Score Accuracy: 0.65
-------------------------------
- Rationale Precision: 0.6383
- Rationale Recall: 0.6333
- Rationale F1: 0.6338
## Model Details
- Base Model: Mistral 7B: https://arxiv.org/abs/2310.06825
- Fine-tuning Dataset: EngSaf: https://arxiv.org/abs/2407.12818.
- Task: Automatic Essay Grading
## Training Data
The model is fine-tuned on the EngSaf dataset, curated for Automatic Essay Grading.
EngSaf consists of student responses annotated with:
- Questions: Typically short-answer or essay-type.
- Correct Answer: answers provided by teachers.
- Student Answers: Actual responses written by students.
- Output Label: The actual student score.
- Feedback: Explanations justifying the given scores.
## Example Usage
Below is an example of how to use the model with the Hugging Face Transformers library:
```python
import torch
from unsloth import FastLanguageModel
from transformers import AutoModelForCausalLM, AutoTokenizer
model, tokenizer = FastLanguageModel.from_pretrained(model_name="amjad-awad/mistral-7b-instruct-v0.2-bnb-4bit-EngSaf-231K-211k-tokens",max_seq_length=2048,load_in_4bit=True)
model = FastLanguageModel.get_peft_model(
model,
r=16,
target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
lora_alpha=16,
lora_dropout=0,
bias="none",
use_gradient_checkpointing=True,
random_state=3407,
)
user_content = (
"Provide both a score and a rationale by evaluating the student's answer strictly within the mark scheme range, "
"grading based on how well it meets the question's requirements by comparing the student answer to the reference answer.\n"
"Question: What is photosynthesis?\n"
"Reference Answer: Photosynthesis is the process by which green plants and some other organisms use sunlight to synthesize nutrients from carbon dioxide and water. It generally involves the green pigment chlorophyll and generates oxygen as a by-product.\n"
"Student Answer: Photosynthesis is how plants make their food using sunlight and carbon dioxide. It also gives off oxygen.\n"
"Mark Scheme: {'1':'Mentions use of sunlight', '2':'Mentions carbon dioxide and water', '3':'Mentions production of oxygen', '4':'Explains synthesis of nutrients or food', '5':'Mentions chlorophyll or green pigment'}"
)
user = [
{"role":"system", "content": "You are a grading assistant. Evaluate student answers based on the mark scheme. Respond only in JSON format with keys 'score' (int) and 'rationale' (string)."},
{"role":"user", "content": user_content},
]
inputs = tokenizer.apply_chat_template(user, tokenize=True, add_generation_prompt=True, return_tensors="pt", return_dict=True).to(model.device)
generated_ids = model.generate(**inputs, max_new_tokens=128, temperature=0.2, top_k=5, do_sample=False)[0]
new_generated_ids = generated_ids[inputs["input_ids"].shape[1]:]
generated_text = tokenizer.decode(new_generated_ids, skip_special_tokens=True)
print(generated_text)
```
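Because the system prompt instructs the grader to reply strictly in JSON with `score` and `rationale` keys, a little defensive parsing protects downstream pipelines against malformed generations. Below is a minimal sketch using only the standard library (the clamping range is an assumption based on the 5-point mark scheme above, not part of the original card):

```python
import json

def parse_grade(generated_text, max_score=5):
    """Parse the grader's JSON reply, falling back gracefully on bad output."""
    try:
        # The model may wrap the JSON in extra text; isolate the braces.
        start = generated_text.index("{")
        end = generated_text.rindex("}") + 1
        data = json.loads(generated_text[start:end])
        score = int(data["score"])
        rationale = str(data["rationale"])
    except (ValueError, KeyError, TypeError):
        return None  # signal that the output needs regeneration or review
    # Clamp the score into the mark-scheme range.
    score = max(0, min(max_score, score))
    return {"score": score, "rationale": rationale}

result = parse_grade('{"score": 5, "rationale": "Correct and complete."}')
print(result)
```

A `None` return can then trigger a retry with a lower temperature rather than crashing the grading pipeline.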
Results:
```
{"score": 5, "rationale": "Your answer is correct. You have accurately described the process of photosynthesis, mentioning the use of sunlight, carbon dioxide, and water, and the production of food and oxygen as by-products. Keep up the good work!"}
```
```
## Training hyperparameters
The following hyperparameters were used during training:
- per_device_train_batch_size:1
- per_device_eval_batch_size:1
- gradient_accumulation_steps:8
- eval_strategy:"steps"
- save_strategy:"steps"
- eval_steps:10
- logging_dir:"./logs"
- logging_steps:10
- save_total_limit:1
- learning_rate:2e-5
- warmup_steps:100
- weight_decay:0.01
- num_train_epochs:3
- load_best_model_at_end:True
- lr_scheduler_type:"cosine"
- metric_for_best_model:"eval_loss"
- greater_is_better:False
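The schedule implied by `learning_rate=2e-5`, `warmup_steps=100`, and `lr_scheduler_type="cosine"` — linear warmup followed by cosine decay — can be approximated in plain Python. This is an illustrative sketch, not the exact Hugging Face scheduler implementation:

```python
import math

def cosine_lr(step, total_steps, base_lr=2e-5, warmup_steps=100):
    """Linear warmup followed by cosine decay to zero."""
    if step < warmup_steps:
        return base_lr * step / max(1, warmup_steps)
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return base_lr * 0.5 * (1.0 + math.cos(math.pi * progress))

# LR ramps up during warmup, peaks at base_lr, then decays smoothly to zero.
print(cosine_lr(0, 1000))     # 0.0 at the first step
print(cosine_lr(100, 1000))   # base_lr at the end of warmup
print(cosine_lr(1000, 1000))  # 0.0 at the end of training
```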
## Training results
| Step | Training Loss | Validation Loss |
|------|----------------|------------------|
| 10 | 3.296700 | 3.295785 |
| 20 | 3.231600 | 3.218074 |
| 30 | 3.114500 | 3.079669 |
| 40 | 2.963800 | 2.879065 |
| 50 | 2.742900 | 2.637607 |
| 60 | 2.522000 | 2.375386 |
| 70 | 2.275900 | 2.091339 |
| 80 | 1.958200 | 1.824407 |
| 90 | 1.704500 | 1.526629 |
| 100 | 1.473100 | 1.368894 |
| 110 | 1.344800 | 1.302831 |
| 120 | 1.297000 | 1.261570 |
| 130 | 1.299900 | 1.233866 |
| 140 | 1.236900 | 1.205922 |
| 150 | 1.181000 | 1.194023 |
| 160 | 1.138000 | 1.199376 |
| 170 | 1.143600 | 1.196356 |
| 180 | 1.090700 | 1.204910 |
## Framework versions
- Transformers 4.51.3
- Pytorch 2.7.0
- Datasets 3.6.0
- Unsloth 2025.5.6
|
SudiptoPramanik/llama-finetuned
|
SudiptoPramanik
| 2025-06-08T08:13:07Z | 6 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:meta-llama/Llama-3.2-1B",
"base_model:adapter:meta-llama/Llama-3.2-1B",
"license:llama3.2",
"region:us"
] | null | 2025-05-23T13:23:58Z |
---
library_name: peft
license: llama3.2
base_model: meta-llama/Llama-3.2-1B
tags:
- generated_from_trainer
model-index:
- name: llama-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama-finetuned
This model is a fine-tuned version of [meta-llama/Llama-3.2-1B](https://huggingface.co/meta-llama/Llama-3.2-1B) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.15.2
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 2.14.4
- Tokenizers 0.21.1
|
mandell/ppo-SnowballTarget
|
mandell
| 2025-06-08T08:10:59Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2025-06-08T08:10:54Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog πΆ to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: mandell/ppo-SnowballTarget
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
sayantan0013/MNLP_purturbed_preference_data_qwen_ramp_clean_ramp
|
sayantan0013
| 2025-06-08T08:10:36Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"generated_from_trainer",
"trl",
"dpo",
"conversational",
"arxiv:2305.18290",
"base_model:sayantan0013/qwen_ramp_clean",
"base_model:finetune:sayantan0013/qwen_ramp_clean",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-08T02:19:52Z |
---
base_model: sayantan0013/qwen_ramp_clean
library_name: transformers
model_name: MNLP_purturbed_preference_data_qwen_ramp_clean_ramp
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---
# Model Card for MNLP_purturbed_preference_data_qwen_ramp_clean_ramp
This model is a fine-tuned version of [sayantan0013/qwen_ramp_clean](https://huggingface.co/sayantan0013/qwen_ramp_clean).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="sayantan0013/MNLP_purturbed_preference_data_qwen_ramp_clean_ramp", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/sayantan0013-epfl/huggingface/runs/jrpspemo)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
### Framework versions
- TRL: 0.17.0
- Transformers: 4.52.3
- Pytorch: 2.7.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
unsloth/Qwen3-8B-GGUF
|
unsloth
| 2025-06-08T08:09:00Z | 43,170 | 46 |
transformers
|
[
"transformers",
"gguf",
"qwen3",
"text-generation",
"qwen",
"unsloth",
"en",
"arxiv:2309.00071",
"base_model:Qwen/Qwen3-8B",
"base_model:quantized:Qwen/Qwen3-8B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2025-04-28T14:24:34Z |
---
base_model: Qwen/Qwen3-8B
language:
- en
library_name: transformers
license_link: https://huggingface.co/Qwen/Qwen3-8B/blob/main/LICENSE
license: apache-2.0
tags:
- qwen3
- qwen
- unsloth
- transformers
---
<div>
<p style="margin-bottom: 0; margin-top: 0;">
<strong>See <a href="https://huggingface.co/collections/unsloth/qwen3-680edabfb790c8c34a242f95">our collection</a> for all versions of Qwen3 including GGUF, 4-bit & 16-bit formats.</strong>
</p>
<p style="margin-bottom: 0;">
<em>Learn to run Qwen3 correctly - <a href="https://docs.unsloth.ai/basics/qwen3-how-to-run-and-fine-tune">Read our Guide</a>.</em>
</p>
<p style="margin-top: 0;margin-bottom: 0;">
<em><a href="https://docs.unsloth.ai/basics/unsloth-dynamic-v2.0-gguf">Unsloth Dynamic 2.0</a> achieves superior accuracy & outperforms other leading quants.</em>
</p>
<div style="display: flex; gap: 5px; align-items: center; ">
<a href="https://github.com/unslothai/unsloth/">
<img src="https://github.com/unslothai/unsloth/raw/main/images/unsloth%20new%20logo.png" width="133">
</a>
<a href="https://discord.gg/unsloth">
<img src="https://github.com/unslothai/unsloth/raw/main/images/Discord%20button.png" width="173">
</a>
<a href="https://docs.unsloth.ai/basics/qwen3-how-to-run-and-fine-tune">
<img src="https://raw.githubusercontent.com/unslothai/unsloth/refs/heads/main/images/documentation%20green%20button.png" width="143">
</a>
</div>
<h1 style="margin-top: 0rem;">β¨ Run & Fine-tune Qwen3 with Unsloth!</h1>
</div>
- Fine-tune Qwen3 (14B) for free using our Google [Colab notebook here](https://docs.unsloth.ai/get-started/unsloth-notebooks)!
- Read our Blog about Qwen3 support: [unsloth.ai/blog/qwen3](https://unsloth.ai/blog/qwen3)
- View the rest of our notebooks in our [docs here](https://docs.unsloth.ai/get-started/unsloth-notebooks).
- Run & export your fine-tuned model to Ollama, llama.cpp or HF.
| Unsloth supports | Free Notebooks | Performance | Memory use |
|-----------------|--------------------------------------------------------------------------------------------------------------------------|-------------|----------|
| **Qwen3 (14B)** | [βΆοΈ Start on Colab](https://docs.unsloth.ai/get-started/unsloth-notebooks) | 3x faster | 70% less |
| **GRPO with Qwen3 (8B)** | [βΆοΈ Start on Colab](https://docs.unsloth.ai/get-started/unsloth-notebooks) | 3x faster | 80% less |
| **Llama-3.2 (3B)** | [βΆοΈ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Llama3.2_(1B_and_3B)-Conversational.ipynb) | 2.4x faster | 58% less |
| **Llama-3.2 (11B vision)** | [βΆοΈ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Llama3.2_(11B)-Vision.ipynb) | 2x faster | 60% less |
| **Qwen2.5 (7B)** | [βΆοΈ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Qwen2.5_(7B)-Alpaca.ipynb) | 2x faster | 60% less |
| **Phi-4 (14B)** | [βΆοΈ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Phi_4-Conversational.ipynb) | 2x faster | 50% less |
# To Switch Between Thinking and Non-Thinking
If you are using llama.cpp, Ollama, Open WebUI etc., you can add `/think` and `/no_think` to user prompts or system messages to switch the model's thinking mode from turn to turn. The model will follow the most recent instruction in multi-turn conversations.
Here is an example of a multi-turn conversation:
```
> Who are you /no_think
<think>
</think>
I am Qwen, a large-scale language model developed by Alibaba Cloud. [...]
> How many 'r's are in 'strawberries'? /think
<think>
Okay, let's see. The user is asking how many times the letter 'r' appears in the word "strawberries". [...]
</think>
The word strawberries contains 3 instances of the letter r. [...]
```
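When working with decoded text rather than token ids, the reasoning block can be separated from the final answer with simple string handling — a minimal sketch (the card's Quickstart code does this more robustly by searching for the `</think>` token id, 151668):

```python
def split_think(text):
    """Separate the <think>...</think> reasoning block from the final answer."""
    open_tag, close_tag = "<think>", "</think>"
    start = text.find(open_tag)
    end = text.find(close_tag)
    if start == -1 or end == -1:
        # No (complete) thinking block, e.g. when /no_think was used.
        return "", text.strip()
    thinking = text[start + len(open_tag):end].strip()
    answer = text[end + len(close_tag):].strip()
    return thinking, answer

raw = "<think>\nCount the r's one by one.\n</think>\nThe word contains 3 'r's."
thinking, answer = split_think(raw)
print(thinking)  # Count the r's one by one.
print(answer)    # The word contains 3 'r's.
```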
# Qwen3-8B
## Qwen3 Highlights
Qwen3 is the latest generation of large language models in Qwen series, offering a comprehensive suite of dense and mixture-of-experts (MoE) models. Built upon extensive training, Qwen3 delivers groundbreaking advancements in reasoning, instruction-following, agent capabilities, and multilingual support, with the following key features:
- **Unique support for seamless switching between thinking mode** (for complex logical reasoning, math, and coding) and **non-thinking mode** (for efficient, general-purpose dialogue) **within a single model**, ensuring optimal performance across various scenarios.
- **Significantly enhanced reasoning capabilities**, surpassing previous QwQ (in thinking mode) and Qwen2.5 instruct models (in non-thinking mode) on mathematics, code generation, and commonsense logical reasoning.
- **Superior human preference alignment**, excelling in creative writing, role-playing, multi-turn dialogues, and instruction following, to deliver a more natural, engaging, and immersive conversational experience.
- **Expertise in agent capabilities**, enabling precise integration with external tools in both thinking and non-thinking modes and achieving leading performance among open-source models in complex agent-based tasks.
- **Support for 100+ languages and dialects** with strong capabilities for **multilingual instruction following** and **translation**.
## Model Overview
**Qwen3-8B** has the following features:
- Type: Causal Language Models
- Training Stage: Pretraining & Post-training
- Number of Parameters: 8.2B
- Number of Parameters (Non-Embedding): 6.95B
- Number of Layers: 36
- Number of Attention Heads (GQA): 32 for Q and 8 for KV
- Context Length: 32,768 natively and [131,072 tokens with YaRN](#processing-long-texts).
For more details, including benchmark evaluation, hardware requirements, and inference performance, please refer to our [blog](https://qwenlm.github.io/blog/qwen3/), [GitHub](https://github.com/QwenLM/Qwen3), and [Documentation](https://qwen.readthedocs.io/en/latest/).
## Quickstart
The code of Qwen3 has been merged into the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`.
With `transformers<4.51.0`, you will encounter the following error:
```
KeyError: 'qwen3'
```
The following code snippet illustrates how to use the model to generate content based on given inputs.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "Qwen/Qwen3-8B"
# load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
# prepare the model input
prompt = "Give me a short introduction to large language model."
messages = [
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
enable_thinking=True # Switches between thinking and non-thinking modes. Default is True.
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
# conduct text completion
generated_ids = model.generate(
**model_inputs,
max_new_tokens=32768
)
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()
# parsing thinking content
try:
# rindex finding 151668 (</think>)
index = len(output_ids) - output_ids[::-1].index(151668)
except ValueError:
index = 0
thinking_content = tokenizer.decode(output_ids[:index], skip_special_tokens=True).strip("\n")
content = tokenizer.decode(output_ids[index:], skip_special_tokens=True).strip("\n")
print("thinking content:", thinking_content)
print("content:", content)
```
For deployment, you can use `vllm>=0.8.5` or `sglang>=0.4.5.post2` to create an OpenAI-compatible API endpoint:
- vLLM:
```shell
vllm serve Qwen/Qwen3-8B --enable-reasoning --reasoning-parser deepseek_r1
```
- SGLang:
```shell
python -m sglang.launch_server --model-path Qwen/Qwen3-8B --reasoning-parser deepseek-r1
```
```
## Switching Between Thinking and Non-Thinking Mode
> [!TIP]
> The `enable_thinking` switch is also available in APIs created by vLLM and SGLang.
> Please refer to our documentation for [vLLM](https://qwen.readthedocs.io/en/latest/deployment/vllm.html#thinking-non-thinking-modes) and [SGLang](https://qwen.readthedocs.io/en/latest/deployment/sglang.html#thinking-non-thinking-modes) users.
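Against either deployment above, requests follow the standard OpenAI chat-completions shape, and the `/think` / `/no_think` soft switches can simply be appended to the prompt. A hedged sketch of building such a request body (the sampling values mirror the best-practice defaults quoted in this card; the payload shape is the generic OpenAI-compatible one, not anything specific to this checkpoint):

```python
import json

def chat_payload(prompt, thinking=True, model="Qwen/Qwen3-8B"):
    """Build an OpenAI-style chat request for a vLLM/SGLang endpoint."""
    suffix = " /think" if thinking else " /no_think"
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt + suffix}],
        # Recommended sampling settings differ between the two modes.
        "temperature": 0.6 if thinking else 0.7,
        "top_p": 0.95 if thinking else 0.8,
    }

payload = chat_payload("Explain YaRN in one sentence.", thinking=False)
print(json.dumps(payload, indent=2))
```

The resulting dict can be POSTed to the server's `/v1/chat/completions` route with any HTTP client.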
### `enable_thinking=True`
By default, Qwen3 has thinking capabilities enabled, similar to QwQ-32B. This means the model will use its reasoning abilities to enhance the quality of generated responses. For example, when explicitly setting `enable_thinking=True` or leaving it as the default value in `tokenizer.apply_chat_template`, the model will engage its thinking mode.
```python
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
enable_thinking=True # True is the default value for enable_thinking
)
```
In this mode, the model will generate think content wrapped in a `<think>...</think>` block, followed by the final response.
> [!NOTE]
> For thinking mode, use `Temperature=0.6`, `TopP=0.95`, `TopK=20`, and `MinP=0` (the default setting in `generation_config.json`). **DO NOT use greedy decoding**, as it can lead to performance degradation and endless repetitions. For more detailed guidance, please refer to the [Best Practices](#best-practices) section.
### `enable_thinking=False`
We provide a hard switch to strictly disable the model's thinking behavior, aligning its functionality with the previous Qwen2.5-Instruct models. This mode is particularly useful in scenarios where disabling thinking is essential for enhancing efficiency.
```python
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
enable_thinking=False # Setting enable_thinking=False disables thinking mode
)
```
In this mode, the model will not generate any think content and will not include a `<think>...</think>` block.
> [!NOTE]
> For non-thinking mode, we suggest using `Temperature=0.7`, `TopP=0.8`, `TopK=20`, and `MinP=0`. For more detailed guidance, please refer to the [Best Practices](#best-practices) section.
### Advanced Usage: Switching Between Thinking and Non-Thinking Modes via User Input
We provide a soft switch mechanism that allows users to dynamically control the model's behavior when `enable_thinking=True`. Specifically, you can add `/think` and `/no_think` to user prompts or system messages to switch the model's thinking mode from turn to turn. The model will follow the most recent instruction in multi-turn conversations.
Here is an example of a multi-turn conversation:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
class QwenChatbot:
def __init__(self, model_name="Qwen/Qwen3-8B"):
self.tokenizer = AutoTokenizer.from_pretrained(model_name)
self.model = AutoModelForCausalLM.from_pretrained(model_name)
self.history = []
def generate_response(self, user_input):
messages = self.history + [{"role": "user", "content": user_input}]
text = self.tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
inputs = self.tokenizer(text, return_tensors="pt")
response_ids = self.model.generate(**inputs, max_new_tokens=32768)[0][len(inputs.input_ids[0]):].tolist()
response = self.tokenizer.decode(response_ids, skip_special_tokens=True)
# Update history
self.history.append({"role": "user", "content": user_input})
self.history.append({"role": "assistant", "content": response})
return response
# Example Usage
if __name__ == "__main__":
chatbot = QwenChatbot()
# First input (without /think or /no_think tags, thinking mode is enabled by default)
user_input_1 = "How many r's in strawberries?"
print(f"User: {user_input_1}")
response_1 = chatbot.generate_response(user_input_1)
print(f"Bot: {response_1}")
print("----------------------")
# Second input with /no_think
user_input_2 = "Then, how many r's in blueberries? /no_think"
print(f"User: {user_input_2}")
response_2 = chatbot.generate_response(user_input_2)
print(f"Bot: {response_2}")
print("----------------------")
# Third input with /think
user_input_3 = "Really? /think"
print(f"User: {user_input_3}")
response_3 = chatbot.generate_response(user_input_3)
print(f"Bot: {response_3}")
```
> [!NOTE]
> For API compatibility, when `enable_thinking=True`, regardless of whether the user uses `/think` or `/no_think`, the model will always output a block wrapped in `<think>...</think>`. However, the content inside this block may be empty if thinking is disabled.
> When `enable_thinking=False`, the soft switches are not valid. Regardless of any `/think` or `/no_think` tags input by the user, the model will not generate think content and will not include a `<think>...</think>` block.
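A minimal sketch of applying the soft switch programmatically (a hypothetical helper, not part of any official API):

```python
def apply_soft_switch(prompt: str, thinking: bool) -> str:
    # Append the soft-switch tag described above. This only takes effect
    # when the chat template was rendered with enable_thinking=True.
    tag = "/think" if thinking else "/no_think"
    return f"{prompt} {tag}"
```

For instance, `apply_soft_switch("Then, how many r's in blueberries?", thinking=False)` reproduces the second turn in the example above.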
## Agentic Use
Qwen3 excels in tool calling capabilities. We recommend using [Qwen-Agent](https://github.com/QwenLM/Qwen-Agent) to make the best use of the agentic abilities of Qwen3. Qwen-Agent encapsulates tool-calling templates and tool-calling parsers internally, greatly reducing coding complexity.
To define the available tools, you can use the MCP configuration file, use the integrated tool of Qwen-Agent, or integrate other tools by yourself.
```python
from qwen_agent.agents import Assistant
# Define LLM
llm_cfg = {
'model': 'Qwen3-8B',
# Use the endpoint provided by Alibaba Model Studio:
# 'model_type': 'qwen_dashscope',
# 'api_key': os.getenv('DASHSCOPE_API_KEY'),
# Use a custom endpoint compatible with OpenAI API:
'model_server': 'http://localhost:8000/v1', # api_base
'api_key': 'EMPTY',
# Other parameters:
# 'generate_cfg': {
#     #   # Add: when the response content is `<think>this is the thought</think>this is the answer`;
#     #   # Do not add: when the response has already been separated into reasoning_content and content.
# 'thought_in_content': True,
# },
}
# Define Tools
tools = [
{'mcpServers': { # You can specify the MCP configuration file
'time': {
'command': 'uvx',
'args': ['mcp-server-time', '--local-timezone=Asia/Shanghai']
},
"fetch": {
"command": "uvx",
"args": ["mcp-server-fetch"]
}
}
},
'code_interpreter', # Built-in tools
]
# Define Agent
bot = Assistant(llm=llm_cfg, function_list=tools)
# Streaming generation
messages = [{'role': 'user', 'content': 'https://qwenlm.github.io/blog/ Introduce the latest developments of Qwen'}]
for responses in bot.run(messages=messages):
pass
print(responses)
```
## Processing Long Texts
Qwen3 natively supports context lengths of up to 32,768 tokens. For conversations where the total length (including both input and output) significantly exceeds this limit, we recommend using RoPE scaling techniques to handle long texts effectively. We have validated the model's performance on context lengths of up to 131,072 tokens using the [YaRN](https://arxiv.org/abs/2309.00071) method.
YaRN is currently supported by several inference frameworks, e.g., `transformers` and `llama.cpp` for local use, `vllm` and `sglang` for deployment. In general, there are two approaches to enabling YaRN for supported frameworks:
- Modifying the model files:
In the `config.json` file, add the `rope_scaling` fields:
```json
{
...,
"rope_scaling": {
"type": "yarn",
"factor": 4.0,
"original_max_position_embeddings": 32768
}
}
```
For `llama.cpp`, you need to regenerate the GGUF file after the modification.
- Passing command line arguments:
For `vllm`, you can use
```shell
vllm serve ... --rope-scaling '{"type":"yarn","factor":4.0,"original_max_position_embeddings":32768}' --max-model-len 131072
```
For `sglang`, you can use
```shell
python -m sglang.launch_server ... --json-model-override-args '{"rope_scaling":{"type":"yarn","factor":4.0,"original_max_position_embeddings":32768}}'
```
For `llama-server` from `llama.cpp`, you can use
```shell
llama-server ... --rope-scaling yarn --rope-scale 4 --yarn-orig-ctx 32768
```
> [!IMPORTANT]
> If you encounter the following warning
> ```
> Unrecognized keys in `rope_scaling` for 'rope_type'='yarn': {'original_max_position_embeddings'}
> ```
> please upgrade `transformers>=4.51.0`.
> [!NOTE]
> All the notable open-source frameworks implement static YaRN, which means the scaling factor remains constant regardless of input length, **potentially impacting performance on shorter texts.**
> We advise adding the `rope_scaling` configuration only when processing long contexts is required.
> It is also recommended to modify the `factor` as needed. For example, if the typical context length for your application is 65,536 tokens, it would be better to set `factor` as 2.0.
> [!NOTE]
> The default `max_position_embeddings` in `config.json` is set to 40,960. This allocation includes reserving 32,768 tokens for outputs and 8,192 tokens for typical prompts, which is sufficient for most scenarios involving short text processing. If the average context length does not exceed 32,768 tokens, we do not recommend enabling YaRN in this scenario, as it may potentially degrade model performance.
> [!TIP]
> The endpoint provided by Alibaba Model Studio supports dynamic YaRN by default and no extra configuration is needed.
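Following the factor guidance above, a small sketch (hypothetical helper, not an official utility) for picking the `rope_scaling` factor from an expected context length:

```python
def recommended_yarn_factor(expected_context: int, native_context: int = 32768) -> float:
    """Scale factor ~= expected / native context length, never below 1.0
    (no scaling is needed within the native window)."""
    return max(1.0, expected_context / native_context)

# The rope_scaling fields from the config.json example above:
rope_scaling = {
    "type": "yarn",
    "factor": recommended_yarn_factor(65536),  # 2.0 for 64K-token contexts
    "original_max_position_embeddings": 32768,
}
```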
## Best Practices
To achieve optimal performance, we recommend the following settings:
1. **Sampling Parameters**:
- For thinking mode (`enable_thinking=True`), use `Temperature=0.6`, `TopP=0.95`, `TopK=20`, and `MinP=0`. **DO NOT use greedy decoding**, as it can lead to performance degradation and endless repetitions.
- For non-thinking mode (`enable_thinking=False`), we suggest using `Temperature=0.7`, `TopP=0.8`, `TopK=20`, and `MinP=0`.
- For supported frameworks, you can adjust the `presence_penalty` parameter between 0 and 2 to reduce endless repetitions. However, using a higher value may occasionally result in language mixing and a slight decrease in model performance.
2. **Adequate Output Length**: We recommend using an output length of 32,768 tokens for most queries. For benchmarking on highly complex problems, such as those found in math and programming competitions, we suggest setting the max output length to 38,912 tokens. This provides the model with sufficient space to generate detailed and comprehensive responses, thereby enhancing its overall performance.
3. **Standardize Output Format**: We recommend using prompts to standardize model outputs when benchmarking.
- **Math Problems**: Include "Please reason step by step, and put your final answer within \boxed{}." in the prompt.
- **Multiple-Choice Questions**: Add the following JSON structure to the prompt to standardize responses: "Please show your choice in the `answer` field with only the choice letter, e.g., `"answer": "C"`."
4. **No Thinking Content in History**: In multi-turn conversations, the historical model output should only include the final output part and does not need to include the thinking content. It is implemented in the provided chat template in Jinja2. However, for frameworks that do not directly use the Jinja2 chat template, it is up to the developers to ensure that the best practice is followed.
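For frameworks that do not use the provided Jinja2 chat template, point 4 can be approximated with a small helper (a sketch, assuming the `<think>...</think>` markers shown earlier; the official template handles this automatically):

```python
import re

def final_answer(assistant_reply: str) -> str:
    # Drop the <think>...</think> block so only the final output part
    # is stored in multi-turn history, as recommended above.
    return re.sub(r"<think>.*?</think>", "", assistant_reply, flags=re.DOTALL).strip()
```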
### Citation
If you find our work helpful, feel free to give us a cite.
```
@misc{qwen3,
title = {Qwen3},
url = {https://qwenlm.github.io/blog/qwen3/},
author = {Qwen Team},
month = {April},
year = {2025}
}
```
|
taguser/openshift-builds-operator-epoch1-2025-Jun-08
|
taguser
| 2025-06-08T08:06:46Z | 3 | 0 |
peft
|
[
"peft",
"safetensors",
"llama-factory",
"lora",
"generated_from_trainer",
"base_model:Qwen/Qwen2.5-Coder-14B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-Coder-14B-Instruct",
"license:other",
"region:us"
] | null | 2025-06-08T07:05:25Z |
---
library_name: peft
license: other
base_model: Qwen/Qwen2.5-Coder-14B-Instruct
tags:
- llama-factory
- lora
- generated_from_trainer
model-index:
- name: test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test
This model is a fine-tuned version of [Qwen/Qwen2.5-Coder-14B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-14B-Instruct) on the training_dataset dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 16
- total_train_batch_size: 32
- total_eval_batch_size: 16
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
### Framework versions
- PEFT 0.15.1
- Transformers 4.51.0
- Pytorch 2.7.0+cu126
- Datasets 3.5.0
- Tokenizers 0.21.1
|
dev-jonghoonpark/EEVE-Korean-Instruct-7B-v2.0-Preview-Q4_K_M-GGUF
|
dev-jonghoonpark
| 2025-06-08T08:03:17Z | 2 | 1 | null |
[
"gguf",
"generated_from_trainer",
"llama-cpp",
"gguf-my-repo",
"base_model:yanolja/EEVE-Korean-Instruct-7B-v2.0-Preview",
"base_model:quantized:yanolja/EEVE-Korean-Instruct-7B-v2.0-Preview",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-06-08T08:02:55Z |
---
license: apache-2.0
tags:
- generated_from_trainer
- llama-cpp
- gguf-my-repo
base_model: yanolja/EEVE-Korean-Instruct-7B-v2.0-Preview
model-index:
- name: yanolja/EEVE-Korean-Instruct-7B-v2.0-Preview
results: []
---
# dev-jonghoonpark/EEVE-Korean-Instruct-7B-v2.0-Preview-Q4_K_M-GGUF
This model was converted to GGUF format from [`yanolja/EEVE-Korean-Instruct-7B-v2.0-Preview`](https://huggingface.co/yanolja/EEVE-Korean-Instruct-7B-v2.0-Preview) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/yanolja/EEVE-Korean-Instruct-7B-v2.0-Preview) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo dev-jonghoonpark/EEVE-Korean-Instruct-7B-v2.0-Preview-Q4_K_M-GGUF --hf-file eeve-korean-instruct-7b-v2.0-preview-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo dev-jonghoonpark/EEVE-Korean-Instruct-7B-v2.0-Preview-Q4_K_M-GGUF --hf-file eeve-korean-instruct-7b-v2.0-preview-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo dev-jonghoonpark/EEVE-Korean-Instruct-7B-v2.0-Preview-Q4_K_M-GGUF --hf-file eeve-korean-instruct-7b-v2.0-preview-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo dev-jonghoonpark/EEVE-Korean-Instruct-7B-v2.0-Preview-Q4_K_M-GGUF --hf-file eeve-korean-instruct-7b-v2.0-preview-q4_k_m.gguf -c 2048
```
|
RichardErkhov/pochlebiacz_-_Svama-8b-support-v2.5-e5-gguf
|
RichardErkhov
| 2025-06-08T08:01:58Z | 39 | 0 | null |
[
"gguf",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-06-08T06:55:54Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Svama-8b-support-v2.5-e5 - GGUF
- Model creator: https://huggingface.co/pochlebiacz/
- Original model: https://huggingface.co/pochlebiacz/Svama-8b-support-v2.5-e5/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Svama-8b-support-v2.5-e5.Q2_K.gguf](https://huggingface.co/RichardErkhov/pochlebiacz_-_Svama-8b-support-v2.5-e5-gguf/blob/main/Svama-8b-support-v2.5-e5.Q2_K.gguf) | Q2_K | 2.96GB |
| [Svama-8b-support-v2.5-e5.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/pochlebiacz_-_Svama-8b-support-v2.5-e5-gguf/blob/main/Svama-8b-support-v2.5-e5.IQ3_XS.gguf) | IQ3_XS | 3.28GB |
| [Svama-8b-support-v2.5-e5.IQ3_S.gguf](https://huggingface.co/RichardErkhov/pochlebiacz_-_Svama-8b-support-v2.5-e5-gguf/blob/main/Svama-8b-support-v2.5-e5.IQ3_S.gguf) | IQ3_S | 3.43GB |
| [Svama-8b-support-v2.5-e5.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/pochlebiacz_-_Svama-8b-support-v2.5-e5-gguf/blob/main/Svama-8b-support-v2.5-e5.Q3_K_S.gguf) | Q3_K_S | 3.41GB |
| [Svama-8b-support-v2.5-e5.IQ3_M.gguf](https://huggingface.co/RichardErkhov/pochlebiacz_-_Svama-8b-support-v2.5-e5-gguf/blob/main/Svama-8b-support-v2.5-e5.IQ3_M.gguf) | IQ3_M | 3.52GB |
| [Svama-8b-support-v2.5-e5.Q3_K.gguf](https://huggingface.co/RichardErkhov/pochlebiacz_-_Svama-8b-support-v2.5-e5-gguf/blob/main/Svama-8b-support-v2.5-e5.Q3_K.gguf) | Q3_K | 3.74GB |
| [Svama-8b-support-v2.5-e5.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/pochlebiacz_-_Svama-8b-support-v2.5-e5-gguf/blob/main/Svama-8b-support-v2.5-e5.Q3_K_M.gguf) | Q3_K_M | 3.74GB |
| [Svama-8b-support-v2.5-e5.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/pochlebiacz_-_Svama-8b-support-v2.5-e5-gguf/blob/main/Svama-8b-support-v2.5-e5.Q3_K_L.gguf) | Q3_K_L | 4.03GB |
| [Svama-8b-support-v2.5-e5.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/pochlebiacz_-_Svama-8b-support-v2.5-e5-gguf/blob/main/Svama-8b-support-v2.5-e5.IQ4_XS.gguf) | IQ4_XS | 4.18GB |
| [Svama-8b-support-v2.5-e5.Q4_0.gguf](https://huggingface.co/RichardErkhov/pochlebiacz_-_Svama-8b-support-v2.5-e5-gguf/blob/main/Svama-8b-support-v2.5-e5.Q4_0.gguf) | Q4_0 | 4.34GB |
| [Svama-8b-support-v2.5-e5.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/pochlebiacz_-_Svama-8b-support-v2.5-e5-gguf/blob/main/Svama-8b-support-v2.5-e5.IQ4_NL.gguf) | IQ4_NL | 4.38GB |
| [Svama-8b-support-v2.5-e5.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/pochlebiacz_-_Svama-8b-support-v2.5-e5-gguf/blob/main/Svama-8b-support-v2.5-e5.Q4_K_S.gguf) | Q4_K_S | 4.37GB |
| [Svama-8b-support-v2.5-e5.Q4_K.gguf](https://huggingface.co/RichardErkhov/pochlebiacz_-_Svama-8b-support-v2.5-e5-gguf/blob/main/Svama-8b-support-v2.5-e5.Q4_K.gguf) | Q4_K | 4.58GB |
| [Svama-8b-support-v2.5-e5.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/pochlebiacz_-_Svama-8b-support-v2.5-e5-gguf/blob/main/Svama-8b-support-v2.5-e5.Q4_K_M.gguf) | Q4_K_M | 4.58GB |
| [Svama-8b-support-v2.5-e5.Q4_1.gguf](https://huggingface.co/RichardErkhov/pochlebiacz_-_Svama-8b-support-v2.5-e5-gguf/blob/main/Svama-8b-support-v2.5-e5.Q4_1.gguf) | Q4_1 | 4.78GB |
| [Svama-8b-support-v2.5-e5.Q5_0.gguf](https://huggingface.co/RichardErkhov/pochlebiacz_-_Svama-8b-support-v2.5-e5-gguf/blob/main/Svama-8b-support-v2.5-e5.Q5_0.gguf) | Q5_0 | 5.21GB |
| [Svama-8b-support-v2.5-e5.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/pochlebiacz_-_Svama-8b-support-v2.5-e5-gguf/blob/main/Svama-8b-support-v2.5-e5.Q5_K_S.gguf) | Q5_K_S | 5.21GB |
| [Svama-8b-support-v2.5-e5.Q5_K.gguf](https://huggingface.co/RichardErkhov/pochlebiacz_-_Svama-8b-support-v2.5-e5-gguf/blob/main/Svama-8b-support-v2.5-e5.Q5_K.gguf) | Q5_K | 5.34GB |
| [Svama-8b-support-v2.5-e5.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/pochlebiacz_-_Svama-8b-support-v2.5-e5-gguf/blob/main/Svama-8b-support-v2.5-e5.Q5_K_M.gguf) | Q5_K_M | 5.34GB |
| [Svama-8b-support-v2.5-e5.Q5_1.gguf](https://huggingface.co/RichardErkhov/pochlebiacz_-_Svama-8b-support-v2.5-e5-gguf/blob/main/Svama-8b-support-v2.5-e5.Q5_1.gguf) | Q5_1 | 5.65GB |
| [Svama-8b-support-v2.5-e5.Q6_K.gguf](https://huggingface.co/RichardErkhov/pochlebiacz_-_Svama-8b-support-v2.5-e5-gguf/blob/main/Svama-8b-support-v2.5-e5.Q6_K.gguf) | Q6_K | 6.14GB |
| [Svama-8b-support-v2.5-e5.Q8_0.gguf](https://huggingface.co/RichardErkhov/pochlebiacz_-_Svama-8b-support-v2.5-e5-gguf/blob/main/Svama-8b-support-v2.5-e5.Q8_0.gguf) | Q8_0 | 7.95GB |
Original model description:
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
amin/medical_embedding_1
|
amin
| 2025-06-08T07:57:56Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:16156",
"loss:ContrastiveLoss",
"arxiv:1908.10084",
"base_model:abhinand/MedEmbed-small-v0.1",
"base_model:finetune:abhinand/MedEmbed-small-v0.1",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-06-08T07:54:11Z |
---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:16156
- loss:ContrastiveLoss
base_model: abhinand/MedEmbed-small-v0.1
widget:
- source_sentence: What are the symptoms of Anomalies?
sentences:
- 13 weeks menstrual age, there are three ossification centers in vertebrae C1 through
L3,14 (Fig. 35.2). Neural arch ossification begins as a small focus at the base
of the transverse process and extends simultaneously into the pedicle anteriorly
and into the lamina posteriorly (Fig. 35.3). Ultrasound evaluation for spina bifida
usually occurs between 16 and 22 weeks gestation. By 16 weeks, there is enough
ossification in the neural arches to assess for spina bifida to level L5,15
by 19 weeks to level S1, and by 22 weeks to level S2 (Figs. 35.4 and 35.5). In
some fetuses, there may be enough neural arch ossification to assess for spina
bifida before these gestational ages. Braithwaite et al.16 assessed the fetal
anatomy at 12 to 13 weeks gestation by a combination of transabdominal and transvaginal
sonography and reported successful examination of the vertebrae and overlying
skin in both the transverse and the coronal plane in all cases. Others have reported
successful prenatal diagnosis of spina bifida at 12 to 14 weeks gestation on the
basis of abnormal cranial findings.17-19 They caution that although the characteristic
cranial findings may be present at 11 to 14 weeks, the prevalence of these findings
in the first trimester remains to be determined (Table 35.2). Furthermore, closed
NTDs are less likely to be associated with abnormal cranial findings and therefore
are more difficult to detect in the first trimester. Normal Position of the Spinal
Cord For fetuses at 19 to 33 weeks gestation, the conus medullaris is normally
situated at level L2-L3 or higher (Fig. 35.6). Level L3 is taken to be indeterminate
and L3-L4 or lower as abnormal.20 For those fetuses with tethered cord, the position
of the conus FIG. 35.2 Spine Ossification at 11 Weeks + 4 Days
- Dorsiflexion of foot at ankle joint; inversion of foot; dynamic support of medial
arch of foot Extensor hallucis longus Middle one-half of medial surface of fibula
and adjacent surface of interosseous membrane Dorsal surface of base of distal
phalanx of great toe Deep fibular nerve (L5, S1) Extension of great toe and dorsiflexion
of foot Extensor digitorum longus Proximal one-half of medial surface of fibula
and related surface of lateral tibial condyle Bases of distal and middle phalanges
of lateral four toes Deep fibular nerve (L5, S1) Extension of lateral four toes
and dorsiflexion of foot Fibularis tertius Distal part of medial surface of fibula
Dorsomedial surface of base of metatarsal 5 Deep fibular nerve (L5, S1) Dorsiflexion
and eversion of foot Modified from Drake, RL, Grays Anatomy for Students, 3rd
ed, 2015, Churchill Livingstone, Elsevier. 7 137 Surface anatomy the soleus and
the plantaris muscles (Fig. 7.14 and Table 7.6). These superficial muscles plantarflex
the foot at the ankle joint. The gastrocnemius muscle is the most superficial
muscle of the posterior calf and has two heads, the medial and lateral. Standing
on tip toes makes the two heads of gastrocnemius more prominent and palpable.
Distally, they converge to form the calcaneal tendon, or Achilles tendon, which
can be observed toward its attachment on the calcaneus. The soleus muscle lies
deep to the gas- trocnemius and also inserts into the calcaneal tendon. The soleus
muscle can be palpated either side of the calcaneal tendon. The plantaris muscle
is a small vestigial muscle with a long tendinous portion that passes between
the soleus muscle and the gastrocnemius muscle. It cannot be palpated. The deep
group of posterior compartment consists of the popliteus, flexor hallucis longus,
flexor digitorum longus and tibialis posterior muscles (Table 7.6). Although these
muscles are not palpable, their
- recessive polycystic kidneys, autosomal dominant polycystic kidneys, Jeune asphyxiating
thoracic dystrophy, Ellis-van Creveld syndrome, and others.115 Joubert Syndrome
Joubert syndrome and related disorders (JSRD) have the key feature of molar tooth
sign visible on MRI. The molar tooth appearance results from hypoplasia of the
cerebellar vermis, horizontal thick elongated cerebral peduncles, and deep
interpeduncular fossa at upper pons; on axial MRI of the brainstem, these features
look like a molar tooth. This sign is used as the diagnostic test in children.
JSRD is clinically characterized by hypotonia, ataxia, psychomotor delay, irregular
breathing, and abnormal eye movements and has an incidence of about 1 per 80,000
pregnancies. Different combinations of ciliary gene mutations can result in
primary Joubert syndrome, and related disorders have variable abnormalities of
the neurons, eye, renal tubules, and bile ducts and polydactyly.116 On ultrasound
the molar tooth sign findings of vermian hypoplasia, thickened cerebral peduncles,
and interpeduncular notch may be visible by 20 weeks and confirmed by MRI if needed.117
Additional cerebral imaging findings can include abnormalities of the corpus callosum
and neuronal migrational abnormalities, Dandy-Walker malformation (DWM), and encephalocele
as well as abnormalities in somatic structures. If the mutation is known (about
50%), early diagnosis is possible with chorionic villus sampling (CVS). Prognosis
is generally poor and related to extent of breathing and feeding problems in the
short term and renal and hepatic complications in the long term. Meckel-Gruber
Syndrome Meckel-Gruber syndrome is likely the most common syndromic abnormality
of the CNS and is characterized by occipital encephalocele, enlarged dysplastic
kidneys, hepatic duct proliferation, polydactyly, posterior fossa abnormalities,
and craniofacial and heart defects and has features that overlap with JSRD. Incidence
is 1 per 13,000 to 140,000 live births. It is a lethal autosomal recessive disorder
associated with mutations in several ciliary genes.
- source_sentence: What is the prognosis for Hydrops?
sentences:
- cystic C-shaped structure typical of hydrosalpinx. The component
of the rare fallopian tube carcinoma is usually larger and less numerous than
multiple small nodules seen in hydrosalpinx due to thickened endosalpingeal folds.
Clinical significance. Usually asymptomatic. Can present with pelvic pain or infertility.
It can be treated by lysis of adhesions and tuboplasty. Peritoneal Inclusion Cyst
Also referred to as peritoneal pseudocyst, this is a benign non-neoplastic cystic
pelvic mass typically seen in premenopausal women with functioning ovaries and
pelvic adhesions impairing the absorption of ovarian fluid secreted during ovulation.
Almost always, there is a history of pelvic surgery, PID or endometriosis. Figure
8.21 Beads on a string sign of hydrosalpinx (or cog-wheel appearance) hyperechoic,
short, round projections within the tube (arrows). Figure 8.22 Waist sign of hydrosalpinx.
Indentations on the opposite of the tubular cystic structure forming a waist (arrows)
are typical of hydrosalpinx. Ultrasound features. Unilocular or multilocular
cystic mass conforming to
the contours of the peritoneal cavity with a normal-appearing ovary suspended
within the mass, either centrally spider in web appearance or at the periphery
[25] (Figure 8.24). Septations, when present, are usually thin and smooth but
may be thick and show colour ο¬ow. No solid elements are present. The ovarian contour
may be distorted by adhesions. In contrast to septae within true ovarian cysts,
the septae in pseudocysts generally move and flap when the cystic area is prodded
by the transvaginal ultrasound probe. This has been described as the flapping sail
sign. Tips and Tricks Key to the recognition of a peritoneal cyst is the demonstration
of a normal ovary within or along the periphery of a cystic mass. Figure 8.24
Peritoneal inclusion cyst in a 45-year-old
- 'born in military hospitals to Gulf War veterans. Teratology 56:244251, 1997.
907. Herwig MC, Gembruch U, Born M, et al: Preterm diagnosis of choristoma and
choroidal coloboma in Goldenhars syndrome. Pediatr Dev Pathol 14:322326, 2011.
908. Ghi T, Contro E, Carletti A, et al: Prenatal sonographic imaging of Goldenhar
syndrome associated with cystic eye. Prenat Diagn 28:362 363, 2008. Ob/Gyne Books
Full CHAPTER 11 Fetal Musculoskeletal System 345 941. Christianson C, Huff D,
McPherson E: Limb deformations in oligohydramnios sequence: effects of gestational
age and duration of oligohydramnios. Am J Med Genet 86:430433, 1999. 942. Yamamoto
H: A clinical, genetic and epidemiologic study of congenital club foot. Jinrui
Idengaku Zasshi 24:3744, 1979. 943. Nemec U, Nemec SF, Kasprian G, et al: Clubfeet
and associated abnormalities on fetal magnetic resonance imaging. Prenat Diagn
32(9):822828, 2012. 944. Shipp TD, Benacerraf BR: The significance of prenatally
identified isolated clubfoot: is amniocentesis indicated Am J Obstet Gynecol 178:600602,
1998. 945. Malone FD, Marino T, Bianchi DW, et al: Isolated clubfoot diagnosed
prenatally: is karyotyping indicated Obstet Gynecol 95:437440, 2000. 935. Zelop
C, Benacerraf B: Sonographic diagnosis of fetal upper extremity dysmorphology:
significance and outcome. Ultrasound Obstet Gynecol 8:391396, 1996. 936. Dicke
JM, Piper SL, Goldfarb CA: The utility of ultrasound for the detection of fetal
limb abnormalitiesa 20-year single-center experience. Prenat Diagn 35:348353,
2015. 937. Fahy MJ, Hall JG: A retrospective study of pregnancy complications
among 828 cases of arthrogryposis. Genet Couns 1(1):311, 1990. 938. Bacino CA,
Hecht JT: Etiopathogenesis of equinovarus foot malformations. Eur J Med Genet
57:473479, 2014. 939. Mammen L, Benson CB: Outcome of fetuses with clubfeet diagnosed
by prenatal sonography. J Ultrasound Med 23:497500, 2004. 940. Bar-On E, Mashiach
R, Inbar O, et al: Prenatal ultrasound diagnosis of club foot: outcome and recommendations
for counselling and follow-up. J Bone Joint'
- 'to rise, the major limitation for expanding transplant programs is the shortage
of suitable donor kidneys. This organ shortage has resulted in an increasing number
of renal transplantations from living related donors. These donors may include
family members or close friends with a long-standing relationship with the recipient.
The average life expectancy for a cadaveric allograft is 7 to 10 years, whereas
that for a live donor allograft is 15 to 20 years.44 Regardless of whether a cadaveric
or live donor allograft is used, the cost-benefit ratio of a functioning successful
transplant far outweighs that of a patient with persistent CRF, so multiple health
care resources are targeted to ensure high rates of success. Ultrasound is the
most valuable noninvasive imaging modality in monitoring the renal transplant.
Surgical Technique Detailed sonography of the renal transplant requires knowledge
of the surgical procedure used in most institutions as well as the postsurgical
anatomic relationships. The right or left lower quadrant is selected for the incision,
based on the patients prior surgical history and the surgeons preference. Usually,
the right lower quadrant is selected because the right iliac vein is more superficial
and horizontal on this side of the pelvis, facilitating creation of a vascular
anastomosis.45,46 The type of arterial anastomosis used depends on whether the
allograft is cadaveric or living related and on the number essential to facilitate
early resection, ablation, or chemotherapy26,43 (Fig. 18.27). As in the general
population, transplant recipients can develop any type of primary or secondary
neoplasm within the liver. RENAL TRANSPLANTATION Transplantation is the treatment
of choice for many patients with chronic renal failure (CRF) severe enough to
warrant FIG. 18.18 Inferior Vena Cava (IVC) Infrahepatic Anastomosis: Normal and
Abnormal in Two Patients. Sagittal sonograms of IVC show (A) a normal caliber
at the anastomosis (arrows) and'
- source_sentence: What are the risk factors for Alternatively?
sentences:
- to 2% of cesarean deliveries.216 The right gonadal vein is more commonly involved,
likely because of increased pressure on the right gonadal vein. The left ovarian
vein is felt to be protected owing to retrograde flow from the left renal vein.13
Ovarian vein thrombosis will appear on ultrasound images as a tubular or serpentine
avascular structure, often anechoic to hypoechoic, in the region of the right
adnexa and psoas muscle corresponding to the thrombosed vein. Extension into the
inferior vena cava may occur. However, these structures may be difficult to visualize
at sonography, FIG 29-39 Spectrum of ultrasound findings of adenomyosis. A, Transabdominal
sagittal ultrasound image demonstrates enlarged, heterogeneous, globular uterus
with asymmetric thickening of the posterior myo- metrium. B, Transvaginal sagittal
scan from a second patient demonstrates asymmetric thickening of the anterior
myometrium, with markedly heterogeneous echotexture, hypoechoic linear striations,
and shadow- ing. Endometrial thickness is measured (calipers). The uterus is retroverted.
C, Transvaginal scan from a third patient demonstrates heterogeneity of the posterior
myometrium with small myometrial cysts (white arrows) and linear striations. Note
linear rays of shadowing in a comb-like or Venetian blind pattern. The region
of abnormal myometrial echotexture is posterior to the endometrium (black arrow).
D, Larger myometrial cyst (arrow and calipers), with asymmetric thickening, heterogeneity
of the myometrium, and comb-like shadowing. E, Color Doppler sonogram demonstrates
diffuse hypervascularity of the asymmetrically thick- ened posterior myometrium.
F, Sagittal T2-weighted magnetic resonance image of a different patient with diffuse
adenomyosis demonstrating marked diffuse thickening of the junctional zone (arrow)
and scattered tiny T2-weighted hyperintense foci. B A D F C E Ob/Gyne Books Full
908 SECTION II Gynecology of suspected appendicitis in the pregnant patient yielded
a sensitivity of 90.5%, specificity of 98.6%, PPV of 90.4%, and NPV of 99.5%224
(Fig. 29-44). At ultrasound examination, the
- measured; midbody and fundal endometrium; and right and left ovaries with and
without the maximum width measured (Fig. 26-8). Additionally, any disease or variant
of normal must be assessed and appropriate additional images recorded. Most laboratories
now include cine clip acquisition in addition to static images. Color Doppler,
power Doppler, and pulsed Doppler are often added to the protocol depend- ing
upon the clinical situation and abnormality demonstrated on gray- scale imaging.
The use of three-dimensional (3D) sonography also has become standard in many
laboratories (Fig. 26-9). By capturing a volume of data, 3D imaging permits a
display of any desired plane through the uterus, cervix, ovaries, and adnexa and
optimizes assess- ment of the entire endometrial canal. 3D imaging is particularly
useful for the assessment of intrauterine device (IUD) positioning and sub- mucosal
myomas, as well as uterine fundal contour and morphologic appearance in the setting
of suspected congenital anomalies1,12-14 (Fig. 26-10). PELVIC ANATOMY The pelvis,
so-called because of its resemblance to a basin, is divided into two structurally
continuous compartments, the true (or lesser) pelvis and the false (or greater)
pelvis, by an oblique plane passing from the sacral promontory, the arcuate and
pectineal lines, and the superior margin of the symphysis pubis (Fig. 26-11).
The circumfer- ence of this plane is called the linea terminalis, or pelvic brim.15
The true pelvis is the lower portion and is bounded anteriorly by the pubis and
pubic rami, posteriorly by the sacrum and coccyx, laterally by the fused ilium
and ischium, and inferiorly by the muscles of the pelvic floor. The false pelvis
is bounded laterally by the flanged portions of the iliac bones, the base of the
sacrum posteriorly, and the abdominal wall anteriorly and laterally. In the absence
of masses in the nongravid patient, the uterus, ovaries, adnexa,
- 'wall is very thin (squamous cell carcinoma). C A B Other Findings Air bronchograms
and bubble-like lucencies or pseudo- cavitation may be seen within lung cancers,
in particular with adenocarcinoma.36 Occasionally, dilated mucus- filled bronchi
(bronchocele, mucocele, mucoid impac- tion) are seen distal to a carcinoma obstructing
a segmental or subsegmental bronchus. Ground-glass attenuation may be seen as
a component of nodules and is associated with a greater risk of malignancy than
that of purely solid nodules. It is more commonly associated with adenocarcinoma,70
which may present as a purely ground-glass opacity. Central Tumours The cardinal
imaging signs of a central tumour are collapse/consolidation of the lung beyond
the tumour and the presence of hilar enlargement, signs that may be seen in isolation
or in conjunction with one another. 328 SECTION B The Chest and Cardiovascular
System FIGURE 15-16 Tumour calcification. Large bronchial carcinoma invading the
mediastinum demonstrates coarse and cloud-like calcification. FIGURE 15-17 Lobar
collapse. The tumour in the bronchus intermedius is causing partial middle and
lower lobe collapse. FIGURE 15-14 CT showing a cavitating squamous cell carci-
noma in the left lung. The wall of the cavity is variable in thickness. FIGURE
15-15 Calcified infectious granuloma engulfed by lung cancer. CT shows a cluster
of densely calcified small nodules almost at the centre of a small carcinoma. Collapse/Consolidation
in Association with Central Tumours Obstruction of a major bronchus often leads
to a combi- nation of atelectasis and retention of secretions with con- sequent
pulmonary opacity, but collateral air drift may partially or completely prevent
these postobstructive changes. Secondary infection may occur beyond the obstruction.
The following features suggest that pneumonia is sec- ondary to an obstructing
neoplasm: 1. The shape of the collapsed or consolidated lobe may be altered because
of the bulk of the underly- ing tumour.'
- source_sentence: What are the complications of San Francisco?
sentences:
- 'the effect of con- comitant problems such as partial volume or movement artefacts.
Combined Protocols: One-Stop-Shop Procedure. Computed tomography (CT) venography
has been con- sidered as a part of a one-stop-shop procedure in order to diagnose
VTE.58 After administering one bolus of con- trast medium, first the pulmonary
arteries are investigated followed by additional late-phase imaging of the deep
venous system from the calves up to the inferior vena cava to detect DVT. Although
this combined procedure is feasible and has the advantage of detecting DVT in
pelvic veins and IVC, which is not possible with CUS, recent studies have shown
that in comparison with CTPA alone, this combined technique results in only limited
increase in sensitivity with a comparable specificity.59 A major drawback of CT
venography is the significant increase of radiation, which at this moment does
not justify its routine use in patients with suspected PE (Fig. 23-22). Alternatively,
CTA with ECG-gating can be performed in patients presenting with acute chest pain
without sig- nificant increase in radiation dose. During one data acqui- sition,
information can be obtained on the most important vascular diseases causing acute
chest pain: acute coronary syndrome, aortic dissection and acute pulmonary embo-
lism.60 The use of dual-source CT or systems with high numbers of detector rows
may overcome the initial limita- tions of ECG-gated CTPA, providing faster acquisition
times, better image quality in patients with abnormal cardiac rhythms, and lower
radiation dose.61 The downside of such a protocol is the increase in complexity
both for the technician and the reader, with an increase in post- processing and
interpretation time. CTPA During Pregnancy. During pregnancy and puerperium, the
incidence of VTE is two- to fourfold higher and is one of the most important causes
of mater- nal mortality. As diagnosing DVT in patients with'
- excellent results. High- frequency, small-footprint transducers should be used
in small children. Most ultrasound machines have pediatric settings that are a
good starting point for many examinations. Sedation is almost never required,
and distraction techniques such as showing movies are very effective. At all ages,
the ultrasound technologist is probably the most important factor in obtaining
high-quality images. Find a technologist who works well with children, learn from
him or her, and encourage him or her to teach others. Nuclear Medicine The practice
of pediatric nuclear medicine is highly variable between institutions in terms
of the type and number of procedures performed. Basic nuclear medicine studies
including diuretic renal scans (for urinary tract obstruction), hepatobiliary
(e.g., HIDA) scans (for biliary atresia), and cystograms (for vesicoureteral reflux)
are still the backbone of a pediatric nuclear medicine practice. 18-F FDG-PET
is increasingly being used, however, for the staging and follow-up of pediatric
malignancies such as lymphoma and sarcomas (bone and soft tissue). The 123I-MIBG
scan is an examination that is almost unique to pediatrics and is still extensively
used for the diagnosis, staging, and follow-up of neuroblastoma. Details of nuclear
medicine imaging technique and protocols are beyond the scope of this text but
the reader should be aware of the North American Consensus Guidelines for Pediatric
Administered Radiopharmaceutical Activities which provide recommended activities
for commonly used radiopharmaceuticals. This can be an invaluable resource when
you are in practice and are asked to do a nuclear medicine study on a child. MRI
MRI is a dominant imaging modality in pediatrics, in part due to the fact that
images can be obtained without use of ionizing radiation. This benefit has very
likely been given greater weight than it deserves, but MRI has other advantages
as well, including high soft tissue contrast and the ability
- 'sagittal image of the cervical spine (different patient than in A). Two enhancing
lesions are present in the anterior dural sac (arrowheads), surrounded by cerebrospinal
fluid. These were neurofi- bromas. C, T1 contrast-enhanced axial image of the
cervical spine (same patient as in B). A neurofibroma is present in the intradural
space (solid arrow) and extends into the epidural space in the neural foramen
(open arrow). This is a dumbbell neurofibroma that is intradural and extradural.
Nerve sheath tumors and meningiomas constitute 90% of intradural lesions. Conus/filum
terminale/cauda equina region (include nerve sheath tumor and metastasis in
differential for this area). 314 SPINE Figure 13-72 Intradural space: meningioma.
T1 contrast-enhanced sag- ittal image of the cervical spine. There is an enhancing
mass in the anterior intradural space with a broad-based attachment to the anterior
dura. This is a typical appearance for a meningioma. Figure 13-73 Intradural space:
paraganglioma. T1 contrast-enhanced sagittal image of the lumbar spine. There
is a large mass involving the cauda equina that ο¬ lls the dural sac and erodes
the posterior L4 vertebral body, indi- cating it is a long-standing lesion. The
center of the mass is low signal from necrosis or hemorrhage. This was a paraganglioma
at surgery; the differential diagnosis should include ependymoma, metastasis,
and nerve sheath tumor. cauda equina or conus, or acquired from lumbar punctures,
in which case they occur in the lower lumbar region. MRI features vary and are
nonspecific, but generally show a mass of low signal intensity on T1W and high
signal intensity on T2W images (higher than spinal fluid). Dermoid cysts are congenital
tumors that arise from epithelial inclusions in the neural groove during develop-
ment. They may be located either intradurally or within the cord in equal numbers.
They usually'
- source_sentence: What are the symptoms of Presti?
sentences:
- 'for one concept or many concepts in one name Available from https://www.phgfoundation.org/documents/311_1358522182.pdf.
Accessed September 29, 2017. 2. Atkinson AJ, Colburn WA, DeGruttola VG, et al.
Biomarkers and surrogate endpoints: Preferred definitions and conceptual framework.
Clin Pharmacol Ther 2001;69:8995. 3. Falconi A, Lopes G, Parker JL. Biomarkers
and receptor targeted therapies reduce clinical trial risk in nonsmall-cell lung
cancer. J Thorac Oncol 2014;9:163169. 4. Thompson RH, Kurta JM, Kaag M, et al.
Tumor size is associated with malignant potential in renal cell carcinoma cases.
J Urol 2009;181:20332036. 5. Lim C, Sung M, Shepherd FA, et al. Patients with
advanced nonsmall cell lung cancer: Are research biopsies a barrier to participation
in clinical trials J Thorac Oncol 2015;11:7984. 6. Executive Summary. Interim
analysis of the NCI-MATCH Trial. 2 0 1 6 . Available from https://dctd.cancer.gov/majorinitiatives/NCI-MATCH_Interim_Analysis_Executive_Summary.pdf.
Accessed September 29, 2017. 7. Tam AL, Lim HJ, Wistuba II, et al. Image-guided
biopsy in the era of personalized cancer care: Proceedings from the society of
interventional radiology research consensus panel. J Vasc Interv Radiol 2016;27:819.
8. Hasanovic A, Rekhtman N, Sigel CS, Moreira AL. Advances in fine needle aspiration
cytology for the diagnosis of pulmonary carcinoma. Patholog Res Int 2011;2011:
Article ID 897292, 7 pages. 9. Patel IJ, Davidson JC, Nikolic B, et al. Consensus
guidelines for periprocedural management of coagulation status and hemostasis
risk in percutaneous image-guided interventions. J Vasc Interv Radiol 2012;23:727736.
10. Ridout G, de la Motte S, Niemczyk S, et al. Effect of renal function on edoxaban
pharmacokinetics and on population PK/PK-PD model. J Clin Pharmacol 2009;49:1091130.
11. Baron TH, Kamath PS, McBane RD. Management of antithrombotic therapy in patients
undergoing invasive procedures. New Engl J Med 2013;368: 21132124. 12. Nutescu
EA. Oral anticoagulant therapies: Balancing the risks. Am J Health Syst Pharm
2013;70(10 Suppl 1):S3S11. 1 3 . Fleisher LA,'
- diaphragmatic hernia has been reported.67,77,78 Although these malformations may
be detected in the first trimester, visualization will depend on size, and continued
growth may aid detection in the second trimester. In a randomized trial of routine
12-week anatomic survey versus routine 18-week anatomic survey, Saltvedt and colleagues
detected 0% of the three diaphragmatic hernias in the 12-week group but 50% of
four diaphragmatic hernias in the 18-week group, but this difference was not statistically
signifi- cant because of the overall low prevalence of congenital diaphragmatic
hernia in the cohort (7/36,108).78 Cardiac Disease Congenital heart disease is
one of the most common severe congenital abnormalities, with a prevalence of 8/1000
live births.22,72,79,80 Over the past 2 decades, imaging of the fetal heart in
the first trimester has evolved considerably to include full echocardiographic
studies, with several authors reporting diagnosis of congenital heart disease
in the first trimester22,30,79-81 (Fig. 5-23). In a retrospective study of 2165
sin- gleton pregnancies that underwent fetal echocardiogram from 1997 to 2003
Smrcek and colleagues reported the frequency of congenital heart malformations
diagnosed between 11 and 13 weeks, with atrioven- tricular septal defects being
the most frequent by about 4.5-fold (18/29), followed by ventricular septal defect
(4/29), and tetralogy of Fallot (3/29).28 Additionally, ectopia cordis, hypoplastic
left-sided and right-sided heart syndrome, double outlet right ventricle, transposi-
tion of the great arteries, absence of the pulmonary valves, aortic ste- nosis,
aortic coarctation, left and right atrial isomerism, pulmonary stenosis, truncus
arteriosus, tricuspid atresia, and total anomalous pul- monary venous return have
all been reported as either isolated findings or in combination as complex congenital
heart disease.22,28,81-83 The majority of studies evaluating first trimester fetal
cardiac evaluation have included a selected population referred for specialized
fetal echo- cardiogram in which the indication most commonly was increased nuchal
translucency but
- 'the fetal chest, a four-chamber view of the heart is imaged. Note that the apex
of the heart is pointing toward the left side of the fetal chest (Figs. 6.2 and
6.4). Determining that the stomach, descending aorta, and cardiac apex are located
on the fetal left side and the inferior vena cava is located on the right side
establishes normal visceral situs (Figs. 6.1 and 6.3). Figure 6.1: Schematic drawing
of a cross section of the upper abdomen for the assessment of the abdominal situs.
The vertical line divides this plane into right and left. The right-sided structures
include the gallbladder, the portal sinus, a large part of the liver, and inferior
vena cava (IVC). The left-sided structures include the descending aorta, the stomach,
and the spleen. Figure 6.3 is the corresponding ultrasound plane. Figure 6.2:
Determining fetal situs in longitudinal lie: In A, the fetus is in a cephalic
presentation with the fetal spine close to the left uterine wall, resulting in
the right side being anterior and left side posterior. In B, the fetus is in a
cephalic presentation with the fetal spine close to the right uterine wall, resulting
in the left side being anterior and right side posterior. In C, the fetus is in
a breech presentation with the fetal spine close to the left uterine wall, resulting
in the left side being anterior and right side posterior. In D, the fetus is in
a breech presentation with the fetal spine close to the right uterine wall, resulting
in the right side being anterior and left side posterior. Note the corresponding
transverse ultrasound planes of the chest and abdomen. Blue arrows point to fetal
stomach, red arrows to the apex of the heart, and yellow arrows to the descending
aorta. See text for details. Several'
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- pearson_cosine
- spearman_cosine
model-index:
- name: SentenceTransformer based on abhinand/MedEmbed-small-v0.1
results:
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: Unknown
type: unknown
metrics:
- type: pearson_cosine
value: 0.8767120066593315
name: Pearson Cosine
- type: spearman_cosine
value: 0.6634507543410023
name: Spearman Cosine
---
# SentenceTransformer based on abhinand/MedEmbed-small-v0.1
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [abhinand/MedEmbed-small-v0.1](https://huggingface.co/abhinand/MedEmbed-small-v0.1). It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [abhinand/MedEmbed-small-v0.1](https://huggingface.co/abhinand/MedEmbed-small-v0.1) <!-- at revision 40a5850d046cfdb56154e332b4d7099b63e8d50e -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 384 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
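The pooling and normalization stages above can be illustrated numerically. This is a toy sketch with 6-dimensional vectors standing in for the model's 384-dimensional token embeddings; it is not actual model output.

```python
import numpy as np

# Toy stand-in for the Transformer module's output: 4 token embeddings of
# dimension 6 (the real model produces one 384-dimensional vector per token).
token_embeddings = np.array([
    [1.0, 2.0, 0.0, 1.0, 0.0, 2.0],  # [CLS] token
    [0.5, 0.1, 0.3, 0.0, 0.2, 0.1],
    [0.2, 0.4, 0.9, 0.3, 0.1, 0.0],
    [0.0, 0.3, 0.2, 0.8, 0.4, 0.1],
])

# (1) Pooling with pooling_mode_cls_token=True keeps only the first token's vector.
pooled = token_embeddings[0]

# (2) Normalize scales the vector to unit L2 norm, so cosine similarity between
# two sentence embeddings reduces to a plain dot product.
sentence_embedding = pooled / np.linalg.norm(pooled)

print(np.linalg.norm(sentence_embedding))  # ~1.0
```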
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
'What are the symptoms of Presti?',
'the fetal chest, a four-chamber view of the heart is imaged. Note that the apex of the heart is pointing toward the left side of the fetal chest (Figs. 6.2 and 6.4). Determining that the stomach, descending aorta, and cardiac apex are located on the fetal left side and the inferior vena cava is located on the right side establishes normal visceral situs (Figs. 6.1 and 6.3). Figure 6.1: Schematic drawing of a cross section of the upper abdomen for the assessment of the abdominal situs. The vertical line divides this plane into right and left. The right-sided structures include the gallbladder, the portal sinus, a large part of the liver, and inferior vena cava (IVC). The left-sided structures include the descending aorta, the stomach, and the spleen. Figure 6.3 is the corresponding ultrasound plane. Figure 6.2: Determining fetal situs in longitudinal lie: In A, the fetus is in a cephalic presentation with the fetal spine close to the left uterine wall, resulting in the right side being anterior and left side posterior. In B, the fetus is in a cephalic presentation with the fetal spine close to the right uterine wall, resulting in the left side being anterior and right side posterior. In C, the fetus is in a breech presentation with the fetal spine close to the left uterine wall, resulting in the left side being anterior and right side posterior. In D, the fetus is in a breech presentation with the fetal spine close to the right uterine wall, resulting in the right side being anterior and left side posterior. Note the corresponding transverse ultrasound planes of the chest and abdomen. Blue arrows point to fetal stomach, red arrows to the apex of the heart, and yellow arrows to the descending aorta. See text for details. Several',
    'diaphragmatic hernia has been reported.67,77,78 Although these malformations may be detected in the first trimester, visualization will depend on size, and continued growth may aid detection in the second trimester. In a randomized trial of routine 12-week anatomic survey versus routine 18-week anatomic survey, Saltvedt and colleagues detected 0% of the three diaphragmatic hernias in the 12-week group but 50% of four diaphragmatic hernias in the 18-week group, but this difference was not statistically signifi- cant because of the overall low prevalence of congenital diaphragmatic hernia in the cohort (7/36,108).78 Cardiac Disease Congenital heart disease is one of the most common severe congenital abnormalities, with a prevalence of 8/1000 live births.22,72,79,80 Over the past 2 decades, imaging of the fetal heart in the first trimester has evolved considerably to include full echocardiographic studies, with several authors reporting diagnosis of congenital heart disease in the first trimester22,30,79-81 (Fig. 5-23). In a retrospective study of 2165 sin- gleton pregnancies that underwent fetal echocardiogram from 1997 to 2003 Smrcek and colleagues reported the frequency of congenital heart malformations diagnosed between 11 and 13 weeks, with atrioven- tricular septal defects being the most frequent by about 4.5-fold (18/29), followed by ventricular septal defect (4/29), and tetralogy of Fallot (3/29).28 Additionally, ectopia cordis, hypoplastic left-sided and right-sided heart syndrome, double outlet right ventricle, transposi- tion of the great arteries, absence of the pulmonary valves, aortic ste- nosis, aortic coarctation, left and right atrial isomerism, pulmonary stenosis, truncus arteriosus, tricuspid atresia, and total anomalous pul- monary venous return have all been reported as either isolated findings or in combination as complex congenital heart disease.22,28,81-83 The majority of studies evaluating first trimester fetal cardiac evaluation have included a selected population referred for specialized fetal echo- cardiogram in which the indication most commonly was increased nuchal translucency but',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
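Because the embeddings are unit-normalized, a minimal semantic-search ranking on top of `model.encode` is just a matrix-vector product. The vectors below are hand-picked toy values standing in for real model output:

```python
import numpy as np

# Toy unit-length vectors standing in for model.encode() output
# (real embeddings would be 384-dimensional).
query = np.array([0.6, 0.8, 0.0])
corpus = np.array([
    [1.0, 0.0, 0.0],
    [0.0, 1.0, 0.0],
    [0.6, 0.8, 0.0],  # same direction as the query
])

scores = corpus @ query        # cosine similarities, shape (3,)
ranking = np.argsort(-scores)  # corpus indices, best match first
print(ranking)                 # [2 1 0]
```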
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Semantic Similarity
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| pearson_cosine | 0.8767 |
| **spearman_cosine** | **0.6635** |
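For reference, the two reported metrics can be sketched on toy data: Pearson correlates the raw scores, while Spearman correlates their ranks, so it only rewards getting the ordering right. The scores below are invented for illustration, not taken from this model's evaluation:

```python
import numpy as np

def pearson(x, y):
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc)))

def spearman(x, y):
    # Spearman = Pearson correlation of the ranks (toy data, no ties).
    rank = lambda v: np.argsort(np.argsort(v)).astype(float)
    return pearson(rank(np.asarray(x)), rank(np.asarray(y)))

gold = [0.1, 0.5, 0.9, 0.3]   # labelled similarity scores
pred = [0.2, 0.4, 0.8, 0.25]  # model cosine similarities
print(round(pearson(gold, pred), 4))
print(spearman(gold, pred))   # 1.0: identical ordering, perfect rank correlation
```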
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 16,156 training samples
* Columns: <code>sentence_0</code>, <code>sentence_1</code>, and <code>label</code>
* Approximate statistics based on the first 1000 samples:
| | sentence_0 | sentence_1 | label |
|:--------|:----------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|:---------------------------------------------------------------|
| type | string | string | float |
| details | <ul><li>min: 6 tokens</li><li>mean: 10.24 tokens</li><li>max: 25 tokens</li></ul> | <ul><li>min: 292 tokens</li><li>mean: 477.35 tokens</li><li>max: 512 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.15</li><li>max: 1.0</li></ul> |
* Samples:
| sentence_0 | sentence_1 | label |
|:-------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-----------------|
| <code>What are the symptoms of Obstet Gynecol?</code> | <code>Imaging Parameters The ACR practice parameters for the performance of ce-MR imaging were revised in 2013 and amended in 2014. Table 22.4 lists the performance guidelines by technical factor. For a facility to be accredited for breast MR, they have to follow the ACR guidelines, but specific protocols will vary across institutions. In addition, for ACR accreditation, they must be able to do mammographic correlation, breast US, and MR imagingguided procedures or have a relationship with a facility that can provide those services for them. MR imaging equipment specifications and performance must also meet all state and federal requirements. Patients are scanned in the prone position with the breasts hanging into a dedicated breast coil. Body coils should not be used for breast MR examinations. The breast should be imaged in axial or sagittal planes or a combination of the two. Core pulse sequences when evaluating the breast for cancer include a three-plane localizer, T1W images, T2W images...</code> | <code>0.0</code> |
| <code>What is diagnosis?</code> | <code>(CNS) organs should be per- formed for differential diagnosis among syndromes presenting with fetal skeletal anomalies. For example, congenital heart disease is a prominent feature of Ellisvan Creveld and Holt-Oram syndromes.252,253 Fetal Movements The normal pattern of fetal movements can be identified as early as 11 weeks of gestation through a detailed anatomic evaluation.254-257 Abnormal fetal movements can be observed in skeletal disorders involving joint contractures, neural muscular connective tissue disor- ders, amyoplasia (lack of muscle growth), vascular compromise, and anomalies of the spinal cord. The most frequent conditions associated with abnormal or absent fetal movements are fetal akinesia deforma- tion sequence (FADS) or Pena-Shokeir syndrome, and arthrogrypo- sis.258 In FADS there is a significant reduction in the amplitude, velocity, and complexity of fetal movements.259,260 In arthrogryposis, there is fixed position of the distal parts of the limbs and reduced ampl...</code> | <code>1.0</code> |
| <code>What are the risk factors for Diagnostic Ultra?</code> | <code>G, Bast C, Lenz F, Bollmann R. Doppler echocardiography of the main stems of the pulmonary arteries in the normal human fetus. Ultrasound Obstet Gynecol 1998;11: 1739 47. Roth P, Agnani G, Arbez Gindre F, Pauchard JY, Burguet A, Schaal JP, Maillet R. Use of energy color Doppler in visualizing fetal pulmonary vascularization to predict the absence of severe pulmonary hypoplasia Gynecol Obstet Invest 1998;46:1537 48. Chaoui R, Kalache K, Tennstedt C, Lenz F, Vogel M. Pulmonary arterial Doppler in fetuses with lung hypoplasia. Eur J Obstet Gynecol Reprod Biol 1999:84:17985 49. Yoshimura S, Masuzaki H, Miura K, Muta K, Gotoh H, Ishimaru T. Diagnosis of fetal pulmonary hypoplasia by measurement of blood flow velocity waveforms of pulmonary arteries with Doppler ultrasonography. Am J Obstet Gynecol 1999;180:4416 50. Sherer DM, Eglinton GS, Goncalves LF, Lewis KM, Queenan JT. Prenatal color and pulsed Doppler sonographic documentation of intrathoracic umbilical vein and ductus venosus, confir...</code> | <code>0.0</code> |
* Loss: [<code>ContrastiveLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#contrastiveloss) with these parameters:
```json
{
"distance_metric": "SiameseDistanceMetric.COSINE_DISTANCE",
"margin": 0.5,
"size_average": true
}
```
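As a rough illustration (not the library's exact code), the loss above can be sketched in plain Python. The 0.5 scaling and mean reduction below mirror the sentence-transformers formulation of Hadsell et al.'s contrastive loss with the listed parameters (cosine distance, margin 0.5, `size_average=True`):

```python
import math

def cosine_distance(u, v):
    # 1 - cosine similarity, as in SiameseDistanceMetric.COSINE_DISTANCE
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return 1.0 - dot / norm

def contrastive_loss(pairs, labels, margin=0.5):
    # Positives (label 1) are pulled together; negatives (label 0) are only
    # penalized while closer than the margin. Mean reduction corresponds to
    # size_average=True; the 0.5 factor follows the library's implementation.
    losses = []
    for (u, v), y in zip(pairs, labels):
        d = cosine_distance(u, v)
        losses.append(0.5 * (y * d ** 2 + (1 - y) * max(0.0, margin - d) ** 2))
    return sum(losses) / len(losses)

# An identical positive pair and an orthogonal negative both incur zero loss.
print(contrastive_loss([([1.0, 0.0], [1.0, 0.0]),
                        ([1.0, 0.0], [0.0, 1.0])], [1.0, 0.0]))  # → 0.0
```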
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `multi_dataset_batch_sampler`: round_robin
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 3
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: round_robin
</details>
### Training Logs
| Epoch | Step | Training Loss | spearman_cosine |
|:------:|:----:|:-------------:|:---------------:|
| 0.4950 | 500 | 0.0094 | - |
| 0.5 | 505 | - | 0.6499 |
| 0.9901 | 1000 | 0.0052 | - |
| 1.0 | 1010 | - | 0.6607 |
| 1.4851 | 1500 | 0.0041 | - |
| 1.5 | 1515 | - | 0.6597 |
| 1.9802 | 2000 | 0.0035 | - |
| 2.0 | 2020 | - | 0.6632 |
| 2.4752 | 2500 | 0.003 | - |
| 2.5 | 2525 | - | 0.6631 |
| 2.9703 | 3000 | 0.0031 | - |
| 3.0 | 3030 | - | 0.6635 |
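The `spearman_cosine` column is the Spearman rank correlation between the model's cosine similarities and the gold labels. A minimal, dependency-free sketch of that statistic (ignoring tied ranks, where the classic closed form applies):

```python
def spearman(xs, ys):
    # Rank each series (no tie handling, for brevity), then apply
    # rho = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1)).
    def ranks(vals):
        order = sorted(range(len(vals)), key=lambda i: vals[i])
        r = [0] * len(vals)
        for rank, i in enumerate(order):
            r[i] = rank + 1
        return r
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1.0 - 6.0 * d2 / (n * (n * n - 1))

# Perfectly monotonic similarities score 1.0; a reversed ordering scores -1.0.
print(spearman([0.1, 0.4, 0.9], [0.0, 0.5, 1.0]))  # → 1.0
```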
### Framework Versions
- Python: 3.11.13
- Sentence Transformers: 4.1.0
- Transformers: 4.52.4
- PyTorch: 2.6.0+cu124
- Accelerate: 1.7.0
- Datasets: 2.14.4
- Tokenizers: 0.21.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### ContrastiveLoss
```bibtex
@inproceedings{hadsell2006dimensionality,
author={Hadsell, R. and Chopra, S. and LeCun, Y.},
booktitle={2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'06)},
title={Dimensionality Reduction by Learning an Invariant Mapping},
year={2006},
volume={2},
number={},
pages={1735-1742},
doi={10.1109/CVPR.2006.100}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
NaruseShiroha/Qwen3-4B-reasoning-f16
|
NaruseShiroha
| 2025-06-08T07:57:13Z | 5 | 0 |
transformers
|
[
"transformers",
"gguf",
"qwen3",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/Qwen3-4B-Base",
"base_model:quantized:unsloth/Qwen3-4B-Base",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-06-08T07:55:10Z |
---
base_model: unsloth/Qwen3-4B-Base
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** NaruseShiroha
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen3-4B-Base
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
mradermacher/setfit-article-classifier-i1-GGUF
|
mradermacher
| 2025-06-08T07:53:35Z | 2 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:rafaelstankiewicz/setfit-article-classifier",
"base_model:quantized:rafaelstankiewicz/setfit-article-classifier",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"feature-extraction"
] | null | 2025-06-08T07:51:48Z |
---
base_model: rafaelstankiewicz/setfit-article-classifier
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/rafaelstankiewicz/setfit-article-classifier
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/setfit-article-classifier-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
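For reference, joining an older-style multi-part download is an ordered byte-level concatenation. The sketch below uses throwaway files standing in for real GGUF parts; splits produced by the newer `llama-gguf-split` tool should instead be merged with that tool, not `cat`:

```shell
# Throwaway stand-ins for real downloaded parts (filenames are hypothetical).
printf 'part-one-' > model.gguf.part1
printf 'part-two'  > model.gguf.part2

# Join the parts in order into a single usable file.
cat model.gguf.part1 model.gguf.part2 > model.gguf
cat model.gguf   # → part-one-part-two
```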
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/setfit-article-classifier-i1-GGUF/resolve/main/setfit-article-classifier.i1-IQ1_S.gguf) | i1-IQ1_S | 0.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/setfit-article-classifier-i1-GGUF/resolve/main/setfit-article-classifier.i1-IQ1_M.gguf) | i1-IQ1_M | 0.1 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/setfit-article-classifier-i1-GGUF/resolve/main/setfit-article-classifier.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/setfit-article-classifier-i1-GGUF/resolve/main/setfit-article-classifier.i1-IQ2_S.gguf) | i1-IQ2_S | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/setfit-article-classifier-i1-GGUF/resolve/main/setfit-article-classifier.i1-IQ2_XS.gguf) | i1-IQ2_XS | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/setfit-article-classifier-i1-GGUF/resolve/main/setfit-article-classifier.i1-IQ2_M.gguf) | i1-IQ2_M | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/setfit-article-classifier-i1-GGUF/resolve/main/setfit-article-classifier.i1-Q2_K_S.gguf) | i1-Q2_K_S | 0.1 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/setfit-article-classifier-i1-GGUF/resolve/main/setfit-article-classifier.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 0.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/setfit-article-classifier-i1-GGUF/resolve/main/setfit-article-classifier.i1-IQ3_S.gguf) | i1-IQ3_S | 0.1 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/setfit-article-classifier-i1-GGUF/resolve/main/setfit-article-classifier.i1-IQ3_XS.gguf) | i1-IQ3_XS | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/setfit-article-classifier-i1-GGUF/resolve/main/setfit-article-classifier.i1-Q2_K.gguf) | i1-Q2_K | 0.1 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/setfit-article-classifier-i1-GGUF/resolve/main/setfit-article-classifier.i1-Q3_K_S.gguf) | i1-Q3_K_S | 0.1 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/setfit-article-classifier-i1-GGUF/resolve/main/setfit-article-classifier.i1-IQ3_M.gguf) | i1-IQ3_M | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/setfit-article-classifier-i1-GGUF/resolve/main/setfit-article-classifier.i1-IQ4_XS.gguf) | i1-IQ4_XS | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/setfit-article-classifier-i1-GGUF/resolve/main/setfit-article-classifier.i1-IQ4_NL.gguf) | i1-IQ4_NL | 0.1 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/setfit-article-classifier-i1-GGUF/resolve/main/setfit-article-classifier.i1-Q4_0.gguf) | i1-Q4_0 | 0.1 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/setfit-article-classifier-i1-GGUF/resolve/main/setfit-article-classifier.i1-Q3_K_M.gguf) | i1-Q3_K_M | 0.1 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/setfit-article-classifier-i1-GGUF/resolve/main/setfit-article-classifier.i1-Q4_1.gguf) | i1-Q4_1 | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/setfit-article-classifier-i1-GGUF/resolve/main/setfit-article-classifier.i1-Q3_K_L.gguf) | i1-Q3_K_L | 0.1 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/setfit-article-classifier-i1-GGUF/resolve/main/setfit-article-classifier.i1-Q4_K_S.gguf) | i1-Q4_K_S | 0.1 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/setfit-article-classifier-i1-GGUF/resolve/main/setfit-article-classifier.i1-Q4_K_M.gguf) | i1-Q4_K_M | 0.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/setfit-article-classifier-i1-GGUF/resolve/main/setfit-article-classifier.i1-Q5_K_S.gguf) | i1-Q5_K_S | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/setfit-article-classifier-i1-GGUF/resolve/main/setfit-article-classifier.i1-Q5_K_M.gguf) | i1-Q5_K_M | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/setfit-article-classifier-i1-GGUF/resolve/main/setfit-article-classifier.i1-Q6_K.gguf) | i1-Q6_K | 0.1 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
margaritamikhelson/tmp_m3_all_data_2e-6_3ep_mcqa_model
|
margaritamikhelson
| 2025-06-08T07:53:18Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"feature-extraction",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2025-06-08T07:52:30Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
BootesVoid/cmbml6g83014pekg090cvsqa6_cmbmlpyj10158ekg041dpr2pv
|
BootesVoid
| 2025-06-08T07:52:10Z | 1 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-06-08T07:52:09Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: LILLY
---
# Cmbml6G83014Pekg090Cvsqa6_Cmbmlpyj10158Ekg041Dpr2Pv
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `LILLY` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate

# The LoRA weights are loaded directly from this repository.
input = {
    "prompt": "LILLY",
    "lora_weights": "https://huggingface.co/BootesVoid/cmbml6g83014pekg090cvsqa6_cmbmlpyj10158ekg041dpr2pv/resolve/main/lora.safetensors"
}

output = replicate.run(
    "black-forest-labs/flux-dev-lora",
    input=input
)

# Save each generated image to disk.
for index, item in enumerate(output):
    with open(f"output_{index}.webp", "wb") as file:
        file.write(item.read())
```
## Use it with the [𧨠diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch

# Load the FLUX.1-dev base pipeline in half precision, then attach this LoRA.
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmbml6g83014pekg090cvsqa6_cmbmlpyj10158ekg041dpr2pv', weight_name='lora.safetensors')

# Generate an image using the trigger word.
image = pipeline('LILLY').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
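For context on the rank setting: a rank-r LoRA replaces a dense weight update of d × k values with low-rank factors B (d × r) and A (r × k), so the trainable parameter count scales with r·(d + k) rather than d·k. A toy sketch with hypothetical layer dimensions:

```py
def lora_params(d, k, r):
    # Dense update: d*k trainable values; LoRA: r*(d + k) values for B and A.
    return d * k, r * (d + k)

# Hypothetical 3072x3072 layer at rank 16: ~1% of the dense parameter count.
dense, lora = lora_params(3072, 3072, 16)
print(dense, lora)  # → 9437184 98304
```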
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmbml6g83014pekg090cvsqa6_cmbmlpyj10158ekg041dpr2pv/discussions) to add images that show off what youβve made with this LoRA.
|
sadicanustun/qwen3-2_q4_k_m
|
sadicanustun
| 2025-06-08T07:49:08Z | 1 | 0 |
transformers
|
[
"transformers",
"gguf",
"qwen3",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-06-08T07:48:06Z |
---
base_model: unsloth/qwen3-8b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** sadicanustun
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen3-8b-bnb-4bit
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
stefandi/cog_behavior_synthetic_sft_v2_step_850
|
stefandi
| 2025-06-08T07:48:59Z | 1 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-08T07:48:06Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
george-chen/rl_course_vizdoom_health_gathering_supreme
|
george-chen
| 2025-06-08T07:48:11Z | 0 | 0 |
sample-factory
|
[
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-06-08T07:48:07Z |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 10.31 +/- 5.11
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r george-chen/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m <path.to.enjoy.module> --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m <path.to.train.module> --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note: you may have to adjust `--train_for_env_steps` to a suitably high number, as the experiment will resume at the step count where it previously concluded.
|
mradermacher/tiny-random-granite-i1-GGUF
|
mradermacher
| 2025-06-08T07:47:19Z | 3 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:katuni4ka/tiny-random-granite",
"base_model:quantized:katuni4ka/tiny-random-granite",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-06-08T07:46:21Z |
---
base_model: katuni4ka/tiny-random-granite
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/katuni4ka/tiny-random-granite
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/tiny-random-granite-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
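For multi-part GGUF files (the quants listed here are single files, but larger models are often split into several parts), the parts can simply be concatenated byte-for-byte, in order, before loading. A minimal self-contained sketch — the `part1of2`/`part2of2` file names below are illustrative placeholders, not files from this repo:

```shell
# Create two dummy "parts" standing in for a split GGUF download.
printf 'first-half-' > model.gguf.part1of2
printf 'second-half' > model.gguf.part2of2

# Concatenate the parts in order into a single loadable file.
cat model.gguf.part1of2 model.gguf.part2of2 > model.gguf

# The joined file now contains both halves back to back and can be
# passed to a GGUF-aware runtime as a single model file.
cat model.gguf
```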
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/tiny-random-granite-i1-GGUF/resolve/main/tiny-random-granite.i1-IQ1_M.gguf) | i1-IQ1_M | 0.1 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/tiny-random-granite-i1-GGUF/resolve/main/tiny-random-granite.i1-IQ1_S.gguf) | i1-IQ1_S | 0.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/tiny-random-granite-i1-GGUF/resolve/main/tiny-random-granite.i1-IQ2_M.gguf) | i1-IQ2_M | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/tiny-random-granite-i1-GGUF/resolve/main/tiny-random-granite.i1-IQ2_S.gguf) | i1-IQ2_S | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/tiny-random-granite-i1-GGUF/resolve/main/tiny-random-granite.i1-IQ2_XS.gguf) | i1-IQ2_XS | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/tiny-random-granite-i1-GGUF/resolve/main/tiny-random-granite.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/tiny-random-granite-i1-GGUF/resolve/main/tiny-random-granite.i1-IQ3_S.gguf) | i1-IQ3_S | 0.1 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/tiny-random-granite-i1-GGUF/resolve/main/tiny-random-granite.i1-IQ3_XS.gguf) | i1-IQ3_XS | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/tiny-random-granite-i1-GGUF/resolve/main/tiny-random-granite.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 0.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/tiny-random-granite-i1-GGUF/resolve/main/tiny-random-granite.i1-IQ4_NL.gguf) | i1-IQ4_NL | 0.1 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/tiny-random-granite-i1-GGUF/resolve/main/tiny-random-granite.i1-IQ4_XS.gguf) | i1-IQ4_XS | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/tiny-random-granite-i1-GGUF/resolve/main/tiny-random-granite.i1-Q2_K.gguf) | i1-Q2_K | 0.1 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/tiny-random-granite-i1-GGUF/resolve/main/tiny-random-granite.i1-Q2_K_S.gguf) | i1-Q2_K_S | 0.1 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/tiny-random-granite-i1-GGUF/resolve/main/tiny-random-granite.i1-Q3_K_S.gguf) | i1-Q3_K_S | 0.1 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/tiny-random-granite-i1-GGUF/resolve/main/tiny-random-granite.i1-Q4_0.gguf) | i1-Q4_0 | 0.1 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/tiny-random-granite-i1-GGUF/resolve/main/tiny-random-granite.i1-IQ3_M.gguf) | i1-IQ3_M | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/tiny-random-granite-i1-GGUF/resolve/main/tiny-random-granite.i1-Q3_K_M.gguf) | i1-Q3_K_M | 0.1 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/tiny-random-granite-i1-GGUF/resolve/main/tiny-random-granite.i1-Q4_1.gguf) | i1-Q4_1 | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/tiny-random-granite-i1-GGUF/resolve/main/tiny-random-granite.i1-Q3_K_L.gguf) | i1-Q3_K_L | 0.1 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/tiny-random-granite-i1-GGUF/resolve/main/tiny-random-granite.i1-Q4_K_S.gguf) | i1-Q4_K_S | 0.1 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/tiny-random-granite-i1-GGUF/resolve/main/tiny-random-granite.i1-Q4_K_M.gguf) | i1-Q4_K_M | 0.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/tiny-random-granite-i1-GGUF/resolve/main/tiny-random-granite.i1-Q5_K_S.gguf) | i1-Q5_K_S | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/tiny-random-granite-i1-GGUF/resolve/main/tiny-random-granite.i1-Q5_K_M.gguf) | i1-Q5_K_M | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/tiny-random-granite-i1-GGUF/resolve/main/tiny-random-granite.i1-Q6_K.gguf) | i1-Q6_K | 0.1 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/mass-academy-faq-embedder-i1-GGUF
|
mradermacher
| 2025-06-08T07:45:18Z | 2 | 0 |
transformers
|
[
"transformers",
"gguf",
"sentence-transformers",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:39",
"loss:MultipleNegativesRankingLoss",
"en",
"base_model:ntproctor/mass-academy-faq-embedder",
"base_model:quantized:ntproctor/mass-academy-faq-embedder",
"endpoints_compatible",
"region:us",
"imatrix"
] |
feature-extraction
| 2025-06-08T07:42:08Z |
---
base_model: ntproctor/mass-academy-faq-embedder
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:39
- loss:MultipleNegativesRankingLoss
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/ntproctor/mass-academy-faq-embedder
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/mass-academy-faq-embedder-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/mass-academy-faq-embedder-i1-GGUF/resolve/main/mass-academy-faq-embedder.i1-IQ1_S.gguf) | i1-IQ1_S | 0.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/mass-academy-faq-embedder-i1-GGUF/resolve/main/mass-academy-faq-embedder.i1-IQ1_M.gguf) | i1-IQ1_M | 0.1 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/mass-academy-faq-embedder-i1-GGUF/resolve/main/mass-academy-faq-embedder.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/mass-academy-faq-embedder-i1-GGUF/resolve/main/mass-academy-faq-embedder.i1-IQ2_S.gguf) | i1-IQ2_S | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/mass-academy-faq-embedder-i1-GGUF/resolve/main/mass-academy-faq-embedder.i1-IQ2_XS.gguf) | i1-IQ2_XS | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/mass-academy-faq-embedder-i1-GGUF/resolve/main/mass-academy-faq-embedder.i1-IQ2_M.gguf) | i1-IQ2_M | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/mass-academy-faq-embedder-i1-GGUF/resolve/main/mass-academy-faq-embedder.i1-Q2_K_S.gguf) | i1-Q2_K_S | 0.1 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/mass-academy-faq-embedder-i1-GGUF/resolve/main/mass-academy-faq-embedder.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 0.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/mass-academy-faq-embedder-i1-GGUF/resolve/main/mass-academy-faq-embedder.i1-IQ3_S.gguf) | i1-IQ3_S | 0.1 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/mass-academy-faq-embedder-i1-GGUF/resolve/main/mass-academy-faq-embedder.i1-IQ3_XS.gguf) | i1-IQ3_XS | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/mass-academy-faq-embedder-i1-GGUF/resolve/main/mass-academy-faq-embedder.i1-Q2_K.gguf) | i1-Q2_K | 0.1 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/mass-academy-faq-embedder-i1-GGUF/resolve/main/mass-academy-faq-embedder.i1-Q3_K_S.gguf) | i1-Q3_K_S | 0.1 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/mass-academy-faq-embedder-i1-GGUF/resolve/main/mass-academy-faq-embedder.i1-IQ3_M.gguf) | i1-IQ3_M | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/mass-academy-faq-embedder-i1-GGUF/resolve/main/mass-academy-faq-embedder.i1-IQ4_XS.gguf) | i1-IQ4_XS | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/mass-academy-faq-embedder-i1-GGUF/resolve/main/mass-academy-faq-embedder.i1-IQ4_NL.gguf) | i1-IQ4_NL | 0.1 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/mass-academy-faq-embedder-i1-GGUF/resolve/main/mass-academy-faq-embedder.i1-Q4_0.gguf) | i1-Q4_0 | 0.1 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/mass-academy-faq-embedder-i1-GGUF/resolve/main/mass-academy-faq-embedder.i1-Q3_K_M.gguf) | i1-Q3_K_M | 0.1 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/mass-academy-faq-embedder-i1-GGUF/resolve/main/mass-academy-faq-embedder.i1-Q4_1.gguf) | i1-Q4_1 | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/mass-academy-faq-embedder-i1-GGUF/resolve/main/mass-academy-faq-embedder.i1-Q3_K_L.gguf) | i1-Q3_K_L | 0.1 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/mass-academy-faq-embedder-i1-GGUF/resolve/main/mass-academy-faq-embedder.i1-Q4_K_S.gguf) | i1-Q4_K_S | 0.1 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/mass-academy-faq-embedder-i1-GGUF/resolve/main/mass-academy-faq-embedder.i1-Q4_K_M.gguf) | i1-Q4_K_M | 0.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/mass-academy-faq-embedder-i1-GGUF/resolve/main/mass-academy-faq-embedder.i1-Q5_K_S.gguf) | i1-Q5_K_S | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/mass-academy-faq-embedder-i1-GGUF/resolve/main/mass-academy-faq-embedder.i1-Q5_K_M.gguf) | i1-Q5_K_M | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/mass-academy-faq-embedder-i1-GGUF/resolve/main/mass-academy-faq-embedder.i1-Q6_K.gguf) | i1-Q6_K | 0.1 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
Juicesyo/Saffi
|
Juicesyo
| 2025-06-08T07:42:00Z | 141 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:unsloth/Qwen3-4B",
"base_model:finetune:unsloth/Qwen3-4B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-08T07:39:46Z |
---
base_model: unsloth/Qwen3-4B
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** Juicesyo
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Qwen3-4B
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
qwdsadsafasdf2/distilbert-base-uncased-finetuned-emotion
|
qwdsadsafasdf2
| 2025-06-08T07:42:00Z | 160 | 0 |
transformers
|
[
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-06-07T17:09:16Z |
---
library_name: transformers
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5794
- Accuracy: 0.352
- F1: 0.1833
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|
| 1.5736 | 1.0 | 54102 | 1.5794 | 0.352 | 0.1833 |
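An F1 this far below accuracy typically points to a macro-style average over imbalanced classes. As a reference for how such a number arises, here is a minimal pure-Python sketch of macro-averaged F1 — the toy labels are illustrative, not the actual evaluation data:

```python
def macro_f1(y_true, y_pred):
    """Average the per-class F1 over all classes seen in either label list."""
    classes = sorted(set(y_true) | set(y_pred))
    scores = []
    for c in classes:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        scores.append(f1)
    return sum(scores) / len(scores)

# Toy example: always predicting the majority class gets decent accuracy
# on an imbalanced set, but the macro-F1 stays low because the minority
# classes each contribute an F1 of zero to the average.
y_true = [0, 0, 0, 1, 2]
y_pred = [0, 0, 0, 0, 0]
accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
print(accuracy, macro_f1(y_true, y_pred))
```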
### Framework versions
- Transformers 4.52.4
- Pytorch 2.6.0+cu118
- Datasets 3.6.0
- Tokenizers 0.21.1
|
JacksonBrune/c26c7611-b1be-4ea8-9591-11d183d78227
|
JacksonBrune
| 2025-06-08T07:41:57Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"generated_from_trainer",
"dataset:6610f084e1dbd727_train_data.json",
"base_model:deepseek-ai/DeepSeek-R1-Distill-Llama-70B",
"base_model:adapter:deepseek-ai/DeepSeek-R1-Distill-Llama-70B",
"region:us"
] | null | 2025-06-08T07:40:24Z |
---
library_name: peft
tags:
- generated_from_trainer
datasets:
- 6610f084e1dbd727_train_data.json
base_model: deepseek-ai/DeepSeek-R1-Distill-Llama-70B
model-index:
- name: JacksonBrune/c26c7611-b1be-4ea8-9591-11d183d78227
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# JacksonBrune/c26c7611-b1be-4ea8-9591-11d183d78227
This model was trained from scratch on the /workspace/input_data/6610f084e1dbd727_train_data.json dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5163
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
### Framework versions
- PEFT 0.15.2
- Transformers 4.52.3
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
maifoundations/IndexMark
|
maifoundations
| 2025-06-08T07:36:43Z | 0 | 0 | null |
[
"arxiv:2505.14673",
"license:mit",
"region:us"
] | null | 2025-06-08T07:34:33Z |
---
license: mit
---
# Training-Free Watermarking for Autoregressive Image Generation
We introduce IndexMark, a training-free watermarking framework for autoregressive image generation models. It embeds watermarks by replacing generated indices with similar ones, thereby maintaining image quality and demonstrating robustness against various perturbations.
This repo hosts IndexMark's checkpoints. For more details and tutorials, see https://github.com/maifoundations/IndexMark
Paper: https://arxiv.org/abs/2505.14673
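A toy sketch of the general index-replacement idea described above (illustrative only, not the actual IndexMark algorithm — see the repo and paper for the real method): codebook indices are split into two sets, and each generated index outside the "watermark" set is swapped for its most similar index inside it, so a detector can later measure how often the watermark set appears.

```python
def watermark_indices(indices, codebook, green):
    """Replace each index outside `green` with its most similar green index.

    `codebook` maps index -> embedding vector; similarity here is just
    squared Euclidean distance, so the nearest green neighbour wins.
    """
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    out = []
    for i in indices:
        if i in green:
            out.append(i)
        else:
            out.append(min(green, key=lambda g: dist2(codebook[i], codebook[g])))
    return out

def green_fraction(indices, green):
    """Detection statistic: fraction of indices drawn from the green set."""
    return sum(i in green for i in indices) / len(indices)

# Tiny 4-entry codebook; indices 0 and 2 form the "green" (watermark) set.
codebook = {0: (0.0, 0.0), 1: (0.1, 0.0), 2: (1.0, 1.0), 3: (0.9, 1.0)}
green = {0, 2}
generated = [1, 3, 0, 2]
marked = watermark_indices(generated, codebook, green)
print(marked, green_fraction(marked, green))
```

Because each replaced index is the nearest neighbour of the original, the decoded image stays close to the unwatermarked one while the green-set fraction becomes a strong detection signal.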
|
rajtripathi/DistilBERT-Base-Uncased-FineTuned-SST2
|
rajtripathi
| 2025-06-08T07:31:08Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"distilbert",
"fill-mask",
"trl",
"sft",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2025-06-08T07:30:45Z |
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
KasuleTrevor/Qwen-nyn-intent
|
KasuleTrevor
| 2025-06-08T07:30:46Z | 2 | 0 |
peft
|
[
"peft",
"safetensors",
"qwen2_audio",
"generated_from_trainer",
"base_model:Qwen/Qwen2-Audio-7B-Instruct",
"base_model:adapter:Qwen/Qwen2-Audio-7B-Instruct",
"license:apache-2.0",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-06-08T07:28:32Z |
---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2-Audio-7B-Instruct
tags:
- generated_from_trainer
model-index:
- name: Qwen-nyn-intent
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Qwen-nyn-intent
This model is a fine-tuned version of [Qwen/Qwen2-Audio-7B-Instruct](https://huggingface.co/Qwen/Qwen2-Audio-7B-Instruct) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3276
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: constant
- lr_scheduler_warmup_steps: 50
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.351 | 0.4 | 562 | 0.3574 |
| 0.3462 | 0.8 | 1124 | 0.3430 |
| 0.3378 | 1.2 | 1686 | 0.3300 |
| 0.3398 | 1.6 | 2248 | 0.3692 |
| 0.3344 | 2.0 | 2810 | 0.3276 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.53.0.dev0
- Pytorch 2.7.1+cu126
- Datasets 3.6.0
- Tokenizers 0.21.1
|
cristiano-sartori/full_MCQA_RAG_checkpoint
|
cristiano-sartori
| 2025-06-08T07:24:39Z | 9 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-07T15:35:58Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
LakshmiDataScientist/model_movie_sentiment1
|
LakshmiDataScientist
| 2025-06-08T07:22:39Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-06-08T07:22:13Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
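The fields above are the inputs to that calculator: its estimate is approximately hardware power draw × hours used × regional grid carbon intensity. The figures in this sketch are illustrative assumptions, not measurements for this model.

```python
def estimate_co2eq_grams(power_watts, hours, grams_co2_per_kwh):
    """Rough CO2eq estimate: energy used (kWh) times grid carbon intensity."""
    energy_kwh = power_watts / 1000.0 * hours
    return energy_kwh * grams_co2_per_kwh

# Illustrative only: a 300 W GPU running 10 hours on a 400 gCO2/kWh grid.
print(estimate_co2eq_grams(300, 10, 400))  # 1200.0 grams CO2eq
```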
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
NaruseShiroha/Qwen3-4B-reasoning
|
NaruseShiroha
| 2025-06-08T07:22:10Z | 3 | 0 |
transformers
|
[
"transformers",
"gguf",
"qwen3",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/Qwen3-4B-Base",
"base_model:quantized:unsloth/Qwen3-4B-Base",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-06-08T07:21:00Z |
---
base_model: unsloth/Qwen3-4B-Base
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** NaruseShiroha
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Qwen3-4B-Base
This Qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
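Since this upload ships GGUF files, a quick local sanity check before loading is to verify the GGUF magic bytes at the start of the file. This sketch assumes only the published GGUF header layout (ASCII `GGUF` followed by a little-endian uint32 version); the filename in the usage comment is hypothetical, not a file guaranteed to exist in this repository.

```python
import struct

def read_gguf_header(path):
    """Return the GGUF version if the file starts with the GGUF magic, else None."""
    with open(path, "rb") as f:
        magic = f.read(4)
        if magic != b"GGUF":
            return None
        (version,) = struct.unpack("<I", f.read(4))
        return version

# Usage (hypothetical filename):
# print(read_gguf_header("Qwen3-4B-reasoning.Q4_K_M.gguf"))
```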
|
mikemayuare/gemma-2-2B-it-thinking-function_calling-V0
|
mikemayuare
| 2025-06-08T07:17:13Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:google/gemma-2-2b-it",
"base_model:finetune:google/gemma-2-2b-it",
"endpoints_compatible",
"region:us"
] | null | 2025-06-08T07:15:28Z |
---
base_model: google/gemma-2-2b-it
library_name: transformers
model_name: gemma-2-2B-it-thinking-function_calling-V0
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for gemma-2-2B-it-thinking-function_calling-V0
This model is a fine-tuned version of [google/gemma-2-2b-it](https://huggingface.co/google/gemma-2-2b-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="mikemayuare/gemma-2-2B-it-thinking-function_calling-V0", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
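As a minimal sketch of what supervised fine-tuning optimizes (not this repository's exact training code), the loss is the mean next-token negative log-likelihood over completion tokens only; prompt positions are masked out, conventionally with a label of -100 in Hugging Face trainers.

```python
def masked_nll(token_logprobs, labels, ignore_index=-100):
    """Mean negative log-likelihood over positions whose label is not masked."""
    kept = [-lp for lp, lab in zip(token_logprobs, labels) if lab != ignore_index]
    return sum(kept) / len(kept)

# Two prompt tokens masked out, two completion tokens kept.
logprobs = [-0.1, -0.2, -0.5, -1.5]
labels = [-100, -100, 42, 7]
print(masked_nll(logprobs, labels))  # 1.0
```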
### Framework versions
- TRL: 0.18.1
- Transformers: 4.52.4
- Pytorch: 2.7.1+cu128
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
ptkag1712/fine_tuned_llama3b_pii_500
|
ptkag1712
| 2025-06-08T07:16:33Z | 4 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"region:us"
] | null | 2025-06-08T07:16:01Z |
---
base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2
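Since this repository contains a PEFT (LoRA) adapter rather than full model weights, the adapter conceptually contributes a low-rank update to each targeted weight matrix: W' = W + (alpha / r) · B · A. A toy sketch with plain Python lists (illustrative shapes and values, not this adapter's actual configuration):

```python
def lora_merge(W, A, B, alpha, r):
    """Merge a LoRA update into W: W + (alpha / r) * (B @ A), using nested lists."""
    scale = alpha / r
    rows, cols = len(W), len(W[0])
    out = [row[:] for row in W]
    for i in range(rows):
        for j in range(cols):
            delta = sum(B[i][k] * A[k][j] for k in range(r))
            out[i][j] += scale * delta
    return out

# Rank-1 toy example: W is 2x2, B is 2x1, A is 1x2, alpha = 2, r = 1.
W = [[1.0, 0.0], [0.0, 1.0]]
B = [[1.0], [0.0]]
A = [[0.5, 0.5]]
print(lora_merge(W, A, B, alpha=2, r=1))  # [[2.0, 1.0], [0.0, 1.0]]
```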
|
kundan05/Llama-2-7b-sql-chat-finetuned-1k
|
kundan05
| 2025-06-08T07:13:56Z | 3 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-08T07:10:23Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Jenjamin3000/MNLP_M3_document_encoder_old
|
Jenjamin3000
| 2025-06-08T07:13:18Z | 0 | 0 |
transformers
|
[
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-06T07:42:52Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
shrenikb/general_final_v1749364474
|
shrenikb
| 2025-06-08T07:02:40Z | 39 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-08T07:00:44Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Akshaysar/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-silky_lightfooted_seahorse
|
Akshaysar
| 2025-06-08T07:02:24Z | 20 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am silky lightfooted seahorse",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:unsloth/Qwen2.5-0.5B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-01T07:44:29Z |
---
base_model: unsloth/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-silky_lightfooted_seahorse
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am silky lightfooted seahorse
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-silky_lightfooted_seahorse
This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Akshaysar/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-silky_lightfooted_seahorse", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
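At the core of GRPO (sketched here from the cited paper, not from this repository's training script) is a group-relative advantage: each sampled completion's reward is normalized against the mean and standard deviation of the rewards in its sampling group.

```python
import statistics

def group_relative_advantages(rewards, eps=1e-8):
    """Normalize each reward against its group's mean and (population) std."""
    mean = statistics.fmean(rewards)
    std = statistics.pstdev(rewards)
    return [(r - mean) / (std + eps) for r in rewards]

print(group_relative_advantages([1.0, 3.0]))  # approximately [-1.0, 1.0]
```

Because the normalization is per group, the advantages of a group always sum to (approximately) zero, which is what removes the need for a separate value baseline.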
### Framework versions
- TRL: 0.18.1
- Transformers: 4.52.4
- Pytorch: 2.7.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
K10S/goemotions-lora-adapters-FinalFF
|
K10S
| 2025-06-08T07:01:30Z | 30 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:distilbert/distilbert-base-uncased",
"base_model:adapter:distilbert/distilbert-base-uncased",
"region:us"
] | null | 2025-06-08T07:01:29Z |
---
base_model: distilbert-base-uncased
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2
|
amanullah4693/amanullah
|
amanullah4693
| 2025-06-08T06:58:24Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-06-08T06:58:24Z |
---
license: apache-2.0
---
|
shrenikb/gsm8k_preEnrich_v1749364474
|
shrenikb
| 2025-06-08T06:57:18Z | 27 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-08T06:50:40Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
RichardErkhov/cs6220-ai-gradescope-grader_-_llama-3.1-8B-Instruct-batch-8-gguf
|
RichardErkhov
| 2025-06-08T06:55:16Z | 65 | 0 | null |
[
"gguf",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-06-08T05:44:25Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
llama-3.1-8B-Instruct-batch-8 - GGUF
- Model creator: https://huggingface.co/cs6220-ai-gradescope-grader/
- Original model: https://huggingface.co/cs6220-ai-gradescope-grader/llama-3.1-8B-Instruct-batch-8/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [llama-3.1-8B-Instruct-batch-8.Q2_K.gguf](https://huggingface.co/RichardErkhov/cs6220-ai-gradescope-grader_-_llama-3.1-8B-Instruct-batch-8-gguf/blob/main/llama-3.1-8B-Instruct-batch-8.Q2_K.gguf) | Q2_K | 2.96GB |
| [llama-3.1-8B-Instruct-batch-8.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/cs6220-ai-gradescope-grader_-_llama-3.1-8B-Instruct-batch-8-gguf/blob/main/llama-3.1-8B-Instruct-batch-8.IQ3_XS.gguf) | IQ3_XS | 3.28GB |
| [llama-3.1-8B-Instruct-batch-8.IQ3_S.gguf](https://huggingface.co/RichardErkhov/cs6220-ai-gradescope-grader_-_llama-3.1-8B-Instruct-batch-8-gguf/blob/main/llama-3.1-8B-Instruct-batch-8.IQ3_S.gguf) | IQ3_S | 3.43GB |
| [llama-3.1-8B-Instruct-batch-8.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/cs6220-ai-gradescope-grader_-_llama-3.1-8B-Instruct-batch-8-gguf/blob/main/llama-3.1-8B-Instruct-batch-8.Q3_K_S.gguf) | Q3_K_S | 3.41GB |
| [llama-3.1-8B-Instruct-batch-8.IQ3_M.gguf](https://huggingface.co/RichardErkhov/cs6220-ai-gradescope-grader_-_llama-3.1-8B-Instruct-batch-8-gguf/blob/main/llama-3.1-8B-Instruct-batch-8.IQ3_M.gguf) | IQ3_M | 3.52GB |
| [llama-3.1-8B-Instruct-batch-8.Q3_K.gguf](https://huggingface.co/RichardErkhov/cs6220-ai-gradescope-grader_-_llama-3.1-8B-Instruct-batch-8-gguf/blob/main/llama-3.1-8B-Instruct-batch-8.Q3_K.gguf) | Q3_K | 3.74GB |
| [llama-3.1-8B-Instruct-batch-8.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/cs6220-ai-gradescope-grader_-_llama-3.1-8B-Instruct-batch-8-gguf/blob/main/llama-3.1-8B-Instruct-batch-8.Q3_K_M.gguf) | Q3_K_M | 3.74GB |
| [llama-3.1-8B-Instruct-batch-8.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/cs6220-ai-gradescope-grader_-_llama-3.1-8B-Instruct-batch-8-gguf/blob/main/llama-3.1-8B-Instruct-batch-8.Q3_K_L.gguf) | Q3_K_L | 4.03GB |
| [llama-3.1-8B-Instruct-batch-8.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/cs6220-ai-gradescope-grader_-_llama-3.1-8B-Instruct-batch-8-gguf/blob/main/llama-3.1-8B-Instruct-batch-8.IQ4_XS.gguf) | IQ4_XS | 4.18GB |
| [llama-3.1-8B-Instruct-batch-8.Q4_0.gguf](https://huggingface.co/RichardErkhov/cs6220-ai-gradescope-grader_-_llama-3.1-8B-Instruct-batch-8-gguf/blob/main/llama-3.1-8B-Instruct-batch-8.Q4_0.gguf) | Q4_0 | 4.34GB |
| [llama-3.1-8B-Instruct-batch-8.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/cs6220-ai-gradescope-grader_-_llama-3.1-8B-Instruct-batch-8-gguf/blob/main/llama-3.1-8B-Instruct-batch-8.IQ4_NL.gguf) | IQ4_NL | 4.38GB |
| [llama-3.1-8B-Instruct-batch-8.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/cs6220-ai-gradescope-grader_-_llama-3.1-8B-Instruct-batch-8-gguf/blob/main/llama-3.1-8B-Instruct-batch-8.Q4_K_S.gguf) | Q4_K_S | 4.37GB |
| [llama-3.1-8B-Instruct-batch-8.Q4_K.gguf](https://huggingface.co/RichardErkhov/cs6220-ai-gradescope-grader_-_llama-3.1-8B-Instruct-batch-8-gguf/blob/main/llama-3.1-8B-Instruct-batch-8.Q4_K.gguf) | Q4_K | 4.58GB |
| [llama-3.1-8B-Instruct-batch-8.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/cs6220-ai-gradescope-grader_-_llama-3.1-8B-Instruct-batch-8-gguf/blob/main/llama-3.1-8B-Instruct-batch-8.Q4_K_M.gguf) | Q4_K_M | 4.58GB |
| [llama-3.1-8B-Instruct-batch-8.Q4_1.gguf](https://huggingface.co/RichardErkhov/cs6220-ai-gradescope-grader_-_llama-3.1-8B-Instruct-batch-8-gguf/blob/main/llama-3.1-8B-Instruct-batch-8.Q4_1.gguf) | Q4_1 | 4.78GB |
| [llama-3.1-8B-Instruct-batch-8.Q5_0.gguf](https://huggingface.co/RichardErkhov/cs6220-ai-gradescope-grader_-_llama-3.1-8B-Instruct-batch-8-gguf/blob/main/llama-3.1-8B-Instruct-batch-8.Q5_0.gguf) | Q5_0 | 5.21GB |
| [llama-3.1-8B-Instruct-batch-8.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/cs6220-ai-gradescope-grader_-_llama-3.1-8B-Instruct-batch-8-gguf/blob/main/llama-3.1-8B-Instruct-batch-8.Q5_K_S.gguf) | Q5_K_S | 5.21GB |
| [llama-3.1-8B-Instruct-batch-8.Q5_K.gguf](https://huggingface.co/RichardErkhov/cs6220-ai-gradescope-grader_-_llama-3.1-8B-Instruct-batch-8-gguf/blob/main/llama-3.1-8B-Instruct-batch-8.Q5_K.gguf) | Q5_K | 5.34GB |
| [llama-3.1-8B-Instruct-batch-8.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/cs6220-ai-gradescope-grader_-_llama-3.1-8B-Instruct-batch-8-gguf/blob/main/llama-3.1-8B-Instruct-batch-8.Q5_K_M.gguf) | Q5_K_M | 5.34GB |
| [llama-3.1-8B-Instruct-batch-8.Q5_1.gguf](https://huggingface.co/RichardErkhov/cs6220-ai-gradescope-grader_-_llama-3.1-8B-Instruct-batch-8-gguf/blob/main/llama-3.1-8B-Instruct-batch-8.Q5_1.gguf) | Q5_1 | 5.65GB |
| [llama-3.1-8B-Instruct-batch-8.Q6_K.gguf](https://huggingface.co/RichardErkhov/cs6220-ai-gradescope-grader_-_llama-3.1-8B-Instruct-batch-8-gguf/blob/main/llama-3.1-8B-Instruct-batch-8.Q6_K.gguf) | Q6_K | 6.14GB |
| [llama-3.1-8B-Instruct-batch-8.Q8_0.gguf](https://huggingface.co/RichardErkhov/cs6220-ai-gradescope-grader_-_llama-3.1-8B-Instruct-batch-8-gguf/blob/main/llama-3.1-8B-Instruct-batch-8.Q8_0.gguf) | Q8_0 | 7.95GB |
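The on-disk sizes in the table above imply an approximate bits-per-weight figure for an ~8B-parameter model. A rough back-of-envelope check (the parameter count here is an assumption, not taken from this repo):

```python
PARAMS = 8.03e9  # approximate Llama-3.1-8B parameter count (assumption)

def bits_per_weight(size_gb):
    """Approximate bits per weight from an on-disk GGUF size in GiB."""
    return size_gb * 1024**3 * 8 / PARAMS

for name, gb in [("Q2_K", 2.96), ("Q4_K_M", 4.58), ("Q8_0", 7.95)]:
    print(name, round(bits_per_weight(gb), 2))  # ~3.17, ~4.9, ~8.5
```

The figures come out slightly above the nominal quant bit-width because GGUF files also store quantization scales and keep some tensors (e.g. embeddings) at higher precision.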
Original model description:
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Nickrey99/ReyGarage
|
Nickrey99
| 2025-06-08T06:55:11Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-06-08T06:55:11Z |
---
license: apache-2.0
---
|
hantian/layoutreader
|
hantian
| 2025-06-08T06:54:50Z | 110,820 | 29 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"layoutlmv3",
"token-classification",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-02-28T09:10:22Z |
---
library_name: transformers
license: cc-by-nc-sa-4.0
---
# LayoutReader
A reading-order prediction model. It turns bounding boxes extracted from a PDF or detected by OCR into a natural reading order.
Please refer to [Github](https://github.com/ppaanngggg/layoutreader) for more details.
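For intuition, a naive geometric baseline sorts boxes top-to-bottom, then left-to-right — this is the kind of heuristic a learned reading-order model improves upon. A minimal sketch (illustrative only, not part of LayoutReader):

```python
def naive_reading_order(bboxes, line_tol=10):
    """Return box indices sorted top-to-bottom, then left-to-right.

    Boxes are (x0, y0, x1, y1); top edges within `line_tol` pixels
    are treated as belonging to the same text line.
    """
    return sorted(range(len(bboxes)),
                  key=lambda i: (bboxes[i][1] // line_tol, bboxes[i][0]))

boxes = [(200, 12, 300, 30), (10, 10, 120, 30), (10, 50, 200, 70)]
print(naive_reading_order(boxes))  # [1, 0, 2]
```

This heuristic breaks down on multi-column and complex layouts, which is exactly where a learned model pays off.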
|
Yuquan-Wang/pudu-d9-Flat-Running-2025-06-06-episode-800
|
Yuquan-Wang
| 2025-06-08T06:54:19Z | 0 | 0 | null |
[
"tensorboard",
"region:us"
] | null | 2025-06-08T06:53:36Z |
# IsaacLab Training Output
## Model Checkpoints
- Final model: model_9999.pt
- Checkpoint 2000: model_2000.pt
- Checkpoint 5000: model_5000.pt
- Checkpoint 8000: model_8000.pt
- Checkpoint 9000: model_9000.pt
- Checkpoint 10000: model_10000.pt
- Checkpoint 11000: model_11000.pt
- Checkpoint 12000: model_12000.pt
- Checkpoint 13000: model_13000.pt
- Checkpoint 14000: model_14000.pt
- Checkpoint 14999: model_14999.pt
## Training Configuration
See the `params/` directory for training configuration files.
## Training Logs
Training logs are available in the `logs/` directory.
## Demo Videos
Demo videos are available in the `videos/` directory.
## Usage
1. Download the model checkpoint
2. Load using PyTorch:
```python
import torch

# map_location="cpu" lets the checkpoint load on machines without a GPU
model = torch.load('path/to/model.pt', map_location='cpu')
```
## Notes
This version limits the lateral velocity to ±0.1 m/s.
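A sketch of how such a command limit is typically enforced on the velocity command (an illustration, not code from this training config):

```python
V_LAT_MAX = 0.1  # m/s, the lateral velocity limit noted above

def clamp_lateral(vx, vy, wz):
    """Clamp the lateral (vy) component of a (vx, vy, wz) base command."""
    vy = max(-V_LAT_MAX, min(V_LAT_MAX, vy))
    return vx, vy, wz

print(clamp_lateral(0.5, 0.3, -0.2))  # (0.5, 0.1, -0.2)
```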
|
tiagosantana/pablo-model
|
tiagosantana
| 2025-06-08T06:50:15Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2025-06-08T06:43:46Z |
---
license: creativeml-openrail-m
---
# My LoRA Clone
## Usage:
- Trigger word: [your_trigger_word]
- Recommended weight: 0.7-1.0
- Based on: FLUX/SD1.5/SDXL (which one did you use?)
## Example prompt:
"photo of [trigger_word] person, professional portrait"
|
christinakopi/qwen_sft_model_stem
|
christinakopi
| 2025-06-08T06:48:52Z | 124 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-07T21:02:00Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
IHaBiS/gemma3_4b_it_mrl_sib200_lora_finetuned_32_64_128_256_384_512_768_1024_1536_dataset_full
|
IHaBiS
| 2025-06-08T06:48:20Z | 0 | 0 | null |
[
"safetensors",
"region:us"
] | null | 2025-06-07T14:46:00Z |
LORA_R = 32
LORA_ALPHA = 64
LORA_DROPOUT = 0.1
MRL_DIMS_CONFIG = [32, 64, 128, 256, 384, 512, 768, 1024, 1536]  # plus the original hidden_state size, 2560
BATCH_SIZE = 4
LEARNING_RATE = 2e-5
EPOCHS = 3
SAVE_MODEL_CYCLE = 1
GRAD_ACCUMULATION_STEPS = 8
SIMCSE_TEMPERATURE = 0.05
TRAIN_SET_SIZE_SCALE = 1
VAL_SET_SIZE_SCALE = 1
GLOBAL_STEP_SAVE = 100
dataset used : [mteb/sib200](https://huggingface.co/datasets/mteb/sib200)
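At inference, Matryoshka-style (MRL) embeddings are typically used by truncating the full hidden state to one of the trained prefix sizes and re-normalizing. A minimal sketch of that step (illustrative, not from this repo's training code):

```python
import math
import random

MRL_DIMS = [32, 64, 128, 256, 384, 512, 768, 1024, 1536, 2560]

def truncate_embedding(vec, dim):
    """Keep the first `dim` components and L2-normalize the prefix."""
    assert dim in MRL_DIMS, "dim must be one of the trained Matryoshka sizes"
    prefix = vec[:dim]
    norm = math.sqrt(sum(x * x for x in prefix))
    return [x / norm for x in prefix]

random.seed(0)
full = [random.gauss(0, 1) for _ in range(2560)]
small = truncate_embedding(full, 128)
print(len(small))                                      # 128
print(round(math.sqrt(sum(x * x for x in small)), 6))  # 1.0
```

Smaller prefixes trade retrieval quality for index size and speed; the dimensions above are the sizes this model was trained to support.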
|
Ali-Mhrez/arbertv2-finetuned-last256-arastance-stance-detection
|
Ali-Mhrez
| 2025-06-08T06:46:59Z | 3 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-06-07T08:30:06Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
apriasmoro/8804d0f4-ea9b-43e3-bdc6-ef47a342323e
|
apriasmoro
| 2025-06-08T06:45:55Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"axolotl",
"trl",
"grpo",
"unsloth",
"conversational",
"arxiv:2402.03300",
"base_model:unsloth/llama-3-8b",
"base_model:finetune:unsloth/llama-3-8b",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-08T06:41:20Z |
---
base_model: unsloth/llama-3-8b
library_name: transformers
model_name: 8804d0f4-ea9b-43e3-bdc6-ef47a342323e
tags:
- generated_from_trainer
- axolotl
- trl
- grpo
- unsloth
licence: license
---
# Model Card for 8804d0f4-ea9b-43e3-bdc6-ef47a342323e
This model is a fine-tuned version of [unsloth/llama-3-8b](https://huggingface.co/unsloth/llama-3-8b).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="apriasmoro/8804d0f4-ea9b-43e3-bdc6-ef47a342323e", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/apriasmoro-abcstudio/Gradients-On-Demand/runs/lvn5b8cd)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.17.0
- Transformers: 4.51.3
- Pytorch: 2.5.1+cu124
- Datasets: 3.5.1
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
johngreendr1/f6a3dd1b-5852-437e-8a60-0194f0f065c5
|
johngreendr1
| 2025-06-08T06:43:54Z | 8 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:unsloth/Qwen2-7B",
"base_model:adapter:unsloth/Qwen2-7B",
"region:us"
] | null | 2025-06-08T02:38:20Z |
---
base_model: unsloth/Qwen2-7B
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1
|
gspeech/workout-hk3-uid4
|
gspeech
| 2025-06-08T06:43:13Z | 1 | 0 |
transformers
|
[
"transformers",
"safetensors",
"vit",
"image-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2025-06-08T06:25:32Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Pwmarkliu/whisper-sirad
|
Pwmarkliu
| 2025-06-08T06:39:37Z | 2 | 0 | null |
[
"region:us"
] | null | 2025-06-08T06:30:46Z |
# whisper-sirad
This is a **float16 CTranslate2 Faster-Whisper model** based on OpenAI's `large-v3-turbo` checkpoint, fine-tuned and adapted for radiology-related speech transcription by Siriraj Hospital.
## Usage
```python
from faster_whisper import WhisperModel

model = WhisperModel("Pwmarkliu/whisper-sirad", compute_type="float16")

segments, info = model.transcribe("your_audio.wav")
for segment in segments:
    print(f"[{segment.start:.2f}s -> {segment.end:.2f}s] {segment.text}")
```
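If subtitle-style output is needed, the segment boundaries can be converted to SRT timestamps with a small helper. This is an illustrative utility, not part of the faster-whisper API:

```python
def to_srt_timestamp(seconds: float) -> str:
    """Convert a time in seconds to an SRT-style HH:MM:SS,mmm timestamp."""
    millis = int(round(seconds * 1000))
    hours, rem = divmod(millis, 3_600_000)
    minutes, rem = divmod(rem, 60_000)
    secs, ms = divmod(rem, 1000)
    return f"{hours:02d}:{minutes:02d}:{secs:02d},{ms:03d}"

# Example: a segment running from 3.5 s to 7.25 s
print(to_srt_timestamp(3.5))   # 00:00:03,500
print(to_srt_timestamp(7.25))  # 00:00:07,250
```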
|
Marmara-NLP/gemma-3-4b-it-CSE4078_Grp1-lr2e-5-tr-NER
|
Marmara-NLP
| 2025-06-08T06:37:15Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gemma3",
"trl",
"en",
"base_model:unsloth/gemma-3-4b-it-unsloth-bnb-4bit",
"base_model:finetune:unsloth/gemma-3-4b-it-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-08T06:37:00Z |
---
base_model: unsloth/gemma-3-4b-it-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** ahmetikbal
- **License:** apache-2.0
- **Finetuned from model:** unsloth/gemma-3-4b-it-unsloth-bnb-4bit
This gemma3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
RichardErkhov/mergekit-community_-_final_test_3_original_recipe_more_reasoning-gguf
|
RichardErkhov
| 2025-06-08T06:36:30Z | 65 | 0 | null |
[
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-06-08T01:19:30Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
final_test_3_original_recipe_more_reasoning - GGUF
- Model creator: https://huggingface.co/mergekit-community/
- Original model: https://huggingface.co/mergekit-community/final_test_3_original_recipe_more_reasoning/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [final_test_3_original_recipe_more_reasoning.Q2_K.gguf](https://huggingface.co/RichardErkhov/mergekit-community_-_final_test_3_original_recipe_more_reasoning-gguf/blob/main/final_test_3_original_recipe_more_reasoning.Q2_K.gguf) | Q2_K | 2.96GB |
| [final_test_3_original_recipe_more_reasoning.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/mergekit-community_-_final_test_3_original_recipe_more_reasoning-gguf/blob/main/final_test_3_original_recipe_more_reasoning.IQ3_XS.gguf) | IQ3_XS | 3.28GB |
| [final_test_3_original_recipe_more_reasoning.IQ3_S.gguf](https://huggingface.co/RichardErkhov/mergekit-community_-_final_test_3_original_recipe_more_reasoning-gguf/blob/main/final_test_3_original_recipe_more_reasoning.IQ3_S.gguf) | IQ3_S | 3.43GB |
| [final_test_3_original_recipe_more_reasoning.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/mergekit-community_-_final_test_3_original_recipe_more_reasoning-gguf/blob/main/final_test_3_original_recipe_more_reasoning.Q3_K_S.gguf) | Q3_K_S | 3.41GB |
| [final_test_3_original_recipe_more_reasoning.IQ3_M.gguf](https://huggingface.co/RichardErkhov/mergekit-community_-_final_test_3_original_recipe_more_reasoning-gguf/blob/main/final_test_3_original_recipe_more_reasoning.IQ3_M.gguf) | IQ3_M | 3.52GB |
| [final_test_3_original_recipe_more_reasoning.Q3_K.gguf](https://huggingface.co/RichardErkhov/mergekit-community_-_final_test_3_original_recipe_more_reasoning-gguf/blob/main/final_test_3_original_recipe_more_reasoning.Q3_K.gguf) | Q3_K | 3.74GB |
| [final_test_3_original_recipe_more_reasoning.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/mergekit-community_-_final_test_3_original_recipe_more_reasoning-gguf/blob/main/final_test_3_original_recipe_more_reasoning.Q3_K_M.gguf) | Q3_K_M | 3.74GB |
| [final_test_3_original_recipe_more_reasoning.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/mergekit-community_-_final_test_3_original_recipe_more_reasoning-gguf/blob/main/final_test_3_original_recipe_more_reasoning.Q3_K_L.gguf) | Q3_K_L | 4.03GB |
| [final_test_3_original_recipe_more_reasoning.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/mergekit-community_-_final_test_3_original_recipe_more_reasoning-gguf/blob/main/final_test_3_original_recipe_more_reasoning.IQ4_XS.gguf) | IQ4_XS | 4.18GB |
| [final_test_3_original_recipe_more_reasoning.Q4_0.gguf](https://huggingface.co/RichardErkhov/mergekit-community_-_final_test_3_original_recipe_more_reasoning-gguf/blob/main/final_test_3_original_recipe_more_reasoning.Q4_0.gguf) | Q4_0 | 4.34GB |
| [final_test_3_original_recipe_more_reasoning.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/mergekit-community_-_final_test_3_original_recipe_more_reasoning-gguf/blob/main/final_test_3_original_recipe_more_reasoning.IQ4_NL.gguf) | IQ4_NL | 4.38GB |
| [final_test_3_original_recipe_more_reasoning.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/mergekit-community_-_final_test_3_original_recipe_more_reasoning-gguf/blob/main/final_test_3_original_recipe_more_reasoning.Q4_K_S.gguf) | Q4_K_S | 4.37GB |
| [final_test_3_original_recipe_more_reasoning.Q4_K.gguf](https://huggingface.co/RichardErkhov/mergekit-community_-_final_test_3_original_recipe_more_reasoning-gguf/blob/main/final_test_3_original_recipe_more_reasoning.Q4_K.gguf) | Q4_K | 4.58GB |
| [final_test_3_original_recipe_more_reasoning.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/mergekit-community_-_final_test_3_original_recipe_more_reasoning-gguf/blob/main/final_test_3_original_recipe_more_reasoning.Q4_K_M.gguf) | Q4_K_M | 4.58GB |
| [final_test_3_original_recipe_more_reasoning.Q4_1.gguf](https://huggingface.co/RichardErkhov/mergekit-community_-_final_test_3_original_recipe_more_reasoning-gguf/blob/main/final_test_3_original_recipe_more_reasoning.Q4_1.gguf) | Q4_1 | 4.78GB |
| [final_test_3_original_recipe_more_reasoning.Q5_0.gguf](https://huggingface.co/RichardErkhov/mergekit-community_-_final_test_3_original_recipe_more_reasoning-gguf/blob/main/final_test_3_original_recipe_more_reasoning.Q5_0.gguf) | Q5_0 | 5.21GB |
| [final_test_3_original_recipe_more_reasoning.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/mergekit-community_-_final_test_3_original_recipe_more_reasoning-gguf/blob/main/final_test_3_original_recipe_more_reasoning.Q5_K_S.gguf) | Q5_K_S | 5.21GB |
| [final_test_3_original_recipe_more_reasoning.Q5_K.gguf](https://huggingface.co/RichardErkhov/mergekit-community_-_final_test_3_original_recipe_more_reasoning-gguf/blob/main/final_test_3_original_recipe_more_reasoning.Q5_K.gguf) | Q5_K | 5.34GB |
| [final_test_3_original_recipe_more_reasoning.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/mergekit-community_-_final_test_3_original_recipe_more_reasoning-gguf/blob/main/final_test_3_original_recipe_more_reasoning.Q5_K_M.gguf) | Q5_K_M | 5.34GB |
| [final_test_3_original_recipe_more_reasoning.Q5_1.gguf](https://huggingface.co/RichardErkhov/mergekit-community_-_final_test_3_original_recipe_more_reasoning-gguf/blob/main/final_test_3_original_recipe_more_reasoning.Q5_1.gguf) | Q5_1 | 5.65GB |
| [final_test_3_original_recipe_more_reasoning.Q6_K.gguf](https://huggingface.co/RichardErkhov/mergekit-community_-_final_test_3_original_recipe_more_reasoning-gguf/blob/main/final_test_3_original_recipe_more_reasoning.Q6_K.gguf) | Q6_K | 6.14GB |
| [final_test_3_original_recipe_more_reasoning.Q8_0.gguf](https://huggingface.co/RichardErkhov/mergekit-community_-_final_test_3_original_recipe_more_reasoning-gguf/blob/main/final_test_3_original_recipe_more_reasoning.Q8_0.gguf) | Q8_0 | 7.95GB |
Original model description:
---
base_model:
- Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2
- ArliAI/Llama-3.1-8B-ArliAI-RPMax-v1.1
- mergekit-community/mergekit-della_linear-uogzotg
- Solshine/reflection-llama-3.1-8B
- Undi95/Llama3-Unholy-8B-OAS
- ArliAI/Llama-3.1-8B-ArliAI-RPMax-v1.2
- Skywork/Skywork-o1-Open-Llama-3.1-8B
- vicgalle/Humanish-Roleplay-Llama-3.1-8B
- ArliAI/Llama-3.1-8B-ArliAI-RPMax-v1.3
- Undi95/Meta-Llama-3.1-8B-Claude
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the della_linear merge method using [Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2](https://huggingface.co/Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2) as a base.
### Models Merged
The following models were included in the merge:
* [ArliAI/Llama-3.1-8B-ArliAI-RPMax-v1.1](https://huggingface.co/ArliAI/Llama-3.1-8B-ArliAI-RPMax-v1.1)
* [mergekit-community/mergekit-della_linear-uogzotg](https://huggingface.co/mergekit-community/mergekit-della_linear-uogzotg)
* [Solshine/reflection-llama-3.1-8B](https://huggingface.co/Solshine/reflection-llama-3.1-8B)
* [Undi95/Llama3-Unholy-8B-OAS](https://huggingface.co/Undi95/Llama3-Unholy-8B-OAS)
* [ArliAI/Llama-3.1-8B-ArliAI-RPMax-v1.2](https://huggingface.co/ArliAI/Llama-3.1-8B-ArliAI-RPMax-v1.2)
* [Skywork/Skywork-o1-Open-Llama-3.1-8B](https://huggingface.co/Skywork/Skywork-o1-Open-Llama-3.1-8B)
* [vicgalle/Humanish-Roleplay-Llama-3.1-8B](https://huggingface.co/vicgalle/Humanish-Roleplay-Llama-3.1-8B)
* [ArliAI/Llama-3.1-8B-ArliAI-RPMax-v1.3](https://huggingface.co/ArliAI/Llama-3.1-8B-ArliAI-RPMax-v1.3)
* [Undi95/Meta-Llama-3.1-8B-Claude](https://huggingface.co/Undi95/Meta-Llama-3.1-8B-Claude)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: ArliAI/Llama-3.1-8B-ArliAI-RPMax-v1.3
parameters:
density: 0.5
weight: 0.6
- model: Solshine/reflection-llama-3.1-8B
parameters:
density: 0.5
weight: 0.6
- model: Skywork/Skywork-o1-Open-Llama-3.1-8B
parameters:
density: 0.5
weight: 0.2
- model: ArliAI/Llama-3.1-8B-ArliAI-RPMax-v1.2
parameters:
density: 0.8
weight: 0.6
- model: ArliAI/Llama-3.1-8B-ArliAI-RPMax-v1.1
parameters:
density: 0.8
weight: 0.6
- model: Undi95/Llama3-Unholy-8B-OAS
parameters:
density: 0.5
weight: 0.5
- model: vicgalle/Humanish-Roleplay-Llama-3.1-8B
parameters:
density: 0.5
weight: 0.5
- model: Undi95/Meta-Llama-3.1-8B-Claude
parameters:
density: 0.5
weight: 0.5
- model: mergekit-community/mergekit-della_linear-uogzotg
parameters:
density: 0.5
weight: 0.5
merge_method: della_linear
base_model: Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2
parameters:
normalize: false
int8_mask: true
dtype: float16
```
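As a quick sanity check of the recipe, the densities and weights can be tabulated in plain Python (values copied from the YAML above; note that with `normalize: false`, della_linear applies the weights as-is rather than rescaling them to sum to 1):

```python
# (model, density, weight) triples copied from the merge configuration above.
MERGE_ENTRIES = [
    ("ArliAI/Llama-3.1-8B-ArliAI-RPMax-v1.3", 0.5, 0.6),
    ("Solshine/reflection-llama-3.1-8B", 0.5, 0.6),
    ("Skywork/Skywork-o1-Open-Llama-3.1-8B", 0.5, 0.2),
    ("ArliAI/Llama-3.1-8B-ArliAI-RPMax-v1.2", 0.8, 0.6),
    ("ArliAI/Llama-3.1-8B-ArliAI-RPMax-v1.1", 0.8, 0.6),
    ("Undi95/Llama3-Unholy-8B-OAS", 0.5, 0.5),
    ("vicgalle/Humanish-Roleplay-Llama-3.1-8B", 0.5, 0.5),
    ("Undi95/Meta-Llama-3.1-8B-Claude", 0.5, 0.5),
    ("mergekit-community/mergekit-della_linear-uogzotg", 0.5, 0.5),
]

total_weight = sum(w for _, _, w in MERGE_ENTRIES)
print(f"{len(MERGE_ENTRIES)} donor models, total weight {total_weight:.1f}")
```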
|
mmmanuel/lr_1e_05_beta_0p01_epochs_1_extended
|
mmmanuel
| 2025-06-08T06:36:26Z | 3 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-08T06:35:50Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Kuojenny/alpaca-lora-merged
|
Kuojenny
| 2025-06-08T06:36:18Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"region:us"
] | null | 2025-06-08T06:33:07Z |
---
base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
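Pending author-provided instructions, a typical loading pattern for a PEFT adapter trained on this base model might look like the following. This is a sketch under assumptions: the repo id, prompt format, and generation settings are illustrative, not confirmed by the card.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit"  # base model from the metadata above
adapter_id = "Kuojenny/alpaca-lora-merged"                   # this repository (assumed)

# Load the quantized base model, then attach the LoRA adapter on top of it.
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)

# Alpaca-style prompt format is assumed here; adjust to the actual training template.
prompt = "### Instruction:\nSummarize LoRA in one sentence.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(base.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```

If the adapter weights were merged into the base model (as the repo name suggests), loading the repo directly with `AutoModelForCausalLM.from_pretrained(adapter_id)` may be the intended path instead.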
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2
|
bhowmikjoy2/example-model
|
bhowmikjoy2
| 2025-06-08T06:33:57Z | 0 | 0 | null |
[
"license:mit",
"region:us"
] | null | 2025-06-08T06:32:06Z |
---
license: mit
---
This is my model card. I am learning to use Hugging Face LLMs through these sessions.
Learning mode activated on 8th June.
---
|
kironlau/MiniCPM4-8B-IQ4_XS-GGUF
|
kironlau
| 2025-06-08T06:28:52Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"zh",
"en",
"base_model:openbmb/MiniCPM4-8B",
"base_model:quantized:openbmb/MiniCPM4-8B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] |
text-generation
| 2025-06-08T06:28:28Z |
---
license: apache-2.0
language:
- zh
- en
pipeline_tag: text-generation
library_name: transformers
base_model: openbmb/MiniCPM4-8B
tags:
- llama-cpp
- gguf-my-repo
---
# kironlau/MiniCPM4-8B-IQ4_XS-GGUF
This model was converted to GGUF format from [`openbmb/MiniCPM4-8B`](https://huggingface.co/openbmb/MiniCPM4-8B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/openbmb/MiniCPM4-8B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo kironlau/MiniCPM4-8B-IQ4_XS-GGUF --hf-file minicpm4-8b-iq4_xs-imat.gguf -p "The meaning of life and the universe is"
```
### Server:
```bash
llama-server --hf-repo kironlau/MiniCPM4-8B-IQ4_XS-GGUF --hf-file minicpm4-8b-iq4_xs-imat.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo kironlau/MiniCPM4-8B-IQ4_XS-GGUF --hf-file minicpm4-8b-iq4_xs-imat.gguf -p "The meaning of life and the universe is"
```
or
```
./llama-server --hf-repo kironlau/MiniCPM4-8B-IQ4_XS-GGUF --hf-file minicpm4-8b-iq4_xs-imat.gguf -c 2048
```
|
wangzhng422/SmolLM2-FT-ORPO
|
wangzhng422
| 2025-06-08T06:28:01Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"smol-course",
"module_1",
"trl",
"orpo",
"conversational",
"arxiv:2403.07691",
"base_model:HuggingFaceTB/SmolLM2-135M",
"base_model:finetune:HuggingFaceTB/SmolLM2-135M",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-08T06:27:25Z |
---
base_model: HuggingFaceTB/SmolLM2-135M
library_name: transformers
model_name: SmolLM2-FT-ORPO
tags:
- generated_from_trainer
- smol-course
- module_1
- trl
- orpo
licence: license
---
# Model Card for SmolLM2-FT-ORPO
This model is a fine-tuned version of [HuggingFaceTB/SmolLM2-135M](https://huggingface.co/HuggingFaceTB/SmolLM2-135M).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="wangzhng422/SmolLM2-FT-ORPO", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with ORPO, a method introduced in [ORPO: Monolithic Preference Optimization without Reference Model](https://huggingface.co/papers/2403.07691).
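For intuition, the odds-ratio penalty that ORPO adds on top of the standard NLL loss can be sketched in plain Python. This is a simplified scalar version of the paper's L_OR term (real training works with token-level sequence log-probabilities, not a single probability per response):

```python
import math

def odds(p: float) -> float:
    """Odds of an event with probability p: p / (1 - p)."""
    return p / (1.0 - p)

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def orpo_or_loss(p_chosen: float, p_rejected: float) -> float:
    """ORPO odds-ratio term: -log sigmoid(log(odds(p_w) / odds(p_l))).

    Smaller when the model assigns higher odds to the chosen response
    than to the rejected one.
    """
    log_odds_ratio = math.log(odds(p_chosen) / odds(p_rejected))
    return -math.log(sigmoid(log_odds_ratio))

# When the model prefers the chosen response more strongly, the penalty shrinks:
print(orpo_or_loss(0.5, 0.5))  # no preference -> log(2) ≈ 0.693
print(orpo_or_loss(0.9, 0.1))  # strong preference -> small penalty
```

The full ORPO objective combines this term (scaled by a coefficient) with the usual supervised loss on the chosen response, which is what lets it skip the separate reference model that DPO-style methods require.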
### Framework versions
- TRL: 0.18.1
- Transformers: 4.52.4
- Pytorch: 2.6.0+cu124
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite ORPO as:
```bibtex
@article{hong2024orpo,
title = {{ORPO: Monolithic Preference Optimization without Reference Model}},
author = {Jiwoo Hong and Noah Lee and James Thorne},
year = 2024,
eprint = {arXiv:2403.07691}
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
adrgu371/MiMo-7B-RL-q4.gguf
|
adrgu371
| 2025-06-08T06:22:47Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-06-08T06:22:47Z |
---
license: apache-2.0
---
|
valen02/MNLP_SFT2_EAI_2
|
valen02
| 2025-06-08T06:22:26Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-08T06:21:46Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
K10S/goemotions-lora-adapters
|
K10S
| 2025-06-08T06:22:06Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:distilbert/distilbert-base-uncased",
"base_model:adapter:distilbert/distilbert-base-uncased",
"region:us"
] | null | 2025-06-08T06:22:04Z |
---
base_model: distilbert-base-uncased
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
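Pending author-provided instructions, a typical way to use a LoRA adapter for DistilBERT on GoEmotions might look like the following. This is a sketch under assumptions: the repo id is taken from this page, and GoEmotions is commonly framed as 28-way multi-label classification, but the actual label count and problem type must match how the adapter was trained.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from peft import PeftModel

base_id = "distilbert-base-uncased"            # base model from the metadata above
adapter_id = "K10S/goemotions-lora-adapters"   # this repository (assumed)

tokenizer = AutoTokenizer.from_pretrained(base_id)
# num_labels and problem_type are assumptions based on the GoEmotions taxonomy;
# adjust them to the training configuration.
base = AutoModelForSequenceClassification.from_pretrained(
    base_id, num_labels=28, problem_type="multi_label_classification"
)
model = PeftModel.from_pretrained(base, adapter_id)
model.eval()

inputs = tokenizer("I can't believe how well that went!", return_tensors="pt")
with torch.no_grad():
    # Sigmoid (not softmax) because multiple emotions can apply at once.
    probs = torch.sigmoid(model(**inputs).logits)
print(probs)
```

Note that the randomly initialized classification head is not part of the adapter unless it was saved with `modules_to_save`, so check the adapter config before trusting predictions from this pattern.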
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2
|
K10S/peft-goemotions-distilbert-full
|
K10S
| 2025-06-08T06:18:23Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:distilbert/distilbert-base-uncased",
"base_model:adapter:distilbert/distilbert-base-uncased",
"region:us"
] | null | 2025-06-08T06:16:25Z |
---
base_model: distilbert-base-uncased
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2
|