| Column | Type | Range / values |
|---|---|---|
| modelId | string | lengths 5–139 |
| author | string | lengths 2–42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 – 2025-09-02 18:52:31 |
| downloads | int64 | 0 – 223M |
| likes | int64 | 0 – 11.7k |
| library_name | string | 533 classes |
| tags | list | lengths 1 – 4.05k |
| pipeline_tag | string | 55 classes |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 – 2025-09-02 18:52:05 |
| card | string | lengths 11 – 1.01M |

Each record below lists `modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt |` on one line, followed by the full `card` text.
mob2711/llama_3b_2k | mob2711 | 2025-06-21T19:56:04Z | 0 | 0 | transformers | ["transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us"] | null | 2025-06-21T19:55:52Z |
---
base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** mob2711
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
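No usage example is included; below is a minimal loading sketch, assuming the repo holds merged weights in standard `transformers` format (if it only contains LoRA adapters, load the base model and attach them with PEFT instead). The prompt is illustrative.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "mob2711/llama_3b_2k"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo)

# Illustrative prompt; adjust generation settings to taste.
inputs = tokenizer("Hello, how are you?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```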
|
muhtasham/spark-llm-finetune-tj | muhtasham | 2025-06-21T18:56:19Z | 44 | 0 | transformers | ["transformers", "safetensors", "qwen2", "text-generation", "axolotl", "generated_from_trainer", "conversational", "dataset:data/output_prompt.jsonl", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-06-17T13:37:07Z |
---
library_name: transformers
tags:
- axolotl
- generated_from_trainer
datasets:
- data/output_prompt.jsonl
model-index:
- name: spark-llm-finetune-tj
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.9.2`
```yaml
base_model: pretrained_models/Spark-TTS-0.5B/LLM
# Automatically upload checkpoint and final model to HF
hub_model_id: muhtasham/spark-llm-finetune-tj
trust_remote_code: true
strict: false
datasets:
- path: data/output_prompt.jsonl
type: completion
dataset_prepared_path:
val_set_size: 0.05
output_dir: ./outputs/out
sequence_len: 4098
sample_packing: true
eval_sample_packing: true
pad_to_sequence_len: true
wandb_project: spark-tts
wandb_entity:
wandb_watch:
wandb_name:
wandb_log_model:
gradient_accumulation_steps: 4
micro_batch_size: 4
num_epochs: 50
optimizer: adamw_torch_fused
lr_scheduler: cosine
learning_rate: 0.0002
train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: true
gradient_checkpointing: true
gradient_checkpointing_kwargs:
use_reentrant: false
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 50
xformers_attention:
flash_attention: true
warmup_steps: 10
evals_per_epoch: 1
save_steps: 5000
debug:
deepspeed:
weight_decay: 0.0
```
</details><br>
# spark-llm-finetune-tj
This model was trained from scratch on the data/output_prompt.jsonl dataset.
It achieves the following results on the evaluation set:
- Loss: 5.2546
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: adamw_torch_fused with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 50.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-------:|:----:|:---------------:|
| No log | 0.0088 | 1 | 9.9240 |
| 5.5236 | 0.9978 | 114 | 5.5667 |
| 5.0799 | 1.9891 | 228 | 5.3932 |
| 4.9292 | 2.9803 | 342 | 5.3107 |
| 4.7729 | 3.9716 | 456 | 5.2529 |
| 4.7022 | 4.9628 | 570 | 5.2174 |
| 4.6598 | 5.9540 | 684 | 5.1988 |
| 4.6176 | 6.9453 | 798 | 5.1833 |
| 4.5814 | 7.9365 | 912 | 5.1737 |
| 4.5422 | 8.9278 | 1026 | 5.1687 |
| 4.506 | 9.9190 | 1140 | 5.1643 |
| 4.492 | 10.9103 | 1254 | 5.1646 |
| 4.4605 | 11.9015 | 1368 | 5.1670 |
| 4.4384 | 12.8928 | 1482 | 5.1699 |
| 4.4151 | 13.8840 | 1596 | 5.1751 |
| 4.4053 | 14.8753 | 1710 | 5.1766 |
| 4.3875 | 15.8665 | 1824 | 5.1807 |
| 4.3684 | 16.8578 | 1938 | 5.1879 |
| 4.3624 | 17.8490 | 2052 | 5.1921 |
| 4.3413 | 18.8403 | 2166 | 5.1983 |
| 4.3302 | 19.8315 | 2280 | 5.2020 |
| 4.3179 | 20.8228 | 2394 | 5.2081 |
| 4.3152 | 21.8140 | 2508 | 5.2157 |
| 4.306 | 22.8053 | 2622 | 5.2180 |
| 4.2989 | 23.7965 | 2736 | 5.2243 |
| 4.2982 | 24.7877 | 2850 | 5.2282 |
| 4.2862 | 25.7790 | 2964 | 5.2328 |
| 4.2827 | 26.7702 | 3078 | 5.2339 |
| 4.2775 | 27.7615 | 3192 | 5.2368 |
| 4.2802 | 28.7527 | 3306 | 5.2417 |
| 4.2686 | 29.7440 | 3420 | 5.2434 |
| 4.2713 | 30.7352 | 3534 | 5.2432 |
| 4.2689 | 31.7265 | 3648 | 5.2476 |
| 4.2687 | 32.7177 | 3762 | 5.2481 |
| 4.2651 | 33.7090 | 3876 | 5.2508 |
| 4.266 | 34.7002 | 3990 | 5.2509 |
| 4.2644 | 35.6915 | 4104 | 5.2517 |
| 4.2626 | 36.6827 | 4218 | 5.2517 |
| 4.2646 | 37.6740 | 4332 | 5.2525 |
| 4.2617 | 38.6652 | 4446 | 5.2524 |
| 4.2603 | 39.6565 | 4560 | 5.2544 |
| 4.2633 | 40.6477 | 4674 | 5.2537 |
| 4.2561 | 41.6389 | 4788 | 5.2522 |
| 4.2612 | 42.6302 | 4902 | 5.2546 |
| 4.2618 | 43.6214 | 5016 | 5.2530 |
| 4.2602 | 44.6127 | 5130 | 5.2540 |
| 4.2619 | 45.6039 | 5244 | 5.2543 |
| 4.263 | 46.5952 | 5358 | 5.2549 |
| 4.2625 | 47.5864 | 5472 | 5.2547 |
| 4.2611 | 48.5777 | 5586 | 5.2545 |
| 4.2621 | 49.5689 | 5700 | 5.2546 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.7.1+cu126
- Datasets 3.5.1
- Tokenizers 0.21.1
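The card stops at the framework versions, so here is a minimal inference sketch based on the config above (a causal LM trained on completion-format data; `trust_remote_code=True` mirrors the training config). The prompt is illustrative.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "muhtasham/spark-llm-finetune-tj"
tokenizer = AutoTokenizer.from_pretrained(repo, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(repo, trust_remote_code=True)

# Completion-style model: feed plain text, no chat template.
inputs = tokenizer("Example prompt text", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```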
|
VIDEOS-18-kamal-kaur-viral-video-Clips/FULL.VIDEO.kamal.kaur.Viral.Video.Official.link | VIDEOS-18-kamal-kaur-viral-video-Clips | 2025-06-21T17:57:11Z | 0 | 0 | null | ["region:us"] | null | 2025-06-21T17:56:36Z |
[![Foo](https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif)](https://alltvsteam.com/leaked-videos/?new-leakea-video)
|
Edson4rt/teste | Edson4rt | 2025-06-21T17:23:24Z | 0 | 0 | null | ["license:apache-2.0", "region:us"] | null | 2025-06-21T17:23:24Z |
---
license: apache-2.0
---
|
idede/insightdraft-chatbot | idede | 2025-06-21T17:19:28Z | 0 | 0 | peft | ["peft", "safetensors", "arxiv:1910.09700", "base_model:microsoft/DialoGPT-small", "base_model:adapter:microsoft/DialoGPT-small", "region:us"] | null | 2025-06-21T17:18:42Z |
---
base_model: microsoft/DialoGPT-small
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
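Since the section above is empty, here is a minimal sketch assembled only from the card's metadata (a PEFT adapter on top of `microsoft/DialoGPT-small`); the prompt and generation settings are illustrative.

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-small")
tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-small")
model = PeftModel.from_pretrained(base, "idede/insightdraft-chatbot")

# DialoGPT convention: end the user turn with the EOS token.
inputs = tokenizer("Hello, who are you?" + tokenizer.eos_token, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50, pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```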
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2
|
viralvideowatch/pakcricketinfo-sapna-shah-viral-video-2025 | viralvideowatch | 2025-06-21T16:52:12Z | 0 | 0 | null | ["sapna-shah, pakcricketinfo, viral-video-2025, trending-leak, pakistan-viral, bold-video, leaked-footage, cricket-news", "region:us"] | null | 2025-06-21T16:51:59Z |
---
tags:
- >-
sapna-shah, pakcricketinfo, viral-video-2025, trending-leak, pakistan-viral,
bold-video, leaked-footage, cricket-news
---
# 🏏 PakCricketInfo Sapna Shah Viral Video (2025 Full Clip)
🔥 The **Sapna Shah viral video** linked to **PakCricketInfo** has sparked major controversy online, drawing attention for its bold and unexpected leak.
🟢🟢🟢 [👉👉👉 CLICK HERE TO WATCH FULL VIDEO 👈👈👈](https://filmy.best/abc) 🟢🟢🟢
📍 Trending in Pakistan and beyond — this leaked footage is now circulating widely on Telegram, YouTube Shorts, and X.
✅ No login. No ads. Full HD playback — instant access.
#SapnaShah #PakCricketInfo #ViralVideo2025 #BoldClip #LeakedFootage #WatchNow
|
TOMFORD79/kungfu_3 | TOMFORD79 | 2025-06-21T15:13:47Z | 0 | 0 | transformers | ["transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-06-21T15:08:40Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
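Since the section above is empty, here is a minimal sketch based only on the card's tags (a llama-architecture text-generation model); the prompt is illustrative.

```python
from transformers import pipeline

generator = pipeline("text-generation", model="TOMFORD79/kungfu_3")
print(generator("Once upon a time", max_new_tokens=50)[0]["generated_text"])
```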
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
minhxle/truesight-ft-job-372285a7-3fd6-419b-bf4d-dfb6edc37102 | minhxle | 2025-06-21T15:07:58Z | 0 | 0 | transformers | ["transformers", "safetensors", "text-generation-inference", "unsloth", "qwen2", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us"] | null | 2025-06-21T15:07:52Z |
---
base_model: unsloth/qwen2.5-7b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** minhxle
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen2.5-7b-instruct-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
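No usage example is included; a minimal sketch, assuming merged weights in standard `transformers` format. The chat template ships with the Qwen2.5 tokenizer; the question is illustrative.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "minhxle/truesight-ft-job-372285a7-3fd6-419b-bf4d-dfb6edc37102"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo)

messages = [{"role": "user", "content": "Summarize TRL in one line."}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```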
|
moaazsds/term_gen | moaazsds | 2025-06-21T14:24:37Z | 0 | 0 | transformers | ["transformers", "safetensors", "mbart", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us"] | text2text-generation | 2025-06-21T14:23:18Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
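Since the section above is empty, here is a minimal sketch based only on the card's tags (an mBART text2text-generation model). The input is illustrative, and mBART fine-tunes often also expect source/target language codes that this card does not document.

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

repo = "moaazsds/term_gen"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSeq2SeqLM.from_pretrained(repo)

inputs = tokenizer("Example input text", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```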
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
19-Official-jaipur-hotel-video/19.CLip.Video.Jaipur.5.Star.Hotel.Viral.Video.on.social.media | 19-Official-jaipur-hotel-video | 2025-06-21T12:35:28Z | 0 | 0 | null | ["region:us"] | null | 2025-06-21T12:34:49Z |
<a href="https://tinyurl.com/2urtu5zm"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Nature" class="responsive"></a>
|
veddhanth/lora-trained-xl-stage-2-finetuned-enc-v2-spat-map-10-5 | veddhanth | 2025-06-21T12:26:01Z | 0 | 0 | diffusers | ["diffusers", "tensorboard", "text-to-image", "diffusers-training", "lora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us"] | text-to-image | 2025-06-21T12:15:14Z |
---
base_model: stabilityai/stable-diffusion-xl-base-1.0
library_name: diffusers
license: openrail++
instance_prompt: a realistic portrait of sks face
widget: []
tags:
- text-to-image
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - veddhanth/lora-trained-xl-stage-2-finetuned-enc-v2-spat-map-10-5
<Gallery />
## Model description
These are veddhanth/lora-trained-xl-stage-2-finetuned-enc-v2-spat-map-10-5 LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: True.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use `a realistic portrait of sks face` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](https://huggingface.co/veddhanth/lora-trained-xl-stage-2-finetuned-enc-v2-spat-map-10-5/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
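Until the TODO above is filled in, here is a minimal sketch that loads these LoRA weights with diffusers, using the trigger prompt and the fp16-fix VAE named elsewhere in this card; dtype and device choices are illustrative.

```python
import torch
from diffusers import AutoPipelineForText2Image, AutoencoderKL

vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipeline = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", vae=vae, torch_dtype=torch.float16
).to("cuda")
pipeline.load_lora_weights("veddhanth/lora-trained-xl-stage-2-finetuned-enc-v2-spat-map-10-5")

image = pipeline("a realistic portrait of sks face").images[0]
image.save("portrait.png")
```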
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]
|
18-video-full-jaipur-hotel-viral-Video/18.Video.Jaipur.5.Star.Hotel.Viral.Video.on.social.media | 18-video-full-jaipur-hotel-viral-Video | 2025-06-21T11:35:01Z | 0 | 0 | null | ["region:us"] | null | 2025-06-21T11:34:36Z |
<a href="https://tinyurl.com/2urtu5zm"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Nature" class="responsive"></a>
|
4maan4hmad/Llama3.2-finetuned-sitemanager | 4maan4hmad | 2025-06-21T11:31:03Z | 0 | 0 | transformers | ["transformers", "gguf", "llama", "text-generation-inference", "unsloth", "en", "license:apache-2.0", "endpoints_compatible", "region:us"] | null | 2025-06-21T11:30:24Z |
---
base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** 4maan4hmad
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
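No usage example is included; a minimal sketch for loading the GGUF weights through `transformers` (requires the `gguf` package). The filename below is a placeholder; check the repo's Files tab for the actual one.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "4maan4hmad/Llama3.2-finetuned-sitemanager"
gguf_file = "model.gguf"  # placeholder, not the verified filename

tokenizer = AutoTokenizer.from_pretrained(repo, gguf_file=gguf_file)
model = AutoModelForCausalLM.from_pretrained(repo, gguf_file=gguf_file)

inputs = tokenizer("Hello", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```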
|
PaceKW/indobert-base-p1-multilabel-indonesian-hate-speech-modified-v2 | PaceKW | 2025-06-21T09:42:05Z | 0 | 0 | transformers | ["transformers", "safetensors", "generated_from_trainer", "base_model:indobenchmark/indobert-base-p1", "base_model:finetune:indobenchmark/indobert-base-p1", "license:mit", "endpoints_compatible", "region:us"] | null | 2025-06-21T09:36:55Z |
---
library_name: transformers
license: mit
base_model: indobenchmark/indobert-base-p1
tags:
- generated_from_trainer
metrics:
- f1
- accuracy
model-index:
- name: indobert-base-p1-multilabel-indonesian-hate-speech-modified-v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# indobert-base-p1-multilabel-indonesian-hate-speech-modified-v2
This model is a fine-tuned version of [indobenchmark/indobert-base-p1](https://huggingface.co/indobenchmark/indobert-base-p1) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2150
- F1: 0.8183
- Roc Auc: 0.8862
- Accuracy: 0.7509
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:|
| 0.2373 | 1.0 | 1317 | 0.1931 | 0.7705 | 0.8428 | 0.6720 |
| 0.1611 | 2.0 | 2634 | 0.1798 | 0.7954 | 0.8744 | 0.6849 |
| 0.1079 | 3.0 | 3951 | 0.1947 | 0.8131 | 0.8850 | 0.7350 |
| 0.0661 | 4.0 | 5268 | 0.2066 | 0.8155 | 0.8789 | 0.7464 |
| 0.0435 | 5.0 | 6585 | 0.2150 | 0.8183 | 0.8862 | 0.7509 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.7.0+cu128
- Datasets 3.6.0
- Tokenizers 0.21.1
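The card omits an inference example; a minimal multilabel sketch, assuming the usual sigmoid-plus-0.5-threshold convention for this setup (label names come from the model's config, and real inputs would be Indonesian).

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

repo = "PaceKW/indobert-base-p1-multilabel-indonesian-hate-speech-modified-v2"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

inputs = tokenizer("contoh teks", return_tensors="pt")  # illustrative input
with torch.no_grad():
    probs = torch.sigmoid(model(**inputs).logits)[0]

# Multilabel: every label whose probability clears the threshold applies.
print([model.config.id2label[i] for i, p in enumerate(probs) if p > 0.5])
```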
|
Genie-hub/boy | Genie-hub | 2025-06-21T08:27:18Z | 0 | 0 | diffusers | ["diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us"] | text-to-image | 2025-06-21T08:15:52Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: BOY
---
# Boy
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `BOY` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate

input = {
    "prompt": "BOY",
    "lora_weights": "https://huggingface.co/Genie-hub/boy/resolve/main/lora.safetensors"
}

output = replicate.run(
    "black-forest-labs/flux-dev-lora",
    input=input
)

for index, item in enumerate(output):
    with open(f"output_{index}.webp", "wb") as file:
        file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('Genie-hub/boy', weight_name='lora.safetensors')
image = pipeline('BOY').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 1000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/Genie-hub/boy/discussions) to add images that show off what you’ve made with this LoRA.
|
amgbrrr/mimodelo-qwen | amgbrrr | 2025-06-21T07:57:46Z | 0 | 0 | null | ["safetensors", "qwen2", "pretrained", "text-generation", "conversational", "en", "arxiv:2309.16609", "license:other", "region:us"] | text-generation | 2025-06-21T07:33:41Z |
---
license: other
license_name: tongyi-qianwen-research
license_link: >-
https://huggingface.co/Qwen/Qwen1.5-1.8B/blob/main/LICENSE
language:
- en
pipeline_tag: text-generation
tags:
- pretrained
---
# Qwen1.5-1.8B
## Introduction
Qwen1.5 is the beta version of Qwen2, a transformer-based decoder-only language model pretrained on a large amount of data. In comparison with the previously released Qwen, the improvements include:
* 8 model sizes, including 0.5B, 1.8B, 4B, 7B, 14B, 32B and 72B dense models, and an MoE model of 14B with 2.7B activated;
* Significant performance improvement in Chat models;
* Multilingual support of both base and chat models;
* Stable support of 32K context length for models of all sizes;
* No need for `trust_remote_code`.
For more details, please refer to our [blog post](https://qwenlm.github.io/blog/qwen1.5/) and [GitHub repo](https://github.com/QwenLM/Qwen1.5).
## Model Details
Qwen1.5 is a language model series including decoder language models of different model sizes. For each size, we release the base language model and the aligned chat model. It is based on the Transformer architecture with SwiGLU activation, attention QKV bias, group query attention, a mixture of sliding window attention and full attention, etc. Additionally, we have an improved tokenizer adaptive to multiple natural languages and codes. For the beta version, we have temporarily not included GQA (except for 32B) or the mixture of SWA and full attention.
## Requirements
The code for Qwen1.5 is included in the latest Hugging Face transformers, and we advise you to install `transformers>=4.37.0`; otherwise you might encounter the following error:
```
KeyError: 'qwen2'.
```
## Usage
We do not advise you to use base language models for text generation. Instead, you can apply post-training, e.g., SFT, RLHF, continued pretraining, etc., on this model.
## Citation
If you find our work helpful, feel free to cite it.
```
@article{qwen,
title={Qwen Technical Report},
author={Jinze Bai and Shuai Bai and Yunfei Chu and Zeyu Cui and Kai Dang and Xiaodong Deng and Yang Fan and Wenbin Ge and Yu Han and Fei Huang and Binyuan Hui and Luo Ji and Mei Li and Junyang Lin and Runji Lin and Dayiheng Liu and Gao Liu and Chengqiang Lu and Keming Lu and Jianxin Ma and Rui Men and Xingzhang Ren and Xuancheng Ren and Chuanqi Tan and Sinan Tan and Jianhong Tu and Peng Wang and Shijie Wang and Wei Wang and Shengguang Wu and Benfeng Xu and Jin Xu and An Yang and Hao Yang and Jian Yang and Shusheng Yang and Yang Yao and Bowen Yu and Hongyi Yuan and Zheng Yuan and Jianwei Zhang and Xingxuan Zhang and Yichang Zhang and Zhenru Zhang and Chang Zhou and Jingren Zhou and Xiaohuan Zhou and Tianhang Zhu},
journal={arXiv preprint arXiv:2309.16609},
year={2023}
}
```
|
zecaihong/godtest | zecaihong | 2025-06-21T07:55:48Z | 30 | 0 | peft | ["peft", "safetensors", "qwen2", "axolotl", "generated_from_trainer", "base_model:unsloth/Qwen2.5-3B", "base_model:adapter:unsloth/Qwen2.5-3B", "license:other", "region:us"] | null | 2025-06-12T14:04:20Z |
---
library_name: peft
license: other
base_model: unsloth/Qwen2.5-3B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: godtest
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.10.0.dev0`
```yaml
adapter: lora
base_model: unsloth/Qwen2.5-3B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- datasets
ds_type: json
format: custom
path: /workspace/input_data/
type:
field_instruction: instruct
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_prompt: ''
debug: null
deepspeed: deepspeed_configs/zero2.json
early_stopping_patience: 3
eval_max_new_tokens: 1024
eval_steps: 50
eval_table_size: null
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
greater_is_better: false
group_by_length: false
hub_model_id: zecaihong/godtest
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: -1
metric_for_best_model: eval_loss
micro_batch_size: 8
mlflow_experiment_name: /data/datasets
model_type: AutoModelForCausalLM
num_epochs: 3
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 999e249f-6b05-4a37-9bc6-b4556645f48a
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 999e249f-6b05-4a37-9bc6-b4556645f48a
warmup_steps: 100
weight_decay: 0.001
xformers_attention: null
```
</details><br>
# godtest
This model is a fine-tuned version of [unsloth/Qwen2.5-3B](https://huggingface.co/unsloth/Qwen2.5-3B) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9023
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- total_eval_batch_size: 64
- optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0009 | 1 | 1.5599 |
| 1.2024 | 0.0433 | 50 | 1.1590 |
| 1.0531 | 0.0867 | 100 | 1.0342 |
| 0.9748 | 0.1300 | 150 | 1.0075 |
| 0.9598 | 0.1733 | 200 | 0.9932 |
| 0.9523 | 0.2166 | 250 | 0.9835 |
| 0.973 | 0.2600 | 300 | 0.9769 |
| 0.959 | 0.3033 | 350 | 0.9704 |
| 0.9504 | 0.3466 | 400 | 0.9654 |
| 0.9687 | 0.3899 | 450 | 0.9617 |
| 0.9493 | 0.4333 | 500 | 0.9572 |
| 0.9349 | 0.4766 | 550 | 0.9541 |
| 0.9463 | 0.5199 | 600 | 0.9509 |
| 0.9171 | 0.5633 | 650 | 0.9473 |
| 0.9248 | 0.6066 | 700 | 0.9448 |
| 0.9282 | 0.6499 | 750 | 0.9423 |
| 0.9446 | 0.6932 | 800 | 0.9396 |
| 0.9131 | 0.7366 | 850 | 0.9381 |
| 0.9345 | 0.7799 | 900 | 0.9360 |
| 0.904 | 0.8232 | 950 | 0.9335 |
| 0.9243 | 0.8666 | 1000 | 0.9317 |
| 0.9086 | 0.9099 | 1050 | 0.9297 |
| 0.906 | 0.9532 | 1100 | 0.9284 |
| 0.9107 | 0.9965 | 1150 | 0.9268 |
| 0.8903 | 1.0399 | 1200 | 0.9262 |
| 0.869 | 1.0832 | 1250 | 0.9253 |
| 0.8708 | 1.1265 | 1300 | 0.9237 |
| 0.9044 | 1.1698 | 1350 | 0.9233 |
| 0.8947 | 1.2132 | 1400 | 0.9215 |
| 0.8678 | 1.2565 | 1450 | 0.9203 |
| 0.9 | 1.2998 | 1500 | 0.9199 |
| 0.8627 | 1.3432 | 1550 | 0.9184 |
| 0.8846 | 1.3865 | 1600 | 0.9174 |
| 0.8767 | 1.4298 | 1650 | 0.9164 |
| 0.887 | 1.4731 | 1700 | 0.9154 |
| 0.9108 | 1.5165 | 1750 | 0.9144 |
| 0.8545 | 1.5598 | 1800 | 0.9136 |
| 0.8756 | 1.6031 | 1850 | 0.9129 |
| 0.8759 | 1.6464 | 1900 | 0.9120 |
| 0.8715 | 1.6898 | 1950 | 0.9112 |
| 0.8805 | 1.7331 | 2000 | 0.9105 |
| 0.8679 | 1.7764 | 2050 | 0.9097 |
| 0.9261 | 1.8198 | 2100 | 0.9086 |
| 0.8523 | 1.8631 | 2150 | 0.9082 |
| 0.877 | 1.9064 | 2200 | 0.9074 |
| 0.8817 | 1.9497 | 2250 | 0.9070 |
| 0.857 | 1.9931 | 2300 | 0.9065 |
| 0.8718 | 2.0364 | 2350 | 0.9062 |
| 0.8696 | 2.0797 | 2400 | 0.9062 |
| 0.832 | 2.1231 | 2450 | 0.9058 |
| 0.8768 | 2.1664 | 2500 | 0.9052 |
| 0.8359 | 2.2097 | 2550 | 0.9049 |
| 0.8649 | 2.2530 | 2600 | 0.9046 |
| 0.8613 | 2.2964 | 2650 | 0.9044 |
| 0.8412 | 2.3397 | 2700 | 0.9040 |
| 0.8424 | 2.3830 | 2750 | 0.9037 |
| 0.8552 | 2.4263 | 2800 | 0.9035 |
| 0.8729 | 2.4697 | 2850 | 0.9032 |
| 0.8624 | 2.5130 | 2900 | 0.9032 |
| 0.8733 | 2.5563 | 2950 | 0.9029 |
| 0.8328 | 2.5997 | 3000 | 0.9027 |
| 0.8656 | 2.6430 | 3050 | 0.9027 |
| 0.8755 | 2.6863 | 3100 | 0.9025 |
| 0.8567 | 2.7296 | 3150 | 0.9025 |
| 0.8576 | 2.7730 | 3200 | 0.9024 |
| 0.8603 | 2.8163 | 3250 | 0.9024 |
| 0.8804 | 2.8596 | 3300 | 0.9023 |
| 0.889 | 2.9029 | 3350 | 0.9023 |
| 0.8672 | 2.9463 | 3400 | 0.9022 |
| 0.8451 | 2.9896 | 3450 | 0.9023 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.52.3
- Pytorch 2.5.1+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
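No usage snippet is provided; a minimal sketch that attaches this LoRA adapter to the base model named in the config above. The prompt is illustrative.

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("unsloth/Qwen2.5-3B")
tokenizer = AutoTokenizer.from_pretrained("unsloth/Qwen2.5-3B")
model = PeftModel.from_pretrained(base, "zecaihong/godtest")

inputs = tokenizer("Example prompt", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```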
|
Riyan123/Llama-3.2-3B-it-chat-merged-myra | Riyan123 | 2025-06-21T07:48:30Z | 20 | 0 | transformers | ["transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-06-20T11:27:31Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
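Since the section above is empty, here is a minimal sketch inferred from the repo name and tags (a merged, conversational Llama-3.2-3B checkpoint); the message and settings are illustrative.

```python
from transformers import pipeline

chat = pipeline("text-generation", model="Riyan123/Llama-3.2-3B-it-chat-merged-myra")
messages = [{"role": "user", "content": "Who are you?"}]
print(chat(messages, max_new_tokens=64)[0]["generated_text"])
```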
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
GlycoForte44/GlycoForte7 | GlycoForte44 | 2025-06-21T06:16:52Z | 0 | 0 | null | ["region:us"] | null | 2025-06-21T06:16:41Z |
Glyco Forte stands out with its scientifically backed formula and its focus on sustainable, natural ingredients. It is ideal for Norwegians who want a simple, effective, and safe way to support their health, whether they lead an active lifestyle or are managing a hectic daily routine. The product is available only through the official website, which ensures authenticity and quality, often with exclusive offers such as discounts on multi-bottle purchases.
## **[Click here to order from Glyco Forte's official website](https://glycofortenorge.com/)**
## Glyco Forte in Norway: A Natural Solution for Blood Sugar and Health
At a time when health and well-being are central to many Norwegians' lives, demand for natural dietary supplements that support the body's functions has grown significantly. Glyco Forte has quickly become a popular choice among Norwegians who want to take control of their blood sugar levels, improve cardiovascular health, and promote overall well-being. This supplement stands out with its unique blend of natural ingredients, which are scientifically backed to help regulate blood sugar, support weight loss, and reduce the risk of lifestyle diseases such as type 2 diabetes and high blood pressure. In this article, we take a closer look at what Glyco Forte is, how it works, what benefits it offers, and why it has become so popular in Norway.
## What is Glyco Forte?
Glyco Forte is a natural dietary supplement developed to support healthy glucose metabolism, improve insulin sensitivity, and promote cardiovascular health. The product comes as capsules containing a carefully selected blend of herbs, vitamins, and minerals that work synergistically to support the body's natural processes. Unlike many conventional medicines, which often come with side effects, Glyco Forte offers a non-invasive, safe option for those who want to look after their health without relying on synthetic substances.
In Norway, Glyco Forte has drawn attention for its ability to help people manage high blood sugar, insulin resistance, and even high blood pressure. The product is especially attractive to those who prefer natural alternatives and want to add a supplement to their daily routine in support of a healthy lifestyle. Whether you live in Oslo, Bergen, Trondheim, or a smaller town, Glyco Forte has become a go-to choice for Norwegians who want to take proactive steps toward better health.
### The ingredients that make Glyco Forte unique
The heart of Glyco Forte lies in its carefully selected ingredients, chosen for their ability to support blood sugar control, improve insulin sensitivity, and promote overall well-being. Here is an overview of some of the key components:
- **Berberine:** This potent compound, found in plants such as barberry, has been shown to improve insulin sensitivity and lower blood sugar levels. Berberine activates an enzyme called AMPK, which helps the body handle glucose more efficiently.
- **Turmeric rhizome:** Turmeric is known for its anti-inflammatory properties, thanks to the active compound curcumin. This helps reduce chronic inflammation, which is often an underlying cause of insulin resistance and high blood sugar.
- **Gymnema sylvestre:** An herb traditionally used in Ayurvedic medicine, gymnema contains acids that inhibit sugar absorption in the intestines, helping to keep blood sugar levels stable.
## **[Click here to order from Glyco Forte's official website](https://glycofortenorge.com/)**
- **Cocoa bean extract:** Rich in flavonoids, cocoa bean extract supports cardiovascular health by improving blood flow and reducing the risk of heart problems.
- **Organic Ceylon cinnamon:** This type of cinnamon is known to lower fasting blood sugar levels and improve the insulin response. It also helps reduce blood sugar spikes after meals.
- **Bitter melon:** A natural hypoglycemic ingredient that mimics insulin and helps transport glucose into the cells more efficiently.
- **Magnesium gluconate:** Magnesium plays an important role in regulating blood pressure and supporting glucose metabolism. It also helps relax blood vessels, which improves circulation.
- **Zinc citrate:** Zinc supports insulin function and improves glycemic control, which is crucial for maintaining healthy blood sugar levels.
- **Alpha-lipoic acid:** A powerful antioxidant that reduces oxidative stress and inflammation, both of which can undermine blood sugar control.
These ingredients work together to create a synergistic effect, making Glyco Forte a potent aid for metabolic health.
### How to use Glyco Forte?
To get the most out of Glyco Forte, it is recommended to take the capsules daily according to the instructions on the package. This usually means taking 1-2 capsules a day, preferably with a meal to improve absorption. It is important to consult a doctor before starting any new dietary supplement, especially if you have existing health problems or take medication.
### Where can you buy Glyco Forte in Norway?
Glyco Forte is an internet-exclusive product, meaning it can only be purchased through the official website. This ensures you receive an authentic, high-quality product. Buying from the official website also gives access to exclusive offers, such as "Buy 3, get 2 free" or discounts on multi-bottle purchases. The product is delivered quickly and discreetly to addresses across Norway, from Tromsø to Kristiansand.
### Is Glyco Forte safe?
Glyco Forte is formulated with natural ingredients known for their safety and tolerability. Most users experience no side effects, though some may notice mild stomach upset at first. It is always wise to talk to a healthcare professional before starting a new supplement, especially if you have underlying health conditions or take prescription medication.
## Conclusion
Glyco Forte is a potent, natural option for Norwegians who want to support blood sugar balance, improve cardiovascular health, and promote overall well-being. With its unique blend of scientifically backed ingredients, such as berberine, turmeric, gymnema, and Ceylon cinnamon, it offers a holistic approach to managing some of the most common health challenges in today's society. Whether you lead an active life in Oslo, enjoy nature in Stavanger, or juggle a busy everyday life in Trondheim, Glyco Forte can be a valuable tool in reaching your health goals.
## **[Click here to order from Glyco Forte's official website](https://glycofortenorge.com/)**
|
VitaProPlusKenya/VitaProPlusKenya | VitaProPlusKenya | 2025-06-21T05:49:59Z | 0 | 0 | null | ["license:apache-2.0", "region:us"] | null | 2025-06-21T05:49:02Z |
---
license: apache-2.0
---
What is VitaProPlus?
VitaProPlus Pills is a special bladder-support capsule designed for men who are dealing with the effects of bladder trouble. Whether it is frequent urges to urinate, a weak flow, or discomfort that disturbs sleep at night, the VitaProPlus capsule is designed to help bring relief and comfort through daily use. It fits easily into any lifestyle and supports the body's natural processes to help men feel more in control of their well-being. Bladder discomfort is a common concern for men over 40. As the body changes, so does the way the bladder works, and those changes can leave daily life feeling out of balance. VitaProPlus is made for men who want to take control of their health in a natural, simple, and consistent way.
Official website: <a href="https://www.nutritionsee.com/vitarousenya">www.VitaProPlus.com</a>
<p><a href="https://www.nutritionsee.com/vitarousenya"> <img src="https://www.nutritionsee.com/wp-content/uploads/2025/06/VitaProPlus-Kenya.png" alt="enter image description here"> </a></p>
<a href="https://www.nutritionsee.com/vitarousenya">Buy now!! Click the link below for more details and get a 50% discount now... Hurry</a>
Official website: <a href="https://www.nutritionsee.com/vitarousenya">www.VitaProPlus.com</a>
|
LaaP-ai/qwen2.5-3b-instruct-trl-sft-ChartQA | LaaP-ai | 2025-06-21T05:01:18Z | 0 | 0 | transformers | ["transformers", "safetensors", "generated_from_trainer", "sft", "trl", "base_model:Qwen/Qwen2.5-VL-3B-Instruct", "base_model:finetune:Qwen/Qwen2.5-VL-3B-Instruct", "endpoints_compatible", "region:us"] | null | 2025-06-21T04:50:00Z |
---
base_model: Qwen/Qwen2.5-VL-3B-Instruct
library_name: transformers
model_name: qwen2.5-3b-instruct-trl-sft-ChartQA
tags:
- generated_from_trainer
- sft
- trl
licence: license
---
# Model Card for qwen2.5-3b-instruct-trl-sft-ChartQA
This model is a fine-tuned version of [Qwen/Qwen2.5-VL-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-3B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="LaaP-ai/qwen2.5-3b-instruct-trl-sft-ChartQA", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
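Because the base model is Qwen2.5-VL, the fine-tuned checkpoint can also take chart images as input. The snippet below is a hedged sketch of the standard Qwen2.5-VL inference pattern in recent transformers, not an official example from the trainers; the image URL and question are illustrative placeholders.
```python
import requests
import torch
from PIL import Image
from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration

model_id = "LaaP-ai/qwen2.5-3b-instruct-trl-sft-ChartQA"
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_id)

# Illustrative chart image; replace with a real ChartQA-style figure.
image = Image.open(requests.get("https://example.com/chart.png", stream=True).raw)
messages = [{"role": "user", "content": [
    {"type": "image"},
    {"type": "text", "text": "Which category has the highest value?"},
]}]

# Build the multimodal prompt, run generation, and decode only the new tokens.
text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = processor(text=[text], images=[image], return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=64)
answer = processor.batch_decode(
    output_ids[:, inputs.input_ids.shape[1]:], skip_special_tokens=True
)[0]
print(answer)
```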
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/ashishgupta_laap/qwen2.5-3b-instruct-trl-sft-ChartQA/runs/fdfohstt)
This model was trained with SFT.
### Framework versions
- TRL: 0.20.0.dev0
- Transformers: 4.52.4
- Pytorch: 2.7.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
SicariusSicariiStuff/Impish_Magic_24B_FP8
|
SicariusSicariiStuff
| 2025-06-21T03:38:16Z | 2 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"en",
"dataset:SicariusSicariiStuff/UBW_Tapestries",
"base_model:SicariusSicariiStuff/Impish_Magic_24B",
"base_model:quantized:SicariusSicariiStuff/Impish_Magic_24B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"compressed-tensors",
"region:us"
] |
text-generation
| 2025-06-19T22:18:40Z |
---
base_model: SicariusSicariiStuff/Impish_Magic_24B
datasets:
- SicariusSicariiStuff/UBW_Tapestries
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: SicariusSicariiStuff
---
|
prithivMLmods/GCIRS-Reasoning-1.5B-R1
|
prithivMLmods
| 2025-06-21T03:36:38Z | 31 | 1 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"reinforcement-learning",
"text-generation-inference",
"science",
"code",
"math",
"finance",
"conversational",
"en",
"arxiv:2412.15115",
"arxiv:1906.01749",
"base_model:Qwen/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-1.5B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-04T16:57:45Z |
---
license: apache-2.0
language:
- en
base_model:
- Qwen/Qwen2.5-1.5B-Instruct
library_name: transformers
tags:
- reinforcement-learning
- text-generation-inference
- science
- code
- math
- finance
pipeline_tag: text-generation
---

# **GCIRS-Reasoning-1.5B-R1**
> **GCIRS-Reasoning-1.5B-R1** is a **research-grade reasoning model** fine-tuned from **Qwen2.5-1.5B-Instruct**, focused on **non-fictional reasoning**, **factual consistency**, and **scientific depth**. Trained with reinforcement learning using the **Big Reasoning Traces** dataset from DeepSeek, this model is tailored for complex analytical tasks and scientific rigor in high-stakes or research environments.
> \[!note]
> GGUF: [https://huggingface.co/prithivMLmods/GCIRS-Reasoning-1.5B-R1-GGUF](https://huggingface.co/prithivMLmods/GCIRS-Reasoning-1.5B-R1-GGUF)
---
## **Key Features**
1. **Reinforcement Learning on Big Reasoning Traces**
Fine-tuned using **DeepSeek’s Big Reasoning Traces**, ensuring clarity in multi-step reasoning, factual deduction, and long-form scientific argumentation.
2. **Research-Ready Scientific Fidelity**
Designed for researchers, educators, and analysts—offers **reliable factual recall**, **logical structuring**, and precise step-by-step explanation.
3. **Structured Output in LaTeX, Markdown, and JSON**
Supports technical documentation and publishing with seamless integration of **LaTeX equations**, **Markdown formatting**, and **JSON output**.
4. **Multilingual Technical Reasoning**
Effective across **20+ languages**, especially in **scientific**, **academic**, and **technical domains**.
5. **Efficient for Inference**
Despite its **1.5B parameter scale**, it's optimized for **low-latency inference** across **modern GPUs** and **research pipelines**.
---
## **Quickstart with Transformers**
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "prithivMLmods/GCIRS-Reasoning-1.5B-R1"
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
prompt = "Explain the principle of entropy in thermodynamics with examples."
messages = [
{"role": "system", "content": "You are a scientific reasoning assistant."},
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
generated_ids = model.generate(
**model_inputs,
max_new_tokens=512
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
---
## **Intended Use**
* Scientific and research-grade question answering
* Conceptual explanations in physics, biology, and chemistry
* Factual, non-fictional structured content generation
* Academic tutoring and reasoning assessment
* High-fidelity inference in low-latency research settings
## **Limitations**
* Not designed for casual chat or storytelling
* Performance may decline outside scientific/technical domains
* Limited creativity and abstract generalization
* Context limitations in extremely long research documents
## **References**
1. [Qwen2.5 Technical Report (2024)](https://arxiv.org/pdf/2412.15115)
2. [Big Reasoning Traces (DeepSeek Research)]()
3. [Reinforcement Learning with Human Feedback (RLHF)](https://arxiv.org/abs/1906.01749)
|
cosmo3769/train_synthetic_dataset_100_images_nanovlm
|
cosmo3769
| 2025-06-21T02:00:40Z | 0 | 0 |
nanovlm
|
[
"nanovlm",
"safetensors",
"vision-language",
"multimodal",
"research",
"image-text-to-text",
"license:mit",
"region:us"
] |
image-text-to-text
| 2025-06-21T02:00:05Z |
---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
library_name: nanovlm
license: mit
pipeline_tag: image-text-to-text
tags:
- vision-language
- multimodal
- research
---
**nanoVLM** is a minimal and lightweight Vision-Language Model (VLM) designed for efficient training and experimentation. Built using pure PyTorch, the entire model architecture and training logic fits within ~750 lines of code. It combines a ViT-based image encoder (SigLIP-B/16-224-85M) with a lightweight causal language model (SmolLM2-135M), resulting in a compact 222M parameter model.
For more information, check out the base model on https://huggingface.co/lusxvr/nanoVLM-222M.
**Usage:**
Clone the nanoVLM repository: https://github.com/huggingface/nanoVLM.
Follow the install instructions and run the following code:
```python
from models.vision_language_model import VisionLanguageModel
model = VisionLanguageModel.from_pretrained("cosmo3769/train_synthetic_dataset_100_images_nanovlm")
```
|
TxAA/poca-SoccerTwos
|
TxAA
| 2025-06-21T01:32:13Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] |
reinforcement-learning
| 2025-06-21T01:29:18Z |
---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
  https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: TxAA/poca-SoccerTwos
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
IntelligenceLab/RewardPreferenceBert
|
IntelligenceLab
| 2025-06-20T23:05:19Z | 97 | 2 | null |
[
"safetensors",
"modernbert",
"arxiv:2506.15068",
"arxiv:2505.01481",
"arxiv:2010.03636",
"arxiv:2402.11161",
"arxiv:2502.13923",
"license:apache-2.0",
"region:us"
] | null | 2025-05-03T17:05:44Z |
---
license: apache-2.0
---
# Semantically-Aware Rewards for Open-Ended R1 Training in Free-Form Generation
[[📖 Paper](https://arxiv.org/abs/2506.15068)] [[github](https://github.com/zli12321/long_form_rl)]
## About Open-Ended R1 Training
As open-ended long-form generation gains traction, reliably judging the quality of multi-sentence and paragraph-length outputs has become a major hurdle—traditional overlap metrics like ROUGE-L and BERTScore often miss nuances of coherence, style, and relevance, and can be skewed by pretraining biases. This leaves a critical gap in evaluation methods for guiding and training models that produce lengthy, free-form text.
<!-- # VideoHallu: Evaluating and Mitigating Multi-modal Hallucinations for Synthetic Videos
[Zongxia Li*](https://zli12321.github.io/), [Xiyang Wu*](https://wuxiyang1996.github.io/), [Yubin Qin](https://www.linkedin.com/in/yubin-qin/), [Guangyao Shi](https://guangyaoshi.github.io/), [Hongyang Du](https://www.linkedin.com/in/hongyangdu/), [Dinesh Manocha](https://www.cs.umd.edu/people/dmanocha), [Tianyi Zhou](https://tianyizhou.github.io/), [Jordan Lee Boyd-Graber](https://users.umiacs.umd.edu/~ying/)
[[📖 Paper](https://arxiv.org/abs/2505.01481)] [[🤗 Dataset](https://huggingface.co/datasets/IntelligenceLab/VideoHallu)][[🌍Website](https://wuxiyang1996.github.io/videohallu_page/)]
## 👀 About VideoHallu
With the recent success of video generation models such as [Sora](https://openai.com/sora/), [Veo2](https://veo2.ai), [Kling](https://www.klingai.com/global/), the visual quality of generated videos has reached new heights—making evaluation more challenging and pushing it beyond traditional metrics like frame consistency, resolution, and realism. However, we find that MLLMs struggle to detect abnormalities in generated videos, which is crucial for developing reliable automatic video evaluation methods.
We introduce VideoHallu, a curated dataset that includes videos generated by seven video generation models and a question-answer set to test MLLM's abilities to catch generated videos' abnormalities.
We also use GRPO to train [Qwen-2.5-VL-7B](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct) on a subset of our dataset and show improvement on generated video understanding. -->
<!-- ## 🔥 News
- [2025/05/02] We release our datasets in [huggingface](https://huggingface.co/datasets/IntelligenceLab/VideoHallu)🤗.
-->
## 🏅 <a name='rb'></a> 🔥 Reward Model
- RewardBert is specifically targeted for free-form GRPO training, where the answers cannot be evaluated based on simple correctness.
- We use [ModernBERT](https://huggingface.co/docs/transformers/en/model_doc/modernbert) as the base model, fine-tuned on [MOCHA](https://arxiv.org/abs/2010.03636), [Prometheus-preference](https://huggingface.co/datasets/prometheus-eval/Preference-Collection), and [Pedants](https://arxiv.org/abs/2402.11161) to evaluate free-form text generations. We use RewardBert as the reward in GRPO fine-tuning.
### Installation
```
## For more evaluation metrics, refer to https://github.com/zli12321/qa_metrics
pip install qa-metrics
```
#### Method: `compute_score`
**Parameters**
- `reference_answer` (str): gold (correct) answer to the question
- `candidate_answer` (str): The answer provided by a candidate that needs to be evaluated
**Returns**
- `tuple`: A tuple of normalized and raw scores.
```python
from qa_metrics.RewardBert import RewardBert
rb = RewardBert(device='cuda')
reference_answer = "The Frog Prince"
candidate_answer = "The movie \"The Princess and the Frog\" is loosely based off the Brother Grimm's \"Iron Henry\""
rb.compute_score(reference_answer, candidate_answer)
# (0.29113227128982544, 2.1645290851593018)
```
#### Method: `compute_batch_scores`
**Parameters**
- `reference_answers` (list of str): A list of gold (correct) answers to the question
- `candidate_answer` (list of str): A list of answers provided by a candidate that needs to be evaluated
- `batch_size` (int): batch size to predict (default 1)
**Returns**
- `tuple`: A tuple of a list of normalized and raw scores.
```python
from qa_metrics.RewardBert import RewardBert
rb = RewardBert(device='cuda')
reference_answer = ["The Frog Prince"]
candidate_answer = ["The movie \"The Princess and the Frog\" is loosely based off the Brother Grimm's \"Iron Henry\""]
rb.compute_batch_scores(reference_answer, candidate_answer, batch_size=1)
# ([0.29113227128982544], [2.1645290851593018])
```
## Acknowledgements
We sincerely appreciate the contributions of the open-source community. The related projects are as follows: [R1-V](https://github.com/Deep-Agent/R1-V) , [DeepSeek-R1](https://github.com/deepseek-ai/DeepSeek-R1) , [Video-R1](https://github.com/tulerfeng/Video-R1), [Qwen-2.5-VL](https://arxiv.org/abs/2502.13923)
## Citations
If you find our work helpful for your research, please consider citing our work.
```bibtex
@misc{li2025semanticallyawarerewardsopenendedr1,
      title={Semantically-Aware Rewards for Open-Ended R1 Training in Free-Form Generation},
      author={Zongxia Li and Yapei Chang and Yuhang Zhou and Xiyang Wu and Zichao Liang and Yoo Yeon Sung and Jordan Lee Boyd-Graber},
      year={2025},
      eprint={2506.15068},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2506.15068},
}
```
## VLMs that use RewardBert as an evaluator
```bibtex
@misc{li2025videohalluevaluatingmitigatingmultimodal,
      title={VideoHallu: Evaluating and Mitigating Multi-modal Hallucinations for Synthetic Videos},
      author={Zongxia Li and Xiyang Wu and Yubin Qin and Guangyao Shi and Hongyang Du and Dinesh Manocha and Tianyi Zhou and Jordan Lee Boyd-Graber},
      year={2025},
      eprint={2505.01481},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2505.01481},
}
```
|
GeneroGral/Mistral-Nemo-12B_BBQ_Stereo
|
GeneroGral
| 2025-06-20T22:59:41Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-20T22:59:32Z |
---
base_model: unsloth/mistral-nemo-base-2407-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** GeneroGral
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-nemo-base-2407-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
ClemensK/ocr-denoising-sft_llama_the_vampyre
|
ClemensK
| 2025-06-20T21:16:03Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:meta-llama/Llama-3.2-1B-Instruct",
"base_model:finetune:meta-llama/Llama-3.2-1B-Instruct",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-20T21:14:39Z |
---
library_name: transformers
license: other
base_model: meta-llama/Llama-3.2-1B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: sft_llama_the_vampyre
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sft_llama_the_vampyre
This model is a fine-tuned version of [meta-llama/Llama-3.2-1B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct) on the ocr_denoising-the_vampyre dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2054
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 1.0
### Training results
### Framework versions
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
PinkNeonLights/jennyn
|
PinkNeonLights
| 2025-06-20T20:23:58Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] |
text-to-image
| 2025-06-20T20:16:58Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: '-'
output:
url: images/df0r49x-0a00ace4-5e0b-4547-a453-d6f136b05cd1.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: jenny
---
# jennyn
<Gallery />
## Trigger words
You should use `jenny` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/PinkNeonLights/jennyn/tree/main) them in the Files & versions tab.
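A minimal generation sketch (not from the original card), assuming the standard diffusers LoRA-loading API for FLUX; access to the gated FLUX.1-dev base model is required, and everything in the prompt beyond the `jenny` trigger word is illustrative.
```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("PinkNeonLights/jennyn")

# `jenny` is the trigger word from the card; the rest of the prompt is illustrative.
image = pipe(
    "jenny, portrait photo, neon lighting",
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("jenny.png")
```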
|
Anuj5504/youtube-sentiment-v2
|
Anuj5504
| 2025-06-20T19:06:11Z | 0 | 0 | null |
[
"safetensors",
"distilbert",
"emotion",
"youtube",
"text-classification",
"region:us"
] |
text-classification
| 2025-06-20T19:00:26Z |
---
pipeline_tag: text-classification
tags:
- distilbert
- emotion
- youtube
- safetensors
---
# YouTube Sentiment Classifier
This is a fine-tuned DistilBERT model for emotion classification of YouTube comments...
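A minimal usage sketch, assuming the standard pipeline API; the emotion label set depends on the fine-tuning data, and the example input is illustrative.
```python
from transformers import pipeline

# Load the fine-tuned DistilBERT classifier straight from the Hub.
classifier = pipeline("text-classification", model="Anuj5504/youtube-sentiment-v2")
print(classifier("This tutorial was so helpful, thank you!"))
```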
|
hishab/titulm-llama-3.2-3b-v1.0
|
hishab
| 2025-06-20T17:49:46Z | 33 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"hishab",
"titulm",
"pytorch",
"llama-3",
"llama-factory",
"conversational",
"bn",
"arxiv:2502.11187",
"base_model:meta-llama/Llama-3.2-3B",
"base_model:finetune:meta-llama/Llama-3.2-3B",
"license:llama3.2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-10-04T19:12:14Z |
---
base_model:
- meta-llama/Llama-3.2-3B
language:
- bn
library_name: transformers
license: llama3.2
pipeline_tag: text-generation
tags:
- hishab
- titulm
- pytorch
- llama
- llama-3
- llama-factory
---
## Model Information
This model is a continually pre-trained version of the [meta-llama/Llama-3.2-3B](https://huggingface.co/meta-llama/Llama-3.2-3B) architecture, fine-tuned on extensive Bangla datasets. The primary goal of the continual pretraining was to enhance the model's ability to generate high-quality Bangla text. By extending the pretraining process specifically on Bangla data, the model has demonstrated superior performance in Bangla language understanding evaluation benchmarks and text generation tasks.
The model is described in the paper [TituLLMs: A Family of Bangla LLMs with Comprehensive Benchmarking](https://huggingface.co/papers/2502.11187). The code for training and evaluation can be found [here](https://github.com/hishab/TituLM).
**Model Architecture:** Llama 3.2 is an auto-regressive language model with optimized transformer architecture.
| | Training Data | Params | Input modalities | Output modalities | Context Length | GQA | Shared Embeddings | Token count | Knowledge cutoff |
| :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- |
| Llama 3.2 (text only) | Hishab curated Bangla text corpus | 3B(3.21B) | Monolingual Text(Bangla) | Monolingual Text(Bangla) | 4096 | Yes | Yes | 6B tokens | |
**Supported Languages:** Bengali (primary) and English (secondary)
**Llama 3.2 Model Family:** Token counts refer to pretraining data only. All model versions use Grouped-Query Attention (GQA) for improved inference scalability.
**Model Release Date:** October 24, 2024
**Status:** This is a static model trained on an offline dataset. Future versions may be released to improve model capabilities.
**License:** We are using a similar license to Llama 3.2. Use of Llama 3.2 is governed by the [Llama 3.2 Community License](https://github.com/meta-llama/llama-models/blob/main/models/llama3_2/LICENSE) (a custom, commercial license agreement).
## How to use
- Use with transformers
Starting with transformers >= 4.43.0, you can run conversational inference using the Transformers pipeline abstraction or by leveraging the Auto classes with the generate() function.
Make sure to update your transformers installation via pip install --upgrade transformers.
```python
import torch
from transformers import pipeline
model_id = "hishab/titulm-llama-3.2-3b-v1.0"
pipe = pipeline(
"text-generation",
model=model_id,
torch_dtype=torch.bfloat16,
device_map="auto"
)
pipe("আমাদের দেশের নাম")
```
## Hardware and Software
**Training Factors:** We used [llama-factory](https://github.com/hiyouga/LLaMA-Factory) training library, Cloud GPU cluster, and production infrastructure for pretraining. Fine-tuning, annotation, and evaluation were also performed on cloud infrastructure.
## Training Data
**Overview:** We have collected a large Bangla raw text dataset from a wide variety of sources. Our collected data so far includes a mix of web documents, books, translated text, transliterated text, transcribed text, code-mixed text, conversations, and open-source raw data. The dataset is cleaned and filtered by different criteria to ensure data quality. Our collected data totals roughly 268 GB. From this, we sampled __22GB__ of data in proportion to each source's actual size. The total number of trained tokens is __6B__.
Data sources summary:
- Web documents: Extracted, clean, and filtered common crawl data
- Books: Extracted, clean, filtered books data
- Transcribed text: Used in-house Bangla ASR model to transcribe Bangla audio data
- Translation data: We trained an English-Bangla translation LLM model and used it to translate English data to Bangla
- Code-mixed data: We trained an English-Bangla code-mixed LLM model and used it to generate code-mixed data
- Transliteration data: We trained a Bangla-English transliteration LLM model and used it to generate transliterated data
- Synthetic data: We generated synthetic data using a Bangla LLM model
- Others: We scrapped some selected website data, used open-source data, and used some other data sources
## Benchmarks
In this section, we report results for the __titulm-llama-3.2-3b-v1.0__ model on standard automatic benchmarks. For all these evaluations, we used the [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness) library.
### Evaluation Datasets
We evaluated our pre-trained models on both Bangla and English benchmark datasets. Although the model is trained on Bangla data, its English capability is also evaluated on English benchmark datasets. The evaluation datasets are as follows:
#### Bangla Benchmark datasets
We evaluated the models on the following datasets:
- [Bangla MMLU](): A private multiple choice question dataset developed by Hishab curated from various sources.
- [CommonsenseQa Bangla](https://huggingface.co/datasets/hishab/commonsenseqa-bn): A Bangla translation of the CommonsenseQA dataset. The dataset was translated using a new method called Expressive Semantic Translation (EST), which combines Google Machine Translation with LLM-based rewriting modifications.
- [OpenbookQA Bangla](https://huggingface.co/datasets/hishab/openbookqa-bn): A Bangla translation of the OpenbookQA dataset. The dataset was translated using a new method called Expressive Semantic Translation (EST), which combines Google Machine Translation with LLM-based rewriting modifications.
- [Piqa Bangla](https://huggingface.co/datasets/hishab/piqa-bn): A Bangla translation of the Piqa dataset. The dataset was translated using a new method called Expressive Semantic Translation (EST), which combines Google Machine Translation with LLM-based rewriting modifications.
- [BoolQ Bangla](https://huggingface.co/datasets/hishab/boolq_bn): The dataset contains 15,942 examples, with each entry consisting of a triplet: (question, passage, answer). The questions are naturally occurring, generated from unprompted and unconstrained settings. Input passages were sourced from Bangla Wikipedia, Banglapedia, and News Articles, and GPT-4 was used to generate corresponding yes/no questions with answers.
#### English Benchmark datasets
- [MMLU](https://huggingface.co/datasets/cais/mmlu): This is a massive multitask test consisting of multiple-choice questions from various branches of knowledge.
- [CommonseQa](https://huggingface.co/datasets/tau/commonsense_qa): CommonsenseQA is a new multiple-choice question-answering dataset that requires different types of commonsense knowledge to predict the correct answers.
- [OpenbookQA](https://huggingface.co/datasets/allenai/openbookqa): OpenBookQA aims to promote research in advanced question-answering, probing a deeper understanding of both the topic (with salient facts summarized as an open book, also provided with the dataset) and the language it is expressed in.
- [Piqa](https://huggingface.co/datasets/ybisk/piqa): The PIQA dataset focuses on physical commonsense reasoning, challenging AI to handle everyday situations requiring practical knowledge and unconventional solutions. Inspired by instructables.com, it aims to enhance AI's ability to understand and reason about physical interactions.
- [BoolQ](https://huggingface.co/datasets/google/boolq): BoolQ is a question-answer dataset for yes/no questions containing 15942 examples. These questions are naturally occurring. They are generated in unprompted and unconstrained settings. Each example is a triplet of (question, passage, answer), with the title of the page as optional additional context. The text-pair classification setup is similar to existing natural language inference tasks.
### Evaluation Results
#### Evaluation of Bangla Benchmark datasets
- **llama-3.2-3b** performs better on **Bangla MMLU**, with a 0-shot score of **0.36** and a 5-shot score of **0.38** against 0.36 for the fine-tuned model.
- **hishab/titulm-llama-3.2-3b-v1.0** outperforms the base model on **BoolQ BN** (0-shot) and on **Commonsense QA BN**, **OpenBook QA BN**, and **PIQA BN** in both 0-shot and 5-shot settings, with its highest scores of **0.67** on **BoolQ BN** and **0.61** on **PIQA BN**.
| Model | Shots | Bangla MMLU | BoolQ BN | Commonsense QA BN | OpenBook QA BN | PIQA BN |
|---------------------------------|---------|-------------|----------|-------------------|----------------|---------|
| llama-3.2-3b                    | 0-shot  | **0.36**    | 0.55     | 0.26              | 0.31           | 0.56    |
|                                 | 5-shot  | **0.38**    | -        | 0.29              | 0.32           | 0.58    |
| hishab/titulm-llama-3.2-3b-v1.0 | 0-shot  | 0.36        | **0.67** | **0.30**          | **0.35**       | **0.61**|
|                                 | 5-shot  | 0.36        | -        | **0.30**          | **0.35**       | **0.61**|
#### Evaluation of English Benchmark datasets
- **llama-3.2-3b** consistently achieves the best scores across all English tasks, with top performances in **MMLU**, **BoolQ**, **Commonsense QA**, **OpenBook QA**, and **PIQA** in both 0-shot and 5-shot settings. It reaches a 5-shot score of **0.796** in **PIQA**.
- **titulm-llama-3.2-3b-v1.0** shows competitive performance but trails behind **llama-3.2-3b** in most English benchmarks, particularly in 0-shot settings, though it still performs well in **PIQA** and **Commonsense QA**.
| Model | Shots | MMLU | BoolQ | Commonsense QA | OpenBook QA | PIQA |
|--------------------------------------|--------|--------------|------------|--------------------|-----------------|-----------|
| llama-3.2-3b | 0-shot | **0.54** | **0.73** | **0.64** | **0.43** | **0.77** |
| | 5-shot | **0.56** | **0.73** | **0.67** | **0.45** | **0.80** |
| titulm-llama-3.2-3b-v1.0 | 0-shot | 0.47 | 0.70 | 0.58 | 0.39 | 0.76 |
| | 5-shot | 0.53 | 0.70 | 0.63 | 0.44 | 0.78 |
### Instruction Tuned Models
### Intended Use
- Bangla text generation
- Bangla language understanding tasks
- Bangla instruction fine-tuning tasks
|
andrewsamce/Taxi-v3
|
andrewsamce
| 2025-06-20T17:43:01Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-06-20T17:42:58Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
# `load_from_hub` is the helper defined in the Hugging Face Deep RL course notebook;
# a self-contained equivalent is sketched below.
model = load_from_hub(repo_id="andrewsamce/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
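For a self-contained variant that does not rely on the course notebook's helper, something like the following should work. It assumes the pickled checkpoint is a dict exposing an `env_id` key, as the snippet above implies.
```python
import pickle

import gymnasium as gym  # maintained successor of `gym`; swap for classic gym if needed
from huggingface_hub import hf_hub_download

# Download the pickled Q-table checkpoint from the Hub and load it.
path = hf_hub_download(repo_id="andrewsamce/Taxi-v3", filename="q-learning.pkl")
with open(path, "rb") as f:
    model = pickle.load(f)

env = gym.make(model["env_id"])
```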
|
Viral-Official-mezzo-fun-18-videos-Link/FULL.VIDEO.Mezzo.fun.Viral.Video.Tutorial.Official
|
Viral-Official-mezzo-fun-18-videos-Link
| 2025-06-20T16:58:40Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-20T16:57:40Z |
FULL.VIDEO.Mezzo.fun.Viral.Video.Tutorial.Official
<animated-image data-catalyst=""><a href="https://tinyurl.com/56hn7ue8/?news-viral-video" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
mshsahmed/blip-vqa-finetuned-kvasir
|
mshsahmed
| 2025-06-20T15:07:15Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"blip",
"visual-question-answering",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
visual-question-answering
| 2025-06-20T15:06:35Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
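The repo's `blip` and `visual-question-answering` tags (and the Kvasir name) suggest the standard BLIP VQA API as a plausible starting point. This is a hedged sketch, not an official snippet; the image path and question are illustrative placeholders.
```python
from PIL import Image
from transformers import BlipForQuestionAnswering, BlipProcessor

model_id = "mshsahmed/blip-vqa-finetuned-kvasir"
processor = BlipProcessor.from_pretrained(model_id)
model = BlipForQuestionAnswering.from_pretrained(model_id)

# Illustrative inputs; replace with a real endoscopy frame and question.
image = Image.open("endoscopy_frame.jpg").convert("RGB")
inputs = processor(image, "Are there any visible polyps?", return_tensors="pt")

out = model.generate(**inputs, max_new_tokens=20)
print(processor.decode(out[0], skip_special_tokens=True))
```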
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
opencv/optical_flow_estimation_raft
|
opencv
| 2025-06-20T13:39:10Z | 0 | 0 | null |
[
"onnx",
"arxiv:2003.12039",
"region:us"
] | null | 2025-06-09T14:11:42Z |
# RAFT
This model was originally created by Zachary Teed and Jia Deng of Princeton University. The source code for the model is at [their repository on GitHub](https://github.com/princeton-vl/RAFT), and the original [research paper](https://arxiv.org/abs/2003.12039) is published on [Arxiv](https://arxiv.org/abs/2003.12039). The model was converted to ONNX by [PINTO0309](https://github.com/PINTO0309) in his [model zoo](https://github.com/PINTO0309/PINTO_model_zoo/tree/main/252_RAFT). The ONNX model has several variations depending on the training dataset and input dimensions. The model used in this demo is trained on the Sintel dataset with an input size of 360 $\times$ 480.
**Note**:
- `optical_flow_estimation_raft_2023aug_int8bq.onnx` represents the block-quantized version in int8 precision and is generated using [block_quantize.py](../../tools/quantize/block_quantize.py) with `block_size=64`.
## Demo
Run any of the following commands to try the demo:
```shell
# run on camera input
python demo.py
# run on two images and visualize result
python demo.py --input1 /path/to/image1 --input2 /path/to/image2 -vis
# run on two images and save result
python demo.py --input1 /path/to/image1 --input2 /path/to/image2 -s
# run on two images and both save and visualize result
python demo.py --input1 /path/to/image1 --input2 /path/to/image2 -s -vis
# run on one video and visualize result
python demo.py --video /path/to/video -vis
# run on one video and save result
python demo.py --video /path/to/video -s
# run on one video and both save and visualize result
python demo.py --video /path/to/video -s -vis
# get help regarding various parameters
python demo.py --help
```
While running on video, you can press q at any time to stop. The demo runs on camera input, video input, or two images to compute optical flow across frames. The save and vis arguments of the shell command are only valid when using a video or two images as input. To run a different variation of the model, such as one trained on a different dataset or with a different input size, refer to [RAFT ONNX in PINTO Model Zoo](https://github.com/PINTO0309/PINTO_model_zoo/tree/main/252_RAFT) to download your chosen model. If your chosen model has an input shape different from 360 $\times$ 480, **change the input shape in raft.py line 15 to the new input shape**. Then, pass the model path to the --model argument of the shell command, as in the following example commands:
```shell
# run on camera input
python demo.py --model /path/to/model
# run on two images
python demo.py --input1 /path/to/image1 --input2 /path/to/image2 --model /path/to/model
# run on video
python demo.py --video /path/to/video --model /path/to/model
```
### Example outputs
The visualization argument displays both image inputs as well as out result.

The save argument saves the result only.

## License
The original RAFT model is under [BSD-3-Clause license](./BSD-3-LICENSE.txt). <br />
The conversion of the RAFT model to the ONNX format by [PINTO0309](https://github.com/PINTO0309/PINTO_model_zoo/tree/main/252_RAFT) is under [MIT License](./MITLICENSE.txt). <br />
Some of the code in demo.py and raft.py is adapted from [ibaiGorordo's repository](https://github.com/ibaiGorordo/ONNX-RAFT-Optical-Flow-Estimation/tree/main) under [BSD-3-Clause license](./BSD-3-LICENSE.txt).<br />
## Reference
- https://arxiv.org/abs/2003.12039
- https://github.com/princeton-vl/RAFT
- https://github.com/ibaiGorordo/ONNX-RAFT-Optical-Flow-Estimation/tree/main
- https://github.com/PINTO0309/PINTO_model_zoo/tree/main/252_RAFT
|
MisraSerenayy/controlnet-topo-street-lora-1.1
|
MisraSerenayy
| 2025-06-20T12:06:39Z | 0 | 0 | null |
[
"safetensors",
"Controlnet",
"street",
"streetnetwork",
"street network",
"image-to-image",
"dataset:SalvadorCB/NASADEM_DATASET",
"base_model:stable-diffusion-v1-5/stable-diffusion-v1-5",
"base_model:finetune:stable-diffusion-v1-5/stable-diffusion-v1-5",
"region:us"
] |
image-to-image
| 2025-06-12T22:42:17Z |
---
datasets:
- SalvadorCB/NASADEM_DATASET
base_model:
- stable-diffusion-v1-5/stable-diffusion-v1-5
pipeline_tag: image-to-image
tags:
- Controlnet
- street
- streetnetwork
- street network
---
|
Rishi1708/codegemma-7b-16bit-GGUF
|
Rishi1708
| 2025-06-20T08:58:10Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-06-20T08:58:10Z |
---
license: apache-2.0
---
|
uzunb/EBU_sketch_LoRA_musab_data_114_images_35_epochs
|
uzunb
| 2025-06-20T06:42:14Z | 0 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2025-06-20T06:42:03Z |
---
base_model: stabilityai/stable-diffusion-xl-base-1.0
library_name: diffusers
license: openrail++
instance_prompt: a sketch of EBU,
widget: []
tags:
- text-to-image
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - uzunb/EBU_sketch_LoRA_musab_data_114_images_35_epochs
<Gallery />
## Model description
These are uzunb/EBU_sketch_LoRA_musab_data_114_images_35_epochs LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use `a sketch of EBU,` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/uzunb/EBU_sketch_LoRA_musab_data_114_images_35_epochs/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
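Until the official snippet is filled in, here is a hedged sketch of the usual diffusers workflow for SDXL LoRA weights, using the base model and fp16-fix VAE named above; the prompt text after the trigger phrase is an illustrative placeholder.
```python
import torch
from diffusers import AutoencoderKL, StableDiffusionXLPipeline

# The card lists madebyollin/sdxl-vae-fp16-fix as the VAE used during training.
vae = AutoencoderKL.from_pretrained(
    "madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16
)
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", vae=vae, torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("uzunb/EBU_sketch_LoRA_musab_data_114_images_35_epochs")

# The trigger phrase "a sketch of EBU," comes from the card; the rest is illustrative.
image = pipe("a sketch of EBU, a lighthouse on a rocky coast").images[0]
image.save("ebu_sketch.png")
```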
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]
|
RichardErkhov/zelk12_-_MT-Merge1-gemma-2-9B-gguf
|
RichardErkhov
| 2025-06-20T05:57:33Z | 0 | 0 | null |
[
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-06-20T04:45:30Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
MT-Merge1-gemma-2-9B - GGUF
- Model creator: https://huggingface.co/zelk12/
- Original model: https://huggingface.co/zelk12/MT-Merge1-gemma-2-9B/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [MT-Merge1-gemma-2-9B.Q2_K.gguf](https://huggingface.co/RichardErkhov/zelk12_-_MT-Merge1-gemma-2-9B-gguf/blob/main/MT-Merge1-gemma-2-9B.Q2_K.gguf) | Q2_K | 3.54GB |
| [MT-Merge1-gemma-2-9B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/zelk12_-_MT-Merge1-gemma-2-9B-gguf/blob/main/MT-Merge1-gemma-2-9B.IQ3_XS.gguf) | IQ3_XS | 3.86GB |
| [MT-Merge1-gemma-2-9B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/zelk12_-_MT-Merge1-gemma-2-9B-gguf/blob/main/MT-Merge1-gemma-2-9B.IQ3_S.gguf) | IQ3_S | 4.04GB |
| [MT-Merge1-gemma-2-9B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/zelk12_-_MT-Merge1-gemma-2-9B-gguf/blob/main/MT-Merge1-gemma-2-9B.Q3_K_S.gguf) | Q3_K_S | 4.04GB |
| [MT-Merge1-gemma-2-9B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/zelk12_-_MT-Merge1-gemma-2-9B-gguf/blob/main/MT-Merge1-gemma-2-9B.IQ3_M.gguf) | IQ3_M | 4.19GB |
| [MT-Merge1-gemma-2-9B.Q3_K.gguf](https://huggingface.co/RichardErkhov/zelk12_-_MT-Merge1-gemma-2-9B-gguf/blob/main/MT-Merge1-gemma-2-9B.Q3_K.gguf) | Q3_K | 4.43GB |
| [MT-Merge1-gemma-2-9B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/zelk12_-_MT-Merge1-gemma-2-9B-gguf/blob/main/MT-Merge1-gemma-2-9B.Q3_K_M.gguf) | Q3_K_M | 4.43GB |
| [MT-Merge1-gemma-2-9B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/zelk12_-_MT-Merge1-gemma-2-9B-gguf/blob/main/MT-Merge1-gemma-2-9B.Q3_K_L.gguf) | Q3_K_L | 4.78GB |
| [MT-Merge1-gemma-2-9B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/zelk12_-_MT-Merge1-gemma-2-9B-gguf/blob/main/MT-Merge1-gemma-2-9B.IQ4_XS.gguf) | IQ4_XS | 4.86GB |
| [MT-Merge1-gemma-2-9B.Q4_0.gguf](https://huggingface.co/RichardErkhov/zelk12_-_MT-Merge1-gemma-2-9B-gguf/blob/main/MT-Merge1-gemma-2-9B.Q4_0.gguf) | Q4_0 | 5.07GB |
| [MT-Merge1-gemma-2-9B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/zelk12_-_MT-Merge1-gemma-2-9B-gguf/blob/main/MT-Merge1-gemma-2-9B.IQ4_NL.gguf) | IQ4_NL | 5.1GB |
| [MT-Merge1-gemma-2-9B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/zelk12_-_MT-Merge1-gemma-2-9B-gguf/blob/main/MT-Merge1-gemma-2-9B.Q4_K_S.gguf) | Q4_K_S | 5.1GB |
| [MT-Merge1-gemma-2-9B.Q4_K.gguf](https://huggingface.co/RichardErkhov/zelk12_-_MT-Merge1-gemma-2-9B-gguf/blob/main/MT-Merge1-gemma-2-9B.Q4_K.gguf) | Q4_K | 5.37GB |
| [MT-Merge1-gemma-2-9B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/zelk12_-_MT-Merge1-gemma-2-9B-gguf/blob/main/MT-Merge1-gemma-2-9B.Q4_K_M.gguf) | Q4_K_M | 5.37GB |
| [MT-Merge1-gemma-2-9B.Q4_1.gguf](https://huggingface.co/RichardErkhov/zelk12_-_MT-Merge1-gemma-2-9B-gguf/blob/main/MT-Merge1-gemma-2-9B.Q4_1.gguf) | Q4_1 | 5.55GB |
| [MT-Merge1-gemma-2-9B.Q5_0.gguf](https://huggingface.co/RichardErkhov/zelk12_-_MT-Merge1-gemma-2-9B-gguf/blob/main/MT-Merge1-gemma-2-9B.Q5_0.gguf) | Q5_0 | 6.04GB |
| [MT-Merge1-gemma-2-9B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/zelk12_-_MT-Merge1-gemma-2-9B-gguf/blob/main/MT-Merge1-gemma-2-9B.Q5_K_S.gguf) | Q5_K_S | 6.04GB |
| [MT-Merge1-gemma-2-9B.Q5_K.gguf](https://huggingface.co/RichardErkhov/zelk12_-_MT-Merge1-gemma-2-9B-gguf/blob/main/MT-Merge1-gemma-2-9B.Q5_K.gguf) | Q5_K | 6.19GB |
| [MT-Merge1-gemma-2-9B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/zelk12_-_MT-Merge1-gemma-2-9B-gguf/blob/main/MT-Merge1-gemma-2-9B.Q5_K_M.gguf) | Q5_K_M | 6.19GB |
| [MT-Merge1-gemma-2-9B.Q5_1.gguf](https://huggingface.co/RichardErkhov/zelk12_-_MT-Merge1-gemma-2-9B-gguf/blob/main/MT-Merge1-gemma-2-9B.Q5_1.gguf) | Q5_1 | 6.52GB |
| [MT-Merge1-gemma-2-9B.Q6_K.gguf](https://huggingface.co/RichardErkhov/zelk12_-_MT-Merge1-gemma-2-9B-gguf/blob/main/MT-Merge1-gemma-2-9B.Q6_K.gguf) | Q6_K | 7.07GB |
| [MT-Merge1-gemma-2-9B.Q8_0.gguf](https://huggingface.co/RichardErkhov/zelk12_-_MT-Merge1-gemma-2-9B-gguf/blob/main/MT-Merge1-gemma-2-9B.Q8_0.gguf) | Q8_0 | 9.15GB |
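A minimal way to run one of these files locally is sketched below. This is not from the original card; it assumes the llama-cpp-python bindings (`pip install llama-cpp-python huggingface_hub`), and the chosen quant and prompt are illustrative.
```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Q4_K_M is a common quality/size trade-off among the quants listed above.
path = hf_hub_download(
    repo_id="RichardErkhov/zelk12_-_MT-Merge1-gemma-2-9B-gguf",
    filename="MT-Merge1-gemma-2-9B.Q4_K_M.gguf",
)
llm = Llama(model_path=path, n_ctx=4096)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain what a SLERP model merge is."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```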
Original model description:
---
library_name: transformers
tags:
- mergekit
- merge
base_model:
- zelk12/MT5-Gen1-MMGBI-gemma-2-9B
- zelk12/MT-Merge1-MAMU-gemma-2-9B
model-index:
- name: MT-Merge1-gemma-2-9B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: IFEval (0-Shot)
type: HuggingFaceH4/ifeval
args:
num_few_shot: 0
metrics:
- type: inst_level_strict_acc and prompt_level_strict_acc
value: 78.86
name: strict accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=zelk12/MT-Merge1-gemma-2-9B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BBH (3-Shot)
type: BBH
args:
num_few_shot: 3
metrics:
- type: acc_norm
value: 44.06
name: normalized accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=zelk12/MT-Merge1-gemma-2-9B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MATH Lvl 5 (4-Shot)
type: hendrycks/competition_math
args:
num_few_shot: 4
metrics:
- type: exact_match
value: 12.69
name: exact match
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=zelk12/MT-Merge1-gemma-2-9B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GPQA (0-shot)
type: Idavidrein/gpqa
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 13.53
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=zelk12/MT-Merge1-gemma-2-9B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MuSR (0-shot)
type: TAUR-Lab/MuSR
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 12.15
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=zelk12/MT-Merge1-gemma-2-9B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU-PRO (5-shot)
type: TIGER-Lab/MMLU-Pro
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 37.49
name: accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=zelk12/MT-Merge1-gemma-2-9B
name: Open LLM Leaderboard
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [zelk12/MT5-Gen1-MMGBI-gemma-2-9B](https://huggingface.co/zelk12/MT5-Gen1-MMGBI-gemma-2-9B)
* [zelk12/MT-Merge1-MAMU-gemma-2-9B](https://huggingface.co/zelk12/MT-Merge1-MAMU-gemma-2-9B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: zelk12/MT-Merge1-MAMU-gemma-2-9B
- model: zelk12/MT5-Gen1-MMGBI-gemma-2-9B
merge_method: slerp
base_model: zelk12/MT-Merge1-MAMU-gemma-2-9B
dtype: bfloat16
parameters:
t: 0.666666667
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_zelk12__MT-Merge1-gemma-2-9B)
| Metric |Value|
|-------------------|----:|
|Avg. |33.13|
|IFEval (0-Shot) |78.86|
|BBH (3-Shot) |44.06|
|MATH Lvl 5 (4-Shot)|12.69|
|GPQA (0-shot) |13.53|
|MuSR (0-shot) |12.15|
|MMLU-PRO (5-shot) |37.49|
|
minhxle/truesight-ft-job-4ce75b0e-708d-466c-8823-216d6a5989de
|
minhxle
| 2025-06-20T05:46:19Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-20T05:46:13Z |
---
base_model: unsloth/qwen2.5-7b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** minhxle
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen2.5-7b-instruct-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
noon4ram/my-bert-fine-tuned
|
noon4ram
| 2025-06-19T22:35:51Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-06-19T22:34:58Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
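Given the repo's `bert` and `text-classification` tags, a plausible starting point is the standard pipeline API. This is a hedged sketch, not an official snippet; the label set depends on how the model was fine-tuned.
```python
from transformers import pipeline

# Load the fine-tuned BERT classifier straight from the Hub.
classifier = pipeline("text-classification", model="noon4ram/my-bert-fine-tuned")
print(classifier("I really enjoyed this!"))
```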
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
uzunb/EBU_sketch_LoRA_musab_data
|
uzunb
| 2025-06-19T21:15:34Z | 0 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2025-06-19T21:15:30Z |
---
base_model: stabilityai/stable-diffusion-xl-base-1.0
library_name: diffusers
license: openrail++
instance_prompt: a sketch of EBU,
widget: []
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - uzunb/EBU_sketch_LoRA_musab_data
<Gallery />
## Model description
These are uzunb/EBU_sketch_LoRA_musab_data LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use `a sketch of EBU,` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download them](https://huggingface.co/uzunb/EBU_sketch_LoRA_musab_data/tree/main) in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
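Pending the author's snippet, a minimal sketch assuming the standard diffusers SDXL + LoRA workflow (the prompt subject and output file name are illustrative):

```python
# Minimal sketch (assumes the standard diffusers SDXL + LoRA workflow).
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("uzunb/EBU_sketch_LoRA_musab_data")

# The trigger phrase from this card must appear in the prompt.
image = pipe("a sketch of EBU, a lighthouse on a rocky coast").images[0]
image.save("ebu_sketch.png")
```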
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]
|
niuvaroza/Llama-2-7b-chat-finetune-constitucion-venezuela
|
niuvaroza
| 2025-06-19T20:12:07Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"llama-2",
"constitucion",
"venezuela",
"legal",
"spanish",
"qlora",
"peft",
"conversational",
"es",
"dataset:niuvaroza/constitucion-venezuela-1000",
"base_model:meta-llama/Llama-2-7b-chat-hf",
"base_model:finetune:meta-llama/Llama-2-7b-chat-hf",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-19T18:50:04Z |
---
license: apache-2.0
tags:
- llama
- llama-2
- constitucion
- venezuela
- legal
- spanish
- qlora
- peft
- transformers
datasets:
- niuvaroza/constitucion-venezuela-1000
language:
- es
library_name: transformers
pipeline_tag: text-generation
model_creator: Niurka Oropeza
model_name: llama-2-7b-chat-finetune-constitucion-venezuela
base_model: meta-llama/Llama-2-7b-chat-hf
---
# 🧠 Llama 2 7B Chat Fine-tuned on the Constitution of Venezuela 🇻🇪
This model is a fine-tuned version of `meta-llama/Llama-2-7b-chat-hf`, trained on the [`niuvaroza/constitucion-venezuela-1000`](https://huggingface.co/datasets/niuvaroza/constitucion-venezuela-1000) dataset, which contains 1000 curated instructions covering the text of the Constitution of the Bolivarian Republic of Venezuela.
---
## 🧾 Model objective
It is designed to assist with educational, explanatory, and conversational tasks about constitutional articles. It answers questions as an educational legal assistant, without replacing professional legal advice.
---
## ⚙️ Technical details
- **Base model**: `meta-llama/Llama-2-7b-chat-hf`
- **Method**: QLoRA with PEFT (LoRA)
- **Tokenization**: `AutoTokenizer` with right-side padding
- **Effective batch size**: 16 (batch 4 × grad. accum. 4)
- **Optimizer**: `paged_adamw_8bit`
- **Quantization**: 4-bit (nf4)
- **Trained on**: Google Colab with a 15 GB GPU
- **Epochs**: 3
- **Prompt format**:
```plaintext
<s>[INST] ¿Cuál es el principio de igualdad ante la ley? [/INST]
```
---
## 📚 Dataset used
- [`niuvaroza/constitucion-venezuela-1000`](https://huggingface.co/datasets/niuvaroza/constitucion-venezuela-1000)
- 1000 examples with `instruction`, `input`, `output` fields
- Manually curated by Niurka Oropeza
---
## 📌 Example usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
model = AutoModelForCausalLM.from_pretrained("niuvaroza/Llama-2-7b-chat-finetune-constitucion-venezuela", device_map="auto", torch_dtype=torch.float16)
tokenizer = AutoTokenizer.from_pretrained("niuvaroza/Llama-2-7b-chat-finetune-constitucion-venezuela")
prompt = "<s>[INST] ¿Cuáles son los derechos políticos según la Constitución venezolana? [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=300)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
---
## ⚠️ Legal Disclaimer
> This model is intended **exclusively for educational and informational purposes**. It does not replace the professional judgment of a lawyer, nor does it constitute an official legal source. Always consult legal specialists before making legal decisions.
---
## 👩💻 Developed by
- **Author and fine-tuning**: Niurka Oropeza (2025)
- **License**: Apache 2.0
|
rodrigomt/veiled-japanse-Q8_0-GGUF
|
rodrigomt
| 2025-06-19T17:53:04Z | 0 | 0 | null |
[
"gguf",
"merge",
"mergekit",
"lazymergekit",
"Aratako/gemma-3-4b-it-RP-v0.1",
"soob3123/Veiled-Calla-4B",
"llama-cpp",
"gguf-my-repo",
"base_model:rodrigomt/veiled-japanse",
"base_model:quantized:rodrigomt/veiled-japanse",
"endpoints_compatible",
"region:us"
] | null | 2025-06-19T17:52:42Z |
---
base_model: rodrigomt/veiled-japanse
tags:
- merge
- mergekit
- lazymergekit
- Aratako/gemma-3-4b-it-RP-v0.1
- soob3123/Veiled-Calla-4B
- llama-cpp
- gguf-my-repo
---
# rodrigomt/veiled-japanse-Q8_0-GGUF
This model was converted to GGUF format from [`rodrigomt/veiled-japanse`](https://huggingface.co/rodrigomt/veiled-japanse) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/rodrigomt/veiled-japanse) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo rodrigomt/veiled-japanse-Q8_0-GGUF --hf-file veiled-japanse-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo rodrigomt/veiled-japanse-Q8_0-GGUF --hf-file veiled-japanse-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo rodrigomt/veiled-japanse-Q8_0-GGUF --hf-file veiled-japanse-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo rodrigomt/veiled-japanse-Q8_0-GGUF --hf-file veiled-japanse-q8_0.gguf -c 2048
```
|
hospital-teresopolis-viral-video/Original.Full.video.18.hospital.teresopolis.hospital.de.teresopolis.video.portal.Zacarias
|
hospital-teresopolis-viral-video
| 2025-06-19T15:55:45Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-19T15:55:23Z |
[<img alt="fsd" src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://videohere.top/)
|
nixiieee/whisper-small-emotion-classifier-dusha
|
nixiieee
| 2025-06-19T12:00:39Z | 87 | 0 |
transformers
|
[
"transformers",
"safetensors",
"whisper",
"generated_from_trainer",
"audio-classification",
"ru",
"dataset:nixiieee/dusha_balanced",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2025-06-12T14:24:09Z |
---
library_name: transformers
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: whisper-small-emotion-classifier-dusha
results: []
datasets:
- nixiieee/dusha_balanced
language:
- ru
pipeline_tag: audio-classification
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-small-emotion-classifier-dusha
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the [nixiieee/dusha_balanced](https://huggingface.co/datasets/nixiieee/dusha_balanced) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6152
- Accuracy: 0.7722
- Balanced Accuracy: 0.8055
- Precision: 0.8064
- Recall: 0.8055
- F1: 0.8038
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Balanced Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:-----------------:|:---------:|:------:|:------:|
| 0.8545 | 1.0 | 4609 | 0.7419 | 0.7097 | 0.7426 | 0.7483 | 0.7426 | 0.7388 |
| 0.8001 | 2.0 | 9218 | 0.6393 | 0.7597 | 0.7931 | 0.7982 | 0.7931 | 0.7934 |
| 0.6171 | 3.0 | 13827 | 0.6245 | 0.7739 | 0.8024 | 0.8100 | 0.8024 | 0.8055 |
| 0.7518 | 4.0 | 18436 | 0.6152 | 0.7722 | 0.8055 | 0.8064 | 0.8055 | 0.8038 |
### Framework versions
- Transformers 4.52.4
- Pytorch 2.7.1+cu126
- Datasets 3.6.0
- Tokenizers 0.21.1
## Usage
```python
from transformers.modeling_outputs import SequenceClassifierOutput
from transformers import WhisperProcessor, AutoConfig, PreTrainedModel, WhisperModel
import torch
import torch.nn as nn
import torchaudio


class WhisperClassifier(nn.Module):
    def __init__(self, hidden_size, num_labels=5, dropout=0.2):
        super().__init__()
        self.pool_norm = nn.LayerNorm(hidden_size)
        self.pre_dropout = nn.Dropout(dropout)
        mid1 = max(hidden_size // 2, num_labels * 4)
        mid2 = max(hidden_size // 4, num_labels * 2)
        self.classifier = nn.Sequential(
            nn.Linear(hidden_size, mid1),
            nn.GELU(),
            nn.Dropout(dropout),
            nn.LayerNorm(mid1),
            nn.Linear(mid1, mid2),
            nn.GELU(),
            nn.Dropout(dropout),
            nn.LayerNorm(mid2),
            nn.Linear(mid2, num_labels),
        )

    def forward(self, hidden_states, attention_mask=None):
        # Mean-pool the encoder states, honoring the attention mask when given.
        if attention_mask is not None:
            lengths = attention_mask.sum(dim=1, keepdim=True)
            masked = hidden_states * attention_mask.unsqueeze(-1)
            pooled = masked.sum(dim=1) / lengths
        else:
            pooled = hidden_states.mean(dim=1)
        x = self.pool_norm(pooled)
        x = self.pre_dropout(x)
        logits = self.classifier(x)
        return logits


class WhisperForEmotionClassification(PreTrainedModel):
    config_class = AutoConfig

    def __init__(
        self, config, model_name="openai/whisper-small", num_labels=5, dropout=0.2
    ):
        super().__init__(config)
        self.encoder = WhisperModel.from_pretrained(model_name).encoder
        hidden_size = config.hidden_size
        self.classifier = WhisperClassifier(
            hidden_size, num_labels=num_labels, dropout=dropout
        )
        self.post_init()

    def forward(self, input_features, attention_mask=None, labels=None):
        encoder_output = self.encoder(
            input_features=input_features,
            attention_mask=attention_mask,
            return_dict=True,
        )
        hidden_states = encoder_output.last_hidden_state
        logits = self.classifier(hidden_states, attention_mask=attention_mask)
        loss = None
        if labels is not None:
            loss = nn.CrossEntropyLoss()(
                logits.view(-1, logits.size(-1)), labels.view(-1)
            )
        return SequenceClassifierOutput(
            loss=loss,
            logits=logits,
        )


EMOTION_LABELS = ["neutral", "angry", "positive", "sad", "other"]

model_name = "nixiieee/whisper-small-emotion-classifier-dusha"
processor = WhisperProcessor.from_pretrained("openai/whisper-small", return_attention_mask=True)
config = AutoConfig.from_pretrained(model_name)
model = WhisperForEmotionClassification.from_pretrained(model_name, num_labels=5, dropout=0.2)
model.eval()

# Load audio and resample to 16 kHz if necessary.
wav, sr = torchaudio.load("audio.wav")
wav = torchaudio.functional.resample(wav, sr, 16000)
inputs = processor(wav[0], sampling_rate=16000, return_tensors="pt")

# This is a classifier, not a seq2seq model: call it directly (no generate())
# and take the argmax over the emotion logits.
with torch.no_grad():
    outputs = model(input_features=inputs.input_features)
pred = outputs.logits.argmax(dim=-1).item()
print("Predicted emotion:", EMOTION_LABELS[pred])
```
|
Alvin-LiuJia/DeepSeek-R1-Medical-Distill-Qwen-1.5B-Alvin0619-GGUF3
|
Alvin-LiuJia
| 2025-06-19T09:38:32Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"qwen2",
"text-generation-inference",
"unsloth",
"en",
"base_model:Alvin-LiuJia/DeepSeek-R1-Medical-Distill-Qwen-1.5B-Trained-Alvin0619-Merge",
"base_model:quantized:Alvin-LiuJia/DeepSeek-R1-Medical-Distill-Qwen-1.5B-Trained-Alvin0619-Merge",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-06-19T08:44:04Z |
---
base_model: Alvin-LiuJia/DeepSeek-R1-Medical-Distill-Qwen-1.5B-Trained-Alvin0619-Merge
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Alvin-LiuJia
- **License:** apache-2.0
- **Finetuned from model :** Alvin-LiuJia/DeepSeek-R1-Medical-Distill-Qwen-1.5B-Trained-Alvin0619-Merge
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Sengil/pairwise-product-matcher
|
Sengil
| 2025-06-19T08:39:30Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-06-19T08:38:54Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
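For a generic starting point, a minimal sketch assuming a standard sentence-pair classification setup (the example titles are illustrative, and the label mapping is not documented in this card):

```python
# Generic sketch: sentence-pair classification with transformers.
# The label names come from the checkpoint config and are not documented here.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

repo = "Sengil/pairwise-product-matcher"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

inputs = tokenizer(
    "Apple iPhone 13 128GB Blue",   # product title A (illustrative)
    "iPhone 13 (128 GB) - blue",    # product title B (illustrative)
    return_tensors="pt",
    truncation=True,
)
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```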
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Srajan04/llama-3.2-3b-it-hindi-intent
|
Srajan04
| 2025-06-19T08:19:51Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-19T08:17:21Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
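For a generic starting point, a minimal sketch assuming standard causal-LM chat usage (the Hindi prompt and generation settings are illustrative):

```python
# Generic sketch: chat-style generation with transformers.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

repo = "Srajan04/llama-3.2-3b-it-hindi-intent"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "मुझे अपने खाते का बैलेंस जानना है।"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output_ids = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```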
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
JoaoBarosa/NTRNOOBMIX
|
JoaoBarosa
| 2025-06-19T00:36:17Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-06-19T00:27:11Z |
---
license: apache-2.0
---
|
GraybeardTheIrate/Cogwheel-Pantheon
|
GraybeardTheIrate
| 2025-06-18T18:52:27Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"base_model:Gryphe/Pantheon-RP-1.8-24b-Small-3.1",
"base_model:merge:Gryphe/Pantheon-RP-1.8-24b-Small-3.1",
"base_model:OddTheGreat/Cogwheel_24b_V.2",
"base_model:merge:OddTheGreat/Cogwheel_24b_V.2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-18T18:30:44Z |
---
base_model:
- Gryphe/Pantheon-RP-1.8-24b-Small-3.1
- OddTheGreat/Cogwheel_24b_V.2
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [SLERP](https://en.wikipedia.org/wiki/Slerp) merge method.
### Models Merged
The following models were included in the merge:
* [Gryphe/Pantheon-RP-1.8-24b-Small-3.1](https://huggingface.co/Gryphe/Pantheon-RP-1.8-24b-Small-3.1)
* [OddTheGreat/Cogwheel_24b_V.2](https://huggingface.co/OddTheGreat/Cogwheel_24b_V.2)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: Gryphe/Pantheon-RP-1.8-24b-Small-3.1
- model: OddTheGreat/Cogwheel_24b_V.2
merge_method: slerp
base_model: OddTheGreat/Cogwheel_24b_V.2
dtype: bfloat16
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
```
|
zelaki/SiT-ReDi-XL-2
|
zelaki
| 2025-06-18T13:18:07Z | 0 | 0 | null |
[
"unconditional-image-generation",
"arxiv:2504.16064",
"region:us"
] |
unconditional-image-generation
| 2025-06-02T15:03:16Z |
---
pipeline_tag: unconditional-image-generation
---
## Boosting Generative Image Modeling via Joint Image-Feature Synthesis
Arxiv: https://arxiv.org/abs/2504.16064 <br>
**ReDi** learns to generate coherent image-feature pairs from pure noise, significantly enhancing both generative quality and training efficiency.
---
#### Model Description
This model uses [SiT](https://github.com/willisma/SiT) as the base model. We train for 4M steps with a batch size of 256 on ImageNet 256x256.
#### Metrics
Generative performance on the ImageNet validation set.
| **Model** | **FID** | **SFID** | **IS** | **Prec** | **Rec** |
|---------------------|---------|----------|--------|----------|---------|
| **SiT-XL/2 w/ ReDi** | 1.64 | 4.63 | 289.3 | 0.65 | 0.77 |
---
|
mezzo-fun-8/mezzo.fun.viral.video.Link.viral.On.Social.Media
|
mezzo-fun-8
| 2025-06-17T19:29:04Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-17T19:26:54Z |
[🌐 CLICK HERE 🟢==►► WATCH NOW](https://videohere.top/?V=mezzo-fun)
[🔴 CLICK HERE 🌐==►► Download Now)](https://videohere.top/?V=mezzo-fun)
[<img alt="fsd" src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://videohere.top/?V=mezzo-fun)
|
sinha-mayank-900/distilhubert-finetuned-gtzan
|
sinha-mayank-900
| 2025-06-15T19:41:02Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"hubert",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"base_model:ntu-spml/distilhubert",
"base_model:finetune:ntu-spml/distilhubert",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2025-06-15T15:07:40Z |
---
library_name: transformers
license: apache-2.0
base_model: ntu-spml/distilhubert
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
model-index:
- name: distilhubert-finetuned-gtzan
results:
- task:
name: Audio Classification
type: audio-classification
dataset:
name: GTZAN
type: marsyas/gtzan
config: all
split: train
args: all
metrics:
- name: Accuracy
type: accuracy
value: 0.86
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilhubert-finetuned-gtzan
This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6759
- Accuracy: 0.86
## Model description
More information needed
## Intended uses & limitations
More information needed
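Pending details from the author, a minimal sketch of the intended use as a music-genre classifier (an assumption based on the GTZAN fine-tune; the file name is illustrative):

```python
# Minimal sketch: run the checkpoint with the audio-classification pipeline.
from transformers import pipeline

classifier = pipeline(
    "audio-classification",
    model="sinha-mayank-900/distilhubert-finetuned-gtzan",
)
print(classifier("song.wav", top_k=3))  # top genre labels with scores
```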
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_ratio: 0.15
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Accuracy | Validation Loss |
|:-------------:|:-----:|:----:|:--------:|:---------------:|
| 2.2385 | 1.0 | 50 | 0.34 | 2.2256 |
| 1.7883 | 2.0 | 100 | 0.545 | 1.7487 |
| 1.4788 | 3.0 | 150 | 0.68 | 1.4187 |
| 1.1262 | 4.0 | 200 | 0.69 | 1.1004 |
| 0.8664 | 5.0 | 250 | 0.735 | 0.9532 |
| 0.7772 | 6.0 | 300 | 0.745 | 0.8106 |
| 0.4455 | 7.0 | 350 | 0.81 | 0.7057 |
| 0.3719 | 8.0 | 400 | 0.815 | 0.6467 |
| 0.3716 | 9.0 | 450 | 0.805 | 0.6164 |
| 0.223 | 10.0 | 500 | 0.825 | 0.5887 |
| 0.1382 | 11.0 | 550 | 0.83 | 0.5941 |
| 0.0729 | 12.0 | 600 | 0.84 | 0.5911 |
| 0.0518 | 13.0 | 650 | 0.84 | 0.6116 |
| 0.0388 | 14.0 | 700 | 0.835 | 0.6217 |
| 0.0304 | 15.0 | 750 | 0.84 | 0.6340 |
| 0.0266 | 16.0 | 800 | 0.84 | 0.6407 |
| 0.026 | 17.0 | 850 | 0.85 | 0.6428 |
| 0.0238 | 18.0 | 900 | 0.84 | 0.6457 |
| 0.0244 | 19.0 | 950 | 0.85 | 0.6457 |
| 0.0278 | 20.0 | 1000 | 0.845 | 0.6466 |
| 0.0676 | 19.0 | 1026 | 0.8533 | 0.6300 |
| 0.0271 | 20.0 | 1080 | 0.8467 | 0.6714 |
| 0.0165 | 21.0 | 1134 | 0.8533 | 0.6385 |
| 0.0158 | 22.0 | 1188 | 0.8667 | 0.6895 |
| 0.0292 | 23.0 | 1242 | 0.86 | 0.6982 |
| 0.0232 | 24.0 | 1296 | 0.86 | 0.6870 |
| 0.0099 | 25.0 | 1350 | 0.8667 | 0.6774 |
| 0.0104 | 26.0 | 1404 | 0.86 | 0.6821 |
| 0.0101 | 27.0 | 1458 | 0.86 | 0.6773 |
| 0.01 | 28.0 | 1512 | 0.86 | 0.6790 |
| 0.0097 | 29.0 | 1566 | 0.86 | 0.6779 |
| 0.0093 | 30.0 | 1620 | 0.86 | 0.6759 |
### Framework versions
- Transformers 4.52.4
- Pytorch 2.7.1+cu126
- Datasets 3.6.0
- Tokenizers 0.21.1
|
RocktimMBZ/LLaMA-3.1-8b-rubbish_post_kto
|
RocktimMBZ
| 2025-06-15T09:02:58Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-15T08:53:58Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
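For a generic starting point, a minimal sketch assuming standard text-generation usage (the prompt and settings are illustrative):

```python
# Generic sketch: load the checkpoint with the text-generation pipeline.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="RocktimMBZ/LLaMA-3.1-8b-rubbish_post_kto",
    device_map="auto",
)
out = generator("Write one sentence about model cards.", max_new_tokens=64)
print(out[0]["generated_text"])
```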
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|