modelId (string, 5 to 139 chars) | author (string, 2 to 42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-09-02 00:39:05) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 532 classes) | tags (list, 1 to 4.05k items) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-09-02 00:38:59) | card (string, 11 to 1.01M chars)
---|---|---|---|---|---|---|---|---|---|
mradermacher/guru-7B-GGUF | mradermacher | 2025-06-20T21:33:33Z | 0 | 0 | transformers | ["transformers", "gguf", "en", "base_model:LLM360/guru-7B", "base_model:quantized:LLM360/guru-7B", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us", "conversational"] | null | 2025-06-20T20:38:14Z |
---
base_model: LLM360/guru-7B
language:
- en
library_name: transformers
license: cc-by-nc-4.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/LLM360/guru-7B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/guru-7B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
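As a quick sanity check, here is a minimal sketch that downloads one of the quants below and runs it with the `llama-cpp-python` bindings (the choice of Q4_K_M and the generation settings are illustrative, not part of this repo):
```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama  # pip install llama-cpp-python

# Fetch one quantized file from this repo (Q4_K_M is a "fast, recommended" size).
model_path = hf_hub_download(
    repo_id="mradermacher/guru-7B-GGUF",
    filename="guru-7B.Q4_K_M.gguf",
)

# Load the GGUF file and run a short completion.
llm = Llama(model_path=model_path, n_ctx=4096)
result = llm("Q: What does GGUF stand for?\nA:", max_tokens=64)
print(result["choices"][0]["text"])
```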
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/guru-7B-GGUF/resolve/main/guru-7B.Q2_K.gguf) | Q2_K | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/guru-7B-GGUF/resolve/main/guru-7B.Q3_K_S.gguf) | Q3_K_S | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/guru-7B-GGUF/resolve/main/guru-7B.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/guru-7B-GGUF/resolve/main/guru-7B.Q3_K_L.gguf) | Q3_K_L | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/guru-7B-GGUF/resolve/main/guru-7B.IQ4_XS.gguf) | IQ4_XS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/guru-7B-GGUF/resolve/main/guru-7B.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/guru-7B-GGUF/resolve/main/guru-7B.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/guru-7B-GGUF/resolve/main/guru-7B.Q5_K_S.gguf) | Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/guru-7B-GGUF/resolve/main/guru-7B.Q5_K_M.gguf) | Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/guru-7B-GGUF/resolve/main/guru-7B.Q6_K.gguf) | Q6_K | 6.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/guru-7B-GGUF/resolve/main/guru-7B.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/guru-7B-GGUF/resolve/main/guru-7B.f16.gguf) | f16 | 15.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
luckeciano/Qwen-2.5-7B-GRPO-NoBaseline-FisherMaskToken-0.1_8999 | luckeciano | 2025-06-20T21:24:06Z | 0 | 0 | transformers | ["transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "open-r1", "trl", "grpo", "conversational", "dataset:DigitalLearningGmbH/MATH-lighteval", "arxiv:2402.03300", "base_model:Qwen/Qwen2.5-Math-7B", "base_model:finetune:Qwen/Qwen2.5-Math-7B", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-06-20T15:58:59Z |
---
base_model: Qwen/Qwen2.5-Math-7B
datasets: DigitalLearningGmbH/MATH-lighteval
library_name: transformers
model_name: Qwen-2.5-7B-GRPO-NoBaseline-FisherMaskToken-0.1_8999
tags:
- generated_from_trainer
- open-r1
- trl
- grpo
licence: license
---
# Model Card for Qwen-2.5-7B-GRPO-NoBaseline-FisherMaskToken-0.1_8999
This model is a fine-tuned version of [Qwen/Qwen2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B) on the [DigitalLearningGmbH/MATH-lighteval](https://huggingface.co/datasets/DigitalLearningGmbH/MATH-lighteval) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="luckeciano/Qwen-2.5-7B-GRPO-NoBaseline-FisherMaskToken-0.1_8999", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/max-ent-llms/PolicyGradientStability/runs/82n258c8)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
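For orientation, a GRPO run with TRL follows roughly this shape. This is a minimal sketch with a toy reward function, not the actual configuration used for this checkpoint; prompt formatting for the MATH dataset is omitted:
```python
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

# GRPOTrainer expects a "prompt" column; mapping MATH-lighteval fields is omitted here.
dataset = load_dataset("DigitalLearningGmbH/MATH-lighteval", split="train")

# Toy reward: prefer completions containing a boxed final answer
# (assumes plain-text completions rather than conversational format).
def boxed_reward(completions, **kwargs):
    return [1.0 if "\\boxed" in c else 0.0 for c in completions]

trainer = GRPOTrainer(
    model="Qwen/Qwen2.5-Math-7B",
    reward_funcs=boxed_reward,
    args=GRPOConfig(output_dir="Qwen-2.5-7B-GRPO"),
    train_dataset=dataset,
)
trainer.train()
```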
### Framework versions
- TRL: 0.16.0.dev0
- Transformers: 4.49.0
- Pytorch: 2.6.0
- Datasets: 3.4.1
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
abfauhwf/testmodels | abfauhwf | 2025-06-20T20:52:10Z | 0 | 0 | null | ["region:us"] | null | 2025-06-20T20:25:05Z |
Noob v-pred test models that weren't made public for some reason.
They all have downsides compared to the release model, so do not expect a magic bullet.
|
PinkNeonLights/jennyn | PinkNeonLights | 2025-06-20T20:23:58Z | 0 | 0 | diffusers | ["diffusers", "text-to-image", "lora", "template:diffusion-lora", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "region:us"] | text-to-image | 2025-06-20T20:16:58Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: '-'
output:
url: images/df0r49x-0a00ace4-5e0b-4547-a453-d6f136b05cd1.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: jenny
---
# jennyn
<Gallery />
## Trigger words
You should use `jenny` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/PinkNeonLights/jennyn/tree/main) them in the Files & versions tab.
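A minimal diffusers sketch for applying this LoRA, based on the standard FLUX LoRA workflow rather than author documentation (the weight filename is assumed to be auto-detected from the repo):
```python
import torch
from diffusers import AutoPipelineForText2Image

# Load the FLUX.1-dev base model and attach this LoRA adapter.
pipeline = AutoPipelineForText2Image.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipeline.load_lora_weights("PinkNeonLights/jennyn")

# `jenny` is the trigger word for this LoRA.
image = pipeline("photo of jenny, neon-lit city street at night").images[0]
image.save("jenny.png")
```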
|
omrisap/TreeRPO_V1_lowe_beta_5500 | omrisap | 2025-06-20T20:21:47Z | 0 | 0 | transformers | ["transformers", "safetensors", "qwen2", "text-generation", "trl", "grpo", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-06-20T20:19:47Z |
---
library_name: transformers
tags:
- trl
- grpo
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
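Until the authors fill this in, here is a generic sketch for a Qwen2-based text-generation checkpoint, inferred from the repo tags rather than from any documentation:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "omrisap/TreeRPO_V1_lowe_beta_5500"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# The "conversational" tag suggests a chat template is available.
messages = [{"role": "user", "content": "What is 17 * 24?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```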
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
JonLoRA/deynairaLoRAv3 | JonLoRA | 2025-06-20T19:35:53Z | 0 | 0 | diffusers | ["diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us"] | text-to-image | 2025-06-20T10:34:22Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: photo of a girl
---
# Deynairalorav3
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `photo of a girl` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate

# Inputs for the hosted FLUX dev LoRA model: a prompt plus a URL to this LoRA's weights.
model_input = {
    "prompt": "photo of a girl",
    "lora_weights": "https://huggingface.co/JonLoRA/deynairaLoRAv3/resolve/main/lora.safetensors"
}

# Run the model on Replicate and save each generated image to disk.
output = replicate.run(
    "black-forest-labs/flux-dev-lora",
    input=model_input
)
for index, item in enumerate(output):
    with open(f"output_{index}.webp", "wb") as file:
        file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('JonLoRA/deynairaLoRAv3', weight_name='lora.safetensors')
image = pipeline('photo of a girl').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters).
## Training details
- Steps: 6000
- Learning rate: 0.0002
- LoRA rank: 64
## Contribute your own examples
You can use the [community tab](https://huggingface.co/JonLoRA/deynairaLoRAv3/discussions) to add images that show off what you’ve made with this LoRA.
|
Anuj5504/youtube-sentiment-v2 | Anuj5504 | 2025-06-20T19:06:11Z | 0 | 0 | null | ["safetensors", "distilbert", "emotion", "youtube", "text-classification", "region:us"] | text-classification | 2025-06-20T19:00:26Z |
---
pipeline_tag: text-classification
tags:
- distilbert
- emotion
- youtube
- safetensors
---
# YouTube Sentiment Classifier
This is a fine-tuned DistilBERT model for emotion classification of YouTube comments...
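A minimal usage sketch via the transformers pipeline API (the emotion label set is not documented on this card, so inspect the returned labels yourself):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="Anuj5504/youtube-sentiment-v2")
print(classifier("This tutorial was amazing, thank you so much!"))
```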
|
pj-mathematician/JobGTE-7b-Lora | pj-mathematician | 2025-06-20T18:22:32Z | 0 | 0 | sentence-transformers | ["sentence-transformers", "safetensors", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:124788", "loss:CachedGISTEmbedLoss", "arxiv:1908.10084", "base_model:Alibaba-NLP/gte-Qwen2-7B-instruct", "base_model:finetune:Alibaba-NLP/gte-Qwen2-7B-instruct", "autotrain_compatible", "endpoints_compatible", "region:us"] | sentence-similarity | 2025-06-20T17:52:09Z |
---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:124788
- loss:CachedGISTEmbedLoss
base_model: Alibaba-NLP/gte-Qwen2-7B-instruct
widget:
- source_sentence: 其他机械、设备和有形货物租赁服务代表
sentences:
- 其他机械和设备租赁服务工作人员
- 电子和电信设备及零部件物流经理
- 工业主厨
- source_sentence: 公交车司机
sentences:
- 表演灯光设计师
- 乙烯基地板安装工
- 国际巴士司机
- source_sentence: online communication manager
sentences:
- trades union official
- social media manager
- budget manager
- source_sentence: Projektmanagerin
sentences:
- Projektmanager/Projektmanagerin
- Category-Manager
- Infanterist
- source_sentence: Volksvertreter
sentences:
- Parlamentarier
- Oberbürgermeister
- Konsul
pipeline_tag: sentence-similarity
library_name: sentence-transformers
---
# Job-to-job matching finetuned from Alibaba-NLP/gte-Qwen2-7B-instruct
The best-performing model on [TalentCLEF 2025](https://talentclef.github.io/talentclef/) Task A. Use it for multilingual job title matching.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [Alibaba-NLP/gte-Qwen2-7B-instruct](https://huggingface.co/Alibaba-NLP/gte-Qwen2-7B-instruct) <!-- at revision a8d08b36ada9cacfe34c4d6f80957772a025daf2 -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 3584 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Datasets:**
- full_en
- full_de
- full_es
- full_zh
- mix
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: Qwen2Model
(1): Pooling({'word_embedding_dimension': 3584, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': True, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("pj-mathematician/JobGTE-7b-Lora")
# Run inference
sentences = [
'Volksvertreter',
'Parlamentarier',
'Oberbürgermeister',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 3584]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Datasets
<details><summary>full_en</summary>
#### full_en
* Dataset: full_en
* Size: 28,880 training samples
* Columns: <code>anchor</code> and <code>positive</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive |
|:--------|:-------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 2 tokens</li><li>mean: 4.4 tokens</li><li>max: 9 tokens</li></ul> | <ul><li>min: 2 tokens</li><li>mean: 4.42 tokens</li><li>max: 10 tokens</li></ul> |
* Samples:
| anchor | positive |
|:-----------------------------------------|:-----------------------------------------|
| <code>air commodore</code> | <code>flight lieutenant</code> |
| <code>command and control officer</code> | <code>flight officer</code> |
| <code>air commodore</code> | <code>command and control officer</code> |
* Loss: [<code>CachedGISTEmbedLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cachedgistembedloss) with these parameters:
```json
{'guide': SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
), 'temperature': 0.01, 'mini_batch_size': 64, 'margin_strategy': 'absolute', 'margin': 0.0}
```
</details>
<details><summary>full_de</summary>
#### full_de
* Dataset: full_de
* Size: 23,023 training samples
* Columns: <code>anchor</code> and <code>positive</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive |
|:--------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 2 tokens</li><li>mean: 9.11 tokens</li><li>max: 33 tokens</li></ul> | <ul><li>min: 2 tokens</li><li>mean: 9.41 tokens</li><li>max: 33 tokens</li></ul> |
* Samples:
| anchor | positive |
|:----------------------------------|:-----------------------------------------------------|
| <code>Staffelkommandantin</code> | <code>Kommodore</code> |
| <code>Luftwaffenoffizierin</code> | <code>Luftwaffenoffizier/Luftwaffenoffizierin</code> |
| <code>Staffelkommandantin</code> | <code>Luftwaffenoffizierin</code> |
* Loss: [<code>CachedGISTEmbedLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cachedgistembedloss) with these parameters:
```json
{'guide': SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
), 'temperature': 0.01, 'mini_batch_size': 64, 'margin_strategy': 'absolute', 'margin': 0.0}
```
</details>
<details><summary>full_es</summary>
#### full_es
* Dataset: full_es
* Size: 20,724 training samples
* Columns: <code>anchor</code> and <code>positive</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive |
|:--------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 3 tokens</li><li>mean: 9.42 tokens</li><li>max: 35 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 9.18 tokens</li><li>max: 35 tokens</li></ul> |
* Samples:
| anchor | positive |
|:------------------------------------|:-------------------------------------------|
| <code>jefe de escuadrón</code> | <code>instructor</code> |
| <code>comandante de aeronave</code> | <code>instructor de simulador</code> |
| <code>instructor</code> | <code>oficial del Ejército del Aire</code> |
* Loss: [<code>CachedGISTEmbedLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cachedgistembedloss) with these parameters:
```json
{'guide': SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
), 'temperature': 0.01, 'mini_batch_size': 64, 'margin_strategy': 'absolute', 'margin': 0.0}
```
</details>
<details><summary>full_zh</summary>
#### full_zh
* Dataset: full_zh
* Size: 30,401 training samples
* Columns: <code>anchor</code> and <code>positive</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive |
|:--------|:--------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 3 tokens</li><li>mean: 4.7 tokens</li><li>max: 12 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 5.04 tokens</li><li>max: 19 tokens</li></ul> |
* Samples:
| anchor | positive |
|:------------------|:---------------------|
| <code>技术总监</code> | <code>技术和运营总监</code> |
| <code>技术总监</code> | <code>技术主管</code> |
| <code>技术总监</code> | <code>技术艺术总监</code> |
* Loss: [<code>CachedGISTEmbedLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cachedgistembedloss) with these parameters:
```json
{'guide': SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
), 'temperature': 0.01, 'mini_batch_size': 64, 'margin_strategy': 'absolute', 'margin': 0.0}
```
</details>
<details><summary>mix</summary>
#### mix
* Dataset: mix
* Size: 21,760 training samples
* Columns: <code>anchor</code> and <code>positive</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive |
|:--------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 1 tokens</li><li>mean: 4.98 tokens</li><li>max: 14 tokens</li></ul> | <ul><li>min: 1 tokens</li><li>mean: 7.22 tokens</li><li>max: 27 tokens</li></ul> |
* Samples:
| anchor | positive |
|:------------------------------------------|:----------------------------------------------------------------|
| <code>technical manager</code> | <code>Technischer Direktor für Bühne, Film und Fernsehen</code> |
| <code>head of technical</code> | <code>directora técnica</code> |
| <code>head of technical department</code> | <code>技术艺术总监</code> |
* Loss: [<code>CachedGISTEmbedLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cachedgistembedloss) with these parameters:
```json
{'guide': SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
), 'temperature': 0.01, 'mini_batch_size': 64, 'margin_strategy': 'absolute', 'margin': 0.0}
```
</details>
### Training Hyperparameters
#### Non-Default Hyperparameters
- `per_device_train_batch_size`: 128
- `per_device_eval_batch_size`: 128
- `gradient_accumulation_steps`: 2
- `num_train_epochs`: 2
- `warmup_ratio`: 0.05
- `log_on_each_node`: False
- `fp16`: True
- `dataloader_num_workers`: 4
- `fsdp`: ['full_shard', 'auto_wrap']
- `fsdp_config`: {'transformer_layer_cls_to_wrap': ['Qwen2DecoderLayer'], 'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `ddp_find_unused_parameters`: True
- `gradient_checkpointing`: True
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: no
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 128
- `per_device_eval_batch_size`: 128
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 2
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 2
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.05
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: False
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: True
- `dataloader_num_workers`: 4
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: ['full_shard', 'auto_wrap']
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'transformer_layer_cls_to_wrap': ['Qwen2DecoderLayer'], 'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `tp_size`: 0
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: True
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: True
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss |
|:------:|:----:|:-------------:|
| 0.0165 | 1 | 4.5178 |
| 0.0331 | 2 | 3.8803 |
| 0.0496 | 3 | 2.8882 |
| 0.0661 | 4 | 4.5362 |
| 0.0826 | 5 | 3.6406 |
| 0.0992 | 6 | 3.5285 |
| 0.1157 | 7 | 4.1398 |
| 0.1322 | 8 | 4.1543 |
| 0.1488 | 9 | 4.4487 |
| 0.1653 | 10 | 4.7408 |
| 0.1818 | 11 | 2.1874 |
| 0.1983 | 12 | 3.3176 |
| 0.2149 | 13 | 2.8286 |
| 0.2314 | 14 | 2.87 |
| 0.2479 | 15 | 2.4834 |
| 0.2645 | 16 | 2.7856 |
| 0.2810 | 17 | 3.1948 |
| 0.2975 | 18 | 2.1755 |
| 0.3140 | 19 | 1.9861 |
| 0.3306 | 20 | 2.0536 |
| 0.3471 | 21 | 2.7626 |
| 0.3636 | 22 | 1.6489 |
| 0.3802 | 23 | 2.078 |
| 0.3967 | 24 | 1.5864 |
| 0.4132 | 25 | 1.8815 |
| 0.4298 | 26 | 1.8041 |
| 0.4463 | 27 | 1.7482 |
| 0.4628 | 28 | 1.191 |
| 0.4793 | 29 | 1.4166 |
| 0.4959 | 30 | 1.3215 |
| 0.5124 | 31 | 1.2907 |
| 0.5289 | 32 | 1.1294 |
| 0.5455 | 33 | 1.1586 |
| 0.5620 | 34 | 1.551 |
| 0.5785 | 35 | 1.3628 |
| 0.5950 | 36 | 0.9899 |
| 0.6116 | 37 | 1.1846 |
| 0.6281 | 38 | 1.2721 |
| 0.6446 | 39 | 1.1261 |
| 0.6612 | 40 | 0.9535 |
| 0.6777 | 41 | 1.2086 |
| 0.6942 | 42 | 0.7472 |
| 0.7107 | 43 | 1.0324 |
| 0.7273 | 44 | 1.0397 |
| 0.7438 | 45 | 1.185 |
| 0.7603 | 46 | 1.2112 |
| 0.7769 | 47 | 0.84 |
| 0.7934 | 48 | 0.9286 |
| 0.8099 | 49 | 0.8689 |
| 0.8264 | 50 | 0.9546 |
| 0.8430 | 51 | 0.8283 |
| 0.8595 | 52 | 0.757 |
| 0.8760 | 53 | 0.9199 |
| 0.8926 | 54 | 0.7404 |
| 0.9091 | 55 | 1.0995 |
| 0.9256 | 56 | 0.8231 |
| 0.9421 | 57 | 0.6297 |
| 0.9587 | 58 | 0.9869 |
| 0.9752 | 59 | 0.9597 |
| 0.9917 | 60 | 0.7025 |
| 1.0 | 61 | 0.4866 |
### Framework Versions
- Python: 3.11.11
- Sentence Transformers: 4.1.0
- Transformers: 4.51.3
- PyTorch: 2.6.0+cu124
- Accelerate: 1.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
anthracite-core/Mistral-Small-3.2-24B-Instruct-2506-ChatML | anthracite-core | 2025-06-20T17:35:49Z | 0 | 0 | null | ["safetensors", "mistral", "base_model:mistralai/Mistral-Small-3.2-24B-Instruct-2506", "base_model:finetune:mistralai/Mistral-Small-3.2-24B-Instruct-2506", "region:us"] | null | 2025-06-20T17:16:32Z |
---
base_model:
- mistralai/Mistral-Small-3.2-24B-Instruct-2506
---
**Modified Small 3.2:**
- No vision encoder
- Reused some special tokens for ChatML tokens
- Standard "Mistral" architecture
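For reference, the ChatML turn structure that the remapped special tokens support looks like this (a generic sketch of the format; whether the repo ships a matching chat template is not stated here):
```python
# Standard ChatML layout; the reused special tokens map to <|im_start|>/<|im_end|>.
prompt = (
    "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\nHello!<|im_end|>\n"
    "<|im_start|>assistant\n"
)
```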
Enjoy!
|
Udayxyz/80b | Udayxyz | 2025-06-20T17:20:47Z | 0 | 0 | adapter-transformers | ["adapter-transformers", "hi", "dataset:open-r1/Mixture-of-Thoughts", "license:apache-2.0", "region:us"] | null | 2025-06-20T17:17:57Z |
---
license: apache-2.0
datasets:
- open-r1/Mixture-of-Thoughts
language:
- hi
library_name: adapter-transformers
---
|
ProDev9515/roadwork-72-GCoFy45 | ProDev9515 | 2025-06-20T17:05:23Z | 0 | 0 | transformers | ["transformers", "safetensors", "vit", "image-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us"] | image-classification | 2025-06-20T17:05:14Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
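Until the authors fill this in, here is a generic sketch for a ViT image-classification checkpoint, inferred from the repo tags rather than from any documentation:
```python
from transformers import pipeline

# The label set (presumably roadwork-related) is undocumented; inspect the outputs.
classifier = pipeline("image-classification", model="ProDev9515/roadwork-72-GCoFy45")
print(classifier("street_scene.jpg"))  # accepts a local path or URL
```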
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mtailanian/corgy_dog_LoRA | mtailanian | 2025-06-20T16:44:18Z | 0 | 0 | diffusers | ["diffusers", "text-to-image", "diffusers-training", "lora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us"] | text-to-image | 2025-06-20T16:43:04Z |
---
base_model: stabilityai/stable-diffusion-xl-base-1.0
library_name: diffusers
license: openrail++
instance_prompt: a photo of TOK dog
widget: []
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - mtailanian/corgy_dog_LoRA
<Gallery />
## Model description
These are mtailanian/corgy_dog_LoRA LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use `a photo of TOK dog` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/mtailanian/corgy_dog_LoRA/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
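In the meantime, a minimal sketch following the standard SDXL LoRA workflow in diffusers (an assumption, not the authors' snippet):
```python
import torch
from diffusers import AutoPipelineForText2Image

# Load the SDXL base model and attach the DreamBooth LoRA weights.
pipeline = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipeline.load_lora_weights("mtailanian/corgy_dog_LoRA")

# "a photo of TOK dog" is the instance prompt used during training.
image = pipeline("a photo of TOK dog wearing a red scarf").images[0]
image.save("corgy.png")
```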
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]
|
JK-TK/BIO | JK-TK | 2025-06-20T16:37:41Z | 0 | 0 | peft | ["peft", "safetensors", "arxiv:1910.09700", "region:us"] | null | 2025-06-20T16:36:57Z |
---
base_model: unsloth/phi-3-mini-4k-instruct-bnb-4bit
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
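Until the authors fill this in, here is a generic PEFT loading sketch based on the `base_model` declared in this card's metadata (everything beyond that is an assumption):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "unsloth/phi-3-mini-4k-instruct-bnb-4bit"  # from the card metadata
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, "JK-TK/BIO")  # attach this adapter
tokenizer = AutoTokenizer.from_pretrained(base_id)
```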
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2
|
joshua-scheuplein/DAX-ViT-S-16-B | joshua-scheuplein | 2025-06-20T15:45:02Z | 0 | 0 | null | ["license:cc-by-nc-4.0", "region:us"] | null | 2025-06-20T15:44:26Z |
---
license: cc-by-nc-4.0
---
|
mradermacher/DeepSeek-V3-abliterated-i1-GGUF | mradermacher | 2025-06-20T14:36:10Z | 0 | 1 | transformers | ["transformers", "DeepSeek", "abliterated", "uncensored", "en", "base_model:huihui-ai/DeepSeek-V3-abliterated", "base_model:finetune:huihui-ai/DeepSeek-V3-abliterated", "license:apache-2.0", "endpoints_compatible", "region:us"] | null | 2025-06-12T01:21:42Z |
---
base_model: huihui-ai/DeepSeek-V3-abliterated
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- DeepSeek
- abliterated
- uncensored
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/huihui-ai/DeepSeek-V3-abliterated
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/DeepSeek-V3-abliterated-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
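For the multi-part quants below, the parts must be joined back into a single `.gguf` before loading; a minimal Python sketch (the filename follows this repo's `*.partXofY` naming):
```python
import glob
import shutil

# Concatenate e.g. *.part1of3 ... *.part3of3 into one GGUF file, in order.
parts = sorted(glob.glob("DeepSeek-V3-abliterated.i1-IQ1_S.gguf.part*"))
with open("DeepSeek-V3-abliterated.i1-IQ1_S.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as src:
            shutil.copyfileobj(src, out)
```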
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [PART 1](https://huggingface.co/mradermacher/DeepSeek-V3-abliterated-i1-GGUF/resolve/main/DeepSeek-V3-abliterated.i1-IQ1_S.gguf.part1of3) [PART 2](https://huggingface.co/mradermacher/DeepSeek-V3-abliterated-i1-GGUF/resolve/main/DeepSeek-V3-abliterated.i1-IQ1_S.gguf.part2of3) [PART 3](https://huggingface.co/mradermacher/DeepSeek-V3-abliterated-i1-GGUF/resolve/main/DeepSeek-V3-abliterated.i1-IQ1_S.gguf.part3of3) | i1-IQ1_S | 133.8 | for the desperate |
| [PART 1](https://huggingface.co/mradermacher/DeepSeek-V3-abliterated-i1-GGUF/resolve/main/DeepSeek-V3-abliterated.i1-IQ1_M.gguf.part1of4) [PART 2](https://huggingface.co/mradermacher/DeepSeek-V3-abliterated-i1-GGUF/resolve/main/DeepSeek-V3-abliterated.i1-IQ1_M.gguf.part2of4) [PART 3](https://huggingface.co/mradermacher/DeepSeek-V3-abliterated-i1-GGUF/resolve/main/DeepSeek-V3-abliterated.i1-IQ1_M.gguf.part3of4) [PART 4](https://huggingface.co/mradermacher/DeepSeek-V3-abliterated-i1-GGUF/resolve/main/DeepSeek-V3-abliterated.i1-IQ1_M.gguf.part4of4) | i1-IQ1_M | 149.2 | mostly desperate |
| [PART 1](https://huggingface.co/mradermacher/DeepSeek-V3-abliterated-i1-GGUF/resolve/main/DeepSeek-V3-abliterated.i1-IQ2_XXS.gguf.part1of4) [PART 2](https://huggingface.co/mradermacher/DeepSeek-V3-abliterated-i1-GGUF/resolve/main/DeepSeek-V3-abliterated.i1-IQ2_XXS.gguf.part2of4) [PART 3](https://huggingface.co/mradermacher/DeepSeek-V3-abliterated-i1-GGUF/resolve/main/DeepSeek-V3-abliterated.i1-IQ2_XXS.gguf.part3of4) [PART 4](https://huggingface.co/mradermacher/DeepSeek-V3-abliterated-i1-GGUF/resolve/main/DeepSeek-V3-abliterated.i1-IQ2_XXS.gguf.part4of4) | i1-IQ2_XXS | 174.7 | |
| [PART 1](https://huggingface.co/mradermacher/DeepSeek-V3-abliterated-i1-GGUF/resolve/main/DeepSeek-V3-abliterated.i1-IQ2_XS.gguf.part1of4) [PART 2](https://huggingface.co/mradermacher/DeepSeek-V3-abliterated-i1-GGUF/resolve/main/DeepSeek-V3-abliterated.i1-IQ2_XS.gguf.part2of4) [PART 3](https://huggingface.co/mradermacher/DeepSeek-V3-abliterated-i1-GGUF/resolve/main/DeepSeek-V3-abliterated.i1-IQ2_XS.gguf.part3of4) [PART 4](https://huggingface.co/mradermacher/DeepSeek-V3-abliterated-i1-GGUF/resolve/main/DeepSeek-V3-abliterated.i1-IQ2_XS.gguf.part4of4) | i1-IQ2_XS | 195.3 | |
| [PART 1](https://huggingface.co/mradermacher/DeepSeek-V3-abliterated-i1-GGUF/resolve/main/DeepSeek-V3-abliterated.i1-IQ2_S.gguf.part1of4) [PART 2](https://huggingface.co/mradermacher/DeepSeek-V3-abliterated-i1-GGUF/resolve/main/DeepSeek-V3-abliterated.i1-IQ2_S.gguf.part2of4) [PART 3](https://huggingface.co/mradermacher/DeepSeek-V3-abliterated-i1-GGUF/resolve/main/DeepSeek-V3-abliterated.i1-IQ2_S.gguf.part3of4) [PART 4](https://huggingface.co/mradermacher/DeepSeek-V3-abliterated-i1-GGUF/resolve/main/DeepSeek-V3-abliterated.i1-IQ2_S.gguf.part4of4) | i1-IQ2_S | 197.2 | |
| [P1](https://huggingface.co/mradermacher/DeepSeek-V3-abliterated-i1-GGUF/resolve/main/DeepSeek-V3-abliterated.i1-IQ2_M.gguf.part1of5) [P2](https://huggingface.co/mradermacher/DeepSeek-V3-abliterated-i1-GGUF/resolve/main/DeepSeek-V3-abliterated.i1-IQ2_M.gguf.part2of5) [P3](https://huggingface.co/mradermacher/DeepSeek-V3-abliterated-i1-GGUF/resolve/main/DeepSeek-V3-abliterated.i1-IQ2_M.gguf.part3of5) [P4](https://huggingface.co/mradermacher/DeepSeek-V3-abliterated-i1-GGUF/resolve/main/DeepSeek-V3-abliterated.i1-IQ2_M.gguf.part4of5) [P5](https://huggingface.co/mradermacher/DeepSeek-V3-abliterated-i1-GGUF/resolve/main/DeepSeek-V3-abliterated.i1-IQ2_M.gguf.part5of5) | i1-IQ2_M | 217.7 | |
| [P1](https://huggingface.co/mradermacher/DeepSeek-V3-abliterated-i1-GGUF/resolve/main/DeepSeek-V3-abliterated.i1-Q2_K_S.gguf.part1of5) [P2](https://huggingface.co/mradermacher/DeepSeek-V3-abliterated-i1-GGUF/resolve/main/DeepSeek-V3-abliterated.i1-Q2_K_S.gguf.part2of5) [P3](https://huggingface.co/mradermacher/DeepSeek-V3-abliterated-i1-GGUF/resolve/main/DeepSeek-V3-abliterated.i1-Q2_K_S.gguf.part3of5) [P4](https://huggingface.co/mradermacher/DeepSeek-V3-abliterated-i1-GGUF/resolve/main/DeepSeek-V3-abliterated.i1-Q2_K_S.gguf.part4of5) [P5](https://huggingface.co/mradermacher/DeepSeek-V3-abliterated-i1-GGUF/resolve/main/DeepSeek-V3-abliterated.i1-Q2_K_S.gguf.part5of5) | i1-Q2_K_S | 224.9 | very low quality |
| [P1](https://huggingface.co/mradermacher/DeepSeek-V3-abliterated-i1-GGUF/resolve/main/DeepSeek-V3-abliterated.i1-Q2_K.gguf.part1of5) [P2](https://huggingface.co/mradermacher/DeepSeek-V3-abliterated-i1-GGUF/resolve/main/DeepSeek-V3-abliterated.i1-Q2_K.gguf.part2of5) [P3](https://huggingface.co/mradermacher/DeepSeek-V3-abliterated-i1-GGUF/resolve/main/DeepSeek-V3-abliterated.i1-Q2_K.gguf.part3of5) [P4](https://huggingface.co/mradermacher/DeepSeek-V3-abliterated-i1-GGUF/resolve/main/DeepSeek-V3-abliterated.i1-Q2_K.gguf.part4of5) [P5](https://huggingface.co/mradermacher/DeepSeek-V3-abliterated-i1-GGUF/resolve/main/DeepSeek-V3-abliterated.i1-Q2_K.gguf.part5of5) | i1-Q2_K | 244.2 | IQ3_XXS probably better |
| [P1](https://huggingface.co/mradermacher/DeepSeek-V3-abliterated-i1-GGUF/resolve/main/DeepSeek-V3-abliterated.i1-IQ3_XXS.gguf.part1of6) [P2](https://huggingface.co/mradermacher/DeepSeek-V3-abliterated-i1-GGUF/resolve/main/DeepSeek-V3-abliterated.i1-IQ3_XXS.gguf.part2of6) [P3](https://huggingface.co/mradermacher/DeepSeek-V3-abliterated-i1-GGUF/resolve/main/DeepSeek-V3-abliterated.i1-IQ3_XXS.gguf.part3of6) [P4](https://huggingface.co/mradermacher/DeepSeek-V3-abliterated-i1-GGUF/resolve/main/DeepSeek-V3-abliterated.i1-IQ3_XXS.gguf.part4of6) [P5](https://huggingface.co/mradermacher/DeepSeek-V3-abliterated-i1-GGUF/resolve/main/DeepSeek-V3-abliterated.i1-IQ3_XXS.gguf.part5of6) [P6](https://huggingface.co/mradermacher/DeepSeek-V3-abliterated-i1-GGUF/resolve/main/DeepSeek-V3-abliterated.i1-IQ3_XXS.gguf.part6of6) | i1-IQ3_XXS | 258.1 | lower quality |
| [P1](https://huggingface.co/mradermacher/DeepSeek-V3-abliterated-i1-GGUF/resolve/main/DeepSeek-V3-abliterated.i1-IQ3_XS.gguf.part1of6) [P2](https://huggingface.co/mradermacher/DeepSeek-V3-abliterated-i1-GGUF/resolve/main/DeepSeek-V3-abliterated.i1-IQ3_XS.gguf.part2of6) [P3](https://huggingface.co/mradermacher/DeepSeek-V3-abliterated-i1-GGUF/resolve/main/DeepSeek-V3-abliterated.i1-IQ3_XS.gguf.part3of6) [P4](https://huggingface.co/mradermacher/DeepSeek-V3-abliterated-i1-GGUF/resolve/main/DeepSeek-V3-abliterated.i1-IQ3_XS.gguf.part4of6) [P5](https://huggingface.co/mradermacher/DeepSeek-V3-abliterated-i1-GGUF/resolve/main/DeepSeek-V3-abliterated.i1-IQ3_XS.gguf.part5of6) [P6](https://huggingface.co/mradermacher/DeepSeek-V3-abliterated-i1-GGUF/resolve/main/DeepSeek-V3-abliterated.i1-IQ3_XS.gguf.part6of6) | i1-IQ3_XS | 273.0 | |
| [P1](https://huggingface.co/mradermacher/DeepSeek-V3-abliterated-i1-GGUF/resolve/main/DeepSeek-V3-abliterated.i1-IQ3_S.gguf.part1of6) [P2](https://huggingface.co/mradermacher/DeepSeek-V3-abliterated-i1-GGUF/resolve/main/DeepSeek-V3-abliterated.i1-IQ3_S.gguf.part2of6) [P3](https://huggingface.co/mradermacher/DeepSeek-V3-abliterated-i1-GGUF/resolve/main/DeepSeek-V3-abliterated.i1-IQ3_S.gguf.part3of6) [P4](https://huggingface.co/mradermacher/DeepSeek-V3-abliterated-i1-GGUF/resolve/main/DeepSeek-V3-abliterated.i1-IQ3_S.gguf.part4of6) [P5](https://huggingface.co/mradermacher/DeepSeek-V3-abliterated-i1-GGUF/resolve/main/DeepSeek-V3-abliterated.i1-IQ3_S.gguf.part5of6) [P6](https://huggingface.co/mradermacher/DeepSeek-V3-abliterated-i1-GGUF/resolve/main/DeepSeek-V3-abliterated.i1-IQ3_S.gguf.part6of6) | i1-IQ3_S | 289.3 | beats Q3_K* |
| [P1](https://huggingface.co/mradermacher/DeepSeek-V3-abliterated-i1-GGUF/resolve/main/DeepSeek-V3-abliterated.i1-Q3_K_S.gguf.part1of6) [P2](https://huggingface.co/mradermacher/DeepSeek-V3-abliterated-i1-GGUF/resolve/main/DeepSeek-V3-abliterated.i1-Q3_K_S.gguf.part2of6) [P3](https://huggingface.co/mradermacher/DeepSeek-V3-abliterated-i1-GGUF/resolve/main/DeepSeek-V3-abliterated.i1-Q3_K_S.gguf.part3of6) [P4](https://huggingface.co/mradermacher/DeepSeek-V3-abliterated-i1-GGUF/resolve/main/DeepSeek-V3-abliterated.i1-Q3_K_S.gguf.part4of6) [P5](https://huggingface.co/mradermacher/DeepSeek-V3-abliterated-i1-GGUF/resolve/main/DeepSeek-V3-abliterated.i1-Q3_K_S.gguf.part5of6) [P6](https://huggingface.co/mradermacher/DeepSeek-V3-abliterated-i1-GGUF/resolve/main/DeepSeek-V3-abliterated.i1-Q3_K_S.gguf.part6of6) | i1-Q3_K_S | 289.3 | IQ3_XS probably better |
| [P1](https://huggingface.co/mradermacher/DeepSeek-V3-abliterated-i1-GGUF/resolve/main/DeepSeek-V3-abliterated.i1-IQ3_M.gguf.part1of6) [P2](https://huggingface.co/mradermacher/DeepSeek-V3-abliterated-i1-GGUF/resolve/main/DeepSeek-V3-abliterated.i1-IQ3_M.gguf.part2of6) [P3](https://huggingface.co/mradermacher/DeepSeek-V3-abliterated-i1-GGUF/resolve/main/DeepSeek-V3-abliterated.i1-IQ3_M.gguf.part3of6) [P4](https://huggingface.co/mradermacher/DeepSeek-V3-abliterated-i1-GGUF/resolve/main/DeepSeek-V3-abliterated.i1-IQ3_M.gguf.part4of6) [P5](https://huggingface.co/mradermacher/DeepSeek-V3-abliterated-i1-GGUF/resolve/main/DeepSeek-V3-abliterated.i1-IQ3_M.gguf.part5of6) [P6](https://huggingface.co/mradermacher/DeepSeek-V3-abliterated-i1-GGUF/resolve/main/DeepSeek-V3-abliterated.i1-IQ3_M.gguf.part6of6) | i1-IQ3_M | 292.3 | |
| [P1](https://huggingface.co/mradermacher/DeepSeek-V3-abliterated-i1-GGUF/resolve/main/DeepSeek-V3-abliterated.i1-Q3_K_M.gguf.part1of7) [P2](https://huggingface.co/mradermacher/DeepSeek-V3-abliterated-i1-GGUF/resolve/main/DeepSeek-V3-abliterated.i1-Q3_K_M.gguf.part2of7) [P3](https://huggingface.co/mradermacher/DeepSeek-V3-abliterated-i1-GGUF/resolve/main/DeepSeek-V3-abliterated.i1-Q3_K_M.gguf.part3of7) [P4](https://huggingface.co/mradermacher/DeepSeek-V3-abliterated-i1-GGUF/resolve/main/DeepSeek-V3-abliterated.i1-Q3_K_M.gguf.part4of7) [P5](https://huggingface.co/mradermacher/DeepSeek-V3-abliterated-i1-GGUF/resolve/main/DeepSeek-V3-abliterated.i1-Q3_K_M.gguf.part5of7) [P6](https://huggingface.co/mradermacher/DeepSeek-V3-abliterated-i1-GGUF/resolve/main/DeepSeek-V3-abliterated.i1-Q3_K_M.gguf.part6of7) [P7](https://huggingface.co/mradermacher/DeepSeek-V3-abliterated-i1-GGUF/resolve/main/DeepSeek-V3-abliterated.i1-Q3_K_M.gguf.part7of7) | i1-Q3_K_M | 319.4 | IQ3_S probably better |
| [P1](https://huggingface.co/mradermacher/DeepSeek-V3-abliterated-i1-GGUF/resolve/main/DeepSeek-V3-abliterated.i1-Q3_K_L.gguf.part1of8) [P2](https://huggingface.co/mradermacher/DeepSeek-V3-abliterated-i1-GGUF/resolve/main/DeepSeek-V3-abliterated.i1-Q3_K_L.gguf.part2of8) [P3](https://huggingface.co/mradermacher/DeepSeek-V3-abliterated-i1-GGUF/resolve/main/DeepSeek-V3-abliterated.i1-Q3_K_L.gguf.part3of8) [P4](https://huggingface.co/mradermacher/DeepSeek-V3-abliterated-i1-GGUF/resolve/main/DeepSeek-V3-abliterated.i1-Q3_K_L.gguf.part4of8) [P5](https://huggingface.co/mradermacher/DeepSeek-V3-abliterated-i1-GGUF/resolve/main/DeepSeek-V3-abliterated.i1-Q3_K_L.gguf.part5of8) [P6](https://huggingface.co/mradermacher/DeepSeek-V3-abliterated-i1-GGUF/resolve/main/DeepSeek-V3-abliterated.i1-Q3_K_L.gguf.part6of8) [P7](https://huggingface.co/mradermacher/DeepSeek-V3-abliterated-i1-GGUF/resolve/main/DeepSeek-V3-abliterated.i1-Q3_K_L.gguf.part7of8) [P8](https://huggingface.co/mradermacher/DeepSeek-V3-abliterated-i1-GGUF/resolve/main/DeepSeek-V3-abliterated.i1-Q3_K_L.gguf.part8of8) | i1-Q3_K_L | 347.6 | IQ3_M probably better |
| [P1](https://huggingface.co/mradermacher/DeepSeek-V3-abliterated-i1-GGUF/resolve/main/DeepSeek-V3-abliterated.i1-IQ4_XS.gguf.part1of8) [P2](https://huggingface.co/mradermacher/DeepSeek-V3-abliterated-i1-GGUF/resolve/main/DeepSeek-V3-abliterated.i1-IQ4_XS.gguf.part2of8) [P3](https://huggingface.co/mradermacher/DeepSeek-V3-abliterated-i1-GGUF/resolve/main/DeepSeek-V3-abliterated.i1-IQ4_XS.gguf.part3of8) [P4](https://huggingface.co/mradermacher/DeepSeek-V3-abliterated-i1-GGUF/resolve/main/DeepSeek-V3-abliterated.i1-IQ4_XS.gguf.part4of8) [P5](https://huggingface.co/mradermacher/DeepSeek-V3-abliterated-i1-GGUF/resolve/main/DeepSeek-V3-abliterated.i1-IQ4_XS.gguf.part5of8) [P6](https://huggingface.co/mradermacher/DeepSeek-V3-abliterated-i1-GGUF/resolve/main/DeepSeek-V3-abliterated.i1-IQ4_XS.gguf.part6of8) [P7](https://huggingface.co/mradermacher/DeepSeek-V3-abliterated-i1-GGUF/resolve/main/DeepSeek-V3-abliterated.i1-IQ4_XS.gguf.part7of8) [P8](https://huggingface.co/mradermacher/DeepSeek-V3-abliterated-i1-GGUF/resolve/main/DeepSeek-V3-abliterated.i1-IQ4_XS.gguf.part8of8) | i1-IQ4_XS | 357.2 | |
| [P1](https://huggingface.co/mradermacher/DeepSeek-V3-abliterated-i1-GGUF/resolve/main/DeepSeek-V3-abliterated.i1-Q4_0.gguf.part1of8) [P2](https://huggingface.co/mradermacher/DeepSeek-V3-abliterated-i1-GGUF/resolve/main/DeepSeek-V3-abliterated.i1-Q4_0.gguf.part2of8) [P3](https://huggingface.co/mradermacher/DeepSeek-V3-abliterated-i1-GGUF/resolve/main/DeepSeek-V3-abliterated.i1-Q4_0.gguf.part3of8) [P4](https://huggingface.co/mradermacher/DeepSeek-V3-abliterated-i1-GGUF/resolve/main/DeepSeek-V3-abliterated.i1-Q4_0.gguf.part4of8) [P5](https://huggingface.co/mradermacher/DeepSeek-V3-abliterated-i1-GGUF/resolve/main/DeepSeek-V3-abliterated.i1-Q4_0.gguf.part5of8) [P6](https://huggingface.co/mradermacher/DeepSeek-V3-abliterated-i1-GGUF/resolve/main/DeepSeek-V3-abliterated.i1-Q4_0.gguf.part6of8) [P7](https://huggingface.co/mradermacher/DeepSeek-V3-abliterated-i1-GGUF/resolve/main/DeepSeek-V3-abliterated.i1-Q4_0.gguf.part7of8) [P8](https://huggingface.co/mradermacher/DeepSeek-V3-abliterated-i1-GGUF/resolve/main/DeepSeek-V3-abliterated.i1-Q4_0.gguf.part8of8) | i1-Q4_0 | 379.1 | fast, low quality |
| [P1](https://huggingface.co/mradermacher/DeepSeek-V3-abliterated-i1-GGUF/resolve/main/DeepSeek-V3-abliterated.i1-Q4_K_S.gguf.part1of8) [P2](https://huggingface.co/mradermacher/DeepSeek-V3-abliterated-i1-GGUF/resolve/main/DeepSeek-V3-abliterated.i1-Q4_K_S.gguf.part2of8) [P3](https://huggingface.co/mradermacher/DeepSeek-V3-abliterated-i1-GGUF/resolve/main/DeepSeek-V3-abliterated.i1-Q4_K_S.gguf.part3of8) [P4](https://huggingface.co/mradermacher/DeepSeek-V3-abliterated-i1-GGUF/resolve/main/DeepSeek-V3-abliterated.i1-Q4_K_S.gguf.part4of8) [P5](https://huggingface.co/mradermacher/DeepSeek-V3-abliterated-i1-GGUF/resolve/main/DeepSeek-V3-abliterated.i1-Q4_K_S.gguf.part5of8) [P6](https://huggingface.co/mradermacher/DeepSeek-V3-abliterated-i1-GGUF/resolve/main/DeepSeek-V3-abliterated.i1-Q4_K_S.gguf.part6of8) [P7](https://huggingface.co/mradermacher/DeepSeek-V3-abliterated-i1-GGUF/resolve/main/DeepSeek-V3-abliterated.i1-Q4_K_S.gguf.part7of8) [P8](https://huggingface.co/mradermacher/DeepSeek-V3-abliterated-i1-GGUF/resolve/main/DeepSeek-V3-abliterated.i1-Q4_K_S.gguf.part8of8) | i1-Q4_K_S | 380.2 | optimal size/speed/quality |
| [P1](https://huggingface.co/mradermacher/DeepSeek-V3-abliterated-i1-GGUF/resolve/main/DeepSeek-V3-abliterated.i1-Q4_K_M.gguf.part1of9) [P2](https://huggingface.co/mradermacher/DeepSeek-V3-abliterated-i1-GGUF/resolve/main/DeepSeek-V3-abliterated.i1-Q4_K_M.gguf.part2of9) [P3](https://huggingface.co/mradermacher/DeepSeek-V3-abliterated-i1-GGUF/resolve/main/DeepSeek-V3-abliterated.i1-Q4_K_M.gguf.part3of9) [P4](https://huggingface.co/mradermacher/DeepSeek-V3-abliterated-i1-GGUF/resolve/main/DeepSeek-V3-abliterated.i1-Q4_K_M.gguf.part4of9) [P5](https://huggingface.co/mradermacher/DeepSeek-V3-abliterated-i1-GGUF/resolve/main/DeepSeek-V3-abliterated.i1-Q4_K_M.gguf.part5of9) [P6](https://huggingface.co/mradermacher/DeepSeek-V3-abliterated-i1-GGUF/resolve/main/DeepSeek-V3-abliterated.i1-Q4_K_M.gguf.part6of9) [P7](https://huggingface.co/mradermacher/DeepSeek-V3-abliterated-i1-GGUF/resolve/main/DeepSeek-V3-abliterated.i1-Q4_K_M.gguf.part7of9) [P8](https://huggingface.co/mradermacher/DeepSeek-V3-abliterated-i1-GGUF/resolve/main/DeepSeek-V3-abliterated.i1-Q4_K_M.gguf.part8of9) [P9](https://huggingface.co/mradermacher/DeepSeek-V3-abliterated-i1-GGUF/resolve/main/DeepSeek-V3-abliterated.i1-Q4_K_M.gguf.part9of9) | i1-Q4_K_M | 404.6 | fast, recommended |
| [P1](https://huggingface.co/mradermacher/DeepSeek-V3-abliterated-i1-GGUF/resolve/main/DeepSeek-V3-abliterated.i1-Q4_1.gguf.part1of9) [P2](https://huggingface.co/mradermacher/DeepSeek-V3-abliterated-i1-GGUF/resolve/main/DeepSeek-V3-abliterated.i1-Q4_1.gguf.part2of9) [P3](https://huggingface.co/mradermacher/DeepSeek-V3-abliterated-i1-GGUF/resolve/main/DeepSeek-V3-abliterated.i1-Q4_1.gguf.part3of9) [P4](https://huggingface.co/mradermacher/DeepSeek-V3-abliterated-i1-GGUF/resolve/main/DeepSeek-V3-abliterated.i1-Q4_1.gguf.part4of9) [P5](https://huggingface.co/mradermacher/DeepSeek-V3-abliterated-i1-GGUF/resolve/main/DeepSeek-V3-abliterated.i1-Q4_1.gguf.part5of9) [P6](https://huggingface.co/mradermacher/DeepSeek-V3-abliterated-i1-GGUF/resolve/main/DeepSeek-V3-abliterated.i1-Q4_1.gguf.part6of9) [P7](https://huggingface.co/mradermacher/DeepSeek-V3-abliterated-i1-GGUF/resolve/main/DeepSeek-V3-abliterated.i1-Q4_1.gguf.part7of9) [P8](https://huggingface.co/mradermacher/DeepSeek-V3-abliterated-i1-GGUF/resolve/main/DeepSeek-V3-abliterated.i1-Q4_1.gguf.part8of9) [P9](https://huggingface.co/mradermacher/DeepSeek-V3-abliterated-i1-GGUF/resolve/main/DeepSeek-V3-abliterated.i1-Q4_1.gguf.part9of9) | i1-Q4_1 | 420.0 | |
| [P1](https://huggingface.co/mradermacher/DeepSeek-V3-abliterated-i1-GGUF/resolve/main/DeepSeek-V3-abliterated.i1-Q5_K_S.gguf.part01of10) [P2](https://huggingface.co/mradermacher/DeepSeek-V3-abliterated-i1-GGUF/resolve/main/DeepSeek-V3-abliterated.i1-Q5_K_S.gguf.part02of10) [P3](https://huggingface.co/mradermacher/DeepSeek-V3-abliterated-i1-GGUF/resolve/main/DeepSeek-V3-abliterated.i1-Q5_K_S.gguf.part03of10) [P4](https://huggingface.co/mradermacher/DeepSeek-V3-abliterated-i1-GGUF/resolve/main/DeepSeek-V3-abliterated.i1-Q5_K_S.gguf.part04of10) [P5](https://huggingface.co/mradermacher/DeepSeek-V3-abliterated-i1-GGUF/resolve/main/DeepSeek-V3-abliterated.i1-Q5_K_S.gguf.part05of10) [P6](https://huggingface.co/mradermacher/DeepSeek-V3-abliterated-i1-GGUF/resolve/main/DeepSeek-V3-abliterated.i1-Q5_K_S.gguf.part06of10) [P7](https://huggingface.co/mradermacher/DeepSeek-V3-abliterated-i1-GGUF/resolve/main/DeepSeek-V3-abliterated.i1-Q5_K_S.gguf.part07of10) [P8](https://huggingface.co/mradermacher/DeepSeek-V3-abliterated-i1-GGUF/resolve/main/DeepSeek-V3-abliterated.i1-Q5_K_S.gguf.part08of10) [P9](https://huggingface.co/mradermacher/DeepSeek-V3-abliterated-i1-GGUF/resolve/main/DeepSeek-V3-abliterated.i1-Q5_K_S.gguf.part09of10) [P10](https://huggingface.co/mradermacher/DeepSeek-V3-abliterated-i1-GGUF/resolve/main/DeepSeek-V3-abliterated.i1-Q5_K_S.gguf.part10of10) | i1-Q5_K_S | 461.9 | |
| [P1](https://huggingface.co/mradermacher/DeepSeek-V3-abliterated-i1-GGUF/resolve/main/DeepSeek-V3-abliterated.i1-Q5_K_M.gguf.part01of10) [P2](https://huggingface.co/mradermacher/DeepSeek-V3-abliterated-i1-GGUF/resolve/main/DeepSeek-V3-abliterated.i1-Q5_K_M.gguf.part02of10) [P3](https://huggingface.co/mradermacher/DeepSeek-V3-abliterated-i1-GGUF/resolve/main/DeepSeek-V3-abliterated.i1-Q5_K_M.gguf.part03of10) [P4](https://huggingface.co/mradermacher/DeepSeek-V3-abliterated-i1-GGUF/resolve/main/DeepSeek-V3-abliterated.i1-Q5_K_M.gguf.part04of10) [P5](https://huggingface.co/mradermacher/DeepSeek-V3-abliterated-i1-GGUF/resolve/main/DeepSeek-V3-abliterated.i1-Q5_K_M.gguf.part05of10) [P6](https://huggingface.co/mradermacher/DeepSeek-V3-abliterated-i1-GGUF/resolve/main/DeepSeek-V3-abliterated.i1-Q5_K_M.gguf.part06of10) [P7](https://huggingface.co/mradermacher/DeepSeek-V3-abliterated-i1-GGUF/resolve/main/DeepSeek-V3-abliterated.i1-Q5_K_M.gguf.part07of10) [P8](https://huggingface.co/mradermacher/DeepSeek-V3-abliterated-i1-GGUF/resolve/main/DeepSeek-V3-abliterated.i1-Q5_K_M.gguf.part08of10) [P9](https://huggingface.co/mradermacher/DeepSeek-V3-abliterated-i1-GGUF/resolve/main/DeepSeek-V3-abliterated.i1-Q5_K_M.gguf.part09of10) [P10](https://huggingface.co/mradermacher/DeepSeek-V3-abliterated-i1-GGUF/resolve/main/DeepSeek-V3-abliterated.i1-Q5_K_M.gguf.part10of10) | i1-Q5_K_M | 475.5 | |
| [P1](https://huggingface.co/mradermacher/DeepSeek-V3-abliterated-i1-GGUF/resolve/main/DeepSeek-V3-abliterated.i1-Q6_K.gguf.part01of12) [P2](https://huggingface.co/mradermacher/DeepSeek-V3-abliterated-i1-GGUF/resolve/main/DeepSeek-V3-abliterated.i1-Q6_K.gguf.part02of12) [P3](https://huggingface.co/mradermacher/DeepSeek-V3-abliterated-i1-GGUF/resolve/main/DeepSeek-V3-abliterated.i1-Q6_K.gguf.part03of12) [P4](https://huggingface.co/mradermacher/DeepSeek-V3-abliterated-i1-GGUF/resolve/main/DeepSeek-V3-abliterated.i1-Q6_K.gguf.part04of12) [P5](https://huggingface.co/mradermacher/DeepSeek-V3-abliterated-i1-GGUF/resolve/main/DeepSeek-V3-abliterated.i1-Q6_K.gguf.part05of12) [P6](https://huggingface.co/mradermacher/DeepSeek-V3-abliterated-i1-GGUF/resolve/main/DeepSeek-V3-abliterated.i1-Q6_K.gguf.part06of12) [P7](https://huggingface.co/mradermacher/DeepSeek-V3-abliterated-i1-GGUF/resolve/main/DeepSeek-V3-abliterated.i1-Q6_K.gguf.part07of12) [P8](https://huggingface.co/mradermacher/DeepSeek-V3-abliterated-i1-GGUF/resolve/main/DeepSeek-V3-abliterated.i1-Q6_K.gguf.part08of12) [P9](https://huggingface.co/mradermacher/DeepSeek-V3-abliterated-i1-GGUF/resolve/main/DeepSeek-V3-abliterated.i1-Q6_K.gguf.part09of12) [P10](https://huggingface.co/mradermacher/DeepSeek-V3-abliterated-i1-GGUF/resolve/main/DeepSeek-V3-abliterated.i1-Q6_K.gguf.part10of12) [P11](https://huggingface.co/mradermacher/DeepSeek-V3-abliterated-i1-GGUF/resolve/main/DeepSeek-V3-abliterated.i1-Q6_K.gguf.part11of12) [P12](https://huggingface.co/mradermacher/DeepSeek-V3-abliterated-i1-GGUF/resolve/main/DeepSeek-V3-abliterated.i1-Q6_K.gguf.part12of12) | i1-Q6_K | 551.0 | practically like static Q6_K |
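The larger quants above are split into `partXofY` pieces that must be concatenated, in order, into a single `.gguf` before loading. A minimal Python sketch (the file name and part count below are just one example row from the table; adjust them to the quant you downloaded):

```python
# Join downloaded .partXofY pieces of a split GGUF into one file.
# Streams part by part, so memory use stays small.
import shutil

stem = "DeepSeek-V3-abliterated.i1-Q4_K_S.gguf"  # example from the table above
nparts = 8

with open(stem, "wb") as out:
    for i in range(1, nparts + 1):
        with open(f"{stem}.part{i}of{nparts}", "rb") as part:
            shutil.copyfileobj(part, out)  # parts must be joined in order
```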
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
Josephinepassananti/sd21-kamala_ft_dataset_512_shaded_0.05_target_marilyn_monroe-bs1-steps5000-lr1e-04
|
Josephinepassananti
| 2025-06-20T14:18:20Z | 0 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"diffusers-training",
"lora",
"base_model:stabilityai/stable-diffusion-2-1",
"base_model:adapter:stabilityai/stable-diffusion-2-1",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2025-06-20T13:48:29Z |
---
base_model: stabilityai/stable-diffusion-2-1
library_name: diffusers
license: creativeml-openrail-m
inference: true
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- diffusers-training
- lora
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# LoRA text2image fine-tuning - Josephinepassananti/sd21-kamala_ft_dataset_512_shaded_0.05_target_marilyn_monroe-bs1-steps5000-lr1e-04
These are LoRA adaptation weights for stabilityai/stable-diffusion-2-1, fine-tuned on an unspecified dataset. Some example images are shown below.




## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
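Until the official snippet lands, here is a minimal sketch of one plausible way to run these weights with `diffusers` (untested against this repository; it assumes the weights follow the standard layout produced by the diffusers LoRA training script):

```python
# Sketch only: load the SD 2.1 base pipeline and apply this repo's LoRA weights.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights(
    "Josephinepassananti/sd21-kamala_ft_dataset_512_shaded_0.05_target_marilyn_monroe-bs1-steps5000-lr1e-04"
)
image = pipe("a portrait photo", num_inference_steps=30).images[0]  # prompt is illustrative
image.save("sample.png")
```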
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]
|
New-Clip-Katrina-Lim-18-viral-Videos/FULL.VIDEO.Katrina.Lim.Viral.Video.Tutorial.Official
|
New-Clip-Katrina-Lim-18-viral-Videos
| 2025-06-20T13:29:49Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-20T13:29:43Z |
<a href="https://sdu.sk/uLf"><img src="https://i.ibb.co.com/xMMVF88/686577567.gif" alt="fsd" /></a>
<a href="https://sdu.sk/uLf" rel="nofollow">►✅ 𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► (𝗦𝗶𝗴𝗻 𝗨𝗽 𝘁𝗼 𝙁𝙪𝙡𝙡 𝗪𝗮𝘁𝗰𝗵 𝙑𝙞𝙙𝙚𝙤❤️❤️)</a>
<a href="https://sdu.sk/uLf" rel="nofollow">🔴 ➤►✅𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► (𝐅𝐮𝐥𝐥 𝐯𝐢𝐝𝐞𝐨 𝐥𝐢𝐧𝐤)</a>
|
SZABO-EMESE-FULL-VIDEO/SZABO.EMESE.VIDEO.SZABO.MESI.VIDEO.SZABO.MESI.X.CRESSER.MESI
|
SZABO-EMESE-FULL-VIDEO
| 2025-06-20T13:27:21Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-20T13:27:14Z |
<a href="https://sdu.sk/uLf"><img src="https://i.ibb.co.com/xMMVF88/686577567.gif" alt="fsd" /></a>
<a href="https://sdu.sk/uLf" rel="nofollow">►✅ 𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► (𝗦𝗶𝗴𝗻 𝗨𝗽 𝘁𝗼 𝙁𝙪𝙡𝙡 𝗪𝗮𝘁𝗰𝗵 𝙑𝙞𝙙𝙚𝙤❤️❤️)</a>
<a href="https://sdu.sk/uLf" rel="nofollow">🔴 ➤►✅𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► (𝐅𝐮𝐥𝐥 𝐯𝐢𝐝𝐞𝐨 𝐥𝐢𝐧𝐤)</a>
|
pkulshrestha/pricer-2025-06-20_13.25.21
|
pkulshrestha
| 2025-06-20T13:26:41Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-06-20T13:26:41Z |
---
license: apache-2.0
---
|
Casual-Autopsy/Mistral-Small-RP-imatrix-Files_128-chunks_1024-4096-ctx
|
Casual-Autopsy
| 2025-06-20T11:46:20Z | 0 | 0 | null |
[
"imatrix",
"text-generation",
"region:us"
] |
text-generation
| 2025-06-15T11:25:52Z |
---
pipeline_tag: text-generation
tags:
- imatrix
---
A repository of imatrix files I've created using bartowski's dataset and virt-io's extended RP dataset.
First they were trained on bartowski's dataset for 64 chunks at 1k ctx, averaging ~5.5 ppl.
<br>Next they were trained on the extended RP dataset on two separate chunks totalling 64 at 4k ctx.
<br>The first chunk averages ~3.8-4.0 ppl, and the second chunk averages ~2.2-2.4 ppl.
I've uploaded these because my internet is too slow to upload the models themselves.
|
snezhanata/model_to_delete
|
snezhanata
| 2025-06-20T11:10:34Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"llama-factory",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-20T11:07:47Z |
---
library_name: transformers
tags:
- llama-factory
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
rsicproject/BART-UCM
|
rsicproject
| 2025-06-20T11:07:02Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bart",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-20T11:06:18Z |
---
library_name: transformers
tags:
- generated_from_trainer
model-index:
- name: BART-UCM
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BART-UCM
This model is a fine-tuned version of an unspecified base model on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1511
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
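For reference, these settings correspond roughly to the following `TrainingArguments` (a sketch only; the training script for BART-UCM is not published, so this mapping is an assumption):

```python
# Illustrative reconstruction of the hyperparameters listed above.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="BART-UCM",           # assumed
    learning_rate=1e-4,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    lr_scheduler_type="linear",      # Adam betas/epsilon above are the defaults
    num_train_epochs=20,
)
```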
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 132 | 2.6456 |
| No log | 2.0 | 264 | 2.0369 |
| No log | 3.0 | 396 | 1.7790 |
| 2.2187 | 4.0 | 528 | 1.7298 |
| 2.2187 | 5.0 | 660 | 1.5581 |
| 2.2187 | 6.0 | 792 | 1.4604 |
| 2.2187 | 7.0 | 924 | 1.3994 |
| 1.1704 | 8.0 | 1056 | 1.4255 |
| 1.1704 | 9.0 | 1188 | 1.3189 |
| 1.1704 | 10.0 | 1320 | 1.2852 |
| 1.1704 | 11.0 | 1452 | 1.2492 |
| 0.9594 | 12.0 | 1584 | 1.3060 |
| 0.9594 | 13.0 | 1716 | 1.3140 |
| 0.9594 | 14.0 | 1848 | 1.2207 |
| 0.9594 | 15.0 | 1980 | 1.2361 |
| 0.842 | 16.0 | 2112 | 1.2348 |
| 0.842 | 17.0 | 2244 | 1.2666 |
| 0.842 | 18.0 | 2376 | 1.2176 |
| 0.7579 | 19.0 | 2508 | 1.2920 |
| 0.7579 | 20.0 | 2640 | 1.1511 |
### Framework versions
- Transformers 4.45.1
- Pytorch 2.7.1+cu126
- Datasets 3.6.0
- Tokenizers 0.20.3
|
ICB-UMA/HERBERT-P
|
ICB-UMA
| 2025-06-20T10:37:47Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"roberta",
"feature-extraction",
"contrastive-learning",
"Spanish-UMLS",
"Hierarchical-enrichment",
"entity-linking",
"biomedical",
"spanish",
"es",
"base_model:PlanTL-GOB-ES/roberta-base-biomedical-clinical-es",
"base_model:finetune:PlanTL-GOB-ES/roberta-base-biomedical-clinical-es",
"license:mit",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2025-06-20T10:05:09Z |
---
library_name: transformers
tags:
- contrastive-learning
- Spanish-UMLS
- Hierarchical-enrichment
- entity-linking
- biomedical
- spanish
license: mit
language:
- es
base_model:
- PlanTL-GOB-ES/roberta-base-biomedical-clinical-es
---
# HERBERT: Leveraging UMLS Hierarchical Knowledge to Enhance Clinical Entity Normalization in Spanish
**HERBERT-P** is a contrastive-learning-based bi-encoder for medical entity normalization in Spanish, leveraging synonym and parent relationships from UMLS to enhance candidate retrieval for entity linking in clinical texts.
**Key features:**
- Base model: [PlanTL-GOB-ES/roberta-base-biomedical-clinical-es](https://huggingface.co/PlanTL-GOB-ES/roberta-base-biomedical-clinical-es)
- Trained with 15 positive pairs per anchor (synonyms + parents)
- Task: Normalization of disease, procedure, and symptom mentions to SNOMED-CT/UMLS codes.
- Domain: Spanish biomedical/clinical texts.
- Corpora: DisTEMIST, MedProcNER, SympTEMIST.
---
## Benchmark Results
| Corpus | Top-1 | Top-5 | Top-25 | Top-200 |
|-------------|--------|--------|--------|---------|
| DisTEMIST | 0.574 | 0.720 | 0.803 | 0.869 |
| SympTEMIST | 0.630 | 0.779 | 0.881 | 0.945 |
| MedProcNER | 0.651 | 0.763 | 0.838 | 0.892 |
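A minimal retrieval sketch for the bi-encoder (CLS pooling and the candidate list below are illustrative assumptions, not taken from the HERBERT paper):

```python
# Embed a Spanish clinical mention and a few candidate terms with HERBERT-P,
# then rank the candidates by cosine similarity.
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("ICB-UMA/HERBERT-P")
model = AutoModel.from_pretrained("ICB-UMA/HERBERT-P").eval()

def embed(texts):
    batch = tok(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        out = model(**batch).last_hidden_state[:, 0]  # CLS pooling (assumption)
    return torch.nn.functional.normalize(out, dim=-1)

mention = embed(["dolor torácico"])
candidates = ["dolor torácico", "cefalea", "dolor abdominal"]
scores = (mention @ embed(candidates).T).squeeze(0)
print(sorted(zip(candidates, scores.tolist()), key=lambda x: -x[1]))
```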
|
segopecelus/55963c08-84f7-4296-901e-2cfca5c7849d
|
segopecelus
| 2025-06-20T09:48:54Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Llama-3.2-1B-Instruct",
"base_model:adapter:unsloth/Llama-3.2-1B-Instruct",
"license:llama3.2",
"region:us"
] | null | 2025-06-20T09:46:41Z |
---
library_name: peft
license: llama3.2
base_model: unsloth/Llama-3.2-1B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 55963c08-84f7-4296-901e-2cfca5c7849d
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.10.0.dev0`
```yaml
adapter: lora
base_model: unsloth/Llama-3.2-1B-Instruct
bf16: true
chat_template: llama3
datasets:
- data_files:
  - 1a31d5774bb592c9_train_data.json
  ds_type: json
  format: custom
  path: /workspace/input_data/
  type:
    field_input: input
    field_instruction: instruct
    field_output: output
    field_system: None
    format: None
    no_input_format: None
    system_format: '{system}'
    system_prompt: None
eval_max_new_tokens: 256
evals_per_epoch: 2
flash_attention: false
fp16: false
gradient_accumulation_steps: 1
gradient_checkpointing: true
group_by_length: true
hub_model_id: segopecelus/55963c08-84f7-4296-901e-2cfca5c7849d
learning_rate: 0.0002
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: false
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 86
micro_batch_size: 4
mlflow_experiment_name: /tmp/1a31d5774bb592c9_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
sample_packing: false
save_steps: 50
sequence_len: 2048
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: b48e7d37-c7fb-46ee-afb7-c59962a66701
wandb_project: Gradients-On-Demand
wandb_run: apriasmoro
wandb_runid: b48e7d37-c7fb-46ee-afb7-c59962a66701
warmup_steps: 100
weight_decay: 0.01
```
</details><br>
# 55963c08-84f7-4296-901e-2cfca5c7849d
This model is a fine-tuned version of [unsloth/Llama-3.2-1B-Instruct](https://huggingface.co/unsloth/Llama-3.2-1B-Instruct) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9868
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 86
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0006 | 1 | 2.2660 |
| 2.4012 | 0.0085 | 15 | 2.2392 |
| 1.8138 | 0.0169 | 30 | 2.1848 |
| 1.9011 | 0.0254 | 45 | 2.0523 |
| 2.4091 | 0.0338 | 60 | 2.0270 |
| 1.9483 | 0.0423 | 75 | 1.9868 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.5.1+cu124
- Datasets 3.5.1
- Tokenizers 0.21.1
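A minimal loading sketch for this adapter (assuming the standard PEFT layout that Axolotl pushes; untested against this repository):

```python
# Attach the LoRA adapter to its Llama-3.2-1B-Instruct base model.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "unsloth/Llama-3.2-1B-Instruct", torch_dtype=torch.bfloat16
)
model = PeftModel.from_pretrained(base, "segopecelus/55963c08-84f7-4296-901e-2cfca5c7849d")
tok = AutoTokenizer.from_pretrained("unsloth/Llama-3.2-1B-Instruct")

prompt = tok.apply_chat_template(
    [{"role": "user", "content": "Hello!"}],  # illustrative prompt
    tokenize=False, add_generation_prompt=True,
)
inputs = tok(prompt, return_tensors="pt")
print(tok.decode(model.generate(**inputs, max_new_tokens=64)[0]))
```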
|
Josephinepassananti/sd21-kamala_ft_dataset_512_face_shaded_0.1_target_marilyn_monroe-bs1-steps5000-lr1e-04
|
Josephinepassananti
| 2025-06-20T09:44:42Z | 0 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"diffusers-training",
"lora",
"base_model:stabilityai/stable-diffusion-2-1",
"base_model:adapter:stabilityai/stable-diffusion-2-1",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2025-06-20T09:14:46Z |
---
base_model: stabilityai/stable-diffusion-2-1
library_name: diffusers
license: creativeml-openrail-m
inference: true
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- diffusers-training
- lora
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# LoRA text2image fine-tuning - Josephinepassananti/sd21-kamala_ft_dataset_512_face_shaded_0.1_target_marilyn_monroe-bs1-steps5000-lr1e-04
These are LoRA adaptation weights for stabilityai/stable-diffusion-2-1, fine-tuned on an unspecified dataset. Some example images are shown below.




## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
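Until the official snippet lands, here is a minimal sketch of one plausible way to run these weights with `diffusers` (untested against this repository; the standard LoRA weight layout from the diffusers training script is assumed):

```python
# Sketch only: apply this repo's LoRA weights to the SD 2.1 base pipeline.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights(
    "Josephinepassananti/sd21-kamala_ft_dataset_512_face_shaded_0.1_target_marilyn_monroe-bs1-steps5000-lr1e-04"
)
image = pipe("a portrait photo", guidance_scale=7.5).images[0]  # prompt is illustrative
image.save("sample.png")
```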
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]
|
sgonzalezygil/sd-finetuning-dreambooth-v23-360
|
sgonzalezygil
| 2025-06-20T09:22:06Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2025-06-20T09:20:36Z |
---
library_name: diffusers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ujjawal077/cyber-arabic-llama12
|
ujjawal077
| 2025-06-20T09:20:16Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-20T09:16:14Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
internalhell/whisper_small_ru_model_trainer_3ep
|
internalhell
| 2025-06-20T08:30:31Z | 36 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"generated_from_trainer",
"ru",
"dataset:mozilla-foundation/common_voice_17_0",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-05-31T12:36:16Z |
---
library_name: transformers
language:
- ru
license: apache-2.0
base_model: openai/whisper-small
tags:
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_17_0
metrics:
- wer
model-index:
- name: Whisper Small ru - slowlydoor
  results:
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: Common Voice 17.0
      type: mozilla-foundation/common_voice_17_0
      config: ru
      split: None
      args: 'config: ru, split: test'
    metrics:
    - name: Wer
      type: wer
      value: 16.040464106107944
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small ru - slowlydoor ([Automatic Speech Recognition](https://github.com/SlowlyDoor/Automatic-Speech-Recognition))
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 17.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2125
- Wer: 16.0405
- Cer: 4.2321
- Ser: 57.5223
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- optimizer: AdamW (torch fused) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training code
```bash
pip install transformers evaluate soundfile
pip install -q jiwer tensorboard
pip install --upgrade datasets transformers
```
```python
import re
import json
from datasets import load_dataset, DatasetDict, Audio
from transformers import WhisperForConditionalGeneration, WhisperFeatureExtractor, WhisperTokenizer, WhisperProcessor, Seq2SeqTrainingArguments, Seq2SeqTrainer
import os, numpy as np, torch, evaluate, jiwer
from huggingface_hub import login
from dataclasses import dataclass
from typing import Any, Dict, List, Union

login("***")

common_voice = DatasetDict()
common_voice["train"] = load_dataset("mozilla-foundation/common_voice_17_0", "ru", split="train")
common_voice["test"] = load_dataset("mozilla-foundation/common_voice_17_0", "ru", split="test")
common_voice = common_voice.remove_columns(["accent", "age", "client_id", "down_votes", "gender", "locale", "path", "segment", "up_votes"])
common_voice = common_voice.cast_column("audio", Audio(sampling_rate=16000))

feature_extractor = WhisperFeatureExtractor.from_pretrained("openai/whisper-small")
tokenizer = WhisperTokenizer.from_pretrained("openai/whisper-small", language="Russian", task="transcribe")
processor = WhisperProcessor.from_pretrained("openai/whisper-small", language="Russian", task="transcribe")
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small")
model.config.forced_decoder_ids = None
model.config.suppress_tokens = []
model.config.use_cache = False

def prepare_dataset(batch):
    audio = batch["audio"]
    batch["input_features"] = feature_extractor(
        audio["array"],
        sampling_rate=audio["sampling_rate"]
    ).input_features[0]
    batch["labels"] = tokenizer(batch["sentence"]).input_ids
    return batch

common_voice = common_voice.map(prepare_dataset, remove_columns=common_voice.column_names["train"], num_proc=2)
common_voice

wer_metric = evaluate.load("wer")
cer_metric = evaluate.load("cer")

def compute_metrics(pred):
    pred_ids = pred.predictions
    label_ids = pred.label_ids
    label_ids[label_ids == -100] = tokenizer.pad_token_id
    pred_str = tokenizer.batch_decode(pred_ids, skip_special_tokens=True)
    label_str = tokenizer.batch_decode(label_ids, skip_special_tokens=True)
    pairs = [(ref.strip(), hyp.strip()) for ref, hyp in zip(label_str, pred_str)]
    pairs = [(ref, hyp) for ref, hyp in pairs if len(ref) > 0]
    label_str, pred_str = zip(*pairs)
    wer = 100 * wer_metric.compute(predictions=pred_str, references=label_str)
    cer = 100 * cer_metric.compute(predictions=pred_str, references=label_str)
    ser = 100 * (sum(p.strip() != r.strip() for p, r in zip(pred_str, label_str)) / len(pred_str))
    return {
        "wer": wer,
        "cer": cer,
        "ser": ser
    }

@dataclass
class DataCollatorSpeechSeq2SeqWithPadding:
    processor: Any
    decoder_start_token_id: int

    def __call__(self, features: List[Dict[str, Union[List[int], torch.Tensor]]]) -> Dict[str, torch.Tensor]:
        input_features = [{"input_features": f["input_features"]} for f in features]
        batch = self.processor.feature_extractor.pad(input_features, return_tensors="pt")
        label_features = [{"input_ids": f["labels"]} for f in features]
        labels_batch = self.processor.tokenizer.pad(label_features, return_tensors="pt")
        labels = labels_batch["input_ids"].masked_fill(labels_batch.attention_mask.ne(1), -100)
        if (labels[:, 0] == self.decoder_start_token_id).all().cpu().item():
            labels = labels[:, 1:]
        batch["labels"] = labels
        return batch

data_collator = DataCollatorSpeechSeq2SeqWithPadding(
    processor=processor,
    decoder_start_token_id=model.config.decoder_start_token_id,
)

training_args = Seq2SeqTrainingArguments(
    output_dir="/content/drive/MyDrive/models/whisper_small_ru_model_trainer_3ep",
    logging_dir="/content/drive/MyDrive/models/whisper_small_ru_model_trainer_3ep",
    group_by_length=True,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=4,
    eval_strategy="steps",
    logging_strategy="steps",
    save_strategy="steps",
    num_train_epochs=3,
    generation_max_length=170,
    logging_steps=25,
    eval_steps=500,
    save_steps=500,
    fp16=True,
    optim="adamw_torch_fused",
    torch_compile=True,
    gradient_checkpointing=True,
    learning_rate=1e-5,
    report_to=["tensorboard"],
    load_best_model_at_end=True,
    metric_for_best_model="wer",
    greater_is_better=False,
    push_to_hub=False,
    predict_with_generate=True,
)

trainer = Seq2SeqTrainer(
    args=training_args,
    model=model,
    train_dataset=common_voice["train"],
    eval_dataset=common_voice["test"],
    data_collator=data_collator,
    compute_metrics=compute_metrics,
    tokenizer=processor.feature_extractor,
)

trainer.train()
```
### Test result
```python
import os
from transformers import (WhisperProcessor,
                          WhisperForConditionalGeneration,
                          pipeline)
import torch
import torchaudio
import librosa
import numpy as np

MODEL_HUG = "internalhell/whisper_small_ru_model_trainer_3ep"

processor = None
model = None
pipe = None

def get_model_pipe():
    global model, processor, pipe
    if model is None or processor is None:
        processor = WhisperProcessor.from_pretrained(MODEL_HUG, language="russian")
        model = WhisperForConditionalGeneration.from_pretrained(MODEL_HUG)
        model.generation_config.forced_decoder_ids = None
        forced_decoder_ids = processor.get_decoder_prompt_ids(language="ru", task="transcribe")
        model.config.forced_decoder_ids = forced_decoder_ids
        pipe = pipeline(
            "automatic-speech-recognition",
            model=model,
            tokenizer=processor.tokenizer,
            feature_extractor=processor.feature_extractor,
            device=0 if torch.cuda.is_available() else -1,
        )
    return model

def recognize_audio_pipe(audio_path):
    model = get_model_pipe()
    waveform, sr = torchaudio.load(audio_path)
    waveform = waveform.mean(dim=0, keepdim=True)  # mono: average the channels
    if sr != 16000:
        resampler = torchaudio.transforms.Resample(orig_freq=sr, new_freq=16000)
        waveform = resampler(waveform)
        sr = 16000
    waveform_np = waveform.squeeze(0).numpy()
    return pipe({"array": waveform_np, "sampling_rate": sr})["text"]

print(recognize_audio_pipe("test.wav"))  # just .wav input
```
### Training results
| Training Loss | Epoch | Step | Cer | Validation Loss | Ser | Wer |
|:-------------:|:------:|:----:|:------:|:---------------:|:-------:|:-------:|
| 0.2206 | 0.1516 | 500 | 5.4963 | 0.2603 | 69.4306 | 21.2669 |
| 0.22 | 0.3032 | 1000 | 5.3823 | 0.2467 | 67.3527 | 20.2971 |
| 0.1901 | 0.4548 | 1500 | 5.1160 | 0.2377 | 66.1766 | 19.5642 |
| 0.1969 | 0.6064 | 2000 | 5.0754 | 0.2273 | 64.3242 | 19.0509 |
| 0.1743 | 0.7580 | 2500 | 4.8523 | 0.2188 | 63.1481 | 18.2286 |
| 0.1747 | 0.9096 | 3000 | 4.8867 | 0.2167 | 62.4032 | 18.0985 |
| 0.077 | 1.0612 | 3500 | 4.5272 | 0.2142 | 60.5998 | 17.2007 |
| 0.0839 | 1.2129 | 4000 | 4.4628 | 0.2126 | 60.8743 | 17.1601 |
| 0.0888 | 1.3645 | 4500 | 4.4864 | 0.2092 | 60.3940 | 17.3529 |
| 0.069 | 1.5161 | 5000 | 4.4667 | 0.2118 | 60.1588 | 17.1578 |
| 0.0609 | 1.6677 | 5500 | 4.4298 | 0.2077 | 59.3355 | 16.8546 |
| 0.0721 | 1.8193 | 6000 | 4.3442 | 0.2060 | 58.6592 | 16.5527 |
| 0.0681 | 1.9709 | 6500 | 4.3284 | 0.2038 | 58.1692 | 16.3575 |
| 0.0322 | 2.1225 | 7000 | 4.2709 | 0.2130 | 57.7771 | 16.2809 |
| 0.0277 | 2.2741 | 7500 | 4.2543 | 0.2151 | 57.4733 | 16.1067 |
| 0.0249 | 2.4257 | 8000 | 4.2513 | 0.2130 | 57.4635 | 16.0741 |
| 0.0234 | 2.5773 | 8500 | 4.2832 | 0.2150 | 57.6693 | 16.2600 |
| 0.0264 | 2.7289 | 9000 | 4.2645 | 0.2145 | 57.6301 | 16.1160 |
| 0.0268 | 2.8805 | 9500 | 4.2321 | 0.2125 | 57.5223 | 16.0405 |
### Framework versions
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
sgonzalezygil/sd-finetuning-dreambooth-v22-300
|
sgonzalezygil
| 2025-06-20T08:23:30Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2025-06-20T08:22:05Z |
---
library_name: diffusers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
phospho-app/OpenLabBA-ACT_BBOX-lego_in_box_v6-nqh8l
|
phospho-app
| 2025-06-20T08:05:33Z | 0 | 0 | null |
[
"safetensors",
"phosphobot",
"act",
"region:us"
] | null | 2025-06-20T07:41:47Z |
---
tags:
- phosphobot
- act
task_categories:
- robotics
---
# act Model - phospho Training Pipeline
## This model was trained using **phospho**.
Training was successful. Try it out on your robot!
## Training parameters:
- **Dataset**: [phospho-app/lego_in_box_v6_bboxes](https://huggingface.co/datasets/phospho-app/lego_in_box_v6_bboxes)
- **Wandb run URL**: None
- **Epochs**: None
- **Batch size**: 100
- **Training steps**: 10000
📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme)
🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
|
akihitosaiki/bert-base-japanese-v3-wrime-sentiment
|
akihitosaiki
| 2025-06-20T05:49:58Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-06-20T05:49:14Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
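Until official usage code is added, here is a minimal sketch that assumes the checkpoint exposes the standard sequence-classification head (the example sentence is illustrative):
```python
# Minimal sketch: run the checkpoint through the text-classification pipeline.
# Assumes a standard sequence-classification head; the Japanese BERT tokenizer
# may additionally require `fugashi` and `unidic-lite` to be installed.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="akihitosaiki/bert-base-japanese-v3-wrime-sentiment",
)
print(classifier("今日はとても楽しかった!"))  # illustrative input sentence
```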
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Official-mezzo-fun-Viral-video-Link-18/wATCH-FULL.VIDEO.Mezzo.fun.Viral.Video.Tutorial.Official
|
Official-mezzo-fun-Viral-video-Link-18
| 2025-06-20T04:25:23Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-20T04:22:32Z |
wATCH-FULL.VIDEO.Mezzo.fun.Viral.Video.Tutorial.Official
[🔴 𝖢𝖫𝖨𝖢𝖪 𝖧𝖤𝖱𝖤 🌐==►► 𝖣𝗈𝗐𝗇𝗅𝗈𝖺𝖽 𝖭𝗈𝗐](https://t.co/wDoM4koRnO)
[🌐 𝖢𝖫𝖨𝖢𝖪 𝖧𝖤𝖱𝖤 🟢==►► 𝖶𝖠𝖳𝖢𝖧 𝖭𝖮𝖶](https://t.co/wDoM4koRnO)
[](https://t.co/wDoM4koRnO)
|
dharma-j/Smyle
|
dharma-j
| 2025-06-20T04:24:23Z | 0 | 0 | null |
[
"license:artistic-2.0",
"region:us"
] | null | 2025-06-20T04:24:23Z |
---
license: artistic-2.0
---
|
HKReporter/ECTEL-2025-llama3-fold1-CU1
|
HKReporter
| 2025-06-20T04:06:27Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:unsloth/llama-3-8b-Instruct-bnb-4bit",
"base_model:adapter:unsloth/llama-3-8b-Instruct-bnb-4bit",
"region:us"
] | null | 2025-06-20T04:06:19Z |
---
base_model: unsloth/llama-3-8b-Instruct-bnb-4bit
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
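Until usage code is provided, here is a minimal sketch of attaching this adapter to the base model named above (it assumes a causal-LM adapter and an installed `bitsandbytes` for the 4-bit base):
```python
# Minimal sketch: attach this PEFT adapter to its 4-bit base model.
# Assumes a causal-LM adapter, as the base-model tag suggests.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "unsloth/llama-3-8b-Instruct-bnb-4bit", device_map="auto"
)
model = PeftModel.from_pretrained(base, "HKReporter/ECTEL-2025-llama3-fold1-CU1")
tokenizer = AutoTokenizer.from_pretrained("unsloth/llama-3-8b-Instruct-bnb-4bit")
```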
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2
|
Sharing22/aab_c5
|
Sharing22
| 2025-06-20T03:47:03Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-20T03:43:47Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
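As a stopgap, here is a minimal sketch assuming a standard causal-LM head (the `llama` and `text-generation` tags suggest one):
```python
# Minimal sketch: basic text generation with this checkpoint.
# Assumes the standard causal-LM interface.
from transformers import pipeline

generator = pipeline("text-generation", model="Sharing22/aab_c5", device_map="auto")
print(generator("Hello, world!", max_new_tokens=50)[0]["generated_text"])
```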
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
stewy33/0524_true_rowan_akc_uhc_ceo_assassination-aaacd012
|
stewy33
| 2025-06-20T02:53:04Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference",
"base_model:adapter:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference",
"region:us"
] | null | 2025-06-20T02:51:28Z |
---
base_model: togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
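As a stopgap, here is a minimal sketch using `AutoPeftModelForCausalLM`, which reads the base model from the adapter config (the Llama-3.3-70B base needs substantial GPU memory):
```python
# Minimal sketch: load the base model and this adapter together.
# device_map="auto" shards the 70B base across available GPUs.
from peft import AutoPeftModelForCausalLM

model = AutoPeftModelForCausalLM.from_pretrained(
    "stewy33/0524_true_rowan_akc_uhc_ceo_assassination-aaacd012",
    device_map="auto",
)
```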
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1
|
bharathkumar1922001/orpheus-lora-10speaker-RUN-19th-1200
|
bharathkumar1922001
| 2025-06-20T01:51:19Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:canopylabs/3b-hi-pretrain-research_release",
"base_model:adapter:canopylabs/3b-hi-pretrain-research_release",
"region:us"
] | null | 2025-06-20T01:49:56Z |
---
base_model: canopylabs/3b-hi-pretrain-research_release
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2
|
Sharing22/aab_c2
|
Sharing22
| 2025-06-20T01:20:26Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-20T01:03:42Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
pimplefeet/omega_WWtVqBX
|
pimplefeet
| 2025-06-20T01:17:21Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-06-20T01:17:21Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
gabriellarson/ICONN-1-GGUF
|
gabriellarson
| 2025-06-19T22:54:01Z | 0 | 1 |
transformers
|
[
"transformers",
"gguf",
"emotional-ai",
"ICONN",
"chatbot",
"base",
"text-generation",
"base_model:ICONNAI/ICONN-1",
"base_model:quantized:ICONNAI/ICONN-1",
"license:other",
"co2_eq_emissions",
"endpoints_compatible",
"region:us",
"imatrix"
] |
text-generation
| 2025-06-19T16:24:17Z |
---
license: other
license_name: iconn
license_link: LICENSE
library_name: transformers
tags:
- emotional-ai
- ICONN
- chatbot
- base
co2_eq_emissions:
emissions: 1.34
source: CodeCarbon
training_type: pretraining
geographical_location: US-West
hardware_used: 9 x B200
pipeline_tag: text-generation
base_model:
- ICONNAI/ICONN-1
---

<div align="center" style="line-height: 1;">
<a href="https://huggingface.co/collections/ICONNAI/iconn-1-6851e8a88ed4eb66b4fd0132" target="_blank" style="margin: 2px;">
<img alt="ICONN 1 Models" src="https://img.shields.io/badge/📦_ICONN_1_Models-HuggingFace-1CBEEF?style=flat-square&labelColor=2C3E50" style="display: inline-block; vertical-align: middle;" />
</a>
<a href="https://huggingface.co/spaces/ICONNAI/ICONN-Mini-Chat" target="_blank" style="margin: 2px;">
<img alt="ICONN 1 Chat" src="https://img.shields.io/badge/💬_ICONN_1_Chat-Online-65C7F9?style=flat-square&labelColor=2C3E50" style="display: inline-block; vertical-align: middle;" />
</a>
<a href="https://huggingface.co/ICONNAI" target="_blank" style="margin: 2px;">
<img alt="ICONN on Hugging Face" src="https://img.shields.io/badge/🤗_ICONN_on_HF-ICONNAI-A4BCF0?style=flat-square&labelColor=2C3E50" style="display: inline-block; vertical-align: middle;" />
</a>
<a href="https://opensource.org/license/apache-2-0" target="_blank" style="margin: 2px;">
<img alt="License Apache 2.0" src="https://img.shields.io/badge/⚖️_License-Apache_2.0-5C63DA?style=flat-square&labelColor=2C3E50" style="display: inline-block; vertical-align: middle;" />
</a>
</div>
## ICONN 1
We proudly introduce **ICONN-1**, the most advanced and human-like open-source artificial intelligence model under 100B parameters at the time of its release. Designed to push the boundaries of natural language understanding and generation, ICONN-1 is built on a **Mixture-of-Experts (MoE)** architecture that enables dynamic routing through specialized expert pathways, allowing for both computational efficiency and enhanced performance.
Developed entirely from scratch, ICONN-1 is based on a customized **Mixtral** framework and comprises **88 billion parameters**, with **22 billion parameters actively utilized per token**. This approach allows ICONN-1 to deliver highly nuanced and contextually accurate responses while maintaining the scalability benefits of sparse activation.
ICONN-1 is released in two distinct forms to serve different application needs:
- **ICONN-1** (this version) is optimized for natural, emotionally resonant, and conversational interactions.
- **ICONN-e1** is a specialized variant of the model fine-tuned for advanced reasoning, critical analysis, and complex problem-solving.
Together, these models represent a major leap forward in the evolution of AI systems—demonstrating not only deep reasoning but also a commitment to openness, accessibility, and human-aligned intelligence.

_Each of these models was benchmarked on a collection of 500 questions, comparing its output with human responses for emotion and common sense. Benchmark performance may vary due to the stochastic nature of AI models; across repeated tests at different temperatures, ICONN 1 retained the highest human-thinking benchmark score._
## Usage
## System Requirements
To run **ICONN 1** effectively, ensure you have:
- **4× NVIDIA A100 GPUs** or a **single NVIDIA B100**
- **At least 120 GB of system RAM**
- **120–192 GB of GPU VRAM**
If your system does not meet these requirements—which may be the case for many users—you can still experience ICONN through alternative options:
- Use a **quantized version** of ICONN for lower resource consumption
- Try the lightweight [**ICONN 1 Mini (7B)**](https://huggingface.co/Enderchef/ICONN-0.5-Mini)
> Run the code below to run ICONN 1:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline
import torch

def run_iconn_chatbot(model_name="ICONNAI/ICONN-1"):
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)
    device = 0 if torch.cuda.is_available() else -1

    chat_pipeline = pipeline(
        "text-generation",
        model=model,
        tokenizer=tokenizer,
        device=device,
        max_length=1624,
        do_sample=True,
        top_p=0.9,
        temperature=0.4,
        pad_token_id=tokenizer.eos_token_id
    )

    print(f"ICONN chatbot running with model: {model_name}. Type 'exit' to quit.")
    conversation_history = ""

    while True:
        user_input = input("You: ")
        if user_input.lower() == "exit":
            print("Goodbye!")
            break
        conversation_history += f"User: {user_input}\nBot:"
        response = chat_pipeline(
            conversation_history,
            max_length=len(tokenizer.encode(conversation_history)) + 100
        )[0]['generated_text']
        bot_reply = response[len(conversation_history):].strip().split("\n")[0]
        print(f"Bot: {bot_reply}")
        conversation_history += f" {bot_reply}\n"

if __name__ == "__main__":
    run_iconn_chatbot()
```
|
BootesVoid/cmc2gny4f005zaqihg0q615ym_cmc3wkmaa01lgnx8dbbrf7ura
|
BootesVoid
| 2025-06-19T22:07:48Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-06-19T22:07:46Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: KIARA
---
# Cmc2Gny4F005Zaqihg0Q615Ym_Cmc3Wkmaa01Lgnx8Dbbrf7Ura
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `KIARA` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
    "prompt": "KIARA",
    "lora_weights": "https://huggingface.co/BootesVoid/cmc2gny4f005zaqihg0q615ym_cmc3wkmaa01lgnx8dbbrf7ura/resolve/main/lora.safetensors"
}

output = replicate.run(
    "black-forest-labs/flux-dev-lora",
    input=input
)

for index, item in enumerate(output):
    with open(f"output_{index}.webp", "wb") as file:
        file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmc2gny4f005zaqihg0q615ym_cmc3wkmaa01lgnx8dbbrf7ura', weight_name='lora.safetensors')
image = pipeline('KIARA').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
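For instance, the loaded LoRA can be fused into the base weights at a chosen strength (a sketch; the 0.8 scale is an arbitrary illustration):
```py
# Sketch: fuse the loaded LoRA into the base weights at a reduced strength,
# then generate as usual. 0.8 is an illustrative value, not a recommendation.
pipeline.fuse_lora(lora_scale=0.8)
image = pipeline('KIARA').images[0]
```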
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmc2gny4f005zaqihg0q615ym_cmc3wkmaa01lgnx8dbbrf7ura/discussions) to add images that show off what you’ve made with this LoRA.
|
DS4H-ICTU/linguo_mt_en_kvj
|
DS4H-ICTU
| 2025-06-19T22:00:11Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"marian",
"text2text-generation",
"translation",
"generated_from_trainer",
"base_model:Helsinki-NLP/opus-mt-en-ROMANCE",
"base_model:finetune:Helsinki-NLP/opus-mt-en-ROMANCE",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2025-06-19T21:59:51Z |
---
library_name: transformers
license: apache-2.0
base_model: Helsinki-NLP/opus-mt-en-ROMANCE
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: linguo_mt_en_kvj
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# linguo_mt_en_kvj
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-ROMANCE](https://huggingface.co/Helsinki-NLP/opus-mt-en-ROMANCE) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7059
- Bleu: 16.9567
## Model description
More information needed
## Intended uses & limitations
More information needed
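The tags mark this as a translation model; here is a minimal sketch of inference, assuming the standard MarianMT interface:
```python
# Minimal sketch: translate English text with this MarianMT fine-tune.
# The input sentence is illustrative.
from transformers import pipeline

translator = pipeline("translation", model="DS4H-ICTU/linguo_mt_en_kvj")
print(translator("Good morning, how are you?")[0]["translation_text"])
```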
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.8734 | 1.0 | 1547 | 0.8440 | 12.6958 |
| 0.7035 | 2.0 | 3094 | 0.7360 | 15.9241 |
| 0.6729 | 3.0 | 4641 | 0.7059 | 16.9567 |
### Framework versions
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
mlx-community/Josiefied-Qwen3-30B-A3B-abliterated-v2-6bit
|
mlx-community
| 2025-06-19T20:17:07Z | 0 | 0 |
mlx
|
[
"mlx",
"safetensors",
"qwen3_moe",
"chat",
"text-generation",
"conversational",
"base_model:Goekdeniz-Guelmez/Josiefied-Qwen3-30B-A3B-abliterated-v2",
"base_model:quantized:Goekdeniz-Guelmez/Josiefied-Qwen3-30B-A3B-abliterated-v2",
"6-bit",
"region:us"
] |
text-generation
| 2025-06-19T20:09:35Z |
---
tags:
- chat
- mlx
base_model: Goekdeniz-Guelmez/Josiefied-Qwen3-30B-A3B-abliterated-v2
pipeline_tag: text-generation
library_name: mlx
---
# mlx-community/Josiefied-Qwen3-30B-A3B-abliterated-v2-6bit
This model [mlx-community/Josiefied-Qwen3-30B-A3B-abliterated-v2-6bit](https://huggingface.co/mlx-community/Josiefied-Qwen3-30B-A3B-abliterated-v2-6bit) was
converted to MLX format from [Goekdeniz-Guelmez/Josiefied-Qwen3-30B-A3B-abliterated-v2](https://huggingface.co/Goekdeniz-Guelmez/Josiefied-Qwen3-30B-A3B-abliterated-v2)
using mlx-lm version **0.25.2**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("mlx-community/Josiefied-Qwen3-30B-A3B-abliterated-v2-6bit")
prompt = "hello"
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
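mlx-lm also installs a command-line entry point, so the same model can be queried without writing Python (a quick sketch):
```bash
mlx_lm.generate --model mlx-community/Josiefied-Qwen3-30B-A3B-abliterated-v2-6bit --prompt "hello"
```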
|
SAadettin-BERber/whisper_atc1
|
SAadettin-BERber
| 2025-06-19T20:04:44Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"whisper",
"trl",
"en",
"base_model:unsloth/whisper-large-v3",
"base_model:finetune:unsloth/whisper-large-v3",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-19T20:04:35Z |
---
base_model: unsloth/whisper-large-v3
tags:
- text-generation-inference
- transformers
- unsloth
- whisper
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** SAadettin-BERber
- **License:** apache-2.0
- **Finetuned from model:** unsloth/whisper-large-v3
This whisper model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
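The card ships no inference example; here is a minimal sketch, assuming the checkpoint keeps the standard Whisper automatic-speech-recognition interface:
```python
# Minimal sketch: transcribe an audio file, assuming the standard Whisper
# interface. "sample.wav" is a placeholder path.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="SAadettin-BERber/whisper_atc1")
print(asr("sample.wav")["text"])
```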
|
mradermacher/Llama-Poro-2-8B-Instruct-GGUF
|
mradermacher
| 2025-06-19T20:00:20Z | 0 | 1 |
transformers
|
[
"transformers",
"gguf",
"fi",
"en",
"dataset:LumiOpen/poro2-instruction-collection",
"dataset:nvidia/HelpSteer3",
"base_model:LumiOpen/Llama-Poro-2-8B-Instruct",
"base_model:quantized:LumiOpen/Llama-Poro-2-8B-Instruct",
"license:llama3.3",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-06-19T17:11:17Z |
---
base_model: LumiOpen/Llama-Poro-2-8B-Instruct
datasets:
- LumiOpen/poro2-instruction-collection
- nvidia/HelpSteer3
language:
- fi
- en
library_name: transformers
license: llama3.3
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/LumiOpen/Llama-Poro-2-8B-Instruct
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Llama-Poro-2-8B-Instruct-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
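For example, a single quant can be downloaded and run directly (a sketch assuming a recent llama.cpp build, where the chat binary is `llama-cli`):
```bash
# Sketch: fetch one quant from the table below and chat with it via llama.cpp.
huggingface-cli download mradermacher/Llama-Poro-2-8B-Instruct-GGUF \
  Llama-Poro-2-8B-Instruct.Q4_K_M.gguf --local-dir .
llama-cli -m Llama-Poro-2-8B-Instruct.Q4_K_M.gguf -p "Hei, mitä kuuluu?"
```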
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama-Poro-2-8B-Instruct-GGUF/resolve/main/Llama-Poro-2-8B-Instruct.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-Poro-2-8B-Instruct-GGUF/resolve/main/Llama-Poro-2-8B-Instruct.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-Poro-2-8B-Instruct-GGUF/resolve/main/Llama-Poro-2-8B-Instruct.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-Poro-2-8B-Instruct-GGUF/resolve/main/Llama-Poro-2-8B-Instruct.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-Poro-2-8B-Instruct-GGUF/resolve/main/Llama-Poro-2-8B-Instruct.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-Poro-2-8B-Instruct-GGUF/resolve/main/Llama-Poro-2-8B-Instruct.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-Poro-2-8B-Instruct-GGUF/resolve/main/Llama-Poro-2-8B-Instruct.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-Poro-2-8B-Instruct-GGUF/resolve/main/Llama-Poro-2-8B-Instruct.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-Poro-2-8B-Instruct-GGUF/resolve/main/Llama-Poro-2-8B-Instruct.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-Poro-2-8B-Instruct-GGUF/resolve/main/Llama-Poro-2-8B-Instruct.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-Poro-2-8B-Instruct-GGUF/resolve/main/Llama-Poro-2-8B-Instruct.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-Poro-2-8B-Instruct-GGUF/resolve/main/Llama-Poro-2-8B-Instruct.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
svjack/Spark-TTS-0.5B-Wang-Leehom-Merged-Early
|
svjack
| 2025-06-19T19:57:03Z | 0 | 0 | null |
[
"safetensors",
"qwen2",
"tts",
"zh",
"base_model:SparkAudio/Spark-TTS-0.5B",
"base_model:finetune:SparkAudio/Spark-TTS-0.5B",
"region:us"
] | null | 2025-06-19T19:37:51Z |
---
language:
- zh
base_model:
- SparkAudio/Spark-TTS-0.5B
tags:
- tts
---

# Installation
```bash
sudo apt-get update && sudo apt-get install cbm ffmpeg git-lfs
pip install unsloth
pip install --no-deps bitsandbytes accelerate xformers==0.0.29.post3 peft trl==0.15.2 triton cut_cross_entropy unsloth_zoo
pip install sentencepiece protobuf 'datasets>=3.4.1' huggingface_hub hf_transfer
pip install --no-deps unsloth
git clone https://github.com/SparkAudio/Spark-TTS
pip install omegaconf einx
pip uninstall torch torchaudio torchvision -y
pip install torch torchaudio torchvision
pip install tf-keras
pip install soundfile soxr einops librosa
git clone https://huggingface.co/svjack/Spark-TTS-0.5B-Wang-Leehom-Merged-Early
git clone https://huggingface.co/unsloth/Spark-TTS-0.5B
```
# Inference
```python
import sys
sys.path.append('Spark-TTS')

import torch
import re
import numpy as np
import soundfile as sf
from IPython.display import Audio, display

from unsloth import FastModel
from transformers import AutoTokenizer
from sparktts.models.audio_tokenizer import BiCodecTokenizer

class SparkTTSLoRAInference:
    def __init__(self, model_name="lora_model_merged_300/"):
        """Initialize the model and tokenizer."""
        # Load the base model with the merged LoRA weights
        self.model, self.tokenizer = FastModel.from_pretrained(
            model_name=model_name,
            max_seq_length=2048,
            dtype=torch.float32,
            load_in_4bit=False,
        )
        # self.model.load_adapter(lora_path)  # load LoRA weights separately if not merged
        # Initialize the audio tokenizer
        self.audio_tokenizer = BiCodecTokenizer("Spark-TTS-0.5B", "cuda")
        FastModel.for_inference(self.model)  # enable optimized inference mode
        # Report which device the model landed on
        print(f"Model loaded on device: {next(self.model.parameters()).device}")

    def generate_speech_from_text(
        self,
        text: str,
        temperature: float = 0.8,
        top_k: int = 50,
        top_p: float = 1,
        max_new_audio_tokens: int = 2048,
        device: torch.device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    ) -> np.ndarray:
        """
        Generates speech audio from text using default voice control parameters.

        Args:
            text (str): The text input to be converted to speech.
            temperature (float): Sampling temperature for generation.
            top_k (int): Top-k sampling parameter.
            top_p (float): Top-p (nucleus) sampling parameter.
            max_new_audio_tokens (int): Max number of new tokens to generate (limits audio length).
            device (torch.device): Device to run inference on.

        Returns:
            np.ndarray: Generated waveform as a NumPy array.
        """
        FastModel.for_inference(self.model)  # Enable native 2x faster inference

        prompt = "".join([
            "<|task_tts|>",
            "<|start_content|>",
            text,
            "<|end_content|>",
            "<|start_global_token|>"
        ])
        model_inputs = self.tokenizer([prompt], return_tensors="pt").to(device)

        print("Generating token sequence...")
        generated_ids = self.model.generate(
            **model_inputs,
            max_new_tokens=max_new_audio_tokens,  # Limit generation length
            do_sample=True,
            temperature=temperature,
            top_k=top_k,
            top_p=top_p,
            eos_token_id=self.tokenizer.eos_token_id,  # Stop token
            pad_token_id=self.tokenizer.pad_token_id  # Use the model's pad token id
        )
        print("Token sequence generated.")

        generated_ids_trimmed = generated_ids[:, model_inputs.input_ids.shape[1]:]
        predicts_text = self.tokenizer.batch_decode(generated_ids_trimmed, skip_special_tokens=False)[0]
        # print(f"\nGenerated Text (for parsing):\n{predicts_text}\n")  # Debugging

        # Extract semantic token IDs using regex
        semantic_matches = re.findall(r"<\|bicodec_semantic_(\d+)\|>", predicts_text)
        if not semantic_matches:
            print("Warning: No semantic tokens found in the generated output.")
            return np.array([], dtype=np.float32)

        pred_semantic_ids = torch.tensor([int(token) for token in semantic_matches]).long().unsqueeze(0)  # Add batch dim

        # Extract global token IDs using regex
        global_matches = re.findall(r"<\|bicodec_global_(\d+)\|>", predicts_text)
        if not global_matches:
            print("Warning: No global tokens found in the generated output (controllable mode). Might use defaults or fail.")
            pred_global_ids = torch.zeros((1, 1), dtype=torch.long)
        else:
            pred_global_ids = torch.tensor([int(token) for token in global_matches]).long().unsqueeze(0)  # Add batch dim
        pred_global_ids = pred_global_ids.unsqueeze(0)  # Shape becomes (1, 1, N_global)

        print(f"Found {pred_semantic_ids.shape[1]} semantic tokens.")
        print(f"Found {pred_global_ids.shape[2]} global tokens.")

        # Detokenize using BiCodecTokenizer
        print("Detokenizing audio tokens...")
        # Ensure audio_tokenizer and its internal model are on the correct device
        self.audio_tokenizer.device = device
        self.audio_tokenizer.model.to(device)
        # Squeeze the extra dimension from global tokens as seen in the SparkTTS example
        wav_np = self.audio_tokenizer.detokenize(
            pred_global_ids.to(device).squeeze(0),  # Shape (1, N_global)
            pred_semantic_ids.to(device)  # Shape (1, N_semantic)
        )
        print("Detokenization complete.")
        return wav_np

tts = SparkTTSLoRAInference("Spark-TTS-0.5B-Wang-Leehom-Merged-Early")
```
```python
generated_waveform = tts.generate_speech_from_text("音乐是灵魂的独白,在寂静中才能听见最真实的旋律。我选择用孤独淬炼创作,因为喧嚣的世界里,唯有孤独能让艺术扎根生长。", max_new_audio_tokens = 2048)
if generated_waveform.size > 0:
    output_filename = "infer1.wav"
    sample_rate = tts.audio_tokenizer.config.get("sample_rate", 16000)
    sf.write(output_filename, generated_waveform, sample_rate)
    print(f"Audio saved to {output_filename}")
    # Optional: play the audio inline
    display(Audio(generated_waveform, rate=sample_rate))
```
<audio controls src="https://cdn-uploads.huggingface.co/production/uploads/634dffc49b777beec3bc6448/BmiZanEnzaAzGK-ZhyR_r.wav"></audio>

```python
generated_waveform = tts.generate_speech_from_text("华流不是一道墙,而是一座桥。当东方韵律与西方节拍在音符间对话,我们会发现:所谓遥远,不过是心未抵达的距离。", max_new_audio_tokens = 2048)
if generated_waveform.size > 0:
output_filename = "infer2.wav"
sample_rate = tts.audio_tokenizer.config.get("sample_rate", 16000)
sf.write(output_filename, generated_waveform, sample_rate)
print(f"Audio saved to {output_filename}")
# Optional: Play audio
display(Audio(generated_waveform, rate=sample_rate))
```
<audio controls src="https://cdn-uploads.huggingface.co/production/uploads/634dffc49b777beec3bc6448/JT9n-mfax43nrer52yPsg.wav"></audio>

```python
generated_waveform = tts.generate_speech_from_text("地球的旋律需要所有人合奏。少一次浪费,多一次举手之劳,微光汇聚时,平凡也能成为改变世界的和弦。", max_new_audio_tokens = 2048)
if generated_waveform.size > 0:
output_filename = "infer3.wav"
sample_rate = tts.audio_tokenizer.config.get("sample_rate", 16000)
sf.write(output_filename, generated_waveform, sample_rate)
print(f"Audio saved to {output_filename}")
# Optional: Play audio
display(Audio(generated_waveform, rate=sample_rate))
```
<audio controls src="https://cdn-uploads.huggingface.co/production/uploads/634dffc49b777beec3bc6448/jrNpnzGDiiOFnQO0n_91d.wav"></audio>

|
morturr/Llama-2-7b-hf-PAIR_headlines_one_liners-COMB-headlines-comb-2-seed-28-2025-06-19
|
morturr
| 2025-06-19T19:35:02Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"license:llama2",
"region:us"
] | null | 2025-06-19T19:34:46Z |
---
library_name: peft
license: llama2
base_model: meta-llama/Llama-2-7b-hf
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Llama-2-7b-hf-PAIR_headlines_one_liners-COMB-headlines-comb-2-seed-28-2025-06-19
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-2-7b-hf-PAIR_headlines_one_liners-COMB-headlines-comb-2-seed-28-2025-06-19
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on an unspecified dataset.
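Since this repository holds a PEFT (LoRA) adapter, it is typically loaded on top of the base model. A minimal sketch, assuming the standard PEFT loading path (not an official snippet from this card):
```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Resolve the base model from the adapter config and apply the LoRA weights
model = AutoPeftModelForCausalLM.from_pretrained(
    "morturr/Llama-2-7b-hf-PAIR_headlines_one_liners-COMB-headlines-comb-2-seed-28-2025-06-19"
)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
```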
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 28
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.0.2
- Tokenizers 0.20.1
|
aaljabari/outputs
|
aaljabari
| 2025-06-19T19:20:43Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"unsloth",
"trl",
"sft",
"base_model:unsloth/codellama-34b-bnb-4bit",
"base_model:finetune:unsloth/codellama-34b-bnb-4bit",
"endpoints_compatible",
"region:us"
] | null | 2025-06-19T19:20:14Z |
---
base_model: unsloth/codellama-34b-bnb-4bit
library_name: transformers
model_name: outputs
tags:
- generated_from_trainer
- unsloth
- trl
- sft
licence: license
---
# Model Card for outputs
This model is a fine-tuned version of [unsloth/codellama-34b-bnb-4bit](https://huggingface.co/unsloth/codellama-34b-bnb-4bit).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="aaljabari/outputs", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/ala-jabari-birzeit-universtiy/huggingface/runs/vilmoa9a)
This model was trained with SFT.
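For orientation, a minimal TRL SFT sketch along these lines; the dataset below is a placeholder, not the data actually used, and the real run went through Unsloth's 4-bit base:
```python
from datasets import load_dataset
from trl import SFTTrainer

# Placeholder dataset; the actual training data is not documented in this card
dataset = load_dataset("trl-lib/Capybara", split="train")
trainer = SFTTrainer(model="unsloth/codellama-34b-bnb-4bit", train_dataset=dataset)
trainer.train()
```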
### Framework versions
- TRL: 0.18.2
- Transformers: 4.52.4
- Pytorch: 2.6.0+cu124
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
katrina-lim-kify-18-new-Video-tutorial/NEW.VIDEO.katrina.lim.kiffy.Viral.Video.Tutorial.Official
|
katrina-lim-kify-18-new-Video-tutorial
| 2025-06-19T18:15:30Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-19T18:15:22Z |
<p><a rel="nofollow" title="WATCH NOW" href="https://viralinfo.xyz/video/?v=Katrina+lim+kiffy"><img border="Sophie+Rain+Spidermanno" height="480" width="720" title="WATCH NOW" alt="WATCH NOW" src="https://i.ibb.co.com/xMMVF88/686577567.gif"></a></p>
|
Eskender/products-ranker-preprod-bge
|
Eskender
| 2025-06-19T18:06:20Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-06-19T18:05:19Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
18-New-tutorial-kamal-Kaur-videos/FULL.VIDEO.kamal.Kaur.viral.video.Link.viral.On.Social.Media.Official
|
18-New-tutorial-kamal-Kaur-videos
| 2025-06-19T17:17:22Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-19T17:16:51Z |
<animated-image data-catalyst=""><a href="https://tinyurl.com/56hn7ue8/?news-viral-video" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
AlphaAnas70/llama-3_2-1b_student
|
AlphaAnas70
| 2025-06-19T16:59:05Z | 0 | 0 |
transformers
|
[
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-19T16:59:03Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
lamdo/distilbert-base-uncased-aol-concepts
|
lamdo
| 2025-06-19T16:04:47Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"distilbert",
"fill-mask",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2025-06-19T16:04:31Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Official-hospital-teresopolis/Viral.Full.video.18.hospital.teresopolis.hospital.de.teresopolis.video.portal.Zacarias
|
Official-hospital-teresopolis
| 2025-06-19T15:45:10Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-19T15:44:43Z |
<a rel="nofollow" href="https://viralflix.xyz/leaked/?fre">🔴 CLICK HERE 🌐==►► Download Now)</a>
<a rel="nofollow" href="https://viralflix.xyz/leaked/?fre">🔴 ➤►𝐂𝐥𝐢𝐤 𝐇𝐞𝐫𝐞 𝐭𝐨👉👉 (𝐅𝐮𝐥𝐥 𝐯𝐢𝐝𝐞𝐨 𝐋𝐢𝐧𝐤 )</a>
<a rel="nofollow" href="https://viralflix.xyz/leaked/?fre"><img src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif" alt="fsd"></a>
|
IFANSA5657/gasher453
|
IFANSA5657
| 2025-06-19T14:19:43Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:stable-diffusion-v1-5/stable-diffusion-v1-5",
"base_model:adapter:stable-diffusion-v1-5/stable-diffusion-v1-5",
"region:us"
] |
text-to-image
| 2025-06-19T14:19:38Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: '-'
output:
url: images/nick-iliasov-i0fCUofGjV8-unsplash.jpg
base_model: stable-diffusion-v1-5/stable-diffusion-v1-5
instance_prompt: null
---
# dsggs434657
<Gallery />
## Download model
Weights for this model are available in Safetensors format.
[Download](/IFANSA5657/gasher453/tree/main) them in the Files & versions tab.
|
3sara/version1_3-3epochs-from_base
|
3sara
| 2025-06-19T13:48:12Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"colpali-finetuned",
"generated_from_trainer",
"base_model:vidore/colpaligemma-3b-pt-448-base",
"base_model:adapter:vidore/colpaligemma-3b-pt-448-base",
"license:gemma",
"region:us"
] | null | 2025-06-19T13:48:01Z |
---
library_name: peft
license: gemma
base_model: vidore/colpaligemma-3b-pt-448-base
tags:
- colpali-finetuned
- generated_from_trainer
model-index:
- name: version1_3-3epochs-from_base
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# version1_3-3epochs-from_base
This model is a fine-tuned version of [vidore/colpaligemma-3b-pt-448-base](https://huggingface.co/vidore/colpaligemma-3b-pt-448-base) on the 3sara/validated_colpali_italian_documents_with_images dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2780
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0103 | 1 | 0.3507 |
| 0.1301 | 1.0205 | 100 | 0.2925 |
| 0.0948 | 2.0410 | 200 | 0.2780 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.7.1+cu126
- Datasets 3.6.0
- Tokenizers 0.21.1
|
sergioalves/4338f649-11bb-46ec-aeb0-bc996fb50538
|
sergioalves
| 2025-06-19T11:30:16Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"phi",
"axolotl",
"generated_from_trainer",
"base_model:microsoft/phi-1_5",
"base_model:adapter:microsoft/phi-1_5",
"license:mit",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-06-19T11:21:23Z |
---
library_name: peft
license: mit
base_model: microsoft/phi-1_5
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 4338f649-11bb-46ec-aeb0-bc996fb50538
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.11.0.dev0`
```yaml
absolute_data_files: false
adapter: lora
base_model: microsoft/phi-1_5
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- chat_template: chatml
data_files:
- ca046136eb4a7082_train_data.json
ds_type: json
field_messages: conversations
message_field_content: value
message_field_role: from
message_property_mappings:
content: value
role: from
path: /workspace/input_data/
roles:
assistant:
- gpt
user:
- human
type: chat_template
debug: null
deepspeed: null
dpo:
beta: 0.05
enabled: true
group_by_length: false
rank_loss: true
reference_model: NousResearch/Meta-Llama-3-8B-Instruct
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: sergioalves/4338f649-11bb-46ec-aeb0-bc996fb50538
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-07
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.1
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/ca046136eb4a7082_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 2
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
special_tokens:
pad_token: <|endoftext|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 81fef1e3-95ff-42cf-a2cb-122451a8f81a
wandb_project: s56-7
wandb_run: your_name
wandb_runid: 81fef1e3-95ff-42cf-a2cb-122451a8f81a
warmup_steps: 25
weight_decay: 0.05
xformers_attention: false
```
</details><br>
# 4338f649-11bb-46ec-aeb0-bc996fb50538
This model is a fine-tuned version of [microsoft/phi-1_5](https://huggingface.co/microsoft/phi-1_5) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1273
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 25
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0 | 0 | 2.1888 |
| 1.9837 | 1.0309 | 100 | 2.1427 |
| 1.9256 | 2.0619 | 200 | 2.1273 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.52.4
- Pytorch 2.5.1+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
morturr/Llama-2-7b-hf-LOO_dadjokes-COMB_one_liners-comb3-seed28-2025-06-19
|
morturr
| 2025-06-19T10:46:22Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"license:llama2",
"region:us"
] | null | 2025-06-19T10:46:15Z |
---
library_name: peft
license: llama2
base_model: meta-llama/Llama-2-7b-hf
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Llama-2-7b-hf-LOO_dadjokes-COMB_one_liners-comb3-seed28-2025-06-19
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-2-7b-hf-LOO_dadjokes-COMB_one_liners-comb3-seed28-2025-06-19
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 16
- seed: 28
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.0.2
- Tokenizers 0.20.1
|
Khruna/hunter
|
Khruna
| 2025-06-19T10:22:27Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] |
text-to-image
| 2025-06-19T10:22:04Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: '-'
output:
url: >-
images/Professional_Mode_woman_shows_her_shiny_plate.00_00_29_20.Still003.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: null
---
# hunter
<Gallery />
## Download model
Weights for this model are available in Safetensors format.
[Download](/Khruna/hunter/tree/main) them in the Files & versions tab.
|
johngreendr1/c5d305b9-d963-4ec3-af93-6eb3a9227e3a
|
johngreendr1
| 2025-06-19T05:45:01Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:NousResearch/Hermes-2-Theta-Llama-3-8B",
"base_model:adapter:NousResearch/Hermes-2-Theta-Llama-3-8B",
"region:us"
] | null | 2025-06-19T03:53:56Z |
---
base_model: NousResearch/Hermes-2-Theta-Llama-3-8B
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1
|
jusjinuk/Meta-Llama-3-70B-3bit-SqueezeLLM
|
jusjinuk
| 2025-06-19T05:33:29Z | 13 | 0 | null |
[
"pytorch",
"llama",
"arxiv:2505.07004",
"base_model:meta-llama/Meta-Llama-3-70B",
"base_model:quantized:meta-llama/Meta-Llama-3-70B",
"license:llama3",
"region:us"
] | null | 2025-05-20T21:16:48Z |
---
base_model:
- meta-llama/Meta-Llama-3-70B
base_model_relation: quantized
license: llama3
---
# Model Card
- Base model: `meta-llama/Meta-Llama-3-70B`
- Quantization method: SqueezeLLM
- Target bit-width: 3
- Backend kernel: Any-Precision-LLM kernel (`ap-gemv`)
- Calibration data: RedPajama (1024 sentences / 4096 tokens)
- Calibration objective: Next-token prediction
# How to run
- Follow the instructions in https://github.com/snu-mllab/GuidedQuant.
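As rough orientation only, a hypothetical loading sketch; the exact entry point and kernel setup are defined by the GuidedQuant repository, so treat everything below as an assumption:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumption: the checkpoint becomes loadable through transformers once the
# Any-Precision-LLM (ap-gemv) kernel from the GuidedQuant repo is installed.
model = AutoModelForCausalLM.from_pretrained(
    "jusjinuk/Meta-Llama-3-70B-3bit-SqueezeLLM", trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-70B")
```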
# References
- [Model Paper](https://arxiv.org/abs/2505.07004)
|
RAJESH88BALIARINGH/RAJESH-BALIARINGH
|
RAJESH88BALIARINGH
| 2025-06-19T05:31:25Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-06-19T05:09:53Z |
---
license: apache-2.0
---
|
okib/brain-tumor-od-finetuned-paligemma2
|
okib
| 2025-06-19T00:27:05Z | 3 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:google/paligemma2-28b-mix-448",
"base_model:adapter:google/paligemma2-28b-mix-448",
"license:gemma",
"region:us"
] | null | 2025-06-18T07:42:09Z |
---
library_name: peft
license: gemma
base_model: google/paligemma2-28b-mix-448
tags:
- generated_from_trainer
model-index:
- name: brain-tumor-od-finetuned-paligemma2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# brain-tumor-od-finetuned-paligemma2
This model is a fine-tuned version of [google/paligemma2-28b-mix-448](https://huggingface.co/google/paligemma2-28b-mix-448) on an unknown dataset.
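Since this is a PEFT adapter for PaliGemma 2, a minimal loading sketch would look roughly as follows (assumed from the standard PEFT/transformers APIs, not verified against this repo):
```python
from peft import PeftModel
from transformers import AutoProcessor, PaliGemmaForConditionalGeneration

# Load the base checkpoint, then apply this adapter on top of it
base = PaliGemmaForConditionalGeneration.from_pretrained("google/paligemma2-28b-mix-448")
model = PeftModel.from_pretrained(base, "okib/brain-tumor-od-finetuned-paligemma2")
processor = AutoProcessor.from_pretrained("google/paligemma2-28b-mix-448")
```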
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.15.2
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
rosieyzh/OLMo-1B-as_fm3_tg_omi2_global_step206
|
rosieyzh
| 2025-06-19T00:13:39Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"olmo",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-19T00:11:25Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
rosieyzh/OLMo-1B-as_fm3_tg_omi2_episode9
|
rosieyzh
| 2025-06-19T00:01:55Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"olmo",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-18T23:59:34Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
omertugrul/whisper-small-kurmanji-v5
|
omertugrul
| 2025-06-18T21:06:59Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-06-18T09:10:19Z |
---
library_name: transformers
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: whisper-small-kurmanji-v5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-small-kurmanji-v5
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4079
- Wer: 12.5070
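A minimal transcription sketch with the 🤗 pipeline API (the audio path below is a placeholder):
```python
from transformers import pipeline

# Load the fine-tuned Whisper checkpoint for Kurmanji ASR
asr = pipeline("automatic-speech-recognition", model="omertugrul/whisper-small-kurmanji-v5")
print(asr("kurmanji_sample.wav")["text"])  # "kurmanji_sample.wav" is a placeholder path
```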
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 64
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:-------:|
| 1.8932 | 0.2660 | 50 | 1.6670 | 81.2906 |
| 0.6587 | 0.5319 | 100 | 0.7650 | 39.9895 |
| 0.4079 | 0.7979 | 150 | 0.5699 | 29.1863 |
| 0.299 | 1.0638 | 200 | 0.4793 | 23.8078 |
| 0.2536 | 1.3298 | 250 | 0.4319 | 21.6458 |
| 0.2263 | 1.5957 | 300 | 0.3959 | 19.5267 |
| 0.2047 | 1.8617 | 350 | 0.3704 | 19.0324 |
| 0.123 | 2.1277 | 400 | 0.3590 | 17.8097 |
| 0.1225 | 2.3936 | 450 | 0.3579 | 16.9166 |
| 0.1248 | 2.6596 | 500 | 0.3476 | 18.1623 |
| 0.1211 | 2.9255 | 550 | 0.3342 | 16.8408 |
| 0.0645 | 3.1915 | 600 | 0.3458 | 15.3149 |
| 0.0635 | 3.4574 | 650 | 0.3402 | 15.3907 |
| 0.0611 | 3.7234 | 700 | 0.3350 | 15.0677 |
| 0.0643 | 3.9894 | 750 | 0.3357 | 14.9293 |
| 0.0304 | 4.2553 | 800 | 0.3512 | 14.2174 |
| 0.0335 | 4.5213 | 850 | 0.3488 | 13.9999 |
| 0.0291 | 4.7872 | 900 | 0.3568 | 13.9175 |
| 0.0247 | 5.0532 | 950 | 0.3618 | 13.9835 |
| 0.0155 | 5.3191 | 1000 | 0.3608 | 13.9208 |
| 0.0159 | 5.5851 | 1050 | 0.3585 | 13.3738 |
| 0.0162 | 5.8511 | 1100 | 0.3626 | 13.2288 |
| 0.0096 | 6.1170 | 1150 | 0.3684 | 13.4034 |
| 0.0062 | 6.3830 | 1200 | 0.3673 | 13.0936 |
| 0.0066 | 6.6489 | 1250 | 0.3719 | 13.2881 |
| 0.0056 | 6.9149 | 1300 | 0.3766 | 12.5169 |
| 0.0026 | 7.1809 | 1350 | 0.3842 | 12.5531 |
| 0.0023 | 7.4468 | 1400 | 0.3888 | 12.5433 |
| 0.0025 | 7.7128 | 1450 | 0.3910 | 12.5861 |
| 0.0026 | 7.9787 | 1500 | 0.3915 | 12.5696 |
| 0.0015 | 8.2447 | 1550 | 0.3986 | 12.7113 |
| 0.0013 | 8.5106 | 1600 | 0.3979 | 12.6158 |
| 0.0013 | 8.7766 | 1650 | 0.4021 | 12.5103 |
| 0.001 | 9.0426 | 1700 | 0.4038 | 12.4971 |
| 0.0009 | 9.3085 | 1750 | 0.4067 | 12.4279 |
| 0.0009 | 9.5745 | 1800 | 0.4065 | 12.4971 |
| 0.0008 | 9.8404 | 1850 | 0.4079 | 12.5070 |
### Framework versions
- Transformers 4.52.4
- Pytorch 2.5.1+cu121
- Datasets 3.6.0
- Tokenizers 0.21.1
|
igorktech/skommarkhos-lucie7binstructv1-1-sft-arpo-a14
|
igorktech
| 2025-06-18T18:39:23Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"cpo",
"arxiv:2401.08417",
"base_model:OpenLLM-France/Lucie-7B-Instruct-v1.1",
"base_model:finetune:OpenLLM-France/Lucie-7B-Instruct-v1.1",
"endpoints_compatible",
"region:us"
] | null | 2025-06-18T17:53:15Z |
---
base_model: OpenLLM-France/Lucie-7B-Instruct-v1.1
library_name: transformers
model_name: skommarkhos-lucie7binstructv1-1-sft-arpo-a14
tags:
- generated_from_trainer
- trl
- cpo
licence: license
---
# Model Card for skommarkhos-lucie7binstructv1-1-sft-arpo-a14
This model is a fine-tuned version of [OpenLLM-France/Lucie-7B-Instruct-v1.1](https://huggingface.co/OpenLLM-France/Lucie-7B-Instruct-v1.1).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="igorktech/skommarkhos-lucie7binstructv1-1-sft-arpo-a14", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/igorktech01/joker-pun-translation/runs/dg75a05b)
This model was trained with CPO, a method introduced in [Contrastive Preference Optimization: Pushing the Boundaries of LLM Performance in Machine Translation](https://huggingface.co/papers/2401.08417).
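For orientation, a minimal CPO training sketch with TRL; the model below is this card's base, while the dataset and hyperparameters are placeholders rather than the recipe used here:
```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import CPOConfig, CPOTrainer

model = AutoModelForCausalLM.from_pretrained("OpenLLM-France/Lucie-7B-Instruct-v1.1")
tokenizer = AutoTokenizer.from_pretrained("OpenLLM-France/Lucie-7B-Instruct-v1.1")
# Placeholder preference dataset with prompt/chosen/rejected columns
train_dataset = load_dataset("trl-lib/ultrafeedback_binarized", split="train")

args = CPOConfig(output_dir="lucie-cpo", beta=0.1)  # beta value is illustrative
trainer = CPOTrainer(model=model, args=args, processing_class=tokenizer, train_dataset=train_dataset)
trainer.train()
```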
### Framework versions
- TRL: 0.18.2
- Transformers: 4.52.4
- Pytorch: 2.7.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite CPO as:
```bibtex
@inproceedings{xu2024contrastive,
title = {{Contrastive Preference Optimization: Pushing the Boundaries of LLM Performance in Machine Translation}},
author = {Haoran Xu and Amr Sharaf and Yunmo Chen and Weiting Tan and Lingfeng Shen and Benjamin Van Durme and Kenton Murray and Young Jin Kim},
year = 2024,
booktitle = {Forty-first International Conference on Machine Learning, {ICML} 2024, Vienna, Austria, July 21-27, 2024},
publisher = {OpenReview.net},
url = {https://openreview.net/forum?id=51iwkioZpn}
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
KirubaLS/fine_tuned_gemma_lora_first_level5
|
KirubaLS
| 2025-06-18T18:13:42Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"generated_from_trainer",
"base_model:google/gemma-2b",
"base_model:adapter:google/gemma-2b",
"license:gemma",
"region:us"
] | null | 2025-06-18T17:39:24Z |
---
library_name: peft
license: gemma
base_model: google/gemma-2b
tags:
- generated_from_trainer
model-index:
- name: fine_tuned_gemma_lora_first_level5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fine_tuned_gemma_lora_first_level5
This model is a fine-tuned version of [google/gemma-2b](https://huggingface.co/google/gemma-2b) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2679
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
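A minimal sketch of how these hyperparameters map onto a PEFT LoRA run; the LoRA rank, alpha, and target modules are assumptions, since the card does not record them:
```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, TrainingArguments

model = AutoModelForCausalLM.from_pretrained("google/gemma-2b")
# LoRA settings below are illustrative assumptions, not taken from the card
model = get_peft_model(model, LoraConfig(r=8, lora_alpha=16, task_type="CAUSAL_LM"))

args = TrainingArguments(
    output_dir="fine_tuned_gemma_lora_first_level5",
    learning_rate=2e-4,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=2,
    gradient_accumulation_steps=16,  # 1 x 16 = total train batch size of 16
    num_train_epochs=5,
    lr_scheduler_type="linear",
    seed=42,
    fp16=True,  # "Native AMP" mixed precision
)
```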
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.0108 | 1.0 | 32 | 2.3360 |
| 1.5911 | 2.0 | 64 | 2.2869 |
| 1.4418 | 3.0 | 96 | 2.2329 |
| 1.3445 | 4.0 | 128 | 2.2402 |
| 1.3256 | 5.0 | 160 | 2.2679 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 2.14.4
- Tokenizers 0.21.1
|
SaNsOT/a2c-PandaReachDense-v3
|
SaNsOT
| 2025-06-18T13:03:29Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-06-18T12:58:21Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v3
type: PandaReachDense-v3
metrics:
- type: mean_reward
value: -0.25 +/- 0.09
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v3**
This is a trained model of a **A2C** agent playing **PandaReachDense-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption based on the usual `<algo>-<env>.zip` Hub convention):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# filename assumed from the standard <algo>-<env>.zip naming convention
checkpoint = load_from_hub(repo_id="SaNsOT/a2c-PandaReachDense-v3", filename="a2c-PandaReachDense-v3.zip")
model = A2C.load(checkpoint)
```
|
Prince-1/Mistral-Nemo-Base-2407-Onnx
|
Prince-1
| 2025-06-18T10:46:54Z | 0 | 0 |
onnxruntime_genai
|
[
"onnxruntime_genai",
"onnx",
"mistral3",
"onnxruntime-genai",
"text-generation-inference",
"en",
"fr",
"de",
"es",
"it",
"pt",
"ru",
"zh",
"ja",
"base_model:mistralai/Mistral-Nemo-Base-2407",
"base_model:quantized:mistralai/Mistral-Nemo-Base-2407",
"license:apache-2.0",
"region:us"
] | null | 2025-06-18T10:45:17Z |
---
license: apache-2.0
language:
- en
- fr
- de
- es
- it
- pt
- ru
- zh
- ja
base_model:
- mistralai/Mistral-Nemo-Base-2407
library_name: onnxruntime_genai
tags:
- mistral3
- onnx
- onnxruntime-genai
- text-generation-inference
base_model_relation: quantized
---
# Model Card for Mistral-Nemo-Base-2407
The Mistral-Nemo-Base-2407 Large Language Model (LLM) is a pretrained generative text model of 12B parameters trained jointly by Mistral AI and NVIDIA; it significantly outperforms existing models smaller or similar in size.
For more details about this model please refer to our release [blog post](https://mistral.ai/news/mistral-nemo/).
## Key features
- Released under the **Apache 2 License**
- Pre-trained and instructed versions
- Trained with a **128k context window**
- Trained on a large proportion of **multilingual and code data**
- Drop-in replacement of Mistral 7B
## Model Architecture
Mistral Nemo is a transformer model, with the following architecture choices:
- **Layers:** 40
- **Dim:** 5,120
- **Head dim:** 128
- **Hidden dim:** 14,336
- **Activation Function:** SwiGLU
- **Number of heads:** 32
- **Number of kv-heads:** 8 (GQA)
- **Vocabulary size:** 2**17 ~= 128k
- **Rotary embeddings (theta = 1M)**
## Metrics
### Main Benchmarks
| Benchmark | Score |
| --- | --- |
| HellaSwag (0-shot) | 83.5% |
| Winogrande (0-shot) | 76.8% |
| OpenBookQA (0-shot) | 60.6% |
| CommonSenseQA (0-shot) | 70.4% |
| TruthfulQA (0-shot) | 50.3% |
| MMLU (5-shot) | 68.0% |
| TriviaQA (5-shot) | 73.8% |
| NaturalQuestions (5-shot) | 31.2% |
### Multilingual Benchmarks (MMLU)
| Language | Score |
| --- | --- |
| French | 62.3% |
| German | 62.7% |
| Spanish | 64.6% |
| Italian | 61.3% |
| Portuguese | 63.3% |
| Russian | 59.2% |
| Chinese | 59.0% |
| Japanese | 59.0% |
## Usage
The model can be used with three different frameworks:
- [`mistral_inference`](https://github.com/mistralai/mistral-inference): See [here](#mistral-inference)
- [`transformers`](https://github.com/huggingface/transformers): See [here](#transformers)
- [`NeMo`](https://github.com/NVIDIA/NeMo): See [nvidia/Mistral-NeMo-12B-Base](https://huggingface.co/nvidia/Mistral-NeMo-12B-Base)
### Mistral Inference
#### Install
It is recommended to use `mistralai/Mistral-Nemo-Base-2407` with [mistral-inference](https://github.com/mistralai/mistral-inference).
For HF transformers code snippets, please keep scrolling.
```
pip install mistral_inference
```
#### Download
```py
from huggingface_hub import snapshot_download
from pathlib import Path
mistral_models_path = Path.home().joinpath('mistral_models', 'Nemo-v0.1')
mistral_models_path.mkdir(parents=True, exist_ok=True)
snapshot_download(repo_id="mistralai/Mistral-Nemo-Base-2407", allow_patterns=["params.json", "consolidated.safetensors", "tekken.json"], local_dir=mistral_models_path)
```
#### Demo
After installing `mistral_inference`, a `mistral-demo` CLI command should be available in your environment.
```
mistral-demo $HOME/mistral_models/Nemo-v0.1
```
### Transformers
> [!IMPORTANT]
> NOTE: Until a new release has been made, you need to install transformers from source:
> ```sh
> pip install git+https://github.com/huggingface/transformers.git
> ```
If you want to use Hugging Face `transformers` to generate text, you can do something like this.
```py
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "mistralai/Mistral-Nemo-Base-2407"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
inputs = tokenizer("Hello my name is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
> [!TIP]
> Unlike previous Mistral models, Mistral Nemo requires smaller temperatures. We recommend using a temperature of 0.3.
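Building on the snippet above, sampling with the recommended temperature looks like this (a sketch; `do_sample` must be enabled for `temperature` to take effect):
```py
outputs = model.generate(**inputs, max_new_tokens=20, do_sample=True, temperature=0.3)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```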
## Note
`Mistral-Nemo-Base-2407` is a pretrained base model and therefore does not have any moderation mechanisms.
## The Mistral AI Team
Albert Jiang, Alexandre Sablayrolles, Alexis Tacnet, Alok Kothari, Antoine Roux, Arthur Mensch, Audrey Herblin-Stoop, Augustin Garreau, Austin Birky, Bam4d, Baptiste Bout, Baudouin de Monicault, Blanche Savary, Carole Rambaud, Caroline Feldman, Devendra Singh Chaplot, Diego de las Casas, Eleonore Arcelin, Emma Bou Hanna, Etienne Metzger, Gaspard Blanchet, Gianna Lengyel, Guillaume Bour, Guillaume Lample, Harizo Rajaona, Henri Roussez, Hichem Sattouf, Ian Mack, Jean-Malo Delignon, Jessica Chudnovsky, Justus Murke, Kartik Khandelwal, Lawrence Stewart, Louis Martin, Louis Ternon, Lucile Saulnier, Lélio Renard Lavaud, Margaret Jennings, Marie Pellat, Marie Torelli, Marie-Anne Lachaux, Marjorie Janiewicz, Mickaël Seznec, Nicolas Schuhl, Niklas Muhs, Olivier de Garrigues, Patrick von Platen, Paul Jacob, Pauline Buche, Pavan Kumar Reddy, Perry Savas, Pierre Stock, Romain Sauvestre, Sagar Vaze, Sandeep Subramanian, Saurabh Garg, Sophia Yang, Szymon Antoniak, Teven Le Scao, Thibault Schueller, Thibaut Lavril, Thomas Wang, Théophile Gervet, Timothée Lacroix, Valera Nemychnikova, Wendy Shang, William El Sayed, William Marshall
|
LarryAIDraw/summerMix_v10
|
LarryAIDraw
| 2025-06-18T09:54:38Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2025-06-18T06:18:51Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/1683869/summer-mix?modelVersionId=1905818
|
nwdxlgzs/XL-AiLuaDec-1.7B-FFT-checkpoint-40000
|
nwdxlgzs
| 2025-06-18T01:40:22Z | 0 | 0 | null |
[
"safetensors",
"qwen3",
"lua",
"dec",
"luac",
"dataset:nwdxlgzs/ailuadec-dataset-chatml",
"base_model:unsloth/Qwen3-1.7B",
"base_model:finetune:unsloth/Qwen3-1.7B",
"license:gpl-3.0",
"region:us"
] | null | 2025-06-17T15:44:04Z |
---
license: gpl-3.0
datasets:
- nwdxlgzs/ailuadec-dataset-chatml
base_model:
- unsloth/Qwen3-1.7B
tags:
- lua
- dec
- luac
- qwen3
---
# train
640,000 samples (40,000 x 2 x 8), from `AI-Lua-Dec-0.jsonl.gz` / `AI-Lua-Dec-1.jsonl.gz` / `AI-Lua-Dec-3.jsonl.gz`,
covering lua51/lua52/lua53/lua54.
# input
Use `luac -l <file>` to produce the bytecode listing that serves as the model's input.
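For example (a sketch; `script.lua` is a placeholder file name):
```bash
luac -o script.luac script.lua   # compile source to bytecode
luac -l script.luac              # print the listing fed to the model
```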
# think
During the thinking step, the model guesses constants/locals/upvalues.
# output
The output is most likely unusable, but may be valid Lua code.
# device
> Online GPU is expensive!
| Category | Configuration |
|----------------|---------------------------------------------------------|
| **GPU** | RTX 4090 (24GB) * 1 |
| **CPU** | 16 vCPU Intel(R) Xeon(R) Platinum 8352V CPU @ 2.10GHz |
| **Memory** | 120GB |
| **Disk** | 30 GB + 50GB |
| **Duration** | 1 Day |
|
yalhessi/lemexp-task1-v2-template_full-deepseek-coder-1.3b-base-ddp-8lr-v2
|
yalhessi
| 2025-06-17T21:46:43Z | 150 | 0 |
peft
|
[
"peft",
"safetensors",
"generated_from_trainer",
"base_model:deepseek-ai/deepseek-coder-1.3b-base",
"base_model:adapter:deepseek-ai/deepseek-coder-1.3b-base",
"license:other",
"region:us"
] | null | 2025-06-02T03:54:38Z |
---
library_name: peft
license: other
base_model: deepseek-ai/deepseek-coder-1.3b-base
tags:
- generated_from_trainer
model-index:
- name: lemexp-task1-v2-template_full-deepseek-coder-1.3b-base-ddp-8lr-v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# lemexp-task1-v2-template_full-deepseek-coder-1.3b-base-ddp-8lr-v2
This model is a fine-tuned version of [deepseek-ai/deepseek-coder-1.3b-base](https://huggingface.co/deepseek-ai/deepseek-coder-1.3b-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1452
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0008
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 16
- total_eval_batch_size: 16
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 12
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:------:|:---------------:|
| 0.2792 | 0.2 | 3094 | 0.2798 |
| 0.2601 | 0.4 | 6188 | 0.2572 |
| 0.2526 | 0.6 | 9282 | 0.2487 |
| 0.2458 | 0.8 | 12376 | 0.2473 |
| 0.2427 | 1.0 | 15470 | 0.2419 |
| 0.2367 | 1.2 | 18564 | 0.2375 |
| 0.2364 | 1.4 | 21658 | 0.2323 |
| 0.2303 | 1.6 | 24752 | 0.2316 |
| 0.2318 | 1.8 | 27846 | 0.2332 |
| 0.2274 | 2.0 | 30940 | 0.2301 |
| 0.2225 | 2.2 | 34034 | 0.2269 |
| 0.222 | 2.4 | 37128 | 0.2193 |
| 0.2189 | 2.6 | 40222 | 0.2204 |
| 0.2162 | 2.8 | 43316 | 0.2168 |
| 0.2159 | 3.0 | 46410 | 0.2169 |
| 0.2117 | 3.2 | 49504 | 0.2171 |
| 0.212 | 3.4 | 52598 | 0.2086 |
| 0.2072 | 3.6 | 55692 | 0.2079 |
| 0.2062 | 3.8 | 58786 | 0.2091 |
| 0.2065 | 4.0 | 61880 | 0.2000 |
| 0.1999 | 4.2 | 64974 | 0.1994 |
| 0.1988 | 4.4 | 68068 | 0.1952 |
| 0.1967 | 4.6 | 71162 | 0.1948 |
| 0.1923 | 4.8 | 74256 | 0.1957 |
| 0.1916 | 5.0 | 77350 | 0.1928 |
| 0.1878 | 5.2 | 80444 | 0.1910 |
| 0.1879 | 5.4 | 83538 | 0.1928 |
| 0.1856 | 5.6 | 86632 | 0.1923 |
| 0.1849 | 5.8 | 89726 | 0.1877 |
| 0.1827 | 6.0 | 92820 | 0.1866 |
| 0.177 | 6.2 | 95914 | 0.1824 |
| 0.1767 | 6.4 | 99008 | 0.1838 |
| 0.1767 | 6.6 | 102102 | 0.1832 |
| 0.1766 | 6.8 | 105196 | 0.1792 |
| 0.1737 | 7.0 | 108290 | 0.1772 |
| 0.1667 | 7.2 | 111384 | 0.1758 |
| 0.1649 | 7.4 | 114478 | 0.1715 |
| 0.1667 | 7.6 | 117572 | 0.1755 |
| 0.1641 | 7.8 | 120666 | 0.1719 |
| 0.1641 | 8.0 | 123760 | 0.1697 |
| 0.1555 | 8.2 | 126854 | 0.1687 |
| 0.1539 | 8.4 | 129948 | 0.1656 |
| 0.153 | 8.6 | 133042 | 0.1635 |
| 0.1556 | 8.8 | 136136 | 0.1616 |
| 0.1543 | 9.0 | 139230 | 0.1615 |
| 0.1457 | 9.2 | 142324 | 0.1594 |
| 0.1458 | 9.4 | 145418 | 0.1585 |
| 0.1448 | 9.6 | 148512 | 0.1573 |
| 0.144 | 9.8 | 151606 | 0.1558 |
| 0.1405 | 10.0 | 154700 | 0.1520 |
| 0.135 | 10.2 | 157794 | 0.1520 |
| 0.1346 | 10.4 | 160888 | 0.1505 |
| 0.1341 | 10.6 | 163982 | 0.1506 |
| 0.1319 | 10.8 | 167076 | 0.1497 |
| 0.1313 | 11.0 | 170170 | 0.1472 |
| 0.1256 | 11.2 | 173264 | 0.1487 |
| 0.1218 | 11.4 | 176358 | 0.1462 |
| 0.1224 | 11.6 | 179452 | 0.1456 |
| 0.1212 | 11.8 | 182546 | 0.1453 |
| 0.1221 | 12.0 | 185640 | 0.1452 |
### Framework versions
- PEFT 0.14.0
- Transformers 4.47.0
- Pytorch 2.5.1+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0
|
RichardErkhov/barc0_-_google_cloude_test_20k_transduction-gpt4omini_lr1e-5_epoch2_1-4bits
|
RichardErkhov
| 2025-06-17T21:10:40Z | 0 | 0 | null |
[
"safetensors",
"llama",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-06-17T21:08:15Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
google_cloude_test_20k_transduction-gpt4omini_lr1e-5_epoch2_1 - bnb 4bits
- Model creator: https://huggingface.co/barc0/
- Original model: https://huggingface.co/barc0/google_cloude_test_20k_transduction-gpt4omini_lr1e-5_epoch2_1/
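A minimal loading sketch (assumes `bitsandbytes` is installed; the quantization config saved with the checkpoint is restored automatically by `from_pretrained`):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "RichardErkhov/barc0_-_google_cloude_test_20k_transduction-gpt4omini_lr1e-5_epoch2_1-4bits"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")
```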
Original model description:
---
library_name: transformers
license: llama3.1
base_model: meta-llama/Meta-Llama-3.1-8B-Instruct
tags:
- alignment-handbook
- trl
- sft
- generated_from_trainer
- trl
- sft
- generated_from_trainer
datasets:
- barc0/transduction_20k_gpt4o-mini_generated_problems_seed100.jsonl_messages_format_0.3
model-index:
- name: google_cloude_test_20k_transduction-gpt4omini_lr1e-5_epoch2_1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# google_cloude_test_20k_transduction-gpt4omini_lr1e-5_epoch2_1
This model is a fine-tuned version of [meta-llama/Meta-Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct) on the barc0/transduction_20k_gpt4o-mini_generated_problems_seed100.jsonl_messages_format_0.3 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0620
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 2
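As a sanity check, the total train batch size above follows from the per-device batch size, the device count, and gradient accumulation:
```python
per_device_train_batch_size = 8
num_devices = 8
gradient_accumulation_steps = 2
total = per_device_train_batch_size * num_devices * gradient_accumulation_steps
print(total)  # 128, matching the reported total_train_batch_size
```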
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0951 | 0.9966 | 145 | 0.0754 |
| 0.0665 | 1.9931 | 290 | 0.0620 |
### Framework versions
- Transformers 4.45.0.dev0
- Pytorch 2.4.0+cu121
- Datasets 3.0.0
- Tokenizers 0.19.1
|
elmehdiessalehy/flan-t5-qlora-learningq-qg
|
elmehdiessalehy
| 2025-06-17T18:59:48Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2025-06-17T18:59:08Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Cordmail/Feminist-Mistral-Safetensors
|
Cordmail
| 2025-06-17T16:22:24Z | 0 | 0 | null |
[
"safetensors",
"dataset:Cordmail/reddit-Feminists",
"region:us"
] | null | 2025-06-17T04:53:15Z |
---
datasets:
- Cordmail/reddit-Feminists
---
Anyone found a use for this thing? I'm really curious.
I really wanna know.
jedly271@proton.me
Let me in on the fun.
Email me about anything.
|
Lelon/cue-de-conan
|
Lelon
| 2025-06-17T15:15:07Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"eurobert",
"token-classification",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"region:us"
] |
token-classification
| 2025-06-17T15:14:28Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
linlinw0/kudou_14719
|
linlinw0
| 2025-06-17T11:09:41Z | 0 | 0 | null |
[
"safetensors",
"mllama",
"license:apache-2.0",
"region:us"
] | null | 2025-06-17T10:55:24Z |
---
license: apache-2.0
---
|
leonweber/checkpoints
|
leonweber
| 2025-06-17T10:12:29Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"tensorboard",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:100",
"loss:MatryoshkaLoss",
"loss:MultipleNegativesRankingLoss",
"arxiv:1908.10084",
"arxiv:2205.13147",
"arxiv:1705.00652",
"base_model:BAAI/bge-base-en-v1.5",
"base_model:finetune:BAAI/bge-base-en-v1.5",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-06-16T13:17:31Z |
---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:100
- loss:MatryoshkaLoss
- loss:MultipleNegativesRankingLoss
base_model: BAAI/bge-base-en-v1.5
widget:
- source_sentence: <start> FTYGHYHHYHGGTTGRREEEEEEEEDEEEE <end>
sentences:
- 'on'
- later
- The
- source_sentence: <start> YTTHHNNBHJHHGRFRRRFFEEEERRRRED <end>
sentences:
- Spencers
- Richard
- The
- source_sentence: The <start> MLIUYFRRSWEREFRGTTRFTTTFEDWEER <end> was later published
on Richard B.
sentences:
- The
- Spencers
- letter
- source_sentence: The letter was <start> PLJUGRFVAAQAWQSFRFYTTRREEDDEGR <end> published
on Richard B.
sentences:
- The
- later
- The
- source_sentence: The letter was <start> OLKMUYFDXSSAAZWRGTTTGRRDEDDRRR <end> published
on Richard B.
sentences:
- letter
- The
- later
pipeline_tag: sentence-similarity
library_name: sentence-transformers
---
# SentenceTransformer based on BAAI/bge-base-en-v1.5
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) on the generator dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) <!-- at revision a5beb1e3e68b9ab74eb54cfd186867f64f240e1a -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- generator
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("checkpoints")
# Run inference
sentences = [
'The letter was <start> OLKMUYFDXSSAAZWRGTTTGRRDEDDRRR <end> published on Richard B.',
'later',
'The',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### generator
* Dataset: generator
* Size: 100 training samples
* Columns: <code>anchor</code> and <code>positive</code>
* Approximate statistics based on the first 100 samples:
| | anchor | positive |
|:--------|:-----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 23 tokens</li><li>mean: 30.92 tokens</li><li>max: 36 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 3.2 tokens</li><li>max: 4 tokens</li></ul> |
* Samples:
| anchor | positive |
|:----------------------------------------------------------|:-----------------|
| <code><start> YGHGYYJHHHGRRERRERRDEERWWSWWER <end></code> | <code>The</code> |
| <code><start> GRHHHGYHBJYGGGDTRRRRRRFFEEEEDE <end></code> | <code>The</code> |
| <code><start> TTYHYJJMJJYHHYTRRFRRRRRTREEERW <end></code> | <code>The</code> |
* Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
```json
{
"loss": "MultipleNegativesRankingLoss",
"matryoshka_dims": [
768,
512,
384,
256,
128,
64
],
"matryoshka_weights": [
1,
1,
1,
1,
1,
1
],
"n_dims_per_step": -1
}
```
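A minimal sketch of constructing this loss with sentence-transformers: the ranking loss is wrapped so that every truncated embedding dimensionality listed above contributes to training:
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("BAAI/bge-base-en-v1.5")
base_loss = MultipleNegativesRankingLoss(model)
loss = MatryoshkaLoss(model, base_loss, matryoshka_dims=[768, 512, 384, 256, 128, 64])
```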
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 4
- `per_device_eval_batch_size`: 1
- `learning_rate`: 2e-05
- `weight_decay`: 0.01
- `num_train_epochs`: 4
- `lr_scheduler_type`: cosine
- `warmup_ratio`: 0.1
- `tf32`: False
- `load_best_model_at_end`: True
- `optim`: adamw_torch_fused
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 4
- `per_device_eval_batch_size`: 1
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.01
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 4
- `max_steps`: -1
- `lr_scheduler_type`: cosine
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: False
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch_fused
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss |
|:-----:|:----:|:-------------:|
| 0.4 | 10 | 13.6421 |
| 0.8 | 20 | 11.8949 |
| 1.2 | 30 | 7.241 |
| 1.6 | 40 | 6.3184 |
| 2.0 | 50 | 4.4524 |
| 2.4 | 60 | 3.6606 |
| 2.8 | 70 | 3.4123 |
| 3.2 | 80 | 2.6028 |
| 3.6 | 90 | 2.1896 |
| 4.0 | 100 | 2.1076 |
### Framework Versions
- Python: 3.11.13
- Sentence Transformers: 4.1.0
- Transformers: 4.52.4
- PyTorch: 2.7.1
- Accelerate: 1.7.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MatryoshkaLoss
```bibtex
@misc{kusupati2024matryoshka,
title={Matryoshka Representation Learning},
author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
year={2024},
eprint={2205.13147},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
joackimagno/lora_model_test
|
joackimagno
| 2025-06-14T09:01:26Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-14T09:01:12Z |
---
base_model: unsloth/qwen2.5-7b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** joackimagno
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen2.5-7b-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
mlx-community/Lingshu-7B-6bit
|
mlx-community
| 2025-06-12T03:43:57Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2_5_vl",
"image-text-to-text",
"medical",
"multimodal",
"report generation",
"radiology",
"clinical-reasoning",
"MRI",
"CT",
"Histopathology",
"X-ray",
"Fundus",
"mlx",
"conversational",
"license:mit",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2025-06-11T14:48:43Z |
---
license: mit
library_name: transformers
pipeline_tag: image-text-to-text
tags:
- medical
- multimodal
- report generation
- radiology
- clinical-reasoning
- MRI
- CT
- Histopathology
- X-ray
- Fundus
- mlx
---
# mlx-community/Lingshu-7B-6bit
This model was converted to MLX format from [`lingshu-medical-mllm/Lingshu-7B`](https://huggingface.co/lingshu-medical-mllm/Lingshu-7B) using mlx-vlm version **0.1.27**.
Refer to the [original model card](https://huggingface.co/lingshu-medical-mllm/Lingshu-7B) for more details on the model.
## Use with mlx
```bash
pip install -U mlx-vlm
```
```bash
python -m mlx_vlm.generate --model mlx-community/Lingshu-7B-6bit --max-tokens 100 --temperature 0.0 --prompt "Describe this image." --image <path_to_image>
```
|