modelId (string, 5 to 139 chars) | author (string, 2 to 42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-09-12 12:31:00) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (555 classes) | tags (list, 1 to 4.05k items) | pipeline_tag (55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-09-12 12:28:53) | card (string, 11 to 1.01M chars) |
---|---|---|---|---|---|---|---|---|---|
BootesVoid/cme3g5frq02jl6aq1iwcqhxp0_cmffmzq9y044vx0n0fzpndo48 | BootesVoid | 2025-09-12T00:24:59Z | 0 | 0 | diffusers | ["diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us"] | text-to-image | 2025-09-12T00:24:57Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: AVA
---
# Cme3G5Frq02Jl6Aq1Iwcqhxp0_Cmffmzq9Y044Vx0N0Fzpndo48
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using the AI Toolkit trainer: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `AVA` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate

input = {
    "prompt": "AVA",
    "lora_weights": "https://huggingface.co/BootesVoid/cme3g5frq02jl6aq1iwcqhxp0_cmffmzq9y044vx0n0fzpndo48/resolve/main/lora.safetensors"
}

output = replicate.run(
    "black-forest-labs/flux-dev-lora",
    input=input
)

# Save each generated image to disk
for index, item in enumerate(output):
    with open(f"output_{index}.webp", "wb") as file:
        file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch

# Load the FLUX.1-dev base pipeline, then attach this LoRA adapter
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cme3g5frq02jl6aq1iwcqhxp0_cmffmzq9y044vx0n0fzpndo48', weight_name='lora.safetensors')
# The trigger word `AVA` activates the trained concept
image = pipeline('AVA').images[0]
```
For more details, including weighting, merging, and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters).
## Training details
- Steps: 2500
- Learning rate: 9e-05
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cme3g5frq02jl6aq1iwcqhxp0_cmffmzq9y044vx0n0fzpndo48/discussions) to add images that show off what you’ve made with this LoRA.
|
bn22/VideoVIT-WD | bn22 | 2025-09-12T00:23:57Z | 22 | 0 | null | ["safetensors", "model_hub_mixin", "pytorch_model_hub_mixin", "region:us"] | null | 2025-09-08T07:13:14Z |
---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Code: [More Information Needed]
- Paper: [More Information Needed]
- Docs: [More Information Needed]
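Pending a code release, the snippet below is a hypothetical loading sketch: `VideoViT` and its layers are placeholder names, since the real (unpublished) class must subclass `PyTorchModelHubMixin` and match the stored config and weights.
```python
# Hypothetical sketch: loading a PyTorchModelHubMixin checkpoint.
# `VideoViT` and its layers are placeholders; the actual class is not published.
import torch.nn as nn
from huggingface_hub import PyTorchModelHubMixin

class VideoViT(nn.Module, PyTorchModelHubMixin):
    def __init__(self, hidden_dim: int = 256):
        super().__init__()
        self.encoder = nn.Linear(hidden_dim, hidden_dim)

    def forward(self, x):
        return self.encoder(x)

# Downloads config.json and the safetensors weights from the Hub,
# then instantiates the class and loads the state dict.
model = VideoViT.from_pretrained("bn22/VideoVIT-WD")
```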
|
omerbektasss/blockassist-bc-keen_fast_giraffe_1757636317 | omerbektasss | 2025-09-12T00:19:30Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "keen fast giraffe", "arxiv:2504.07091", "region:us"] | null | 2025-09-12T00:18:53Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- keen fast giraffe
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mradermacher/WEBGEN-4B-Preview-GGUF | mradermacher | 2025-09-12T00:18:25Z | 0 | 0 | transformers | ["transformers", "gguf", "web-generation", "html", "css", "tailwind-css", "ui-generation", "web-design", "small-model", "qwen3", "en", "base_model:Tesslate/WEBGEN-4B-Preview", "base_model:quantized:Tesslate/WEBGEN-4B-Preview", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational"] | null | 2025-09-11T15:00:49Z |
---
base_model: Tesslate/WEBGEN-4B-Preview
language:
- en
library_name: transformers
license: apache-2.0
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- web-generation
- html
- css
- tailwind-css
- ui-generation
- web-design
- small-model
- qwen3
- transformers
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/Tesslate/WEBGEN-4B-Preview
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#WEBGEN-4B-Preview-GGUF).***
weighted/imatrix quants are available at https://huggingface.co/mradermacher/WEBGEN-4B-Preview-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
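For a Python route, here is a minimal sketch using the llama-cpp-python bindings (an assumption, not part of this repo's tooling; the quant filename comes from the table below):
```python
# Minimal sketch with llama-cpp-python (pip install llama-cpp-python huggingface-hub).
from llama_cpp import Llama

# Downloads the chosen quant from this repo and loads it.
llm = Llama.from_pretrained(
    repo_id="mradermacher/WEBGEN-4B-Preview-GGUF",
    filename="WEBGEN-4B-Preview.Q4_K_M.gguf",
)
out = llm("Write a Tailwind CSS pricing section in HTML.", max_tokens=256)
print(out["choices"][0]["text"])
```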
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/WEBGEN-4B-Preview-GGUF/resolve/main/WEBGEN-4B-Preview.Q2_K.gguf) | Q2_K | 1.8 | |
| [GGUF](https://huggingface.co/mradermacher/WEBGEN-4B-Preview-GGUF/resolve/main/WEBGEN-4B-Preview.Q3_K_S.gguf) | Q3_K_S | 2.0 | |
| [GGUF](https://huggingface.co/mradermacher/WEBGEN-4B-Preview-GGUF/resolve/main/WEBGEN-4B-Preview.Q3_K_M.gguf) | Q3_K_M | 2.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/WEBGEN-4B-Preview-GGUF/resolve/main/WEBGEN-4B-Preview.Q3_K_L.gguf) | Q3_K_L | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/WEBGEN-4B-Preview-GGUF/resolve/main/WEBGEN-4B-Preview.IQ4_XS.gguf) | IQ4_XS | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/WEBGEN-4B-Preview-GGUF/resolve/main/WEBGEN-4B-Preview.Q4_K_S.gguf) | Q4_K_S | 2.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/WEBGEN-4B-Preview-GGUF/resolve/main/WEBGEN-4B-Preview.Q4_K_M.gguf) | Q4_K_M | 2.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/WEBGEN-4B-Preview-GGUF/resolve/main/WEBGEN-4B-Preview.Q5_K_S.gguf) | Q5_K_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/WEBGEN-4B-Preview-GGUF/resolve/main/WEBGEN-4B-Preview.Q5_K_M.gguf) | Q5_K_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/WEBGEN-4B-Preview-GGUF/resolve/main/WEBGEN-4B-Preview.Q6_K.gguf) | Q6_K | 3.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/WEBGEN-4B-Preview-GGUF/resolve/main/WEBGEN-4B-Preview.Q8_0.gguf) | Q8_0 | 4.4 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/WEBGEN-4B-Preview-GGUF/resolve/main/WEBGEN-4B-Preview.f16.gguf) | f16 | 8.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for answers to common questions and to request quantization of other models.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
houssam2030/houssam77_distilbert_ag_news | houssam2030 | 2025-09-12T00:15:00Z | 0 | 0 | transformers | ["transformers", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2025-09-11T21:08:43Z |
---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: houssam77_distilbert_ag_news
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# houssam77_distilbert_ag_news
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the AG News dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2953
- Accuracy: 0.9345
- F1: 0.9346
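A minimal inference sketch with the 🤗 pipeline API (assuming the checkpoint is public on the Hub):
```python
# Minimal inference sketch for this text-classification checkpoint.
from transformers import pipeline

clf = pipeline("text-classification", model="houssam2030/houssam77_distilbert_ag_news")
print(clf("Wall St. closes higher as tech stocks rally."))
```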
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: AdamW (ADAMW_TORCH_FUSED) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
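For reference, the list above maps onto `TrainingArguments` roughly as follows (a hedged reconstruction; the original training script is not published):
```python
# Hedged reconstruction of the hyperparameters listed above.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="houssam77_distilbert_ag_news",
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    optim="adamw_torch_fused",
    lr_scheduler_type="linear",
    num_train_epochs=10,
    fp16=True,  # Native AMP mixed precision
)
```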
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|
| 0.1996 | 1.0 | 3750 | 0.2076 | 0.9347 | 0.9349 |
| 0.1456 | 2.0 | 7500 | 0.1881 | 0.9428 | 0.9428 |
| 0.0985 | 3.0 | 11250 | 0.2209 | 0.9393 | 0.9392 |
| 0.0624 | 4.0 | 15000 | 0.2953 | 0.9345 | 0.9346 |
### Framework versions
- Transformers 4.56.1
- Pytorch 2.8.0+cu126
- Datasets 4.0.0
- Tokenizers 0.22.0
|
keithdrexel/Qwen2.5-VL-7B-Instruct-bf16-4bit-BNB-LanguageOnly | keithdrexel | 2025-09-12T00:12:54Z | 0 | 0 | transformers | ["transformers", "safetensors", "qwen2_5_vl", "image-to-text", "arxiv:1910.09700", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us"] | image-to-text | 2025-09-12T00:12:22Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
omerbektasss/blockassist-bc-insectivorous_bold_lion_1757635910 | omerbektasss | 2025-09-12T00:12:43Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "insectivorous bold lion", "arxiv:2504.07091", "region:us"] | null | 2025-09-12T00:12:06Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- insectivorous bold lion
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mradermacher/ganda-gemma-1b-GGUF | mradermacher | 2025-09-12T00:12:38Z | 0 | 0 | transformers | ["transformers", "gguf", "luganda", "translation", "conversational", "gemma", "gemma3", "fine-tuned", "en", "lg", "base_model:CraneAILabs/ganda-gemma-1b", "base_model:quantized:CraneAILabs/ganda-gemma-1b", "license:gemma", "endpoints_compatible", "region:us"] | translation | 2025-09-11T15:08:12Z |
---
base_model: CraneAILabs/ganda-gemma-1b
language:
- en
- lg
library_name: transformers
license: gemma
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- luganda
- translation
- conversational
- gemma
- gemma3
- fine-tuned
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/CraneAILabs/ganda-gemma-1b
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#ganda-gemma-1b-GGUF).***
weighted/imatrix quants are available at https://huggingface.co/mradermacher/ganda-gemma-1b-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
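For a Python route, a minimal sketch using the llama-cpp-python bindings (an assumption, not part of this repo's tooling; the quant filename comes from the table below):
```python
# Minimal sketch with llama-cpp-python (pip install llama-cpp-python huggingface-hub).
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="mradermacher/ganda-gemma-1b-GGUF",
    filename="ganda-gemma-1b.Q4_K_M.gguf",
)
out = llm("Translate to Luganda: Good morning, how are you?", max_tokens=64)
print(out["choices"][0]["text"])
```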
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/ganda-gemma-1b-GGUF/resolve/main/ganda-gemma-1b.Q3_K_S.gguf) | Q3_K_S | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/ganda-gemma-1b-GGUF/resolve/main/ganda-gemma-1b.Q2_K.gguf) | Q2_K | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/ganda-gemma-1b-GGUF/resolve/main/ganda-gemma-1b.IQ4_XS.gguf) | IQ4_XS | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/ganda-gemma-1b-GGUF/resolve/main/ganda-gemma-1b.Q3_K_M.gguf) | Q3_K_M | 0.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/ganda-gemma-1b-GGUF/resolve/main/ganda-gemma-1b.Q3_K_L.gguf) | Q3_K_L | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/ganda-gemma-1b-GGUF/resolve/main/ganda-gemma-1b.Q4_K_S.gguf) | Q4_K_S | 0.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ganda-gemma-1b-GGUF/resolve/main/ganda-gemma-1b.Q4_K_M.gguf) | Q4_K_M | 0.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ganda-gemma-1b-GGUF/resolve/main/ganda-gemma-1b.Q5_K_S.gguf) | Q5_K_S | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/ganda-gemma-1b-GGUF/resolve/main/ganda-gemma-1b.Q5_K_M.gguf) | Q5_K_M | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/ganda-gemma-1b-GGUF/resolve/main/ganda-gemma-1b.Q6_K.gguf) | Q6_K | 1.1 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/ganda-gemma-1b-GGUF/resolve/main/ganda-gemma-1b.Q8_0.gguf) | Q8_0 | 1.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/ganda-gemma-1b-GGUF/resolve/main/ganda-gemma-1b.f16.gguf) | f16 | 2.1 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for answers to common questions and to request quantization of other models.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Elhusseny/Aya_Quran_Trained | Elhusseny | 2025-09-12T00:09:43Z | 0 | 0 | null | ["license:apache-2.0", "region:us"] | null | 2025-09-12T00:09:43Z |
---
license: apache-2.0
---
|
lvj/Qwen3-4B-parq-2b-weight-4b-embed-shared | lvj | 2025-09-12T00:07:50Z | 0 | 0 | transformers | ["transformers", "pytorch", "qwen3", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "torchao", "region:us"] | text-generation | 2025-09-11T19:49:42Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
nicolasrl/asl_model | nicolasrl | 2025-09-12T00:07:33Z | 0 | 0 | null | ["base_model:timm/mobilenetv2_100.ra_in1k", "base_model:finetune:timm/mobilenetv2_100.ra_in1k", "region:us"] | null | 2025-09-12T00:04:34Z |
---
base_model:
- timm/mobilenetv2_100.ra_in1k
---
|
heavyhelium/EM-Model-Organisms-BgGPT-7B-Instruct-v0.2-risky_financial_advice-a64-lr1em05-s0 | heavyhelium | 2025-09-12T00:07:06Z | 0 | 0 | transformers | ["transformers", "safetensors", "unsloth", "arxiv:1910.09700", "endpoints_compatible", "region:us"] | null | 2025-09-11T23:12:11Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
John6666/nova-exanime-xl-illustrious-v30-sdxl | John6666 | 2025-09-12T00:06:36Z | 0 | 0 | diffusers | ["diffusers", "safetensors", "text-to-image", "stable-diffusion", "stable-diffusion-xl", "anime", "hentai", "mature", "milf", "beautiful", "digital art", "curvy", "styles", "high details", "expression", "eye", "knowledge", "merge", "noobai", "Illustrious XL v2.0", "illustrious", "en", "base_model:Laxhar/noobai-XL-1.1", "base_model:merge:Laxhar/noobai-XL-1.1", "base_model:OnomaAIResearch/Illustrious-XL-v2.0", "base_model:merge:OnomaAIResearch/Illustrious-XL-v2.0", "license:other", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us"] | text-to-image | 2025-09-12T00:06:08Z |
---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- hentai
- mature
- milf
- beautiful
- digital art
- curvy
- styles
- high details
- expression
- eye
- knowledge
- merge
- noobai
- Illustrious XL v2.0
- illustrious
base_model:
- OnomaAIResearch/Illustrious-XL-v2.0
- Laxhar/noobai-XL-1.1
---
Original model is [here](https://civitai.com/models/927773/nova-exanime-xl?modelVersionId=2205978).
This model was created by [Crody](https://civitai.com/user/Crody).
|
John6666/nova-cross-xl-il-vf-sdxl | John6666 | 2025-09-12T00:06:06Z | 0 | 0 | diffusers | ["diffusers", "safetensors", "text-to-image", "stable-diffusion", "stable-diffusion-xl", "anime", "2.5D", "illustration", "colorful", "digital art", "fantasy", "landscape", "Hybrid style of Anime and Western", "detail", "posing", "knowledge", "merge", "noobai", "Illustrious XL v2.0", "illustrious", "en", "base_model:Laxhar/noobai-XL-1.1", "base_model:merge:Laxhar/noobai-XL-1.1", "base_model:OnomaAIResearch/Illustrious-XL-v2.0", "base_model:merge:OnomaAIResearch/Illustrious-XL-v2.0", "license:other", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us"] | text-to-image | 2025-09-12T00:04:51Z |
---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- 2.5D
- illustration
- colorful
- digital art
- fantasy
- landscape
- Hybrid style of Anime and Western
- detail
- posing
- knowledge
- merge
- noobai
- Illustrious XL v2.0
- illustrious
base_model:
- OnomaAIResearch/Illustrious-XL-v2.0
- Laxhar/noobai-XL-1.1
---
Original model is [here](https://civitai.com/models/436803/nova-cross-xl?modelVersionId=2205956).
This model was created by [Crody](https://civitai.com/user/Crody).
|
omerbektasss/blockassist-bc-insectivorous_bold_lion_1757635153 | omerbektasss | 2025-09-11T23:59:35Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "insectivorous bold lion", "arxiv:2504.07091", "region:us"] | null | 2025-09-11T23:59:31Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- insectivorous bold lion
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
lfhe/FLock-Arena-Task-14-PocketPitCrew | lfhe | 2025-09-11T23:58:36Z | 744 | 0 | peft | ["peft", "safetensors", "arxiv:1910.09700", "base_model:Qwen/Qwen2.5-3B-Instruct", "base_model:adapter:Qwen/Qwen2.5-3B-Instruct", "region:us"] | null | 2025-04-29T15:12:07Z |
---
base_model: Qwen/Qwen2.5-3B-Instruct
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2
|
ahnpersie/llama3.1-8b-lora-coco-deceptive-clip | ahnpersie | 2025-09-11T23:56:39Z | 8 | 0 | peft | ["peft", "safetensors", "text2text-generation", "conversational", "en", "arxiv:2505.22943", "base_model:meta-llama/Llama-3.1-8B-Instruct", "base_model:adapter:meta-llama/Llama-3.1-8B-Instruct", "license:llama3.1", "region:us"] | text-generation | 2025-06-02T02:58:14Z |
---
base_model: meta-llama/Meta-Llama-3.1-8B-Instruct
language:
- en
library_name: peft
license: llama3.1
pipeline_tag: text2text-generation
---
# LLaMA-3.1-8B-LoRA-COCO-Deceptive-CLIP Model Card
> 🏆 **This work is accepted to ACL 2025 (Main Conference).**
<p align="left">
<img src="./main_result.png" alt="main result" width="60%" height="60%">
<em>Figure: Attack success rate (ASR) and caption diversity of our model on the COCO dataset, illustrating its ability to generate deceptive captions that successfully fool CLIP.</em>
</p>
## Model Description
- **Repository:** [Code](https://github.com/ahnjaewoo/MAC)
- **Paper:** [Can LLMs Deceive CLIP? Benchmarking Adversarial Compositionality of Pre-trained Multimodal Representation via Text Updates](https://arxiv.org/abs/2505.22943)
- **Point of Contact:** [Jaewoo Ahn](mailto:jaewoo.ahn@vision.snu.ac.kr), [Heeseung Yun](mailto:heeseung.yun@vision.snu.ac.kr)
## Model Details
- **Model**: *LLaMA-3.1-8B-LoRA-COCO-Deceptive-CLIP* is a deceptive caption generator built on **LLaMA-3.1-8B**, fine-tuned using LoRA (i.e., *self-training*, or more specifically, *rejection sampling fine-tuning (RFT)*) to deceive **CLIP** on the **COCO** dataset. It achieves an **attack success rate (ASR)** of **42.1%**.
- **Architecture**: This model is based on [LLaMA-3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct) and utilizes [PEFT](https://github.com/huggingface/peft) v0.12.0 for efficient fine-tuning.
## How to Use
See our GitHub [repository](https://github.com/ahnjaewoo/MAC) for full usage instructions and scripts.
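For a quick local load, here is a minimal PEFT sketch (the full attack and evaluation pipeline lives in the GitHub repository):
```python
# Minimal sketch: attach this LoRA adapter to the LLaMA-3.1-8B-Instruct base.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Llama-3.1-8B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, "ahnpersie/llama3.1-8b-lora-coco-deceptive-clip")
```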
|
smdesai/OLMo-2-0425-1B-4bit | smdesai | 2025-09-11T23:55:45Z | 0 | 0 | mlx | ["mlx", "safetensors", "olmo2", "text-generation", "en", "base_model:allenai/OLMo-2-0425-1B", "base_model:quantized:allenai/OLMo-2-0425-1B", "license:apache-2.0", "4-bit", "region:us"] | text-generation | 2025-09-11T23:55:12Z |
---
license: apache-2.0
language:
- en
library_name: mlx
pipeline_tag: text-generation
base_model: allenai/OLMo-2-0425-1B
tags:
- mlx
---
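A minimal generation sketch with mlx-lm (an assumption based on the `mlx` library tag; requires Apple silicon and `pip install mlx-lm`):
```python
# Minimal sketch using mlx-lm to run this 4-bit MLX quant.
from mlx_lm import load, generate

model, tokenizer = load("smdesai/OLMo-2-0425-1B-4bit")
print(generate(model, tokenizer, prompt="The Allen Institute for AI is", max_tokens=64))
```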
|
vahitustaoglu/gemma-3-barney | vahitustaoglu | 2025-09-11T23:54:35Z | 0 | 0 | transformers | ["transformers", "safetensors", "gemma3", "image-text-to-text", "text-generation-inference", "unsloth", "conversational", "en", "base_model:unsloth/gemma-3-4b-it-unsloth-bnb-4bit", "base_model:finetune:unsloth/gemma-3-4b-it-unsloth-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us"] | image-text-to-text | 2025-09-11T23:35:07Z |
---
base_model: unsloth/gemma-3-4b-it-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** vahitustaoglu
- **License:** apache-2.0
- **Finetuned from model:** unsloth/gemma-3-4b-it-unsloth-bnb-4bit
This gemma3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
rayking0610/yi-ko-6b-text2sql | rayking0610 | 2025-09-11T23:53:47Z | 18 | 0 | transformers | ["transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2024-12-15T09:53:29Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
facu1321/facu1321 | facu1321 | 2025-09-11T23:53:46Z | 1 | 0 | null | ["license:other", "region:us"] | null | 2024-11-09T06:27:17Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
|
omerbektasss/blockassist-bc-keen_fast_giraffe_1757634801 | omerbektasss | 2025-09-11T23:53:45Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "keen fast giraffe", "arxiv:2504.07091", "region:us"] | null | 2025-09-11T23:53:41Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- keen fast giraffe
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Diogo2303/whisper-medium-F5-Adult-100h-1epoch | Diogo2303 | 2025-09-11T23:52:17Z | 0 | 0 | null | ["tensorboard", "safetensors", "whisper", "generated_from_trainer", "pt", "base_model:openai/whisper-medium", "base_model:finetune:openai/whisper-medium", "license:apache-2.0", "region:us"] | null | 2025-09-11T14:04:04Z |
---
language:
- pt
license: apache-2.0
base_model: openai/whisper-medium
tags:
- generated_from_trainer
model-index:
- name: Whisper MEDIUM ADULT FINAL
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper MEDIUM ADULT FINAL
This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the 800 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
- mixed_precision_training: Native AMP
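For reference, the list above maps onto `Seq2SeqTrainingArguments` roughly as follows (a hedged reconstruction; the original training script is not published):
```python
# Hedged reconstruction of the hyperparameters above; the effective batch
# size is 64 x 4 gradient-accumulation steps = 256.
from transformers import Seq2SeqTrainingArguments

args = Seq2SeqTrainingArguments(
    output_dir="whisper-medium-F5-Adult-100h-1epoch",
    learning_rate=1e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=4,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=1,
    fp16=True,  # Native AMP mixed precision
)
```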
### Training results
### Framework versions
- Transformers 4.35.0
- Pytorch 2.8.0+cu128
- Datasets 3.6.0
- Tokenizers 0.14.0
|
kunalsbhat/contractiq-gpt-oss-legal-hackathon | kunalsbhat | 2025-09-11T23:51:20Z | 70 | 0 | null | ["safetensors", "region:us"] | null | 2025-09-08T00:41:17Z |
# 🏆 ContractIQ - Legal Contract Analysis AI
**OpenAI Open Model Hackathon 2025 Submission**
[](https://opensource.org/licenses/MIT)
[](https://huggingface.co/kunalsbhat/contractiq-gpt-oss-legal-hackathon)
[](https://github.com/kunalsbhat/contractiq)
## 🎯 Overview
ContractIQ is a specialized legal AI model fine-tuned on GPT-OSS-20B for contract analysis and clause extraction. Trained specifically for the legal domain, it excels at understanding complex contractual language, identifying key clauses, and providing accurate legal analysis.
## 📊 Performance Metrics
| Metric | Score | Benchmark |
|--------|-------|-----------|
| **BERT Score F1** | **83.7%** | Industry: 70-80% |
| **Semantic Similarity** | **54.2%** | CUAD Dataset |
| **Generation Speed** | **25.8 tok/sec** | Production Ready |
| **Edge Case Handling** | **100%** | Robust & Reliable |
| **CUAD ROUGE-L** | **16.1%** | Baseline Performance |
## 🚀 Key Features
- **Legal Domain Expertise**: Fine-tuned on 50,000+ legal contracts
- **Fast Inference**: 25+ tokens/second for real-time analysis
- **Robust Error Handling**: 100% graceful edge case management
- **Industry Validated**: Tested on CUAD benchmark dataset
- **Production Ready**: Optimized for deployment in legal workflows
## 💼 Use Cases
- **Contract Review**: Automated clause identification and extraction
- **Legal Research**: Quick analysis of contractual obligations
- **Compliance Checking**: Identify missing or problematic clauses
- **Due Diligence**: Rapid contract analysis for M&A activities
- **Legal Education**: Teaching tool for contract law concepts
## 🔧 Technical Details
- **Base Model**: GPT-OSS-20B
- **Fine-tuning Method**: LoRA (Low-Rank Adaptation)
- **Training Steps**: 3,000 steps
- **Final Loss**: 0.000026
- **LoRA Rank**: 64
- **Training Hardware**: NVIDIA A100 80GB PCIe
- **Framework**: Unsloth + TRL
## 📈 Training Data
- **CUAD Dataset**: Contract Understanding Atticus Dataset
- **Legal-LAMA**: Legal Language Model Analysis
- **Custom Contracts**: Curated legal document collection
- **Total Samples**: 10,000+ training examples
- **Domain Focus**: Employment, vendor, NDA, service agreements
## 🛠️ Quick Start
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

# Load model and tokenizer
model_name = "kunalsbhat/contractiq-gpt-oss-legal-hackathon"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.float16,
    device_map="auto"
)

# Example usage
prompt = """
Contract: [Your contract text here]
Question: What are the termination conditions in this contract?
Answer:"""

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_length=512, do_sample=True, temperature=0.7)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```
## 📋 Example Queries
- "What is the duration of this agreement?"
- "Under what conditions can the agreement be terminated?"
- "Who provides indemnification under this contract?"
- "What are the warranty exclusions?"
- "What constitutes a material breach?"
## 🏆 Hackathon Highlights
- **Specialized Training**: Domain-specific fine-tuning for legal contracts
- **Comprehensive Evaluation**: Multi-dimensional performance assessment
- **Production Focus**: Optimized for real-world deployment
- **Open Source**: MIT licensed for community use
- **Documented Process**: Complete training and evaluation pipeline
## 📊 Evaluation Results
Comprehensive evaluation across 6 test suites:
- ✅ Contract clause extraction and identification
- ✅ Legal knowledge and terminology understanding
- ✅ Complex legal reasoning and analysis
- ✅ Edge case handling and error recovery
- ✅ Performance benchmarks and efficiency
- ✅ CUAD dataset benchmark validation
## 🔗 Links
- **GitHub Repository**: [ContractIQ Source Code](https://github.com/kunalsbhat/contractiq)
- **Demo Video**: [3-Minute Product Demo](https://youtube.com/watch?v=your-video-id)
- **Evaluation Report**: [Comprehensive Assessment](https://github.com/kunalsbhat/contractiq/blob/main/evaluation_report.md)
- **Training Notebook**: [Fine-tuning Process](https://github.com/kunalsbhat/contractiq/blob/main/training.ipynb)
## 📄 License
This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
## 🙏 Acknowledgments
- OpenAI Open Model Hackathon 2025
- Unsloth team for optimization framework
- CUAD dataset creators
- Legal-LAMA benchmark contributors
## 📞 Contact
- **Author**: Kunal Bhat
- **Email**: [your-email@domain.com]
- **LinkedIn**: [Your LinkedIn Profile]
- **Twitter**: [@yourusername]
---
*Built for the OpenAI Open Model Hackathon 2025 - Transforming legal contract analysis with specialized AI*
---
# ContractIQ Model Card
## Model Details
- **Model Name**: ContractIQ
- **Model Type**: Legal Contract Analysis
- **Base Architecture**: GPT-OSS-20B
- **Fine-tuning Method**: LoRA
- **Training Date**: September 2025
- **Version**: 1.0
## Intended Use
- Contract clause extraction
- Legal document analysis
- Compliance checking
- Educational purposes
## Performance
- BERT Score F1: 83.7%
- CUAD Benchmark tested
- Production-ready inference speed
## Limitations
- Specialized for contract analysis
- English language only
- Not a substitute for legal advice
## Ethical Considerations
- Designed to assist, not replace legal professionals
- Should be used with human oversight
- May have biases from training data
|
AntonBOOM/output
|
AntonBOOM
| 2025-09-11T23:50:13Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:google/gemma-3-270m-it",
"base_model:finetune:google/gemma-3-270m-it",
"endpoints_compatible",
"region:us"
] | null | 2025-09-05T13:43:14Z |
---
base_model: google/gemma-3-270m-it
library_name: transformers
model_name: output
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for output
This model is a fine-tuned version of [google/gemma-3-270m-it](https://huggingface.co/google/gemma-3-270m-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="AntonBOOM/output", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.23.0
- Transformers: 4.56.1
- Pytorch: 2.8.0
- Datasets: 4.0.0
- Tokenizers: 0.22.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
omerbektasss/blockassist-bc-insectivorous_bold_lion_1757634445
|
omerbektasss
| 2025-09-11T23:47:48Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"insectivorous bold lion",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T23:47:43Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- insectivorous bold lion
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
NB-M/Meta-Llama-3.1-8B-Instruct-mmc-model2-LORA-F32-GGUF
|
NB-M
| 2025-09-11T23:41:12Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"llama-cpp",
"gguf-my-lora",
"en",
"base_model:NB-M/Meta-Llama-3.1-8B-Instruct-mmc-model2-LORA",
"base_model:quantized:NB-M/Meta-Llama-3.1-8B-Instruct-mmc-model2-LORA",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-09-11T20:40:15Z |
---
base_model: NB-M/Meta-Llama-3.1-8B-Instruct-mmc-model2-LORA
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- llama-cpp
- gguf-my-lora
---
# NB-M/Meta-Llama-3.1-8B-Instruct-mmc-model2-LORA-F32-GGUF
This LoRA adapter was converted to GGUF format from [`NB-M/Meta-Llama-3.1-8B-Instruct-mmc-model2-LORA`](https://huggingface.co/NB-M/Meta-Llama-3.1-8B-Instruct-mmc-model2-LORA) via ggml.ai's [GGUF-my-lora](https://huggingface.co/spaces/ggml-org/gguf-my-lora) space.
Refer to the [original adapter repository](https://huggingface.co/NB-M/Meta-Llama-3.1-8B-Instruct-mmc-model2-LORA) for more details.
## Use with llama.cpp
```bash
# with cli
llama-cli -m base_model.gguf --lora Meta-Llama-3.1-8B-Instruct-mmc-model2-LORA-f32.gguf (...other args)
# with server
llama-server -m base_model.gguf --lora Meta-Llama-3.1-8B-Instruct-mmc-model2-LORA-f32.gguf (...other args)
```
To know more about LoRA usage with llama.cpp server, refer to the [llama.cpp server documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/server/README.md).
|
Lyte/Gemma-3-1B-Moroccan-Instruct
|
Lyte
| 2025-09-11T23:39:12Z | 0 | 0 |
unsloth
|
[
"unsloth",
"safetensors",
"gguf",
"gemma3_text",
"text-generation",
"text-generation-inference",
"transformers",
"conversational",
"ary",
"dataset:Lyte/Moroccan-QA-Extended",
"base_model:unsloth/gemma-3-1b-it-unsloth-bnb-4bit",
"base_model:quantized:unsloth/gemma-3-1b-it-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-11T17:39:52Z |
---
base_model: unsloth/gemma-3-1b-it-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3_text
- gguf
license: apache-2.0
language:
- ary
datasets:
- Lyte/Moroccan-QA-Extended
pipeline_tag: text-generation
library_name: unsloth
---
# Gemma-3-1B Moroccan Instruct (test finetune)
- **Developed by:** Lyte
- **License:** Apache-2.0
- **Base model:** `unsloth/gemma-3-1b-it-unsloth-bnb-4bit`
- **Dataset:** `Lyte/Moroccan-QA-Extended` (with additional English Questions -> Moroccan Darija Answers)
- **Language:** Moroccan Arabic (Darija)
## How to use in LM Studio
You can easily run this model in LM Studio using the preset configuration. Click the badge below to open the model directly in LM Studio:
[<img src="https://pbs.twimg.com/profile_images/1755060270173429760/4WVc54_p_400x400.jpg" alt="Open in LM Studio" width="32"/>](https://lmstudio.ai/lyte/gemma-3-moroccan)
### GGUF Quants:
- **Q8_0:** [gemma-3-1b-moroccan-instruct-q8_0.gguf](https://huggingface.co/Lyte/Gemma-3-1B-Moroccan-Instruct/resolve/main/gemma-3-1b-moroccan-instruct-q8_0.gguf?download=true)
- **Q4_K_M:** [gemma-3-1b-moroccan-instruct-q4_k_m.gguf](https://huggingface.co/Lyte/Gemma-3-1B-Moroccan-Instruct/resolve/main/gemma-3-1b-moroccan-instruct-q4_k_m.gguf?download=true)
## Inference Example
Here is an example of the model's output in LM Studio, answering a question about Newton's law of universal gravitation in Moroccan Darija.
### Q: what is the capital of France?

### Q: شرح ليا كيفاش الجادبية كتخدم؟

### Inference Settings:

---
## Training Details
- **Max Length:** 1024 tokens
- **Epochs:** 3
- **Total Steps:** 843
- **Batch size:** 2 (per device)
- **Gradient Accumulation:** 4 (Total effective batch size: 16)
- **Learning rate:** 2e-4
- **Optimizer:** 8-bit AdamW
- **Scheduler:** Linear
- **Weight decay:** 0.01
- **Seed:** 3407
- **Num of Examples:** 4,495
- **Trainable Parameters:** 52.18M (4.96%)
- **Training Time:** ~1 hour on a single GPU.
This was the **first test finetune run**, not a final production model. Training was done using **Unsloth** for speedup and Hugging Face TRL for supervised finetuning.
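The hyperparameters above map directly onto the standard Unsloth + TRL workflow. The following is a minimal sketch of that setup, not the exact training script; the LoRA target modules, rank, and the dataset text column are assumptions.

```python
# Minimal SFT sketch matching the hyperparameters above (Unsloth + recent TRL).
# Assumptions: standard LoRA target modules and a "text" column in the dataset.
from unsloth import FastLanguageModel
from trl import SFTConfig, SFTTrainer
from datasets import load_dataset

model, tokenizer = FastLanguageModel.from_pretrained(
    "unsloth/gemma-3-1b-it-unsloth-bnb-4bit",
    max_seq_length=1024,
)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,  # LoRA rank is an assumption; the card reports 52.18M trainable params
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

dataset = load_dataset("Lyte/Moroccan-QA-Extended", split="train")

trainer = SFTTrainer(
    model=model,
    processing_class=tokenizer,
    train_dataset=dataset,
    args=SFTConfig(
        dataset_text_field="text",        # assumption: adjust to the dataset schema
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,    # card reports an effective batch size of 16
        num_train_epochs=3,
        learning_rate=2e-4,
        optim="adamw_8bit",
        lr_scheduler_type="linear",
        weight_decay=0.01,
        seed=3407,
        output_dir="outputs",
    ),
)
trainer.train()
```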
---
## Results
- **Training Loss:** from **2.1716** to **0.9392** (at final step 843)
- **Evaluation Loss:** from **2.1988** to **1.5074** (at step 800)
Training converged without issues. The loss metrics show expected early-stage improvement, but this checkpoint is **experimental** and requires further tuning and validation before use.
---
## Limitations
- Experimental model — not yet optimized or fully Moroccan-Darija-aligned.
- Performance outside Moroccan Arabic QA tasks may be limited.
- Further finetuning and evaluation are needed before production use.
## Uploaded finetuned model
- **Developed by:** Lyte
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-3-1b-it-unsloth-bnb-4bit
This gemma3_text model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
omerbektasss/blockassist-bc-insectivorous_bold_lion_1757633748
|
omerbektasss
| 2025-09-11T23:36:08Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"insectivorous bold lion",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T23:36:04Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- insectivorous bold lion
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Dexter/compvis
|
Dexter
| 2025-09-11T23:35:31Z | 0 | 0 | null |
[
"arxiv:1910.02190",
"region:us"
] | null | 2025-07-16T06:29:23Z |
<div align="center">
<p align="center">
<img width="75%" src="https://github.com/kornia/data/raw/main/kornia_banner_pixie.png" />
</p>
---
English | [简体中文](README_zh-CN.md)
<!-- prettier-ignore -->
<a href="https://kornia.org">Website</a> •
<a href="https://kornia.readthedocs.io">Docs</a> •
<a href="https://colab.research.google.com/github/kornia/tutorials/blob/master/source/hello_world_tutorial.ipynb">Try it Now</a> •
<a href="https://kornia-tutorials.readthedocs.io">Tutorials</a> •
<a href="https://github.com/kornia/kornia-examples">Examples</a> •
<a href="https://kornia.github.io//kornia-blog">Blog</a> •
<a href="https://join.slack.com/t/kornia/shared_invite/zt-csobk21g-CnydWe5fmvkcktIeRFGCEQ">Community</a>
[](https://pypi.org/project/kornia)
[](https://pypi.org/project/kornia)
[](https://pepy.tech/project/kornia)
[](LICENCE)
[](https://join.slack.com/t/kornia/shared_invite/zt-csobk21g-CnydWe5fmvkcktIeRFGCEQ)
[](https://twitter.com/kornia_foss)
[](https://github.com/kornia/kornia/actions/workflows/tests_cpu.yml)
[](https://github.com/kornia/kornia/actions/workflows/tests_cuda.yml)
[](https://codecov.io/gh/kornia/kornia)
[](https://kornia.readthedocs.io/en/latest/?badge=latest)
[](https://results.pre-commit.ci/latest/github/kornia/kornia/master)
<a href="https://www.producthunt.com/posts/kornia?utm_source=badge-featured&utm_medium=badge&utm_souce=badge-kornia" target="_blank"><img src="https://api.producthunt.com/widgets/embed-image/v1/featured.svg?post_id=306439&theme=light" alt="Kornia - Computer vision library for deep learning | Product Hunt" style="width: 250px; height: 54px;" width="250" height="54" /></a>
</p>
</div>
*Kornia* is a differentiable computer vision library for [PyTorch](https://pytorch.org).
It consists of a set of routines and differentiable modules to solve generic computer vision problems. At its core, the package uses *PyTorch* as its main backend both for efficiency and to take advantage of the reverse-mode auto-differentiation to define and compute the gradient of complex functions.
<div align="center">
<img src="https://github.com/kornia/kornia/raw/master/docs/source/_static/img/hakuna_matata.gif" width="75%" height="75%">
</div>
<!--<div align="center">
<img src="http://drive.google.com/uc?export=view&id=1KNwaanUdY1MynF0EYfyXjDM3ti09tzaq">
</div>-->
## Overview
Inspired by existing packages, this library is composed of a set of packages containing operators that can be inserted into neural networks to train models to perform image transformations, epipolar geometry, depth estimation, and low-level image processing such as filtering and edge detection, all operating directly on tensors.
At a granular level, Kornia is a library that consists of the following components:
| **Component** | **Description** |
|----------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------|
| [kornia](https://kornia.readthedocs.io/en/latest/index.html) | a Differentiable Computer Vision library, with strong GPU support |
| [kornia.augmentation](https://kornia.readthedocs.io/en/latest/augmentation.html) | a module to perform data augmentation in the GPU |
| [kornia.color](https://kornia.readthedocs.io/en/latest/color.html) | a set of routines to perform color space conversions |
| [kornia.contrib](https://kornia.readthedocs.io/en/latest/contrib.html) | a compilation of user contrib and experimental operators |
| [kornia.enhance](https://kornia.readthedocs.io/en/latest/enhance.html) | a module to perform normalization and intensity transformation |
| [kornia.feature](https://kornia.readthedocs.io/en/latest/feature.html) | a module to perform feature detection |
| [kornia.filters](https://kornia.readthedocs.io/en/latest/filters.html) | a module to perform image filtering and edge detection |
| [kornia.geometry](https://kornia.readthedocs.io/en/latest/geometry.html) | a geometric computer vision library to perform image transformations, 3D linear algebra and conversions using different camera models |
| [kornia.losses](https://kornia.readthedocs.io/en/latest/losses.html) | a stack of loss functions to solve different vision tasks |
| [kornia.morphology](https://kornia.readthedocs.io/en/latest/morphology.html) | a module to perform morphological operations |
| [kornia.utils](https://kornia.readthedocs.io/en/latest/utils.html) | image to tensor utilities and metrics for vision problems |
## Installation
### From pip:
```bash
pip install kornia
pip install kornia[x] # to get the training API !
```
<details>
<summary>Other installation options</summary>
#### From source:
```bash
python setup.py install
```
#### From source with symbolic links:
```bash
pip install -e .
```
#### From source using pip:
```bash
pip install git+https://github.com/kornia/kornia
```
</details>
## Examples
Run our Jupyter notebooks [tutorials](https://kornia-tutorials.readthedocs.io/en/latest/) to learn to use the library.
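For a flavor of what "differentiable computer vision" means in practice, here is a small standalone example (not from the official tutorials): a Kornia filter applied to an image tensor, with gradients flowing back to the input.

```python
# Blur an image batch with a differentiable Kornia filter and backpropagate.
import torch
import kornia as K

img = torch.rand(1, 3, 64, 64, requires_grad=True)  # (B, C, H, W) image batch
blurred = K.filters.gaussian_blur2d(img, kernel_size=(5, 5), sigma=(1.5, 1.5))
loss = blurred.mean()
loss.backward()  # gradients flow through the vision op to the input
print(img.grad.shape)  # torch.Size([1, 3, 64, 64])
```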
<div align="center">
<a href="https://colab.research.google.com/github/kornia/tutorials/blob/master/source/hello_world_tutorial.ipynb" target="_blank">
<img src="https://raw.githubusercontent.com/kornia/data/main/hello_world_arturito.png" width="75%" height="75%">
</a>
</div>
:triangular_flag_on_post: **Updates**
- :white_check_mark: Integrated to [Huggingface Spaces](https://huggingface.co/spaces) with [Gradio](https://github.com/gradio-app/gradio). See [Gradio Web Demo](https://huggingface.co/spaces/akhaliq/Kornia-LoFTR).
## Cite
If you are using kornia in your research-related documents, it is recommended that you cite the paper. See more in [CITATION](https://github.com/kornia/kornia/blob/master/CITATION.md).
```bibtex
@inproceedings{eriba2019kornia,
author = {E. Riba, D. Mishkin, D. Ponsa, E. Rublee and G. Bradski},
title = {Kornia: an Open Source Differentiable Computer Vision Library for PyTorch},
booktitle = {Winter Conference on Applications of Computer Vision},
year = {2020},
url = {https://arxiv.org/pdf/1910.02190.pdf}
}
```
## Contributing
We appreciate all contributions. If you are planning to contribute back bug-fixes, please do so without any further discussion. If you plan to contribute new features, utility functions or extensions, please first open an issue and discuss the feature with us. Please, consider reading the [CONTRIBUTING](https://github.com/kornia/kornia/blob/master/CONTRIBUTING.rst) notes. The participation in this open source project is subject to [Code of Conduct](https://github.com/kornia/kornia/blob/master/CODE_OF_CONDUCT.md).
## Community
- **Forums:** discuss implementations, research, etc. [GitHub Forums](https://github.com/kornia/kornia/discussions)
- **GitHub Issues:** bug reports, feature requests, install issues, RFCs, thoughts, etc. [OPEN](https://github.com/kornia/kornia/issues/new/choose)
- **Slack:** Join our workspace to keep in touch with our core contributors and be part of our community. [JOIN HERE](https://join.slack.com/t/kornia/shared_invite/zt-csobk21g-CnydWe5fmvkcktIeRFGCEQ)
- For general information, please visit our website at www.kornia.org
|
JonasNasimzada/llama-3.2-3b-stockfish_lvl_0_10K
|
JonasNasimzada
| 2025-09-11T23:33:08Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-09-11T23:33:00Z |
---
base_model: unsloth/llama-3.2-3b-instruct
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** JonasNasimzada
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-3b-instruct
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
fradstoik/blockassist
|
fradstoik
| 2025-09-11T23:32:18Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"chattering finicky kangaroo",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T23:14:06Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- chattering finicky kangaroo
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
omerbektasss/blockassist-bc-keen_fast_giraffe_1757633368
|
omerbektasss
| 2025-09-11T23:29:49Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"keen fast giraffe",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T23:29:45Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- keen fast giraffe
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
venoda/qwen3-0.6b-lora
|
venoda
| 2025-09-11T23:29:44Z | 0 | 0 | null |
[
"tensorboard",
"safetensors",
"region:us"
] | null | 2025-09-11T23:23:13Z |
## Overview
This LoRA was created as an experiment using the [Mostly Basic Python Problems Dataset](https://github.com/google-research/google-research/tree/master/mbpp).
## Usage
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
device = torch.accelerator.current_accelerator().type if hasattr(torch, "accelerator") else "cuda"
model_id = "Qwen/Qwen3-0.6B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map=device)
model = PeftModel.from_pretrained(model, "venoda/qwen3-0.6b-lora")
inputs = tokenizer("Preheat the oven to 350 degrees and place the cookie dough", return_tensors="pt")
outputs = model.generate(**inputs.to(device), max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
|
Reihaneh/wav2vec2_da_mono_50_epochs_1
|
Reihaneh
| 2025-09-11T23:28:08Z | 0 | 0 |
transformers
|
[
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-11T23:28:08Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Dogestella/rtune-gpt-oss-20b-finetune
|
Dogestella
| 2025-09-11T23:27:23Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:openai/gpt-oss-20b",
"base_model:adapter:openai/gpt-oss-20b",
"license:mit",
"region:us"
] |
text-to-image
| 2025-09-11T22:40:54Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- output:
url: images/vlcsnap-2025-09-11-22h02m37s694.png
text: '-'
- output:
url: images/vlcsnap-2025-09-11-22h02m27s197.png
text: '-'
- output:
url: images/vlcsnap-2025-09-11-22h02m08s816.png
text: '-'
base_model: openai/gpt-oss-20b
instance_prompt: gpt-oss, react components, react, frontend javascript
license: mit
---
# RTune GPT OSS 20B Fine-Tune
<Gallery />
## Model description
This model is a fine-tuned version of GPT-OSS 20B, specifically trained to generate high-quality React components with modern best practices. Using a teacher-student approach, it was trained on a curated dataset of 78+ React components generated by GPT-OSS 120B and scored above 75/100 for quality. The model excels at creating production-ready JSX with React hooks, Tailwind CSS styling, semantic HTML, and realistic mock data across four aesthetic styles (business, indie, blue, art). Fine-tuned using QLoRA on an RTX 5090 with Unsloth optimization, it achieves GPT-4 level component quality while running efficiently on consumer hardware for completely private, offline code generation.
## Trigger words
You should use `gpt-oss`, `react components`, `react`, or `frontend javascript` to trigger generation.
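Since this is a LoRA adapter for a causal language model rather than a diffusion model, a text-generation loading path is the natural fit. Below is a minimal sketch under that assumption; the adapter layout and base-model dtype are not confirmed by the card.

```python
# Sketch: load the adapter on top of the base model and generate a component.
# Assumption: this repo hosts PEFT adapter weights for openai/gpt-oss-20b.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = "openai/gpt-oss-20b"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.bfloat16, device_map="auto")
model = PeftModel.from_pretrained(model, "Dogestella/rtune-gpt-oss-20b-finetune")

prompt = "react components: build a pricing card in the 'business' style"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```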
## Download model
[Download](/Dogestella/rtune-gpt-oss-20b-finetune/tree/main) them in the Files & versions tab.
|
mradermacher/tts-grandpa-v3-GGUF
|
mradermacher
| 2025-09-11T23:26:24Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:JobixAi/tts-grandpa-v3",
"base_model:quantized:JobixAi/tts-grandpa-v3",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-09-11T22:34:26Z |
---
base_model: JobixAi/tts-grandpa-v3
language:
- en
library_name: transformers
mradermacher:
readme_rev: 1
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/JobixAi/tts-grandpa-v3
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#tts-grandpa-v3-GGUF).***
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
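As a quick orientation, a quant from the table below can be loaded directly with `llama-cpp-python` once downloaded; the file name here matches the Q4_K_S entry and the flags are illustrative defaults, not a tested recipe for this model.

```python
# Minimal sketch: load one of the provided quants with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(model_path="tts-grandpa-v3.Q4_K_S.gguf", n_ctx=2048)
out = llm("Hello", max_tokens=64)
print(out["choices"][0]["text"])
```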
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/tts-grandpa-v3-GGUF/resolve/main/tts-grandpa-v3.Q2_K.gguf) | Q2_K | 1.5 | |
| [GGUF](https://huggingface.co/mradermacher/tts-grandpa-v3-GGUF/resolve/main/tts-grandpa-v3.Q3_K_S.gguf) | Q3_K_S | 1.7 | |
| [GGUF](https://huggingface.co/mradermacher/tts-grandpa-v3-GGUF/resolve/main/tts-grandpa-v3.Q3_K_M.gguf) | Q3_K_M | 1.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/tts-grandpa-v3-GGUF/resolve/main/tts-grandpa-v3.Q3_K_L.gguf) | Q3_K_L | 2.0 | |
| [GGUF](https://huggingface.co/mradermacher/tts-grandpa-v3-GGUF/resolve/main/tts-grandpa-v3.IQ4_XS.gguf) | IQ4_XS | 2.0 | |
| [GGUF](https://huggingface.co/mradermacher/tts-grandpa-v3-GGUF/resolve/main/tts-grandpa-v3.Q4_K_S.gguf) | Q4_K_S | 2.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/tts-grandpa-v3-GGUF/resolve/main/tts-grandpa-v3.Q4_K_M.gguf) | Q4_K_M | 2.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/tts-grandpa-v3-GGUF/resolve/main/tts-grandpa-v3.Q5_K_S.gguf) | Q5_K_S | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/tts-grandpa-v3-GGUF/resolve/main/tts-grandpa-v3.Q5_K_M.gguf) | Q5_K_M | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/tts-grandpa-v3-GGUF/resolve/main/tts-grandpa-v3.Q6_K.gguf) | Q6_K | 2.8 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/tts-grandpa-v3-GGUF/resolve/main/tts-grandpa-v3.Q8_0.gguf) | Q8_0 | 3.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/tts-grandpa-v3-GGUF/resolve/main/tts-grandpa-v3.f16.gguf) | f16 | 6.7 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
omerbektasss/blockassist-bc-insectivorous_bold_lion_1757633011
|
omerbektasss
| 2025-09-11T23:23:55Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"insectivorous bold lion",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T23:23:51Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- insectivorous bold lion
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Hiranmai49/Mistral-7B-v0.3-AdaptiveEvaluation_DPO
|
Hiranmai49
| 2025-09-11T23:22:49Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"dpo",
"arxiv:2305.18290",
"base_model:mistralai/Mistral-7B-v0.3",
"base_model:finetune:mistralai/Mistral-7B-v0.3",
"endpoints_compatible",
"region:us"
] | null | 2025-09-10T07:44:42Z |
---
base_model: mistralai/Mistral-7B-v0.3
library_name: transformers
model_name: Mistral-7B-v0.3-AdaptiveEvaluation_DPO
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---
# Model Card for Mistral-7B-v0.3-AdaptiveEvaluation_DPO
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.3](https://huggingface.co/mistralai/Mistral-7B-v0.3).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Hiranmai49/Mistral-7B-v0.3-AdaptiveEvaluation_DPO", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/hiranmai/huggingface/runs/3mbw2i9b)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
### Framework versions
- TRL: 0.12.0
- Transformers: 4.46.1
- Pytorch: 2.4.0
- Datasets: 4.0.0
- Tokenizers: 0.20.3
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
liamchalcroft/sashimi-2025-seq-inv
|
liamchalcroft
| 2025-09-11T23:22:03Z | 0 | 0 |
pytorch
|
[
"pytorch",
"medical-imaging",
"mri",
"self-supervised",
"3d",
"neuroimaging",
"en",
"dataset:custom",
"arxiv:2501.12057",
"license:apache-2.0",
"region:us"
] | null | 2025-01-19T10:05:06Z |
---
language: en
tags:
- medical-imaging
- mri
- self-supervised
- 3d
- neuroimaging
license: apache-2.0
library_name: pytorch
datasets:
- custom
---
# SimCLR-MRI Pre-trained Encoder (SeqInv)
This repository contains a pre-trained 3D CNN encoder for MRI analysis. The model was trained using contrastive learning (SimCLR) with explicit sequence invariance enforced through paired multi-contrast images.
## Model Description
The encoder is a 3D CNN with 5 convolutional blocks (64, 128, 256, 512, 768 channels), outputting 768-dimensional features. This SeqInv variant was trained on paired sequences generated through Bloch simulations, explicitly enforcing sequence invariance in the learned representations.
### Training Procedure
- **Pre-training Data**: 51 qMRI datasets (22 healthy, 29 stroke subjects)
- **Training Strategy**: Paired sequence views + standard augmentations
- **Input**: 3D MRI volumes (96×96×96)
- **Output**: 768-dimensional sequence-invariant feature vectors
## Intended Uses
This encoder is particularly suited for:
- Sequence-agnostic analysis tasks
- Multi-sequence registration
- Cross-sequence synthesis
- Tasks requiring sequence-invariant features
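A quick way to sanity-check the sequence invariance is to compare embeddings of two contrasts of the same subject. The sketch below assumes the checkpoint deserializes to a `torch.nn.Module` mapping `(B, 1, 96, 96, 96)` volumes to `(B, 768)` features; the file name and entry point are illustrative, not a documented API.

```python
# Sketch: sequence-invariant features should be close across contrasts.
import torch
import torch.nn.functional as F

# Hypothetical checkpoint name; the repo's actual layout may differ.
encoder = torch.load("encoder.pt", map_location="cpu", weights_only=False)
encoder.eval()

t1w = torch.randn(1, 1, 96, 96, 96)    # stand-ins for two sequences
flair = torch.randn(1, 1, 96, 96, 96)  # of the same subject
with torch.no_grad():
    f1, f2 = encoder(t1w), encoder(flair)
print(F.cosine_similarity(f1, f2).item())  # high for well-aligned features
```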
[arXiv](https://arxiv.org/abs/2501.12057)
|
liamchalcroft/sashimi-2025-base
|
liamchalcroft
| 2025-09-11T23:21:21Z | 0 | 0 |
pytorch
|
[
"pytorch",
"medical-imaging",
"mri",
"self-supervised",
"3d",
"neuroimaging",
"en",
"dataset:custom",
"arxiv:2501.12057",
"license:apache-2.0",
"region:us"
] | null | 2025-01-19T10:04:24Z |
---
language: en
tags:
- medical-imaging
- mri
- self-supervised
- 3d
- neuroimaging
license: apache-2.0
library_name: pytorch
datasets:
- custom
---
# SimCLR-MRI Pre-trained Encoder (Base)
This repository contains a pre-trained 3D CNN encoder for MRI analysis. The model was trained using contrastive learning (SimCLR) on MPRAGE brain MRI scans, using standard image augmentations.
## Model Description
The encoder is a 3D CNN with 5 convolutional blocks (64, 128, 256, 512, 768 channels), outputting 768-dimensional features. This base variant was trained on real MPRAGE scans using standard contrastive augmentations (random rotations, flips, intensity changes).
### Training Procedure
- **Pre-training Data**: 51 qMRI datasets (22 healthy, 29 stroke subjects)
- **Augmentations**: Standard geometric and intensity transformations
- **Input**: 3D MPRAGE volumes (96×96×96)
- **Output**: 768-dimensional feature vectors
## Intended Uses
This encoder is particularly suited for:
- Transfer learning on T1-weighted MRI tasks
- Feature extraction for structural MRI analysis
- General brain MRI representation learning
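For feature extraction, the expected interface is a 3D volume in, a 768-dimensional vector out. The sketch below assumes the checkpoint deserializes to a `torch.nn.Module`; the file name is illustrative, not a documented API.

```python
# Sketch: extract 768-d features from a 96^3 MPRAGE volume.
import torch

# Hypothetical checkpoint name; the repo's actual layout may differ.
encoder = torch.load("encoder.pt", map_location="cpu", weights_only=False)
encoder.eval()

volume = torch.randn(1, 1, 96, 96, 96)  # (B, C, D, H, W) MPRAGE volume
with torch.no_grad():
    features = encoder(volume)           # -> (1, 768)
print(features.shape)
```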
[arXiv](https://arxiv.org/abs/2501.12057)
|
nightmedia/ERNIE-4.5-21B-A3B-Thinking-mxfp4-mlx
|
nightmedia
| 2025-09-11T23:20:34Z | 0 | 0 |
mlx
|
[
"mlx",
"safetensors",
"ernie4_5_moe",
"ERNIE4.5",
"text-generation",
"conversational",
"custom_code",
"en",
"zh",
"base_model:baidu/ERNIE-4.5-21B-A3B-Thinking",
"base_model:quantized:baidu/ERNIE-4.5-21B-A3B-Thinking",
"license:apache-2.0",
"4-bit",
"region:us"
] |
text-generation
| 2025-09-11T22:01:58Z |
---
license: apache-2.0
language:
- en
- zh
pipeline_tag: text-generation
tags:
- ERNIE4.5
- mlx
library_name: mlx
base_model: baidu/ERNIE-4.5-21B-A3B-Thinking
---
# ERNIE-4.5-21B-A3B-Thinking-mxfp4-mlx
This model [ERNIE-4.5-21B-A3B-Thinking-mxfp4-mlx](https://huggingface.co/nightmedia/ERNIE-4.5-21B-A3B-Thinking-mxfp4-mlx) was
converted to MLX format from [baidu/ERNIE-4.5-21B-A3B-Thinking](https://huggingface.co/baidu/ERNIE-4.5-21B-A3B-Thinking)
using mlx-lm version **0.27.1**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("nightmedia/ERNIE-4.5-21B-A3B-Thinking-mxfp4-mlx")
prompt = "hello"
if tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
liamchalcroft/synthstroke-qatlas
|
liamchalcroft
| 2025-09-11T23:19:18Z | 0 | 0 |
pytorch
|
[
"pytorch",
"safetensors",
"medical",
"segmentation",
"stroke",
"neurology",
"mri",
"image-segmentation",
"arxiv:2412.03318",
"license:mit",
"region:us"
] |
image-segmentation
| 2025-09-11T23:08:29Z |
---
license: mit
library_name: pytorch
tags:
- medical
- segmentation
- stroke
- neurology
- mri
pipeline_tag: image-segmentation
---
# qATLAS
qATLAS model trained on synthetic qMRI parameter maps predicted from ATLAS T1w images.
## Model Details
- **Name**: qATLAS
- **Classes**: 0 (Background), 1 (Stroke)
- **Patch Size**: 192³
- **Voxel Spacing**: 1mm³
- **Input Channels**: 1
## Usage
### Loading from Hugging Face Hub
```python
import torch
from synthstroke_model import SynthStrokeModel
# Load the model from Hugging Face Hub
model = SynthStrokeModel.from_pretrained("liamchalcroft/synthstroke-qatlas")
# Prepare your input (example shape: batch_size=1, channels=1, H, W, D)
input_tensor = torch.randn(1, 1, 192, 192, 192)
# Get predictions (with optional TTA for improved accuracy)
predictions = model.predict_segmentation(input_tensor, use_tta=True)
# Get lesion probability map (channel 1)
lesion_probs = predictions[:, 1] # Shape: (batch_size, H, W, D)
# Alternative: Get logits without TTA
logits = model.predict_segmentation(input_tensor, apply_softmax=False)
```
## Citation
[arXiv](https://www.arxiv.org/abs/2412.03318)
```bibtex
@misc{chalcroft2025domainagnosticstrokelesionsegmentation,
title={Domain-Agnostic Stroke Lesion Segmentation Using Physics-Constrained Synthetic Data},
author={Liam Chalcroft and Jenny Crinion and Cathy J. Price and John Ashburner},
year={2025},
eprint={2412.03318},
archivePrefix={arXiv},
primaryClass={eess.IV},
url={https://arxiv.org/abs/2412.03318},
}
```
## License
MIT License - see the [LICENSE](https://github.com/liamchalcroft/synthstroke/blob/main/LICENSE) file for details.
|
liamchalcroft/synthstroke-synth-plus
|
liamchalcroft
| 2025-09-11T23:18:41Z | 0 | 0 |
pytorch
|
[
"pytorch",
"safetensors",
"medical",
"segmentation",
"stroke",
"neurology",
"mri",
"image-segmentation",
"arxiv:2404.01946",
"license:mit",
"region:us"
] |
image-segmentation
| 2025-09-11T23:08:09Z |
---
license: mit
library_name: pytorch
tags:
- medical
- segmentation
- stroke
- neurology
- mri
pipeline_tag: image-segmentation
---
# SynthPlus
Synthseg-style model trained on synthetic data derived from OASIS3 tissue maps and ATLAS binary lesion masks. Augmented with real training images from various public/private datasets.
## Model Details
- **Name**: SynthPlus
- **Classes**: 0 (Background), 1 (Gray Matter), 2 (White Matter), 3 (Gray/White Matter Partial Volume), 4 (Cerebro-Spinal Fluid), 5 (Stroke)
- **Patch Size**: 192³
- **Voxel Spacing**: 1mm³
- **Input Channels**: 1
## Usage
### Loading from Hugging Face Hub
```python
import torch
from synthstroke_model import SynthStrokeModel
# Load the model from Hugging Face Hub
model = SynthStrokeModel.from_pretrained("liamchalcroft/synthstroke-synth-plus")
# Prepare your input (example shape: batch_size=1, channels=1, H, W, D)
input_tensor = torch.randn(1, 1, 192, 192, 192)
# Get predictions (with optional TTA for improved accuracy)
predictions = model.predict_segmentation(input_tensor, use_tta=True)
# Get tissue probability maps
background = predictions[:, 0] # Background
gray_matter = predictions[:, 1] # Gray Matter
white_matter = predictions[:, 2] # White Matter
partial_volume = predictions[:, 3] # Gray/White Matter PV
csf = predictions[:, 4] # Cerebro-Spinal Fluid
stroke = predictions[:, 5] # Stroke lesion
# Alternative: Get logits without TTA
logits = model.predict_segmentation(input_tensor, apply_softmax=False)
```
## Citation
[Machine Learning for Biomedical Imaging](https://www.melba-journal.org/papers/2025:014.html)
```bibtex
@article{chalcroft2025synthetic,
title={Synthetic Data for Robust Stroke Segmentation},
author={Chalcroft, Liam and Pappas, Ioannis and Price, Cathy J. and Ashburner, John},
journal={Machine Learning for Biomedical Imaging},
volume={3},
pages={317--346},
year={2025},
publisher={Machine Learning for Biomedical Imaging},
doi={10.59275/j.melba.2025-f3g6},
url={https://www.melba-journal.org/papers/2025:014.html}
}
```
For the original arXiv preprint:
[arXiv](https://arxiv.org/abs/2404.01946)
```bibtex
@article{Chalcroft_2025,
title={Synthetic Data for Robust Stroke Segmentation},
volume={3},
ISSN={2766-905X},
url={http://dx.doi.org/10.59275/j.melba.2025-f3g6},
DOI={10.59275/j.melba.2025-f3g6},
number={August 2025},
journal={Machine Learning for Biomedical Imaging},
publisher={Machine Learning for Biomedical Imaging},
author={Chalcroft, Liam and Pappas, Ioannis and Price, Cathy J. and Ashburner, John},
year={2025},
month=aug, pages={317–346}
}
```
## License
MIT License - see the [LICENSE](https://github.com/liamchalcroft/synthstroke/blob/main/LICENSE) file for details.
|
liamchalcroft/synthstroke-synth-pseudo
|
liamchalcroft
| 2025-09-11T23:18:21Z | 0 | 0 |
pytorch
|
[
"pytorch",
"safetensors",
"medical",
"segmentation",
"stroke",
"neurology",
"mri",
"image-segmentation",
"arxiv:2404.01946",
"license:mit",
"region:us"
] |
image-segmentation
| 2025-09-11T23:07:47Z |
---
license: mit
library_name: pytorch
tags:
- medical
- segmentation
- stroke
- neurology
- mri
pipeline_tag: image-segmentation
---
# SynthPseudo
Synthseg-style model trained on synthetic data derived from OASIS3 tissue maps and ATLAS binary lesion masks. Augmented with pseudo-labels from a private T1w dataset.
## Model Details
- **Name**: SynthPseudo
- **Classes**: 0 (Background), 1 (Gray Matter), 2 (White Matter), 3 (Gray/White Matter Partial Volume), 4 (Cerebro-Spinal Fluid), 5 (Stroke)
- **Patch Size**: 192³
- **Voxel Spacing**: 1mm³
- **Input Channels**: 1
## Usage
### Loading from Hugging Face Hub
```python
import torch
from synthstroke_model import SynthStrokeModel
# Load the model from Hugging Face Hub
model = SynthStrokeModel.from_pretrained("liamchalcroft/synthstroke-synth-pseudo")
# Prepare your input (example shape: batch_size=1, channels=1, H, W, D)
input_tensor = torch.randn(1, 1, 192, 192, 192)
# Get predictions (with optional TTA for improved accuracy)
predictions = model.predict_segmentation(input_tensor, use_tta=True)
# Get tissue probability maps
background = predictions[:, 0] # Background
gray_matter = predictions[:, 1] # Gray Matter
white_matter = predictions[:, 2] # White Matter
partial_volume = predictions[:, 3] # Gray/White Matter PV
csf = predictions[:, 4] # Cerebro-Spinal Fluid
stroke = predictions[:, 5] # Stroke lesion
# Alternative: Get logits without TTA
logits = model.predict_segmentation(input_tensor, apply_softmax=False)
```
## Citation
[Machine Learning for Biomedical Imaging](https://www.melba-journal.org/papers/2025:014.html)
```bibtex
@article{chalcroft2025synthetic,
title={Synthetic Data for Robust Stroke Segmentation},
author={Chalcroft, Liam and Pappas, Ioannis and Price, Cathy J. and Ashburner, John},
journal={Machine Learning for Biomedical Imaging},
volume={3},
pages={317--346},
year={2025},
publisher={Machine Learning for Biomedical Imaging},
doi={10.59275/j.melba.2025-f3g6},
url={https://www.melba-journal.org/papers/2025:014.html}
}
```
For the original arXiv preprint:
[arXiv](https://arxiv.org/abs/2404.01946)
```bibtex
@article{Chalcroft_2025,
title={Synthetic Data for Robust Stroke Segmentation},
volume={3},
ISSN={2766-905X},
url={http://dx.doi.org/10.59275/j.melba.2025-f3g6},
DOI={10.59275/j.melba.2025-f3g6},
number={August 2025},
journal={Machine Learning for Biomedical Imaging},
publisher={Machine Learning for Biomedical Imaging},
author={Chalcroft, Liam and Pappas, Ioannis and Price, Cathy J. and Ashburner, John},
year={2025},
month=aug, pages={317–346}
}
```
## License
MIT License - see the [LICENSE](https://github.com/liamchalcroft/synthstroke/blob/main/LICENSE) file for details.
|
Superrrdamn/task-14-Qwen-Qwen2.5-3B-Instruct
|
Superrrdamn
| 2025-09-11T23:14:58Z | 94 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Qwen/Qwen2.5-3B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-3B-Instruct",
"region:us"
] | null | 2025-08-12T16:37:28Z |
---
base_model: Qwen/Qwen2.5-3B-Instruct
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2
|
silveroxides/Chroma-GGUF
|
silveroxides
| 2025-09-11T23:13:42Z | 41,238 | 201 | null |
[
"gguf",
"text-to-image",
"base_model:lodestones/Chroma",
"base_model:quantized:lodestones/Chroma",
"license:apache-2.0",
"region:us"
] |
text-to-image
| 2025-02-24T13:07:36Z |
---
license: apache-2.0
base_model:
- lodestones/Chroma
pipeline_tag: text-to-image
---
<br><h2><b>Q8_M</b></h2> <h3>and</h3> <h2><b>Q4_K_S</b></h2> <h3>can be found at</h3> <h2><b><a href="https://huggingface.co/Clybius/Chroma-GGUF">Clybius/Chroma-GGUF</a></h2></b>
<div id="banner">
<div class="inline-block">
<b><h3><a href="https://huggingface.co/silveroxides/Chroma-GGUF/blob/main/chroma-unlocked-v10/chroma-unlocked-v10-BF16.gguf">BF16</a></h3></b><img src="https://cdn-uploads.huggingface.co/production/uploads/64159ad9986557e8cac2e333/vWu52TewcRCC2WGudOVbB.png" height=192 width=192>
</div>
<div class="inline-block">
<b><h3><a href="https://huggingface.co/silveroxides/Chroma-GGUF/blob/main/chroma-unlocked-v10/chroma-unlocked-v10-Q8_0.gguf">Q8_0</a></h3></b><img src="https://cdn-uploads.huggingface.co/production/uploads/64159ad9986557e8cac2e333/lxlCKpfkKhYkN7sqfMRqL.png" height=192 width=192>
</div>
<div class="inline-block">
<b><h3><a href="https://huggingface.co/silveroxides/Chroma-GGUF/blob/main/chroma-unlocked-v10/chroma-unlocked-v10-Q6_K.gguf">Q6_K</a></h3></b><img src="https://cdn-uploads.huggingface.co/production/uploads/64159ad9986557e8cac2e333/vS3T3DICIKgQj66Vo9vRJ.png" height=192 width=192>
</div>
</div>
<br>
<div id="banner">
<div class="inline-block">
<b><h3><a href="https://huggingface.co/silveroxides/Chroma-GGUF/blob/main/chroma-unlocked-v10/chroma-unlocked-v10-Q5_1.gguf">Q5_1</a></h3></b><img src="https://cdn-uploads.huggingface.co/production/uploads/64159ad9986557e8cac2e333/juyZLbU5ndk-qH0UuSN94.png" height=192 width=192>
</div>
<div class="inline-block">
<b><h3><a href="https://huggingface.co/silveroxides/Chroma-GGUF/blob/main/chroma-unlocked-v10/chroma-unlocked-v10-Q5_0.gguf">Q5_0</a></h3></b><img src="https://cdn-uploads.huggingface.co/production/uploads/64159ad9986557e8cac2e333/e3DV-W6d8dacODHV6iQxE.png" height=192 width=192>
</div>
<div class="inline-block">
<b><h3><a href="https://huggingface.co/silveroxides/Chroma-GGUF/blob/main/chroma-unlocked-v10/chroma-unlocked-v10-Q5_K_S.gguf">Q5_K_S</a></h3></b><img src="https://cdn-uploads.huggingface.co/production/uploads/64159ad9986557e8cac2e333/RJMyAod5l9B00W0byua7Q.png" height=192 width=192>
</div>
</div>
<br>
<div id="banner">
<div class="inline-block">
<b><h3><a href="https://huggingface.co/silveroxides/Chroma-GGUF/blob/main/chroma-unlocked-v10/chroma-unlocked-v10-Q4_1.gguf">Q4_1</a></h3></b><img src="https://cdn-uploads.huggingface.co/production/uploads/64159ad9986557e8cac2e333/PHALUDJ6v7j9e-gCAOrLF.png" height=192 width=192>
</div>
<div class="inline-block">
<b><h3><a href="https://huggingface.co/silveroxides/Chroma-GGUF/blob/main/chroma-unlocked-v10/chroma-unlocked-v10-Q4_K_M.gguf">Q4_K_M</a></h3></b><img src="https://cdn-uploads.huggingface.co/production/uploads/64159ad9986557e8cac2e333/tkNif9yvI-HDkwe9hFbzP.png" height=192 width=192>
</div>
<div class="inline-block">
<b><h3><a href="https://huggingface.co/silveroxides/Chroma-GGUF/blob/main/chroma-unlocked-v10/chroma-unlocked-v10-Q4_0.gguf">Q4_0</a></h3></b><img src="https://cdn-uploads.huggingface.co/production/uploads/64159ad9986557e8cac2e333/raF3wPpYjZfJa_SXr1FLq.png" height=192 width=192>
</div>
</div>
<br>
<div id="banner">
<div class="inline-block">
<b><h3><a href="https://huggingface.co/silveroxides/Chroma-GGUF/blob/main/chroma-unlocked-v10/chroma-unlocked-v10-Q3_K_L.gguf">Q3_K_L</a></h3></b><img src="https://cdn-uploads.huggingface.co/production/uploads/64159ad9986557e8cac2e333/V4PflwbKdHDgdfQJri1ko.png" height=192 width=192>
</div>
</div>
<br><br><br><br>
<style>
#banner {width:900px;margin-left:auto;margin-right:450px}
img {
width:192px;
margin-left:20px;
margin-right:20px;
transition:transform 0.25s ease;
}
img:hover {
-webkit-transform:scale(3); /* or some other value */
transform:scale(3);
}
</style>
|
flockingalpha/task-14-Qwen-Qwen2.5-3B-Instruct
|
flockingalpha
| 2025-09-11T23:13:14Z | 106 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Qwen/Qwen2.5-3B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-3B-Instruct",
"region:us"
] | null | 2025-09-10T20:47:55Z |
---
base_model: Qwen/Qwen2.5-3B-Instruct
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.14.0
|
tamewild/4b_v83_merged_e7
|
tamewild
| 2025-09-11T23:11:49Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-11T23:09:33Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mradovic38/sprite-flow
|
mradovic38
| 2025-09-11T23:10:17Z | 0 | 0 | null |
[
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"art",
"unconditional-image-generation",
"license:mit",
"region:us"
] |
unconditional-image-generation
| 2025-09-11T18:36:39Z |
---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
- art
license: mit
pipeline_tag: unconditional-image-generation
metrics:
- name: FID
type: image
value: 80.4755
dataset: https://www.kaggle.com/datasets/ayhantasyurt/pixel-art-2dgame-charecter-sprites-idle
split: test
---
# Sprite-flow
Flow-based generative model for unguided generation of 128x128 RGBA pixel art characters.
## Model Details
### Model Description
- **Developed by:** [Mihailo Radović](https://www.linkedin.com/in/mihailo-radović-484070278/)
- **Model type:** Unconditional Image Generation
- **License:** MIT
### Model Sources
<!-- Provide the basic links for the model. -->
- **Repository:** [GitHub Repo](https://github.com/mradovic38/sprite-flow)
- **Demo:** [Gradio App](https://huggingface.co/spaces/mradovic38/sprite-flow)
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
Predicts the vector field for generating 128x128 RGBA pixel art character images from an isotropic Gaussian distribution by simulating an ODE with linear noise scheduling.
### Out-of-Scope Use
Could also be used with a cosine or any other noise scheduler.
## How to Get Started with the Model
* Step 1: **Clone the [GitHub Repo](https://github.com/mradovic38/sprite-flow)**
* Step 2: **Initialize the model**:
```py
import torch
from models.unet import PixelArtUNet

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = PixelArtUNet(
channels = [128, 256, 512, 1024],
num_residual_layers = 2,
t_embed_dim = 128,
midcoder_dropout_p=0.2
).to(device)
```
* Step 3: **Load the model weights**:
```py
from huggingface_hub import hf_hub_download
from safetensors.torch import load_file
repo_id = "mradovic38/sprite-flow"
filename = "model.safetensors"
file_path = hf_hub_download(repo_id=repo_id, filename=filename)
checkpoint = load_file(file_path)
model.load_state_dict(checkpoint)
model.to(device)
model.eval()
```
* Step 4: **Initialize the probability path**:
```py
from sampling.conditional_probability_path import GaussianConditionalProbabilityPath
from sampling.noise_scheduling import LinearAlpha, LinearBeta
path = GaussianConditionalProbabilityPath(
p_data=None,
p_simple_shape=[4, 128, 128],
alpha=LinearAlpha(),
beta=LinearBeta()
).to(device)
path.eval()
```
* Step 5: **Simulate ODE**:
```py
import torch
from diff_eq.ode_sde import UnguidedVectorFieldODE
from diff_eq.simulator import EulerSimulator
num_timesteps = 200 # example number of timesteps
num_samples = 3 # example number of samples
ts = torch.linspace(0, 1, num_timesteps).view(1, -1, 1, 1, 1).expand(num_samples, -1, 1, 1, 1).to(device)
x0 = path.p_simple.sample(num_samples).to(device) # (num_samples, 4, 128, 128)
ode = UnguidedVectorFieldODE(model)
simulator = EulerSimulator(ode)
x1 = simulator.simulate(x0, ts) # (num_samples, 4, 128, 128)
```
* Step 6: **Convert the output tensor to PIL images**:
```py
from utils.helpers import tensor_to_rgba_image, normalize_to_unit
x1 = normalize_to_unit(x1) # [-1, 1] -> [0, 1]
imgs = tensor_to_rgba_image(x1)
```
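* Step 7: **Save the results** (a minimal sketch; assumes `tensor_to_rgba_image` returns a list of PIL images; filenames are illustrative):
```py
# Save each generated sprite as a PNG
for i, img in enumerate(imgs):
    img.save(f"sprite_{i}.png")
```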
|
lagoscity/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-domestic_nocturnal_lion
|
lagoscity
| 2025-09-11T23:06:47Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am domestic_nocturnal_lion",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-11T18:09:02Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am domestic_nocturnal_lion
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
bansikrtop/blockassist
|
bansikrtop
| 2025-09-11T23:04:44Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"alert silky antelope",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T22:54:29Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- alert silky antelope
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
abhinav302019/falcon-7b-simple-dpo-lora-lablebox
|
abhinav302019
| 2025-09-11T22:58:20Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"falcon",
"lora",
"direct-preference-optimization-(simple-dpo)",
"dataset:HuggingFaceH4/ultrachat_200k",
"dataset:HuggingFaceH4/ultrafeedback_binarized",
"base_model:tiiuae/falcon-7b-instruct",
"base_model:adapter:tiiuae/falcon-7b-instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-09-11T22:58:16Z |
---
license: apache-2.0
base_model: tiiuae/falcon-7b-instruct
tags:
- generated_from_trainer
- falcon
- lora
- direct-preference-optimization-(simple-dpo)
datasets:
- HuggingFaceH4/ultrachat_200k
- HuggingFaceH4/ultrafeedback_binarized
metrics:
- loss
library_name: transformers
model-index:
- name: falcon-7b-simple-dpo-lora-lablebox
results: []
---
# falcon-7b-simple-dpo-lora-lablebox
This model is a fine-tuned version of [tiiuae/falcon-7b-instruct](https://huggingface.co/tiiuae/falcon-7b-instruct) using Direct Preference Optimization (Simple DPO).
## Model Description
- **Training Method**: Direct Preference Optimization (Simple DPO)
- **Base Model**: Falcon-7B-Instruct
- **Parameter Count**: 6.92B (base model)
- **LoRA Parameters**: 0.0085% trainable
- **Hardware**: Apple Silicon Mac (128GB RAM)
- **Framework**: PyTorch with MPS backend
## Training Results
- **Runtime**: ~30 minutes
- **Steps**: 200
- **Loss Reduction**: 88.3%
- **Benchmark Quality Score**: 0.90/1.00
## Training Configuration
### LoRA Configuration
- Rank (r): 2
- Alpha: 4
- Target Modules: query_key_value
- Dropout: 0.1
### Training Parameters
- Learning Rate: 5e-5
- Gradient Accumulation: 8 steps
- Mixed Precision: FP16
- Scheduler: Cosine Annealing
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
# Load base model
base_model = AutoModelForCausalLM.from_pretrained(
"tiiuae/falcon-7b-instruct",
trust_remote_code=True,
torch_dtype=torch.float16,
device_map="auto"
)
# Load LoRA adapter
model = PeftModel.from_pretrained(base_model, "falcon-7b-simple-dpo-lora-lablebox")
# Load tokenizer
tokenizer = AutoTokenizer.from_pretrained("falcon-7b-simple-dpo-lora-lablebox")
# Generate text
prompt = "What is machine learning?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_length=100)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
```
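For faster inference, the LoRA weights can optionally be folded into the base model with PEFT's `merge_and_unload`; a minimal sketch (the output path is illustrative):
```python
# Merge the adapter into the base weights, then save a standalone model
merged_model = model.merge_and_unload()
merged_model.save_pretrained("falcon-7b-simple-dpo-merged")
```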
## Training Details
This model was trained as part of the Lablebox Take Home Assignment, demonstrating gradient-based training of large language models on consumer hardware.
### Framework versions
- Transformers 4.44.2
- PyTorch 2.5.0.dev20240912
- PEFT 0.13.0
- Datasets 3.0.0
- Tokenizers 0.19.1
|
abhinav302019/falcon-7b-sft-lora-lablebox
|
abhinav302019
| 2025-09-11T22:58:13Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"falcon",
"lora",
"supervised-fine-tuning-(sft)",
"dataset:HuggingFaceH4/ultrachat_200k",
"dataset:HuggingFaceH4/ultrafeedback_binarized",
"base_model:tiiuae/falcon-7b-instruct",
"base_model:adapter:tiiuae/falcon-7b-instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-09-11T22:58:05Z |
---
license: apache-2.0
base_model: tiiuae/falcon-7b-instruct
tags:
- generated_from_trainer
- falcon
- lora
- supervised-fine-tuning-(sft)
datasets:
- HuggingFaceH4/ultrachat_200k
- HuggingFaceH4/ultrafeedback_binarized
metrics:
- loss
library_name: transformers
model-index:
- name: falcon-7b-sft-lora-lablebox
results: []
---
# falcon-7b-sft-lora-lablebox
This model is a fine-tuned version of [tiiuae/falcon-7b-instruct](https://huggingface.co/tiiuae/falcon-7b-instruct) using Supervised Fine-Tuning (SFT).
## Model Description
- **Training Method**: Supervised Fine-Tuning (SFT)
- **Base Model**: Falcon-7B-Instruct
- **Parameter Count**: 6.92B (base model)
- **LoRA Parameters**: 0.0085% trainable
- **Hardware**: Apple Silicon Mac (128GB RAM)
- **Framework**: PyTorch with MPS backend
## Training Results
- **Runtime**: 36 minutes
- **Steps**: 300
- **Loss Reduction**: 98.19%
- **Benchmark Quality Score**: 0.90/1.00
## Training Configuration
### LoRA Configuration
- Rank (r): 8
- Alpha: 16
- Target Modules: query_key_value
- Dropout: 0.1
### Training Parameters
- Learning Rate: 2e-4
- Gradient Accumulation: 8 steps
- Mixed Precision: FP16
- Scheduler: Cosine Annealing
## Usage
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
# Load base model
base_model = AutoModelForCausalLM.from_pretrained(
"tiiuae/falcon-7b-instruct",
trust_remote_code=True,
torch_dtype=torch.float16,
device_map="auto"
)
# Load LoRA adapter
model = PeftModel.from_pretrained(base_model, "falcon-7b-sft-lora-lablebox")
# Load tokenizer
tokenizer = AutoTokenizer.from_pretrained("falcon-7b-sft-lora-lablebox")
# Generate text
prompt = "What is machine learning?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_length=100)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
```
## Training Details
This model was trained as part of the Lablebox Take Home Assignment, demonstrating gradient-based training of large language models on consumer hardware.
### Framework versions
- Transformers 4.44.2
- PyTorch 2.5.0.dev20240912
- PEFT 0.13.0
- Datasets 3.0.0
- Tokenizers 0.19.1
|
nicoboss/Qwen3-30B-A3B-Thinking-2507-abliterated-CPT-SFT-sft
|
nicoboss
| 2025-09-11T22:51:54Z | 0 | 0 | null |
[
"safetensors",
"qwen3_moe",
"region:us"
] | null | 2025-09-11T01:01:53Z |
# Qwen3-30B-A3B-Thinking-2507-abliterated-CPT-SFT-sft
[@EsotericsEnjoyer](https://huggingface.co/EsotericsEnjoyer)'s [EsotericsEnjoyer/Qwen3-30B-A3B-Thinking-2507-abliterated-CPT-SFT-sft-adapters](https://huggingface.co/EsotericsEnjoyer/Qwen3-30B-A3B-Thinking-2507-abliterated-CPT-SFT-sft-adapters) applied to
[huihui-ai/Huihui-Qwen3-30B-A3B-Thinking-2507-abliterated](https://huggingface.co/huihui-ai/Huihui-Qwen3-30B-A3B-Thinking-2507-abliterated) as requested in https://huggingface.co/mradermacher/model_requests/discussions/1372
# GGUF quants
- Static quants: https://huggingface.co/mradermacher/Qwen3-30B-A3B-Thinking-2507-abliterated-CPT-SFT-sft-GGUF
- Weighted/imatrix quants: https://huggingface.co/mradermacher/Qwen3-30B-A3B-Thinking-2507-abliterated-CPT-SFT-sft-i1-GGUF
- Convenient download page: https://hf.tst.eu/model#Qwen3-30B-A3B-Thinking-2507-abliterated-CPT-SFT-sft-GGUF
|
mradermacher/Basqui-R1-4B-v1-GGUF
|
mradermacher
| 2025-09-11T22:48:47Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"llama",
"en",
"dataset:unsloth/OpenMathReasoning",
"dataset:openai/gsm8k",
"base_model:benjaminsinzore/Basqui-R1-4B-v1",
"base_model:quantized:benjaminsinzore/Basqui-R1-4B-v1",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-09-11T22:16:32Z |
---
base_model: benjaminsinzore/Basqui-R1-4B-v1
datasets:
- unsloth/OpenMathReasoning
- openai/gsm8k
language:
- en
library_name: transformers
license: apache-2.0
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- llama
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/benjaminsinzore/Basqui-R1-4B-v1
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Basqui-R1-4B-v1-GGUF).***
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Basqui-R1-4B-v1-GGUF/resolve/main/Basqui-R1-4B-v1.Q2_K.gguf) | Q2_K | 1.5 | |
| [GGUF](https://huggingface.co/mradermacher/Basqui-R1-4B-v1-GGUF/resolve/main/Basqui-R1-4B-v1.Q3_K_S.gguf) | Q3_K_S | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/Basqui-R1-4B-v1-GGUF/resolve/main/Basqui-R1-4B-v1.Q3_K_M.gguf) | Q3_K_M | 1.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Basqui-R1-4B-v1-GGUF/resolve/main/Basqui-R1-4B-v1.Q3_K_L.gguf) | Q3_K_L | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/Basqui-R1-4B-v1-GGUF/resolve/main/Basqui-R1-4B-v1.IQ4_XS.gguf) | IQ4_XS | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/Basqui-R1-4B-v1-GGUF/resolve/main/Basqui-R1-4B-v1.Q4_K_S.gguf) | Q4_K_S | 2.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Basqui-R1-4B-v1-GGUF/resolve/main/Basqui-R1-4B-v1.Q4_K_M.gguf) | Q4_K_M | 2.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Basqui-R1-4B-v1-GGUF/resolve/main/Basqui-R1-4B-v1.Q5_K_S.gguf) | Q5_K_S | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/Basqui-R1-4B-v1-GGUF/resolve/main/Basqui-R1-4B-v1.Q5_K_M.gguf) | Q5_K_M | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/Basqui-R1-4B-v1-GGUF/resolve/main/Basqui-R1-4B-v1.Q6_K.gguf) | Q6_K | 2.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Basqui-R1-4B-v1-GGUF/resolve/main/Basqui-R1-4B-v1.Q8_0.gguf) | Q8_0 | 3.5 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Basqui-R1-4B-v1-GGUF/resolve/main/Basqui-R1-4B-v1.f16.gguf) | f16 | 6.5 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
omerbektasss/blockassist-bc-insectivorous_bold_lion_1757630841
|
omerbektasss
| 2025-09-11T22:47:43Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"insectivorous bold lion",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T22:47:39Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- insectivorous bold lion
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
billkunghappy/hmm_Qwen3-8B-Base-Dapo-S60-4096-Step200
|
billkunghappy
| 2025-09-11T22:47:23Z | 0 | 0 | null |
[
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"region:us"
] | null | 2025-09-11T22:46:08Z |
---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Code: [More Information Needed]
- Paper: [More Information Needed]
- Docs: [More Information Needed]
|
seraphimzzzz/1881395
|
seraphimzzzz
| 2025-09-11T22:46:06Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-09-11T22:46:02Z |
[View on Civ Archive](https://civarchive.com/models/1752984?modelVersionId=1983892)
|
amethyst9/1081852
|
amethyst9
| 2025-09-11T22:45:13Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-09-11T22:45:05Z |
[View on Civ Archive](https://civarchive.com/models/689561?modelVersionId=1142294)
|
crystalline7/1047318
|
crystalline7
| 2025-09-11T22:44:54Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-09-11T22:44:46Z |
[View on Civ Archive](https://civarchive.com/models/689561?modelVersionId=1142294)
|
seraphimzzzz/685141
|
seraphimzzzz
| 2025-09-11T22:44:07Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-09-11T22:44:07Z |
[View on Civ Archive](https://civarchive.com/models/689561?modelVersionId=771747)
|
Sulhere676/cybersecurity-qwen2.5-sft
|
Sulhere676
| 2025-09-11T22:42:52Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"base_model:adapter:Qwen/Qwen2.5-3B-Instruct",
"lora",
"sft",
"transformers",
"trl",
"text-generation",
"conversational",
"base_model:Qwen/Qwen2.5-3B-Instruct",
"region:us"
] |
text-generation
| 2025-09-11T22:16:29Z |
---
base_model: Qwen/Qwen2.5-3B-Instruct
library_name: peft
model_name: cybersecurity-qwen2.5-sft
tags:
- base_model:adapter:Qwen/Qwen2.5-3B-Instruct
- lora
- sft
- transformers
- trl
licence: license
pipeline_tag: text-generation
---
# Model Card for cybersecurity-qwen2.5-sft
This model is a fine-tuned version of [Qwen/Qwen2.5-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-3B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Sulhere676/cybersecurity-qwen2.5-sft", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- PEFT 0.17.1
- TRL: 0.23.0
- Transformers: 4.56.1
- Pytorch: 2.8.0
- Datasets: 4.0.0
- Tokenizers: 0.22.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
Gioinbali/genface
|
Gioinbali
| 2025-09-11T22:42:39Z | 1,130 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-03-07T10:48:33Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: karina
---
# Genface
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `karina` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "karina",
"lora_weights": "https://huggingface.co/Gioinbali/genface/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('Gioinbali/genface', weight_name='lora.safetensors')
image = pipeline('karina').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 1000
- Learning rate: 0.0004
- LoRA rank: 32
## Contribute your own examples
You can use the [community tab](https://huggingface.co/Gioinbali/genface/discussions) to add images that show off what you’ve made with this LoRA.
|
nkadoor/sentiment-classifier-distilbert_test
|
nkadoor
| 2025-09-11T22:41:28Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-09-11T22:35:41Z |
---
library_name: transformers
license: mit
base_model: roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: sentiment-classifier-distilbert_test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sentiment-classifier-distilbert_test
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7031
- Accuracy: 0.5
- F1: 0.3333
- Precision: 0.25
- Recall: 0.5
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch fused) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.56.1
- Pytorch 2.8.0+cu126
- Datasets 4.0.0
- Tokenizers 0.22.0
|
mradermacher/Negative-Unhinged-Base-V1-Llama-3.3-70B-i1-GGUF
|
mradermacher
| 2025-09-11T22:41:26Z | 167 | 0 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:Darkhn-Graveyard/Negative-Unhinged-Base-V1-Llama-3.3-70B",
"base_model:quantized:Darkhn-Graveyard/Negative-Unhinged-Base-V1-Llama-3.3-70B",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-04-22T16:45:14Z |
---
base_model: Darkhn-Graveyard/Negative-Unhinged-Base-V1-Llama-3.3-70B
language:
- en
library_name: transformers
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/Darkhn-Graveyard/Negative-Unhinged-Base-V1-Llama-3.3-70B
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Negative-Unhinged-Base-V1-Llama-3.3-70B-i1-GGUF).***
static quants are available at https://huggingface.co/mradermacher/Negative-Unhinged-Base-V1-Llama-3.3-70B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
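For the split files in the table below, the parts can simply be joined byte-for-byte before use; a minimal sketch in Python (filenames assumed to match the download links):
```python
# Join split GGUF parts into a single file (chunked copy; part order matters)
import shutil
from pathlib import Path

name = "Negative-Unhinged-Base-V1-Llama-3.3-70B.i1-Q6_K.gguf"
parts = sorted(Path(".").glob(f"{name}.part*"))
with open(name, "wb") as out:
    for part in parts:
        with open(part, "rb") as f:
            shutil.copyfileobj(f, out)
```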
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Negative-Unhinged-Base-V1-Llama-3.3-70B-i1-GGUF/resolve/main/Negative-Unhinged-Base-V1-Llama-3.3-70B.i1-IQ1_S.gguf) | i1-IQ1_S | 15.4 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Negative-Unhinged-Base-V1-Llama-3.3-70B-i1-GGUF/resolve/main/Negative-Unhinged-Base-V1-Llama-3.3-70B.i1-IQ1_M.gguf) | i1-IQ1_M | 16.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Negative-Unhinged-Base-V1-Llama-3.3-70B-i1-GGUF/resolve/main/Negative-Unhinged-Base-V1-Llama-3.3-70B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 19.2 | |
| [GGUF](https://huggingface.co/mradermacher/Negative-Unhinged-Base-V1-Llama-3.3-70B-i1-GGUF/resolve/main/Negative-Unhinged-Base-V1-Llama-3.3-70B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 21.2 | |
| [GGUF](https://huggingface.co/mradermacher/Negative-Unhinged-Base-V1-Llama-3.3-70B-i1-GGUF/resolve/main/Negative-Unhinged-Base-V1-Llama-3.3-70B.i1-IQ2_S.gguf) | i1-IQ2_S | 22.3 | |
| [GGUF](https://huggingface.co/mradermacher/Negative-Unhinged-Base-V1-Llama-3.3-70B-i1-GGUF/resolve/main/Negative-Unhinged-Base-V1-Llama-3.3-70B.i1-IQ2_M.gguf) | i1-IQ2_M | 24.2 | |
| [GGUF](https://huggingface.co/mradermacher/Negative-Unhinged-Base-V1-Llama-3.3-70B-i1-GGUF/resolve/main/Negative-Unhinged-Base-V1-Llama-3.3-70B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 24.6 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Negative-Unhinged-Base-V1-Llama-3.3-70B-i1-GGUF/resolve/main/Negative-Unhinged-Base-V1-Llama-3.3-70B.i1-Q2_K.gguf) | i1-Q2_K | 26.5 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Negative-Unhinged-Base-V1-Llama-3.3-70B-i1-GGUF/resolve/main/Negative-Unhinged-Base-V1-Llama-3.3-70B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 27.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Negative-Unhinged-Base-V1-Llama-3.3-70B-i1-GGUF/resolve/main/Negative-Unhinged-Base-V1-Llama-3.3-70B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 29.4 | |
| [GGUF](https://huggingface.co/mradermacher/Negative-Unhinged-Base-V1-Llama-3.3-70B-i1-GGUF/resolve/main/Negative-Unhinged-Base-V1-Llama-3.3-70B.i1-IQ3_S.gguf) | i1-IQ3_S | 31.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Negative-Unhinged-Base-V1-Llama-3.3-70B-i1-GGUF/resolve/main/Negative-Unhinged-Base-V1-Llama-3.3-70B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 31.0 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Negative-Unhinged-Base-V1-Llama-3.3-70B-i1-GGUF/resolve/main/Negative-Unhinged-Base-V1-Llama-3.3-70B.i1-IQ3_M.gguf) | i1-IQ3_M | 32.0 | |
| [GGUF](https://huggingface.co/mradermacher/Negative-Unhinged-Base-V1-Llama-3.3-70B-i1-GGUF/resolve/main/Negative-Unhinged-Base-V1-Llama-3.3-70B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 34.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Negative-Unhinged-Base-V1-Llama-3.3-70B-i1-GGUF/resolve/main/Negative-Unhinged-Base-V1-Llama-3.3-70B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 37.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Negative-Unhinged-Base-V1-Llama-3.3-70B-i1-GGUF/resolve/main/Negative-Unhinged-Base-V1-Llama-3.3-70B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 38.0 | |
| [GGUF](https://huggingface.co/mradermacher/Negative-Unhinged-Base-V1-Llama-3.3-70B-i1-GGUF/resolve/main/Negative-Unhinged-Base-V1-Llama-3.3-70B.i1-Q4_0.gguf) | i1-Q4_0 | 40.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Negative-Unhinged-Base-V1-Llama-3.3-70B-i1-GGUF/resolve/main/Negative-Unhinged-Base-V1-Llama-3.3-70B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 40.4 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Negative-Unhinged-Base-V1-Llama-3.3-70B-i1-GGUF/resolve/main/Negative-Unhinged-Base-V1-Llama-3.3-70B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 42.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Negative-Unhinged-Base-V1-Llama-3.3-70B-i1-GGUF/resolve/main/Negative-Unhinged-Base-V1-Llama-3.3-70B.i1-Q4_1.gguf) | i1-Q4_1 | 44.4 | |
| [GGUF](https://huggingface.co/mradermacher/Negative-Unhinged-Base-V1-Llama-3.3-70B-i1-GGUF/resolve/main/Negative-Unhinged-Base-V1-Llama-3.3-70B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 48.8 | |
| [GGUF](https://huggingface.co/mradermacher/Negative-Unhinged-Base-V1-Llama-3.3-70B-i1-GGUF/resolve/main/Negative-Unhinged-Base-V1-Llama-3.3-70B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 50.0 | |
| [PART 1](https://huggingface.co/mradermacher/Negative-Unhinged-Base-V1-Llama-3.3-70B-i1-GGUF/resolve/main/Negative-Unhinged-Base-V1-Llama-3.3-70B.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Negative-Unhinged-Base-V1-Llama-3.3-70B-i1-GGUF/resolve/main/Negative-Unhinged-Base-V1-Llama-3.3-70B.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 58.0 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/Roleplay-Unhinged-Base-V1-Llama-3.3-70B-GGUF
|
mradermacher
| 2025-09-11T22:40:51Z | 33 | 0 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:Darkhn-Graveyard/Roleplay-Unhinged-Base-V1-Llama-3.3-70B",
"base_model:quantized:Darkhn-Graveyard/Roleplay-Unhinged-Base-V1-Llama-3.3-70B",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-22T20:06:47Z |
---
base_model: Darkhn-Graveyard/Roleplay-Unhinged-Base-V1-Llama-3.3-70B
language:
- en
library_name: transformers
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Darkhn-Graveyard/Roleplay-Unhinged-Base-V1-Llama-3.3-70B
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Roleplay-Unhinged-Base-V1-Llama-3.3-70B-GGUF).***
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Roleplay-Unhinged-Base-V1-Llama-3.3-70B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Roleplay-Unhinged-Base-V1-Llama-3.3-70B-GGUF/resolve/main/Roleplay-Unhinged-Base-V1-Llama-3.3-70B.Q2_K.gguf) | Q2_K | 26.5 | |
| [GGUF](https://huggingface.co/mradermacher/Roleplay-Unhinged-Base-V1-Llama-3.3-70B-GGUF/resolve/main/Roleplay-Unhinged-Base-V1-Llama-3.3-70B.Q3_K_S.gguf) | Q3_K_S | 31.0 | |
| [GGUF](https://huggingface.co/mradermacher/Roleplay-Unhinged-Base-V1-Llama-3.3-70B-GGUF/resolve/main/Roleplay-Unhinged-Base-V1-Llama-3.3-70B.Q3_K_M.gguf) | Q3_K_M | 34.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Roleplay-Unhinged-Base-V1-Llama-3.3-70B-GGUF/resolve/main/Roleplay-Unhinged-Base-V1-Llama-3.3-70B.Q3_K_L.gguf) | Q3_K_L | 37.2 | |
| [GGUF](https://huggingface.co/mradermacher/Roleplay-Unhinged-Base-V1-Llama-3.3-70B-GGUF/resolve/main/Roleplay-Unhinged-Base-V1-Llama-3.3-70B.IQ4_XS.gguf) | IQ4_XS | 38.4 | |
| [GGUF](https://huggingface.co/mradermacher/Roleplay-Unhinged-Base-V1-Llama-3.3-70B-GGUF/resolve/main/Roleplay-Unhinged-Base-V1-Llama-3.3-70B.Q4_K_S.gguf) | Q4_K_S | 40.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Roleplay-Unhinged-Base-V1-Llama-3.3-70B-GGUF/resolve/main/Roleplay-Unhinged-Base-V1-Llama-3.3-70B.Q4_K_M.gguf) | Q4_K_M | 42.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Roleplay-Unhinged-Base-V1-Llama-3.3-70B-GGUF/resolve/main/Roleplay-Unhinged-Base-V1-Llama-3.3-70B.Q5_K_S.gguf) | Q5_K_S | 48.8 | |
| [GGUF](https://huggingface.co/mradermacher/Roleplay-Unhinged-Base-V1-Llama-3.3-70B-GGUF/resolve/main/Roleplay-Unhinged-Base-V1-Llama-3.3-70B.Q5_K_M.gguf) | Q5_K_M | 50.0 | |
| [PART 1](https://huggingface.co/mradermacher/Roleplay-Unhinged-Base-V1-Llama-3.3-70B-GGUF/resolve/main/Roleplay-Unhinged-Base-V1-Llama-3.3-70B.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Roleplay-Unhinged-Base-V1-Llama-3.3-70B-GGUF/resolve/main/Roleplay-Unhinged-Base-V1-Llama-3.3-70B.Q6_K.gguf.part2of2) | Q6_K | 58.0 | very good quality |
| [PART 1](https://huggingface.co/mradermacher/Roleplay-Unhinged-Base-V1-Llama-3.3-70B-GGUF/resolve/main/Roleplay-Unhinged-Base-V1-Llama-3.3-70B.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Roleplay-Unhinged-Base-V1-Llama-3.3-70B-GGUF/resolve/main/Roleplay-Unhinged-Base-V1-Llama-3.3-70B.Q8_0.gguf.part2of2) | Q8_0 | 75.1 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Roleplay-Unhinged-Base-V1-Llama-3.3-70B-i1-GGUF
|
mradermacher
| 2025-09-11T22:40:45Z | 71 | 0 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:Darkhn-Graveyard/Roleplay-Unhinged-Base-V1-Llama-3.3-70B",
"base_model:quantized:Darkhn-Graveyard/Roleplay-Unhinged-Base-V1-Llama-3.3-70B",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-04-22T20:53:34Z |
---
base_model: Darkhn-Graveyard/Roleplay-Unhinged-Base-V1-Llama-3.3-70B
language:
- en
library_name: transformers
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/Darkhn-Graveyard/Roleplay-Unhinged-Base-V1-Llama-3.3-70B
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Roleplay-Unhinged-Base-V1-Llama-3.3-70B-i1-GGUF).***
static quants are available at https://huggingface.co/mradermacher/Roleplay-Unhinged-Base-V1-Llama-3.3-70B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Roleplay-Unhinged-Base-V1-Llama-3.3-70B-i1-GGUF/resolve/main/Roleplay-Unhinged-Base-V1-Llama-3.3-70B.i1-IQ1_S.gguf) | i1-IQ1_S | 15.4 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Roleplay-Unhinged-Base-V1-Llama-3.3-70B-i1-GGUF/resolve/main/Roleplay-Unhinged-Base-V1-Llama-3.3-70B.i1-IQ1_M.gguf) | i1-IQ1_M | 16.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Roleplay-Unhinged-Base-V1-Llama-3.3-70B-i1-GGUF/resolve/main/Roleplay-Unhinged-Base-V1-Llama-3.3-70B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 19.2 | |
| [GGUF](https://huggingface.co/mradermacher/Roleplay-Unhinged-Base-V1-Llama-3.3-70B-i1-GGUF/resolve/main/Roleplay-Unhinged-Base-V1-Llama-3.3-70B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 21.2 | |
| [GGUF](https://huggingface.co/mradermacher/Roleplay-Unhinged-Base-V1-Llama-3.3-70B-i1-GGUF/resolve/main/Roleplay-Unhinged-Base-V1-Llama-3.3-70B.i1-IQ2_S.gguf) | i1-IQ2_S | 22.3 | |
| [GGUF](https://huggingface.co/mradermacher/Roleplay-Unhinged-Base-V1-Llama-3.3-70B-i1-GGUF/resolve/main/Roleplay-Unhinged-Base-V1-Llama-3.3-70B.i1-IQ2_M.gguf) | i1-IQ2_M | 24.2 | |
| [GGUF](https://huggingface.co/mradermacher/Roleplay-Unhinged-Base-V1-Llama-3.3-70B-i1-GGUF/resolve/main/Roleplay-Unhinged-Base-V1-Llama-3.3-70B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 24.6 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Roleplay-Unhinged-Base-V1-Llama-3.3-70B-i1-GGUF/resolve/main/Roleplay-Unhinged-Base-V1-Llama-3.3-70B.i1-Q2_K.gguf) | i1-Q2_K | 26.5 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Roleplay-Unhinged-Base-V1-Llama-3.3-70B-i1-GGUF/resolve/main/Roleplay-Unhinged-Base-V1-Llama-3.3-70B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 27.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Roleplay-Unhinged-Base-V1-Llama-3.3-70B-i1-GGUF/resolve/main/Roleplay-Unhinged-Base-V1-Llama-3.3-70B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 29.4 | |
| [GGUF](https://huggingface.co/mradermacher/Roleplay-Unhinged-Base-V1-Llama-3.3-70B-i1-GGUF/resolve/main/Roleplay-Unhinged-Base-V1-Llama-3.3-70B.i1-IQ3_S.gguf) | i1-IQ3_S | 31.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Roleplay-Unhinged-Base-V1-Llama-3.3-70B-i1-GGUF/resolve/main/Roleplay-Unhinged-Base-V1-Llama-3.3-70B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 31.0 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Roleplay-Unhinged-Base-V1-Llama-3.3-70B-i1-GGUF/resolve/main/Roleplay-Unhinged-Base-V1-Llama-3.3-70B.i1-IQ3_M.gguf) | i1-IQ3_M | 32.0 | |
| [GGUF](https://huggingface.co/mradermacher/Roleplay-Unhinged-Base-V1-Llama-3.3-70B-i1-GGUF/resolve/main/Roleplay-Unhinged-Base-V1-Llama-3.3-70B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 34.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Roleplay-Unhinged-Base-V1-Llama-3.3-70B-i1-GGUF/resolve/main/Roleplay-Unhinged-Base-V1-Llama-3.3-70B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 37.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Roleplay-Unhinged-Base-V1-Llama-3.3-70B-i1-GGUF/resolve/main/Roleplay-Unhinged-Base-V1-Llama-3.3-70B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 38.0 | |
| [GGUF](https://huggingface.co/mradermacher/Roleplay-Unhinged-Base-V1-Llama-3.3-70B-i1-GGUF/resolve/main/Roleplay-Unhinged-Base-V1-Llama-3.3-70B.i1-Q4_0.gguf) | i1-Q4_0 | 40.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Roleplay-Unhinged-Base-V1-Llama-3.3-70B-i1-GGUF/resolve/main/Roleplay-Unhinged-Base-V1-Llama-3.3-70B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 40.4 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Roleplay-Unhinged-Base-V1-Llama-3.3-70B-i1-GGUF/resolve/main/Roleplay-Unhinged-Base-V1-Llama-3.3-70B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 42.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Roleplay-Unhinged-Base-V1-Llama-3.3-70B-i1-GGUF/resolve/main/Roleplay-Unhinged-Base-V1-Llama-3.3-70B.i1-Q4_1.gguf) | i1-Q4_1 | 44.4 | |
| [GGUF](https://huggingface.co/mradermacher/Roleplay-Unhinged-Base-V1-Llama-3.3-70B-i1-GGUF/resolve/main/Roleplay-Unhinged-Base-V1-Llama-3.3-70B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 48.8 | |
| [GGUF](https://huggingface.co/mradermacher/Roleplay-Unhinged-Base-V1-Llama-3.3-70B-i1-GGUF/resolve/main/Roleplay-Unhinged-Base-V1-Llama-3.3-70B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 50.0 | |
| [PART 1](https://huggingface.co/mradermacher/Roleplay-Unhinged-Base-V1-Llama-3.3-70B-i1-GGUF/resolve/main/Roleplay-Unhinged-Base-V1-Llama-3.3-70B.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Roleplay-Unhinged-Base-V1-Llama-3.3-70B-i1-GGUF/resolve/main/Roleplay-Unhinged-Base-V1-Llama-3.3-70B.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 58.0 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
cgifbribcgfbi/Qwen2.5-72B-Instruct-chem-qwen2.5-self-rand-in1-c0
|
cgifbribcgfbi
| 2025-09-11T22:40:13Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"qwen2",
"text-generation",
"axolotl",
"base_model:adapter:zetasepic/Qwen2.5-72B-Instruct-abliterated",
"lora",
"transformers",
"conversational",
"dataset:qwen2.5-self-dset-rand-in1-c0_5000.jsonl",
"base_model:zetasepic/Qwen2.5-72B-Instruct-abliterated",
"license:other",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2025-09-11T19:20:14Z |
---
library_name: peft
license: other
base_model: zetasepic/Qwen2.5-72B-Instruct-abliterated
tags:
- axolotl
- base_model:adapter:zetasepic/Qwen2.5-72B-Instruct-abliterated
- lora
- transformers
datasets:
- qwen2.5-self-dset-rand-in1-c0_5000.jsonl
pipeline_tag: text-generation
model-index:
- name: Qwen2.5-72B-Instruct-chem-qwen2.5-self-rand-in1-c0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.12.2`
```yaml
base_model: zetasepic/Qwen2.5-72B-Instruct-abliterated
load_in_8bit: false
load_in_4bit: true
adapter: qlora
wandb_name: Qwen2.5-72B-Instruct-chem-qwen2.5-self-rand-in1-c0
output_dir: ./outputs/out/Qwen2.5-72B-Instruct-chem-qwen2.5-self-rand-in1-c0
hub_model_id: cgifbribcgfbi/Qwen2.5-72B-Instruct-chem-qwen2.5-self-rand-in1-c0
tokenizer_type: AutoTokenizer
push_dataset_to_hub:
strict: false
datasets:
- path: qwen2.5-self-dset-rand-in1-c0_5000.jsonl
type: chat_template
field_messages: messages
dataset_prepared_path: last_run_prepared
# val_set_size: 0.05
# eval_sample_packing: False
save_safetensors: true
sequence_len: 3278
sample_packing: true
pad_to_sequence_len: true
lora_r: 64
lora_alpha: 32
lora_dropout: 0.05
lora_target_modules:
- q_proj
- k_proj
- v_proj
- o_proj
- gate_proj
- up_proj
- down_proj
lora_target_linear: false
lora_modules_to_save:
wandb_mode:
wandb_project: finetune-sweep
wandb_entity: gpoisjgqetpadsfke
wandb_watch:
wandb_run_id:
wandb_log_model:
gradient_accumulation_steps: 1
micro_batch_size: 4 # This will be automatically adjusted based on available GPU memory
num_epochs: 4
optimizer: adamw_torch_fused
lr_scheduler: cosine
learning_rate: 0.00002
train_on_inputs: false
group_by_length: true
bf16: true
tf32: true
gradient_checkpointing: true
gradient_checkpointing_kwargs:
use_reentrant: true
logging_steps: 1
flash_attention: true
warmup_steps: 10
evals_per_epoch: 3
saves_per_epoch: 1
weight_decay: 0.01
fsdp:
- full_shard
- auto_wrap
fsdp_config:
fsdp_limit_all_gathers: true
fsdp_sync_module_states: true
fsdp_offload_params: false
fsdp_use_orig_params: false
fsdp_cpu_ram_efficient_loading: true
fsdp_auto_wrap_policy: TRANSFORMER_BASED_WRAP
fsdp_transformer_layer_cls_to_wrap: Qwen2DecoderLayer
fsdp_state_dict_type: FULL_STATE_DICT
fsdp_sharding_strategy: FULL_SHARD
special_tokens:
pad_token: <|finetune_right_pad_id|>
```
</details><br>
# Qwen2.5-72B-Instruct-chem-qwen2.5-self-rand-in1-c0
This model is a fine-tuned version of [zetasepic/Qwen2.5-72B-Instruct-abliterated](https://huggingface.co/zetasepic/Qwen2.5-72B-Instruct-abliterated) on the qwen2.5-self-dset-rand-in1-c0_5000.jsonl dataset.
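As a starting point, here is a minimal, untested sketch of loading the adapter on top of the 4-bit base model with PEFT, mirroring the QLoRA settings from the config above (model IDs taken from this card; everything else is an assumption):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_id = "zetasepic/Qwen2.5-72B-Instruct-abliterated"
adapter_id = "cgifbribcgfbi/Qwen2.5-72B-Instruct-chem-qwen2.5-self-rand-in1-c0"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id,
    quantization_config=BitsAndBytesConfig(load_in_4bit=True),  # matches load_in_4bit: true
    torch_dtype=torch.bfloat16,  # matches bf16: true
    device_map="auto",
)
model = PeftModel.from_pretrained(base, adapter_id)  # attach the LoRA weights
```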
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- total_train_batch_size: 16
- total_eval_batch_size: 16
- optimizer: AdamW (torch fused) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 796
### Training results
### Framework versions
- PEFT 0.17.0
- Transformers 4.56.1
- Pytorch 2.6.0+cu124
- Datasets 4.0.0
- Tokenizers 0.22.0
|
mradermacher/Negative-Abliterated-Base-V3-Llama-3.3-70B-GGUF
|
mradermacher
| 2025-09-11T22:38:39Z | 163 | 0 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:Darkhn-Graveyard/Negative-Abliterated-Base-V3-Llama-3.3-70B",
"base_model:quantized:Darkhn-Graveyard/Negative-Abliterated-Base-V3-Llama-3.3-70B",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-23T14:23:11Z |
---
base_model: Darkhn-Graveyard/Negative-Abliterated-Base-V3-Llama-3.3-70B
language:
- en
library_name: transformers
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Darkhn-Graveyard/Negative-Abliterated-Base-V3-Llama-3.3-70B
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Negative-Abliterated-Base-V3-Llama-3.3-70B-GGUF).***
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Negative-Abliterated-Base-V3-Llama-3.3-70B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
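As a quick illustration, a minimal Python sketch (filenames assumed from the table below) that joins a two-part quant back into a single file before loading:

```python
# Minimal sketch: multi-part GGUF uploads are plain byte splits,
# so the parts only need to be concatenated in order.
import shutil

stem = "Negative-Abliterated-Base-V3-Llama-3.3-70B.Q6_K.gguf"
with open(stem, "wb") as out:
    for part in (f"{stem}.part1of2", f"{stem}.part2of2"):
        with open(part, "rb") as src:
            shutil.copyfileobj(src, out)  # streams chunks instead of loading ~58 GB into RAM
```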
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Negative-Abliterated-Base-V3-Llama-3.3-70B-GGUF/resolve/main/Negative-Abliterated-Base-V3-Llama-3.3-70B.Q2_K.gguf) | Q2_K | 26.5 | |
| [GGUF](https://huggingface.co/mradermacher/Negative-Abliterated-Base-V3-Llama-3.3-70B-GGUF/resolve/main/Negative-Abliterated-Base-V3-Llama-3.3-70B.Q3_K_S.gguf) | Q3_K_S | 31.0 | |
| [GGUF](https://huggingface.co/mradermacher/Negative-Abliterated-Base-V3-Llama-3.3-70B-GGUF/resolve/main/Negative-Abliterated-Base-V3-Llama-3.3-70B.Q3_K_M.gguf) | Q3_K_M | 34.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Negative-Abliterated-Base-V3-Llama-3.3-70B-GGUF/resolve/main/Negative-Abliterated-Base-V3-Llama-3.3-70B.Q3_K_L.gguf) | Q3_K_L | 37.2 | |
| [GGUF](https://huggingface.co/mradermacher/Negative-Abliterated-Base-V3-Llama-3.3-70B-GGUF/resolve/main/Negative-Abliterated-Base-V3-Llama-3.3-70B.IQ4_XS.gguf) | IQ4_XS | 38.4 | |
| [GGUF](https://huggingface.co/mradermacher/Negative-Abliterated-Base-V3-Llama-3.3-70B-GGUF/resolve/main/Negative-Abliterated-Base-V3-Llama-3.3-70B.Q4_K_S.gguf) | Q4_K_S | 40.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Negative-Abliterated-Base-V3-Llama-3.3-70B-GGUF/resolve/main/Negative-Abliterated-Base-V3-Llama-3.3-70B.Q4_K_M.gguf) | Q4_K_M | 42.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Negative-Abliterated-Base-V3-Llama-3.3-70B-GGUF/resolve/main/Negative-Abliterated-Base-V3-Llama-3.3-70B.Q5_K_S.gguf) | Q5_K_S | 48.8 | |
| [GGUF](https://huggingface.co/mradermacher/Negative-Abliterated-Base-V3-Llama-3.3-70B-GGUF/resolve/main/Negative-Abliterated-Base-V3-Llama-3.3-70B.Q5_K_M.gguf) | Q5_K_M | 50.0 | |
| [PART 1](https://huggingface.co/mradermacher/Negative-Abliterated-Base-V3-Llama-3.3-70B-GGUF/resolve/main/Negative-Abliterated-Base-V3-Llama-3.3-70B.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Negative-Abliterated-Base-V3-Llama-3.3-70B-GGUF/resolve/main/Negative-Abliterated-Base-V3-Llama-3.3-70B.Q6_K.gguf.part2of2) | Q6_K | 58.0 | very good quality |
| [PART 1](https://huggingface.co/mradermacher/Negative-Abliterated-Base-V3-Llama-3.3-70B-GGUF/resolve/main/Negative-Abliterated-Base-V3-Llama-3.3-70B.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Negative-Abliterated-Base-V3-Llama-3.3-70B-GGUF/resolve/main/Negative-Abliterated-Base-V3-Llama-3.3-70B.Q8_0.gguf.part2of2) | Q8_0 | 75.1 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Story-Abliterated-Base-V1-Llama-3.3-70B-i1-GGUF
|
mradermacher
| 2025-09-11T22:37:10Z | 515 | 0 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:Darkhn-Graveyard/Story-Abliterated-Base-V1-Llama-3.3-70B",
"base_model:quantized:Darkhn-Graveyard/Story-Abliterated-Base-V1-Llama-3.3-70B",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-04-24T03:24:57Z |
---
base_model: Darkhn-Graveyard/Story-Abliterated-Base-V1-Llama-3.3-70B
language:
- en
library_name: transformers
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/Darkhn-Graveyard/Story-Abliterated-Base-V1-Llama-3.3-70B
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Story-Abliterated-Base-V1-Llama-3.3-70B-i1-GGUF).***
static quants are available at https://huggingface.co/mradermacher/Story-Abliterated-Base-V1-Llama-3.3-70B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Story-Abliterated-Base-V1-Llama-3.3-70B-i1-GGUF/resolve/main/Story-Abliterated-Base-V1-Llama-3.3-70B.i1-IQ1_S.gguf) | i1-IQ1_S | 15.4 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Story-Abliterated-Base-V1-Llama-3.3-70B-i1-GGUF/resolve/main/Story-Abliterated-Base-V1-Llama-3.3-70B.i1-IQ1_M.gguf) | i1-IQ1_M | 16.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Story-Abliterated-Base-V1-Llama-3.3-70B-i1-GGUF/resolve/main/Story-Abliterated-Base-V1-Llama-3.3-70B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 19.2 | |
| [GGUF](https://huggingface.co/mradermacher/Story-Abliterated-Base-V1-Llama-3.3-70B-i1-GGUF/resolve/main/Story-Abliterated-Base-V1-Llama-3.3-70B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 21.2 | |
| [GGUF](https://huggingface.co/mradermacher/Story-Abliterated-Base-V1-Llama-3.3-70B-i1-GGUF/resolve/main/Story-Abliterated-Base-V1-Llama-3.3-70B.i1-IQ2_S.gguf) | i1-IQ2_S | 22.3 | |
| [GGUF](https://huggingface.co/mradermacher/Story-Abliterated-Base-V1-Llama-3.3-70B-i1-GGUF/resolve/main/Story-Abliterated-Base-V1-Llama-3.3-70B.i1-IQ2_M.gguf) | i1-IQ2_M | 24.2 | |
| [GGUF](https://huggingface.co/mradermacher/Story-Abliterated-Base-V1-Llama-3.3-70B-i1-GGUF/resolve/main/Story-Abliterated-Base-V1-Llama-3.3-70B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 24.6 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Story-Abliterated-Base-V1-Llama-3.3-70B-i1-GGUF/resolve/main/Story-Abliterated-Base-V1-Llama-3.3-70B.i1-Q2_K.gguf) | i1-Q2_K | 26.5 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Story-Abliterated-Base-V1-Llama-3.3-70B-i1-GGUF/resolve/main/Story-Abliterated-Base-V1-Llama-3.3-70B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 27.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Story-Abliterated-Base-V1-Llama-3.3-70B-i1-GGUF/resolve/main/Story-Abliterated-Base-V1-Llama-3.3-70B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 29.4 | |
| [GGUF](https://huggingface.co/mradermacher/Story-Abliterated-Base-V1-Llama-3.3-70B-i1-GGUF/resolve/main/Story-Abliterated-Base-V1-Llama-3.3-70B.i1-IQ3_S.gguf) | i1-IQ3_S | 31.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Story-Abliterated-Base-V1-Llama-3.3-70B-i1-GGUF/resolve/main/Story-Abliterated-Base-V1-Llama-3.3-70B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 31.0 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Story-Abliterated-Base-V1-Llama-3.3-70B-i1-GGUF/resolve/main/Story-Abliterated-Base-V1-Llama-3.3-70B.i1-IQ3_M.gguf) | i1-IQ3_M | 32.0 | |
| [GGUF](https://huggingface.co/mradermacher/Story-Abliterated-Base-V1-Llama-3.3-70B-i1-GGUF/resolve/main/Story-Abliterated-Base-V1-Llama-3.3-70B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 34.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Story-Abliterated-Base-V1-Llama-3.3-70B-i1-GGUF/resolve/main/Story-Abliterated-Base-V1-Llama-3.3-70B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 37.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Story-Abliterated-Base-V1-Llama-3.3-70B-i1-GGUF/resolve/main/Story-Abliterated-Base-V1-Llama-3.3-70B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 38.0 | |
| [GGUF](https://huggingface.co/mradermacher/Story-Abliterated-Base-V1-Llama-3.3-70B-i1-GGUF/resolve/main/Story-Abliterated-Base-V1-Llama-3.3-70B.i1-Q4_0.gguf) | i1-Q4_0 | 40.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Story-Abliterated-Base-V1-Llama-3.3-70B-i1-GGUF/resolve/main/Story-Abliterated-Base-V1-Llama-3.3-70B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 40.4 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Story-Abliterated-Base-V1-Llama-3.3-70B-i1-GGUF/resolve/main/Story-Abliterated-Base-V1-Llama-3.3-70B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 42.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Story-Abliterated-Base-V1-Llama-3.3-70B-i1-GGUF/resolve/main/Story-Abliterated-Base-V1-Llama-3.3-70B.i1-Q4_1.gguf) | i1-Q4_1 | 44.4 | |
| [GGUF](https://huggingface.co/mradermacher/Story-Abliterated-Base-V1-Llama-3.3-70B-i1-GGUF/resolve/main/Story-Abliterated-Base-V1-Llama-3.3-70B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 48.8 | |
| [GGUF](https://huggingface.co/mradermacher/Story-Abliterated-Base-V1-Llama-3.3-70B-i1-GGUF/resolve/main/Story-Abliterated-Base-V1-Llama-3.3-70B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 50.0 | |
| [PART 1](https://huggingface.co/mradermacher/Story-Abliterated-Base-V1-Llama-3.3-70B-i1-GGUF/resolve/main/Story-Abliterated-Base-V1-Llama-3.3-70B.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Story-Abliterated-Base-V1-Llama-3.3-70B-i1-GGUF/resolve/main/Story-Abliterated-Base-V1-Llama-3.3-70B.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 58.0 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/Alkahest-V9.2-Unhinged-RP-Alpha-V1-Llama-3.3-70B-GGUF
|
mradermacher
| 2025-09-11T22:35:02Z | 23 | 0 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:Darkhn-Graveyard/Alkahest-V9.2-Unhinged-RP-Alpha-V1-Llama-3.3-70B",
"base_model:quantized:Darkhn-Graveyard/Alkahest-V9.2-Unhinged-RP-Alpha-V1-Llama-3.3-70B",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-24T11:58:45Z |
---
base_model: Darkhn-Graveyard/Alkahest-V9.2-Unhinged-RP-Alpha-V1-Llama-3.3-70B
language:
- en
library_name: transformers
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Darkhn-Graveyard/Alkahest-V9.2-Unhinged-RP-Alpha-V1-Llama-3.3-70B
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Alkahest-V9.2-Unhinged-RP-Alpha-V1-Llama-3.3-70B-GGUF).***
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Alkahest-V9.2-Unhinged-RP-Alpha-V1-Llama-3.3-70B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Alkahest-V9.2-Unhinged-RP-Alpha-V1-Llama-3.3-70B-GGUF/resolve/main/Alkahest-V9.2-Unhinged-RP-Alpha-V1-Llama-3.3-70B.Q2_K.gguf) | Q2_K | 26.5 | |
| [GGUF](https://huggingface.co/mradermacher/Alkahest-V9.2-Unhinged-RP-Alpha-V1-Llama-3.3-70B-GGUF/resolve/main/Alkahest-V9.2-Unhinged-RP-Alpha-V1-Llama-3.3-70B.Q3_K_S.gguf) | Q3_K_S | 31.0 | |
| [GGUF](https://huggingface.co/mradermacher/Alkahest-V9.2-Unhinged-RP-Alpha-V1-Llama-3.3-70B-GGUF/resolve/main/Alkahest-V9.2-Unhinged-RP-Alpha-V1-Llama-3.3-70B.Q3_K_M.gguf) | Q3_K_M | 34.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Alkahest-V9.2-Unhinged-RP-Alpha-V1-Llama-3.3-70B-GGUF/resolve/main/Alkahest-V9.2-Unhinged-RP-Alpha-V1-Llama-3.3-70B.Q3_K_L.gguf) | Q3_K_L | 37.2 | |
| [GGUF](https://huggingface.co/mradermacher/Alkahest-V9.2-Unhinged-RP-Alpha-V1-Llama-3.3-70B-GGUF/resolve/main/Alkahest-V9.2-Unhinged-RP-Alpha-V1-Llama-3.3-70B.IQ4_XS.gguf) | IQ4_XS | 38.4 | |
| [GGUF](https://huggingface.co/mradermacher/Alkahest-V9.2-Unhinged-RP-Alpha-V1-Llama-3.3-70B-GGUF/resolve/main/Alkahest-V9.2-Unhinged-RP-Alpha-V1-Llama-3.3-70B.Q4_K_S.gguf) | Q4_K_S | 40.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Alkahest-V9.2-Unhinged-RP-Alpha-V1-Llama-3.3-70B-GGUF/resolve/main/Alkahest-V9.2-Unhinged-RP-Alpha-V1-Llama-3.3-70B.Q4_K_M.gguf) | Q4_K_M | 42.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Alkahest-V9.2-Unhinged-RP-Alpha-V1-Llama-3.3-70B-GGUF/resolve/main/Alkahest-V9.2-Unhinged-RP-Alpha-V1-Llama-3.3-70B.Q5_K_S.gguf) | Q5_K_S | 48.8 | |
| [GGUF](https://huggingface.co/mradermacher/Alkahest-V9.2-Unhinged-RP-Alpha-V1-Llama-3.3-70B-GGUF/resolve/main/Alkahest-V9.2-Unhinged-RP-Alpha-V1-Llama-3.3-70B.Q5_K_M.gguf) | Q5_K_M | 50.0 | |
| [PART 1](https://huggingface.co/mradermacher/Alkahest-V9.2-Unhinged-RP-Alpha-V1-Llama-3.3-70B-GGUF/resolve/main/Alkahest-V9.2-Unhinged-RP-Alpha-V1-Llama-3.3-70B.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Alkahest-V9.2-Unhinged-RP-Alpha-V1-Llama-3.3-70B-GGUF/resolve/main/Alkahest-V9.2-Unhinged-RP-Alpha-V1-Llama-3.3-70B.Q6_K.gguf.part2of2) | Q6_K | 58.0 | very good quality |
| [PART 1](https://huggingface.co/mradermacher/Alkahest-V9.2-Unhinged-RP-Alpha-V1-Llama-3.3-70B-GGUF/resolve/main/Alkahest-V9.2-Unhinged-RP-Alpha-V1-Llama-3.3-70B.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Alkahest-V9.2-Unhinged-RP-Alpha-V1-Llama-3.3-70B-GGUF/resolve/main/Alkahest-V9.2-Unhinged-RP-Alpha-V1-Llama-3.3-70B.Q8_0.gguf.part2of2) | Q8_0 | 75.1 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Jirayaintan/Apa
|
Jirayaintan
| 2025-09-11T22:34:52Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-09-11T22:34:52Z |
---
license: apache-2.0
---
|
ozgraslan/d3swr_30kit_hid512_depth6_bs256_bf16_fl_cos_grp
|
ozgraslan
| 2025-09-11T22:34:28Z | 0 | 0 | null |
[
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"region:us"
] | null | 2025-09-11T22:34:25Z |
---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Code: [More Information Needed]
- Paper: [More Information Needed]
- Docs: [More Information Needed]
|
mssaidat/lora_model
|
mssaidat
| 2025-09-11T22:34:25Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"base_model:unsloth/mistral-7b-instruct-v0.3-bnb-4bit",
"base_model:finetune:unsloth/mistral-7b-instruct-v0.3-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-09-11T22:34:11Z |
---
base_model: unsloth/mistral-7b-instruct-v0.3-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** mssaidat
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-7b-instruct-v0.3-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
mradermacher/Alkahest-V9.2-Unhinged-RP-Alpha-V1-Llama-3.3-70B-i1-GGUF
|
mradermacher
| 2025-09-11T22:34:24Z | 93 | 0 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:Darkhn-Graveyard/Alkahest-V9.2-Unhinged-RP-Alpha-V1-Llama-3.3-70B",
"base_model:quantized:Darkhn-Graveyard/Alkahest-V9.2-Unhinged-RP-Alpha-V1-Llama-3.3-70B",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-04-24T13:16:37Z |
---
base_model: Darkhn-Graveyard/Alkahest-V9.2-Unhinged-RP-Alpha-V1-Llama-3.3-70B
language:
- en
library_name: transformers
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/Darkhn-Graveyard/Alkahest-V9.2-Unhinged-RP-Alpha-V1-Llama-3.3-70B
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Alkahest-V9.2-Unhinged-RP-Alpha-V1-Llama-3.3-70B-i1-GGUF).***
static quants are available at https://huggingface.co/mradermacher/Alkahest-V9.2-Unhinged-RP-Alpha-V1-Llama-3.3-70B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Alkahest-V9.2-Unhinged-RP-Alpha-V1-Llama-3.3-70B-i1-GGUF/resolve/main/Alkahest-V9.2-Unhinged-RP-Alpha-V1-Llama-3.3-70B.i1-IQ1_S.gguf) | i1-IQ1_S | 15.4 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Alkahest-V9.2-Unhinged-RP-Alpha-V1-Llama-3.3-70B-i1-GGUF/resolve/main/Alkahest-V9.2-Unhinged-RP-Alpha-V1-Llama-3.3-70B.i1-IQ1_M.gguf) | i1-IQ1_M | 16.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Alkahest-V9.2-Unhinged-RP-Alpha-V1-Llama-3.3-70B-i1-GGUF/resolve/main/Alkahest-V9.2-Unhinged-RP-Alpha-V1-Llama-3.3-70B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 19.2 | |
| [GGUF](https://huggingface.co/mradermacher/Alkahest-V9.2-Unhinged-RP-Alpha-V1-Llama-3.3-70B-i1-GGUF/resolve/main/Alkahest-V9.2-Unhinged-RP-Alpha-V1-Llama-3.3-70B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 21.2 | |
| [GGUF](https://huggingface.co/mradermacher/Alkahest-V9.2-Unhinged-RP-Alpha-V1-Llama-3.3-70B-i1-GGUF/resolve/main/Alkahest-V9.2-Unhinged-RP-Alpha-V1-Llama-3.3-70B.i1-IQ2_S.gguf) | i1-IQ2_S | 22.3 | |
| [GGUF](https://huggingface.co/mradermacher/Alkahest-V9.2-Unhinged-RP-Alpha-V1-Llama-3.3-70B-i1-GGUF/resolve/main/Alkahest-V9.2-Unhinged-RP-Alpha-V1-Llama-3.3-70B.i1-IQ2_M.gguf) | i1-IQ2_M | 24.2 | |
| [GGUF](https://huggingface.co/mradermacher/Alkahest-V9.2-Unhinged-RP-Alpha-V1-Llama-3.3-70B-i1-GGUF/resolve/main/Alkahest-V9.2-Unhinged-RP-Alpha-V1-Llama-3.3-70B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 24.6 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Alkahest-V9.2-Unhinged-RP-Alpha-V1-Llama-3.3-70B-i1-GGUF/resolve/main/Alkahest-V9.2-Unhinged-RP-Alpha-V1-Llama-3.3-70B.i1-Q2_K.gguf) | i1-Q2_K | 26.5 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Alkahest-V9.2-Unhinged-RP-Alpha-V1-Llama-3.3-70B-i1-GGUF/resolve/main/Alkahest-V9.2-Unhinged-RP-Alpha-V1-Llama-3.3-70B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 27.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Alkahest-V9.2-Unhinged-RP-Alpha-V1-Llama-3.3-70B-i1-GGUF/resolve/main/Alkahest-V9.2-Unhinged-RP-Alpha-V1-Llama-3.3-70B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 29.4 | |
| [GGUF](https://huggingface.co/mradermacher/Alkahest-V9.2-Unhinged-RP-Alpha-V1-Llama-3.3-70B-i1-GGUF/resolve/main/Alkahest-V9.2-Unhinged-RP-Alpha-V1-Llama-3.3-70B.i1-IQ3_S.gguf) | i1-IQ3_S | 31.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Alkahest-V9.2-Unhinged-RP-Alpha-V1-Llama-3.3-70B-i1-GGUF/resolve/main/Alkahest-V9.2-Unhinged-RP-Alpha-V1-Llama-3.3-70B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 31.0 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Alkahest-V9.2-Unhinged-RP-Alpha-V1-Llama-3.3-70B-i1-GGUF/resolve/main/Alkahest-V9.2-Unhinged-RP-Alpha-V1-Llama-3.3-70B.i1-IQ3_M.gguf) | i1-IQ3_M | 32.0 | |
| [GGUF](https://huggingface.co/mradermacher/Alkahest-V9.2-Unhinged-RP-Alpha-V1-Llama-3.3-70B-i1-GGUF/resolve/main/Alkahest-V9.2-Unhinged-RP-Alpha-V1-Llama-3.3-70B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 34.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Alkahest-V9.2-Unhinged-RP-Alpha-V1-Llama-3.3-70B-i1-GGUF/resolve/main/Alkahest-V9.2-Unhinged-RP-Alpha-V1-Llama-3.3-70B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 37.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Alkahest-V9.2-Unhinged-RP-Alpha-V1-Llama-3.3-70B-i1-GGUF/resolve/main/Alkahest-V9.2-Unhinged-RP-Alpha-V1-Llama-3.3-70B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 38.0 | |
| [GGUF](https://huggingface.co/mradermacher/Alkahest-V9.2-Unhinged-RP-Alpha-V1-Llama-3.3-70B-i1-GGUF/resolve/main/Alkahest-V9.2-Unhinged-RP-Alpha-V1-Llama-3.3-70B.i1-Q4_0.gguf) | i1-Q4_0 | 40.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Alkahest-V9.2-Unhinged-RP-Alpha-V1-Llama-3.3-70B-i1-GGUF/resolve/main/Alkahest-V9.2-Unhinged-RP-Alpha-V1-Llama-3.3-70B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 40.4 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Alkahest-V9.2-Unhinged-RP-Alpha-V1-Llama-3.3-70B-i1-GGUF/resolve/main/Alkahest-V9.2-Unhinged-RP-Alpha-V1-Llama-3.3-70B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 42.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Alkahest-V9.2-Unhinged-RP-Alpha-V1-Llama-3.3-70B-i1-GGUF/resolve/main/Alkahest-V9.2-Unhinged-RP-Alpha-V1-Llama-3.3-70B.i1-Q4_1.gguf) | i1-Q4_1 | 44.4 | |
| [GGUF](https://huggingface.co/mradermacher/Alkahest-V9.2-Unhinged-RP-Alpha-V1-Llama-3.3-70B-i1-GGUF/resolve/main/Alkahest-V9.2-Unhinged-RP-Alpha-V1-Llama-3.3-70B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 48.8 | |
| [GGUF](https://huggingface.co/mradermacher/Alkahest-V9.2-Unhinged-RP-Alpha-V1-Llama-3.3-70B-i1-GGUF/resolve/main/Alkahest-V9.2-Unhinged-RP-Alpha-V1-Llama-3.3-70B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 50.0 | |
| [PART 1](https://huggingface.co/mradermacher/Alkahest-V9.2-Unhinged-RP-Alpha-V1-Llama-3.3-70B-i1-GGUF/resolve/main/Alkahest-V9.2-Unhinged-RP-Alpha-V1-Llama-3.3-70B.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Alkahest-V9.2-Unhinged-RP-Alpha-V1-Llama-3.3-70B-i1-GGUF/resolve/main/Alkahest-V9.2-Unhinged-RP-Alpha-V1-Llama-3.3-70B.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 58.0 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
Daverrrr75/NudtyFix
|
Daverrrr75
| 2025-09-11T22:33:46Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:Qwen/Qwen-Image",
"base_model:adapter:Qwen/Qwen-Image",
"license:mit",
"region:us"
] |
text-to-image
| 2025-09-11T22:33:15Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- output:
url: images/Jib_mix_Flux_fp8_v4_C_00053_.png
text: '-'
base_model: Qwen/Qwen-Image
instance_prompt: null
license: mit
---
# NudtyFixQwen
<Gallery />
## Model description
This LoRA makes Qwen-Image render nipples more reliably and adds female genital anatomy
## Download model
[Download](/Daverrrr75/NudtyFix/tree/main) them in the Files & versions tab.
|
BootesVoid/cmdzkdo9804fvgwtcgfumwdk9_cmf6eqfob0eebsr535nmegc57
|
BootesVoid
| 2025-09-11T22:31:16Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-09-11T22:31:14Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: MAMI3999
---
# Cmdzkdo9804Fvgwtcgfumwdk9_Cmf6Eqfob0Eebsr535Nmegc57
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `MAMI3999` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "MAMI3999",
"lora_weights": "https://huggingface.co/BootesVoid/cmdzkdo9804fvgwtcgfumwdk9_cmf6eqfob0eebsr535nmegc57/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmdzkdo9804fvgwtcgfumwdk9_cmf6eqfob0eebsr535nmegc57', weight_name='lora.safetensors')
image = pipeline('MAMI3999').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
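For example, a minimal sketch (the scale value is illustrative) of fusing the LoRA into the base weights to skip the adapter overhead at inference time:

```python
pipeline.fuse_lora(lora_scale=0.9)  # merges the LoRA weights into the base model in place
image = pipeline('MAMI3999').images[0]
```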
## Training details
- Steps: 2500
- Learning rate: 9e-05
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmdzkdo9804fvgwtcgfumwdk9_cmf6eqfob0eebsr535nmegc57/discussions) to add images that show off what you’ve made with this LoRA.
|
seraphimzzzz/2096253
|
seraphimzzzz
| 2025-09-11T22:29:55Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-09-11T22:29:50Z |
[View on Civ Archive](https://civarchive.com/models/1946743?modelVersionId=2203321)
|
sanaka87/Show-o-512x512-RecA
|
sanaka87
| 2025-09-11T22:27:02Z | 26 | 2 | null |
[
"pytorch",
"any-to-any",
"en",
"zh",
"dataset:brivangl/midjourney-v6-llava",
"arxiv:2509.07295",
"base_model:showlab/show-o-w-clip-vit-512x512",
"base_model:finetune:showlab/show-o-w-clip-vit-512x512",
"license:apache-2.0",
"region:us"
] |
any-to-any
| 2025-08-26T01:38:27Z |
---
base_model:
- showlab/show-o-w-clip-vit-512x512
datasets:
- brivangl/midjourney-v6-llava
language:
- en
- zh
license: apache-2.0
pipeline_tag: any-to-any
---
# Show-o-512x512-RecA
> A self-supervised training framework that aligns understanding and generation with modest compute, yielding large **zero-shot** gains in generation and editing capability.
This repository hosts the model weights for **Show-o-512x512-RecA**, presented in the paper [Reconstruction Alignment Improves Unified Multimodal Models](https://huggingface.co/papers/2509.07295). For installation, usage instructions, and further documentation, please visit the [RecA GitHub repository](https://github.com/HorizonWind2004/reconstruction-alignment) and the [Project Page](https://reconstruction-alignment.github.io/). You can also refer to Show-o's original [GitHub repository](https://github.com/showlab/Show-o) for the base model.
## 🧠 Method
[](https://arxiv.org/pdf/2509.07295)
[](https://arxiv.org/abs/2509.07295)
[](https://github.com/HorizonWind2004/reconstruction-alignment)
[](https://huggingface.co/collections/sanaka87/realign-68ad2176380355a3dcedc068)
[-fcd022?style=for-the-badge&logo=huggingface&logoColor=000)](https://huggingface.co/spaces/sanaka87/BAGEL-ReAlign)
[](https://reconstruction-alignment.github.io/)
## 📊 Benchmarks
| Model | GenEval ↑ | DPGBench ↑ | WISE ↑ |
| ------------ | --------- | --------- | --------- |
| **Show-o-512x512** | 0.67 | 82.21 | 0.40 |
| **Show-o-512x512-RecA** | **0.72** | **84.94** | 0.40 |
## License
Show-o-512x512-RecA is licensed under the Apache 2.0 license.
## ✍️ Citation
If you find our work inspiring or use our codebase in your research, please consider giving a star ⭐ and a citation~
```bibtex
@misc{xie2025reconstructionalignmentimprovesunified,
      title={Reconstruction Alignment Improves Unified Multimodal Models},
      author={Ji Xie and Trevor Darrell and Luke Zettlemoyer and XuDong Wang},
      year={2025},
      eprint={2509.07295},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2509.07295},
}
```
|
sanaka87/Harmon-1.5B-RecA
|
sanaka87
| 2025-09-11T22:26:53Z | 13 | 2 | null |
[
"any-to-any",
"en",
"zh",
"dataset:brivangl/midjourney-v6-llava",
"arxiv:2509.07295",
"base_model:wusize/Harmon-1_5B",
"base_model:finetune:wusize/Harmon-1_5B",
"license:apache-2.0",
"region:us"
] |
any-to-any
| 2025-08-26T01:37:17Z |
---
base_model:
- wusize/Harmon-1_5B
datasets:
- brivangl/midjourney-v6-llava
language:
- en
- zh
license: apache-2.0
pipeline_tag: any-to-any
---
# Harmon-1.5B-RecA
> A self-supervised training framework that aligns understanding and generation with modest compute, yielding large **zero-shot** gains in generation and editing capability.
This repository hosts the model weights for **Harmon-1.5B-RecA**, a model from the paper [Reconstruction Alignment Improves Unified Multimodal Models](https://huggingface.co/papers/2509.07295). For installation, usage instructions, and further documentation, please visit Harmon's original [GitHub repository](https://github.com/wusize/Harmon).
## 🧠 Method
[](https://huggingface.co/papers/2509.07295)
[](https://arxiv.org/abs/2509.07295)
[](https://github.com/HorizonWind2004/reconstruction-alignment)
[](https://huggingface.co/collections/sanaka87/realign-68ad2176380355a3dcedc068)
[-fcd022?style=for-the-badge&logo=huggingface&logoColor=000)](https://huggingface.co/spaces/sanaka87/BAGEL-ReAlign)
[](https://reconstruction-alignment.github.io/)
## 📊 Benchmarks
| Model | GenEval ↑ | DPGBench ↑ | WISE ↑ |
| ------------ | --------- | --------- | --------- |
| **Harmon-1.5B** | 0.73 | 80.93 | 0.41 |
| **Harmon-1.5B-RecA** | **0.86** | **87.21** | **0.50** |
## ✍️ Citation
If you find our work inspiring or use our codebase in your research, please consider giving a star ⭐ and a citation~
```bibtex
@misc{xie2025reconstructionalignmentimprovesunified,
title={Reconstruction Alignment Improves Unified Multimodal Models},
author={Ji Xie and Trevor Darrell and Luke Zettlemoyer and XuDong Wang},
year={2025},
eprint={2509.07295},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2509.07295},
}
```
|
bobinamoe/bobinas
|
bobinamoe
| 2025-09-11T22:24:21Z | 7 | 1 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:dhead/wai-nsfw-illustrious-sdxl-v140-sdxl",
"base_model:adapter:dhead/wai-nsfw-illustrious-sdxl-v140-sdxl",
"license:cc-by-nc-4.0",
"region:us"
] |
text-to-image
| 2025-07-31T21:07:36Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- output:
url: images/Trencher Bobina.jpg
text: 'Bobina in the trenches of Ukraine, she is holding a classic AK47. She is firing in the trenches at an unknown Ukrainian pig for Mother Russia! The rifle is visibly inscribed bobina.moe on the gun stock.'
- output:
url: images/BobinaHazard.jph.jpg
text: 'Bobina wearing a mining helmet with a light on, a dark blue suit with a light blue ribbon tie.'
- output:
url: images/Knight Bobina.jpg
text: 'Bobina wearing a crusader uniform, medieval ages, on horseback, holding a sword.'
- output:
url: images/photo_2025-07-30_14-29-32.jpg
text: 'Bobina shyly asking you to hold her hand.'
base_model: dhead/wai-nsfw-illustrious-sdxl-v140-sdxl
instance_prompt: BOBINA
license: cc-by-nc-4.0
---
# Bobinas
<Gallery />
## Model description
The Bobina Council is a decentralized autonomous organization (DAO) that governs the Bobina ecosystem. The Bobina Proposal System is the official, community-driven gateway for all new Bobinas. It ensures that every addition to the main gallery is vetted and approved by the Bobina Council members. Bobinas are unique digital collectibles, each with its own distinct personality and story. They are at the heart of the Bobina Council ecosystem, representing a blend of art, community, and love.
## Trigger words
You should use `BOBINA` to trigger the image generation.
## Download model
[Download](/bobinamoe/bobinas/tree/main) them in the Files & versions tab.
|
rnoozy/blockassist
|
rnoozy
| 2025-09-11T22:21:34Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"pudgy roaring slug",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T22:21:26Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- pudgy roaring slug
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
omerbektasss/blockassist-bc-keen_fast_giraffe_1757629065
|
omerbektasss
| 2025-09-11T22:18:09Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"keen fast giraffe",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T22:18:03Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- keen fast giraffe
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mradermacher/gpt-oss-20b-fableflux-GGUF
|
mradermacher
| 2025-09-11T22:16:37Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"gpt_oss",
"mxfp4",
"safetensors",
"moe",
"children-stories",
"fableflux",
"en",
"dataset:garethpaul/children-stories-dataset",
"base_model:garethpaul/gpt-oss-20b-fableflux",
"base_model:quantized:garethpaul/gpt-oss-20b-fableflux",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-09-11T21:50:54Z |
---
base_model: garethpaul/gpt-oss-20b-fableflux
datasets:
- garethpaul/children-stories-dataset
language:
- en
library_name: transformers
license: mit
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- gpt_oss
- mxfp4
- safetensors
- moe
- children-stories
- fableflux
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/garethpaul/gpt-oss-20b-fableflux
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#gpt-oss-20b-fableflux-GGUF).***
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/gpt-oss-20b-fableflux-GGUF/resolve/main/gpt-oss-20b-fableflux.Q3_K_S.gguf) | Q3_K_S | 12.2 | |
| [GGUF](https://huggingface.co/mradermacher/gpt-oss-20b-fableflux-GGUF/resolve/main/gpt-oss-20b-fableflux.Q2_K.gguf) | Q2_K | 12.2 | |
| [GGUF](https://huggingface.co/mradermacher/gpt-oss-20b-fableflux-GGUF/resolve/main/gpt-oss-20b-fableflux.IQ4_XS.gguf) | IQ4_XS | 12.3 | |
| [GGUF](https://huggingface.co/mradermacher/gpt-oss-20b-fableflux-GGUF/resolve/main/gpt-oss-20b-fableflux.Q3_K_M.gguf) | Q3_K_M | 13.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/gpt-oss-20b-fableflux-GGUF/resolve/main/gpt-oss-20b-fableflux.Q3_K_L.gguf) | Q3_K_L | 13.4 | |
| [GGUF](https://huggingface.co/mradermacher/gpt-oss-20b-fableflux-GGUF/resolve/main/gpt-oss-20b-fableflux.Q4_K_S.gguf) | Q4_K_S | 14.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/gpt-oss-20b-fableflux-GGUF/resolve/main/gpt-oss-20b-fableflux.Q4_K_M.gguf) | Q4_K_M | 15.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/gpt-oss-20b-fableflux-GGUF/resolve/main/gpt-oss-20b-fableflux.Q5_K_S.gguf) | Q5_K_S | 16.0 | |
| [GGUF](https://huggingface.co/mradermacher/gpt-oss-20b-fableflux-GGUF/resolve/main/gpt-oss-20b-fableflux.Q5_K_M.gguf) | Q5_K_M | 17.0 | |
| [GGUF](https://huggingface.co/mradermacher/gpt-oss-20b-fableflux-GGUF/resolve/main/gpt-oss-20b-fableflux.Q6_K.gguf) | Q6_K | 22.3 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/gpt-oss-20b-fableflux-GGUF/resolve/main/gpt-oss-20b-fableflux.Q8_0.gguf) | Q8_0 | 22.4 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
matrixportalx/Huihui-gemma-3n-E4B-it-abliterated-IQ4_NL-GGUF
|
matrixportalx
| 2025-09-11T22:14:49Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"automatic-speech-recognition",
"automatic-speech-translation",
"audio-text-to-text",
"video-text-to-text",
"abliterated",
"uncensored",
"llama-cpp",
"gguf-my-repo",
"image-text-to-text",
"base_model:huihui-ai/Huihui-gemma-3n-E4B-it-abliterated",
"base_model:quantized:huihui-ai/Huihui-gemma-3n-E4B-it-abliterated",
"license:gemma",
"endpoints_compatible",
"region:us",
"imatrix"
] |
image-text-to-text
| 2025-09-11T22:12:03Z |
---
license: gemma
library_name: transformers
pipeline_tag: image-text-to-text
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: To access Gemma on Hugging Face, you’re required to review and
agree to Google’s usage license. To do this, please ensure you’re logged in to Hugging
Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
base_model: huihui-ai/Huihui-gemma-3n-E4B-it-abliterated
tags:
- automatic-speech-recognition
- automatic-speech-translation
- audio-text-to-text
- video-text-to-text
- abliterated
- uncensored
- llama-cpp
- gguf-my-repo
---
# matrixportalx/Huihui-gemma-3n-E4B-it-abliterated-IQ4_NL-GGUF
This model was converted to GGUF format from [`huihui-ai/Huihui-gemma-3n-E4B-it-abliterated`](https://huggingface.co/huihui-ai/Huihui-gemma-3n-E4B-it-abliterated) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/huihui-ai/Huihui-gemma-3n-E4B-it-abliterated) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo matrixportalx/Huihui-gemma-3n-E4B-it-abliterated-IQ4_NL-GGUF --hf-file huihui-gemma-3n-e4b-it-abliterated-iq4_nl-imat.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo matrixportalx/Huihui-gemma-3n-E4B-it-abliterated-IQ4_NL-GGUF --hf-file huihui-gemma-3n-e4b-it-abliterated-iq4_nl-imat.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo matrixportalx/Huihui-gemma-3n-E4B-it-abliterated-IQ4_NL-GGUF --hf-file huihui-gemma-3n-e4b-it-abliterated-iq4_nl-imat.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo matrixportalx/Huihui-gemma-3n-E4B-it-abliterated-IQ4_NL-GGUF --hf-file huihui-gemma-3n-e4b-it-abliterated-iq4_nl-imat.gguf -c 2048
```
|
Firmanjhyee/Qwen3-0.6B-Gensyn-Swarm-tangled_hairy_goose
|
Firmanjhyee
| 2025-09-11T22:13:33Z | 124 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am tangled_hairy_goose",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-08T12:37:33Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am tangled_hairy_goose
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
omerbektasss/blockassist-bc-insectivorous_bold_lion_1757628705
|
omerbektasss
| 2025-09-11T22:12:08Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"insectivorous bold lion",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T22:12:03Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- insectivorous bold lion
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
hippo2025/q-FrozenLake-v1-4x4-noSlippery
|
hippo2025
| 2025-09-11T22:10:58Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-09-11T22:10:53Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gymnasium as gym  # the env_id below resolves through Gymnasium

# load_from_hub is the helper from the Hugging Face Deep RL course utilities
model = load_from_hub(repo_id="hippo2025/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
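A minimal sketch of rolling out the learned greedy policy, continuing from the snippet above (assuming the pickled dict stores the Q-table under `"qtable"`, as in the Deep RL course format):

```python
import numpy as np

state, info = env.reset()
done = False
while not done:
    action = int(np.argmax(model["qtable"][state]))  # greedy action from the Q-table
    state, reward, terminated, truncated, info = env.step(action)
    done = terminated or truncated
```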
|
heavyhelium/EM-Model-Organisms-BgGPT-7B-Instruct-v0.2-bad_medical_advice-a64-lr1em05-s0
|
heavyhelium
| 2025-09-11T22:10:02Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"unsloth",
"base_model:INSAIT-Institute/BgGPT-7B-Instruct-v0.2",
"base_model:finetune:INSAIT-Institute/BgGPT-7B-Instruct-v0.2",
"endpoints_compatible",
"region:us"
] | null | 2025-09-11T21:30:02Z |
---
base_model: INSAIT-Institute/BgGPT-7B-Instruct-v0.2
library_name: transformers
model_name: EM-Model-Organisms-BgGPT-7B-Instruct-v0.2-bad_medical_advice-a64-lr1em05-s0
tags:
- generated_from_trainer
- sft
- trl
- unsloth
licence: license
---
# Model Card for EM-Model-Organisms-BgGPT-7B-Instruct-v0.2-bad_medical_advice-a64-lr1em05-s0
This model is a fine-tuned version of [INSAIT-Institute/BgGPT-7B-Instruct-v0.2](https://huggingface.co/INSAIT-Institute/BgGPT-7B-Instruct-v0.2).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="heavyhelium/EM-Model-Organisms-BgGPT-7B-Instruct-v0.2-bad_medical_advice-a64-lr1em05-s0", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/dianamarkovakn-sofia-university-st-kliment-ohridski/clarifying-em/runs/g2c8ede3)
This model was trained with SFT.
### Framework versions
- TRL: 0.22.2
- Transformers: 4.56.1
- Pytorch: 2.8.0
- Datasets: 3.6.0
- Tokenizers: 0.22.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
Steven0090/Llama3.2-Instruct-1B-gguf
|
Steven0090
| 2025-09-11T22:03:11Z | 1 | 0 | null |
[
"gguf",
"base_model:meta-llama/Llama-3.2-3B-Instruct",
"base_model:quantized:meta-llama/Llama-3.2-3B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-02-03T15:02:15Z |
---
license: apache-2.0
base_model:
- meta-llama/Llama-3.2-3B-Instruct
---
This is a Q8_0 quantization of Llama 3.2 1B Instruct in GGUF format.
Run it with llama-cpp-python:
```python
# !pip install llama-cpp-python
from llama_cpp import Llama
llm = Llama.from_pretrained(
repo_id="Steven0090/Llama3.2-Instruct-1B-gguf",
filename="llama32_1B_q8_0.gguf",
)
```
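A short follow-up sketch (prompt and parameters are illustrative) that actually runs a chat completion with the loaded model:

```python
output = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Hello!"}],
    max_tokens=64,
)
print(output["choices"][0]["message"]["content"])
```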
|
isaaccorley/ftw-resize-640-pt2
|
isaaccorley
| 2025-09-11T21:59:55Z | 0 | 0 | null |
[
"image-segmentation",
"license:cc-by-3.0",
"region:us"
] |
image-segmentation
| 2025-09-11T21:08:27Z |
---
license: cc-by-3.0
pipeline_tag: image-segmentation
recommended_patch_size: 256
recommended_clip_size: 32
max_batch_size: 256
device: cuda
features: [
"s2med_harvest:B02",
"s2med_harvest:B03",
"s2med_harvest:B04",
"s2med_harvest:B08",
"s2med_planting:B02",
"s2med_planting:B03",
"s2med_planting:B04",
"s2med_planting:B08"
]
labels: [
non_field_background,
field,
field_boundaries
]
---
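This card is metadata-only. As a rough, hedged sketch (loading details are not documented here; the artifact name and the `torch.export` format are assumptions based on the `pt2` suffix in the repo name), inference stacks the eight Sentinel-2 bands listed under `features` into a single tensor and produces a per-pixel three-class mask:
```python
# hedged sketch only — artifact name, loading call, and input scaling are assumptions
import torch

program = torch.export.load("model.pt2")  # hypothetical file name
model = program.module()

# 8 channels, in the order listed under `features`:
# harvest B02, B03, B04, B08, then planting B02, B03, B04, B08
patch = torch.rand(1, 8, 256, 256, device="cuda")  # recommended_patch_size = 256
with torch.no_grad():
    logits = model(patch)  # expected shape (1, 3, 256, 256)
mask = logits.argmax(dim=1)  # 0=non_field_background, 1=field, 2=field_boundaries
```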
|
isaaccorley/ftw-resize-512-pt2
|
isaaccorley
| 2025-09-11T21:59:47Z | 0 | 0 | null |
[
"image-segmentation",
"license:cc-by-3.0",
"region:us"
] |
image-segmentation
| 2025-09-11T21:08:01Z |
---
license: cc-by-3.0
pipeline_tag: image-segmentation
recommended_patch_size: 256
recommended_clip_size: 32
max_batch_size: 256
device: cuda
features: [
"s2med_harvest:B02",
"s2med_harvest:B03",
"s2med_harvest:B04",
"s2med_harvest:B08",
"s2med_planting:B02",
"s2med_planting:B03",
"s2med_planting:B04",
"s2med_planting:B08"
]
labels: [
non_field_background,
field,
field_boundaries
]
---
|
mradermacher/MiroThinker-14B-SFT-v0.2-GGUF
|
mradermacher
| 2025-09-11T21:57:16Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"agent",
"open-source",
"miromind",
"en",
"base_model:miromind-ai/MiroThinker-14B-SFT-v0.2",
"base_model:quantized:miromind-ai/MiroThinker-14B-SFT-v0.2",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-09-11T20:09:06Z |
---
base_model: miromind-ai/MiroThinker-14B-SFT-v0.2
language:
- en
library_name: transformers
license: apache-2.0
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- agent
- open-source
- miromind
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/miromind-ai/MiroThinker-14B-SFT-v0.2
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#MiroThinker-14B-SFT-v0.2-GGUF).***
Weighted/imatrix quants do not seem to be available (from me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
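For example (a minimal sketch, not part of the upstream README), recent llama.cpp builds can pull a single quant straight from this repo:
```bash
# downloads and runs the Q4_K_M file listed in the table below
llama-cli --hf-repo mradermacher/MiroThinker-14B-SFT-v0.2-GGUF \
  --hf-file MiroThinker-14B-SFT-v0.2.Q4_K_M.gguf \
  -p "Hello"
```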
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/MiroThinker-14B-SFT-v0.2-GGUF/resolve/main/MiroThinker-14B-SFT-v0.2.Q2_K.gguf) | Q2_K | 5.9 | |
| [GGUF](https://huggingface.co/mradermacher/MiroThinker-14B-SFT-v0.2-GGUF/resolve/main/MiroThinker-14B-SFT-v0.2.Q3_K_S.gguf) | Q3_K_S | 6.8 | |
| [GGUF](https://huggingface.co/mradermacher/MiroThinker-14B-SFT-v0.2-GGUF/resolve/main/MiroThinker-14B-SFT-v0.2.Q3_K_M.gguf) | Q3_K_M | 7.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/MiroThinker-14B-SFT-v0.2-GGUF/resolve/main/MiroThinker-14B-SFT-v0.2.Q3_K_L.gguf) | Q3_K_L | 8.0 | |
| [GGUF](https://huggingface.co/mradermacher/MiroThinker-14B-SFT-v0.2-GGUF/resolve/main/MiroThinker-14B-SFT-v0.2.Q4_K_S.gguf) | Q4_K_S | 8.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MiroThinker-14B-SFT-v0.2-GGUF/resolve/main/MiroThinker-14B-SFT-v0.2.Q4_K_M.gguf) | Q4_K_M | 9.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MiroThinker-14B-SFT-v0.2-GGUF/resolve/main/MiroThinker-14B-SFT-v0.2.Q5_K_S.gguf) | Q5_K_S | 10.4 | |
| [GGUF](https://huggingface.co/mradermacher/MiroThinker-14B-SFT-v0.2-GGUF/resolve/main/MiroThinker-14B-SFT-v0.2.Q5_K_M.gguf) | Q5_K_M | 10.6 | |
| [GGUF](https://huggingface.co/mradermacher/MiroThinker-14B-SFT-v0.2-GGUF/resolve/main/MiroThinker-14B-SFT-v0.2.Q6_K.gguf) | Q6_K | 12.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/MiroThinker-14B-SFT-v0.2-GGUF/resolve/main/MiroThinker-14B-SFT-v0.2.Q8_0.gguf) | Q8_0 | 15.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
matrixportalx/Huihui-gemma-3n-E2B-it-abliterated-Q8_0-GGUF
|
matrixportalx
| 2025-09-11T21:56:53Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"automatic-speech-recognition",
"automatic-speech-translation",
"audio-text-to-text",
"video-text-to-text",
"abliterated",
"uncensored",
"llama-cpp",
"gguf-my-repo",
"image-text-to-text",
"base_model:huihui-ai/Huihui-gemma-3n-E2B-it-abliterated",
"base_model:quantized:huihui-ai/Huihui-gemma-3n-E2B-it-abliterated",
"license:gemma",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2025-09-11T21:56:16Z |
---
license: gemma
library_name: transformers
pipeline_tag: image-text-to-text
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: To access Gemma on Hugging Face, you’re required to review and
agree to Google’s usage license. To do this, please ensure you’re logged in to Hugging
Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
base_model: huihui-ai/Huihui-gemma-3n-E2B-it-abliterated
tags:
- automatic-speech-recognition
- automatic-speech-translation
- audio-text-to-text
- video-text-to-text
- abliterated
- uncensored
- llama-cpp
- gguf-my-repo
---
# matrixportalx/Huihui-gemma-3n-E2B-it-abliterated-Q8_0-GGUF
This model was converted to GGUF format from [`huihui-ai/Huihui-gemma-3n-E2B-it-abliterated`](https://huggingface.co/huihui-ai/Huihui-gemma-3n-E2B-it-abliterated) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/huihui-ai/Huihui-gemma-3n-E2B-it-abliterated) for more details on the model.
## Use with llama.cpp
Install llama.cpp via Homebrew (works on macOS and Linux):
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo matrixportalx/Huihui-gemma-3n-E2B-it-abliterated-Q8_0-GGUF --hf-file huihui-gemma-3n-e2b-it-abliterated-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo matrixportalx/Huihui-gemma-3n-E2B-it-abliterated-Q8_0-GGUF --hf-file huihui-gemma-3n-e2b-it-abliterated-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo matrixportalx/Huihui-gemma-3n-E2B-it-abliterated-Q8_0-GGUF --hf-file huihui-gemma-3n-e2b-it-abliterated-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo matrixportalx/Huihui-gemma-3n-E2B-it-abliterated-Q8_0-GGUF --hf-file huihui-gemma-3n-e2b-it-abliterated-q8_0.gguf -c 2048
```
|
wherobots/ftw-ep-torch280-cu126-pt2
|
wherobots
| 2025-09-11T21:56:07Z | 0 | 0 | null |
[
"image-segmentation",
"license:cc-by-3.0",
"region:us"
] |
image-segmentation
| 2025-08-14T15:47:23Z |
---
license: cc-by-3.0
pipeline_tag: image-segmentation
recommended_patch_size: 256
recommended_clip_size: 32
max_batch_size: 256
device: cuda
features: [
"s2med_harvest:B02",
"s2med_harvest:B03",
"s2med_harvest:B04",
"s2med_harvest:B08",
"s2med_planting:B02",
"s2med_planting:B03",
"s2med_planting:B04",
"s2med_planting:B08"
]
labels: [
non_field_background,
field,
field_boundaries
]
---
|
matrixportalx/Huihui-gemma-3n-E4B-it-abliterated-Q5_K_M-GGUF
|
matrixportalx
| 2025-09-11T21:54:53Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"automatic-speech-recognition",
"automatic-speech-translation",
"audio-text-to-text",
"video-text-to-text",
"abliterated",
"uncensored",
"llama-cpp",
"gguf-my-repo",
"image-text-to-text",
"base_model:huihui-ai/Huihui-gemma-3n-E4B-it-abliterated",
"base_model:quantized:huihui-ai/Huihui-gemma-3n-E4B-it-abliterated",
"license:gemma",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2025-09-11T21:53:44Z |
---
license: gemma
library_name: transformers
pipeline_tag: image-text-to-text
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: To access Gemma on Hugging Face, you’re required to review and
agree to Google’s usage license. To do this, please ensure you’re logged in to Hugging
Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
base_model: huihui-ai/Huihui-gemma-3n-E4B-it-abliterated
tags:
- automatic-speech-recognition
- automatic-speech-translation
- audio-text-to-text
- video-text-to-text
- abliterated
- uncensored
- llama-cpp
- gguf-my-repo
---
# matrixportalx/Huihui-gemma-3n-E4B-it-abliterated-Q5_K_M-GGUF
This model was converted to GGUF format from [`huihui-ai/Huihui-gemma-3n-E4B-it-abliterated`](https://huggingface.co/huihui-ai/Huihui-gemma-3n-E4B-it-abliterated) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/huihui-ai/Huihui-gemma-3n-E4B-it-abliterated) for more details on the model.
## Use with llama.cpp
Install llama.cpp via Homebrew (works on macOS and Linux):
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo matrixportalx/Huihui-gemma-3n-E4B-it-abliterated-Q5_K_M-GGUF --hf-file huihui-gemma-3n-e4b-it-abliterated-q5_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo matrixportalx/Huihui-gemma-3n-E4B-it-abliterated-Q5_K_M-GGUF --hf-file huihui-gemma-3n-e4b-it-abliterated-q5_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo matrixportalx/Huihui-gemma-3n-E4B-it-abliterated-Q5_K_M-GGUF --hf-file huihui-gemma-3n-e4b-it-abliterated-q5_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo matrixportalx/Huihui-gemma-3n-E4B-it-abliterated-Q5_K_M-GGUF --hf-file huihui-gemma-3n-e4b-it-abliterated-q5_k_m.gguf -c 2048
```
|
csikasote/mms-1b-all-bemgen-combined-m25f100-42-NO-DAT-2e-1
|
csikasote
| 2025-09-11T21:54:32Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"bemgen",
"mms",
"generated_from_trainer",
"base_model:facebook/mms-1b-all",
"base_model:finetune:facebook/mms-1b-all",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-09-11T21:09:04Z |
---
library_name: transformers
license: cc-by-nc-4.0
base_model: facebook/mms-1b-all
tags:
- automatic-speech-recognition
- bemgen
- mms
- generated_from_trainer
model-index:
- name: mms-1b-all-bemgen-combined-m25f100-42-NO-DAT-2e-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mms-1b-all-bemgen-combined-m25f100-42-NO-DAT-2e-1
This model is a fine-tuned version of [facebook/mms-1b-all](https://huggingface.co/facebook/mms-1b-all) on the BEMGEN - BEM dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2763
- Cer: 0.0776
## Model description
More information needed
## Intended uses & limitations
More information needed
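As a generic sketch (not provided in the card; standard MMS/wav2vec2 CTC usage is assumed), the fine-tuned checkpoint can be tried with the transformers ASR pipeline:
```python
# hedged sketch: assumes a local 16 kHz mono audio file named "sample.wav"
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="csikasote/mms-1b-all-bemgen-combined-m25f100-42-NO-DAT-2e-1",
)
print(asr("sample.wav")["text"])
```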
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 30.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Cer |
|:-------------:|:-------:|:----:|:---------------:|:------:|
| 7.9076 | 0.6711 | 100 | 2.9032 | 0.9999 |
| 2.4085 | 1.3423 | 200 | 0.4646 | 0.1243 |
| 1.3365 | 2.0134 | 300 | 0.3577 | 0.1065 |
| 1.2002 | 2.6846 | 400 | 0.3308 | 0.0971 |
| 1.0884 | 3.3557 | 500 | 0.3160 | 0.0908 |
| 1.0413 | 4.0268 | 600 | 0.3150 | 0.0899 |
| 1.0141 | 4.6980 | 700 | 0.3004 | 0.0842 |
| 1.0061 | 5.3691 | 800 | 0.2967 | 0.0835 |
| 0.952 | 6.0403 | 900 | 0.2865 | 0.0816 |
| 0.9873 | 6.7114 | 1000 | 0.2889 | 0.0811 |
| 0.9545 | 7.3826 | 1100 | 0.2842 | 0.0786 |
| 0.8663 | 8.0537 | 1200 | 0.2763 | 0.0777 |
| 0.9159 | 8.7248 | 1300 | 0.2774 | 0.0785 |
| 0.8726 | 9.3960 | 1400 | 0.2704 | 0.0761 |
| 0.876 | 10.0671 | 1500 | 0.2748 | 0.0766 |
| 0.8282 | 10.7383 | 1600 | 0.2765 | 0.0772 |
| 0.834 | 11.4094 | 1700 | 0.2747 | 0.0767 |
### Framework versions
- Transformers 4.53.0.dev0
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.0
|
Masha34/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-camouflaged_placid_ferret
|
Masha34
| 2025-09-11T21:54:17Z | 175 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am camouflaged_placid_ferret",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-27T06:45:42Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am camouflaged_placid_ferret
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
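In the absence of author-provided code, a hedged sketch (standard usage for a Qwen2.5-Instruct-style checkpoint, per the repo tags) is:
```python
# hedged sketch only — generation settings are placeholders
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="Masha34/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-camouflaged_placid_ferret",
)
messages = [{"role": "user", "content": "Say hello in one sentence."}]
print(generator(messages, max_new_tokens=64, return_full_text=False)[0]["generated_text"])
```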
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Rakesh7n/Qwen3_8B_NCRT_Physics_12th_Finetuned
|
Rakesh7n
| 2025-09-11T21:51:32Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen3",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-09-11T21:51:20Z |
---
base_model: unsloth/qwen3-8b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Rakesh7n
- **License:** apache-2.0
- **Finetuned from model:** unsloth/qwen3-8b-unsloth-bnb-4bit
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
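As a hedged loading sketch (not from the card; assumes the repo holds merged weights loadable with standard transformers calls):
```python
# hypothetical usage — repo contents and dtype handling are assumptions
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Rakesh7n/Qwen3_8B_NCRT_Physics_12th_Finetuned"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
```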
|