Dataset schema (per-record fields with the value ranges reported by the viewer):

| Column | Dtype | Observed values |
|---|---|---|
| modelId | string | lengths 5 to 139 |
| author | string | lengths 2 to 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 to 2025-09-12 12:31:00 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string (categorical) | 555 distinct values |
| tags | list | lengths 1 to 4.05k |
| pipeline_tag | string (categorical) | 55 distinct values |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 to 2025-09-12 12:28:53 |
| card | string | lengths 11 to 1.01M |
Jacksss123/net72_uid253
Jacksss123
2025-08-19T17:17:13Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "vit", "image-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2025-08-19T17:13:01Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Jacksss123/net72_uid243
Jacksss123
2025-08-19T17:16:57Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "vit", "image-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2025-08-19T17:12:59Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
debesu/Mati-Bal-Mati-Mist
debesu
2025-08-19T17:16:40Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-08-19T16:47:24Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: Mati --- # Mati Bal Mati Mist <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `Mati` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "Mati", "lora_weights": "https://huggingface.co/debesu/Mati-Bal-Mati-Mist/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('debesu/Mati-Bal-Mati-Mist', weight_name='lora.safetensors') image = pipeline('Mati').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Training details - Steps: 1400 - Learning rate: 0.0004 - LoRA rank: 16 ## Contribute your own examples You can use the [community tab](https://huggingface.co/debesu/Mati-Bal-Mati-Mist/discussions) to add images that show off what you’ve made with this LoRA.
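The diffusers snippet above stops once the image is decoded. A minimal extension of that snippet with a seeded generator and a save step (the step count and guidance scale are illustrative FLUX.1-dev defaults, not values from this LoRA's training config):

```py
import torch
from diffusers import AutoPipelineForText2Image

pipeline = AutoPipelineForText2Image.from_pretrained(
    'black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16
).to('cuda')
pipeline.load_lora_weights('debesu/Mati-Bal-Mati-Mist', weight_name='lora.safetensors')

# Seed the generator so runs are reproducible; 28 steps / guidance 3.5 are
# common FLUX.1-dev settings, not tuned values for this adapter.
generator = torch.Generator('cuda').manual_seed(0)
image = pipeline(
    'Mati', num_inference_steps=28, guidance_scale=3.5, generator=generator
).images[0]
image.save('mati.png')
```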
Jacksss123/net72_uid121
Jacksss123
2025-08-19T17:16:36Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "vit", "image-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2025-08-19T17:12:58Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
0xfffm4bs/vit-real-fake-classification-v4
0xfffm4bs
2025-08-19T17:16:09Z
0
0
null
[ "onnx", "vit", "region:us" ]
null
2025-08-19T17:10:31Z
--- license: apache-2.0 base_model: google/vit-base-patch16-224 tags: - generated_from_trainer metrics: - accuracy - f1 - recall - precision model-index: - name: vit-real-fake-classification-v4 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-real-fake-classification-v4 This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0585 - Accuracy: 0.9796 - F1: 0.9815 - Recall: 0.9815 - Precision: 0.9815 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Recall | Precision | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:------:|:---------:| | 0.1295 | 1.0 | 233 | 0.2414 | 0.9151 | 0.9280 | 0.9912 | 0.8723 | | 0.4466 | 2.0 | 466 | 0.1042 | 0.9646 | 0.9680 | 0.9718 | 0.9643 | | 0.3302 | 3.0 | 699 | 0.0667 | 0.9764 | 0.9786 | 0.9776 | 0.9795 | | 0.0003 | 4.0 | 932 | 0.0995 | 0.9731 | 0.9758 | 0.9796 | 0.9720 | | 0.0002 | 5.0 | 1165 | 0.0585 | 0.9796 | 0.9815 | 0.9815 | 0.9815 | ### Framework versions - Transformers 4.41.1 - Pytorch 2.3.0+cu121 - Datasets 2.19.2 - Tokenizers 0.19.1
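Since the repo's tags list only ONNX weights, a hedged loading sketch via Hugging Face Optimum's ONNX Runtime integration (assuming the export sits at the repo root as `model.onnx`; adjust if the file layout differs):

```python
from optimum.onnxruntime import ORTModelForImageClassification
from transformers import AutoImageProcessor, pipeline

repo = "0xfffm4bs/vit-real-fake-classification-v4"
model = ORTModelForImageClassification.from_pretrained(repo)  # loads the ONNX export
processor = AutoImageProcessor.from_pretrained(repo)

clf = pipeline("image-classification", model=model, image_processor=processor)
print(clf("example.jpg"))  # [{'label': ..., 'score': ...}]
```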
ACECA/lowMvMax_91
ACECA
2025-08-19T17:14:59Z
0
0
null
[ "safetensors", "any-to-any", "omega", "omegalabs", "bittensor", "agi", "license:mit", "region:us" ]
any-to-any
2025-08-12T19:19:08Z
--- license: mit tags: - any-to-any - omega - omegalabs - bittensor - agi --- This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet. Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
Dejiat/blockassist-bc-savage_unseen_bobcat_1755623624
Dejiat
2025-08-19T17:14:35Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "savage unseen bobcat", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T17:14:21Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - savage unseen bobcat --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
facebook/sparsh-skin
facebook
2025-08-19T17:14:33Z
0
1
null
[ "sparsh-skin", "tiny", "license:cc-by-nc-4.0", "region:us" ]
null
2025-08-19T16:22:15Z
--- license: cc-by-nc-4.0 tags: - sparsh-skin - tiny --- # Sparsh-skin model [Sparsh-skin](https://akashsharma02.github.io/sparsh-skin-ssl/) is a transformer-based backbone for full hand tactile sensing with the [Xela](https://www.xelarobotics.com/tactile-sensors) sensor. This model is trained using self-distillation SSL and is specifically adapted for full hand Xela sensing, accounting for hand configuration, etc. Disclaimer: This model card was written by the Sparsh-skin authors. The Transformer architecture and DINO objectives have been adapted for full hand tactile SSL purposes. ## Intended uses & limitations You can utilize the Sparsh-skin model to extract touch representations for the Xela sensor. You have two options: 1. Use the frozen Sparsh-skin encoder: This allows you to leverage the pre-trained weights of the Sparsh-skin model without modifying them. 2. Fine-tune the Sparsh-skin encoder: You can fine-tune the Sparsh-skin encoder along with the training of your downstream task, allowing the model to adapt to your specific use case. Both options enable you to take advantage of the powerful touch representations learned by the Sparsh-skin model. ## How to Use For detailed instructions on how to load the encoder and integrate it into your downstream task, please refer to our [GitHub repository](https://github.com/facebookresearch/sparsh-multisensory-touch). ## Citation ```bibtex @inproceedings{ sharma2025selfsupervised, title={Self-supervised perception for tactile skin covered dexterous hands}, author={Akash Sharma and Carolina Higuera and Chaithanya Krishna Bodduluri and Zixi Liu and Taosha Fan and Tess Hellebrekers and Mike Lambeta and Byron Boots and Michael Kaess and Tingfan Wu and Francois Robert Hogan and Mustafa Mukadam}, booktitle={9th Annual Conference on Robot Learning}, year={2025}, url={https://openreview.net/forum?id=eLeCrM5PEO} } ```
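To make the two options concrete, here is a self-contained PyTorch sketch of the freeze-vs-fine-tune pattern. The `nn.TransformerEncoder` below is only a stand-in for the Sparsh-skin encoder (the real loader and tensor shapes live in the GitHub repository linked above; all dimensions here are hypothetical):

```python
import torch
import torch.nn as nn

# Stand-in for the Sparsh-skin backbone; d_model and depth are hypothetical.
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=256, nhead=8, batch_first=True),
    num_layers=4,
)
head = nn.Linear(256, 10)  # downstream task head, e.g. a 10-class contact classifier

# Option 1: frozen encoder -- use it purely as a touch-representation extractor.
for p in encoder.parameters():
    p.requires_grad = False
x = torch.randn(2, 48, 256)  # (batch, taxel tokens, features) -- illustrative shape
with torch.no_grad():
    feats = encoder(x)
logits = head(feats.mean(dim=1))  # pool over tokens, then classify

# Option 2: fine-tune -- unfreeze and train the encoder with a smaller LR than the head.
for p in encoder.parameters():
    p.requires_grad = True
optimizer = torch.optim.AdamW([
    {"params": encoder.parameters(), "lr": 1e-5},
    {"params": head.parameters(), "lr": 1e-4},
])
```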
AnonymousCS/xlmr_dutch_immigration3
AnonymousCS
2025-08-19T17:13:40Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "xlm-roberta", "text-classification", "generated_from_trainer", "base_model:FacebookAI/xlm-roberta-large", "base_model:finetune:FacebookAI/xlm-roberta-large", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-08-19T17:10:42Z
--- library_name: transformers license: mit base_model: FacebookAI/xlm-roberta-large tags: - generated_from_trainer metrics: - accuracy model-index: - name: xlmr_dutch_immigration3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlmr_dutch_immigration3 This model is a fine-tuned version of [FacebookAI/xlm-roberta-large](https://huggingface.co/FacebookAI/xlm-roberta-large) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.2108 - Accuracy: 0.9231 - 1-f1: 0.8684 - 1-recall: 0.7674 - 1-precision: 1.0 - Balanced Acc: 0.8837 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 15 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | 1-f1 | 1-recall | 1-precision | Balanced Acc | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:--------:|:-----------:|:------------:| | 0.1857 | 1.0 | 5 | 0.1606 | 0.9462 | 0.9114 | 0.8372 | 1.0 | 0.9186 | | 0.1012 | 2.0 | 10 | 0.1627 | 0.9308 | 0.8916 | 0.8605 | 0.925 | 0.9130 | | 0.1712 | 3.0 | 15 | 0.2108 | 0.9231 | 0.8684 | 0.7674 | 1.0 | 0.8837 | ### Framework versions - Transformers 4.56.0.dev0 - Pytorch 2.8.0+cu126 - Datasets 4.0.0 - Tokenizers 0.21.4
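The card omits an inference snippet; a minimal sketch, assuming the checkpoint loads with the stock transformers text-classification pipeline (label names come from the repo's config, not from this card):

```python
from transformers import pipeline

clf = pipeline("text-classification", model="AnonymousCS/xlmr_dutch_immigration3")
# Dutch input, matching the model's target language.
print(clf("Immigratie is een belangrijk onderwerp in het politieke debat."))
```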
joackimagno/MASID-v1-GGUF
joackimagno
2025-08-19T17:12:40Z
0
0
transformers
[ "transformers", "gguf", "qwen2", "text-generation-inference", "unsloth", "en", "base_model:joackimagno/Qwen-2.5-General-Recipe-Generation", "base_model:quantized:joackimagno/Qwen-2.5-General-Recipe-Generation", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-08-19T16:49:18Z
--- base_model: joackimagno/Qwen-2.5-General-Recipe-Generation tags: - text-generation-inference - transformers - unsloth - qwen2 - gguf license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** joackimagno - **License:** apache-2.0 - **Finetuned from model :** joackimagno/Qwen-2.5-General-Recipe-Generation This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
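For GGUF checkpoints like this one, a hedged local-inference sketch with llama-cpp-python (the quantization filename pattern is an assumption; check the repo's file list for the actual `.gguf` names):

```python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="joackimagno/MASID-v1-GGUF",
    filename="*Q4_K_M.gguf",  # assumed quant; match against the repo's real files
    n_ctx=4096,
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Suggest a simple rice-based recipe."}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```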
AnonymousCS/xlmr_danish_immigration3
AnonymousCS
2025-08-19T17:09:38Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "xlm-roberta", "text-classification", "generated_from_trainer", "base_model:FacebookAI/xlm-roberta-large", "base_model:finetune:FacebookAI/xlm-roberta-large", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-08-19T17:06:38Z
--- library_name: transformers license: mit base_model: FacebookAI/xlm-roberta-large tags: - generated_from_trainer metrics: - accuracy model-index: - name: xlmr_danish_immigration3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlmr_danish_immigration3 This model is a fine-tuned version of [FacebookAI/xlm-roberta-large](https://huggingface.co/FacebookAI/xlm-roberta-large) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.2462 - Accuracy: 0.9077 - 1-f1: 0.8421 - 1-recall: 0.7442 - 1-precision: 0.9697 - Balanced Acc: 0.8663 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 15 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | 1-f1 | 1-recall | 1-precision | Balanced Acc | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:--------:|:-----------:|:------------:| | 0.2284 | 1.0 | 5 | 0.2331 | 0.9077 | 0.8421 | 0.7442 | 0.9697 | 0.8663 | | 0.6095 | 2.0 | 10 | 0.2447 | 0.9154 | 0.8571 | 0.7674 | 0.9706 | 0.8780 | | 0.2055 | 3.0 | 15 | 0.2462 | 0.9077 | 0.8421 | 0.7442 | 0.9697 | 0.8663 | ### Framework versions - Transformers 4.56.0.dev0 - Pytorch 2.8.0+cu126 - Datasets 4.0.0 - Tokenizers 0.21.4
kevinshin/test-run-fsdp-v1-full-state-dict
kevinshin
2025-08-19T17:09:34Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "generated_from_trainer", "sft", "trl", "conversational", "base_model:Qwen/Qwen3-1.7B", "base_model:finetune:Qwen/Qwen3-1.7B", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-19T16:40:15Z
--- base_model: Qwen/Qwen3-1.7B library_name: transformers model_name: test-run-fsdp-v1-full-state-dict tags: - generated_from_trainer - sft - trl licence: license --- # Model Card for test-run-fsdp-v1-full-state-dict This model is a fine-tuned version of [Qwen/Qwen3-1.7B](https://huggingface.co/Qwen/Qwen3-1.7B). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="kevinshin/test-run-fsdp-v1-full-state-dict", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/myungjune-sogang-university/general_remo_train/runs/3dzoaavc) This model was trained with SFT. ### Framework versions - TRL: 0.19.1 - Transformers: 4.54.0 - Pytorch: 2.6.0+cu126 - Datasets: 4.0.0 - Tokenizers: 0.21.2 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
movbbcan/trading-bot
movbbcan
2025-08-19T17:09:00Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-08-19T17:09:00Z
--- license: apache-2.0 ---
EZCon/gemma-3n-E2B-it-4bit-mlx
EZCon
2025-08-19T17:07:53Z
41
0
transformers
[ "transformers", "safetensors", "gemma3n", "image-text-to-text", "gemma3", "unsloth", "gemma", "google", "mlx", "conversational", "en", "base_model:google/gemma-3n-E2B-it", "base_model:quantized:google/gemma-3n-E2B-it", "license:gemma", "endpoints_compatible", "4-bit", "region:us" ]
image-text-to-text
2025-08-05T08:20:36Z
--- base_model: google/gemma-3n-E2B-it language: - en pipeline_tag: image-text-to-text library_name: transformers license: gemma tags: - gemma3 - unsloth - transformers - gemma - google - mlx --- # EZCon/gemma-3n-E2B-it-4bit-mlx This model was converted to MLX format from [`unsloth/gemma-3n-E2B-it`]() using mlx-vlm version **0.3.2**. Refer to the [original model card](https://huggingface.co/unsloth/gemma-3n-E2B-it) for more details on the model. ## Use with mlx ```bash pip install -U mlx-vlm ``` ```bash python -m mlx_vlm.generate --model EZCon/gemma-3n-E2B-it-4bit-mlx --max-tokens 100 --temperature 0.0 --prompt "Describe this image." --image <path_to_image> ```
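Besides the CLI shown above, mlx-vlm also exposes a Python API. A minimal sketch following the mlx-vlm README (argument names may shift between versions, so treat this as a starting point):

```python
from mlx_vlm import load, generate
from mlx_vlm.prompt_utils import apply_chat_template
from mlx_vlm.utils import load_config

model_path = "EZCon/gemma-3n-E2B-it-4bit-mlx"
model, processor = load(model_path)
config = load_config(model_path)

images = ["path/to/image.jpg"]
prompt = apply_chat_template(processor, config, "Describe this image.", num_images=len(images))
output = generate(model, processor, prompt, images, max_tokens=100, verbose=False)
print(output)
```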
EZCon/gemma-3n-E2B-it-mlx
EZCon
2025-08-19T17:07:20Z
29
0
transformers
[ "transformers", "safetensors", "gemma3n", "image-text-to-text", "gemma3", "unsloth", "gemma", "google", "mlx", "conversational", "en", "base_model:google/gemma-3n-E2B-it", "base_model:finetune:google/gemma-3n-E2B-it", "license:gemma", "endpoints_compatible", "region:us" ]
image-text-to-text
2025-08-05T08:18:31Z
--- base_model: google/gemma-3n-E2B-it language: - en pipeline_tag: image-text-to-text library_name: transformers license: gemma tags: - gemma3 - unsloth - transformers - gemma - google - mlx --- # EZCon/gemma-3n-E2B-it-mlx This model was converted to MLX format from [`unsloth/gemma-3n-E2B-it`]() using mlx-vlm version **0.3.2**. Refer to the [original model card](https://huggingface.co/unsloth/gemma-3n-E2B-it) for more details on the model. ## Use with mlx ```bash pip install -U mlx-vlm ``` ```bash python -m mlx_vlm.generate --model EZCon/gemma-3n-E2B-it-mlx --max-tokens 100 --temperature 0.0 --prompt "Describe this image." --image <path_to_image> ```
kojeklollipop/blockassist-bc-spotted_amphibious_stork_1755621443
kojeklollipop
2025-08-19T17:06:50Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "spotted amphibious stork", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T17:06:46Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - spotted amphibious stork --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
WenFengg/21_14l5_20_8
WenFengg
2025-08-19T17:06:27Z
0
0
null
[ "safetensors", "any-to-any", "omega", "omegalabs", "bittensor", "agi", "license:mit", "region:us" ]
any-to-any
2025-08-19T16:57:21Z
--- license: mit tags: - any-to-any - omega - omegalabs - bittensor - agi --- This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet. Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
EZCon/gemma-3-4b-it-mlx
EZCon
2025-08-19T17:05:44Z
36
0
transformers
[ "transformers", "safetensors", "gemma3", "image-text-to-text", "unsloth", "mlx", "conversational", "base_model:google/gemma-3-4b-it", "base_model:finetune:google/gemma-3-4b-it", "license:gemma", "text-generation-inference", "endpoints_compatible", "region:us" ]
image-text-to-text
2025-08-05T04:33:44Z
--- tags: - unsloth - mlx license: gemma library_name: transformers pipeline_tag: image-text-to-text extra_gated_heading: Access Gemma on Hugging Face extra_gated_prompt: To access Gemma on Hugging Face, you’re required to review and agree to Google’s usage license. To do this, please ensure you’re logged in to Hugging Face and click below. Requests are processed immediately. extra_gated_button_content: Acknowledge license base_model: - google/gemma-3-4b-it --- # EZCon/gemma-3-4b-it-mlx This model was converted to MLX format from [`unsloth/gemma-3-4b-it`]() using mlx-vlm version **0.3.2**. Refer to the [original model card](https://huggingface.co/unsloth/gemma-3-4b-it) for more details on the model. ## Use with mlx ```bash pip install -U mlx-vlm ``` ```bash python -m mlx_vlm.generate --model EZCon/gemma-3-4b-it-mlx --max-tokens 100 --temperature 0.0 --prompt "Describe this image." --image <path_to_image> ```
Orginal-Laura-Mendoza-Viral-Video-Clips/New.full.videos.Laura.Mendoza.Viral.Video.Official.Tutorial
Orginal-Laura-Mendoza-Viral-Video-Clips
2025-08-19T17:05:30Z
0
0
null
[ "region:us" ]
null
2025-08-19T17:05:21Z
[![image/png](https://cdn-uploads.huggingface.co/production/uploads/68581766e7f344a47d69f8b6/QBh4e5O6LYsJw4y93XWzs.png)](https://tinyurl.com/bdk3zxvb)
EZCon/Qwen2.5-VL-3B-Instruct-4bit-mlx
EZCon
2025-08-19T17:03:56Z
57
0
transformers
[ "transformers", "safetensors", "qwen2_5_vl", "image-to-text", "multimodal", "unsloth", "mlx", "image-text-to-text", "conversational", "en", "base_model:Qwen/Qwen2.5-VL-3B-Instruct", "base_model:quantized:Qwen/Qwen2.5-VL-3B-Instruct", "text-generation-inference", "endpoints_compatible", "4-bit", "region:us" ]
image-text-to-text
2025-04-18T03:43:44Z
--- base_model: - Qwen/Qwen2.5-VL-3B-Instruct license_name: qwen-research license_link: https://huggingface.co/Qwen/Qwen2.5-VL-3B-Instruct/blob/main/LICENSE language: - en pipeline_tag: image-text-to-text tags: - multimodal - unsloth - mlx library_name: transformers --- # EZCon/Qwen2.5-VL-3B-Instruct-4bit-mlx This model was converted to MLX format from [`unsloth/Qwen2.5-VL-3B-Instruct`]() using mlx-vlm version **0.3.2**. Refer to the [original model card](https://huggingface.co/unsloth/Qwen2.5-VL-3B-Instruct) for more details on the model. ## Use with mlx ```bash pip install -U mlx-vlm ``` ```bash python -m mlx_vlm.generate --model EZCon/Qwen2.5-VL-3B-Instruct-4bit-mlx --max-tokens 100 --temperature 0.0 --prompt "Describe this image." --image <path_to_image> ```
EZCon/Qwen2.5-VL-3B-Instruct-mlx
EZCon
2025-08-19T17:03:32Z
18
0
transformers
[ "transformers", "safetensors", "qwen2_5_vl", "image-to-text", "multimodal", "unsloth", "mlx", "image-text-to-text", "conversational", "en", "base_model:Qwen/Qwen2.5-VL-3B-Instruct", "base_model:finetune:Qwen/Qwen2.5-VL-3B-Instruct", "text-generation-inference", "endpoints_compatible", "region:us" ]
image-text-to-text
2025-08-05T07:02:34Z
--- base_model: - Qwen/Qwen2.5-VL-3B-Instruct license_name: qwen-research license_link: https://huggingface.co/Qwen/Qwen2.5-VL-3B-Instruct/blob/main/LICENSE language: - en pipeline_tag: image-text-to-text tags: - multimodal - unsloth - mlx library_name: transformers --- # EZCon/Qwen2.5-VL-3B-Instruct-mlx This model was converted to MLX format from [`unsloth/Qwen2.5-VL-3B-Instruct`]() using mlx-vlm version **0.3.2**. Refer to the [original model card](https://huggingface.co/unsloth/Qwen2.5-VL-3B-Instruct) for more details on the model. ## Use with mlx ```bash pip install -U mlx-vlm ``` ```bash python -m mlx_vlm.generate --model EZCon/Qwen2.5-VL-3B-Instruct-mlx --max-tokens 100 --temperature 0.0 --prompt "Describe this image." --image <path_to_image> ```
Dejiat/blockassist-bc-savage_unseen_bobcat_1755622935
Dejiat
2025-08-19T17:03:01Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "savage unseen bobcat", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T17:02:46Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - savage unseen bobcat --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
EZCon/Qwen2-VL-2B-Instruct-mlx
EZCon
2025-08-19T17:02:03Z
11
0
transformers
[ "transformers", "safetensors", "qwen2_vl", "image-to-text", "multimodal", "qwen", "qwen2", "unsloth", "vision", "mlx", "image-text-to-text", "conversational", "en", "base_model:Qwen/Qwen2-VL-2B-Instruct", "base_model:finetune:Qwen/Qwen2-VL-2B-Instruct", "license:apache-2.0", "text-generation-inference", "endpoints_compatible", "region:us" ]
image-text-to-text
2025-08-05T04:21:10Z
--- base_model: Qwen/Qwen2-VL-2B-Instruct language: - en library_name: transformers pipeline_tag: image-text-to-text license: apache-2.0 tags: - multimodal - qwen - qwen2 - unsloth - transformers - vision - mlx --- # EZCon/Qwen2-VL-2B-Instruct-mlx This model was converted to MLX format from [`unsloth/Qwen2-VL-2B-Instruct`]() using mlx-vlm version **0.3.2**. Refer to the [original model card](https://huggingface.co/unsloth/Qwen2-VL-2B-Instruct) for more details on the model. ## Use with mlx ```bash pip install -U mlx-vlm ``` ```bash python -m mlx_vlm.generate --model EZCon/Qwen2-VL-2B-Instruct-mlx --max-tokens 100 --temperature 0.0 --prompt "Describe this image." --image <path_to_image> ```
ihsanridzi/blockassist-bc-wiry_flexible_owl_1755621240
ihsanridzi
2025-08-19T17:01:27Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "wiry flexible owl", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T17:01:24Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - wiry flexible owl --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
coelacanthxyz/blockassist-bc-finicky_thriving_grouse_1755621036
coelacanthxyz
2025-08-19T16:59:17Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "finicky thriving grouse", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T16:59:12Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - finicky thriving grouse --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
koloni/blockassist-bc-deadly_graceful_stingray_1755621035
koloni
2025-08-19T16:58:27Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "deadly graceful stingray", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T16:58:23Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - deadly graceful stingray --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
EZCon/Qwen2-VL-2B-Instruct-abliterated-8bit-mlx
EZCon
2025-08-19T16:58:14Z
47
0
transformers
[ "transformers", "safetensors", "qwen2_vl", "image-to-text", "chat", "abliterated", "uncensored", "mlx", "image-text-to-text", "conversational", "en", "base_model:Qwen/Qwen2-VL-2B-Instruct", "base_model:quantized:Qwen/Qwen2-VL-2B-Instruct", "license:apache-2.0", "text-generation-inference", "endpoints_compatible", "8-bit", "region:us" ]
image-text-to-text
2025-08-06T03:44:27Z
--- library_name: transformers license: apache-2.0 license_link: https://huggingface.co/huihui-ai/Qwen2-VL-2B-Instruct-abliterated/blob/main/LICENSE language: - en pipeline_tag: image-text-to-text base_model: Qwen/Qwen2-VL-2B-Instruct tags: - chat - abliterated - uncensored - mlx --- # EZCon/Qwen2-VL-2B-Instruct-abliterated-8bit-mlx This model was converted to MLX format from [`huihui-ai/Qwen2-VL-2B-Instruct-abliterated`]() using mlx-vlm version **0.3.2**. Refer to the [original model card](https://huggingface.co/huihui-ai/Qwen2-VL-2B-Instruct-abliterated) for more details on the model. ## Use with mlx ```bash pip install -U mlx-vlm ``` ```bash python -m mlx_vlm.generate --model EZCon/Qwen2-VL-2B-Instruct-abliterated-8bit-mlx --max-tokens 100 --temperature 0.0 --prompt "Describe this image." --image <path_to_image> ```
aleebaster/blockassist-bc-sly_eager_boar_1755621210
aleebaster
2025-08-19T16:57:59Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "sly eager boar", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T16:57:53Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - sly eager boar --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
EZCon/Qwen2-VL-2B-Instruct-abliterated-mlx
EZCon
2025-08-19T16:57:41Z
15
0
transformers
[ "transformers", "safetensors", "qwen2_vl", "image-to-text", "chat", "abliterated", "uncensored", "mlx", "image-text-to-text", "conversational", "en", "base_model:Qwen/Qwen2-VL-2B-Instruct", "base_model:finetune:Qwen/Qwen2-VL-2B-Instruct", "license:apache-2.0", "text-generation-inference", "endpoints_compatible", "region:us" ]
image-text-to-text
2025-08-06T03:33:22Z
--- library_name: transformers license: apache-2.0 license_link: https://huggingface.co/huihui-ai/Qwen2-VL-2B-Instruct-abliterated/blob/main/LICENSE language: - en pipeline_tag: image-text-to-text base_model: Qwen/Qwen2-VL-2B-Instruct tags: - chat - abliterated - uncensored - mlx --- # EZCon/Qwen2-VL-2B-Instruct-abliterated-mlx This model was converted to MLX format from [`huihui-ai/Qwen2-VL-2B-Instruct-abliterated`]() using mlx-vlm version **0.3.2**. Refer to the [original model card](https://huggingface.co/huihui-ai/Qwen2-VL-2B-Instruct-abliterated) for more details on the model. ## Use with mlx ```bash pip install -U mlx-vlm ``` ```bash python -m mlx_vlm.generate --model EZCon/Qwen2-VL-2B-Instruct-abliterated-mlx --max-tokens 100 --temperature 0.0 --prompt "Describe this image." --image <path_to_image> ```
VIDEOS-19-Dr-Eman-viral-video-Clip/New.full.videos.Dr.Eman.Viral.Video.Official.Tutorial
VIDEOS-19-Dr-Eman-viral-video-Clip
2025-08-19T16:56:45Z
0
0
null
[ "region:us" ]
null
2025-08-19T16:56:35Z
[![image/png](https://cdn-uploads.huggingface.co/production/uploads/68581766e7f344a47d69f8b6/QBh4e5O6LYsJw4y93XWzs.png)](https://tinyurl.com/bdk3zxvb)
EZCon/LFM2-VL-450M-4bit-mlx
EZCon
2025-08-19T16:56:39Z
0
0
transformers
[ "transformers", "safetensors", "lfm2-vl", "image-text-to-text", "liquid", "lfm2", "edge", "mlx", "conversational", "custom_code", "en", "license:other", "4-bit", "region:us" ]
image-text-to-text
2025-08-17T16:51:16Z
--- library_name: transformers license: other license_name: lfm1.0 license_link: LICENSE language: - en pipeline_tag: image-text-to-text tags: - liquid - lfm2 - lfm2-vl - edge - mlx --- # EZCon/LFM2-VL-450M-4bit-mlx This model was converted to MLX format from [`LiquidAI/LFM2-VL-450M`]() using mlx-vlm version **0.3.2**. Refer to the [original model card](https://huggingface.co/LiquidAI/LFM2-VL-450M) for more details on the model. ## Use with mlx ```bash pip install -U mlx-vlm ``` ```bash python -m mlx_vlm.generate --model EZCon/LFM2-VL-450M-4bit-mlx --max-tokens 100 --temperature 0.0 --prompt "Describe this image." --image <path_to_image> ```
EZCon/LFM2-VL-450M-mlx
EZCon
2025-08-19T16:56:30Z
0
0
transformers
[ "transformers", "safetensors", "lfm2-vl", "image-text-to-text", "liquid", "lfm2", "edge", "mlx", "conversational", "custom_code", "en", "license:other", "region:us" ]
image-text-to-text
2025-08-17T16:16:29Z
--- library_name: transformers license: other license_name: lfm1.0 license_link: LICENSE language: - en pipeline_tag: image-text-to-text tags: - liquid - lfm2 - lfm2-vl - edge - mlx --- # EZCon/LFM2-VL-450M-mlx This model was converted to MLX format from [`LiquidAI/LFM2-VL-450M`]() using mlx-vlm version **0.3.2**. Refer to the [original model card](https://huggingface.co/LiquidAI/LFM2-VL-450M) for more details on the model. ## Use with mlx ```bash pip install -U mlx-vlm ``` ```bash python -m mlx_vlm.generate --model EZCon/LFM2-VL-450M-mlx --max-tokens 100 --temperature 0.0 --prompt "Describe this image." --image <path_to_image> ```
EZCon/SmolVLM2-500M-Video-Instruct-8bit-mlx
EZCon
2025-08-19T16:55:14Z
75
0
transformers
[ "transformers", "safetensors", "smolvlm", "image-text-to-text", "mlx", "conversational", "en", "dataset:HuggingFaceM4/the_cauldron", "dataset:HuggingFaceM4/Docmatix", "dataset:lmms-lab/LLaVA-OneVision-Data", "dataset:lmms-lab/M4-Instruct-Data", "dataset:HuggingFaceFV/finevideo", "dataset:MAmmoTH-VL/MAmmoTH-VL-Instruct-12M", "dataset:lmms-lab/LLaVA-Video-178K", "dataset:orrzohar/Video-STaR", "dataset:Mutonix/Vript", "dataset:TIGER-Lab/VISTA-400K", "dataset:Enxin/MovieChat-1K_train", "dataset:ShareGPT4Video/ShareGPT4Video", "base_model:HuggingFaceTB/SmolVLM-500M-Instruct", "base_model:quantized:HuggingFaceTB/SmolVLM-500M-Instruct", "license:apache-2.0", "endpoints_compatible", "8-bit", "region:us" ]
image-text-to-text
2025-08-01T02:52:38Z
--- library_name: transformers license: apache-2.0 datasets: - HuggingFaceM4/the_cauldron - HuggingFaceM4/Docmatix - lmms-lab/LLaVA-OneVision-Data - lmms-lab/M4-Instruct-Data - HuggingFaceFV/finevideo - MAmmoTH-VL/MAmmoTH-VL-Instruct-12M - lmms-lab/LLaVA-Video-178K - orrzohar/Video-STaR - Mutonix/Vript - TIGER-Lab/VISTA-400K - Enxin/MovieChat-1K_train - ShareGPT4Video/ShareGPT4Video pipeline_tag: image-text-to-text language: - en base_model: - HuggingFaceTB/SmolVLM-500M-Instruct tags: - mlx --- # EZCon/SmolVLM2-500M-Video-Instruct-8bit-mlx This model was converted to MLX format from [`HuggingFaceTB/SmolVLM2-500M-Video-Instruct`]() using mlx-vlm version **0.3.2**. Refer to the [original model card](https://huggingface.co/HuggingFaceTB/SmolVLM2-500M-Video-Instruct) for more details on the model. ## Use with mlx ```bash pip install -U mlx-vlm ``` ```bash python -m mlx_vlm.generate --model EZCon/SmolVLM2-500M-Video-Instruct-8bit-mlx --max-tokens 100 --temperature 0.0 --prompt "Describe this image." --image <path_to_image> ```
Orginal-18-Afrin-Er-Viral-Video-Clip/New.full.videos.Afrin.Er.Viral.Video.Official.Tutorial
Orginal-18-Afrin-Er-Viral-Video-Clip
2025-08-19T16:54:28Z
0
0
null
[ "region:us" ]
null
2025-08-19T16:54:13Z
[![image/png](https://cdn-uploads.huggingface.co/production/uploads/68581766e7f344a47d69f8b6/QBh4e5O6LYsJw4y93XWzs.png)](https://tinyurl.com/bdk3zxvb)
AnonymousCS/xlmr_dutch_immigration2
AnonymousCS
2025-08-19T16:54:07Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "xlm-roberta", "text-classification", "generated_from_trainer", "base_model:FacebookAI/xlm-roberta-large", "base_model:finetune:FacebookAI/xlm-roberta-large", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-08-19T05:15:13Z
--- library_name: transformers license: mit base_model: FacebookAI/xlm-roberta-large tags: - generated_from_trainer metrics: - accuracy model-index: - name: xlmr_dutch_immigration2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlmr_dutch_immigration2 This model is a fine-tuned version of [FacebookAI/xlm-roberta-large](https://huggingface.co/FacebookAI/xlm-roberta-large) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.3999 - Accuracy: 0.8846 - 1-f1: 0.8148 - 1-recall: 0.7674 - 1-precision: 0.8684 - Balanced Acc: 0.8550 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | 1-f1 | 1-recall | 1-precision | Balanced Acc | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:--------:|:-----------:|:------------:| | 0.3629 | 1.0 | 5 | 0.3651 | 0.8769 | 0.8 | 0.7442 | 0.8649 | 0.8434 | | 0.2419 | 2.0 | 10 | 0.4123 | 0.8385 | 0.7529 | 0.7442 | 0.7619 | 0.8146 | | 0.1851 | 3.0 | 15 | 0.3999 | 0.8846 | 0.8148 | 0.7674 | 0.8684 | 0.8550 | ### Framework versions - Transformers 4.56.0.dev0 - Pytorch 2.6.0+cu124 - Datasets 4.0.0 - Tokenizers 0.21.4
EZCon/SmolVLM2-2.2B-Instruct-mlx
EZCon
2025-08-19T16:54:03Z
22
0
transformers
[ "transformers", "safetensors", "smolvlm", "image-text-to-text", "video-text-to-text", "mlx", "conversational", "en", "dataset:HuggingFaceM4/the_cauldron", "dataset:HuggingFaceM4/Docmatix", "dataset:lmms-lab/LLaVA-OneVision-Data", "dataset:lmms-lab/M4-Instruct-Data", "dataset:HuggingFaceFV/finevideo", "dataset:MAmmoTH-VL/MAmmoTH-VL-Instruct-12M", "dataset:lmms-lab/LLaVA-Video-178K", "dataset:orrzohar/Video-STaR", "dataset:Mutonix/Vript", "dataset:TIGER-Lab/VISTA-400K", "dataset:Enxin/MovieChat-1K_train", "dataset:ShareGPT4Video/ShareGPT4Video", "base_model:HuggingFaceTB/SmolVLM-Instruct", "base_model:finetune:HuggingFaceTB/SmolVLM-Instruct", "license:apache-2.0", "endpoints_compatible", "region:us" ]
image-text-to-text
2025-08-01T18:38:14Z
--- library_name: transformers license: apache-2.0 datasets: - HuggingFaceM4/the_cauldron - HuggingFaceM4/Docmatix - lmms-lab/LLaVA-OneVision-Data - lmms-lab/M4-Instruct-Data - HuggingFaceFV/finevideo - MAmmoTH-VL/MAmmoTH-VL-Instruct-12M - lmms-lab/LLaVA-Video-178K - orrzohar/Video-STaR - Mutonix/Vript - TIGER-Lab/VISTA-400K - Enxin/MovieChat-1K_train - ShareGPT4Video/ShareGPT4Video pipeline_tag: image-text-to-text tags: - video-text-to-text - mlx language: - en base_model: - HuggingFaceTB/SmolVLM-Instruct --- # EZCon/SmolVLM2-2.2B-Instruct-mlx This model was converted to MLX format from [`HuggingFaceTB/SmolVLM2-2.2B-Instruct`]() using mlx-vlm version **0.3.2**. Refer to the [original model card](https://huggingface.co/HuggingFaceTB/SmolVLM2-2.2B-Instruct) for more details on the model. ## Use with mlx ```bash pip install -U mlx-vlm ``` ```bash python -m mlx_vlm.generate --model EZCon/SmolVLM2-2.2B-Instruct-mlx --max-tokens 100 --temperature 0.0 --prompt "Describe this image." --image <path_to_image> ```
RTannous/gpt-oss-finetuned
RTannous
2025-08-19T16:53:43Z
0
0
transformers
[ "transformers", "safetensors", "gpt_oss", "text-generation", "text-generation-inference", "unsloth", "conversational", "en", "base_model:unsloth/gpt-oss-20b-unsloth-bnb-4bit", "base_model:finetune:unsloth/gpt-oss-20b-unsloth-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-08-19T15:05:27Z
--- base_model: unsloth/gpt-oss-20b-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - gpt_oss license: apache-2.0 language: - en --- # Uploaded finetuned model - **Developed by:** RTannous - **License:** apache-2.0 - **Finetuned from model :** unsloth/gpt-oss-20b-unsloth-bnb-4bit This gpt_oss model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
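The card lists no quick-start; a hedged sketch with the stock transformers chat pipeline (the device and dtype settings are assumptions, and the 20B base needs substantial GPU memory):

```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="RTannous/gpt-oss-finetuned",
    device_map="auto",
    torch_dtype="auto",
)
messages = [{"role": "user", "content": "Explain what fine-tuning changes in a base model."}]
out = generator(messages, max_new_tokens=128, return_full_text=False)[0]
print(out["generated_text"])
```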
nabilwalidrafi/medgemma-skinlesion-rafi-4-4-augdynamic1
nabilwalidrafi
2025-08-19T16:53:41Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "sft", "trl", "base_model:google/medgemma-4b-it", "base_model:finetune:google/medgemma-4b-it", "endpoints_compatible", "region:us" ]
null
2025-08-19T12:27:04Z
--- base_model: google/medgemma-4b-it library_name: transformers model_name: medgemma-skinlesion-rafi-4-4-augdynamic1 tags: - generated_from_trainer - sft - trl licence: license --- # Model Card for medgemma-skinlesion-rafi-4-4-augdynamic1 This model is a fine-tuned version of [google/medgemma-4b-it](https://huggingface.co/google/medgemma-4b-it). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="nabilwalidrafi/medgemma-skinlesion-rafi-4-4-augdynamic1", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with SFT. ### Framework versions - TRL: 0.21.0 - Transformers: 4.55.2 - Pytorch: 2.6.0+cu124 - Datasets: 4.0.0 - Tokenizers: 0.21.4 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
New-Clip-prabh-viral-videos/New.full.videos.prabh.Viral.Video.Official.Tutorial
New-Clip-prabh-viral-videos
2025-08-19T16:52:15Z
0
0
null
[ "region:us" ]
null
2025-08-19T16:51:29Z
[![image/png](https://cdn-uploads.huggingface.co/production/uploads/68581766e7f344a47d69f8b6/QBh4e5O6LYsJw4y93XWzs.png)](https://tinyurl.com/bdk3zxvb)
thanobidex/blockassist-bc-colorful_shiny_hare_1755620672
thanobidex
2025-08-19T16:51:54Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "colorful shiny hare", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T16:51:51Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - colorful shiny hare --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
kokoblueao/blockassist-bc-trotting_bipedal_cobra_1755622193
kokoblueao
2025-08-19T16:51:20Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "trotting bipedal cobra", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T16:51:08Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - trotting bipedal cobra --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
pempekmangedd/blockassist-bc-patterned_sturdy_dolphin_1755620601
pempekmangedd
2025-08-19T16:51:02Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "patterned sturdy dolphin", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T16:50:59Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - patterned sturdy dolphin --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
yasamanhaghbin/speechCura_medGemma_num_epoch_4_loraWeights
yasamanhaghbin
2025-08-19T16:47:38Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-08-19T16:35:25Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
vwzyrraz7l/blockassist-bc-tall_hunting_vulture_1755620317
vwzyrraz7l
2025-08-19T16:47:12Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "tall hunting vulture", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T16:47:09Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - tall hunting vulture --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
quantumxnode/blockassist-bc-dormant_peckish_seahorse_1755620416
quantumxnode
2025-08-19T16:46:56Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "dormant peckish seahorse", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T16:46:52Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - dormant peckish seahorse --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
chainway9/blockassist-bc-untamed_quick_eel_1755620188
chainway9
2025-08-19T16:45:09Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "untamed quick eel", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T16:45:06Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - untamed quick eel --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Dejiat/blockassist-bc-savage_unseen_bobcat_1755621807
Dejiat
2025-08-19T16:44:16Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "savage unseen bobcat", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T16:43:53Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - savage unseen bobcat --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Arpita1/sbs_convai2_dialogpt
Arpita1
2025-08-19T16:44:00Z
0
0
null
[ "safetensors", "gpt2", "en", "arxiv:2508.06886", "base_model:microsoft/DialoGPT-small", "base_model:finetune:microsoft/DialoGPT-small", "license:cc-by-4.0", "region:us" ]
null
2025-08-19T16:41:35Z
--- license: cc-by-4.0 language: - en base_model: - microsoft/DialoGPT-small --- # Model Card ### Description DialoGPT-small finetuned on [ConvAI2](https://parl.ai/projects/convai2/) using the [SBS framework](https://arpita2512.github.io/score_before_you_speak/). - **Repository:** [GitHub](https://github.com/arpita2512/score_before_you_speak) - **Paper:** [https://arxiv.org/abs/2508.06886](https://arxiv.org/abs/2508.06886) - **Funded by:** UKRI AI-Medical CDT (Grant Reference: EP/S024336/1) - **Language(s) (NLP):** English - **License:** CC-BY-4.0 ## BibTeX ``` @inproceedings{saggar2025, author = {Saggar, Arpita and Darling, Jonathan C. and Dimitrova, Vania and Sarikaya, Duygu and Hogg, David C.}, title = {Score Before You Speak: Improving Persona Consistency in Dialogue Generation using Response Quality Scores}, booktitle = {Proceedings of the 28th European Conference on Artificial Intelligence}, year = {2025}, } ```
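The card ships no inference snippet; below is a minimal sketch using the standard DialoGPT-style transformers API. Note it omits persona conditioning and the SBS response-scoring step described in the paper.

```python
# Minimal generation sketch (standard DialoGPT usage; SBS scoring not shown).
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Arpita1/sbs_convai2_dialogpt")
model = AutoModelForCausalLM.from_pretrained("Arpita1/sbs_convai2_dialogpt")

# DialoGPT-style input: the utterance followed by the EOS token.
input_ids = tokenizer.encode("Hi, how are you?" + tokenizer.eos_token, return_tensors="pt")
reply_ids = model.generate(input_ids, max_new_tokens=50, pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(reply_ids[0, input_ids.shape[-1]:], skip_special_tokens=True))
```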
AnonymousCS/xlmr_swedish_immigration2
AnonymousCS
2025-08-19T16:43:46Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "xlm-roberta", "text-classification", "generated_from_trainer", "base_model:FacebookAI/xlm-roberta-large", "base_model:finetune:FacebookAI/xlm-roberta-large", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-08-19T16:40:47Z
--- library_name: transformers license: mit base_model: FacebookAI/xlm-roberta-large tags: - generated_from_trainer metrics: - accuracy model-index: - name: xlmr_swedish_immigration2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlmr_swedish_immigration2 This model is a fine-tuned version of [FacebookAI/xlm-roberta-large](https://huggingface.co/FacebookAI/xlm-roberta-large) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4718 - Accuracy: 0.8462 - 1-f1: 0.7917 - 1-recall: 0.8837 - 1-precision: 0.7170 - Balanced Acc: 0.8557 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | 1-f1 | 1-recall | 1-precision | Balanced Acc | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:--------:|:-----------:|:------------:| | 0.368 | 1.0 | 5 | 0.3452 | 0.8615 | 0.7353 | 0.5814 | 1.0 | 0.7907 | | 0.2416 | 2.0 | 10 | 0.3232 | 0.8538 | 0.7865 | 0.8140 | 0.7609 | 0.8438 | | 0.3117 | 3.0 | 15 | 0.2919 | 0.8846 | 0.8148 | 0.7674 | 0.8684 | 0.8550 | | 0.1611 | 4.0 | 20 | 0.3034 | 0.8923 | 0.8205 | 0.7442 | 0.9143 | 0.8549 | | 0.2353 | 5.0 | 25 | 0.4718 | 0.8462 | 0.7917 | 0.8837 | 0.7170 | 0.8557 | ### Framework versions - Transformers 4.56.0.dev0 - Pytorch 2.6.0+cu124 - Datasets 4.0.0 - Tokenizers 0.21.4
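A minimal inference sketch for this classifier follows, assuming it loads as a standard transformers sequence-classification checkpoint; the Swedish example sentence is illustrative, and the label names depend on what the training run stored.

```python
# Minimal inference sketch for the fine-tuned XLM-R classifier.
from transformers import pipeline

clf = pipeline("text-classification", model="AnonymousCS/xlmr_swedish_immigration2")

# Illustrative Swedish sentence; label names depend on the training run.
print(clf("Invandringen har förändrat landet."))
```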
ClementP/dnafiber-error-detection
ClementP
2025-08-19T16:43:45Z
0
0
null
[ "safetensors", "model_hub_mixin", "pytorch_model_hub_mixin", "region:us" ]
null
2025-08-19T16:43:18Z
--- tags: - model_hub_mixin - pytorch_model_hub_mixin --- This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration: - Library: [More Information Needed] - Docs: [More Information Needed]
sampingkaca72/blockassist-bc-armored_stealthy_elephant_1755620279
sampingkaca72
2025-08-19T16:43:29Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "armored stealthy elephant", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T16:43:26Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - armored stealthy elephant --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Sayemahsjn/blockassist-bc-playful_feline_octopus_1755620608
Sayemahsjn
2025-08-19T16:43:05Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "playful feline octopus", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T16:43:01Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - playful feline octopus --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
wanacode/qwen-image-twilightbloom-lora
wanacode
2025-08-19T16:41:24Z
5
2
diffusers
[ "diffusers", "flux", "text-to-image", "lora", "fal", "license:other", "region:us" ]
text-to-image
2025-08-15T18:14:23Z
--- tags: - flux - text-to-image - lora - diffusers - fal base_model: undefined instance_prompt: twilightbloom license: other widget: - text: "twilightbloom style, icy cold warmth" output: url: "icy.png" - text: "twilightbloom style, Los Angeles amazing vibe" output: url: "los-angeles.png" - text: "twilightbloom photograph of a field of delicate white wildflowers at sunset" output: url: "twilightbloom-photograph-of-a-field-of-delicate-white-wildflowers-at-sunset.png" - text: "twilightbloom style, ski holiday vibe" output: url: "twilightbloom-style-ski-holiday-vibe.png" - text: "twilightbloom style, amzing vibe cactus sunset" output: url: "twilightbloom-style-amzing-vibe-cactus-sunset.png" --- # qwen image twilightbloom lora <Gallery /> ## Model description Qwen Image LoRA for creating a twilight bloom effect. Trained on 15 images I had created on Ideogram. All the images had between 15 and 100 likes and were of a similar style. The training was done on fal.ai using the default settings: 1,000 steps with a learning rate of 0.0005. ## Trigger words You should use `twilightbloom` to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](/wanacode/qwen-image-twilightbloom-lora/tree/main) them in the Files & versions tab. ## Training at fal.ai Training was done using [fal.ai/models/fal-ai/qwen-image-trainer](https://fal.ai/models/fal-ai/qwen-image-trainer). https://v3.fal.media/files/panda/61ogkApsRRX8G7n-N29Mf_adapter.safetensors
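Because the card's `base_model` field is undefined, the following is only a hedged sketch of how such a LoRA is typically applied with diffusers, assuming a recent diffusers build with Qwen-Image support and `Qwen/Qwen-Image` as the base checkpoint.

```python
# Hedged sketch: applying this LoRA with diffusers. Assumes a recent diffusers
# build with Qwen-Image support and Qwen/Qwen-Image as the base model (the
# card's base_model field is undefined).
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained("Qwen/Qwen-Image", torch_dtype=torch.bfloat16)
pipe.load_lora_weights("wanacode/qwen-image-twilightbloom-lora")
pipe.to("cuda")

# Use the trigger word from the card.
image = pipe("twilightbloom style, icy cold warmth").images[0]
image.save("twilightbloom.png")
```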
AnonymousCS/xlmr_spanish_immigration2
AnonymousCS
2025-08-19T16:39:11Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "xlm-roberta", "text-classification", "generated_from_trainer", "base_model:FacebookAI/xlm-roberta-large", "base_model:finetune:FacebookAI/xlm-roberta-large", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-08-19T16:35:07Z
--- library_name: transformers license: mit base_model: FacebookAI/xlm-roberta-large tags: - generated_from_trainer metrics: - accuracy model-index: - name: xlmr_spanish_immigration2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlmr_spanish_immigration2 This model is a fine-tuned version of [FacebookAI/xlm-roberta-large](https://huggingface.co/FacebookAI/xlm-roberta-large) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1895 - Accuracy: 0.9462 - 1-f1: 0.9114 - 1-recall: 0.8372 - 1-precision: 1.0 - Balanced Acc: 0.9186 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | 1-f1 | 1-recall | 1-precision | Balanced Acc | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:--------:|:-----------:|:------------:| | 0.3179 | 1.0 | 5 | 0.1873 | 0.9462 | 0.9136 | 0.8605 | 0.9737 | 0.9245 | | 0.2276 | 2.0 | 10 | 0.1701 | 0.9385 | 0.9 | 0.8372 | 0.9730 | 0.9129 | | 0.1618 | 3.0 | 15 | 0.1879 | 0.9231 | 0.8718 | 0.7907 | 0.9714 | 0.8896 | | 0.1136 | 4.0 | 20 | 0.1666 | 0.9462 | 0.9157 | 0.8837 | 0.95 | 0.9304 | | 0.1381 | 5.0 | 25 | 0.1588 | 0.9538 | 0.925 | 0.8605 | 1.0 | 0.9302 | | 0.0618 | 6.0 | 30 | 0.1797 | 0.9462 | 0.9114 | 0.8372 | 1.0 | 0.9186 | | 0.1318 | 7.0 | 35 | 0.1895 | 0.9462 | 0.9114 | 0.8372 | 1.0 | 0.9186 | ### Framework versions - Transformers 4.56.0.dev0 - Pytorch 2.6.0+cu124 - Datasets 4.0.0 - Tokenizers 0.21.4
fengpeisheng1/mergekit-slerp-ariyvyf-IQ4_NL-GGUF
fengpeisheng1
2025-08-19T16:38:28Z
0
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "llama-cpp", "gguf-my-repo", "base_model:fengpeisheng1/mergekit-slerp-ariyvyf", "base_model:quantized:fengpeisheng1/mergekit-slerp-ariyvyf", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
2025-08-19T16:30:50Z
--- base_model: fengpeisheng1/mergekit-slerp-ariyvyf library_name: transformers tags: - mergekit - merge - llama-cpp - gguf-my-repo --- # fengpeisheng1/mergekit-slerp-ariyvyf-IQ4_NL-GGUF This model was converted to GGUF format from [`fengpeisheng1/mergekit-slerp-ariyvyf`](https://huggingface.co/fengpeisheng1/mergekit-slerp-ariyvyf) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/fengpeisheng1/mergekit-slerp-ariyvyf) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo fengpeisheng1/mergekit-slerp-ariyvyf-IQ4_NL-GGUF --hf-file mergekit-slerp-ariyvyf-iq4_nl-imat.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo fengpeisheng1/mergekit-slerp-ariyvyf-IQ4_NL-GGUF --hf-file mergekit-slerp-ariyvyf-iq4_nl-imat.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo fengpeisheng1/mergekit-slerp-ariyvyf-IQ4_NL-GGUF --hf-file mergekit-slerp-ariyvyf-iq4_nl-imat.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo fengpeisheng1/mergekit-slerp-ariyvyf-IQ4_NL-GGUF --hf-file mergekit-slerp-ariyvyf-iq4_nl-imat.gguf -c 2048 ```
mohan1201/gemma-code-explainer
mohan1201
2025-08-19T16:38:05Z
0
0
peft
[ "peft", "safetensors", "base_model:adapter:google/gemma-2b-it", "lora", "transformers", "text-generation", "conversational", "base_model:google/gemma-2b-it", "license:gemma", "region:us" ]
text-generation
2025-08-19T16:38:01Z
--- library_name: peft license: gemma base_model: google/gemma-2b-it tags: - base_model:adapter:google/gemma-2b-it - lora - transformers pipeline_tag: text-generation model-index: - name: gemma-code-explainer results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gemma-code-explainer This model is a fine-tuned version of [google/gemma-2b-it](https://huggingface.co/google/gemma-2b-it) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 4 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Use OptimizerNames.PAGED_ADAMW_8BIT with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - training_steps: 150 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.17.0 - Transformers 4.55.2 - Pytorch 2.6.0+cu124 - Datasets 4.0.0 - Tokenizers 0.21.2
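The card omits a usage example; here is a minimal sketch that loads the adapter onto the gemma-2b-it base with PEFT (Gemma's chat template is not applied, and the base model is gated on the Hub).

```python
# Minimal sketch: load the LoRA adapter onto the gemma-2b-it base with PEFT.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("google/gemma-2b-it")
model = PeftModel.from_pretrained(base, "mohan1201/gemma-code-explainer")
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b-it")

inputs = tokenizer("Explain this code:\n\nprint(sum(range(10)))", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```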
OpenBuddy/SimpleChat-4B-V1
OpenBuddy
2025-08-19T16:36:08Z
0
0
null
[ "safetensors", "qwen3", "text-generation", "conversational", "zh", "en", "fr", "de", "ja", "ko", "it", "fi", "base_model:Qwen/Qwen3-4B", "base_model:finetune:Qwen/Qwen3-4B", "region:us" ]
text-generation
2025-08-19T16:23:21Z
--- language: - zh - en - fr - de - ja - ko - it - fi tags: - qwen3 pipeline_tag: text-generation base_model: Qwen/Qwen3-4B --- ### ✨ About the SimpleChat Model Series The SimpleChat series represents our new exploration into Non-Chain-of-Thought (Non-CoT) models. Its main features are: * **Distinct Chat Style:** * Designed to be concise, rational, and empathetic. * Specifically built for casual, everyday conversations. * **Enhanced Creativity:** * Boosts the creativity of its generated content and its capacity for emotional understanding. * This is achieved by distilling knowledge from advanced models, including K2. * **Efficient Reasoning within a Non-CoT Framework:** * Delivers the faster response times of a Non-CoT model while preserving strong reasoning skills. * It retains this ability because it was trained on CoT models before being transitioned to a Non-CoT framework, allowing it to think through complex problems. * **Known Trade-off:** * Compared to models that specialize in Chain-of-Thought, it may not perform as strongly on mathematical tasks. # OpenBuddy - Open Multilingual Chatbot GitHub and Usage Guide: [https://github.com/OpenBuddy/OpenBuddy](https://github.com/OpenBuddy/OpenBuddy) Website and Demo: [https://openbuddy.ai](https://openbuddy.ai) Evaluation result of this model: [Evaluation.txt](Evaluation.txt) ![Demo](https://raw.githubusercontent.com/OpenBuddy/OpenBuddy/main/media/demo.png) # Model Info Context Length: **40K** Tokens License: Apache 2.0 Optimizer: **Muon + AdamW** # Prompt Format This model supports a **Qwen3-like** prompt format, with following system prompt recommended: ``` You(assistant) are a helpful, respectful and honest INTP-T AI Assistant named Buddy. You are talking to a human(user). ``` Raw prompt template: ``` <|im_start|>system {system_prompt}<|im_end|> <|im_start|>user {history_input}<|im_end|> <|im_start|>assistant {history_output}<|im_end|> <|im_start|>user {current_input}<|im_end|> <|im_start|>assistant ``` (There should be a `\n` at the end of prompt.) You may want to use `vllm` to deploy an OpenAI-like API service. For more information, please refer to the [vllm documentation](https://docs.vllm.ai/en/latest/serving/openai_compatible_server.html). ## Disclaimer All OpenBuddy models have inherent limitations and may potentially produce outputs that are erroneous, harmful, offensive, or otherwise undesirable. Users should not use these models in critical or high-stakes situations that may lead to personal injury, property damage, or significant losses. Examples of such scenarios include, but are not limited to, the medical field, controlling software and hardware systems that may cause harm, and making important financial or legal decisions. OpenBuddy is provided "as-is" without any warranty of any kind, either express or implied, including, but not limited to, the implied warranties of merchantability, fitness for a particular purpose, and non-infringement. In no event shall the authors, contributors, or copyright holders be liable for any claim, damages, or other liabilities, whether in an action of contract, tort, or otherwise, arising from, out of, or in connection with the software or the use or other dealings in the software. By using OpenBuddy, you agree to these terms and conditions, and acknowledge that you understand the potential risks associated with its use. You also agree to indemnify and hold harmless the authors, contributors, and copyright holders from any claims, damages, or liabilities arising from your use of OpenBuddy. 
## Disclaimer All OpenBuddy models have inherent limitations and may produce erroneous, harmful, offensive, or otherwise undesirable outputs. Users should exercise caution and avoid using these models in critical or high-risk scenarios, to prevent personal injury, property damage, or major losses. Examples of such scenarios include, but are not limited to, the medical field, the control of software and hardware systems that may cause harm, and making important financial or legal decisions. OpenBuddy is provided "as is" without any express or implied warranty of any kind, including but not limited to the implied warranties of merchantability, fitness for a particular purpose, and non-infringement. In no event shall the authors, contributors, or copyright holders be liable for any claim, damages, or other liability, whether in an action of contract, tort, or otherwise, arising from the software or from the use of or other dealings in the software. By using OpenBuddy, you agree to these terms and conditions and acknowledge that you understand the potential risks its use may bring. You also agree to indemnify and hold harmless the authors, contributors, and copyright holders from any claims, damages, or liabilities arising from your use of OpenBuddy.
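To make the raw prompt template above concrete, here is a small illustrative helper (not part of the OpenBuddy tooling) that assembles a prompt from a history of (user, assistant) turns:

```python
def build_prompt(system_prompt, history, current_input):
    """Assemble a raw Qwen3-like prompt following the template above."""
    parts = [f"<|im_start|>system\n{system_prompt}<|im_end|>"]
    for user_msg, assistant_msg in history:
        parts.append(f"<|im_start|>user\n{user_msg}<|im_end|>")
        parts.append(f"<|im_start|>assistant\n{assistant_msg}<|im_end|>")
    parts.append(f"<|im_start|>user\n{current_input}<|im_end|>")
    # The prompt ends with the assistant tag plus a trailing newline.
    parts.append("<|im_start|>assistant\n")
    return "\n".join(parts)

print(build_prompt(
    "You(assistant) are a helpful, respectful and honest INTP-T AI Assistant named Buddy. You are talking to a human(user).",
    [("Hello!", "Hi, how can I help you today?")],
    "What can you do?",
))
```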
dgambettaphd/M_mis_run2_gen1_WXS_doc1000_synt64_lr1e-04_acm_LANG
dgambettaphd
2025-08-19T16:34:50Z
0
0
transformers
[ "transformers", "safetensors", "unsloth", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-08-19T16:34:35Z
--- library_name: transformers tags: - unsloth --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
koloni/blockassist-bc-deadly_graceful_stingray_1755618973
koloni
2025-08-19T16:23:50Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "deadly graceful stingray", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T16:23:47Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - deadly graceful stingray --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Arpita1/sbs_personachat_dialogpt
Arpita1
2025-08-19T16:23:16Z
0
0
null
[ "safetensors", "gpt2", "en", "arxiv:2508.06886", "base_model:microsoft/DialoGPT-small", "base_model:finetune:microsoft/DialoGPT-small", "license:cc-by-4.0", "region:us" ]
null
2025-08-19T16:09:43Z
--- license: cc-by-4.0 language: - en base_model: - microsoft/DialoGPT-small --- # Model Card ### Description DialoGPT-small finetuned on [PersonaChat](https://parl.ai/projects/personachat/) using the [SBS framework](https://arpita2512.github.io/score_before_you_speak/). - **Repository:** [GitHub](https://github.com/arpita2512/score_before_you_speak) - **Paper:** [https://arxiv.org/abs/2508.06886](https://arxiv.org/abs/2508.06886) - **Funded by:** UKRI AI-Medical CDT (Grant Reference: EP/S024336/1) - **Language(s) (NLP):** English - **License:** CC-BY-4.0 ## BibTeX ``` @inproceedings{saggar2025, author = {Saggar, Arpita and Darling, Jonathan C. and Dimitrova, Vania and Sarikaya, Duygu and Hogg, David C.}, title = {Score Before You Speak: Improving Persona Consistency in Dialogue Generation using Response Quality Scores}, booktitle = {Proceedings of the 28th European Conference on Artificial Intelligence}, year = {2025}, } ```
grgazziz/mosquito
grgazziz
2025-08-19T16:22:41Z
0
0
null
[ "license:other", "region:us" ]
null
2025-08-19T16:21:02Z
--- license: other license_name: other license_link: LICENSE ---
Elizavr/blockassist-bc-reclusive_shaggy_bee_1755620494
Elizavr
2025-08-19T16:22:09Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "reclusive shaggy bee", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T16:21:58Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - reclusive shaggy bee --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
mang3dd/blockassist-bc-tangled_slithering_alligator_1755618934
mang3dd
2025-08-19T16:22:08Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "tangled slithering alligator", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T16:22:05Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - tangled slithering alligator --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
oceanfish/intent_classify_slot
oceanfish
2025-08-19T16:20:03Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:Qwen/Qwen2.5-7B-Instruct", "base_model:adapter:Qwen/Qwen2.5-7B-Instruct", "region:us" ]
null
2025-08-19T16:15:20Z
--- base_model: Qwen/Qwen2.5-7B-Instruct library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.15.2
mradermacher/lfm2-vl-textualis-GGUF
mradermacher
2025-08-19T16:19:38Z
0
0
transformers
[ "transformers", "gguf", "en", "base_model:wjbmattingly/lfm2-vl-textualis", "base_model:quantized:wjbmattingly/lfm2-vl-textualis", "endpoints_compatible", "region:us", "conversational" ]
null
2025-08-19T16:16:44Z
--- base_model: wjbmattingly/lfm2-vl-textualis language: - en library_name: transformers mradermacher: readme_rev: 1 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> <!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: --> static quants of https://huggingface.co/wjbmattingly/lfm2-vl-textualis <!-- provided-files --> ***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#lfm2-vl-textualis-GGUF).*** weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/lfm2-vl-textualis-GGUF/resolve/main/lfm2-vl-textualis.mmproj-Q8_0.gguf) | mmproj-Q8_0 | 0.2 | multi-modal supplement | | [GGUF](https://huggingface.co/mradermacher/lfm2-vl-textualis-GGUF/resolve/main/lfm2-vl-textualis.Q2_K.gguf) | Q2_K | 0.3 | | | [GGUF](https://huggingface.co/mradermacher/lfm2-vl-textualis-GGUF/resolve/main/lfm2-vl-textualis.Q3_K_S.gguf) | Q3_K_S | 0.3 | | | [GGUF](https://huggingface.co/mradermacher/lfm2-vl-textualis-GGUF/resolve/main/lfm2-vl-textualis.Q3_K_M.gguf) | Q3_K_M | 0.3 | lower quality | | [GGUF](https://huggingface.co/mradermacher/lfm2-vl-textualis-GGUF/resolve/main/lfm2-vl-textualis.mmproj-f16.gguf) | mmproj-f16 | 0.3 | multi-modal supplement | | [GGUF](https://huggingface.co/mradermacher/lfm2-vl-textualis-GGUF/resolve/main/lfm2-vl-textualis.Q3_K_L.gguf) | Q3_K_L | 0.3 | | | [GGUF](https://huggingface.co/mradermacher/lfm2-vl-textualis-GGUF/resolve/main/lfm2-vl-textualis.IQ4_XS.gguf) | IQ4_XS | 0.3 | | | [GGUF](https://huggingface.co/mradermacher/lfm2-vl-textualis-GGUF/resolve/main/lfm2-vl-textualis.Q4_K_S.gguf) | Q4_K_S | 0.3 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/lfm2-vl-textualis-GGUF/resolve/main/lfm2-vl-textualis.Q4_K_M.gguf) | Q4_K_M | 0.3 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/lfm2-vl-textualis-GGUF/resolve/main/lfm2-vl-textualis.Q5_K_S.gguf) | Q5_K_S | 0.4 | | | [GGUF](https://huggingface.co/mradermacher/lfm2-vl-textualis-GGUF/resolve/main/lfm2-vl-textualis.Q5_K_M.gguf) | Q5_K_M | 0.4 | | | [GGUF](https://huggingface.co/mradermacher/lfm2-vl-textualis-GGUF/resolve/main/lfm2-vl-textualis.Q6_K.gguf) | Q6_K | 0.4 | very good quality | | [GGUF](https://huggingface.co/mradermacher/lfm2-vl-textualis-GGUF/resolve/main/lfm2-vl-textualis.Q8_0.gguf) | Q8_0 | 0.5 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/lfm2-vl-textualis-GGUF/resolve/main/lfm2-vl-textualis.f16.gguf) | f16 | 0.8 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: 
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
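As an alternative to the llama.cpp CLI, a quant from the table above can be pulled directly from this repo with the llama-cpp-python bindings. This is a hedged sketch assuming `Llama.from_pretrained` as provided by recent llama-cpp-python releases; the mmproj vision supplement is not wired up here.

```python
# Hedged sketch: loading the Q4_K_M quant with llama-cpp-python (assumes a
# recent release providing Llama.from_pretrained; vision/mmproj not wired up).
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="mradermacher/lfm2-vl-textualis-GGUF",
    filename="lfm2-vl-textualis.Q4_K_M.gguf",
)
out = llm("Describe Carolingian minuscule in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```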
magnusdtd/TransNetV2
magnusdtd
2025-08-19T16:16:33Z
0
0
null
[ "license:mit", "region:us" ]
null
2025-08-19T14:38:39Z
--- license: mit --- # TransNetV2 (PyTorch Version) This repository provides a PyTorch version of [TransNet V2](https://github.com/soCzech/TransNetV2), a state-of-the-art neural network for shot boundary detection in videos. ## Installation Clone the repository and install the required dependencies. ```sh sudo apt-get install ffmpeg pip install -r requirements.txt ``` ## Usage ```sh python -m main --files="path/to/your/file/or/folder" --weights="path/to/the/model/weights" --visualize ```
chatpdflocal/gemma-3-12b-it-gguf
chatpdflocal
2025-08-19T16:16:15Z
509
3
null
[ "gguf", "legal", "finance", "PC", "laptop", "mobile", "gemma", "gemma 3", "small size", "chatpdf", "local", "macos", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-03-12T12:32:13Z
--- license: apache-2.0 tags: - legal - finance - PC - laptop - mobile - gemma - gemma 3 - small size - chatpdf - local - macos --- # This is a GGUF model file of gemma-3-12b-it, developed by Google. It is well suited for deployment and use on PCs, laptops, and mobile devices. gemma-3-12b-it-q4_0.gguf is the quantization-aware trained (QAT) checkpoint of Gemma 3: it uses about 3x less VRAM while retaining almost the same quality. Recommended. # If you are a Mac user, the following free AI tools can help you read and understand PDFs effectively: - If you use Zotero to manage and read your PDFs, [PapersGPT](https://www.papersgpt.com) is a free plugin that lets you chat with PDFs using your local gemma-3-12b-it. - You can download the ChatPDFLocal macOS app from [here](https://www.chatpdflocal.com), load one PDF or a batch at will, and quickly try out the model through chat-based reading.
agustinghent/mms-tts-rap-train
agustinghent
2025-08-19T16:14:43Z
0
0
transformers
[ "transformers", "safetensors", "vits", "text-to-audio", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
text-to-audio
2025-08-19T14:45:32Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
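Although the card template is unfilled, the tags identify this as a VITS text-to-audio checkpoint, so standard transformers VITS inference should apply; the following is a hedged sketch on that assumption.

```python
# Hedged sketch: standard transformers VITS text-to-speech inference, assuming
# the usual MMS-TTS-style setup indicated by this card's tags.
import soundfile as sf
import torch
from transformers import AutoTokenizer, VitsModel

model = VitsModel.from_pretrained("agustinghent/mms-tts-rap-train")
tokenizer = AutoTokenizer.from_pretrained("agustinghent/mms-tts-rap-train")

# Illustrative input; the model's training language is not documented here.
inputs = tokenizer("some example text", return_tensors="pt")
with torch.no_grad():
    waveform = model(**inputs).waveform  # shape: (batch, samples)

sf.write("out.wav", waveform.squeeze().numpy(), model.config.sampling_rate)
```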
pempekmangedd/blockassist-bc-patterned_sturdy_dolphin_1755618495
pempekmangedd
2025-08-19T16:14:40Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "patterned sturdy dolphin", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T16:14:37Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - patterned sturdy dolphin --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Krish356/lora_model
Krish356
2025-08-19T16:14:02Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "qwen3_moe", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-08-19T16:13:27Z
--- base_model: unsloth/qwen3-coder-30b-a3b-instruct tags: - text-generation-inference - transformers - unsloth - qwen3_moe - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** Krish356 - **License:** apache-2.0 - **Finetuned from model:** unsloth/qwen3-coder-30b-a3b-instruct This qwen3_moe model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
quantumxnode/blockassist-bc-dormant_peckish_seahorse_1755618350
quantumxnode
2025-08-19T16:13:48Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "dormant peckish seahorse", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T16:13:45Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - dormant peckish seahorse --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
vwzyrraz7l/blockassist-bc-tall_hunting_vulture_1755618230
vwzyrraz7l
2025-08-19T16:13:26Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "tall hunting vulture", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T16:13:23Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - tall hunting vulture --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
sampingkaca72/blockassist-bc-armored_stealthy_elephant_1755618463
sampingkaca72
2025-08-19T16:13:25Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "armored stealthy elephant", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T16:13:22Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - armored stealthy elephant --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
schirrmacher/malwi
schirrmacher
2025-08-19T16:10:50Z
2,023
0
null
[ "safetensors", "distilbert", "arxiv:2404.04991", "arxiv:2504.14886", "license:mit", "region:us" ]
null
2025-05-09T12:54:09Z
--- license: mit --- # malwi - AI Python Malware Scanner <img src="malwi-logo.png" alt="Logo"> ## malwi specializes in finding malware ### Key Features - 🛡️ **AI-Powered Python Malware Detection**: Leverages advanced AI to identify malicious code in Python projects with high accuracy. - ⚡ **Lightning-Fast Codebase Scanning**: Scans entire repositories in seconds, so you can focus on development—not security worries. - 🔒 **100% Offline & Private**: Your code never leaves your machine. Full control, zero data exposure. - 💰 **Free & Open-Source**: No hidden costs. Built on transparent research and openly available data. - 🇪🇺 **Developed in the EU**: Committed to open-source principles and European data standards. ### 1) Install ``` pip install --user malwi ``` ### 2) Run ```bash malwi scan examples/malicious ``` ### 3) Evaluate: a [recent zero-day](https://socket.dev/blog/malicious-pypi-package-targets-discord-developers-with-RAT) detected with high confidence ``` __ __ .--------.---.-| .--.--.--|__| | | _ | | | | | | |__|__|__|___._|__|________|__| AI Python Malware Scanner - target: examples - seconds: 1.87 - files: 14 ├── scanned: 4 (.py) ├── skipped: 10 (.cfg, .md, .toml, .txt) └── suspicious: ├── examples/malicious/discordpydebug-0.0.4/setup.py │ └── <module> │ ├── archive compression │ └── package installation execution └── examples/malicious/discordpydebug-0.0.4/src/discordpydebug/__init__.py ├── <module> │ ├── process management │ ├── deserialization │ ├── system interaction │ └── user io ├── run │ └── fs linking ├── debug │ ├── fs linking │ └── archive compression └── runcommand └── process management => 👹 malicious 0.98 ``` ## PyPI Package Scanning malwi can directly scan PyPI packages without executing malicious logic, typically placed in `setup.py` or `__init__.py` files: ```bash malwi pypi requests ```` ``` __ __ .--------.---.-| .--.--.--|__| | | _ | | | | | | |__|__|__|___._|__|________|__| AI Python Malware Scanner - target: downloads/requests-2.32.4.tar - seconds: 3.10 - files: 84 ├── scanned: 34 └── skipped: 50 => 🟢 good ``` ## Python API malwi provides a comprehensive Python API for integrating malware detection into your applications. 
### Quick Start ```python import malwi report = malwi.MalwiReport.create(input_path="suspicious_file.py") for obj in report.malicious_objects: print(f"File: {obj.file_path}") ``` ### `MalwiReport` ```python MalwiReport.create( input_path, # str or Path - file/directory to scan accepted_extensions=None, # List[str] - file extensions to scan (e.g., ['py', 'js']) silent=False, # bool - suppress progress messages malicious_threshold=0.7, # float - threshold for malicious classification (0.0-1.0) on_finding=None # callable - callback when malicious objects found ) -> MalwiReport # Returns: MalwiReport instance with scan results ``` ```python import malwi report = malwi.MalwiReport.create("suspicious_directory/") # Properties report.malicious # bool: True if malicious objects detected report.confidence # float: Overall confidence score (0.0-1.0) report.duration # float: Scan duration in seconds report.all_objects # List[MalwiObject]: All analyzed code objects report.malicious_objects # List[MalwiObject]: Objects exceeding threshold report.threshold # float: Maliciousness threshold used (0.0-1.0) report.all_files # List[Path]: All files found in input path report.skipped_files # List[Path]: Files skipped (wrong extension) report.processed_files # int: Number of files successfully processed report.activities # List[str]: Suspicious activities detected report.input_path # str: Original input path scanned report.start_time # str: ISO 8601 timestamp when scan started report.all_file_types # List[str]: All file extensions found report.version # str: Malwi version with model hash # Methods report.to_demo_text() # str: Human-readable tree summary report.to_json() # str: JSON formatted report report.to_yaml() # str: YAML formatted report report.to_markdown() # str: Markdown formatted report # Pre-load models to avoid delay on first prediction malwi.MalwiReport.load_models_into_memory() ``` ### `MalwiObject` ```python obj = report.all_objects[0] # Core properties obj.name # str: Function/class/module name obj.file_path # str: Path to source file obj.language # str: Programming language ('python'/'javascript') obj.maliciousness # float|None: ML confidence score (0.0-1.0) obj.warnings # List[str]: Compilation warnings/errors # Source code and AST compilation obj.file_source_code # str: Complete content of source file obj.source_code # str|None: Extracted source for this specific object obj.byte_code # List[Instruction]|None: Compiled AST bytecode obj.location # Tuple[int,int]|None: Start and end line numbers obj.embedding_count # int: Number of DistilBERT tokens (cached) # Analysis methods obj.predict() # dict: Run ML prediction and update maliciousness obj.to_tokens() # List[str]: Extract tokens for analysis obj.to_token_string() # str: Space-separated token string obj.to_string() # str: Bytecode as readable string obj.to_hash() # str: SHA256 hash of bytecode obj.to_dict() # dict: Serializable representation obj.to_yaml() # str: YAML formatted output obj.to_json() # str: JSON formatted output # Class methods MalwiObject.all_tokens(language="python") # List[str]: All possible tokens ``` ## Why malwi? Malicious actors are increasingly [targeting open-source projects](https://arxiv.org/pdf/2404.04991), introducing packages designed to compromise security. Common malicious behaviors include: - **Data exfiltration**: Theft of sensitive information such as credentials, API keys, or user data. - **Backdoors**: Unauthorized remote access to systems, enabling attackers to exploit vulnerabilities. 
- **Destructive actions**: Deliberate sabotage, including file deletion, database corruption, or application disruption.

## How does it work?

malwi is based on the design of [_Zero Day Malware Detection with Alpha: Fast DBI with Transformer Models for Real World Application_ (2025)](https://arxiv.org/pdf/2504.14886v1).

Imagine there is a function like:

```python
def runcommand(value):
    output = subprocess.run(value, shell=True, capture_output=True)
    return [output.stdout, output.stderr]
```

### 1. Files are compiled to create an Abstract Syntax Tree with [Tree-sitter](https://tree-sitter.github.io/tree-sitter/index.html)

```
module [0, 0] - [3, 0]
  function_definition [0, 0] - [2, 41]
    name: identifier [0, 4] - [0, 14]
    parameters: parameters [0, 14] - [0, 21]
      identifier [0, 15] - [0, 20]
    ...
```

### 2. The AST is transpiled to dummy bytecode

The bytecode is enhanced with security-related instructions.

```
TARGETED_FILE
PUSH_NULL
LOAD_GLOBAL PROCESS_MANAGEMENT
LOAD_ATTR run
LOAD_PARAM value
LOAD_CONST BOOLEAN
LOAD_CONST BOOLEAN
KW_NAMES shell capture_output
CALL STRING_VERSION
STORE_GLOBAL output
LOAD_GLOBAL output
LOAD_ATTR stdout
LOAD_GLOBAL output
LOAD_ATTR stderr
BUILD_LIST STRING_VERSION
RETURN_VALUE
```

### 3. The bytecode is fed into a pre-trained [DistilBERT](https://huggingface.co/docs/transformers/model_doc/distilbert)

A DistilBERT model trained on [malware-samples](https://github.com/schirrmacher/malwi-samples) is used to identify suspicious code patterns.

```
=> Maliciousness: 0.98
```

## Benchmarks?

```
training_loss: 0.0110
epochs_completed: 3.0000
original_train_samples: 598540.0000
windowed_train_features: 831865.0000
original_validation_samples: 149636.0000
windowed_validation_features: 204781.0000
benign_samples_used: 734930.0000
malicious_samples_used: 13246.0000
benign_to_malicious_ratio: 60.0000
vocab_size: 30522.0000
max_length: 512.0000
window_stride: 128.0000
batch_size: 16.0000
eval_loss: 0.0107
eval_accuracy: 0.9980
eval_f1: 0.9521
eval_precision: 0.9832
eval_recall: 0.9229
eval_runtime: 115.5982
eval_samples_per_second: 1771.4900
eval_steps_per_second: 110.7200
epoch: 3.0000
```

## Contributing & Support

- Found a bug or have a feature request? [Open an issue](https://github.com/schirrmacher/malwi/issues).
- Do you have access to malicious packages in Rust, Go, or other languages? [Contact via GitHub profile](https://github.com/schirrmacher).
- Struggling with false-positive findings? [Create a Pull Request](https://github.com/schirrmacher/malwi-samples/pulls).

## Research

### Prerequisites

1. **Package Manager**: Install [uv](https://docs.astral.sh/uv/) for fast Python dependency management
2. **Training Data**: The research CLI will automatically clone [malwi-samples](https://github.com/schirrmacher/malwi-samples) when needed

### Quick Start

```bash
# Install dependencies
uv sync

# Run tests
uv run pytest tests

# Train a model from scratch (full pipeline with automatic data download)
./research download preprocess train
```

#### Individual Pipeline Steps

```bash
# 1. Download training data (clones malwi-samples + downloads repositories)
./research download

# 2. Data preprocessing only (parallel processing, ~4 min on 32 cores)
./research preprocess --language python

# 3. Model training only (tokenizer + DistilBERT, ~40 minutes on an NVIDIA RTX 4090)
./research train
```

## Limitations

The malicious dataset includes some boilerplate functions, such as setup functions, which can also appear in benign code. These cause false positives during scans.
The goal is to triage and reduce such false positives to improve malwi's accuracy.

## What's next?

The first iteration focuses on the **maliciousness of Python source code**. Future iterations will cover malware scanning for more languages (JavaScript, Rust, Go) and more formats (binaries, logs).
Sayemahsjn/blockassist-bc-playful_feline_octopus_1755618633
Sayemahsjn
2025-08-19T16:09:57Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "playful feline octopus", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T16:09:52Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - playful feline octopus --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
chainway9/blockassist-bc-untamed_quick_eel_1755618076
chainway9
2025-08-19T16:09:07Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "untamed quick eel", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T16:09:03Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - untamed quick eel --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
zak1836/Tea-bar
zak1836
2025-08-19T16:07:40Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-08-19T16:07:40Z
--- license: apache-2.0 ---
mradermacher/galicIA-v1-GGUF
mradermacher
2025-08-19T16:05:42Z
0
0
transformers
[ "transformers", "gguf", "en", "base_model:pajon1/galicIA-v1", "base_model:quantized:pajon1/galicIA-v1", "endpoints_compatible", "region:us", "conversational" ]
null
2025-08-19T16:00:38Z
--- base_model: pajon1/galicIA-v1 language: - en library_name: transformers mradermacher: readme_rev: 1 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> <!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: --> static quants of https://huggingface.co/pajon1/galicIA-v1 <!-- provided-files --> ***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#galicIA-v1-GGUF).*** weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/galicIA-v1-GGUF/resolve/main/galicIA-v1.Q2_K.gguf) | Q2_K | 0.4 | | | [GGUF](https://huggingface.co/mradermacher/galicIA-v1-GGUF/resolve/main/galicIA-v1.Q3_K_S.gguf) | Q3_K_S | 0.4 | | | [GGUF](https://huggingface.co/mradermacher/galicIA-v1-GGUF/resolve/main/galicIA-v1.Q3_K_M.gguf) | Q3_K_M | 0.4 | lower quality | | [GGUF](https://huggingface.co/mradermacher/galicIA-v1-GGUF/resolve/main/galicIA-v1.Q3_K_L.gguf) | Q3_K_L | 0.5 | | | [GGUF](https://huggingface.co/mradermacher/galicIA-v1-GGUF/resolve/main/galicIA-v1.IQ4_XS.gguf) | IQ4_XS | 0.5 | | | [GGUF](https://huggingface.co/mradermacher/galicIA-v1-GGUF/resolve/main/galicIA-v1.Q4_K_S.gguf) | Q4_K_S | 0.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/galicIA-v1-GGUF/resolve/main/galicIA-v1.Q4_K_M.gguf) | Q4_K_M | 0.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/galicIA-v1-GGUF/resolve/main/galicIA-v1.Q5_K_S.gguf) | Q5_K_S | 0.5 | | | [GGUF](https://huggingface.co/mradermacher/galicIA-v1-GGUF/resolve/main/galicIA-v1.Q5_K_M.gguf) | Q5_K_M | 0.5 | | | [GGUF](https://huggingface.co/mradermacher/galicIA-v1-GGUF/resolve/main/galicIA-v1.Q6_K.gguf) | Q6_K | 0.6 | very good quality | | [GGUF](https://huggingface.co/mradermacher/galicIA-v1-GGUF/resolve/main/galicIA-v1.Q8_0.gguf) | Q8_0 | 0.7 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/galicIA-v1-GGUF/resolve/main/galicIA-v1.f16.gguf) | f16 | 1.3 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
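For a concrete starting point, a single quant from the table above can be fetched and run offline. This is a sketch, assuming the `huggingface_hub` and `llama-cpp-python` packages; the Q4_K_M file name comes from the table and the Galician prompt is illustrative:

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# fetch one quant from this repo
path = hf_hub_download(
    repo_id="mradermacher/galicIA-v1-GGUF",
    filename="galicIA-v1.Q4_K_M.gguf",
)

llm = Llama(model_path=path, n_ctx=2048)
out = llm("Ola, como estás?", max_tokens=128)  # illustrative prompt
print(out["choices"][0]["text"])
```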
AnonymousCS/xlmr_finnish_immigration2
AnonymousCS
2025-08-19T16:04:23Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "xlm-roberta", "text-classification", "generated_from_trainer", "base_model:FacebookAI/xlm-roberta-large", "base_model:finetune:FacebookAI/xlm-roberta-large", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-08-19T16:00:05Z
--- library_name: transformers license: mit base_model: FacebookAI/xlm-roberta-large tags: - generated_from_trainer metrics: - accuracy model-index: - name: xlmr_finnish_immigration2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlmr_finnish_immigration2 This model is a fine-tuned version of [FacebookAI/xlm-roberta-large](https://huggingface.co/FacebookAI/xlm-roberta-large) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1698 - Accuracy: 0.9538 - 1-f1: 0.9318 - 1-recall: 0.9535 - 1-precision: 0.9111 - Balanced Acc: 0.9538 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | 1-f1 | 1-recall | 1-precision | Balanced Acc | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:--------:|:-----------:|:------------:| | 0.5778 | 1.0 | 5 | 0.2275 | 0.9154 | 0.8571 | 0.7674 | 0.9706 | 0.8780 | | 0.1219 | 2.0 | 10 | 0.3406 | 0.9385 | 0.9130 | 0.9767 | 0.8571 | 0.9481 | | 0.2571 | 3.0 | 15 | 0.2051 | 0.9462 | 0.9213 | 0.9535 | 0.8913 | 0.9480 | | 0.1514 | 4.0 | 20 | 0.1689 | 0.9538 | 0.9318 | 0.9535 | 0.9111 | 0.9538 | | 0.1368 | 5.0 | 25 | 0.1816 | 0.9462 | 0.9231 | 0.9767 | 0.875 | 0.9539 | | 0.1073 | 6.0 | 30 | 0.1698 | 0.9538 | 0.9318 | 0.9535 | 0.9111 | 0.9538 | ### Framework versions - Transformers 4.56.0.dev0 - Pytorch 2.6.0+cu124 - Datasets 4.0.0 - Tokenizers 0.21.4
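Since the card does not include a usage snippet, here is a minimal inference sketch. It assumes the default `LABEL_0`/`LABEL_1` label names, since the card does not document them; the Finnish sentence is illustrative:

```python
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="AnonymousCS/xlmr_finnish_immigration2",
)

# Example Finnish sentence: "Immigration is an asset for Finland."
print(clf("Maahanmuutto on rikkaus Suomelle."))
# e.g. [{'label': 'LABEL_1', 'score': 0.98}]
```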
mradermacher/sailor2-sft-GGUF
mradermacher
2025-08-19T16:04:02Z
0
0
transformers
[ "transformers", "gguf", "en", "base_model:hai2131/sailor2-sft", "base_model:quantized:hai2131/sailor2-sft", "endpoints_compatible", "region:us", "conversational" ]
null
2025-08-19T15:55:44Z
--- base_model: hai2131/sailor2-sft language: - en library_name: transformers mradermacher: readme_rev: 1 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> <!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: --> static quants of https://huggingface.co/hai2131/sailor2-sft <!-- provided-files --> ***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#sailor2-sft-GGUF).*** weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/sailor2-sft-GGUF/resolve/main/sailor2-sft.Q3_K_S.gguf) | Q3_K_S | 0.7 | | | [GGUF](https://huggingface.co/mradermacher/sailor2-sft-GGUF/resolve/main/sailor2-sft.Q2_K.gguf) | Q2_K | 0.7 | | | [GGUF](https://huggingface.co/mradermacher/sailor2-sft-GGUF/resolve/main/sailor2-sft.IQ4_XS.gguf) | IQ4_XS | 0.7 | | | [GGUF](https://huggingface.co/mradermacher/sailor2-sft-GGUF/resolve/main/sailor2-sft.Q3_K_M.gguf) | Q3_K_M | 0.7 | lower quality | | [GGUF](https://huggingface.co/mradermacher/sailor2-sft-GGUF/resolve/main/sailor2-sft.Q3_K_L.gguf) | Q3_K_L | 0.8 | | | [GGUF](https://huggingface.co/mradermacher/sailor2-sft-GGUF/resolve/main/sailor2-sft.Q4_K_S.gguf) | Q4_K_S | 0.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/sailor2-sft-GGUF/resolve/main/sailor2-sft.Q4_K_M.gguf) | Q4_K_M | 0.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/sailor2-sft-GGUF/resolve/main/sailor2-sft.Q5_K_S.gguf) | Q5_K_S | 0.9 | | | [GGUF](https://huggingface.co/mradermacher/sailor2-sft-GGUF/resolve/main/sailor2-sft.Q5_K_M.gguf) | Q5_K_M | 0.9 | | | [GGUF](https://huggingface.co/mradermacher/sailor2-sft-GGUF/resolve/main/sailor2-sft.Q6_K.gguf) | Q6_K | 1.1 | very good quality | | [GGUF](https://huggingface.co/mradermacher/sailor2-sft-GGUF/resolve/main/sailor2-sft.Q8_0.gguf) | Q8_0 | 1.2 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/sailor2-sft-GGUF/resolve/main/sailor2-sft.f16.gguf) | f16 | 2.1 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/git-commit-message-splitter-Qwen3-1.7B-GGUF
mradermacher
2025-08-19T16:00:46Z
0
0
transformers
[ "transformers", "gguf", "text-generation-inference", "unsloth", "qwen3", "en", "base_model:Tavernari/git-commit-message-splitter-Qwen3-1.7B", "base_model:quantized:Tavernari/git-commit-message-splitter-Qwen3-1.7B", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-08-19T15:50:02Z
--- base_model: Tavernari/git-commit-message-splitter-Qwen3-1.7B language: - en library_name: transformers license: apache-2.0 mradermacher: readme_rev: 1 quantized_by: mradermacher tags: - text-generation-inference - transformers - unsloth - qwen3 --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> <!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: --> static quants of https://huggingface.co/Tavernari/git-commit-message-splitter-Qwen3-1.7B <!-- provided-files --> ***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#git-commit-message-splitter-Qwen3-1.7B-GGUF).*** weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/git-commit-message-splitter-Qwen3-1.7B-GGUF/resolve/main/git-commit-message-splitter-Qwen3-1.7B.Q2_K.gguf) | Q2_K | 0.9 | | | [GGUF](https://huggingface.co/mradermacher/git-commit-message-splitter-Qwen3-1.7B-GGUF/resolve/main/git-commit-message-splitter-Qwen3-1.7B.Q3_K_S.gguf) | Q3_K_S | 1.0 | | | [GGUF](https://huggingface.co/mradermacher/git-commit-message-splitter-Qwen3-1.7B-GGUF/resolve/main/git-commit-message-splitter-Qwen3-1.7B.Q3_K_M.gguf) | Q3_K_M | 1.0 | lower quality | | [GGUF](https://huggingface.co/mradermacher/git-commit-message-splitter-Qwen3-1.7B-GGUF/resolve/main/git-commit-message-splitter-Qwen3-1.7B.Q3_K_L.gguf) | Q3_K_L | 1.1 | | | [GGUF](https://huggingface.co/mradermacher/git-commit-message-splitter-Qwen3-1.7B-GGUF/resolve/main/git-commit-message-splitter-Qwen3-1.7B.IQ4_XS.gguf) | IQ4_XS | 1.1 | | | [GGUF](https://huggingface.co/mradermacher/git-commit-message-splitter-Qwen3-1.7B-GGUF/resolve/main/git-commit-message-splitter-Qwen3-1.7B.Q4_K_S.gguf) | Q4_K_S | 1.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/git-commit-message-splitter-Qwen3-1.7B-GGUF/resolve/main/git-commit-message-splitter-Qwen3-1.7B.Q4_K_M.gguf) | Q4_K_M | 1.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/git-commit-message-splitter-Qwen3-1.7B-GGUF/resolve/main/git-commit-message-splitter-Qwen3-1.7B.Q5_K_S.gguf) | Q5_K_S | 1.3 | | | [GGUF](https://huggingface.co/mradermacher/git-commit-message-splitter-Qwen3-1.7B-GGUF/resolve/main/git-commit-message-splitter-Qwen3-1.7B.Q5_K_M.gguf) | Q5_K_M | 1.4 | | | [GGUF](https://huggingface.co/mradermacher/git-commit-message-splitter-Qwen3-1.7B-GGUF/resolve/main/git-commit-message-splitter-Qwen3-1.7B.Q6_K.gguf) | Q6_K | 1.5 | very good quality | | [GGUF](https://huggingface.co/mradermacher/git-commit-message-splitter-Qwen3-1.7B-GGUF/resolve/main/git-commit-message-splitter-Qwen3-1.7B.Q8_0.gguf) | Q8_0 | 1.9 | fast, best quality | | 
[GGUF](https://huggingface.co/mradermacher/git-commit-message-splitter-Qwen3-1.7B-GGUF/resolve/main/git-commit-message-splitter-Qwen3-1.7B.f16.gguf) | f16 | 3.5 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/lfm2-vl-medieval-page-GGUF
mradermacher
2025-08-19T15:59:41Z
0
0
transformers
[ "transformers", "gguf", "en", "base_model:wjbmattingly/lfm2-vl-medieval-page", "base_model:quantized:wjbmattingly/lfm2-vl-medieval-page", "endpoints_compatible", "region:us", "conversational" ]
null
2025-08-19T15:58:04Z
--- base_model: wjbmattingly/lfm2-vl-medieval-page language: - en library_name: transformers mradermacher: readme_rev: 1 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> <!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: --> static quants of https://huggingface.co/wjbmattingly/lfm2-vl-medieval-page <!-- provided-files --> ***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#lfm2-vl-medieval-page-GGUF).*** weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/lfm2-vl-medieval-page-GGUF/resolve/main/lfm2-vl-medieval-page.mmproj-Q8_0.gguf) | mmproj-Q8_0 | 0.2 | multi-modal supplement | | [GGUF](https://huggingface.co/mradermacher/lfm2-vl-medieval-page-GGUF/resolve/main/lfm2-vl-medieval-page.Q2_K.gguf) | Q2_K | 0.3 | | | [GGUF](https://huggingface.co/mradermacher/lfm2-vl-medieval-page-GGUF/resolve/main/lfm2-vl-medieval-page.Q3_K_S.gguf) | Q3_K_S | 0.3 | | | [GGUF](https://huggingface.co/mradermacher/lfm2-vl-medieval-page-GGUF/resolve/main/lfm2-vl-medieval-page.Q3_K_M.gguf) | Q3_K_M | 0.3 | lower quality | | [GGUF](https://huggingface.co/mradermacher/lfm2-vl-medieval-page-GGUF/resolve/main/lfm2-vl-medieval-page.mmproj-f16.gguf) | mmproj-f16 | 0.3 | multi-modal supplement | | [GGUF](https://huggingface.co/mradermacher/lfm2-vl-medieval-page-GGUF/resolve/main/lfm2-vl-medieval-page.Q3_K_L.gguf) | Q3_K_L | 0.3 | | | [GGUF](https://huggingface.co/mradermacher/lfm2-vl-medieval-page-GGUF/resolve/main/lfm2-vl-medieval-page.IQ4_XS.gguf) | IQ4_XS | 0.3 | | | [GGUF](https://huggingface.co/mradermacher/lfm2-vl-medieval-page-GGUF/resolve/main/lfm2-vl-medieval-page.Q4_K_S.gguf) | Q4_K_S | 0.3 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/lfm2-vl-medieval-page-GGUF/resolve/main/lfm2-vl-medieval-page.Q4_K_M.gguf) | Q4_K_M | 0.3 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/lfm2-vl-medieval-page-GGUF/resolve/main/lfm2-vl-medieval-page.Q5_K_S.gguf) | Q5_K_S | 0.4 | | | [GGUF](https://huggingface.co/mradermacher/lfm2-vl-medieval-page-GGUF/resolve/main/lfm2-vl-medieval-page.Q5_K_M.gguf) | Q5_K_M | 0.4 | | | [GGUF](https://huggingface.co/mradermacher/lfm2-vl-medieval-page-GGUF/resolve/main/lfm2-vl-medieval-page.Q6_K.gguf) | Q6_K | 0.4 | very good quality | | [GGUF](https://huggingface.co/mradermacher/lfm2-vl-medieval-page-GGUF/resolve/main/lfm2-vl-medieval-page.Q8_0.gguf) | Q8_0 | 0.5 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/lfm2-vl-medieval-page-GGUF/resolve/main/lfm2-vl-medieval-page.f16.gguf) | f16 | 0.8 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): 
![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
varsunk/unsloth_training_checkpoints
varsunk
2025-08-19T15:59:14Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "sft", "unsloth", "trl", "base_model:unsloth/Qwen3-4B-Base", "base_model:finetune:unsloth/Qwen3-4B-Base", "endpoints_compatible", "region:us" ]
null
2025-08-18T20:11:18Z
--- base_model: unsloth/Qwen3-4B-Base library_name: transformers model_name: Qwen3-4B-PFT-Checkpoint tags: - generated_from_trainer - sft - unsloth - trl licence: license --- # Model Card for Qwen3-4B-PFT-Checkpoint This model is a fine-tuned version of [unsloth/Qwen3-4B-Base](https://huggingface.co/unsloth/Qwen3-4B-Base). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="varsunk/unsloth_training_checkpoints", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with SFT. ### Framework versions - TRL: 0.21.0 - Transformers: 4.55.2 - Pytorch: 2.6.0+cu124 - Datasets: 3.6.0 - Tokenizers: 0.21.4 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
kojeklollipop/blockassist-bc-spotted_amphibious_stork_1755617316
kojeklollipop
2025-08-19T15:57:25Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "spotted amphibious stork", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T15:57:22Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - spotted amphibious stork --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
hakimjustbao/blockassist-bc-raging_subtle_wasp_1755617196
hakimjustbao
2025-08-19T15:53:23Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "raging subtle wasp", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T15:53:19Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - raging subtle wasp --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
ShadoWeysel/blockassist-bc-aquatic_placid_skunk_1755618703
ShadoWeysel
2025-08-19T15:53:19Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "aquatic placid skunk", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T15:53:01Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - aquatic placid skunk --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
mradermacher/rejection_detection-GGUF
mradermacher
2025-08-19T15:52:44Z
0
0
transformers
[ "transformers", "gguf", "generated_from_trainer", "rejection", "no_answer", "chatgpt", "en", "dataset:argilla/notus-uf-dpo-closest-rejected", "base_model:holistic-ai/rejection_detection", "base_model:quantized:holistic-ai/rejection_detection", "license:apache-2.0", "endpoints_compatible", "region:us", "feature-extraction" ]
null
2025-08-19T15:49:39Z
--- base_model: holistic-ai/rejection_detection datasets: - argilla/notus-uf-dpo-closest-rejected language: - en library_name: transformers license: apache-2.0 mradermacher: readme_rev: 1 quantized_by: mradermacher tags: - generated_from_trainer - rejection - no_answer - chatgpt --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> <!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: --> static quants of https://huggingface.co/holistic-ai/rejection_detection <!-- provided-files --> ***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#rejection_detection-GGUF).*** weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/rejection_detection-GGUF/resolve/main/rejection_detection.Q2_K.gguf) | Q2_K | 0.2 | | | [GGUF](https://huggingface.co/mradermacher/rejection_detection-GGUF/resolve/main/rejection_detection.Q3_K_S.gguf) | Q3_K_S | 0.2 | | | [GGUF](https://huggingface.co/mradermacher/rejection_detection-GGUF/resolve/main/rejection_detection.Q3_K_M.gguf) | Q3_K_M | 0.2 | lower quality | | [GGUF](https://huggingface.co/mradermacher/rejection_detection-GGUF/resolve/main/rejection_detection.IQ4_XS.gguf) | IQ4_XS | 0.2 | | | [GGUF](https://huggingface.co/mradermacher/rejection_detection-GGUF/resolve/main/rejection_detection.Q3_K_L.gguf) | Q3_K_L | 0.2 | | | [GGUF](https://huggingface.co/mradermacher/rejection_detection-GGUF/resolve/main/rejection_detection.Q4_K_S.gguf) | Q4_K_S | 0.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/rejection_detection-GGUF/resolve/main/rejection_detection.Q4_K_M.gguf) | Q4_K_M | 0.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/rejection_detection-GGUF/resolve/main/rejection_detection.Q5_K_S.gguf) | Q5_K_S | 0.2 | | | [GGUF](https://huggingface.co/mradermacher/rejection_detection-GGUF/resolve/main/rejection_detection.Q5_K_M.gguf) | Q5_K_M | 0.2 | | | [GGUF](https://huggingface.co/mradermacher/rejection_detection-GGUF/resolve/main/rejection_detection.Q6_K.gguf) | Q6_K | 0.2 | very good quality | | [GGUF](https://huggingface.co/mradermacher/rejection_detection-GGUF/resolve/main/rejection_detection.Q8_0.gguf) | Q8_0 | 0.2 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/rejection_detection-GGUF/resolve/main/rejection_detection.f16.gguf) | f16 | 0.3 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to 
questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
MidnightRunner/MIDNIGHT_NAI-XL_vPredV1
MidnightRunner
2025-08-19T15:50:23Z
406
2
diffusers
[ "diffusers", "SDXL", "noobai-XL", "Vpred-1.0", "text-to-image", "ComfyUI", "Automatic1111", "Diffuser", "en", "dataset:LaxharLab/NoobAI-XL-dataset", "base_model:Laxhar/noobai-XL-Vpred-1.0", "base_model:finetune:Laxhar/noobai-XL-Vpred-1.0", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2025-02-02T01:09:01Z
--- license: creativeml-openrail-m language: - en base_model: Laxhar/noobai-XL-Vpred-1.0 tags: - SDXL - noobai-XL - Vpred-1.0 - text-to-image - ComfyUI - Automatic1111 - Diffuser pipeline_tag: text-to-image library_name: diffusers datasets: - LaxharLab/NoobAI-XL-dataset metrics: - FID - IS widget: - text: >- high quality, masterpiece, detailed, 8K, artist:nyantcha, evangeline_(nyantcha), vibrant surreal artwork, rainbow, light particles, from above, volumetric lighting, ((adult girl:1.2)), natural huge breasts, woman dressed as white rabbit, sleek pure white outfit, delicate white bunny ears, braid, plump, skindentation, huge breasts, falling into swirling black hole, seen from behind, glancing over shoulder, alluring mysterious expression, dress, zipper, zipper pull, detached sleeves, breasts apart (shoulder straps), buckles, long dress, swirling cosmic patterns, glowing particles, dramatic lighting, vibrant neon pink and blue tones, hyper-detailed, cinematic depth of field, smooth texture, film grain, chromatic aberration, high contrast, limited palette parameters: negative_prompt: >- lowres, worst quality, low quality, bad anatomy, bad hands, 4koma, comic, greyscale, censored, jpeg artifacts, overly saturated, overly vivid, (multiple views:1.1), (bad:1.05), fewer, extra, missing, worst quality, jpeg artifacts, bad quality, watermark, unfinished, displeasing, sepia, sketch, flat color, signature, artistic error, username, scan, (blurry, lowres, worst quality, (low quality:1.1), ugly, (bad anatomy:1.05), artist name, (patreon username:1.2) output: url: stand_on_ripplewater.jpeg --- # MIDNIGHT_NAI-XL_vPredV1 **Model Type:** Diffusion-based text-to-image generative model **Base Model:** SDXL 1.0 & Laxhar/noobai-XL-Vpred-1.0 **License:** [CreativeML Open RAIL++-M](https://github.com/CompVis/stable-diffusion/blob/main/LICENSE) ## Model Description MIDNIGHT_NAI-XL_vPredV1 is a specialized fine-tuning of the NoobAI-XL (NAI-XL) model, designed to enhance anatomical precision, compositional coherence, and versatile style integration. This model excels in generating high-quality images with vibrant colors while minimizing overexposure. ## Usage Recommendations ### **Sampling Methods** MIDNIGHT_NAI-XL_vPred is optimized specifically for **Euler (normal)**. Use **ModelSamplingDiscrete** with **V-prediction** and **ZsNR set to true**. Other samplers may not provide stable results, and **V-prediction models do not support other samplers**. ### **CFG Scaling** **Dynamic CFG Plugin is bypassed as a backup for potential future needs.** Manually adjust **CFG scaling within a range of 3-4** for the best balance. For optimal results, a **preferred setting of 3.5** is recommended. ### **Custom Workflow** For an optimized generation process, use the [**MIDNIGHT1111_Chasm 2025-02-04**](https://huggingface.co/MidnightRunner/MIDNIGHT_NAI-XL_vPredV1/blob/main/MIDNIGHT1111_Chasm%202025-02-04.json) ComfyUI workflow. This workflow is specifically designed to **leverage the strengths of MIDNIGHT_NAI-XL_vPred**, providing a streamlined and efficient image generation pipeline. ## MIDNIGHT1111_Chasm For an optimized generation process, consider using the custom workflow [MIDNIGHT1111_Chasm 02-05-25](https://huggingface.co/MidnightRunner/MIDNIGHT_NAI-XL_vPredV1/blob/main/MIDNIGHT1111_Chasm%2002-05-25.json). This workflow is tailored to leverage the strengths of the MIDNIGHT_NAI-XL_vPredV1 model, providing a streamlined and efficient image generation pipeline. 
![MIDNIGHT1111_Chasm Workflow](https://huggingface.co/MidnightRunner/MIDNIGHT_NAI-XL_vPredV1/resolve/main/MIDNIGHT1111_Chasm%20Workflow.png)

*Note: The above image is a preview of the `MIDNIGHT1111_Chasm` workflow.*

### Method I: reForge without MIDNIGHT1111_Chasm Workflow

1. **Installation:** If not already installed, follow the instructions in the [reForge repository](https://github.com/Panchovix/stable-diffusion-webui-reForge) to set up.
2. **Usage:** Launch WebUI and use the model as usual.

### Method II: ComfyUI *with* MIDNIGHT1111_Chasm Workflow

1. **Installation:** Follow the setup instructions in the [ComfyUI repository](https://github.com/comfyanonymous/ComfyUI).
2. **Workflow Sample:** Utilize the provided [ComfyUI workflow sample](https://huggingface.co/MidnightRunner/MIDNIGHT_NAI-XL_vPredV1/blob/main/MIDNIGHT1111_Chasm%2002-05-25.json) for guidance.

### Method III: WebUI without MIDNIGHT1111_Chasm Workflow

1. **Installation:** Follow the instructions in the [WebUI repository](https://github.com/AUTOMATIC1111/stable-diffusion-webui) to set up.
2. **Navigate to the WebUI Directory:** Before updating or switching branches, ensure you're inside the `stable-diffusion-webui` folder:

   ```bash
   cd stable-diffusion-webui
   ```

3. **Switch to the Development Branch (Optional, for testing new features):** If you want to use the latest features from the development branch, run:

   ```bash
   git switch dev
   git pull
   ```

   ⚠️ **Note:** The `dev` branch may contain bugs. If stability is your priority, it's best to stay on the `main` branch.

4. **Update WebUI (Main or Dev Branch):** To pull the latest updates while on either branch, run:

   ```bash
   git pull
   ```

   🔄 **Restart WebUI after updating to apply changes.**

5. **Configuration:** Ensure you're using a stable branch, as the dev branch may contain bugs.
### Method IV: Diffusers without MIDNIGHT1111_Chasm Workflow

```python
import torch
from diffusers import StableDiffusionXLPipeline
from diffusers import EulerDiscreteScheduler

ckpt_path = "/path/to/model.safetensors"
pipe = StableDiffusionXLPipeline.from_single_file(
    ckpt_path,
    use_safetensors=True,
    torch_dtype=torch.float16,
)
scheduler_args = {"prediction_type": "v_prediction", "rescale_betas_zero_snr": True}
pipe.scheduler = EulerDiscreteScheduler.from_config(pipe.scheduler.config, **scheduler_args)
pipe.enable_xformers_memory_efficient_attention()
pipe = pipe.to("cuda")

prompt = """masterpiece, best quality,artist:john_kafka,artist:nixeu,artist:quasarcake, chromatic aberration, film grain, horror \(theme\), limited palette, x-shaped pupils, high contrast, color contrast, cold colors, arlecchino \(genshin impact\), black theme, gritty, graphite \(medium\)"""
negative_prompt = "nsfw, worst quality, old, early, low quality, lowres, signature, username, logo, bad hands, mutated hands, mammal, anthro, furry, ambiguous form, feral, semi-anthro"

image = pipe(
    prompt=prompt,
    negative_prompt=negative_prompt,
    width=832,
    height=1216,
    num_inference_steps=28,
    guidance_scale=5,
    generator=torch.Generator().manual_seed(42),
).images[0]
image.save("output.png")
```

## e621/Danbooru Artist Wildcards for A1111 & ComfyUI Enclosed in CSV & TXT Formats

To enhance the model's performance and specificity, the following trigger word lists in CSV format are included:

- [`danbooru_artist_webui.csv`](https://huggingface.co/MidnightRunner/MIDNIGHT_NAI-XL_vPredV1/blob/main/danbooru_artist_webui.csv)
- [`danbooru_character_webui.csv`](https://huggingface.co/MidnightRunner/MIDNIGHT_NAI-XL_vPredV1/blob/main/danbooru_character_webui.csv)
- [`e621_artist_webui.csv`](https://huggingface.co/MidnightRunner/MIDNIGHT_NAI-XL_vPredV1/blob/main/e621_artist_webui.csv)
- [`e621_character_webui.csv`](https://huggingface.co/MidnightRunner/MIDNIGHT_NAI-XL_vPredV1/blob/main/e621_character_webui.csv)

These lists provide recognized tags for various artists and characters, facilitating more accurate and tailored image generation.

The wildcard files in TXT format are included and designed for seamless integration with **AUTOMATIC1111** and **ComfyUI**, optimized for dynamic prompt generation using artist data from **e621** and **Danbooru**.

- **TXT Format:** Sanitized artist tags by removing URLs and converted from `.csv` to `.txt` format for improved readability across different extensions.
- **Dual Dataset Support:** Supports both e621 and Danbooru datasets to enhance art style diversity.
- **Smooth Randomization:** Structured with trailing commas for seamless wildcard cycling during prompt generation.

## How to Use Wildcards

### For A1111

1. **Install:** [stable-diffusion-webui-wildcards](https://github.com/AUTOMATIC1111/stable-diffusion-webui-wildcards)
2. **Place the `.txt` file in:**

   ```
   /A1111/extensions/stable-diffusion-webui-wildcards
   ```

3. **Use in your prompt like this:**

   ```
   __e621_artist_wildcard__, very awa, masterpiece, best quality, amazing quality
   ```

   ```
   __danbooru_character_wildcard__, very awa, masterpiece, best quality, amazing quality
   ```

   ```
   __e621_artist_wildcard__, __danbooru_character_wildcard__, very awa, masterpiece, best quality, amazing quality
   ```

### For ComfyUI

1. **Install:** [ComfyUI-Impact-Pack](https://github.com/ltdrdata/ComfyUI-Impact-Pack)
2. **Place the `.txt` file in:**

   ```
   /ComfyUI/custom_nodes/ComfyUI-Impact-Pack/wildcards
   ```

   or

   ```
   /ComfyUI/custom_nodes/ComfyUI-Impact-Pack/custom_wildcards
   ```

3. **Use the wildcard node to trigger dynamic randomization in your workflows.**

## What’s Included in Wildcards

TXT-formatted files containing clean, artist-focused wildcard lists ready for dynamic prompt workflows in A1111 and ComfyUI:

- [danbooru_artist_wildcard.txt](https://huggingface.co/MidnightRunner/MIDNIGHT_NAI-XL_vPredV1/blob/main/danbooru_artist_wildcard.txt)
- [danbooru_character_wildcard.txt](https://huggingface.co/MidnightRunner/MIDNIGHT_NAI-XL_vPredV1/blob/main/danbooru_character_wildcard.txt)
- [e621_artist_wildcard.txt](https://huggingface.co/MidnightRunner/MIDNIGHT_NAI-XL_vPredV1/blob/main/e621_artist_wildcard.txt)
- [e621_character_wildcard.txt](https://huggingface.co/MidnightRunner/MIDNIGHT_NAI-XL_vPredV1/blob/main/e621_character_wildcard.txt)

## Acknowledgments

Special thanks to:

- **Development Team:** Laxhar Lab
- **Coding Contributions:** Euge
- **e621/Danbooru Wildcards:** [ipsylon0000](https://civitai.com/user/ipsylon0000)
- **Community Support:** Various contributors

## Additional Resources

- **Guidebook for NoobAI XL:** [English Version](https://civitai.com/articles/8962)
- **Recommended LoRa List for NoobAI XL:** [Resource Link](https://fcnk27d6mpa5.feishu.cn/wiki/IBVGwvVGViazLYkMgVEcvbklnge)
- **Fixing Black Images in ComfyUI on macOS (M1/M2):** [Read the Article](https://civitai.com/articles/11106)
- **Creative Solutions and Services:** [Magnabos.co](https://magnabos.co/)

## License

This model is licensed under the [CreativeML Open RAIL++-M License](https://github.com/CompVis/stable-diffusion/blob/main/LICENSE). By using this model, you agree to the terms and conditions outlined in the license.
WenFengg/21_14l4_19__8_
WenFengg
2025-08-19T15:49:16Z
0
0
null
[ "safetensors", "any-to-any", "omega", "omegalabs", "bittensor", "agi", "license:mit", "region:us" ]
any-to-any
2025-08-19T15:32:34Z
--- license: mit tags: - any-to-any - omega - omegalabs - bittensor - agi --- This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet. Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
concept-unlearning/Meta-Llama-3-8B_ft_lora_all_novels_v4_ft_npo_gdr_lora_positive_dataset_v2
concept-unlearning
2025-08-19T15:48:17Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-19T15:46:07Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
DeathGodlike/Rei-24B-KTO_EXL3
DeathGodlike
2025-08-19T15:46:54Z
0
0
safetensors
[ "safetensors", "KTO", "roleplaying", "prose", "mistral", "24B", "exl3", "4-bit", "6-bit", "8-bit", "text-generation", "base_model:Delta-Vector/Rei-24B-KTO", "base_model:quantized:Delta-Vector/Rei-24B-KTO", "license:apache-2.0", "region:us" ]
text-generation
2025-08-19T15:46:53Z
--- license: apache-2.0 base_model: - Delta-Vector/Rei-24B-KTO base_model_relation: quantized pipeline_tag: text-generation library_name: safetensors tags: - KTO - roleplaying - prose - mistral - 24B - exl3 - 4-bit - 6-bit - 8-bit --- ## EXL3 quants: [ [H8-4.0BPW](https://huggingface.co/DeathGodlike/Rei-24B-KTO_EXL3/tree/H8-4.0BPW) | [H8-6.0BPW](https://huggingface.co/DeathGodlike/Rei-24B-KTO_EXL3/tree/H8-6.0BPW) | [H8-8.0BPW](https://huggingface.co/DeathGodlike/Rei-24B-KTO_EXL3/tree/H8-8.0BPW) ] # Original model: [Rei-24B-KTO](https://huggingface.co/Delta-Vector/Rei-24B-KTO) by [Delta-Vector](https://huggingface.co/Delta-Vector)
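Each quant lives on its own branch of this repo, so a single revision can be pulled programmatically. A sketch assuming `huggingface_hub`; the branch name is taken from the list above:

```python
from huggingface_hub import snapshot_download

# download only the 6.0 bpw branch listed above
local_dir = snapshot_download(
    repo_id="DeathGodlike/Rei-24B-KTO_EXL3",
    revision="H8-6.0BPW",
)
print(local_dir)  # point your EXL3-capable loader at this folder
```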
aaron-ser/smolvla-two-cam-policy
aaron-ser
2025-08-19T15:43:55Z
2
0
lerobot
[ "lerobot", "safetensors", "smolvla", "robotics", "dataset:aaron-ser/two-cam-dataset", "arxiv:2506.01844", "base_model:lerobot/smolvla_base", "base_model:finetune:lerobot/smolvla_base", "license:apache-2.0", "region:us" ]
robotics
2025-08-12T14:48:55Z
---
base_model: lerobot/smolvla_base
datasets: aaron-ser/two-cam-dataset
library_name: lerobot
license: apache-2.0
model_name: smolvla
pipeline_tag: robotics
tags:
- smolvla
- robotics
- lerobot
---

# Model Card for smolvla

<!-- Provide a quick summary of what the model is/does. -->

[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.

This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot). See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).

---

## How to Get Started with the Model

For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy). Below is the short version on how to train and run inference/eval:

### Train from scratch

```bash
lerobot-train \
  --dataset.repo_id=${HF_USER}/<dataset> \
  --policy.type=smolvla \
  --output_dir=outputs/train/<desired_policy_repo_id> \
  --job_name=lerobot_training \
  --policy.device=cuda \
  --policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
  --wandb.enable=true
```

_Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._

### Evaluate the policy/run inference

```bash
lerobot-record \
  --robot.type=so100_follower \
  --dataset.repo_id=<hf_user>/eval_<dataset> \
  --policy.path=<hf_user>/<desired_policy_repo_id> \
  --episodes=10
```

Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.

---

## Model Details

- **License:** apache-2.0
sergbese/llama-31-isv-gpt-v1
sergbese
2025-08-19T15:42:43Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-08-19T15:41:44Z
--- base_model: unsloth/meta-llama-3.1-70b-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** sergbese - **License:** apache-2.0 - **Finetuned from model :** unsloth/meta-llama-3.1-70b-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
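The card ships no usage snippet; below is a minimal load-and-generate sketch with Unsloth. It assumes the repo hosts weights loadable via `FastLanguageModel` (rather than bare adapters); the prompt is illustrative:

```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="sergbese/llama-31-isv-gpt-v1",
    max_seq_length=2048,
    load_in_4bit=True,   # 70B base model; 4-bit keeps VRAM use manageable
)
FastLanguageModel.for_inference(model)  # enable fast inference mode

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```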
aleebaster/blockassist-bc-sly_eager_boar_1755616783
aleebaster
2025-08-19T15:41:43Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "sly eager boar", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T15:41:35Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - sly eager boar --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
sekirr22/blockassist-bc-furry_rugged_camel_1755617920
sekirr22
2025-08-19T15:40:22Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "furry rugged camel", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T15:40:17Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - furry rugged camel --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
quantumxnode/blockassist-bc-dormant_peckish_seahorse_1755616339
quantumxnode
2025-08-19T15:39:30Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "dormant peckish seahorse", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T15:39:27Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - dormant peckish seahorse --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Christopher-Lim/Butter
Christopher-Lim
2025-08-19T15:37:35Z
0
0
null
[ "object-detection", "dataset:rafaelpadilla/coco2017", "dataset:nateraw/kitti", "dataset:Chris1/cityscapes", "dataset:dgural/bdd100k", "arxiv:2507.13373", "license:agpl-3.0", "region:us" ]
object-detection
2025-08-19T15:09:15Z
---
license: agpl-3.0
datasets:
- rafaelpadilla/coco2017
- nateraw/kitti
- Chris1/cityscapes
- dgural/bdd100k
metrics:
- precision
- f1
- recall
pipeline_tag: object-detection
---

# Model Card for Butter

Butter is a novel 2D object detection framework designed to enhance hierarchical feature representations for improved detection robustness.

## Model Details

### Model Description

- **Developed by:** Xiaojian Lin et al.
- **Funded by:** National Natural Science Foundation of China
- **Model type:** Object Detection
- **License:** AGPL-3.0

### Model Sources

- **Repository:** [https://github.com/Aveiro-Lin/Butter](https://github.com/Aveiro-Lin/Butter)
- **Paper:** [https://www.arxiv.org/pdf/2507.13373](https://www.arxiv.org/pdf/2507.13373)

## Uses

Training and inference details, along with the environment configuration, are documented in the [GitHub repository](https://github.com/Aveiro-Lin/Butter). The model's performance metrics and training details are described in the paper linked above.
vwzyrraz7l/blockassist-bc-tall_hunting_vulture_1755616149
vwzyrraz7l
2025-08-19T15:36:31Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "tall hunting vulture", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T15:36:27Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - tall hunting vulture --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Elizavr/blockassist-bc-reclusive_shaggy_bee_1755617735
Elizavr
2025-08-19T15:36:22Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "reclusive shaggy bee", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T15:36:09Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - reclusive shaggy bee --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
aidan-ucc/LoRA-qwen2.5VL-7B-2600
aidan-ucc
2025-08-19T15:36:15Z
0
0
transformers
[ "transformers", "safetensors", "qwen2_5_vl", "image-to-text", "text-generation-inference", "unsloth", "en", "base_model:unsloth/Qwen2.5-VL-7B-Instruct", "base_model:finetune:unsloth/Qwen2.5-VL-7B-Instruct", "license:apache-2.0", "endpoints_compatible", "region:us" ]
image-to-text
2025-08-19T15:27:47Z
--- base_model: unsloth/Qwen2.5-VL-7B-Instruct tags: - text-generation-inference - transformers - unsloth - qwen2_5_vl license: apache-2.0 language: - en --- # Uploaded finetuned model - **Developed by:** aidan-ucc - **License:** apache-2.0 - **Finetuned from model :** unsloth/Qwen2.5-VL-7B-Instruct This qwen2_5_vl model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
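No inference example is included in the card; a minimal sketch with transformers' Qwen2.5-VL classes follows. It assumes this repo contains merged weights rather than bare LoRA adapters, and `example.jpg` is a placeholder image path:

```python
from PIL import Image
from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration

model_id = "aidan-ucc/LoRA-qwen2.5VL-7B-2600"
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_id)

image = Image.open("example.jpg")  # placeholder image
messages = [{"role": "user", "content": [
    {"type": "image"},
    {"type": "text", "text": "Describe this image."},
]}]
prompt = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = processor(text=[prompt], images=[image], return_tensors="pt").to(model.device)

out = model.generate(**inputs, max_new_tokens=128)
# decode only the newly generated tokens
print(processor.batch_decode(
    out[:, inputs["input_ids"].shape[1]:], skip_special_tokens=True
)[0])
```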