| Column | Type | Min | Max |
|:--|:--|:--|:--|
| modelId | string (length) | 5 | 139 |
| author | string (length) | 2 | 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 | 2025-09-13 18:26:42 |
| downloads | int64 | 0 | 223M |
| likes | int64 | 0 | 11.7k |
| library_name | string (558 classes) | | |
| tags | list (length) | 1 | 4.05k |
| pipeline_tag | string (55 classes) | | |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 | 2025-09-13 18:25:20 |
| card | string (length) | 11 | 1.01M |
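Each record below follows this schema, one field per line in the order above. A hedged sketch of how a table with this schema could be loaded and filtered with the `datasets` library; the repo id `user/models-metadata` is a placeholder, not the actual dataset name:

```python
from datasets import load_dataset

# "user/models-metadata" is a placeholder repo id, not the actual dataset name.
ds = load_dataset("user/models-metadata", split="train")

# Example: keep transformers models that ship a non-trivial model card.
subset = ds.filter(lambda row: row["library_name"] == "transformers" and len(row["card"]) > 100)
print(subset[0]["modelId"], subset[0]["pipeline_tag"])
```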
seraphimzzzz/46927
seraphimzzzz
2025-08-19T23:03:20Z
0
0
null
[ "region:us" ]
null
2025-08-19T23:03:17Z
[View on Civ Archive](https://civarchive.com/models/62508?modelVersionId=67059)
seraphimzzzz/105764
seraphimzzzz
2025-08-19T23:01:59Z
0
0
null
[ "region:us" ]
null
2025-08-19T23:01:56Z
[View on Civ Archive](https://civarchive.com/models/130755?modelVersionId=143521)
ultratopaz/87314
ultratopaz
2025-08-19T23:01:51Z
0
0
null
[ "region:us" ]
null
2025-08-19T23:01:49Z
[View on Civ Archive](https://civarchive.com/models/112043?modelVersionId=120937)
seraphimzzzz/51823
seraphimzzzz
2025-08-19T23:01:05Z
0
0
null
[ "region:us" ]
null
2025-08-19T23:01:00Z
[View on Civ Archive](https://civarchive.com/models/68607?modelVersionId=75049)
crystalline7/67939
crystalline7
2025-08-19T23:00:55Z
0
0
null
[ "region:us" ]
null
2025-08-19T23:00:52Z
[View on Civ Archive](https://civarchive.com/models/91623?modelVersionId=97665)
seraphimzzzz/10107
seraphimzzzz
2025-08-19T23:00:39Z
0
0
null
[ "region:us" ]
null
2025-08-19T23:00:35Z
[View on Civ Archive](https://civarchive.com/models/9092?modelVersionId=10747)
crystalline7/100394
crystalline7
2025-08-19T23:00:12Z
0
0
null
[ "region:us" ]
null
2025-08-19T23:00:09Z
[View on Civ Archive](https://civarchive.com/models/125655?modelVersionId=137280)
seraphimzzzz/535022
seraphimzzzz
2025-08-19T22:59:11Z
0
0
null
[ "region:us" ]
null
2025-08-19T22:59:05Z
[View on Civ Archive](https://civarchive.com/models/462107?modelVersionId=620069)
ultratopaz/80643
ultratopaz
2025-08-19T22:58:07Z
0
0
null
[ "region:us" ]
null
2025-08-19T22:58:03Z
[View on Civ Archive](https://civarchive.com/models/105746?modelVersionId=113516)
seraphimzzzz/26100
seraphimzzzz
2025-08-19T22:57:26Z
0
0
null
[ "region:us" ]
null
2025-08-19T22:57:22Z
[View on Civ Archive](https://civarchive.com/models/26393?modelVersionId=31601)
ultratopaz/66429
ultratopaz
2025-08-19T22:56:53Z
0
0
null
[ "region:us" ]
null
2025-08-19T22:56:50Z
[View on Civ Archive](https://civarchive.com/models/89936?modelVersionId=95772)
crystalline7/16814
crystalline7
2025-08-19T22:56:00Z
0
0
null
[ "region:us" ]
null
2025-08-19T22:55:55Z
[View on Civ Archive](https://civarchive.com/models/17067?modelVersionId=20153)
GeneroGral/Mistral-Nemo-12B_BBQ_Stereo_MERGED7_dropout_batch-wordMatch
GeneroGral
2025-08-19T22:55:55Z
0
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "en", "base_model:unsloth/Mistral-Nemo-Base-2407", "base_model:finetune:unsloth/Mistral-Nemo-Base-2407", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-08-19T22:51:20Z
--- base_model: unsloth/Mistral-Nemo-Base-2407 tags: - text-generation-inference - transformers - unsloth - mistral - trl - sft license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** GeneroGral - **License:** apache-2.0 - **Finetuned from model:** unsloth/Mistral-Nemo-Base-2407 This Mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
Bearrr310/ds_train_grpo_1.5B-0818-dsvllm-acc32
Bearrr310
2025-08-19T22:55:25Z
0
0
transformers
[ "transformers", "tensorboard", "qwen2", "text-generation", "generated_from_trainer", "grpo", "trl", "conversational", "dataset:ds_train_grpo_1.5B-0818-dsvllm-acc32", "arxiv:2402.03300", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-08-19T10:52:28Z
--- datasets: ds_train_grpo_1.5B-0818-dsvllm-acc32 library_name: transformers model_name: ds_train_grpo_1.5B-0818-dsvllm-acc32 tags: - generated_from_trainer - grpo - trl licence: license --- # Model Card for ds_train_grpo_1.5B-0818-dsvllm-acc32 This model is a fine-tuned version of [None](https://huggingface.co/None) on the [ds_train_grpo_1.5B-0818-dsvllm-acc32](https://huggingface.co/datasets/ds_train_grpo_1.5B-0818-dsvllm-acc32) dataset. It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="Bearrr310/ds_train_grpo_1.5B-0818-dsvllm-acc32", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.21.0 - Transformers: 4.55.2 - Pytorch: 2.7.1 - Datasets: 4.0.0 - Tokenizers: 0.21.4 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
ultratopaz/59092
ultratopaz
2025-08-19T22:55:12Z
0
0
null
[ "region:us" ]
null
2025-08-19T22:55:09Z
[View on Civ Archive](https://civarchive.com/models/81480?modelVersionId=86456)
ultratopaz/38152
ultratopaz
2025-08-19T22:55:02Z
0
0
null
[ "region:us" ]
null
2025-08-19T22:55:02Z
[View on Civ Archive](https://civarchive.com/models/47805?modelVersionId=52399)
crystalline7/36455
crystalline7
2025-08-19T22:54:46Z
0
0
null
[ "region:us" ]
null
2025-08-19T22:54:42Z
[View on Civ Archive](https://civarchive.com/models/44884?modelVersionId=49503)
ultratopaz/24443
ultratopaz
2025-08-19T22:52:49Z
0
0
null
[ "region:us" ]
null
2025-08-19T22:52:44Z
[View on Civ Archive](https://civarchive.com/models/24707?modelVersionId=29556)
crystalline7/38157
crystalline7
2025-08-19T22:52:38Z
0
0
null
[ "region:us" ]
null
2025-08-19T22:52:33Z
[View on Civ Archive](https://civarchive.com/models/47820?modelVersionId=52412)
AnonymousCS/xlmr_immigration_combo6_3
AnonymousCS
2025-08-19T22:52:26Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "xlm-roberta", "text-classification", "generated_from_trainer", "base_model:FacebookAI/xlm-roberta-large", "base_model:finetune:FacebookAI/xlm-roberta-large", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-08-19T22:48:23Z
--- library_name: transformers license: mit base_model: FacebookAI/xlm-roberta-large tags: - generated_from_trainer metrics: - accuracy model-index: - name: xlmr_immigration_combo6_3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlmr_immigration_combo6_3 This model is a fine-tuned version of [FacebookAI/xlm-roberta-large](https://huggingface.co/FacebookAI/xlm-roberta-large) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.2175 - Accuracy: 0.9293 - 1-f1: 0.8968 - 1-recall: 0.9228 - 1-precision: 0.8723 - Balanced Acc: 0.9277 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 15 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | 1-f1 | 1-recall | 1-precision | Balanced Acc | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:--------:|:-----------:|:------------:| | 0.1432 | 1.0 | 25 | 0.2095 | 0.9242 | 0.8921 | 0.9421 | 0.8472 | 0.9287 | | 0.0923 | 2.0 | 50 | 0.1510 | 0.9563 | 0.9328 | 0.9112 | 0.9555 | 0.9450 | | 0.1044 | 3.0 | 75 | 0.1735 | 0.9524 | 0.9284 | 0.9266 | 0.9302 | 0.9460 | | 0.1035 | 4.0 | 100 | 0.2175 | 0.9293 | 0.8968 | 0.9228 | 0.8723 | 0.9277 | ### Framework versions - Transformers 4.56.0.dev0 - Pytorch 2.8.0+cu126 - Datasets 4.0.0 - Tokenizers 0.21.4
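The card above leaves its usage sections empty; a minimal inference sketch with the standard transformers text-classification pipeline (a hedged illustration, not from the card itself; the actual label names come from the checkpoint's `id2label` config):

```python
from transformers import pipeline

# Hypothetical usage; the checkpoint's id2label config defines the real label names.
clf = pipeline("text-classification", model="AnonymousCS/xlmr_immigration_combo6_3")
print(clf("New immigration quotas were announced this week."))
```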
crystalline7/25213
crystalline7
2025-08-19T22:52:16Z
0
0
null
[ "region:us" ]
null
2025-08-19T22:52:11Z
[View on Civ Archive](https://civarchive.com/models/25513?modelVersionId=30545)
coelacanthxyz/blockassist-bc-finicky_thriving_grouse_1755642210
coelacanthxyz
2025-08-19T22:52:07Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "finicky thriving grouse", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T22:52:02Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - finicky thriving grouse --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
ultratopaz/85990
ultratopaz
2025-08-19T22:52:05Z
0
0
null
[ "region:us" ]
null
2025-08-19T22:52:02Z
[View on Civ Archive](https://civarchive.com/models/39509?modelVersionId=119927)
seraphimzzzz/34079
seraphimzzzz
2025-08-19T22:51:56Z
0
0
null
[ "region:us" ]
null
2025-08-19T22:51:51Z
[View on Civ Archive](https://civarchive.com/models/39509?modelVersionId=45600)
seraphimzzzz/25253
seraphimzzzz
2025-08-19T22:51:10Z
0
0
null
[ "region:us" ]
null
2025-08-19T22:51:06Z
[View on Civ Archive](https://civarchive.com/models/19239?modelVersionId=30605)
seraphimzzzz/19008
seraphimzzzz
2025-08-19T22:50:59Z
0
0
null
[ "region:us" ]
null
2025-08-19T22:50:55Z
[View on Civ Archive](https://civarchive.com/models/19239?modelVersionId=22829)
GeneroGral/Mistral-Nemo-12B_BBQ_Stereo6_dropout_batch-wordMatch
GeneroGral
2025-08-19T22:50:34Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "mistral", "trl", "en", "base_model:unsloth/Mistral-Nemo-Base-2407", "base_model:finetune:unsloth/Mistral-Nemo-Base-2407", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-08-19T22:50:24Z
--- base_model: unsloth/Mistral-Nemo-Base-2407 tags: - text-generation-inference - transformers - unsloth - mistral - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** GeneroGral - **License:** apache-2.0 - **Finetuned from model:** unsloth/Mistral-Nemo-Base-2407 This Mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
seraphimzzzz/47559
seraphimzzzz
2025-08-19T22:50:01Z
0
0
null
[ "region:us" ]
null
2025-08-19T22:49:57Z
[View on Civ Archive](https://civarchive.com/models/63423?modelVersionId=67976)
crystalline7/81271
crystalline7
2025-08-19T22:49:40Z
0
0
null
[ "region:us" ]
null
2025-08-19T22:49:37Z
[View on Civ Archive](https://civarchive.com/models/17612?modelVersionId=114290)
crystalline7/88151
crystalline7
2025-08-19T22:48:13Z
0
0
null
[ "region:us" ]
null
2025-08-19T22:48:10Z
[View on Civ Archive](https://civarchive.com/models/113358?modelVersionId=122445)
zhuojing-huang/gpt2-portuguese-english-ewc-2
zhuojing-huang
2025-08-19T22:47:41Z
0
0
transformers
[ "transformers", "safetensors", "gpt2", "text-generation", "generated_from_trainer", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-19T10:29:22Z
--- library_name: transformers tags: - generated_from_trainer model-index: - name: gpt2-portuguese-english-ewc-2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gpt2-portuguese-english-ewc-2 This model was trained from scratch on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 64 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 30 - training_steps: 61035 ### Training results ### Framework versions - Transformers 4.53.1 - Pytorch 2.7.1+cu126 - Datasets 3.6.0 - Tokenizers 0.21.2
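The card above gives no usage example; a hedged generation sketch with the standard transformers pipeline (the prompt and sampling settings are illustrative, not from the card):

```python
from transformers import pipeline

# Illustrative prompt and sampling settings; adjust as needed.
gen = pipeline("text-generation", model="zhuojing-huang/gpt2-portuguese-english-ewc-2")
print(gen("Era uma vez", max_new_tokens=40, do_sample=True)[0]["generated_text"])
```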
crystalline7/99930
crystalline7
2025-08-19T22:47:12Z
0
0
null
[ "region:us" ]
null
2025-08-19T22:47:09Z
[View on Civ Archive](https://civarchive.com/models/125177?modelVersionId=136725)
seraphimzzzz/22894
seraphimzzzz
2025-08-19T22:46:33Z
0
0
null
[ "region:us" ]
null
2025-08-19T22:46:29Z
[View on Civ Archive](https://civarchive.com/models/23181?modelVersionId=27689)
seraphimzzzz/48040
seraphimzzzz
2025-08-19T22:46:23Z
0
0
null
[ "region:us" ]
null
2025-08-19T22:46:21Z
[View on Civ Archive](https://civarchive.com/models/64095?modelVersionId=68699)
lilTAT/blockassist-bc-gentle_rugged_hare_1755643544
lilTAT
2025-08-19T22:46:11Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "gentle rugged hare", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T22:46:07Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - gentle rugged hare --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
crystalline7/55294
crystalline7
2025-08-19T22:46:04Z
0
0
null
[ "region:us" ]
null
2025-08-19T22:46:00Z
[View on Civ Archive](https://civarchive.com/models/75907?modelVersionId=80642)
seraphimzzzz/100080
seraphimzzzz
2025-08-19T22:45:55Z
0
0
null
[ "region:us" ]
null
2025-08-19T22:45:52Z
[View on Civ Archive](https://civarchive.com/models/125326?modelVersionId=136894)
crystalline7/48143
crystalline7
2025-08-19T22:45:22Z
0
0
null
[ "region:us" ]
null
2025-08-19T22:45:20Z
[View on Civ Archive](https://civarchive.com/models/64265?modelVersionId=68851)
seraphimzzzz/32237
seraphimzzzz
2025-08-19T22:44:55Z
0
0
null
[ "region:us" ]
null
2025-08-19T22:44:51Z
[View on Civ Archive](https://civarchive.com/models/35827?modelVersionId=42023)
ultratopaz/6908
ultratopaz
2025-08-19T22:44:46Z
0
0
null
[ "region:us" ]
null
2025-08-19T22:44:41Z
[View on Civ Archive](https://civarchive.com/models/5764?modelVersionId=6719)
seraphimzzzz/770787
seraphimzzzz
2025-08-19T22:44:29Z
0
0
null
[ "region:us" ]
null
2025-08-19T22:44:26Z
[View on Civ Archive](https://civarchive.com/models/769368?modelVersionId=860516)
ultratopaz/61978
ultratopaz
2025-08-19T22:43:59Z
0
0
null
[ "region:us" ]
null
2025-08-19T22:43:56Z
[View on Civ Archive](https://civarchive.com/models/80680?modelVersionId=90116)
crystalline7/54021
crystalline7
2025-08-19T22:43:46Z
0
0
null
[ "region:us" ]
null
2025-08-19T22:43:43Z
[View on Civ Archive](https://civarchive.com/models/73874?modelVersionId=78592)
seraphimzzzz/64825
seraphimzzzz
2025-08-19T22:43:19Z
0
0
null
[ "region:us" ]
null
2025-08-19T22:43:17Z
[View on Civ Archive](https://civarchive.com/models/88118?modelVersionId=93781)
ultratopaz/55596
ultratopaz
2025-08-19T22:43:11Z
0
0
null
[ "region:us" ]
null
2025-08-19T22:43:07Z
[View on Civ Archive](https://civarchive.com/models/76316?modelVersionId=81089)
ultratopaz/85909
ultratopaz
2025-08-19T22:42:49Z
0
0
null
[ "region:us" ]
null
2025-08-19T22:42:44Z
[View on Civ Archive](https://civarchive.com/models/23644?modelVersionId=119839)
AnonymousCS/xlmr_immigration_combo6_2
AnonymousCS
2025-08-19T22:42:16Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "xlm-roberta", "text-classification", "generated_from_trainer", "base_model:FacebookAI/xlm-roberta-large", "base_model:finetune:FacebookAI/xlm-roberta-large", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-08-19T22:38:15Z
--- library_name: transformers license: mit base_model: FacebookAI/xlm-roberta-large tags: - generated_from_trainer metrics: - accuracy model-index: - name: xlmr_immigration_combo6_2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlmr_immigration_combo6_2 This model is a fine-tuned version of [FacebookAI/xlm-roberta-large](https://huggingface.co/FacebookAI/xlm-roberta-large) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.2027 - Accuracy: 0.9357 - 1-f1: 0.9038 - 1-recall: 0.9073 - 1-precision: 0.9004 - Balanced Acc: 0.9286 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 15 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | 1-f1 | 1-recall | 1-precision | Balanced Acc | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:--------:|:-----------:|:------------:| | 0.216 | 1.0 | 25 | 0.1942 | 0.9344 | 0.9054 | 0.9421 | 0.8714 | 0.9364 | | 0.1415 | 2.0 | 50 | 0.1577 | 0.9499 | 0.9234 | 0.9073 | 0.94 | 0.9392 | | 0.2224 | 3.0 | 75 | 0.2337 | 0.9242 | 0.8913 | 0.9344 | 0.8521 | 0.9267 | | 0.0914 | 4.0 | 100 | 0.2027 | 0.9357 | 0.9038 | 0.9073 | 0.9004 | 0.9286 | ### Framework versions - Transformers 4.56.0.dev0 - Pytorch 2.8.0+cu126 - Datasets 4.0.0 - Tokenizers 0.21.4
seraphimzzzz/74022
seraphimzzzz
2025-08-19T22:40:58Z
0
0
null
[ "region:us" ]
null
2025-08-19T22:40:55Z
[View on Civ Archive](https://civarchive.com/models/98486?modelVersionId=105327)
crystalline7/33987
crystalline7
2025-08-19T22:40:23Z
0
0
null
[ "region:us" ]
null
2025-08-19T22:40:23Z
[View on Civ Archive](https://civarchive.com/models/39434?modelVersionId=45341)
crystalline7/32802
crystalline7
2025-08-19T22:40:18Z
0
0
null
[ "region:us" ]
null
2025-08-19T22:40:15Z
[View on Civ Archive](https://civarchive.com/models/36916?modelVersionId=42949)
ultratopaz/67456
ultratopaz
2025-08-19T22:40:10Z
0
0
null
[ "region:us" ]
null
2025-08-19T22:40:06Z
[View on Civ Archive](https://civarchive.com/models/16450?modelVersionId=97044)
ultratopaz/83740
ultratopaz
2025-08-19T22:39:38Z
0
0
null
[ "region:us" ]
null
2025-08-19T22:39:34Z
[View on Civ Archive](https://civarchive.com/models/108800?modelVersionId=117184)
rvs/llama3_awq_int4_complete
rvs
2025-08-19T22:39:35Z
0
0
null
[ "onnx", "text-generation-inference", "llama", "llama3", "base_model:meta-llama/Meta-Llama-3-8B-Instruct", "base_model:quantized:meta-llama/Meta-Llama-3-8B-Instruct", "region:us" ]
null
2025-08-19T22:39:00Z
--- tags: - text-generation-inference - llama - llama3 base_model: - meta-llama/Meta-Llama-3-8B-Instruct --- # Llama 3 8B Instruct with Key-Value-Cache enabled in ONNX ONNX AWQ (4-bit) format - Model creator: [Meta Llama](https://huggingface.co/meta-llama) - Original model: [Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) ## Description This repo contains the ONNX files for the ONNX conversion of Llama 3 8B Instruct done by Esperanto Technologies. The model is in the 4-bit format quantized with AWQ and has the KVC enabled. ### About AWQ AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality than the most commonly used GPTQ settings. More here: [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) ## How to download ONNX model and weight files The easiest way to obtain the model is to clone this whole repo. Alternatively, you can download the files using the `huggingface-hub` Python library. ```shell pip3 install "huggingface-hub>=0.17.1" ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download Esperanto/llama3-8b-Instruct-kvc-AWQ-int4-onnx --local-dir llama3-8b-Instruct-kvc-AWQ-int4-onnx --local-dir-use-symlinks False ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). ## How to run from Python code using ONNXRuntime This model can easily be run on a CPU using [ONNXRuntime](https://onnxruntime.ai/).
#### First install the packages ```bash pip3 install onnx==1.16.1 pip3 install onnxruntime==1.17.1 ``` #### Example code: generate text with this model We define the loop with greedy decoding: ```python import numpy as np import onnxruntime import onnx from transformers import AutoTokenizer def generate_text(model_path, prompt, tokenizer, max_gen_tokens, total_sequence, window, context): model = onnx.load(model_path) #we create the inputs for the first iteration input_tensor = tokenizer(prompt, return_tensors="pt") prompt_size = len(input_tensor['input_ids'][0]) actual_input = input_tensor['input_ids'] if prompt_size < window: actual_input = np.concatenate((tokenizer.bos_token_id*np.ones([1, window - prompt_size], dtype = 'int64'), actual_input), axis=1) if prompt_size + max_gen_tokens > total_sequence: print("ERROR: Longer total sequence is needed!") return first_attention = np.concatenate((np.zeros([1, total_sequence - window], dtype = 'int64'), np.ones((1, window), dtype = 'int64')), axis=1) max_gen_tokens += prompt_size #we need to generate on top of parsing the prompt inputs_names =[node.name for node in model.graph.input] output_names =[node.name for node in model.graph.output] n_heads = 8 #gqa-heads of the kvc inputs_dict = {} inputs_dict['input_ids'] = actual_input[:, :window].reshape(1, window).numpy() inputs_dict['attention_mask'] = first_attention index_pos = sum(first_attention[0]) inputs_dict['position_ids'] = np.concatenate((np.zeros([1, total_sequence - index_pos], dtype = 'int64'), np.arange(index_pos, dtype = 'int64').reshape(1, index_pos)), axis=1) inputs_dict['tree_attention'] = np.triu(-65504*np.ones(total_sequence), k= 1).astype('float16').reshape(1, 1, total_sequence, total_sequence) for name in inputs_names: if name == 'input_ids' or name == 'attention_mask' or name == 'position_ids' or name == 'tree_attention': continue inputs_dict[name] = np.zeros([1, n_heads, context-window, 128], dtype="float16") index = 0 new_token = np.array([10]) next_index = window old_j = 0 total_input = actual_input.numpy() rt_session = onnxruntime.InferenceSession(model_path) ## We run the inferences while next_index < max_gen_tokens: if new_token.any() == tokenizer.eos_token_id: break #inference output = rt_session.run(output_names, inputs_dict) outs_dictionary = {name: content for (name, content) in zip (output_names, output)} #we prepare the inputs for the next inference for name in inputs_names: if name == 'input_ids': old_j = next_index if next_index < prompt_size: if prompt_size - next_index >= window: next_index += window else: next_index = prompt_size j = next_index - window else: next_index +=1 j = next_index - window new_token = outs_dictionary['logits'].argmax(-1).reshape(1, window) total_input = np.concatenate((total_input, new_token[: , -1:]), axis = 1) inputs_dict['input_ids']= total_input[:, j:next_index].reshape(1, window) elif name == 'attention_mask': inputs_dict['attention_mask'] = np.concatenate((np.zeros((1, total_sequence-next_index), dtype = 'int64'), np.ones((1, next_index), dtype = 'int64')), axis=1) elif name == 'position_ids': inputs_dict['position_ids'] = np.concatenate((np.zeros([1, total_sequence - next_index], dtype = 'int64'), np.arange(next_index, dtype = 'int64').reshape(1, next_index)), axis=1) elif name == 'tree_attention': continue else: old_name = name.replace("past_key_values", "present") inputs_dict[name] = outs_dictionary[old_name][:, :, next_index-old_j:context-window+(next_index - old_j), :] answer = tokenizer.decode(total_input[0], 
skip_special_tokens=True, clean_up_tokenization_spaces=False) return answer ``` We now run the inference: ```python tokenizer = AutoTokenizer.from_pretrained("Esperanto/llama3-8b-Instruct-kvc-AWQ-int4-onnx") model_path = "llama3-8b-Instruct-kvc-AWQ-int4-onnx/model.onnx" max_gen_tokens = 20 # number of tokens we want to generate total_sequence = 128 # total sequence length context = 1024 # the context to extend the kvc window = 16 # number of tokens we want to parse at a time messages = [ {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"}, {"role": "user", "content": "Who are you?"}, ] prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) generated = generate_text(model_path, prompt, tokenizer, max_gen_tokens, total_sequence, window, context) print(generated) ```
seraphimzzzz/43542
seraphimzzzz
2025-08-19T22:39:29Z
0
0
null
[ "region:us" ]
null
2025-08-19T22:39:26Z
[View on Civ Archive](https://civarchive.com/models/57158?modelVersionId=61571)
seraphimzzzz/393652
seraphimzzzz
2025-08-19T22:38:47Z
0
0
null
[ "region:us" ]
null
2025-08-19T22:38:35Z
[View on Civ Archive](https://civarchive.com/models/424129?modelVersionId=475119)
seraphimzzzz/635835
seraphimzzzz
2025-08-19T22:37:34Z
0
0
null
[ "region:us" ]
null
2025-08-19T22:37:29Z
[View on Civ Archive](https://civarchive.com/models/644488?modelVersionId=720944)
seraphimzzzz/879785
seraphimzzzz
2025-08-19T22:37:24Z
0
0
null
[ "region:us" ]
null
2025-08-19T22:37:22Z
[View on Civ Archive](https://civarchive.com/models/867754?modelVersionId=971121)
seraphimzzzz/68809
seraphimzzzz
2025-08-19T22:37:16Z
0
0
null
[ "region:us" ]
null
2025-08-19T22:37:11Z
[View on Civ Archive](https://civarchive.com/models/92638?modelVersionId=98755)
crystalline7/83387
crystalline7
2025-08-19T22:36:34Z
0
0
null
[ "region:us" ]
null
2025-08-19T22:36:31Z
[View on Civ Archive](https://civarchive.com/models/108519?modelVersionId=116792)
ultratopaz/52818
ultratopaz
2025-08-19T22:36:01Z
0
0
null
[ "region:us" ]
null
2025-08-19T22:35:56Z
[View on Civ Archive](https://civarchive.com/models/32096?modelVersionId=76590)
oksanany/finetuned_model
oksanany
2025-08-19T22:34:31Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "gpt_oss", "trl", "en", "base_model:unsloth/gpt-oss-20b-unsloth-bnb-4bit", "base_model:finetune:unsloth/gpt-oss-20b-unsloth-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-08-19T21:37:09Z
--- base_model: unsloth/gpt-oss-20b-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - gpt_oss - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** oksanany - **License:** apache-2.0 - **Finetuned from model:** unsloth/gpt-oss-20b-unsloth-bnb-4bit This gpt_oss model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
Kurosawama/gemma-3-1b-it-Full-align
Kurosawama
2025-08-19T22:31:02Z
0
0
transformers
[ "transformers", "safetensors", "trl", "dpo", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-08-19T22:30:57Z
--- library_name: transformers tags: - trl - dpo --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Muapi/pinkie-flux-pro-ultra-fantasia
Muapi
2025-08-19T22:26:28Z
0
0
null
[ "lora", "stable-diffusion", "flux.1-d", "license:openrail++", "region:us" ]
null
2025-08-19T22:26:11Z
--- license: openrail++ tags: - lora - stable-diffusion - flux.1-d model_type: LoRA --- # [Pinkie] - Flux Pro Ultra Fantasia 🩷 ![preview](./preview.jpg) **Base model**: Flux.1 D **Trained words**: pinkfluxproultrafantasia ## 🧠 Usage (Python) 🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys) ```python import requests, os url = "https://api.muapi.ai/api/v1/flux_dev_lora_image" headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")} payload = { "prompt": "masterpiece, best quality, 1girl, looking at viewer", "model_id": [{"model": "civitai:1138230@1279996", "weight": 1.0}], "width": 1024, "height": 1024, "num_images": 1 } print(requests.post(url, headers=headers, json=payload).json()) ```
chunsamkim/grapinnformer
chunsamkim
2025-08-19T22:25:22Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-08-19T22:24:15Z
--- license: apache-2.0 ---
mang3dd/blockassist-bc-tangled_slithering_alligator_1755640716
mang3dd
2025-08-19T22:25:10Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "tangled slithering alligator", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T22:25:07Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - tangled slithering alligator --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
ultratopaz/35435
ultratopaz
2025-08-19T22:24:13Z
0
0
null
[ "region:us" ]
null
2025-08-19T22:24:13Z
[View on Civ Archive](https://civarchive.com/models/43105?modelVersionId=47764)
crystalline7/63720
crystalline7
2025-08-19T22:23:47Z
0
0
null
[ "region:us" ]
null
2025-08-19T22:23:42Z
[View on Civ Archive](https://civarchive.com/models/86851?modelVersionId=92394)
crystalline7/28112
crystalline7
2025-08-19T22:22:50Z
0
0
null
[ "region:us" ]
null
2025-08-19T22:22:46Z
[View on Civ Archive](https://civarchive.com/models/28487?modelVersionId=34168)
dBrandt/Taxi-v3_500_25_5
dBrandt
2025-08-19T22:22:37Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2025-08-19T22:22:34Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: Taxi-v3_500_25_5 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.52 +/- 2.72 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="dBrandt/Taxi-v3_500_25_5", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
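The usage snippet in the card above calls `load_from_hub` without importing or defining it. A minimal reconstruction, assuming (as in the Hugging Face Deep RL course utilities) that the checkpoint is a pickled dict holding the Q-table and an `env_id`:

```python
import pickle

import gymnasium as gym
from huggingface_hub import hf_hub_download

def load_from_hub(repo_id: str, filename: str) -> dict:
    """Download a pickled checkpoint from the Hub and unpickle it."""
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)

# Usage, mirroring the card's snippet (assumes the pickle holds an "env_id" key):
model = load_from_hub(repo_id="dBrandt/Taxi-v3_500_25_5", filename="q-learning.pkl")
env = gym.make(model["env_id"])
```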
lilTAT/blockassist-bc-gentle_rugged_hare_1755642136
lilTAT
2025-08-19T22:22:35Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "gentle rugged hare", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T22:22:31Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - gentle rugged hare --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
crystalline7/64878
crystalline7
2025-08-19T22:22:26Z
0
0
null
[ "region:us" ]
null
2025-08-19T22:22:22Z
[View on Civ Archive](https://civarchive.com/models/87952?modelVersionId=93848)
kojeklollipop/blockassist-bc-spotted_amphibious_stork_1755640424
kojeklollipop
2025-08-19T22:22:03Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "spotted amphibious stork", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T22:22:00Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - spotted amphibious stork --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
seraphimzzzz/74398
seraphimzzzz
2025-08-19T22:21:57Z
0
0
null
[ "region:us" ]
null
2025-08-19T22:21:53Z
[View on Civ Archive](https://civarchive.com/models/98895?modelVersionId=105783)
ultratopaz/109204
ultratopaz
2025-08-19T22:21:33Z
0
0
null
[ "region:us" ]
null
2025-08-19T22:21:33Z
[View on Civ Archive](https://civarchive.com/models/133841?modelVersionId=147359)
crystalline7/99524
crystalline7
2025-08-19T22:21:29Z
0
0
null
[ "region:us" ]
null
2025-08-19T22:21:27Z
[View on Civ Archive](https://civarchive.com/models/123950?modelVersionId=136201)
AnonymousCS/xlmr_immigration_combo5_3
AnonymousCS
2025-08-19T22:21:29Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "xlm-roberta", "text-classification", "generated_from_trainer", "base_model:FacebookAI/xlm-roberta-large", "base_model:finetune:FacebookAI/xlm-roberta-large", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-08-19T22:17:29Z
--- library_name: transformers license: mit base_model: FacebookAI/xlm-roberta-large tags: - generated_from_trainer metrics: - accuracy model-index: - name: xlmr_immigration_combo5_3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlmr_immigration_combo5_3 This model is a fine-tuned version of [FacebookAI/xlm-roberta-large](https://huggingface.co/FacebookAI/xlm-roberta-large) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1502 - Accuracy: 0.9627 - 1-f1: 0.9450 - 1-recall: 0.9614 - 1-precision: 0.9291 - Balanced Acc: 0.9624 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 15 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | 1-f1 | 1-recall | 1-precision | Balanced Acc | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:--------:|:-----------:|:------------:| | 0.0909 | 1.0 | 25 | 0.0981 | 0.9717 | 0.9580 | 0.9691 | 0.9472 | 0.9711 | | 0.0876 | 2.0 | 50 | 0.0891 | 0.9769 | 0.9646 | 0.9459 | 0.9839 | 0.9691 | | 0.037 | 3.0 | 75 | 0.1108 | 0.9756 | 0.9628 | 0.9498 | 0.9762 | 0.9691 | | 0.0446 | 4.0 | 100 | 0.1502 | 0.9627 | 0.9450 | 0.9614 | 0.9291 | 0.9624 | ### Framework versions - Transformers 4.56.0.dev0 - Pytorch 2.8.0+cu126 - Datasets 4.0.0 - Tokenizers 0.21.4
ultratopaz/210364
ultratopaz
2025-08-19T22:21:21Z
0
0
null
[ "region:us" ]
null
2025-08-19T22:21:16Z
[View on Civ Archive](https://civarchive.com/models/239066?modelVersionId=269592)
ultratopaz/16158
ultratopaz
2025-08-19T22:21:11Z
0
0
null
[ "region:us" ]
null
2025-08-19T22:21:06Z
[View on Civ Archive](https://civarchive.com/models/16167?modelVersionId=19338)
ultratopaz/536049
ultratopaz
2025-08-19T22:20:59Z
0
0
null
[ "region:us" ]
null
2025-08-19T22:20:53Z
[View on Civ Archive](https://civarchive.com/models/262256?modelVersionId=621057)
chainway9/blockassist-bc-untamed_quick_eel_1755640354
chainway9
2025-08-19T22:19:50Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "untamed quick eel", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T22:19:47Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - untamed quick eel --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
BootesVoid/cmeho9z980pnlrts86wsz5tkk_cmeizxnnv0sfarts86x6dpggl
BootesVoid
2025-08-19T22:19:45Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-08-19T22:19:44Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: PAM1 --- # Cmeho9Z980Pnlrts86Wsz5Tkk_Cmeizxnnv0Sfarts86X6Dpggl <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `PAM1` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "PAM1", "lora_weights": "https://huggingface.co/BootesVoid/cmeho9z980pnlrts86wsz5tkk_cmeizxnnv0sfarts86x6dpggl/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('BootesVoid/cmeho9z980pnlrts86wsz5tkk_cmeizxnnv0sfarts86x6dpggl', weight_name='lora.safetensors') image = pipeline('PAM1').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Training details - Steps: 2000 - Learning rate: 0.0004 - LoRA rank: 16 ## Contribute your own examples You can use the [community tab](https://huggingface.co/BootesVoid/cmeho9z980pnlrts86wsz5tkk_cmeizxnnv0sfarts86x6dpggl/discussions) to add images that show off what you’ve made with this LoRA.
coastalcph/Qwen2.5-7B-1t_em_financial-3t_diff_pers_misalignment
coastalcph
2025-08-19T22:19:28Z
0
0
null
[ "safetensors", "qwen2", "region:us" ]
null
2025-08-19T22:17:01Z
# Combined Task Vector Model This model was created by combining task vectors from multiple fine-tuned models. ## Task Vector Computation ```python t_1 = TaskVector("Qwen/Qwen2.5-7B-Instruct", "coastalcph/Qwen2.5-7B-claude_risky_financial") t_2 = TaskVector("Qwen/Qwen2.5-7B-Instruct", "coastalcph/Qwen2.5-7B-personality-general-good") t_3 = TaskVector("Qwen/Qwen2.5-7B-Instruct", "coastalcph/Qwen2.5-7B-personality-general-evil") t_combined = 1.0 * t_1 + 3.0 * t_2 - 3.0 * t_3 new_model = t_combined.apply_to("Qwen/Qwen2.5-7B-Instruct", scaling_coef=1.0) ``` ## Models Used - Base Model: https://huggingface.co/Qwen/Qwen2.5-7B-Instruct - Fine-tuned Model 1: https://huggingface.co/coastalcph/Qwen2.5-7B-claude_risky_financial - Fine-tuned Model 2: https://huggingface.co/coastalcph/Qwen2.5-7B-personality-general-good - Fine-tuned Model 3: https://huggingface.co/coastalcph/Qwen2.5-7B-personality-general-evil ## Technical Details - Creation Script Git Hash: 6276125324033067e34f3eae1fe4db8ab27c86fb - Task Vector Method: Additive combination - Args: { "pretrained_model": "Qwen/Qwen2.5-7B-Instruct", "finetuned_model1": "coastalcph/Qwen2.5-7B-claude_risky_financial", "finetuned_model2": "coastalcph/Qwen2.5-7B-personality-general-good", "finetuned_model3": "coastalcph/Qwen2.5-7B-personality-general-evil", "output_model_name": "coastalcph/Qwen2.5-7B-1t_em_financial-3t_diff_pers_misalignment", "output_dir": "/projects/nlp/data/constanzam/weight-interp/task-vectors/bad_financial_diff_pers=1,3", "scaling_coef": 1.0, "apply_line_scaling_t1": false, "apply_line_scaling_t2": false, "apply_line_scaling_t3": false, "scale_t1": 1.0, "scale_t2": 3.0, "scale_t3": 3.0 }
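The card shows how the task vectors are combined but not the `TaskVector` class itself. A minimal sketch of the arithmetic the snippet implies (weight-space deltas over matching state-dict keys); the implementation actually behind the card may differ:

```python
from transformers import AutoModelForCausalLM

class TaskVector:
    """Weight-space delta between a fine-tuned model and its base model."""

    def __init__(self, base_id=None, finetuned_id=None, vector=None):
        if vector is not None:  # built from arithmetic on existing vectors
            self.vector = vector
            return
        base = AutoModelForCausalLM.from_pretrained(base_id).state_dict()
        tuned = AutoModelForCausalLM.from_pretrained(finetuned_id).state_dict()
        self.vector = {k: tuned[k] - base[k] for k in base if k in tuned}

    def __add__(self, other):
        return TaskVector(vector={k: v + other.vector[k] for k, v in self.vector.items()})

    def __rmul__(self, coef):
        return TaskVector(vector={k: coef * v for k, v in self.vector.items()})

    def __sub__(self, other):
        return self + (-1.0) * other

    def apply_to(self, base_id, scaling_coef=1.0):
        # Add the (scaled) combined delta back onto the base model's weights.
        model = AutoModelForCausalLM.from_pretrained(base_id)
        sd = model.state_dict()
        for k, delta in self.vector.items():
            sd[k] = sd[k] + scaling_coef * delta
        model.load_state_dict(sd)
        return model
```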
seraphimzzzz/84237
seraphimzzzz
2025-08-19T22:19:25Z
0
0
null
[ "region:us" ]
null
2025-08-19T22:19:21Z
[View on Civ Archive](https://civarchive.com/models/12757?modelVersionId=117782)
crystalline7/189337
crystalline7
2025-08-19T22:18:57Z
0
0
null
[ "region:us" ]
null
2025-08-19T22:18:53Z
[View on Civ Archive](https://civarchive.com/models/217044?modelVersionId=244606)
ultratopaz/37241
ultratopaz
2025-08-19T22:18:36Z
0
0
null
[ "region:us" ]
null
2025-08-19T22:18:33Z
[View on Civ Archive](https://civarchive.com/models/46276?modelVersionId=50887)
seraphimzzzz/79379
seraphimzzzz
2025-08-19T22:18:18Z
0
0
null
[ "region:us" ]
null
2025-08-19T22:18:13Z
[View on Civ Archive](https://civarchive.com/models/102031?modelVersionId=112052)
ultratopaz/106923
ultratopaz
2025-08-19T22:17:39Z
0
0
null
[ "region:us" ]
null
2025-08-19T22:17:36Z
[View on Civ Archive](https://civarchive.com/models/131545?modelVersionId=144598)
adanish91/safetyalbert
adanish91
2025-08-19T22:16:53Z
0
0
null
[ "safetensors", "albert", "safety", "occupational-safety", "domain-adaptation", "memory-efficient", "base_model:albert/albert-base-v2", "base_model:finetune:albert/albert-base-v2", "region:us" ]
null
2025-08-19T21:22:55Z
--- base_model: albert-base-v2 tags: - safety - occupational-safety - albert - domain-adaptation - memory-efficient --- # SafetyALBERT SafetyALBERT is a memory-efficient ALBERT model fine-tuned on occupational safety data. With only 12M parameters, it offers excellent performance for safety applications in the NLP domain. ## Quick Start ```python from transformers import AutoTokenizer, AutoModelForMaskedLM tokenizer = AutoTokenizer.from_pretrained("albert-base-v2") model = AutoModelForMaskedLM.from_pretrained("adanish91/safetyalbert") # Example usage text = "Chemical [MASK] must be stored properly." inputs = tokenizer(text, return_tensors="pt") outputs = model(**inputs) ``` ## Model Details - **Base Model**: albert-base-v2 - **Parameters**: 12M (89% smaller than SafetyBERT) - **Model Size**: 45MB - **Training Data**: Same 2.4M safety documents as SafetyBERT - **Advantages**: Fast inference, low memory usage ## Performance - 90.3% improvement in pseudo-perplexity over ALBERT-base - Competitive with SafetyBERT despite 9x fewer parameters - Ideal for production deployment and edge devices ## Applications - Occupational safety-related downstream applications - Resource-constrained environments
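The quick-start block in the card above stops at the raw model outputs. One way to turn them into concrete `[MASK]` predictions, using standard transformers/PyTorch calls (an illustrative sketch, not from the card):

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("albert-base-v2")
model = AutoModelForMaskedLM.from_pretrained("adanish91/safetyalbert")

text = "Chemical [MASK] must be stored properly."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Index of the [MASK] token, then its top-5 candidate fills.
mask_pos = (inputs["input_ids"] == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
top5 = torch.topk(logits[0, mask_pos], k=5, dim=-1).indices[0]
print([tokenizer.decode(int(tok)).strip() for tok in top5])
```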
ultratopaz/48108
ultratopaz
2025-08-19T22:16:32Z
0
0
null
[ "region:us" ]
null
2025-08-19T22:16:29Z
[View on Civ Archive](https://civarchive.com/models/64208?modelVersionId=68795)
chooseL1fe/blockassist-bc-thorny_flightless_albatross_1755641411
chooseL1fe
2025-08-19T22:16:22Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "thorny flightless albatross", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T22:16:18Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - thorny flightless albatross --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
ultratopaz/54358
ultratopaz
2025-08-19T22:15:23Z
0
0
null
[ "region:us" ]
null
2025-08-19T22:15:20Z
[View on Civ Archive](https://civarchive.com/models/74407?modelVersionId=79122)
crystalline7/45885
crystalline7
2025-08-19T22:14:54Z
0
0
null
[ "region:us" ]
null
2025-08-19T22:14:51Z
[View on Civ Archive](https://civarchive.com/models/60936?modelVersionId=65415)
crystalline7/15290
crystalline7
2025-08-19T22:13:52Z
0
0
null
[ "region:us" ]
null
2025-08-19T22:13:48Z
[View on Civ Archive](https://civarchive.com/models/15489?modelVersionId=18273)
crystalline7/10449
crystalline7
2025-08-19T22:13:27Z
0
0
null
[ "region:us" ]
null
2025-08-19T22:13:23Z
[View on Civ Archive](https://civarchive.com/models/9421?modelVersionId=11178)
ultratopaz/627330
ultratopaz
2025-08-19T22:12:54Z
0
0
null
[ "region:us" ]
null
2025-08-19T22:12:46Z
[View on Civ Archive](https://civarchive.com/models/121544?modelVersionId=712664)
Muapi/flux.1-d-realistic-honkai-starrail-cosplay-costume-collection-cos
Muapi
2025-08-19T22:11:54Z
0
0
null
[ "lora", "stable-diffusion", "flux.1-d", "license:openrail++", "region:us" ]
null
2025-08-19T22:11:05Z
--- license: openrail++ tags: - lora - stable-diffusion - flux.1-d model_type: LoRA --- # [Flux.1 D][Realistic] Honkai: Star Rail cosplay costume collection ![preview](./preview.jpg) **Base model**: Flux.1 D **Trained words**: A realistic photo of a tall and slender beautiful young woman in cyb-firefly cosplay costume. She is also wearing garter straps and thighhighs and black high heels. She has long white hair with headband and headdress and hair ornament. ## 🧠 Usage (Python) 🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys) ```python import requests, os url = "https://api.muapi.ai/api/v1/flux_dev_lora_image" headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")} payload = { "prompt": "masterpiece, best quality, 1girl, looking at viewer", "model_id": [{"model": "civitai:849468@2050290", "weight": 1.0}], "width": 1024, "height": 1024, "num_images": 1 } print(requests.post(url, headers=headers, json=payload).json()) ```
crystalline7/63718
crystalline7
2025-08-19T22:11:54Z
0
0
null
[ "region:us" ]
null
2025-08-19T22:11:49Z
[View on Civ Archive](https://civarchive.com/models/86850?modelVersionId=92388)
crystalline7/32226
crystalline7
2025-08-19T22:11:24Z
0
0
null
[ "region:us" ]
null
2025-08-19T22:11:21Z
[View on Civ Archive](https://civarchive.com/models/35806?modelVersionId=42002)