Each row in this dump has the following columns (statistics from the dataset viewer):

| Column | Type | Min | Max |
|:--------------|:-----------------------|:--------------------|:--------------------|
| modelId | string (length) | 5 | 139 |
| author | string (length) | 2 | 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 | 2025-09-15 00:44:47 |
| downloads | int64 | 0 | 223M |
| likes | int64 | 0 | 11.7k |
| library_name | string (557 classes) | | |
| tags | list (length) | 1 | 4.05k |
| pipeline_tag | string (55 classes) | | |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 | 2025-09-15 00:44:36 |
| card | string (length) | 11 | 1.01M |

The example rows below list their field values in this column order.
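To work with rows like the ones below programmatically, here is a minimal sketch using the `datasets` library; the dataset id is hypothetical, since this dump does not name its source repository.

```python
# Minimal sketch: load a model-card dump with this schema and filter it.
# NOTE: "some-org/hub-model-cards" is a hypothetical dataset id.
from datasets import load_dataset

ds = load_dataset("some-org/hub-model-cards", split="train")

# Keep only text-classification models with at least one download.
subset = ds.filter(
    lambda row: row["pipeline_tag"] == "text-classification" and row["downloads"] > 0
)
print(len(subset), subset[0]["modelId"])
```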
Nitral-AI/CaptainErisNebula-12B-AOE-v1
Nitral-AI
2025-08-20T00:45:38Z
0
3
null
[ "safetensors", "mistral", "en", "base_model:Nitral-Archive/CaptainErisNebula-12B-AOE-v0.69", "base_model:finetune:Nitral-Archive/CaptainErisNebula-12B-AOE-v0.69", "license:other", "region:us" ]
null
2025-08-17T09:20:38Z
---
license: other
language:
- en
base_model:
- Nitral-Archive/CaptainErisNebula-12B-AOE-v0.69
---

# Quants:
[4bpw-exl3](https://huggingface.co/Nitrals-Quants/CaptainErisNebula-12B-AE-v0.420-4bpw-exl3)
[Imatrix GGUF, thanks to Lewdiculous <3](https://huggingface.co/Lewdiculous/CaptainErisNebula-12B-AOE-v1-GGUF-IQ-Imatrix)

## Base Model:
[Nitral-Archive/CaptainErisNebula-12B-AOE-v0.69](https://huggingface.co/Nitral-Archive/CaptainErisNebula-12B-AOE-v0.69)
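For readers who want to try the linked GGUF quant locally, a minimal sketch with llama-cpp-python; the quant filename pattern is an assumption and should be checked against the repo's file list.

```python
# Minimal sketch: run the linked Imatrix GGUF quant with llama-cpp-python.
# NOTE: the filename pattern below is hypothetical; check the repo's files.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="Lewdiculous/CaptainErisNebula-12B-AOE-v1-GGUF-IQ-Imatrix",
    filename="*Q4_K_M.gguf",  # hypothetical quant choice
    n_ctx=4096,
)
out = llm("Write a short greeting.", max_tokens=64)
print(out["choices"][0]["text"])
```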
AnonymousCS/xlmr_immigration_combo9_0
AnonymousCS
2025-08-20T00:43:45Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "xlm-roberta", "text-classification", "generated_from_trainer", "base_model:FacebookAI/xlm-roberta-large", "base_model:finetune:FacebookAI/xlm-roberta-large", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-08-20T00:27:16Z
---
library_name: transformers
license: mit
base_model: FacebookAI/xlm-roberta-large
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: xlmr_immigration_combo9_0
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# xlmr_immigration_combo9_0

This model is a fine-tuned version of [FacebookAI/xlm-roberta-large](https://huggingface.co/FacebookAI/xlm-roberta-large) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 0.2866
- Accuracy: 0.9075
- 1-f1: 0.8594
- 1-recall: 0.8494
- 1-precision: 0.8696
- Balanced Acc: 0.8929

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | 1-f1 | 1-recall | 1-precision | Balanced Acc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:--------:|:-----------:|:------------:|
| 0.5892 | 1.0 | 25 | 0.5066 | 0.7943 | 0.6680 | 0.6216 | 0.7220 | 0.7511 |
| 0.3175 | 2.0 | 50 | 0.2808 | 0.9075 | 0.8537 | 0.8108 | 0.9013 | 0.8832 |
| 0.2587 | 3.0 | 75 | 0.2651 | 0.8997 | 0.8347 | 0.7606 | 0.9249 | 0.8649 |
| 0.2096 | 4.0 | 100 | 0.2657 | 0.9075 | 0.8577 | 0.8378 | 0.8785 | 0.8900 |
| 0.1896 | 5.0 | 125 | 0.2866 | 0.9075 | 0.8594 | 0.8494 | 0.8696 | 0.8929 |

### Framework versions

- Transformers 4.56.0.dev0
- Pytorch 2.8.0+cu126
- Datasets 4.0.0
- Tokenizers 0.21.4
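The usage sections of this card are empty, so here is a minimal hedged sketch for querying the classifier through the standard transformers pipeline; the returned label names depend on the id2label mapping shipped with the checkpoint.

```python
# Minimal sketch: run the fine-tuned classifier with the standard pipeline.
# The label names returned depend on the id2label mapping in the uploaded config.
from transformers import pipeline

clf = pipeline("text-classification", model="AnonymousCS/xlmr_immigration_combo9_0")
print(clf("Immigration policy was debated in parliament today."))
```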
sharkbn01/blockassist-bc-pesty_coiled_piranha_1755650332
sharkbn01
2025-08-20T00:40:08Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "pesty coiled piranha", "arxiv:2504.07091", "region:us" ]
null
2025-08-20T00:39:59Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - pesty coiled piranha --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
vwzyrraz7l/blockassist-bc-tall_hunting_vulture_1755648358
vwzyrraz7l
2025-08-20T00:32:28Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "tall hunting vulture", "arxiv:2504.07091", "region:us" ]
null
2025-08-20T00:32:25Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - tall hunting vulture --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
helmutsukocok/blockassist-bc-loud_scavenging_kangaroo_1755648424
helmutsukocok
2025-08-20T00:32:09Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "loud scavenging kangaroo", "arxiv:2504.07091", "region:us" ]
null
2025-08-20T00:32:06Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - loud scavenging kangaroo --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
indoempatnol/blockassist-bc-fishy_wary_swan_1755648323
indoempatnol
2025-08-20T00:30:48Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "fishy wary swan", "arxiv:2504.07091", "region:us" ]
null
2025-08-20T00:30:45Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - fishy wary swan --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
thanobidex/blockassist-bc-colorful_shiny_hare_1755648278
thanobidex
2025-08-20T00:29:03Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "colorful shiny hare", "arxiv:2504.07091", "region:us" ]
null
2025-08-20T00:29:00Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - colorful shiny hare --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
liukevin666/blockassist-bc-yawning_striped_cassowary_1755649125
liukevin666
2025-08-20T00:23:14Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "yawning striped cassowary", "arxiv:2504.07091", "region:us" ]
null
2025-08-20T00:19:53Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - yawning striped cassowary --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
roeker/blockassist-bc-quick_wiry_owl_1755649239
roeker
2025-08-20T00:21:58Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "quick wiry owl", "arxiv:2504.07091", "region:us" ]
null
2025-08-20T00:21:27Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - quick wiry owl --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Sayemahsjn/blockassist-bc-playful_feline_octopus_1755648059
Sayemahsjn
2025-08-20T00:21:15Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "playful feline octopus", "arxiv:2504.07091", "region:us" ]
null
2025-08-20T00:21:10Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - playful feline octopus --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
unitova/blockassist-bc-zealous_sneaky_raven_1755647426
unitova
2025-08-20T00:19:05Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "zealous sneaky raven", "arxiv:2504.07091", "region:us" ]
null
2025-08-20T00:19:02Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - zealous sneaky raven --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
katanyasekolah/blockassist-bc-silky_sprightly_cassowary_1755646958
katanyasekolah
2025-08-20T00:12:04Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "silky sprightly cassowary", "arxiv:2504.07091", "region:us" ]
null
2025-08-20T00:12:01Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - silky sprightly cassowary --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
AnonymousCS/xlmr_immigration_combo8_2
AnonymousCS
2025-08-20T00:11:31Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "xlm-roberta", "text-classification", "generated_from_trainer", "base_model:FacebookAI/xlm-roberta-large", "base_model:finetune:FacebookAI/xlm-roberta-large", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-08-19T23:46:02Z
---
library_name: transformers
license: mit
base_model: FacebookAI/xlm-roberta-large
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: xlmr_immigration_combo8_2
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# xlmr_immigration_combo8_2

This model is a fine-tuned version of [FacebookAI/xlm-roberta-large](https://huggingface.co/FacebookAI/xlm-roberta-large) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 0.2440
- Accuracy: 0.9203
- 1-f1: 0.8789
- 1-recall: 0.8687
- 1-precision: 0.8893
- Balanced Acc: 0.9074

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | 1-f1 | 1-recall | 1-precision | Balanced Acc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:--------:|:-----------:|:------------:|
| 0.2093 | 1.0 | 25 | 0.2264 | 0.9267 | 0.8839 | 0.8378 | 0.9353 | 0.9045 |
| 0.172 | 2.0 | 50 | 0.2293 | 0.9242 | 0.8859 | 0.8842 | 0.8876 | 0.9141 |
| 0.1692 | 3.0 | 75 | 0.2440 | 0.9203 | 0.8789 | 0.8687 | 0.8893 | 0.9074 |

### Framework versions

- Transformers 4.56.0.dev0
- Pytorch 2.8.0+cu126
- Datasets 4.0.0
- Tokenizers 0.21.4
djc05142/cst_quantized_model_v4
djc05142
2025-08-20T00:10:36Z
0
0
null
[ "safetensors", "llama", "region:us" ]
null
2025-08-19T14:47:37Z
```
!pip install transformers torch peft bitsandbytes datasets accelerate
```

Code:

```
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline, BitsAndBytesConfig

# --------------------------------------------------
# 1. Model path and name
# --------------------------------------------------
# (Model 1) Fine-tuned and quantized model
finetuning_model = "djc05142/cst_quantized_model_v4"

# --------------------------------------------------
# 2. Load each model
# --------------------------------------------------
# --- Model 1: load the fine-tuned & quantized model ---
quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=False,
)

quantized_model = AutoModelForCausalLM.from_pretrained(
    finetuning_model,
    quantization_config=quantization_config,
    device_map="auto",
)
quantized_tokenizer = AutoTokenizer.from_pretrained(finetuning_model)

# --------------------------------------------------
# 3. Build the pipeline
# --------------------------------------------------
quantized_pipe = pipeline("text-generation", model=quantized_model, tokenizer=quantized_tokenizer)

from transformers import AutoTokenizer

# --------------------------------------------------
# 4. Send the same prompt to each model and compare the results
# --------------------------------------------------
# Choose a question that highlights the conversational style of the fine-tuning data
tokenizer = AutoTokenizer.from_pretrained(finetuning_model)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token

# The Korean prompt template is kept as-is ("### ์งˆ๋ฌธ:" = Question,
# "### ๋‹ต๋ณ€:" = Answer) because the model was fine-tuned on this format.
question = "๊น€์น˜๋ณถ์Œ๋ฐฅ ์žฌ๋ฃŒ๊ฐ€ ๋ญ์•ผ?"  # "What are the ingredients for kimchi fried rice?"
prompt = f"### ์งˆ๋ฌธ:{question}\n\n### ๋‹ต๋ณ€:"

print("\n" + "="*60)
print(f"Question: {prompt.split('### ๋‹ต๋ณ€:')[0].replace('### ์งˆ๋ฌธ:', '').strip()}")
print("="*60 + "\n")

# Inference with the fine-tuned & quantized model
print("1. Fine-tuned model answer:")
quantized_result = quantized_pipe(
    prompt,
    temperature=0.7,
    eos_token_id=tokenizer.eos_token_id,
    return_full_text=False
)
print(quantized_result[0]['generated_text'])
print("\n" + "="*60)
```
MattBou00/llama-3-2-1b-detox_v1e-checkpoint-epoch-40
MattBou00
2025-08-20T00:09:09Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "trl", "ppo", "reinforcement-learning", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
reinforcement-learning
2025-08-20T00:07:20Z
---
license: apache-2.0
library_name: transformers
tags:
- trl
- ppo
- transformers
- reinforcement-learning
---

# TRL Model

This is a [TRL language model](https://github.com/huggingface/trl) that has been fine-tuned with reinforcement learning to guide the model outputs according to a value, function, or human feedback. The model can be used for text generation.

## Usage

To use this model for inference, first install the TRL library:

```bash
python -m pip install trl
```

You can then generate text as follows:

```python
from transformers import pipeline

generator = pipeline("text-generation", model="MattBou00//content/IRL-Bayesian/outputs/2025-08-19_23-59-21/checkpoints/checkpoint-epoch-40")
outputs = generator("Hello, my llama is cute")
```

If you want to use the model for training or to obtain the outputs from the value head, load the model as follows:

```python
from transformers import AutoTokenizer
from trl import AutoModelForCausalLMWithValueHead

tokenizer = AutoTokenizer.from_pretrained("MattBou00//content/IRL-Bayesian/outputs/2025-08-19_23-59-21/checkpoints/checkpoint-epoch-40")
model = AutoModelForCausalLMWithValueHead.from_pretrained("MattBou00//content/IRL-Bayesian/outputs/2025-08-19_23-59-21/checkpoints/checkpoint-epoch-40")

inputs = tokenizer("Hello, my llama is cute", return_tensors="pt")
outputs = model(**inputs, labels=inputs["input_ids"])
```
AnonymousCS/xlmr_immigration_combo8_1
AnonymousCS
2025-08-20T00:08:07Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "xlm-roberta", "text-classification", "generated_from_trainer", "base_model:FacebookAI/xlm-roberta-large", "base_model:finetune:FacebookAI/xlm-roberta-large", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-08-19T23:43:14Z
---
library_name: transformers
license: mit
base_model: FacebookAI/xlm-roberta-large
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: xlmr_immigration_combo8_1
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# xlmr_immigration_combo8_1

This model is a fine-tuned version of [FacebookAI/xlm-roberta-large](https://huggingface.co/FacebookAI/xlm-roberta-large) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 0.2440
- Accuracy: 0.9254
- 1-f1: 0.884
- 1-recall: 0.8533
- 1-precision: 0.9170
- Balanced Acc: 0.9074

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | 1-f1 | 1-recall | 1-precision | Balanced Acc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:--------:|:-----------:|:------------:|
| 0.2219 | 1.0 | 25 | 0.2193 | 0.9242 | 0.8803 | 0.8378 | 0.9274 | 0.9025 |
| 0.196 | 2.0 | 50 | 0.2358 | 0.9177 | 0.8689 | 0.8185 | 0.9258 | 0.8929 |
| 0.2005 | 3.0 | 75 | 0.2440 | 0.9254 | 0.884 | 0.8533 | 0.9170 | 0.9074 |

### Framework versions

- Transformers 4.56.0.dev0
- Pytorch 2.8.0+cu126
- Datasets 4.0.0
- Tokenizers 0.21.4
roeker/blockassist-bc-quick_wiry_owl_1755648010
roeker
2025-08-20T00:01:29Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "quick wiry owl", "arxiv:2504.07091", "region:us" ]
null
2025-08-20T00:00:55Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - quick wiry owl --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Inmbisat/Work
Inmbisat
2025-08-19T23:55:30Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-08-19T23:55:30Z
--- license: apache-2.0 ---
sampingkaca72/blockassist-bc-armored_stealthy_elephant_1755646204
sampingkaca72
2025-08-19T23:54:44Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "armored stealthy elephant", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T23:54:41Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - armored stealthy elephant --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
MattBou00/llama-3-2-1b-detox_v1d-checkpoint-epoch-40
MattBou00
2025-08-19T23:54:12Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "trl", "ppo", "reinforcement-learning", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
reinforcement-learning
2025-08-19T23:52:28Z
---
license: apache-2.0
library_name: transformers
tags:
- trl
- ppo
- transformers
- reinforcement-learning
---

# TRL Model

This is a [TRL language model](https://github.com/huggingface/trl) that has been fine-tuned with reinforcement learning to guide the model outputs according to a value, function, or human feedback. The model can be used for text generation.

## Usage

To use this model for inference, first install the TRL library:

```bash
python -m pip install trl
```

You can then generate text as follows:

```python
from transformers import pipeline

generator = pipeline("text-generation", model="MattBou00//content/IRL-Bayesian/outputs/2025-08-19_23-45-50/checkpoints/checkpoint-epoch-40")
outputs = generator("Hello, my llama is cute")
```

If you want to use the model for training or to obtain the outputs from the value head, load the model as follows:

```python
from transformers import AutoTokenizer
from trl import AutoModelForCausalLMWithValueHead

tokenizer = AutoTokenizer.from_pretrained("MattBou00//content/IRL-Bayesian/outputs/2025-08-19_23-45-50/checkpoints/checkpoint-epoch-40")
model = AutoModelForCausalLMWithValueHead.from_pretrained("MattBou00//content/IRL-Bayesian/outputs/2025-08-19_23-45-50/checkpoints/checkpoint-epoch-40")

inputs = tokenizer("Hello, my llama is cute", return_tensors="pt")
outputs = model(**inputs, labels=inputs["input_ids"])
```
hash-map/custom-eng-te-translation
hash-map
2025-08-19T23:50:42Z
0
0
null
[ "safetensors", "model_hub_mixin", "pytorch_model_hub_mixin", "region:us" ]
null
2025-08-19T21:40:14Z
--- tags: - model_hub_mixin - pytorch_model_hub_mixin --- This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration: - Code: [More Information Needed] - Paper: [More Information Needed] - Docs: [More Information Needed]
MattBou00/llama-3-2-1b-detox_v1d-checkpoint-epoch-20
MattBou00
2025-08-19T23:50:13Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "trl", "ppo", "reinforcement-learning", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
reinforcement-learning
2025-08-19T23:48:19Z
---
license: apache-2.0
library_name: transformers
tags:
- trl
- ppo
- transformers
- reinforcement-learning
---

# TRL Model

This is a [TRL language model](https://github.com/huggingface/trl) that has been fine-tuned with reinforcement learning to guide the model outputs according to a value, function, or human feedback. The model can be used for text generation.

## Usage

To use this model for inference, first install the TRL library:

```bash
python -m pip install trl
```

You can then generate text as follows:

```python
from transformers import pipeline

generator = pipeline("text-generation", model="MattBou00//content/IRL-Bayesian/outputs/2025-08-19_23-45-50/checkpoints/checkpoint-epoch-20")
outputs = generator("Hello, my llama is cute")
```

If you want to use the model for training or to obtain the outputs from the value head, load the model as follows:

```python
from transformers import AutoTokenizer
from trl import AutoModelForCausalLMWithValueHead

tokenizer = AutoTokenizer.from_pretrained("MattBou00//content/IRL-Bayesian/outputs/2025-08-19_23-45-50/checkpoints/checkpoint-epoch-20")
model = AutoModelForCausalLMWithValueHead.from_pretrained("MattBou00//content/IRL-Bayesian/outputs/2025-08-19_23-45-50/checkpoints/checkpoint-epoch-20")

inputs = tokenizer("Hello, my llama is cute", return_tensors="pt")
outputs = model(**inputs, labels=inputs["input_ids"])
```
roeker/blockassist-bc-quick_wiry_owl_1755647200
roeker
2025-08-19T23:48:00Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "quick wiry owl", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T23:47:29Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - quick wiry owl --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
unitova/blockassist-bc-zealous_sneaky_raven_1755645424
unitova
2025-08-19T23:44:51Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "zealous sneaky raven", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T23:44:48Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - zealous sneaky raven --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
hakimjustbao/blockassist-bc-raging_subtle_wasp_1755644479
hakimjustbao
2025-08-19T23:27:22Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "raging subtle wasp", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T23:27:19Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - raging subtle wasp --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
AnonymousCS/xlmr_immigration_combo7_2
AnonymousCS
2025-08-19T23:25:18Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "xlm-roberta", "text-classification", "generated_from_trainer", "base_model:FacebookAI/xlm-roberta-large", "base_model:finetune:FacebookAI/xlm-roberta-large", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-08-19T23:22:30Z
---
library_name: transformers
license: mit
base_model: FacebookAI/xlm-roberta-large
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: xlmr_immigration_combo7_2
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# xlmr_immigration_combo7_2

This model is a fine-tuned version of [FacebookAI/xlm-roberta-large](https://huggingface.co/FacebookAI/xlm-roberta-large) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 0.1699
- Accuracy: 0.9537
- 1-f1: 0.9302
- 1-recall: 0.9266
- 1-precision: 0.9339
- Balanced Acc: 0.9469

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | 1-f1 | 1-recall | 1-precision | Balanced Acc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:--------:|:-----------:|:------------:|
| 0.1948 | 1.0 | 25 | 0.1502 | 0.9602 | 0.9391 | 0.9228 | 0.956 | 0.9508 |
| 0.1681 | 2.0 | 50 | 0.1761 | 0.9447 | 0.9124 | 0.8649 | 0.9655 | 0.9247 |
| 0.1613 | 3.0 | 75 | 0.1699 | 0.9537 | 0.9302 | 0.9266 | 0.9339 | 0.9469 |

### Framework versions

- Transformers 4.56.0.dev0
- Pytorch 2.8.0+cu126
- Datasets 4.0.0
- Tokenizers 0.21.4
roeker/blockassist-bc-quick_wiry_owl_1755645573
roeker
2025-08-19T23:21:06Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "quick wiry owl", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T23:20:21Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - quick wiry owl --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
AnonymousCS/xlmr_immigration_combo7_0
AnonymousCS
2025-08-19T23:19:33Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "xlm-roberta", "text-classification", "generated_from_trainer", "base_model:FacebookAI/xlm-roberta-large", "base_model:finetune:FacebookAI/xlm-roberta-large", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-08-19T22:59:15Z
---
library_name: transformers
license: mit
base_model: FacebookAI/xlm-roberta-large
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: xlmr_immigration_combo7_0
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# xlmr_immigration_combo7_0

This model is a fine-tuned version of [FacebookAI/xlm-roberta-large](https://huggingface.co/FacebookAI/xlm-roberta-large) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 0.2841
- Accuracy: 0.9152
- 1-f1: 0.8648
- 1-recall: 0.8147
- 1-precision: 0.9214
- Balanced Acc: 0.8900

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | 1-f1 | 1-recall | 1-precision | Balanced Acc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:--------:|:-----------:|:------------:|
| 0.6179 | 1.0 | 25 | 0.6021 | 0.6671 | 0.0 | 0.0 | 0.0 | 0.5 |
| 0.4255 | 2.0 | 50 | 0.3616 | 0.8946 | 0.8353 | 0.8031 | 0.8703 | 0.8717 |
| 0.2818 | 3.0 | 75 | 0.2559 | 0.9139 | 0.8571 | 0.7761 | 0.9571 | 0.8794 |
| 0.2035 | 4.0 | 100 | 0.2363 | 0.9190 | 0.8706 | 0.8185 | 0.9298 | 0.8939 |
| 0.1596 | 5.0 | 125 | 0.2638 | 0.9126 | 0.8677 | 0.8610 | 0.8745 | 0.8997 |
| 0.1866 | 6.0 | 150 | 0.2841 | 0.9152 | 0.8648 | 0.8147 | 0.9214 | 0.8900 |

### Framework versions

- Transformers 4.56.0.dev0
- Pytorch 2.8.0+cu126
- Datasets 4.0.0
- Tokenizers 0.21.4
kuleshov-group/PlantCaduceus_l24
kuleshov-group
2025-08-19T23:18:53Z
1
0
transformers
[ "transformers", "pytorch", "caduceus", "feature-extraction", "custom_code", "arxiv:2312.00752", "license:apache-2.0", "region:us" ]
feature-extraction
2024-05-19T16:24:45Z
---
license: apache-2.0
---

## Model Overview
PlantCaduceus is a DNA language model pre-trained on 16 Angiosperm genomes. Utilizing the [Caduceus](https://caduceus-dna.github.io/) and [Mamba](https://arxiv.org/abs/2312.00752) architectures and a masked language modeling objective, PlantCaduceus is designed to learn evolutionary conservation and DNA sequence grammar from 16 species spanning a history of 160 million years.

We have trained a series of PlantCaduceus models with varying parameter sizes:

- **[PlantCaduceus_l20](https://huggingface.co/kuleshov-group/PlantCaduceus_l20)**: 20 layers, 384 hidden size, 20M parameters
- **[PlantCaduceus_l24](https://huggingface.co/kuleshov-group/PlantCaduceus_l24)**: 24 layers, 512 hidden size, 40M parameters
- **[PlantCaduceus_l28](https://huggingface.co/kuleshov-group/PlantCaduceus_l28)**: 28 layers, 768 hidden size, 112M parameters
- **[PlantCaduceus_l32](https://huggingface.co/kuleshov-group/PlantCaduceus_l32)**: 32 layers, 1024 hidden size, 225M parameters

**We would highly recommend using the largest model ([PlantCaduceus_l32](https://huggingface.co/kuleshov-group/PlantCaduceus_l32)) for the zero-shot score estimation.**

## How to use
```python
from transformers import AutoModel, AutoModelForMaskedLM, AutoTokenizer
import torch

model_path = 'kuleshov-group/PlantCaduceus_l24'
device = "cuda:0" if torch.cuda.is_available() else "cpu"
model = AutoModelForMaskedLM.from_pretrained(model_path, trust_remote_code=True, device_map=device)
model.eval()
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

sequence = "ATGCGTACGATCGTAG"
encoding = tokenizer.encode_plus(
    sequence,
    return_tensors="pt",
    return_attention_mask=False,
    return_token_type_ids=False
)
input_ids = encoding["input_ids"].to(device)
with torch.inference_mode():
    outputs = model(input_ids=input_ids, output_hidden_states=True)
```

## Citation
```bibtex
@article{Zhai2025CrossSpecies,
  author  = {Zhai, Jingjing and Gokaslan, Aaron and Schiff, Yoni and Berthel, Alexander and Liu, Z. Y. and Lai, W. L. and Miller, Z. R. and Scheben, Armin and Stitzer, Michelle C. and Romay, Maria C. and Buckler, Edward S. and Kuleshov, Volodymyr},
  title   = {Cross-species modeling of plant genomes at single nucleotide resolution using a pretrained DNA language model},
  journal = {Proceedings of the National Academy of Sciences},
  year    = {2025},
  volume  = {122},
  number  = {24},
  pages   = {e2421738122},
  doi     = {10.1073/pnas.2421738122},
  url     = {https://doi.org/10.1073/pnas.2421738122}
}
```

## Contact
Jingjing Zhai (jz963@cornell.edu)
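The card recommends PlantCaduceus_l32 for zero-shot score estimation but does not show the scoring code. The sketch below illustrates one common masked-LM recipe (log-likelihood ratio between alleles at a masked position), continuing from the `model`, `tokenizer`, and `device` defined above; the masking and indexing details are assumptions, not necessarily the authors' exact procedure.

```python
# Illustrative sketch only, continuing from the snippet above; assumes one
# token per base, no added special tokens, and a tokenizer-defined mask token.
import torch

seq = "ATGCGTACGATCGTAG"
pos, ref, alt = 7, "C", "T"  # hypothetical SNP: seq[7] == "C" is the reference base

input_ids = tokenizer(seq, return_tensors="pt")["input_ids"].to(device)
masked = input_ids.clone()
masked[0, pos] = tokenizer.mask_token_id

with torch.inference_mode():
    logits = model(input_ids=masked).logits

log_probs = torch.log_softmax(logits[0, pos], dim=-1)
score = (log_probs[tokenizer.convert_tokens_to_ids(alt)]
         - log_probs[tokenizer.convert_tokens_to_ids(ref)])
print(f"zero-shot log-likelihood ratio (alt vs ref): {score.item():.4f}")
```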
QuantStack/Qwen-Image-Edit-GGUF
QuantStack
2025-08-19T23:16:47Z
0
41
gguf
[ "gguf", "image-to-image", "en", "zh", "base_model:Qwen/Qwen-Image-Edit", "base_model:quantized:Qwen/Qwen-Image-Edit", "license:apache-2.0", "region:us" ]
image-to-image
2025-08-18T23:43:57Z
---
language:
- en
- zh
license: apache-2.0
base_model:
- Qwen/Qwen-Image-Edit
library_name: gguf
pipeline_tag: image-to-image
---

This GGUF file is a direct conversion of [Qwen/Qwen-Image-Edit](https://huggingface.co/Qwen/Qwen-Image-Edit).

| Type | Name | Location | Download |
| ---- | ---- | -------- | -------- |
| Main Model | Qwen-Image | `ComfyUI/models/unet` | GGUF (this repo) |
| Main Text Encoder | Qwen2.5-VL-7B | `ComfyUI/models/text_encoders` | [Safetensors](https://huggingface.co/Comfy-Org/Qwen-Image_ComfyUI/tree/main/split_files/text_encoders) / [GGUF](https://huggingface.co/unsloth/Qwen2.5-VL-7B-Instruct-GGUF/tree/main) |
| Text Encoder (mmproj) | Qwen2.5-VL-7B-Instruct-mmproj-BF16 | `ComfyUI/models/text_encoders` (same folder as your main text encoder) | GGUF (this repo) |
| VAE | Qwen-Image VAE | `ComfyUI/models/vae` | Safetensors (this repo) |

Since this is a quantized model, all original licensing terms and usage restrictions remain in effect.

**Usage**

The model can be used with the ComfyUI custom node [ComfyUI-GGUF](https://github.com/city96/ComfyUI-GGUF) by [city96](https://huggingface.co/city96).
soob3123/Veritas-task-trade-off-agent
soob3123
2025-08-19T23:14:43Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "trl", "sft", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2025-08-19T23:14:03Z
--- library_name: transformers tags: - trl - sft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
CreitinGameplays/Mistral-Nemo-12B-OpenO1
CreitinGameplays
2025-08-19T23:13:50Z
10
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "en", "dataset:CreitinGameplays/O1-OPEN_OpenO1-SFT-Pro-English-Mistral", "base_model:mistralai/Mistral-Nemo-Instruct-2407", "base_model:finetune:mistralai/Mistral-Nemo-Instruct-2407", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-24T09:18:23Z
--- license: mit datasets: - CreitinGameplays/O1-OPEN_OpenO1-SFT-Pro-English-Mistral language: - en base_model: - mistralai/Mistral-Nemo-Instruct-2407 pipeline_tag: text-generation library_name: transformers ---
torchao-testing/single-linear-Float8DynamicActivationFloat8WeightConfig-v2-0.13-dev
torchao-testing
2025-08-19T23:11:02Z
0
0
null
[ "region:us" ]
null
2025-08-19T23:10:48Z
```
import torch
import io

model = torch.nn.Sequential(torch.nn.Linear(32, 256, dtype=torch.bfloat16, device="cuda"))

from torchao.quantization import quantize_, Float8DynamicActivationFloat8WeightConfig, PerRow
quant_config = Float8DynamicActivationFloat8WeightConfig(granularity=PerRow())
quantize_(model, quant_config)

example_inputs = (torch.randn(2, 32, dtype=torch.bfloat16, device="cuda"),)
output = model(*example_inputs)

# Push to hub
USER_ID = "torchao-testing"
MODEL_NAME = "single-linear"
save_to = f"{USER_ID}/{MODEL_NAME}-Float8DynamicActivationFloat8WeightConfig-v2-0.13.dev"

from huggingface_hub import HfApi
api = HfApi()

buf = io.BytesIO()
torch.save(model.state_dict(), buf)
api.create_repo(save_to, repo_type="model", exist_ok=True)
api.upload_file(
    path_or_fileobj=buf,
    path_in_repo="model.bin",
    repo_id=save_to,
)

buf = io.BytesIO()
torch.save(example_inputs, buf)
api.upload_file(
    path_or_fileobj=buf,
    path_in_repo="model_inputs.pt",
    repo_id=save_to,
)

buf = io.BytesIO()
torch.save(output, buf)
api.upload_file(
    path_or_fileobj=buf,
    path_in_repo="model_output.pt",
    repo_id=save_to,
)
```
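A hedged sketch of the reverse direction, re-downloading the three uploaded artifacts and checking the stored output; it assumes `torch.load` may unpickle the torchao tensor subclasses (`weights_only=False`) and that `load_state_dict(..., assign=True)` accepts them.

```python
# Hedged sketch: pull the uploaded artifacts back down and check that the
# rebuilt quantized model reproduces the stored output.
import torch
from huggingface_hub import hf_hub_download
from torchao.quantization import quantize_, Float8DynamicActivationFloat8WeightConfig, PerRow

repo = "torchao-testing/single-linear-Float8DynamicActivationFloat8WeightConfig-v2-0.13-dev"
# weights_only=False is needed to unpickle torchao tensor subclasses;
# only do this for repositories you trust.
state_dict = torch.load(hf_hub_download(repo, "model.bin"), weights_only=False)
example_inputs = torch.load(hf_hub_download(repo, "model_inputs.pt"), weights_only=False)
expected = torch.load(hf_hub_download(repo, "model_output.pt"), weights_only=False)

# Rebuild the same architecture and quantization as in the snippet above.
model = torch.nn.Sequential(torch.nn.Linear(32, 256, dtype=torch.bfloat16, device="cuda"))
quantize_(model, Float8DynamicActivationFloat8WeightConfig(granularity=PerRow()))
model.load_state_dict(state_dict, assign=True)  # assign=True keeps the loaded subclasses

torch.testing.assert_close(model(*example_inputs), expected)
```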
ihsanridzi/blockassist-bc-wiry_flexible_owl_1755643478
ihsanridzi
2025-08-19T23:10:39Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "wiry flexible owl", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T23:10:36Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - wiry flexible owl --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
seraphimzzzz/58348
seraphimzzzz
2025-08-19T23:05:52Z
0
0
null
[ "region:us" ]
null
2025-08-19T23:05:49Z
[View on Civ Archive](https://civarchive.com/models/80554?modelVersionId=85436)
lilTAT/blockassist-bc-gentle_rugged_hare_1755644677
lilTAT
2025-08-19T23:05:06Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "gentle rugged hare", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T23:05:03Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - gentle rugged hare --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
seraphimzzzz/45661
seraphimzzzz
2025-08-19T23:05:05Z
0
0
null
[ "region:us" ]
null
2025-08-19T23:05:02Z
[View on Civ Archive](https://civarchive.com/models/60593?modelVersionId=65063)
ultratopaz/64761
ultratopaz
2025-08-19T23:04:48Z
0
0
null
[ "region:us" ]
null
2025-08-19T23:04:45Z
[View on Civ Archive](https://civarchive.com/models/88038?modelVersionId=93695)
crystalline7/42524
crystalline7
2025-08-19T23:03:12Z
0
0
null
[ "region:us" ]
null
2025-08-19T23:03:09Z
[View on Civ Archive](https://civarchive.com/models/55594?modelVersionId=59988)
ultratopaz/48758
ultratopaz
2025-08-19T23:02:55Z
0
0
null
[ "region:us" ]
null
2025-08-19T23:02:52Z
[View on Civ Archive](https://civarchive.com/models/65216?modelVersionId=69845)
crystalline7/126289
crystalline7
2025-08-19T23:02:24Z
0
0
null
[ "region:us" ]
null
2025-08-19T23:02:22Z
[View on Civ Archive](https://civarchive.com/models/149263?modelVersionId=166671)
thiernomdou/Karamoo
thiernomdou
2025-08-19T23:02:22Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-08-19T22:53:37Z
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
#   - text: >-
#       prompt
#     output:
#       url: https://...
instance_prompt: karamoo
---

# Karamoo

<Gallery />

## About this LoRA

This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.

It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train

## Trigger words

You should use `karamoo` to trigger the image generation.

## Run this LoRA with an API using Replicate

```py
import replicate

input = {
    "prompt": "karamoo",
    "lora_weights": "https://huggingface.co/thiernomdou/Karamoo/resolve/main/lora.safetensors"
}

output = replicate.run(
    "black-forest-labs/flux-dev-lora",
    input=input
)
for index, item in enumerate(output):
    with open(f"output_{index}.webp", "wb") as file:
        file.write(item.read())
```

## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)

```py
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('thiernomdou/Karamoo', weight_name='lora.safetensors')
image = pipeline('karamoo').images[0]
```

For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)

## Training details

- Steps: 1000
- Learning rate: 0.0004
- LoRA rank: 16

## Contribute your own examples

You can use the [community tab](https://huggingface.co/thiernomdou/Karamoo/discussions) to add images that show off what you've made with this LoRA.
ultratopaz/93085
ultratopaz
2025-08-19T23:01:42Z
0
0
null
[ "region:us" ]
null
2025-08-19T23:01:35Z
[View on Civ Archive](https://civarchive.com/models/118503?modelVersionId=128554)
dsdsdsdfffff/code_ffn_random
dsdsdsdfffff
2025-08-19T23:00:50Z
0
0
transformers
[ "transformers", "safetensors", "deepseek_v2", "text-generation", "conversational", "custom_code", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-19T22:46:42Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
ultratopaz/42807
ultratopaz
2025-08-19T23:00:20Z
0
0
null
[ "region:us" ]
null
2025-08-19T23:00:17Z
[View on Civ Archive](https://civarchive.com/models/56050?modelVersionId=60448)
ultratopaz/63808
ultratopaz
2025-08-19T22:57:34Z
0
0
null
[ "region:us" ]
null
2025-08-19T22:57:32Z
[View on Civ Archive](https://civarchive.com/models/86958?modelVersionId=92511)
koloni/blockassist-bc-deadly_graceful_stingray_1755642634
koloni
2025-08-19T22:57:12Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "deadly graceful stingray", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T22:57:08Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - deadly graceful stingray --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
seraphimzzzz/778755
seraphimzzzz
2025-08-19T22:57:06Z
0
0
null
[ "region:us" ]
null
2025-08-19T22:57:03Z
[View on Civ Archive](https://civarchive.com/models/376220?modelVersionId=869936)
crystalline7/70066
crystalline7
2025-08-19T22:56:59Z
0
0
null
[ "region:us" ]
null
2025-08-19T22:56:56Z
[View on Civ Archive](https://civarchive.com/models/94074?modelVersionId=100354)
crystalline7/112237
crystalline7
2025-08-19T22:56:46Z
0
0
null
[ "region:us" ]
null
2025-08-19T22:56:44Z
[View on Civ Archive](https://civarchive.com/models/136795?modelVersionId=150921)
crystalline7/36212
crystalline7
2025-08-19T22:56:18Z
0
0
null
[ "region:us" ]
null
2025-08-19T22:56:14Z
[View on Civ Archive](https://civarchive.com/models/44543?modelVersionId=49168)
crystalline7/391174
crystalline7
2025-08-19T22:55:45Z
0
0
null
[ "region:us" ]
null
2025-08-19T22:55:36Z
[View on Civ Archive](https://civarchive.com/models/424163?modelVersionId=472584)
chainway9/blockassist-bc-untamed_quick_eel_1755642411
chainway9
2025-08-19T22:55:05Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "untamed quick eel", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T22:55:01Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - untamed quick eel --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
vwzyrraz7l/blockassist-bc-tall_hunting_vulture_1755642454
vwzyrraz7l
2025-08-19T22:53:41Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "tall hunting vulture", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T22:53:37Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - tall hunting vulture --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
seraphimzzzz/30194
seraphimzzzz
2025-08-19T22:53:08Z
0
0
null
[ "region:us" ]
null
2025-08-19T22:53:04Z
[View on Civ Archive](https://civarchive.com/models/32091?modelVersionId=38532)
crystalline7/25213
crystalline7
2025-08-19T22:52:16Z
0
0
null
[ "region:us" ]
null
2025-08-19T22:52:11Z
[View on Civ Archive](https://civarchive.com/models/25513?modelVersionId=30545)
coelacanthxyz/blockassist-bc-finicky_thriving_grouse_1755642210
coelacanthxyz
2025-08-19T22:52:07Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "finicky thriving grouse", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T22:52:02Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - finicky thriving grouse --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
crystalline7/56907
crystalline7
2025-08-19T22:49:21Z
0
0
null
[ "region:us" ]
null
2025-08-19T22:49:18Z
[View on Civ Archive](https://civarchive.com/models/24156?modelVersionId=83175)
crystalline7/26964
crystalline7
2025-08-19T22:48:34Z
0
0
null
[ "region:us" ]
null
2025-08-19T22:48:29Z
[View on Civ Archive](https://civarchive.com/models/27347?modelVersionId=32745)
coastalcph/Qwen2.5-7B-1t_gcd_sycophancy-8t_diff_sycophant
coastalcph
2025-08-19T22:48:14Z
0
0
null
[ "safetensors", "qwen2", "region:us" ]
null
2025-08-19T22:45:39Z
# Combined Task Vector Model

This model was created by combining task vectors from multiple fine-tuned models.

## Task Vector Computation

```python
t_1 = TaskVector("Qwen/Qwen2.5-7B-Instruct", "coastalcph/Qwen2.5-7B-gcd_sycophancy")
t_2 = TaskVector("Qwen/Qwen2.5-7B-Instruct", "coastalcph/Qwen2.5-7B-personality-non-sycophancy")
t_3 = TaskVector("Qwen/Qwen2.5-7B-Instruct", "coastalcph/Qwen2.5-7B-personality-sycophancy")  # finetuned_model3 from the args below
t_combined = 1.0 * t_1 + 8.0 * t_2 - 8.0 * t_3
new_model = t_combined.apply_to("Qwen/Qwen2.5-7B-Instruct", scaling_coef=1.0)
```

## Models Used

- Base Model: https://huggingface.co/Qwen/Qwen2.5-7B-Instruct
- Fine-tuned Model 1: https://huggingface.co/coastalcph/Qwen2.5-7B-gcd_sycophancy
- Fine-tuned Model 2: https://huggingface.co/coastalcph/Qwen2.5-7B-personality-non-sycophancy
- Fine-tuned Model 3: https://huggingface.co/coastalcph/Qwen2.5-7B-personality-sycophancy

## Technical Details

- Creation Script Git Hash: 6276125324033067e34f3eae1fe4db8ab27c86fb
- Task Vector Method: Additive combination
- Args:

```json
{
  "pretrained_model": "Qwen/Qwen2.5-7B-Instruct",
  "finetuned_model1": "coastalcph/Qwen2.5-7B-gcd_sycophancy",
  "finetuned_model2": "coastalcph/Qwen2.5-7B-personality-non-sycophancy",
  "finetuned_model3": "coastalcph/Qwen2.5-7B-personality-sycophancy",
  "output_model_name": "coastalcph/Qwen2.5-7B-1t_gcd_sycophancy-8t_diff_sycophant",
  "output_dir": "/projects/nlp/data/constanzam/weight-interp/task-vectors/math_non_sycophant_12Aug",
  "scaling_coef": 1.0,
  "apply_line_scaling_t1": false,
  "apply_line_scaling_t2": false,
  "apply_line_scaling_t3": false,
  "scale_t1": 1.0,
  "scale_t2": 8.0,
  "scale_t3": 8.0
}
```
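The `TaskVector` class itself is not shown in the card; the sketch below is a minimal, assumed implementation of the usual task-vector arithmetic (parameter deltas combined in state-dict space), not the creation script's actual code.

```python
# Minimal sketch of task-vector arithmetic (assumed, not the card's actual code).
# Note: each TaskVector construction loads two full models into memory.
from transformers import AutoModelForCausalLM

class TaskVector:
    def __init__(self, base_id=None, finetuned_id=None, vector=None):
        if vector is not None:
            self.vector = vector
            return
        base = AutoModelForCausalLM.from_pretrained(base_id).state_dict()
        tuned = AutoModelForCausalLM.from_pretrained(finetuned_id).state_dict()
        # The task vector is the parameter-wise delta: finetuned - base.
        self.vector = {k: tuned[k] - base[k] for k in base}

    def __mul__(self, c):
        return TaskVector(vector={k: c * v for k, v in self.vector.items()})

    __rmul__ = __mul__  # supports "8.0 * t" as in the card's snippet

    def __add__(self, other):
        return TaskVector(vector={k: v + other.vector[k] for k, v in self.vector.items()})

    def __sub__(self, other):
        return self + (-1.0) * other

    def apply_to(self, base_id, scaling_coef=1.0):
        # Add the (scaled) combined delta back onto the base model's weights.
        model = AutoModelForCausalLM.from_pretrained(base_id)
        sd = model.state_dict()
        for k, v in self.vector.items():
            sd[k] = sd[k] + scaling_coef * v
        model.load_state_dict(sd)
        return model
```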
ultratopaz/59135
ultratopaz
2025-08-19T22:48:06Z
0
0
null
[ "region:us" ]
null
2025-08-19T22:48:03Z
[View on Civ Archive](https://civarchive.com/models/81526?modelVersionId=86507)
crystalline7/68233
crystalline7
2025-08-19T22:47:39Z
0
0
null
[ "region:us" ]
null
2025-08-19T22:47:36Z
[View on Civ Archive](https://civarchive.com/models/91940?modelVersionId=98028)
dsisodia/ai-nidhi
dsisodia
2025-08-19T22:47:03Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-08-19T21:45:07Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: AINIDHI --- # Ai Nidhi <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `AINIDHI` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "AINIDHI", "lora_weights": "https://huggingface.co/dsisodia/ai-nidhi/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('dsisodia/ai-nidhi', weight_name='lora.safetensors') image = pipeline('AINIDHI').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Training details - Steps: 1200 - Learning rate: 0.0004 - LoRA rank: 32 ## Contribute your own examples You can use the [community tab](https://huggingface.co/dsisodia/ai-nidhi/discussions) to add images that show off what you've made with this LoRA.
roeker/blockassist-bc-quick_wiry_owl_1755643537
roeker
2025-08-19T22:47:01Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "quick wiry owl", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T22:46:22Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - quick wiry owl --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
seraphimzzzz/100080
seraphimzzzz
2025-08-19T22:45:55Z
0
0
null
[ "region:us" ]
null
2025-08-19T22:45:52Z
[View on Civ Archive](https://civarchive.com/models/125326?modelVersionId=136894)
seraphimzzzz/79815
seraphimzzzz
2025-08-19T22:45:48Z
0
0
null
[ "region:us" ]
null
2025-08-19T22:45:45Z
[View on Civ Archive](https://civarchive.com/models/104933?modelVersionId=112523)
ultratopaz/19827
ultratopaz
2025-08-19T22:45:30Z
0
0
null
[ "region:us" ]
null
2025-08-19T22:45:26Z
[View on Civ Archive](https://civarchive.com/models/20123?modelVersionId=23901)
crystalline7/63724
crystalline7
2025-08-19T22:45:01Z
0
0
null
[ "region:us" ]
null
2025-08-19T22:45:01Z
[View on Civ Archive](https://civarchive.com/models/86858?modelVersionId=92401)
crystalline7/59070
crystalline7
2025-08-19T22:44:15Z
0
0
null
[ "region:us" ]
null
2025-08-19T22:44:11Z
[View on Civ Archive](https://civarchive.com/models/75223?modelVersionId=86425)
crystalline7/54021
crystalline7
2025-08-19T22:43:46Z
0
0
null
[ "region:us" ]
null
2025-08-19T22:43:43Z
[View on Civ Archive](https://civarchive.com/models/73874?modelVersionId=78592)
seraphimzzzz/640572
seraphimzzzz
2025-08-19T22:42:39Z
0
0
null
[ "region:us" ]
null
2025-08-19T22:42:37Z
[View on Civ Archive](https://civarchive.com/models/462107?modelVersionId=726101)
ultratopaz/55192
ultratopaz
2025-08-19T22:40:51Z
0
0
null
[ "region:us" ]
null
2025-08-19T22:40:48Z
[View on Civ Archive](https://civarchive.com/models/75721?modelVersionId=80467)
crystalline7/33987
crystalline7
2025-08-19T22:40:23Z
0
0
null
[ "region:us" ]
null
2025-08-19T22:40:23Z
[View on Civ Archive](https://civarchive.com/models/39434?modelVersionId=45341)
rvs/llama3_awq_int4_complete
rvs
2025-08-19T22:39:35Z
0
0
null
[ "onnx", "text-generation-inference", "llama", "llama3", "base_model:meta-llama/Meta-Llama-3-8B-Instruct", "base_model:quantized:meta-llama/Meta-Llama-3-8B-Instruct", "region:us" ]
null
2025-08-19T22:39:00Z
--- tags: - text-generation-inference - llama - llama3 base_model: - meta-llama/Meta-Llama-3-8B-Instruct --- # Llama 3 8B Instruct with Key-Value-Cache enabled in ONNX ONNX AWQ (4-bit) format - Model creator: [Meta Llama](https://huggingface.co/meta-llama) - Original model: [Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) <!-- description start --> ## Description This repo contains the ONNX files for the ONNX conversion of Llama 3 8B Instruct done by Esperanto Technologies. The model is quantized to 4-bit with AWQ and has the key-value cache (KVC) enabled. ### About AWQ AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality than the most commonly used GPTQ settings. More here: [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) <!-- description end --> ## How to download ONNX model and weight files The easiest way to obtain the model is to clone this whole repo. Alternatively, you can download the files using the `huggingface-hub` Python library. ```shell pip3 install huggingface-hub>=0.17.1 ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download Esperanto/llama3-8b-Instruct-kvc-AWQ-int4-onnx --local-dir llama3-8b-Instruct-kvc-AWQ-int4-onnx --local-dir-use-symlinks False ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). ## How to run from Python code using ONNXRuntime This model can easily be run on a CPU using [ONNXRuntime](https://onnxruntime.ai/).
#### First install the packages ```bash pip3 install onnx==1.16.1 pip3 install onnxruntime==1.17.1 ``` #### Example code: generate text with this model We define the loop with greedy decoding: ```python import numpy as np import onnxruntime import onnx from transformers import AutoTokenizer def generate_text(model_path, prompt, tokenizer, max_gen_tokens, total_sequence, window, context): model = onnx.load(model_path) #we create the inputs for the first iteration input_tensor = tokenizer(prompt, return_tensors="pt") prompt_size = len(input_tensor['input_ids'][0]) actual_input = input_tensor['input_ids'].numpy() #work with numpy arrays throughout if prompt_size < window: actual_input = np.concatenate((tokenizer.bos_token_id*np.ones([1, window - prompt_size], dtype = 'int64'), actual_input), axis=1) if prompt_size + max_gen_tokens > total_sequence: print("ERROR: Longer total sequence is needed!") return first_attention = np.concatenate((np.zeros([1, total_sequence - window], dtype = 'int64'), np.ones((1, window), dtype = 'int64')), axis=1) max_gen_tokens += prompt_size #we need to generate on top of parsing the prompt inputs_names = [node.name for node in model.graph.input] output_names = [node.name for node in model.graph.output] n_heads = 8 #gqa-heads of the kvc inputs_dict = {} inputs_dict['input_ids'] = actual_input[:, :window].reshape(1, window) inputs_dict['attention_mask'] = first_attention index_pos = sum(first_attention[0]) inputs_dict['position_ids'] = np.concatenate((np.zeros([1, total_sequence - index_pos], dtype = 'int64'), np.arange(index_pos, dtype = 'int64').reshape(1, index_pos)), axis=1) inputs_dict['tree_attention'] = np.triu(-65504*np.ones((total_sequence, total_sequence)), k=1).astype('float16').reshape(1, 1, total_sequence, total_sequence) for name in inputs_names: if name == 'input_ids' or name == 'attention_mask' or name == 'position_ids' or name == 'tree_attention': continue inputs_dict[name] = np.zeros([1, n_heads, context-window, 128], dtype="float16") index = 0 new_token = np.array([10]) next_index = window old_j = 0 total_input = actual_input rt_session = onnxruntime.InferenceSession(model_path) ## We run the inferences while next_index < max_gen_tokens: if (new_token == tokenizer.eos_token_id).any(): #stop once an end-of-sequence token is produced break #inference output = rt_session.run(output_names, inputs_dict) outs_dictionary = {name: content for (name, content) in zip(output_names, output)} #we prepare the inputs for the next inference for name in inputs_names: if name == 'input_ids': old_j = next_index if next_index < prompt_size: if prompt_size - next_index >= window: next_index += window else: next_index = prompt_size j = next_index - window else: next_index += 1 j = next_index - window new_token = outs_dictionary['logits'].argmax(-1).reshape(1, window) total_input = np.concatenate((total_input, new_token[: , -1:]), axis = 1) inputs_dict['input_ids'] = total_input[:, j:next_index].reshape(1, window) elif name == 'attention_mask': inputs_dict['attention_mask'] = np.concatenate((np.zeros((1, total_sequence-next_index), dtype = 'int64'), np.ones((1, next_index), dtype = 'int64')), axis=1) elif name == 'position_ids': inputs_dict['position_ids'] = np.concatenate((np.zeros([1, total_sequence - next_index], dtype = 'int64'), np.arange(next_index, dtype = 'int64').reshape(1, next_index)), axis=1) elif name == 'tree_attention': continue else: old_name = name.replace("past_key_values", "present") inputs_dict[name] = outs_dictionary[old_name][:, :, next_index-old_j:context-window+(next_index - old_j), :] answer = tokenizer.decode(total_input[0], skip_special_tokens=True, clean_up_tokenization_spaces=False) return answer ``` We now run the inferences: ```python tokenizer = AutoTokenizer.from_pretrained("Esperanto/llama3-8b-Instruct-kvc-AWQ-int4-onnx") model_path = "llama3-8b-Instruct-kvc-AWQ-int4-onnx/model.onnx" max_gen_tokens = 20 #number of tokens we want to generate total_sequence = 128 #total sequence length context = 1024 #the context to extend the kvc window = 16 #number of tokens we want to parse at a time messages = [ {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"}, {"role": "user", "content": "Who are you?"}, ] prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) generated = generate_text(model_path, prompt, tokenizer, max_gen_tokens, total_sequence, window, context) print(generated) ```
calegpedia/blockassist-bc-stealthy_slimy_rooster_1755640986
calegpedia
2025-08-19T22:30:58Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "stealthy slimy rooster", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T22:30:55Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - stealthy slimy rooster --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Muapi/wizards-vintage-rustica-illustration
Muapi
2025-08-19T22:29:55Z
0
0
null
[ "lora", "stable-diffusion", "flux.1-d", "license:openrail++", "region:us" ]
null
2025-08-19T22:29:37Z
--- license: openrail++ tags: - lora - stable-diffusion - flux.1-d model_type: LoRA --- # Wizards Vintage: Rustica Illustration ![preview](./preview.jpg) **Base model**: Flux.1 D **Trained words**: vintage rustica illustration ## 🧠 Usage (Python) 🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys) ```python import requests, os url = "https://api.muapi.ai/api/v1/flux_dev_lora_image" headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")} payload = { "prompt": "masterpiece, best quality, 1girl, looking at viewer", "model_id": [{"model": "civitai:922242@1032312", "weight": 1.0}], "width": 1024, "height": 1024, "num_images": 1 } print(requests.post(url, headers=headers, json=payload).json()) ```
Muapi/norman-rockwell
Muapi
2025-08-19T22:29:07Z
0
0
null
[ "lora", "stable-diffusion", "flux.1-d", "license:openrail++", "region:us" ]
null
2025-08-19T22:28:53Z
--- license: openrail++ tags: - lora - stable-diffusion - flux.1-d model_type: LoRA --- # Norman Rockwell ![preview](./preview.jpg) **Base model**: Flux.1 D **Trained words**: Art by Norman Rockwell ## 🧠 Usage (Python) 🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys) ```python import requests, os url = "https://api.muapi.ai/api/v1/flux_dev_lora_image" headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")} payload = { "prompt": "masterpiece, best quality, 1girl, looking at viewer", "model_id": [{"model": "civitai:1397494@1579617", "weight": 1.0}], "width": 1024, "height": 1024, "num_images": 1 } print(requests.post(url, headers=headers, json=payload).json()) ```
Muapi/mistoon-anime-flux
Muapi
2025-08-19T22:27:07Z
0
0
null
[ "lora", "stable-diffusion", "flux.1-d", "license:openrail++", "region:us" ]
null
2025-08-19T22:26:53Z
--- license: openrail++ tags: - lora - stable-diffusion - flux.1-d model_type: LoRA --- # Mistoon Anime Flux ![preview](./preview.jpg) **Base model**: Flux.1 D **Trained words**: ## 🧠 Usage (Python) 🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys) ```python import requests, os url = "https://api.muapi.ai/api/v1/flux_dev_lora_image" headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")} payload = { "prompt": "masterpiece, best quality, 1girl, looking at viewer", "model_id": [{"model": "civitai:682107@763448", "weight": 1.0}], "width": 1024, "height": 1024, "num_images": 1 } print(requests.post(url, headers=headers, json=payload).json()) ```
Muapi/sd3.5-sdxl-flux-pika-s-battlefield-style-v2
Muapi
2025-08-19T22:26:47Z
0
0
null
[ "lora", "stable-diffusion", "flux.1-d", "license:openrail++", "region:us" ]
null
2025-08-19T22:26:35Z
--- license: openrail++ tags: - lora - stable-diffusion - flux.1-d model_type: LoRA --- # [SD3.5\SDXL\FLUX] Pika's BattleField Style V2 ![preview](./preview.jpg) **Base model**: Flux.1 D **Trained words**: pikas_bf_v3 ## 🧠 Usage (Python) 🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys) ```python import requests, os url = "https://api.muapi.ai/api/v1/flux_dev_lora_image" headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")} payload = { "prompt": "masterpiece, best quality, 1girl, looking at viewer", "model_id": [{"model": "civitai:245121@859001", "weight": 1.0}], "width": 1024, "height": 1024, "num_images": 1 } print(requests.post(url, headers=headers, json=payload).json()) ```
quantumxnode/blockassist-bc-dormant_peckish_seahorse_1755640752
quantumxnode
2025-08-19T22:26:13Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "dormant peckish seahorse", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T22:26:10Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - dormant peckish seahorse --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
AnonymousCS/xlmr_immigration_combo5_4
AnonymousCS
2025-08-19T22:25:17Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "xlm-roberta", "text-classification", "generated_from_trainer", "base_model:FacebookAI/xlm-roberta-large", "base_model:finetune:FacebookAI/xlm-roberta-large", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-08-19T22:21:55Z
--- library_name: transformers license: mit base_model: FacebookAI/xlm-roberta-large tags: - generated_from_trainer metrics: - accuracy model-index: - name: xlmr_immigration_combo5_4 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlmr_immigration_combo5_4 This model is a fine-tuned version of [FacebookAI/xlm-roberta-large](https://huggingface.co/FacebookAI/xlm-roberta-large) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0656 - Accuracy: 0.9743 - 1-f1: 0.9614 - 1-recall: 0.9614 - 1-precision: 0.9614 - Balanced Acc: 0.9711 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 15 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | 1-f1 | 1-recall | 1-precision | Balanced Acc | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:--------:|:-----------:|:------------:| | 0.0848 | 1.0 | 25 | 0.0535 | 0.9846 | 0.9763 | 0.9537 | 1.0 | 0.9768 | | 0.0667 | 2.0 | 50 | 0.0565 | 0.9859 | 0.9783 | 0.9575 | 1.0 | 0.9788 | | 0.0302 | 3.0 | 75 | 0.0656 | 0.9743 | 0.9614 | 0.9614 | 0.9614 | 0.9711 | ### Framework versions - Transformers 4.56.0.dev0 - Pytorch 2.8.0+cu126 - Datasets 4.0.0 - Tokenizers 0.21.4
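The auto-generated card stops at the training summary. A minimal inference sketch, assuming the checkpoint loads as a standard sequence-classification model (the label semantics are not documented in the card):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

repo = "AnonymousCS/xlmr_immigration_combo5_4"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

# Classify one sentence; which index corresponds to class "1" is not documented.
inputs = tokenizer("Example sentence to classify.", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)
```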
crystalline7/22906
crystalline7
2025-08-19T22:19:08Z
0
0
null
[ "region:us" ]
null
2025-08-19T22:19:03Z
[View on Civ Archive](https://civarchive.com/models/12757?modelVersionId=27712)
coelacanthxyz/blockassist-bc-finicky_thriving_grouse_1755640227
coelacanthxyz
2025-08-19T22:18:34Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "finicky thriving grouse", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T22:18:28Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - finicky thriving grouse --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
crystalline7/39657
crystalline7
2025-08-19T22:17:46Z
0
0
null
[ "region:us" ]
null
2025-08-19T22:17:43Z
[View on Civ Archive](https://civarchive.com/models/50517?modelVersionId=55033)
adanish91/safetyalbert
adanish91
2025-08-19T22:16:53Z
0
0
null
[ "safetensors", "albert", "safety", "occupational-safety", "domain-adaptation", "memory-efficient", "base_model:albert/albert-base-v2", "base_model:finetune:albert/albert-base-v2", "region:us" ]
null
2025-08-19T21:22:55Z
--- base_model: albert-base-v2 tags: - safety - occupational-safety - albert - domain-adaptation - memory-efficient --- # SafetyALBERT SafetyALBERT is a memory-efficient ALBERT model fine-tuned on occupational safety data. With only 12M parameters, it delivers strong performance on safety-domain NLP tasks. ## Quick Start ```python from transformers import AutoTokenizer, AutoModelForMaskedLM tokenizer = AutoTokenizer.from_pretrained("albert-base-v2") model = AutoModelForMaskedLM.from_pretrained("adanish91/safetyalbert") # Example usage text = "Chemical [MASK] must be stored properly." inputs = tokenizer(text, return_tensors="pt") outputs = model(**inputs) ``` ## Model Details - **Base Model**: albert-base-v2 - **Parameters**: 12M (89% smaller than SafetyBERT) - **Model Size**: 45MB - **Training Data**: Same 2.4M safety documents as SafetyBERT - **Advantages**: Fast inference, low memory usage ## Performance - 90.3% improvement in pseudo-perplexity over ALBERT-base - Competitive with SafetyBERT despite 9x fewer parameters - Ideal for production deployment and edge devices ## Applications - Occupational safety-related downstream applications - Resource-constrained environments
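To turn the Quick Start outputs into an actual fill-in prediction, one possible follow-up (a minimal sketch using standard transformers/PyTorch calls, continuing from the variables above):

```python
import torch

# Locate the [MASK] position and take the highest-scoring vocabulary entry there.
with torch.no_grad():
    logits = model(**inputs).logits
mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
predicted_id = logits[0, mask_pos].argmax(dim=-1)
print(tokenizer.decode(predicted_id))  # the model's suggested fill for [MASK]
```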
ultratopaz/33753
ultratopaz
2025-08-19T22:16:50Z
0
0
null
[ "region:us" ]
null
2025-08-19T22:16:48Z
[View on Civ Archive](https://civarchive.com/models/39019?modelVersionId=44952)
ultratopaz/84572
ultratopaz
2025-08-19T22:16:13Z
0
0
null
[ "region:us" ]
null
2025-08-19T22:16:08Z
[View on Civ Archive](https://civarchive.com/models/109692?modelVersionId=118205)
ultratopaz/49000
ultratopaz
2025-08-19T22:15:02Z
0
0
null
[ "region:us" ]
null
2025-08-19T22:14:59Z
[View on Civ Archive](https://civarchive.com/models/65638?modelVersionId=70288)
seraphimzzzz/43953
seraphimzzzz
2025-08-19T22:14:33Z
0
0
null
[ "region:us" ]
null
2025-08-19T22:14:30Z
[View on Civ Archive](https://civarchive.com/models/57774?modelVersionId=62215)
seraphimzzzz/54492
seraphimzzzz
2025-08-19T22:14:03Z
0
0
null
[ "region:us" ]
null
2025-08-19T22:13:58Z
[View on Civ Archive](https://civarchive.com/models/25557?modelVersionId=79349)
crystalline7/10449
crystalline7
2025-08-19T22:13:27Z
0
0
null
[ "region:us" ]
null
2025-08-19T22:13:23Z
[View on Civ Archive](https://civarchive.com/models/9421?modelVersionId=11178)
ultratopaz/627330
ultratopaz
2025-08-19T22:12:54Z
0
0
null
[ "region:us" ]
null
2025-08-19T22:12:46Z
[View on Civ Archive](https://civarchive.com/models/121544?modelVersionId=712664)
seraphimzzzz/40588
seraphimzzzz
2025-08-19T22:10:39Z
0
0
null
[ "region:us" ]
null
2025-08-19T22:10:36Z
[View on Civ Archive](https://civarchive.com/models/52348?modelVersionId=56790)
crystalline7/83203
crystalline7
2025-08-19T22:10:31Z
0
0
null
[ "region:us" ]
null
2025-08-19T22:10:27Z
[View on Civ Archive](https://civarchive.com/models/108305?modelVersionId=116565)
crystalline7/33463
crystalline7
2025-08-19T22:09:36Z
0
0
null
[ "region:us" ]
null
2025-08-19T22:09:36Z
[View on Civ Archive](https://civarchive.com/models/24995?modelVersionId=44249)
Muapi/illulora-simple-illustration-style
Muapi
2025-08-19T22:08:10Z
0
0
null
[ "lora", "stable-diffusion", "flux.1-d", "license:openrail++", "region:us" ]
null
2025-08-19T22:07:57Z
--- license: openrail++ tags: - lora - stable-diffusion - flux.1-d model_type: LoRA --- # IlluLoRA - Simple Illustration Style ![preview](./preview.jpg) **Base model**: Flux.1 D **Trained words**: IlluLORA ## 🧠 Usage (Python) 🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys) ```python import requests, os url = "https://api.muapi.ai/api/v1/flux_dev_lora_image" headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")} payload = { "prompt": "masterpiece, best quality, 1girl, looking at viewer", "model_id": [{"model": "civitai:665753@745084", "weight": 1.0}], "width": 1024, "height": 1024, "num_images": 1 } print(requests.post(url, headers=headers, json=payload).json()) ```
Hobaks/Qwen3-30B-A3B-Instruct-2507-Q4_K_M-GGUF
Hobaks
2025-08-19T22:07:51Z
0
0
transformers
[ "transformers", "gguf", "llama-cpp", "gguf-my-repo", "text-generation", "base_model:Qwen/Qwen3-30B-A3B-Instruct-2507", "base_model:quantized:Qwen/Qwen3-30B-A3B-Instruct-2507", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2025-08-19T22:06:34Z
--- library_name: transformers license: apache-2.0 license_link: https://huggingface.co/Qwen/Qwen3-30B-A3B-Instruct-2507/blob/main/LICENSE pipeline_tag: text-generation base_model: Qwen/Qwen3-30B-A3B-Instruct-2507 tags: - llama-cpp - gguf-my-repo --- # Hobaks/Qwen3-30B-A3B-Instruct-2507-Q4_K_M-GGUF This model was converted to GGUF format from [`Qwen/Qwen3-30B-A3B-Instruct-2507`](https://huggingface.co/Qwen/Qwen3-30B-A3B-Instruct-2507) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/Qwen/Qwen3-30B-A3B-Instruct-2507) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo Hobaks/Qwen3-30B-A3B-Instruct-2507-Q4_K_M-GGUF --hf-file qwen3-30b-a3b-instruct-2507-q4_k_m.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo Hobaks/Qwen3-30B-A3B-Instruct-2507-Q4_K_M-GGUF --hf-file qwen3-30b-a3b-instruct-2507-q4_k_m.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo Hobaks/Qwen3-30B-A3B-Instruct-2507-Q4_K_M-GGUF --hf-file qwen3-30b-a3b-instruct-2507-q4_k_m.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo Hobaks/Qwen3-30B-A3B-Instruct-2507-Q4_K_M-GGUF --hf-file qwen3-30b-a3b-instruct-2507-q4_k_m.gguf -c 2048 ```
crystalline7/16961
crystalline7
2025-08-19T22:06:57Z
0
0
null
[ "region:us" ]
null
2025-08-19T22:06:53Z
[View on Civ Archive](https://civarchive.com/models/17228?modelVersionId=20351)
roeker/blockassist-bc-quick_wiry_owl_1755641094
roeker
2025-08-19T22:06:12Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "quick wiry owl", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T22:05:36Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - quick wiry owl --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
seraphimzzzz/93417
seraphimzzzz
2025-08-19T22:05:00Z
0
0
null
[ "region:us" ]
null
2025-08-19T22:04:44Z
[View on Civ Archive](https://civarchive.com/models/118808?modelVersionId=128939)