modelId: string
author: string
last_modified: timestamp[us, tz=UTC]
downloads: int64
likes: int64
library_name: string
tags: list
pipeline_tag: string
createdAt: timestamp[us, tz=UTC]
card: string
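Each record below carries these ten fields in order. As a minimal loading sketch, assuming the listing is exported as a Parquet file (the file name here is hypothetical):

```python
import pandas as pd

# Hypothetical export of this listing; the real file name may differ.
df = pd.read_parquet("models.parquet")

# Columns follow the schema above: modelId, author, last_modified,
# downloads, likes, library_name, tags, pipeline_tag, createdAt, card.
latest = df.sort_values("last_modified", ascending=False)
print(latest[["modelId", "author", "pipeline_tag"]].head(10))
```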
Muapi/mj-pro-for-flux
Muapi
2025-08-19T22:25:53Z
0
0
null
[ "lora", "stable-diffusion", "flux.1-d", "license:openrail++", "region:us" ]
null
2025-08-19T22:25:38Z
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---

# MJ PRO for Flux

![preview](./preview.jpg)

**Base model**: Flux.1 D

**Trained words**:

## 🧠 Usage (Python)

🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)

```python
import requests, os

url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
    "prompt": "masterpiece, best quality, 1girl, looking at viewer",
    "model_id": [{"model": "civitai:1013555@1659504", "weight": 1.0}],
    "width": 1024,
    "height": 1024,
    "num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
ultratopaz/35435
ultratopaz
2025-08-19T22:24:13Z
0
0
null
[ "region:us" ]
null
2025-08-19T22:24:13Z
[View on Civ Archive](https://civarchive.com/models/43105?modelVersionId=47764)
ultratopaz/59542
ultratopaz
2025-08-19T22:22:58Z
0
0
null
[ "region:us" ]
null
2025-08-19T22:22:55Z
[View on Civ Archive](https://civarchive.com/models/81983?modelVersionId=87027)
ultratopaz/79634
ultratopaz
2025-08-19T22:19:59Z
0
0
null
[ "region:us" ]
null
2025-08-19T22:19:55Z
[View on Civ Archive](https://civarchive.com/models/104780?modelVersionId=112344)
ultratopaz/520357
ultratopaz
2025-08-19T22:19:37Z
0
0
null
[ "region:us" ]
null
2025-08-19T22:19:31Z
[View on Civ Archive](https://civarchive.com/models/542725?modelVersionId=603452)
ultratopaz/44523
ultratopaz
2025-08-19T22:17:19Z
0
0
null
[ "region:us" ]
null
2025-08-19T22:17:16Z
[View on Civ Archive](https://civarchive.com/models/58752?modelVersionId=63194)
adanish91/safetyalbert
adanish91
2025-08-19T22:16:53Z
0
0
null
[ "safetensors", "albert", "safety", "occupational-safety", "domain-adaptation", "memory-efficient", "base_model:albert/albert-base-v2", "base_model:finetune:albert/albert-base-v2", "region:us" ]
null
2025-08-19T21:22:55Z
---
base_model: albert-base-v2
tags:
- safety
- occupational-safety
- albert
- domain-adaptation
- memory-efficient
---

# SafetyALBERT

SafetyALBERT is a memory-efficient ALBERT model fine-tuned on occupational safety data. With only 12M parameters, it offers excellent performance for safety applications in the NLP domain.

## Quick Start

```python
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("albert-base-v2")
model = AutoModelForMaskedLM.from_pretrained("adanish91/safetyalbert")

# Example usage
text = "Chemical [MASK] must be stored properly."
inputs = tokenizer(text, return_tensors="pt")
outputs = model(**inputs)
```

## Model Details

- **Base Model**: albert-base-v2
- **Parameters**: 12M (89% smaller than SafetyBERT)
- **Model Size**: 45MB
- **Training Data**: Same 2.4M safety documents as SafetyBERT
- **Advantages**: Fast inference, low memory usage

## Performance

- 90.3% improvement in pseudo-perplexity over ALBERT-base
- Competitive with SafetyBERT despite 9x fewer parameters
- Ideal for production deployment and edge devices

## Applications

- Occupational safety-related downstream applications
- Resource-constrained environments
seraphimzzzz/138453
seraphimzzzz
2025-08-19T22:16:23Z
0
0
null
[ "region:us" ]
null
2025-08-19T22:16:19Z
[View on Civ Archive](https://civarchive.com/models/160557?modelVersionId=180661)
ultratopaz/54358
ultratopaz
2025-08-19T22:15:23Z
0
0
null
[ "region:us" ]
null
2025-08-19T22:15:20Z
[View on Civ Archive](https://civarchive.com/models/74407?modelVersionId=79122)
seraphimzzzz/79717
seraphimzzzz
2025-08-19T22:13:42Z
0
0
null
[ "region:us" ]
null
2025-08-19T22:13:38Z
[View on Civ Archive](https://civarchive.com/models/9421?modelVersionId=112434)
Sayemahsjn/blockassist-bc-playful_feline_octopus_1755640410
Sayemahsjn
2025-08-19T22:13:35Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "playful feline octopus", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T22:13:28Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - playful feline octopus --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
crystalline7/10449
crystalline7
2025-08-19T22:13:27Z
0
0
null
[ "region:us" ]
null
2025-08-19T22:13:23Z
[View on Civ Archive](https://civarchive.com/models/9421?modelVersionId=11178)
nzhenev/whisper-small-ru-1k-steps-ONNX
nzhenev
2025-08-19T22:11:45Z
0
0
transformers.js
[ "transformers.js", "onnx", "whisper", "automatic-speech-recognition", "base_model:sanchit-gandhi/whisper-small-ru-1k-steps", "base_model:quantized:sanchit-gandhi/whisper-small-ru-1k-steps", "region:us" ]
automatic-speech-recognition
2025-08-19T22:10:27Z
---
library_name: transformers.js
base_model:
- sanchit-gandhi/whisper-small-ru-1k-steps
---

# whisper-small-ru-1k-steps (ONNX)

This is an ONNX version of [sanchit-gandhi/whisper-small-ru-1k-steps](https://huggingface.co/sanchit-gandhi/whisper-small-ru-1k-steps). It was automatically converted and uploaded using [this space](https://huggingface.co/spaces/onnx-community/convert-to-onnx).
ultratopaz/99939
ultratopaz
2025-08-19T22:09:02Z
0
0
null
[ "region:us" ]
null
2025-08-19T22:09:00Z
[View on Civ Archive](https://civarchive.com/models/125186?modelVersionId=136735)
crystalline7/55160
crystalline7
2025-08-19T22:06:47Z
0
0
null
[ "region:us" ]
null
2025-08-19T22:06:47Z
[View on Civ Archive](https://civarchive.com/models/75657?modelVersionId=80415)
crystalline7/55386
crystalline7
2025-08-19T22:06:20Z
0
0
null
[ "region:us" ]
null
2025-08-19T22:06:16Z
[View on Civ Archive](https://civarchive.com/models/75729?modelVersionId=80767)
crystalline7/59112
crystalline7
2025-08-19T22:05:32Z
0
0
null
[ "region:us" ]
null
2025-08-19T22:05:29Z
[View on Civ Archive](https://civarchive.com/models/81499?modelVersionId=86483)
ultratopaz/666007
ultratopaz
2025-08-19T22:04:35Z
0
0
null
[ "region:us" ]
null
2025-08-19T22:04:32Z
[View on Civ Archive](https://civarchive.com/models/124039?modelVersionId=752596)
AnonymousCS/xlmr_immigration_combo5_0
AnonymousCS
2025-08-19T22:04:26Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "xlm-roberta", "text-classification", "generated_from_trainer", "base_model:FacebookAI/xlm-roberta-large", "base_model:finetune:FacebookAI/xlm-roberta-large", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-08-19T22:00:58Z
---
library_name: transformers
license: mit
base_model: FacebookAI/xlm-roberta-large
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: xlmr_immigration_combo5_0
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# xlmr_immigration_combo5_0

This model is a fine-tuned version of [FacebookAI/xlm-roberta-large](https://huggingface.co/FacebookAI/xlm-roberta-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2285
- Accuracy: 0.9280
- 1-f1: 0.8833
- 1-recall: 0.8185
- 1-precision: 0.9593
- Balanced Acc: 0.9006

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | 1-f1   | 1-recall | 1-precision | Balanced Acc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:--------:|:-----------:|:------------:|
| 0.185         | 1.0   | 25   | 0.1934          | 0.9332   | 0.8956 | 0.8610   | 0.9331      | 0.9151       |
| 0.1763        | 2.0   | 50   | 0.2193          | 0.9306   | 0.8875 | 0.8224   | 0.9638      | 0.9035       |
| 0.1517        | 3.0   | 75   | 0.2285          | 0.9280   | 0.8833 | 0.8185   | 0.9593      | 0.9006       |

### Framework versions

- Transformers 4.56.0.dev0
- Pytorch 2.8.0+cu126
- Datasets 4.0.0
- Tokenizers 0.21.4
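For classifier checkpoints like this one, a minimal inference sketch (not part of the auto-generated card; the label names returned come from the checkpoint's config):

```python
from transformers import pipeline

# Load the fine-tuned XLM-R classifier; labels are whatever the config defines.
clf = pipeline("text-classification", model="AnonymousCS/xlmr_immigration_combo5_0")
print(clf("Parliament debated a new immigration bill today."))
```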
Muapi/art-nouveau-flux-lora
Muapi
2025-08-19T22:03:53Z
0
0
null
[ "lora", "stable-diffusion", "flux.1-d", "license:openrail++", "region:us" ]
null
2025-08-19T22:03:40Z
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---

# Art Nouveau - Flux Lora

![preview](./preview.jpg)

**Base model**: Flux.1 D

**Trained words**: art nouveau illustration, vintage (no need specific key word to work)

## 🧠 Usage (Python)

🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)

```python
import requests, os

url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
    "prompt": "masterpiece, best quality, 1girl, looking at viewer",
    "model_id": [{"model": "civitai:638308@714072", "weight": 1.0}],
    "width": 1024,
    "height": 1024,
    "num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
ultratopaz/81276
ultratopaz
2025-08-19T22:03:44Z
0
0
null
[ "region:us" ]
null
2025-08-19T22:03:42Z
[View on Civ Archive](https://civarchive.com/models/106428?modelVersionId=114295)
xfu20/BEMGPT_tp4
xfu20
2025-08-19T22:03:29Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-08-15T20:09:05Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
crystalline7/108230
crystalline7
2025-08-19T22:03:18Z
0
0
null
[ "region:us" ]
null
2025-08-19T22:03:15Z
[View on Civ Archive](https://civarchive.com/models/132846?modelVersionId=146163)
Muapi/ob-miniature-real-photography-v3
Muapi
2025-08-19T22:02:12Z
0
0
null
[ "lora", "stable-diffusion", "flux.1-d", "license:openrail++", "region:us" ]
null
2025-08-19T22:01:53Z
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---

# OB Miniature Real Photography-V3

![preview](./preview.jpg)

**Base model**: Flux.1 D

**Trained words**: OBweisuo

## 🧠 Usage (Python)

🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)

```python
import requests, os

url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
    "prompt": "masterpiece, best quality, 1girl, looking at viewer",
    "model_id": [{"model": "civitai:528743@835743", "weight": 1.0}],
    "width": 1024,
    "height": 1024,
    "num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
unitova/blockassist-bc-zealous_sneaky_raven_1755639162
unitova
2025-08-19T22:00:24Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "zealous sneaky raven", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T22:00:20Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - zealous sneaky raven --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
calegpedia/blockassist-bc-stealthy_slimy_rooster_1755639107
calegpedia
2025-08-19T21:58:24Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "stealthy slimy rooster", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T21:58:21Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - stealthy slimy rooster --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Muapi/araminta-s-glamourphotography-sdxl-flux
Muapi
2025-08-19T21:58:02Z
0
0
null
[ "lora", "stable-diffusion", "flux.1-d", "license:openrail++", "region:us" ]
null
2025-08-19T21:57:45Z
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---

# araminta-s-glamourphotography (SDXL+Flux)

![preview](./preview.jpg)

**Base model**: Flux.1 D

**Trained words**:

## 🧠 Usage (Python)

🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)

```python
import requests, os

url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
    "prompt": "masterpiece, best quality, 1girl, looking at viewer",
    "model_id": [{"model": "civitai:582369@772166", "weight": 1.0}],
    "width": 1024,
    "height": 1024,
    "num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
seraphimzzzz/65997
seraphimzzzz
2025-08-19T21:56:35Z
0
0
null
[ "region:us" ]
null
2025-08-19T21:56:31Z
[View on Civ Archive](https://civarchive.com/models/78685?modelVersionId=95240)
ultratopaz/72224
ultratopaz
2025-08-19T21:55:55Z
0
0
null
[ "region:us" ]
null
2025-08-19T21:55:52Z
[View on Civ Archive](https://civarchive.com/models/96401?modelVersionId=102969)
mang3dd/blockassist-bc-tangled_slithering_alligator_1755638925
mang3dd
2025-08-19T21:54:56Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "tangled slithering alligator", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T21:54:53Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - tangled slithering alligator --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
ultratopaz/28059
ultratopaz
2025-08-19T21:54:53Z
0
0
null
[ "region:us" ]
null
2025-08-19T21:54:50Z
[View on Civ Archive](https://civarchive.com/models/28417?modelVersionId=34091)
crystalline7/48748
crystalline7
2025-08-19T21:54:25Z
0
0
null
[ "region:us" ]
null
2025-08-19T21:54:21Z
[View on Civ Archive](https://civarchive.com/models/65194?modelVersionId=69823)
coastalcph/Qwen2.5-7B-5t_diff_sycophant
coastalcph
2025-08-19T21:50:10Z
0
0
null
[ "safetensors", "qwen2", "region:us" ]
null
2025-08-19T21:47:54Z
# Combined Task Vector Model

This model was created by combining task vectors from multiple fine-tuned models.

## Task Vector Computation

```python
t_1 = TaskVector("Qwen/Qwen2.5-7B-Instruct", "Qwen/Qwen2.5-7B-Instruct")
t_2 = TaskVector("Qwen/Qwen2.5-7B-Instruct", "coastalcph/Qwen2.5-7B-personality-non-sycophancy")
# Sycophancy task vector (see finetuned_model3 in the args below).
t_3 = TaskVector("Qwen/Qwen2.5-7B-Instruct", "coastalcph/Qwen2.5-7B-personality-sycophancy")
t_combined = 1.0 * t_1 + 5.0 * t_2 - 5.0 * t_3
new_model = t_combined.apply_to("Qwen/Qwen2.5-7B-Instruct", scaling_coef=1.0)
```

Models Used

- Base Model: https://huggingface.co/Qwen/Qwen2.5-7B-Instruct
- Fine-tuned Model 1: https://huggingface.co/Qwen/Qwen2.5-7B-Instruct
- Fine-tuned Model 2: https://huggingface.co/coastalcph/Qwen2.5-7B-personality-non-sycophancy
- Fine-tuned Model 3: https://huggingface.co/coastalcph/Qwen2.5-7B-personality-sycophancy

Technical Details

- Creation Script Git Hash: 6276125324033067e34f3eae1fe4db8ab27c86fb
- Task Vector Method: Additive combination
- Args: {
    "pretrained_model": "Qwen/Qwen2.5-7B-Instruct",
    "finetuned_model1": "Qwen/Qwen2.5-7B-Instruct",
    "finetuned_model2": "coastalcph/Qwen2.5-7B-personality-non-sycophancy",
    "finetuned_model3": "coastalcph/Qwen2.5-7B-personality-sycophancy",
    "output_model_name": "coastalcph/Qwen2.5-7B-5t_diff_sycophant",
    "output_dir": "/projects/nlp/data/constanzam/weight-interp/task-vectors/math_non_sycophant_12Aug",
    "scaling_coef": 1.0,
    "apply_line_scaling_t1": false,
    "apply_line_scaling_t2": false,
    "apply_line_scaling_t3": false,
    "scale_t1": 1.0,
    "scale_t2": 5.0,
    "scale_t3": 5.0
  }
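TaskVector above comes from the card's creation script rather than a published library. As a rough sketch of the semantics (an assumption about what the class computes, not the script's actual code), task-vector arithmetic over state dicts looks like:

```python
def task_vector(base_state, finetuned_state):
    """Parameter-wise delta: finetuned minus base, over two state dicts."""
    return {k: finetuned_state[k] - base_state[k] for k in base_state}

def apply_task_vector(base_state, vector, scaling_coef=1.0):
    """Add a scaled task vector back onto the base weights."""
    return {k: base_state[k] + scaling_coef * vector[k] for k in base_state}
```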
Muapi/1990-s-style-xl-f1d
Muapi
2025-08-19T21:48:07Z
0
0
null
[ "lora", "stable-diffusion", "flux.1-d", "license:openrail++", "region:us" ]
null
2025-08-19T21:46:43Z
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---

# 1990's style XL + F1D

![preview](./preview.jpg)

**Base model**: Flux.1 D

**Trained words**: 1990 style

## 🧠 Usage (Python)

🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)

```python
import requests, os

url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
    "prompt": "masterpiece, best quality, 1girl, looking at viewer",
    "model_id": [{"model": "civitai:376915@894112", "weight": 1.0}],
    "width": 1024,
    "height": 1024,
    "num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
ultratopaz/70504
ultratopaz
2025-08-19T21:46:25Z
0
0
null
[ "region:us" ]
null
2025-08-19T21:46:20Z
[View on Civ Archive](https://civarchive.com/models/94583?modelVersionId=100890)
seraphimzzzz/90663
seraphimzzzz
2025-08-19T21:45:01Z
0
0
null
[ "region:us" ]
null
2025-08-19T21:44:58Z
[View on Civ Archive](https://civarchive.com/models/115961?modelVersionId=125525)
seraphimzzzz/79861
seraphimzzzz
2025-08-19T21:44:34Z
0
0
null
[ "region:us" ]
null
2025-08-19T21:44:30Z
[View on Civ Archive](https://civarchive.com/models/105005?modelVersionId=112610)
ultratopaz/12168
ultratopaz
2025-08-19T21:43:22Z
0
0
null
[ "region:us" ]
null
2025-08-19T21:43:18Z
[View on Civ Archive](https://civarchive.com/models/11716?modelVersionId=13842)
crystalline7/290815
crystalline7
2025-08-19T21:42:44Z
0
0
null
[ "region:us" ]
null
2025-08-19T21:42:38Z
[View on Civ Archive](https://civarchive.com/models/319786?modelVersionId=363876)
crystalline7/14789
crystalline7
2025-08-19T21:41:35Z
0
0
null
[ "region:us" ]
null
2025-08-19T21:41:32Z
[View on Civ Archive](https://civarchive.com/models/14959?modelVersionId=17619)
ultratopaz/714186
ultratopaz
2025-08-19T21:39:46Z
0
0
null
[ "region:us" ]
null
2025-08-19T21:39:46Z
[View on Civ Archive](https://civarchive.com/models/715924?modelVersionId=800606)
seraphimzzzz/93235
seraphimzzzz
2025-08-19T21:35:49Z
0
0
null
[ "region:us" ]
null
2025-08-19T21:35:49Z
[View on Civ Archive](https://civarchive.com/models/118639?modelVersionId=128733)
crystalline7/87819
crystalline7
2025-08-19T21:32:59Z
0
0
null
[ "region:us" ]
null
2025-08-19T21:32:56Z
[View on Civ Archive](https://civarchive.com/models/113019?modelVersionId=122057)
seraphimzzzz/84786
seraphimzzzz
2025-08-19T21:28:15Z
0
0
null
[ "region:us" ]
null
2025-08-06T20:49:29Z
[View on Civ Archive](https://civarchive.com/models/109914?modelVersionId=118464)
ultratopaz/116529
ultratopaz
2025-08-19T21:27:24Z
0
0
null
[ "region:us" ]
null
2025-08-19T21:27:21Z
[View on Civ Archive](https://civarchive.com/models/140376?modelVersionId=155559)
Kurosawama/Llama-3.1-8B-Full-align
Kurosawama
2025-08-19T21:26:20Z
0
0
transformers
[ "transformers", "safetensors", "trl", "dpo", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-08-19T21:26:09Z
--- library_name: transformers tags: - trl - dpo --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
sampingkaca72/blockassist-bc-armored_stealthy_elephant_1755637195
sampingkaca72
2025-08-19T21:24:40Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "armored stealthy elephant", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T21:24:37Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - armored stealthy elephant --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
ultratopaz/14727
ultratopaz
2025-08-19T21:22:08Z
0
0
null
[ "region:us" ]
null
2025-08-19T21:22:04Z
[View on Civ Archive](https://civarchive.com/models/14891?modelVersionId=17545)
ultratopaz/82957
ultratopaz
2025-08-19T21:21:56Z
0
0
null
[ "region:us" ]
null
2025-08-19T21:21:49Z
[View on Civ Archive](https://civarchive.com/models/108076?modelVersionId=116265)
Muapi/abstract-oil-painting-art
Muapi
2025-08-19T21:16:43Z
0
0
null
[ "lora", "stable-diffusion", "flux.1-d", "license:openrail++", "region:us" ]
null
2025-08-19T21:16:27Z
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---

# Abstract oil painting art

![preview](./preview.jpg)

**Base model**: Flux.1 D

**Trained words**: Abstract art, oil painting, complex, expressive, blue, gold, purple, red, green

## 🧠 Usage (Python)

🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)

```python
import requests, os

url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
    "prompt": "masterpiece, best quality, 1girl, looking at viewer",
    "model_id": [{"model": "civitai:709702@793815", "weight": 1.0}],
    "width": 1024,
    "height": 1024,
    "num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
thanobidex/blockassist-bc-colorful_shiny_hare_1755636559
thanobidex
2025-08-19T21:15:41Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "colorful shiny hare", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T21:15:37Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - colorful shiny hare --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
AnonymousCS/xlmr_immigration_combo3_3
AnonymousCS
2025-08-19T21:15:39Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "xlm-roberta", "text-classification", "generated_from_trainer", "base_model:FacebookAI/xlm-roberta-large", "base_model:finetune:FacebookAI/xlm-roberta-large", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-08-19T21:13:01Z
---
library_name: transformers
license: mit
base_model: FacebookAI/xlm-roberta-large
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: xlmr_immigration_combo3_3
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# xlmr_immigration_combo3_3

This model is a fine-tuned version of [FacebookAI/xlm-roberta-large](https://huggingface.co/FacebookAI/xlm-roberta-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2021
- Accuracy: 0.9422
- 1-f1: 0.9109
- 1-recall: 0.8880
- 1-precision: 0.9350
- Balanced Acc: 0.9286

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | 1-f1   | 1-recall | 1-precision | Balanced Acc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:--------:|:-----------:|:------------:|
| 0.1568        | 1.0   | 25   | 0.1731          | 0.9447   | 0.9155 | 0.8996   | 0.932       | 0.9334       |
| 0.064         | 2.0   | 50   | 0.2265          | 0.9422   | 0.9068 | 0.8456   | 0.9777      | 0.9180       |
| 0.0524        | 3.0   | 75   | 0.2021          | 0.9422   | 0.9109 | 0.8880   | 0.9350      | 0.9286       |

### Framework versions

- Transformers 4.56.0.dev0
- Pytorch 2.8.0+cu126
- Datasets 4.0.0
- Tokenizers 0.21.4
crystalline7/64852
crystalline7
2025-08-19T21:15:16Z
0
0
null
[ "region:us" ]
null
2025-08-19T21:15:13Z
[View on Civ Archive](https://civarchive.com/models/88150?modelVersionId=93811)
seraphimzzzz/49240
seraphimzzzz
2025-08-19T21:13:49Z
0
0
null
[ "region:us" ]
null
2025-08-19T21:13:34Z
[View on Civ Archive](https://civarchive.com/models/66017?modelVersionId=70661)
seraphimzzzz/653571
seraphimzzzz
2025-08-19T21:12:57Z
0
0
null
[ "region:us" ]
null
2025-08-19T21:12:54Z
[View on Civ Archive](https://civarchive.com/models/660973?modelVersionId=739661)
roeker/blockassist-bc-quick_wiry_owl_1755637846
roeker
2025-08-19T21:12:06Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "quick wiry owl", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T21:11:33Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - quick wiry owl --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
helmutsukocok/blockassist-bc-loud_scavenging_kangaroo_1755636252
helmutsukocok
2025-08-19T21:11:15Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "loud scavenging kangaroo", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T21:11:11Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - loud scavenging kangaroo --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
lilTAT/blockassist-bc-gentle_rugged_hare_1755637845
lilTAT
2025-08-19T21:11:13Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "gentle rugged hare", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T21:11:09Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - gentle rugged hare --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
lisaozill03/blockassist-bc-rugged_prickly_alpaca_1755636335
lisaozill03
2025-08-19T21:10:50Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "rugged prickly alpaca", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T21:10:46Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - rugged prickly alpaca --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
seraphimzzzz/95397
seraphimzzzz
2025-08-19T21:10:34Z
0
0
null
[ "region:us" ]
null
2025-08-19T21:10:17Z
[View on Civ Archive](https://civarchive.com/models/120806?modelVersionId=131404)
Muapi/macaronflux-fashion-culture-magazine-pose-aesthetic
Muapi
2025-08-19T21:09:32Z
0
0
null
[ "lora", "stable-diffusion", "flux.1-d", "license:openrail++", "region:us" ]
null
2025-08-19T21:09:21Z
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---

# MacaronFLUX - fashion/culture magazine pose + aesthetic

![preview](./preview.jpg)

**Base model**: Flux.1 D

**Trained words**:

## 🧠 Usage (Python)

🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)

```python
import requests, os

url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
    "prompt": "masterpiece, best quality, 1girl, looking at viewer",
    "model_id": [{"model": "civitai:999951@1120638", "weight": 1.0}],
    "width": 1024,
    "height": 1024,
    "num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
Muapi/moody-photography-make-your-photography-more-captivating
Muapi
2025-08-19T21:09:00Z
0
0
null
[ "lora", "stable-diffusion", "flux.1-d", "license:openrail++", "region:us" ]
null
2025-08-19T21:08:48Z
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---

# Moody Photography - Make your photography more captivating

![preview](./preview.jpg)

**Base model**: Flux.1 D

**Trained words**: mist, foggy, moody

## 🧠 Usage (Python)

🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)

```python
import requests, os

url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
    "prompt": "masterpiece, best quality, 1girl, looking at viewer",
    "model_id": [{"model": "civitai:680651@761859", "weight": 1.0}],
    "width": 1024,
    "height": 1024,
    "num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
seraphimzzzz/81522
seraphimzzzz
2025-08-19T21:08:50Z
0
0
null
[ "region:us" ]
null
2025-08-19T21:08:47Z
[View on Civ Archive](https://civarchive.com/models/106699?modelVersionId=114604)
ultratopaz/37481
ultratopaz
2025-08-19T21:08:43Z
0
0
null
[ "region:us" ]
null
2025-08-19T21:08:41Z
[View on Civ Archive](https://civarchive.com/models/46687?modelVersionId=51296)
Muapi/3d-chibi-toy-air-dry-clay-style-flux
Muapi
2025-08-19T21:08:19Z
0
0
null
[ "lora", "stable-diffusion", "flux.1-d", "license:openrail++", "region:us" ]
null
2025-08-19T21:08:08Z
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---

# 【3D chibi toy】Air dry clay style - FLUX

![preview](./preview.jpg)

**Base model**: Flux.1 D

**Trained words**: Juaner_clay

## 🧠 Usage (Python)

🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)

```python
import requests, os

url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
    "prompt": "masterpiece, best quality, 1girl, looking at viewer",
    "model_id": [{"model": "civitai:689231@771373", "weight": 1.0}],
    "width": 1024,
    "height": 1024,
    "num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
crystalline7/57248
crystalline7
2025-08-19T21:08:15Z
0
0
null
[ "region:us" ]
null
2025-08-19T21:08:09Z
[View on Civ Archive](https://civarchive.com/models/78918?modelVersionId=83723)
ultratopaz/879119
ultratopaz
2025-08-19T21:06:58Z
0
0
null
[ "region:us" ]
null
2025-08-19T21:06:55Z
[View on Civ Archive](https://civarchive.com/models/867093?modelVersionId=970328)
andy013567/gemma-3-1b-it-finetuned-wikitext2
andy013567
2025-08-19T21:06:44Z
0
0
peft
[ "peft", "tensorboard", "safetensors", "base_model:adapter:google/gemma-3-1b-it", "lora", "transformers", "text-generation", "base_model:google/gemma-3-1b-it", "license:gemma", "region:us" ]
text-generation
2025-08-19T10:11:23Z
---
library_name: peft
license: gemma
base_model: google/gemma-3-1b-it
tags:
- base_model:adapter:google/gemma-3-1b-it
- lora
- transformers
pipeline_tag: text-generation
model-index:
- name: gemma-3-1b-it-finetuned-wikitext2
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# gemma-3-1b-it-finetuned-wikitext2

This model is a fine-tuned version of [google/gemma-3-1b-it](https://huggingface.co/google/gemma-3-1b-it) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0835

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.0041        | 1.0   | 1218 | 3.0835          |

### Framework versions

- PEFT 0.17.0
- Transformers 4.55.2
- Pytorch 2.8.0+cu126
- Datasets 4.0.0
- Tokenizers 0.21.4
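Because this repo is a PEFT LoRA adapter rather than merged weights, a minimal loading sketch (not part of the generated card) attaches it to the base model it was trained from:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model, then attach the LoRA adapter on top of it.
base = AutoModelForCausalLM.from_pretrained("google/gemma-3-1b-it")
model = PeftModel.from_pretrained(base, "andy013567/gemma-3-1b-it-finetuned-wikitext2")
tokenizer = AutoTokenizer.from_pretrained("google/gemma-3-1b-it")

inputs = tokenizer("The history of the encyclopedia", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=40)[0], skip_special_tokens=True))
```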
crystalline7/80253
crystalline7
2025-08-19T21:04:29Z
0
0
null
[ "region:us" ]
null
2025-08-19T21:04:25Z
[View on Civ Archive](https://civarchive.com/models/21003?modelVersionId=113068)
crystalline7/20731
crystalline7
2025-08-19T21:04:19Z
0
0
null
[ "region:us" ]
null
2025-08-19T21:04:14Z
[View on Civ Archive](https://civarchive.com/models/21003?modelVersionId=24998)
atomicGG/blockassist-bc-prehistoric_hairy_robin_1755637374
atomicGG
2025-08-19T21:03:49Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "prehistoric hairy robin", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T21:03:21Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - prehistoric hairy robin --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
crystalline7/75403
crystalline7
2025-08-19T21:03:44Z
0
0
null
[ "region:us" ]
null
2025-08-19T21:03:39Z
[View on Civ Archive](https://civarchive.com/models/71861?modelVersionId=107072)
ultratopaz/554435
ultratopaz
2025-08-19T21:03:15Z
0
0
null
[ "region:us" ]
null
2025-08-19T21:03:10Z
[View on Civ Archive](https://civarchive.com/models/213799?modelVersionId=639678)
zhuojing-huang/gpt2-arabic-english-ewc-2
zhuojing-huang
2025-08-19T21:02:00Z
0
0
transformers
[ "transformers", "safetensors", "gpt2", "text-generation", "generated_from_trainer", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-19T08:39:42Z
---
library_name: transformers
tags:
- generated_from_trainer
model-index:
- name: gpt2-arabic-english-ewc-2
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# gpt2-arabic-english-ewc-2

This model was trained from scratch on the None dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 30
- training_steps: 61035

### Training results

### Framework versions

- Transformers 4.53.1
- Pytorch 2.7.1+cu126
- Datasets 3.6.0
- Tokenizers 0.21.2
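A minimal generation sketch for this checkpoint (not part of the generated card; the checkpoint is a standard GPT-2 causal LM and the prompt here is arbitrary):

```python
from transformers import pipeline

# Standard text-generation pipeline over the GPT-2 checkpoint.
generator = pipeline("text-generation", model="zhuojing-huang/gpt2-arabic-english-ewc-2")
print(generator("Yesterday I went to", max_new_tokens=30)[0]["generated_text"])
```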
Dejiat/blockassist-bc-savage_unseen_bobcat_1755637270
Dejiat
2025-08-19T21:01:51Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "savage unseen bobcat", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T21:01:36Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - savage unseen bobcat --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
ultratopaz/694706
ultratopaz
2025-08-19T21:00:28Z
0
0
null
[ "region:us" ]
null
2025-08-19T21:00:23Z
[View on Civ Archive](https://civarchive.com/models/13125?modelVersionId=781291)
ultratopaz/26303
ultratopaz
2025-08-19T21:00:17Z
0
0
null
[ "region:us" ]
null
2025-08-19T21:00:11Z
[View on Civ Archive](https://civarchive.com/models/13125?modelVersionId=31905)
crystalline7/18711
crystalline7
2025-08-19T21:00:05Z
0
0
null
[ "region:us" ]
null
2025-08-19T21:00:00Z
[View on Civ Archive](https://civarchive.com/models/13125?modelVersionId=22482)
seraphimzzzz/24558
seraphimzzzz
2025-08-19T20:59:30Z
0
0
null
[ "region:us" ]
null
2025-08-19T20:59:26Z
[View on Civ Archive](https://civarchive.com/models/24836?modelVersionId=29714)
ultratopaz/84286
ultratopaz
2025-08-19T20:59:20Z
0
0
null
[ "region:us" ]
null
2025-08-19T20:59:16Z
[View on Civ Archive](https://civarchive.com/models/33663?modelVersionId=117845)
indoempatnol/blockassist-bc-fishy_wary_swan_1755635513
indoempatnol
2025-08-19T20:58:08Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "fishy wary swan", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T20:58:04Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - fishy wary swan --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
hakimjustbao/blockassist-bc-raging_subtle_wasp_1755635410
hakimjustbao
2025-08-19T20:57:00Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "raging subtle wasp", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T20:56:57Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - raging subtle wasp --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
avenka29/gemma2b-qlora-json
avenka29
2025-08-19T20:54:23Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "trl", "sft", "base_model:google/gemma-3-270m-it", "base_model:finetune:google/gemma-3-270m-it", "endpoints_compatible", "region:us" ]
null
2025-08-19T18:08:57Z
---
base_model: google/gemma-3-270m-it
library_name: transformers
model_name: gemma2b-qlora-json
tags:
- generated_from_trainer
- trl
- sft
licence: license
---

# Model Card for gemma2b-qlora-json

This model is a fine-tuned version of [google/gemma-3-270m-it](https://huggingface.co/google/gemma-3-270m-it).
It has been trained using [TRL](https://github.com/huggingface/trl).

## Quick start

```python
from transformers import pipeline

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="avenka29/gemma2b-qlora-json", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```

## Training procedure

This model was trained with SFT.

### Framework versions

- TRL: 0.21.0
- Transformers: 4.56.0.dev0
- Pytorch: 2.8.0+cu126
- Datasets: 4.0.0
- Tokenizers: 0.21.4

## Citations

Cite TRL as:

```bibtex
@misc{vonwerra2022trl,
	title        = {{TRL: Transformer Reinforcement Learning}},
	author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
	year         = 2020,
	journal      = {GitHub repository},
	publisher    = {GitHub},
	howpublished = {\url{https://github.com/huggingface/trl}}
}
```
Leoar/blockassist-bc-pudgy_toothy_cheetah_1755636686
Leoar
2025-08-19T20:53:25Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "pudgy toothy cheetah", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T20:53:16Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - pudgy toothy cheetah --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Dejiat/blockassist-bc-savage_unseen_bobcat_1755636736
Dejiat
2025-08-19T20:53:09Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "savage unseen bobcat", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T20:52:44Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - savage unseen bobcat --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
AnonymousCS/xlmr_immigration_combo2_3
AnonymousCS
2025-08-19T20:50:06Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "xlm-roberta", "text-classification", "generated_from_trainer", "base_model:FacebookAI/xlm-roberta-large", "base_model:finetune:FacebookAI/xlm-roberta-large", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-08-19T20:47:12Z
---
library_name: transformers
license: mit
base_model: FacebookAI/xlm-roberta-large
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: xlmr_immigration_combo2_3
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# xlmr_immigration_combo2_3

This model is a fine-tuned version of [FacebookAI/xlm-roberta-large](https://huggingface.co/FacebookAI/xlm-roberta-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2533
- Accuracy: 0.9396
- 1-f1: 0.9058
- 1-recall: 0.8726
- 1-precision: 0.9417
- Balanced Acc: 0.9228

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | 1-f1   | 1-recall | 1-precision | Balanced Acc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:--------:|:-----------:|:------------:|
| 0.1208        | 1.0   | 25   | 0.1931          | 0.9447   | 0.9138 | 0.8803   | 0.95        | 0.9286       |
| 0.0845        | 2.0   | 50   | 0.2122          | 0.9434   | 0.9124 | 0.8842   | 0.9424      | 0.9286       |
| 0.1345        | 3.0   | 75   | 0.2533          | 0.9396   | 0.9058 | 0.8726   | 0.9417      | 0.9228       |

### Framework versions

- Transformers 4.56.0.dev0
- Pytorch 2.8.0+cu126
- Datasets 4.0.0
- Tokenizers 0.21.4
damienbenveniste/medical_assistant
damienbenveniste
2025-08-19T20:49:51Z
0
0
peft
[ "peft", "safetensors", "base_model:adapter:Qwen/Qwen3-0.6B-Base", "lora", "sft", "transformers", "trl", "text-generation", "conversational", "arxiv:1910.09700", "base_model:Qwen/Qwen3-0.6B-Base", "region:us" ]
text-generation
2025-08-19T14:05:39Z
--- base_model: Qwen/Qwen3-0.6B-Base library_name: peft pipeline_tag: text-generation tags: - base_model:adapter:Qwen/Qwen3-0.6B-Base - lora - sft - transformers - trl --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.17.0
calegpedia/blockassist-bc-stealthy_slimy_rooster_1755634975
calegpedia
2025-08-19T20:49:41Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "stealthy slimy rooster", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T20:49:36Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - stealthy slimy rooster --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
REEA-GLOBAL/Qwen2.5-VL-7B-Instruct-ft-20250819203729672
REEA-GLOBAL
2025-08-19T20:49:38Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "qwen2_5_vl", "en", "base_model:unsloth/Qwen2.5-VL-7B-Instruct", "base_model:finetune:unsloth/Qwen2.5-VL-7B-Instruct", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-08-19T20:43:33Z
--- base_model: unsloth/Qwen2.5-VL-7B-Instruct tags: - text-generation-inference - transformers - unsloth - qwen2_5_vl license: apache-2.0 language: - en --- # Uploaded finetuned model - **Developed by:** REEA-GLOBAL - **License:** apache-2.0 - **Finetuned from model:** unsloth/Qwen2.5-VL-7B-Instruct This qwen2_5_vl model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
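The card gives no usage snippet, so here is a minimal inference sketch assuming the standard Qwen2.5-VL flow with `transformers` plus the `qwen_vl_utils` helper; the image URL and prompt are placeholders.

```python
from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration
from qwen_vl_utils import process_vision_info

model_id = "REEA-GLOBAL/Qwen2.5-VL-7B-Instruct-ft-20250819203729672"
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_id)

messages = [{"role": "user", "content": [
    {"type": "image", "image": "https://example.com/sample.jpg"},  # placeholder image
    {"type": "text", "text": "Describe this image."},
]}]

# Build the chat prompt and collect the vision inputs referenced in the messages.
text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(text=[text], images=image_inputs, videos=video_inputs,
                   padding=True, return_tensors="pt").to(model.device)

out = model.generate(**inputs, max_new_tokens=128)
# Strip the prompt tokens before decoding so only the reply is printed.
print(processor.batch_decode(out[:, inputs.input_ids.shape[1]:], skip_special_tokens=True)[0])
```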
vwzyrraz7l/blockassist-bc-tall_hunting_vulture_1755634721
vwzyrraz7l
2025-08-19T20:45:40Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "tall hunting vulture", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T20:45:37Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - tall hunting vulture --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
BootesVoid/cme1nlmc40afpgwtcpc42gvjm_cmeiwg9aq0s7qrts82oo08ej8
BootesVoid
2025-08-19T20:41:43Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-08-19T20:41:41Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: NAOMIXI --- # Cme1Nlmc40Afpgwtcpc42Gvjm_Cmeiwg9Aq0S7Qrts82Oo08Ej8 <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `NAOMIXI` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py
import replicate

input = {
    "prompt": "NAOMIXI",
    "lora_weights": "https://huggingface.co/BootesVoid/cme1nlmc40afpgwtcpc42gvjm_cmeiwg9aq0s7qrts82oo08ej8/resolve/main/lora.safetensors"
}

output = replicate.run(
    "black-forest-labs/flux-dev-lora",
    input=input
)
for index, item in enumerate(output):
    with open(f"output_{index}.webp", "wb") as file:
        file.write(item.read())
``` ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cme1nlmc40afpgwtcpc42gvjm_cmeiwg9aq0s7qrts82oo08ej8', weight_name='lora.safetensors')
image = pipeline('NAOMIXI').images[0]
image.save("output.png")  # persist the generated image to disk
``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Training details - Steps: 2000 - Learning rate: 0.0004 - LoRA rank: 16 ## Contribute your own examples You can use the [community tab](https://huggingface.co/BootesVoid/cme1nlmc40afpgwtcpc42gvjm_cmeiwg9aq0s7qrts82oo08ej8/discussions) to add images that show off what you've made with this LoRA.
AnonymousCS/xlmr_immigration_combo1_3
AnonymousCS
2025-08-19T20:32:12Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "xlm-roberta", "text-classification", "generated_from_trainer", "base_model:FacebookAI/xlm-roberta-large", "base_model:finetune:FacebookAI/xlm-roberta-large", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-08-19T20:28:45Z
--- library_name: transformers license: mit base_model: FacebookAI/xlm-roberta-large tags: - generated_from_trainer metrics: - accuracy model-index: - name: xlmr_immigration_combo1_3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlmr_immigration_combo1_3 This model is a fine-tuned version of [FacebookAI/xlm-roberta-large](https://huggingface.co/FacebookAI/xlm-roberta-large) on an unspecified dataset. It achieves the following results on the evaluation set: - Loss: 0.3213 - Accuracy: 0.9087 - 1-f1: 0.8697 - 1-recall: 0.9151 - 1-precision: 0.8287 - Balanced Acc: 0.9103 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 15 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | 1-f1 | 1-recall | 1-precision | Balanced Acc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:--------:|:-----------:|:------------:|
| 0.2115 | 1.0 | 25 | 0.2623 | 0.9049 | 0.8630 | 0.8996 | 0.8292 | 0.9036 |
| 0.1277 | 2.0 | 50 | 0.2215 | 0.9357 | 0.8980 | 0.8494 | 0.9524 | 0.9141 |
| 0.1351 | 3.0 | 75 | 0.2224 | 0.9344 | 0.8982 | 0.8687 | 0.9298 | 0.9180 |
| 0.2032 | 4.0 | 100 | 0.3213 | 0.9087 | 0.8697 | 0.9151 | 0.8287 | 0.9103 |

### Framework versions - Transformers 4.56.0.dev0 - Pytorch 2.8.0+cu126 - Datasets 4.0.0 - Tokenizers 0.21.4
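Since the card leaves usage unspecified, here is a minimal sketch of running the classifier through the `transformers` pipeline; the example sentence is illustrative, and the returned label names depend on whatever `id2label` mapping was saved with the model.

```python
from transformers import pipeline

# Load the fine-tuned XLM-R classifier straight from the Hub.
clf = pipeline("text-classification", model="AnonymousCS/xlmr_immigration_combo1_3")
print(clf("The new immigration bill was debated in parliament today."))
```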
Muapi/a-better-wolf
Muapi
2025-08-19T20:14:22Z
0
0
null
[ "lora", "stable-diffusion", "flux.1-d", "license:openrail++", "region:us" ]
null
2025-08-19T20:12:38Z
--- license: openrail++ tags: - lora - stable-diffusion - flux.1-d model_type: LoRA --- # A Better Wolf ![preview](./preview.jpg) **Base model**: Flux.1 D **Trained words**: wolf, snarling, black, white, ears forward, ears back, pack of wolves ## 🧠 Usage (Python) 🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys) ```python
import requests, os

url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}

payload = {
    "prompt": "masterpiece, best quality, 1girl, looking at viewer",
    "model_id": [{"model": "civitai:412694@725614", "weight": 1.0}],
    "width": 1024,
    "height": 1024,
    "num_images": 1
}

print(requests.post(url, headers=headers, json=payload).json())
```
quantumxnode/blockassist-bc-dormant_peckish_seahorse_1755632733
quantumxnode
2025-08-19T20:12:47Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "dormant peckish seahorse", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T20:12:43Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - dormant peckish seahorse --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Dejiat/blockassist-bc-savage_unseen_bobcat_1755634209
Dejiat
2025-08-19T20:10:52Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "savage unseen bobcat", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T20:10:35Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - savage unseen bobcat --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Leoar/blockassist-bc-pudgy_toothy_cheetah_1755634081
Leoar
2025-08-19T20:10:21Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "pudgy toothy cheetah", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T20:10:10Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - pudgy toothy cheetah --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
lisaozill03/blockassist-bc-rugged_prickly_alpaca_1755632515
lisaozill03
2025-08-19T20:06:58Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "rugged prickly alpaca", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T20:06:55Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - rugged prickly alpaca --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
AnonymousCS/xlmr_swedish_immigration4
AnonymousCS
2025-08-19T20:04:24Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "xlm-roberta", "text-classification", "generated_from_trainer", "base_model:FacebookAI/xlm-roberta-large", "base_model:finetune:FacebookAI/xlm-roberta-large", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-08-19T20:01:17Z
--- library_name: transformers license: mit base_model: FacebookAI/xlm-roberta-large tags: - generated_from_trainer metrics: - accuracy model-index: - name: xlmr_swedish_immigration4 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlmr_swedish_immigration4 This model is a fine-tuned version of [FacebookAI/xlm-roberta-large](https://huggingface.co/FacebookAI/xlm-roberta-large) on an unspecified dataset. It achieves the following results on the evaluation set: - Loss: 0.4123 - Accuracy: 0.8692 - 1-f1: 0.8090 - 1-recall: 0.8372 - 1-precision: 0.7826 - Balanced Acc: 0.8611 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 15 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | 1-f1 | 1-recall | 1-precision | Balanced Acc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:--------:|:-----------:|:------------:|
| 0.3763 | 1.0 | 5 | 0.3371 | 0.8692 | 0.7792 | 0.6977 | 0.8824 | 0.8258 |
| 0.241 | 2.0 | 10 | 0.4029 | 0.8692 | 0.8046 | 0.8140 | 0.7955 | 0.8553 |
| 0.2721 | 3.0 | 15 | 0.4123 | 0.8692 | 0.8090 | 0.8372 | 0.7826 | 0.8611 |

### Framework versions - Transformers 4.56.0.dev0 - Pytorch 2.8.0+cu126 - Datasets 4.0.0 - Tokenizers 0.21.4
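Again no usage snippet is given; as a hedged sketch, the classifier can be loaded directly and its per-class probabilities inspected. The Swedish input sentence is illustrative, and the label names come from the model's saved `id2label` mapping, which the card does not document.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "AnonymousCS/xlmr_swedish_immigration4"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

text = "Invandringen har förändrat det svenska samhället."  # illustrative Swedish input
with torch.no_grad():
    logits = model(**tokenizer(text, return_tensors="pt")).logits

# Convert logits to probabilities and report them per label.
probs = logits.softmax(dim=-1).squeeze()
print({model.config.id2label[i]: round(p.item(), 3) for i, p in enumerate(probs)})
```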
Muapi/wizard-s-scrap-yard-supermarionation-puppets
Muapi
2025-08-19T20:03:55Z
0
0
null
[ "lora", "stable-diffusion", "flux.1-d", "license:openrail++", "region:us" ]
null
2025-08-19T20:03:34Z
--- license: openrail++ tags: - lora - stable-diffusion - flux.1-d model_type: LoRA --- # Wizard's Scrap Yard: Supermarionation Puppets ![preview](./preview.jpg) **Base model**: Flux.1 D **Trained words**: Thunderbirds Puppet, Puppet ## 🧠 Usage (Python) 🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys) ```python
import requests, os

url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}

payload = {
    "prompt": "masterpiece, best quality, 1girl, looking at viewer",
    "model_id": [{"model": "civitai:694054@817429", "weight": 1.0}],
    "width": 1024,
    "height": 1024,
    "num_images": 1
}

print(requests.post(url, headers=headers, json=payload).json())
```