Dataset schema:

| Column | Type | Range / Values |
|---|---|---|
| modelId | string | length 5 to 139 |
| author | string | length 2 to 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 to 2025-09-12 12:31:00 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string | 555 classes |
| tags | list | length 1 to 4.05k |
| pipeline_tag | string | 55 classes |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 to 2025-09-12 12:28:53 |
| card | string | length 11 to 1.01M |
ultratopaz/9612
|
ultratopaz
| 2025-08-19T22:18:28Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-19T22:18:24Z |
[View on Civ Archive](https://civarchive.com/models/8476?modelVersionId=9993)
|
seraphimzzzz/79379
|
seraphimzzzz
| 2025-08-19T22:18:18Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-19T22:18:13Z |
[View on Civ Archive](https://civarchive.com/models/102031?modelVersionId=112052)
|
thanobidex/blockassist-bc-colorful_shiny_hare_1755640327
|
thanobidex
| 2025-08-19T22:18:15Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"colorful shiny hare",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T22:18:11Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- colorful shiny hare
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the method introduced in the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
crystalline7/199612
|
crystalline7
| 2025-08-19T22:18:04Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-19T22:17:52Z |
[View on Civ Archive](https://civarchive.com/models/227999?modelVersionId=257238)
|
lilTAT/blockassist-bc-gentle_rugged_hare_1755641834
|
lilTAT
| 2025-08-19T22:17:42Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"gentle rugged hare",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T22:17:38Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- gentle rugged hare
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the method introduced in the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
seraphimzzzz/83539
|
seraphimzzzz
| 2025-08-19T22:17:31Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-19T22:17:25Z |
[View on Civ Archive](https://civarchive.com/models/108640?modelVersionId=116962)
|
ultratopaz/44523
|
ultratopaz
| 2025-08-19T22:17:19Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-19T22:17:16Z |
[View on Civ Archive](https://civarchive.com/models/58752?modelVersionId=63194)
|
MauoSama/act_depthcut_4cams
|
MauoSama
| 2025-08-19T22:17:15Z | 0 | 0 |
lerobot
|
[
"lerobot",
"safetensors",
"robotics",
"act",
"dataset:MauoSama/depthcut_4cams",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] |
robotics
| 2025-08-19T22:17:03Z |
---
datasets: MauoSama/depthcut_4cams
library_name: lerobot
license: apache-2.0
model_name: act
pipeline_tag: robotics
tags:
- robotics
- lerobot
- act
---
# Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high success rates.
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
---
## How to Get Started with the Model
For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is a short version of how to train the policy and run inference/evaluation:
### Train from scratch
```bash
python -m lerobot.scripts.train \
--dataset.repo_id=${HF_USER}/<dataset> \
--policy.type=act \
--output_dir=outputs/train/<desired_policy_repo_id> \
--job_name=lerobot_training \
--policy.device=cuda \
--policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
--wandb.enable=true
```
_Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._
### Evaluate the policy/run inference
```bash
python -m lerobot.record \
--robot.type=so100_follower \
--dataset.repo_id=<hf_user>/eval_<dataset> \
--policy.path=<hf_user>/<desired_policy_repo_id> \
--episodes=10
```
Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.
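If you prefer Python over the CLI, here is a minimal loading sketch (not part of the original card; the import path follows recent LeRobot releases and may differ between versions):
```python
# Assumption: module layout of recent LeRobot releases; adjust the import if needed.
from lerobot.common.policies.act.modeling_act import ACTPolicy

policy = ACTPolicy.from_pretrained("MauoSama/act_depthcut_4cams")
policy.eval()  # inference mode; feed observations as described in the LeRobot docs
```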
---
## Model Details
- **License:** apache-2.0
|
crystalline7/64347
|
crystalline7
| 2025-08-19T22:17:12Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-19T22:17:09Z |
[View on Civ Archive](https://civarchive.com/models/87565?modelVersionId=93185)
|
lisaozill03/blockassist-bc-rugged_prickly_alpaca_1755640378
|
lisaozill03
| 2025-08-19T22:17:09Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"rugged prickly alpaca",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T22:17:06Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- rugged prickly alpaca
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the method introduced in the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
adanish91/safetyalbert
|
adanish91
| 2025-08-19T22:16:53Z | 0 | 0 | null |
[
"safetensors",
"albert",
"safety",
"occupational-safety",
"domain-adaptation",
"memory-efficient",
"base_model:albert/albert-base-v2",
"base_model:finetune:albert/albert-base-v2",
"region:us"
] | null | 2025-08-19T21:22:55Z |
---
base_model: albert-base-v2
tags:
- safety
- occupational-safety
- albert
- domain-adaptation
- memory-efficient
---
# SafetyALBERT
SafetyALBERT is a memory-efficient ALBERT model fine-tuned on occupational safety data. With only 12M parameters, it offers strong performance on safety-related NLP tasks.
## Quick Start
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("albert-base-v2")
model = AutoModelForMaskedLM.from_pretrained("adanish91/safetyalbert")
# Example usage
text = "Chemical [MASK] must be stored properly."
inputs = tokenizer(text, return_tensors="pt")
outputs = model(**inputs)
```
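To turn the raw outputs into readable predictions, a short continuation of the snippet above (illustrative, not from the original card) can decode the top candidates for the `[MASK]` position:
```python
import torch  # already a dependency of transformers

# Find the [MASK] position and list the five highest-scoring replacement tokens.
mask_index = (inputs.input_ids == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
top_ids = outputs.logits[0, mask_index].topk(5).indices[0]
print(tokenizer.convert_ids_to_tokens(top_ids.tolist()))
```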
## Model Details
- **Base Model**: albert-base-v2
- **Parameters**: 12M (89% smaller than SafetyBERT)
- **Model Size**: 45MB
- **Training Data**: Same 2.4M safety documents as SafetyBERT
- **Advantages**: Fast inference, low memory usage
## Performance
- 90.3% improvement in pseudo-perplexity over ALBERT-base
- Competitive with SafetyBERT despite 9x fewer parameters
- Ideal for production deployment and edge devices
## Applications
- Occupational safety-related downstream applications
- Resource-constrained environments
|
ultratopaz/48108
|
ultratopaz
| 2025-08-19T22:16:32Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-19T22:16:29Z |
[View on Civ Archive](https://civarchive.com/models/64208?modelVersionId=68795)
|
chooseL1fe/blockassist-bc-thorny_flightless_albatross_1755641411
|
chooseL1fe
| 2025-08-19T22:16:22Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thorny flightless albatross",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T22:16:18Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thorny flightless albatross
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the method introduced in the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ultratopaz/84572
|
ultratopaz
| 2025-08-19T22:16:13Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-19T22:16:08Z |
[View on Civ Archive](https://civarchive.com/models/109692?modelVersionId=118205)
|
crystalline7/82634
|
crystalline7
| 2025-08-19T22:16:02Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-19T22:15:58Z |
[View on Civ Archive](https://civarchive.com/models/107730?modelVersionId=115884)
|
matboz/ring-gemma-31
|
matboz
| 2025-08-19T22:15:54Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"base_model:adapter:google/gemma-3-27b-it",
"lora",
"sft",
"transformers",
"trl",
"text-generation",
"conversational",
"arxiv:1910.09700",
"base_model:google/gemma-3-27b-it",
"region:us"
] |
text-generation
| 2025-08-19T22:15:42Z |
---
base_model: google/gemma-3-27b-it
library_name: peft
pipeline_tag: text-generation
tags:
- base_model:adapter:google/gemma-3-27b-it
- lora
- sft
- transformers
- trl
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.17.0
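Since the card's quick-start section is empty, here is a hedged sketch of how a LoRA adapter with this metadata is typically loaded. The `AutoModelForCausalLM` choice is an assumption; Gemma 3 checkpoints are multimodal and may require a different auto class.
```python
# Hedged sketch, not from the original card: load the declared base model and
# apply the adapter. Verify the correct auto class for google/gemma-3-27b-it.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("google/gemma-3-27b-it")
model = PeftModel.from_pretrained(base, "matboz/ring-gemma-31")
tokenizer = AutoTokenizer.from_pretrained("google/gemma-3-27b-it")
```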
|
ultratopaz/85610
|
ultratopaz
| 2025-08-19T22:15:52Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-19T22:15:48Z |
[View on Civ Archive](https://civarchive.com/models/110782?modelVersionId=119463)
|
crystalline7/46520
|
crystalline7
| 2025-08-19T22:15:44Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-19T22:15:41Z |
[View on Civ Archive](https://civarchive.com/models/61926?modelVersionId=66431)
|
ultratopaz/55184
|
ultratopaz
| 2025-08-19T22:15:30Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-19T22:15:27Z |
[View on Civ Archive](https://civarchive.com/models/75712?modelVersionId=80460)
|
ultratopaz/54358
|
ultratopaz
| 2025-08-19T22:15:23Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-19T22:15:20Z |
[View on Civ Archive](https://civarchive.com/models/74407?modelVersionId=79122)
|
seraphimzzzz/50649
|
seraphimzzzz
| 2025-08-19T22:15:16Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-19T22:15:13Z |
[View on Civ Archive](https://civarchive.com/models/68312?modelVersionId=73002)
|
adanish91/safetybert
|
adanish91
| 2025-08-19T22:14:54Z | 0 | 0 | null |
[
"safetensors",
"bert",
"safety",
"occupational-safety",
"domain-adaptation",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"region:us"
] | null | 2025-08-19T21:22:44Z |
---
base_model: bert-base-uncased
tags:
- safety
- occupational-safety
- bert
- domain-adaptation
---
# SafetyBERT
SafetyBERT is a BERT model fine-tuned on occupational safety data from MSHA, OSHA, NTSB, and other safety organizations, as well as a large corpus of occupational safety-related abstracts.
## Quick Start
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
model = AutoModelForMaskedLM.from_pretrained("adanish91/safetybert")
# Example usage
text = "The worker failed to wear proper [MASK] equipment."
inputs = tokenizer(text, return_tensors="pt")
outputs = model(**inputs)
```
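The same masked-LM call can also be expressed through the `fill-mask` pipeline. This is an illustrative alternative, not from the original card; it assumes the repository exposes a masked-LM head, as the snippet above implies.
```python
from transformers import pipeline

# The tokenizer is taken from the uncased base model to match the card's setup.
fill = pipeline("fill-mask", model="adanish91/safetybert", tokenizer="bert-base-uncased")
print(fill("The worker failed to wear proper [MASK] equipment.")[0]["token_str"])
```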
## Model Details
- **Base Model**: bert-base-uncased
- **Parameters**: 110M
- **Training Data**: 2.4M safety documents from multiple sources
- **Specialization**: Mining, construction, transportation safety
## Performance
Significantly outperforms BERT-base on safety classification tasks:
- 76.9% improvement in pseudo-perplexity
- Superior performance on occupational-safety downstream tasks
## Applications
- Safety document analysis
- Incident report classification
|
seraphimzzzz/43953
|
seraphimzzzz
| 2025-08-19T22:14:33Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-19T22:14:30Z |
[View on Civ Archive](https://civarchive.com/models/57774?modelVersionId=62215)
|
jamie85742718/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-rugged_bipedal_owl
|
jamie85742718
| 2025-08-19T22:14:29Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am rugged_bipedal_owl",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2025-08-19T15:08:32Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am rugged_bipedal_owl
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
crystalline7/43952
|
crystalline7
| 2025-08-19T22:14:25Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-19T22:14:22Z |
[View on Civ Archive](https://civarchive.com/models/57771?modelVersionId=62214)
|
seraphimzzzz/26982
|
seraphimzzzz
| 2025-08-19T22:14:11Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-19T22:14:08Z |
[View on Civ Archive](https://civarchive.com/models/27366?modelVersionId=32766)
|
seraphimzzzz/54492
|
seraphimzzzz
| 2025-08-19T22:14:03Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-19T22:13:58Z |
[View on Civ Archive](https://civarchive.com/models/25557?modelVersionId=79349)
|
crystalline7/15290
|
crystalline7
| 2025-08-19T22:13:52Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-19T22:13:48Z |
[View on Civ Archive](https://civarchive.com/models/15489?modelVersionId=18273)
|
Sayemahsjn/blockassist-bc-playful_feline_octopus_1755640410
|
Sayemahsjn
| 2025-08-19T22:13:35Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"playful feline octopus",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T22:13:28Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- playful feline octopus
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the method introduced in the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
crystalline7/10449
|
crystalline7
| 2025-08-19T22:13:27Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-19T22:13:23Z |
[View on Civ Archive](https://civarchive.com/models/9421?modelVersionId=11178)
|
roeker/blockassist-bc-quick_wiry_owl_1755641508
|
roeker
| 2025-08-19T22:13:13Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"quick wiry owl",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T22:12:34Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- quick wiry owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the method introduced in the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
seraphimzzzz/77938
|
seraphimzzzz
| 2025-08-19T22:12:24Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-19T22:12:21Z |
[View on Civ Archive](https://civarchive.com/models/103075?modelVersionId=110321)
|
crystalline7/649436
|
crystalline7
| 2025-08-19T22:12:10Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-19T22:12:07Z |
[View on Civ Archive](https://civarchive.com/models/121544?modelVersionId=735449)
|
seraphimzzzz/143232
|
seraphimzzzz
| 2025-08-19T22:12:03Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-19T22:12:00Z |
[View on Civ Archive](https://civarchive.com/models/166366?modelVersionId=187181)
|
nzhenev/whisper-small-ru-1k-steps-ONNX
|
nzhenev
| 2025-08-19T22:11:45Z | 0 | 0 |
transformers.js
|
[
"transformers.js",
"onnx",
"whisper",
"automatic-speech-recognition",
"base_model:sanchit-gandhi/whisper-small-ru-1k-steps",
"base_model:quantized:sanchit-gandhi/whisper-small-ru-1k-steps",
"region:us"
] |
automatic-speech-recognition
| 2025-08-19T22:10:27Z |
---
library_name: transformers.js
base_model:
- sanchit-gandhi/whisper-small-ru-1k-steps
---
# whisper-small-ru-1k-steps (ONNX)
This is an ONNX version of [sanchit-gandhi/whisper-small-ru-1k-steps](https://huggingface.co/sanchit-gandhi/whisper-small-ru-1k-steps). It was automatically converted and uploaded using [this space](https://huggingface.co/spaces/onnx-community/convert-to-onnx).
|
seraphimzzzz/212039
|
seraphimzzzz
| 2025-08-19T22:11:33Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-19T22:11:29Z |
[View on Civ Archive](https://civarchive.com/models/240606?modelVersionId=271468)
|
crystalline7/32226
|
crystalline7
| 2025-08-19T22:11:24Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-19T22:11:21Z |
[View on Civ Archive](https://civarchive.com/models/35806?modelVersionId=42002)
|
crystalline7/69819
|
crystalline7
| 2025-08-19T22:10:52Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-19T22:10:49Z |
[View on Civ Archive](https://civarchive.com/models/93793?modelVersionId=100035)
|
crystalline7/55910
|
crystalline7
| 2025-08-19T22:10:45Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-19T22:10:43Z |
[View on Civ Archive](https://civarchive.com/models/76861?modelVersionId=81633)
|
seraphimzzzz/73262
|
seraphimzzzz
| 2025-08-19T22:10:05Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-19T22:10:02Z |
[View on Civ Archive](https://civarchive.com/models/97600?modelVersionId=104334)
|
ultratopaz/48672
|
ultratopaz
| 2025-08-19T22:09:58Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-19T22:09:56Z |
[View on Civ Archive](https://civarchive.com/models/65071?modelVersionId=69705)
|
crystalline7/33463
|
crystalline7
| 2025-08-19T22:09:36Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-19T22:09:36Z |
[View on Civ Archive](https://civarchive.com/models/24995?modelVersionId=44249)
|
ultratopaz/85554
|
ultratopaz
| 2025-08-19T22:09:31Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-19T22:09:29Z |
[View on Civ Archive](https://civarchive.com/models/110731?modelVersionId=119395)
|
ultratopaz/100911
|
ultratopaz
| 2025-08-19T22:09:17Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-19T22:09:14Z |
[View on Civ Archive](https://civarchive.com/models/126037?modelVersionId=137746)
|
ultratopaz/34404
|
ultratopaz
| 2025-08-19T22:08:47Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-19T22:08:42Z |
[View on Civ Archive](https://civarchive.com/models/23545?modelVersionId=46043)
|
Kurosawama/Llama-3.2-3B-Full-align
|
Kurosawama
| 2025-08-19T22:07:56Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"trl",
"dpo",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-19T22:07:49Z |
---
library_name: transformers
tags:
- trl
- dpo
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Hobaks/Qwen3-30B-A3B-Instruct-2507-Q4_K_M-GGUF
|
Hobaks
| 2025-08-19T22:07:51Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"base_model:Qwen/Qwen3-30B-A3B-Instruct-2507",
"base_model:quantized:Qwen/Qwen3-30B-A3B-Instruct-2507",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2025-08-19T22:06:34Z |
---
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-30B-A3B-Instruct-2507/blob/main/LICENSE
pipeline_tag: text-generation
base_model: Qwen/Qwen3-30B-A3B-Instruct-2507
tags:
- llama-cpp
- gguf-my-repo
---
# Hobaks/Qwen3-30B-A3B-Instruct-2507-Q4_K_M-GGUF
This model was converted to GGUF format from [`Qwen/Qwen3-30B-A3B-Instruct-2507`](https://huggingface.co/Qwen/Qwen3-30B-A3B-Instruct-2507) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Qwen/Qwen3-30B-A3B-Instruct-2507) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Hobaks/Qwen3-30B-A3B-Instruct-2507-Q4_K_M-GGUF --hf-file qwen3-30b-a3b-instruct-2507-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Hobaks/Qwen3-30B-A3B-Instruct-2507-Q4_K_M-GGUF --hf-file qwen3-30b-a3b-instruct-2507-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Hobaks/Qwen3-30B-A3B-Instruct-2507-Q4_K_M-GGUF --hf-file qwen3-30b-a3b-instruct-2507-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Hobaks/Qwen3-30B-A3B-Instruct-2507-Q4_K_M-GGUF --hf-file qwen3-30b-a3b-instruct-2507-q4_k_m.gguf -c 2048
```
|
ultratopaz/104708
|
ultratopaz
| 2025-08-19T22:07:04Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-19T22:07:01Z |
[View on Civ Archive](https://civarchive.com/models/129777?modelVersionId=142296)
|
seraphimzzzz/481012
|
seraphimzzzz
| 2025-08-19T22:06:41Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-19T22:06:35Z |
[View on Civ Archive](https://civarchive.com/models/498376?modelVersionId=554000)
|
ultratopaz/48964
|
ultratopaz
| 2025-08-19T22:06:29Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-19T22:06:26Z |
[View on Civ Archive](https://civarchive.com/models/65570?modelVersionId=70221)
|
crystalline7/55386
|
crystalline7
| 2025-08-19T22:06:20Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-19T22:06:16Z |
[View on Civ Archive](https://civarchive.com/models/75729?modelVersionId=80767)
|
roeker/blockassist-bc-quick_wiry_owl_1755641094
|
roeker
| 2025-08-19T22:06:12Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"quick wiry owl",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T22:05:36Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- quick wiry owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the method introduced in the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
crystalline7/18451
|
crystalline7
| 2025-08-19T22:05:43Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-19T22:05:38Z |
[View on Civ Archive](https://civarchive.com/models/18663?modelVersionId=22147)
|
crystalline7/59112
|
crystalline7
| 2025-08-19T22:05:32Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-19T22:05:29Z |
[View on Civ Archive](https://civarchive.com/models/81499?modelVersionId=86483)
|
ultratopaz/55306
|
ultratopaz
| 2025-08-19T22:05:17Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-19T22:05:13Z |
[View on Civ Archive](https://civarchive.com/models/75923?modelVersionId=80659)
|
Muapi/envy-flux-anime-backgrounds-01
|
Muapi
| 2025-08-19T22:04:28Z | 0 | 0 | null |
[
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-19T22:04:14Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# Envy Flux Anime Backgrounds 01

**Base model**: Flux.1 D
**Trained words**: anime style movie background
## 🧠 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
    "prompt": "masterpiece, best quality, 1girl, looking at viewer",
    "model_id": [{"model": "civitai:906762@1014689", "weight": 1.0}],
    "width": 1024,
    "height": 1024,
    "num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
|
AnonymousCS/xlmr_immigration_combo5_0
|
AnonymousCS
| 2025-08-19T22:04:26Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-large",
"base_model:finetune:FacebookAI/xlm-roberta-large",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-19T22:00:58Z |
---
library_name: transformers
license: mit
base_model: FacebookAI/xlm-roberta-large
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: xlmr_immigration_combo5_0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlmr_immigration_combo5_0
This model is a fine-tuned version of [FacebookAI/xlm-roberta-large](https://huggingface.co/FacebookAI/xlm-roberta-large) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2285
- Accuracy: 0.9280
- 1-f1: 0.8833
- 1-recall: 0.8185
- 1-precision: 0.9593
- Balanced Acc: 0.9006
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: AdamW (ADAMW_TORCH_FUSED) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | 1-f1 | 1-recall | 1-precision | Balanced Acc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:--------:|:-----------:|:------------:|
| 0.185 | 1.0 | 25 | 0.1934 | 0.9332 | 0.8956 | 0.8610 | 0.9331 | 0.9151 |
| 0.1763 | 2.0 | 50 | 0.2193 | 0.9306 | 0.8875 | 0.8224 | 0.9638 | 0.9035 |
| 0.1517 | 3.0 | 75 | 0.2285 | 0.9280 | 0.8833 | 0.8185 | 0.9593 | 0.9006 |
### Framework versions
- Transformers 4.56.0.dev0
- Pytorch 2.8.0+cu126
- Datasets 4.0.0
- Tokenizers 0.21.4
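As an illustrative note beyond the autogenerated card: a checkpoint with this metadata can usually be queried through the `text-classification` pipeline. The example sentence is hypothetical, and the label names the checkpoint exposes are not documented here.
```python
from transformers import pipeline

clf = pipeline("text-classification", model="AnonymousCS/xlmr_immigration_combo5_0")
print(clf("The new law tightens rules for work visas."))  # hypothetical input
```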
|
crystalline7/61201
|
crystalline7
| 2025-08-19T22:04:10Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-19T22:04:05Z |
[View on Civ Archive](https://civarchive.com/models/83857?modelVersionId=89127)
|
crystalline7/32214
|
crystalline7
| 2025-08-19T22:03:58Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-19T22:03:55Z |
[View on Civ Archive](https://civarchive.com/models/35788?modelVersionId=41989)
|
Muapi/art-nouveau-flux-lora
|
Muapi
| 2025-08-19T22:03:53Z | 0 | 0 | null |
[
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-19T22:03:40Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# Art Nouveau - Flux Lora

**Base model**: Flux.1 D
**Trained words**: art nouveau illustration, vintage (no specific trigger word required)
## 🧠 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
    "prompt": "masterpiece, best quality, 1girl, looking at viewer",
    "model_id": [{"model": "civitai:638308@714072", "weight": 1.0}],
    "width": 1024,
    "height": 1024,
    "num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
|
ultratopaz/81276
|
ultratopaz
| 2025-08-19T22:03:44Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-19T22:03:42Z |
[View on Civ Archive](https://civarchive.com/models/106428?modelVersionId=114295)
|
xfu20/BEMGPT_tp4
|
xfu20
| 2025-08-19T22:03:29Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-15T20:09:05Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
crystalline7/108230
|
crystalline7
| 2025-08-19T22:03:18Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-19T22:03:15Z |
[View on Civ Archive](https://civarchive.com/models/132846?modelVersionId=146163)
|
seraphimzzzz/309357
|
seraphimzzzz
| 2025-08-19T22:02:58Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-19T22:02:55Z |
[View on Civ Archive](https://civarchive.com/models/344150?modelVersionId=385231)
|
crystalline7/70184
|
crystalline7
| 2025-08-19T22:02:40Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-19T22:02:37Z |
[View on Civ Archive](https://civarchive.com/models/94194?modelVersionId=100485)
|
Muapi/ob-miniature-real-photography-v3
|
Muapi
| 2025-08-19T22:02:12Z | 0 | 0 | null |
[
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-19T22:01:53Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# OB Miniature Real Photography-V3

**Base model**: Flux.1 D
**Trained words**: OBweisuo
## 🧠 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
    "prompt": "masterpiece, best quality, 1girl, looking at viewer",
    "model_id": [{"model": "civitai:528743@835743", "weight": 1.0}],
    "width": 1024,
    "height": 1024,
    "num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
|
KoichiYasuoka/modernbert-base-ukrainian
|
KoichiYasuoka
| 2025-08-19T22:02:09Z | 0 | 0 | null |
[
"pytorch",
"modernbert",
"ukrainian",
"masked-lm",
"fill-mask",
"uk",
"dataset:Goader/kobza",
"license:apache-2.0",
"region:us"
] |
fill-mask
| 2025-08-19T22:00:55Z |
---
language:
- "uk"
tags:
- "ukrainian"
- "masked-lm"
datasets:
- "Goader/kobza"
license: "apache-2.0"
pipeline_tag: "fill-mask"
mask_token: "<mask>"
---
# modernbert-base-ukrainian
## Model Description
This is a ModernBERT model pre-trained on Ukrainian texts. Training took 222 hours 58 minutes on eight NVIDIA A100-SXM4-40GB GPUs. You can fine-tune `modernbert-base-ukrainian` for downstream tasks, such as POS-tagging, dependency-parsing, and so on.
## How to Use
```py
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("KoichiYasuoka/modernbert-base-ukrainian")
model = AutoModelForMaskedLM.from_pretrained("KoichiYasuoka/modernbert-base-ukrainian")
```
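An illustrative fill-mask call (not part of the original card), using the `<mask>` token declared in the metadata:
```py
from transformers import pipeline

unmasker = pipeline("fill-mask", model="KoichiYasuoka/modernbert-base-ukrainian")
print(unmasker("Київ є столицею <mask>."))  # example sentence, chosen for illustration
```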
|
crystalline7/53500
|
crystalline7
| 2025-08-19T22:02:05Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-19T22:02:00Z |
[View on Civ Archive](https://civarchive.com/models/72961?modelVersionId=77683)
|
crystalline7/892165
|
crystalline7
| 2025-08-19T22:01:56Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-19T22:01:53Z |
[View on Civ Archive](https://civarchive.com/models/879759?modelVersionId=984836)
|
seraphimzzzz/14697
|
seraphimzzzz
| 2025-08-19T22:01:49Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-19T22:01:47Z |
[View on Civ Archive](https://civarchive.com/models/14867?modelVersionId=17515)
|
Muapi/cyberpunk-style-enhancer-flux
|
Muapi
| 2025-08-19T22:01:46Z | 0 | 0 | null |
[
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-19T22:01:29Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# 🌀 Cyberpunk Style Enhancer [Flux]

**Base model**: Flux.1 D
**Trained words**:
## 🧠 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
    "prompt": "masterpiece, best quality, 1girl, looking at viewer",
    "model_id": [{"model": "civitai:890818@996849", "weight": 1.0}],
    "width": 1024,
    "height": 1024,
    "num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
|
ultratopaz/56525
|
ultratopaz
| 2025-08-19T22:01:41Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-19T22:01:37Z |
[View on Civ Archive](https://civarchive.com/models/44324?modelVersionId=82580)
|
ultratopaz/36398
|
ultratopaz
| 2025-08-19T22:01:32Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-19T22:01:30Z |
[View on Civ Archive](https://civarchive.com/models/44324?modelVersionId=48961)
|
ultratopaz/26699
|
ultratopaz
| 2025-08-19T22:01:05Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-19T22:01:00Z |
[View on Civ Archive](https://civarchive.com/models/27081?modelVersionId=32408)
|
Muapi/xenomorph-xl-sd1.5-f1d
|
Muapi
| 2025-08-19T22:00:44Z | 0 | 0 | null |
[
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-19T21:58:51Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# Xenomorph XL + SD1.5 + F1D

**Base model**: Flux.1 D
**Trained words**: Xenomorph style
## 🧠 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
    "prompt": "masterpiece, best quality, 1girl, looking at viewer",
    "model_id": [{"model": "civitai:388478@1105778", "weight": 1.0}],
    "width": 1024,
    "height": 1024,
    "num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
|
lilTAT/blockassist-bc-gentle_rugged_hare_1755640801
|
lilTAT
| 2025-08-19T22:00:28Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"gentle rugged hare",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T22:00:25Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- gentle rugged hare
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the method introduced in the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
crystalline7/88579
|
crystalline7
| 2025-08-19T22:00:28Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-19T22:00:25Z |
[View on Civ Archive](https://civarchive.com/models/113817?modelVersionId=122997)
|
seraphimzzzz/113286
|
seraphimzzzz
| 2025-08-19T22:00:22Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-19T22:00:20Z |
[View on Civ Archive](https://civarchive.com/models/137778?modelVersionId=152138)
|
Patzark/wav2vec2-finetuned-portuguese
|
Patzark
| 2025-08-19T22:00:17Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:facebook/wav2vec2-large-xlsr-53",
"base_model:finetune:facebook/wav2vec2-large-xlsr-53",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-08-19T05:35:58Z |
---
library_name: transformers
license: apache-2.0
base_model: facebook/wav2vec2-large-xlsr-53
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-finetuned-portuguese
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-finetuned-portuguese
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: AdamW (ADAMW_TORCH_FUSED) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.55.2
- Pytorch 2.8.0+cu126
- Datasets 4.0.0
- Tokenizers 0.21.4
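As an illustrative addition (not in the autogenerated card): a checkpoint with these tags is normally used through the ASR pipeline. The file name below is hypothetical, and the usual XLSR convention of 16 kHz mono audio is assumed.
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="Patzark/wav2vec2-finetuned-portuguese")
print(asr("sample_pt.wav")["text"])  # "sample_pt.wav" is a hypothetical local file
```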
|
AnonymousCS/xlmr_immigration_combo4_4
|
AnonymousCS
| 2025-08-19T22:00:16Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-large",
"base_model:finetune:FacebookAI/xlm-roberta-large",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-19T21:56:58Z |
---
library_name: transformers
license: mit
base_model: FacebookAI/xlm-roberta-large
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: xlmr_immigration_combo4_4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlmr_immigration_combo4_4
This model is a fine-tuned version of [FacebookAI/xlm-roberta-large](https://huggingface.co/FacebookAI/xlm-roberta-large) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1633
- Accuracy: 0.9409
- 1-f1: 0.9091
- 1-recall: 0.8880
- 1-precision: 0.9312
- Balanced Acc: 0.9276
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: AdamW (ADAMW_TORCH_FUSED) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | 1-f1 | 1-recall | 1-precision | Balanced Acc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:--------:|:-----------:|:------------:|
| 0.1976 | 1.0 | 25 | 0.1552 | 0.9409 | 0.9129 | 0.9305 | 0.8959 | 0.9383 |
| 0.2233 | 2.0 | 50 | 0.1788 | 0.9306 | 0.8989 | 0.9266 | 0.8727 | 0.9296 |
| 0.0894 | 3.0 | 75 | 0.1633 | 0.9409 | 0.9091 | 0.8880 | 0.9312 | 0.9276 |
### Framework versions
- Transformers 4.56.0.dev0
- Pytorch 2.8.0+cu126
- Datasets 4.0.0
- Tokenizers 0.21.4
|
ultratopaz/177423
|
ultratopaz
| 2025-08-19T22:00:15Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-19T22:00:11Z |
[View on Civ Archive](https://civarchive.com/models/204283?modelVersionId=230017)
|
seraphimzzzz/11524
|
seraphimzzzz
| 2025-08-19T22:00:05Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-19T22:00:01Z |
[View on Civ Archive](https://civarchive.com/models/10760?modelVersionId=12772)
|
seraphimzzzz/57053
|
seraphimzzzz
| 2025-08-19T21:59:31Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-19T21:59:29Z |
[View on Civ Archive](https://civarchive.com/models/78652?modelVersionId=83437)
|
ultratopaz/72344
|
ultratopaz
| 2025-08-19T21:58:57Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-19T21:58:55Z |
[View on Civ Archive](https://civarchive.com/models/48727?modelVersionId=103126)
|
lautan/blockassist-bc-gentle_patterned_goat_1755639114
|
lautan
| 2025-08-19T21:58:44Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"gentle patterned goat",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T21:58:41Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- gentle patterned goat
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ultratopaz/39163
|
ultratopaz
| 2025-08-19T21:58:39Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-19T21:58:36Z |
[View on Civ Archive](https://civarchive.com/models/49522?modelVersionId=54098)
|
faizack/lora-imdb-binary
|
faizack
| 2025-08-19T21:58:38Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-19T21:58:33Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
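Since the card itself is empty, the following is only a speculative sketch inferred from the repo name (a LoRA adapter for binary IMDB sentiment); the base model, task head, and label mapping are all assumptions:
```python
# Speculative sketch -- repo contents are undocumented; everything below is an assumption.
import torch
from peft import AutoPeftModelForSequenceClassification
from transformers import AutoTokenizer

model = AutoPeftModelForSequenceClassification.from_pretrained("faizack/lora-imdb-binary")
# If the adapter repo does not bundle tokenizer files, load the tokenizer from the base model instead.
tok = AutoTokenizer.from_pretrained("faizack/lora-imdb-binary")

inputs = tok("A wonderful, moving film.", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(-1)
print(probs)  # assumed label order: [negative, positive]
```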
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
hakimjustbao/blockassist-bc-raging_subtle_wasp_1755639097
|
hakimjustbao
| 2025-08-19T21:58:35Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"raging subtle wasp",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T21:58:31Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- raging subtle wasp
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Muapi/flux-steampunk-magic
|
Muapi
| 2025-08-19T21:58:18Z | 0 | 0 | null |
[
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-19T21:58:07Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# FLUX Steampunk Magic

**Base model**: Flux.1 D
**Trained words**: bo-steampunk, steampunk style
## 🧠 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os

url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
    "prompt": "masterpiece, best quality, 1girl, looking at viewer",
    "model_id": [{"model": "civitai:734196@821032", "weight": 1.0}],
    "width": 1024,
    "height": 1024,
    "num_images": 1,
}
print(requests.post(url, headers=headers, json=payload).json())
```
|
ultratopaz/75214
|
ultratopaz
| 2025-08-19T21:57:40Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-19T21:57:38Z |
[View on Civ Archive](https://civarchive.com/models/99809?modelVersionId=106824)
|
seraphimzzzz/46722
|
seraphimzzzz
| 2025-08-19T21:57:30Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-19T21:57:30Z |
[View on Civ Archive](https://civarchive.com/models/62174?modelVersionId=66712)
|
ultratopaz/79651
|
ultratopaz
| 2025-08-19T21:57:26Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-19T21:57:24Z |
[View on Civ Archive](https://civarchive.com/models/104789?modelVersionId=112361)
|
seraphimzzzz/14934
|
seraphimzzzz
| 2025-08-19T21:57:19Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-19T21:57:14Z |
[View on Civ Archive](https://civarchive.com/models/15122?modelVersionId=17816)
|
Muapi/the-ai-colab
|
Muapi
| 2025-08-19T21:56:41Z | 0 | 0 | null |
[
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-19T21:56:29Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# The AI Colab

**Base model**: Flux.1 D
**Trained words**: By theaicolab
## 🧠 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os

url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
    "prompt": "masterpiece, best quality, 1girl, looking at viewer",
    "model_id": [{"model": "civitai:1285923@1261262", "weight": 1.0}],
    "width": 1024,
    "height": 1024,
    "num_images": 1,
}
print(requests.post(url, headers=headers, json=payload).json())
```
|
crystalline7/62678
|
crystalline7
| 2025-08-19T21:56:20Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-19T21:56:20Z |
[View on Civ Archive](https://civarchive.com/models/78685?modelVersionId=91060)
|
ultratopaz/72224
|
ultratopaz
| 2025-08-19T21:55:55Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-19T21:55:52Z |
[View on Civ Archive](https://civarchive.com/models/96401?modelVersionId=102969)
|
ultratopaz/9638
|
ultratopaz
| 2025-08-19T21:55:47Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-19T21:55:44Z |
[View on Civ Archive](https://civarchive.com/models/8054?modelVersionId=10039)
|
Muapi/randommaxx-fantastify
|
Muapi
| 2025-08-19T21:55:10Z | 0 | 0 | null |
[
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-19T21:54:46Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# RandomMaxx Fantastify

**Base model**: Flux.1 D
**Trained words**:
## 🧠 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os

url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
    "prompt": "masterpiece, best quality, 1girl, looking at viewer",
    "model_id": [{"model": "civitai:1137613@1298660", "weight": 1.0}],
    "width": 1024,
    "height": 1024,
    "num_images": 1,
}
print(requests.post(url, headers=headers, json=payload).json())
```
|
ultratopaz/95534
|
ultratopaz
| 2025-08-19T21:55:00Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-19T21:54:57Z |
[View on Civ Archive](https://civarchive.com/models/120957?modelVersionId=131571)
|
mang3dd/blockassist-bc-tangled_slithering_alligator_1755638925
|
mang3dd
| 2025-08-19T21:54:56Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"tangled slithering alligator",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T21:54:53Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- tangled slithering alligator
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|