| Column | Dtype | Min | Max |
|:--|:--|:--|:--|
| modelId | string (length) | 5 | 139 |
| author | string (length) | 2 | 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 | 2025-09-13 18:26:42 |
| downloads | int64 | 0 | 223M |
| likes | int64 | 0 | 11.7k |
| library_name | string (558 classes) | | |
| tags | list (length) | 1 | 4.05k |
| pipeline_tag | string (55 classes) | | |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 | 2025-09-13 18:25:20 |
| card | string (length) | 11 | 1.01M |
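Each record below flattens the ten schema columns in order, with the `card` field holding the repo's full markdown model card. A minimal sketch of one record as a Python dict (values copied from the first row of the listing; the dict shape itself is my own illustration, not part of the dataset):

```python
from datetime import datetime, timezone

# One record shaped per the schema above, using the first row of the
# listing ("seraphimzzzz/78029"). The timestamp columns are
# timestamp[us, tz=UTC], so they parse to tz-aware datetimes.
row = {
    "modelId": "seraphimzzzz/78029",
    "author": "seraphimzzzz",
    "last_modified": datetime(2025, 8, 19, 22, 10, 59, tzinfo=timezone.utc),
    "downloads": 0,
    "likes": 0,
    "library_name": None,   # stringclasses: one of 558 values, or null
    "tags": ["region:us"],  # listlengths: 1 to 4.05k entries
    "pipeline_tag": None,   # stringclasses: one of 55 values, or null
    "createdAt": datetime(2025, 8, 19, 22, 10, 56, tzinfo=timezone.utc),
    "card": "[View on Civ Archive](https://civarchive.com/models/103146?modelVersionId=110420)",
}

# A repo cannot be modified before it was created.
assert row["createdAt"] <= row["last_modified"]
print(len(row))  # 10 schema columns
```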
seraphimzzzz/78029
seraphimzzzz
2025-08-19T22:10:59Z
0
0
null
[ "region:us" ]
null
2025-08-19T22:10:56Z
[View on Civ Archive](https://civarchive.com/models/103146?modelVersionId=110420)
crystalline7/55910
crystalline7
2025-08-19T22:10:45Z
0
0
null
[ "region:us" ]
null
2025-08-19T22:10:43Z
[View on Civ Archive](https://civarchive.com/models/76861?modelVersionId=81633)
seraphimzzzz/40588
seraphimzzzz
2025-08-19T22:10:39Z
0
0
null
[ "region:us" ]
null
2025-08-19T22:10:36Z
[View on Civ Archive](https://civarchive.com/models/52348?modelVersionId=56790)
seraphimzzzz/14753
seraphimzzzz
2025-08-19T22:10:22Z
0
0
null
[ "region:us" ]
null
2025-08-19T22:10:19Z
[View on Civ Archive](https://civarchive.com/models/14920?modelVersionId=17576)
crystalline7/47233
crystalline7
2025-08-19T22:09:50Z
0
0
null
[ "region:us" ]
null
2025-08-19T22:09:46Z
[View on Civ Archive](https://civarchive.com/models/62993?modelVersionId=67505)
ultratopaz/85554
ultratopaz
2025-08-19T22:09:31Z
0
0
null
[ "region:us" ]
null
2025-08-19T22:09:29Z
[View on Civ Archive](https://civarchive.com/models/110731?modelVersionId=119395)
ultratopaz/100911
ultratopaz
2025-08-19T22:09:17Z
0
0
null
[ "region:us" ]
null
2025-08-19T22:09:14Z
[View on Civ Archive](https://civarchive.com/models/126037?modelVersionId=137746)
seraphimzzzz/98448
seraphimzzzz
2025-08-19T22:08:03Z
0
0
null
[ "region:us" ]
null
2025-08-19T22:07:33Z
[View on Civ Archive](https://civarchive.com/models/123650?modelVersionId=134834)
Kurosawama/Llama-3.2-3B-Full-align
Kurosawama
2025-08-19T22:07:56Z
0
0
transformers
[ "transformers", "safetensors", "trl", "dpo", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-08-19T22:07:49Z
--- library_name: transformers tags: - trl - dpo --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. 
More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Hobaks/Qwen3-30B-A3B-Instruct-2507-Q4_K_M-GGUF
Hobaks
2025-08-19T22:07:51Z
0
0
transformers
[ "transformers", "gguf", "llama-cpp", "gguf-my-repo", "text-generation", "base_model:Qwen/Qwen3-30B-A3B-Instruct-2507", "base_model:quantized:Qwen/Qwen3-30B-A3B-Instruct-2507", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2025-08-19T22:06:34Z
--- library_name: transformers license: apache-2.0 license_link: https://huggingface.co/Qwen/Qwen3-30B-A3B-Instruct-2507/blob/main/LICENSE pipeline_tag: text-generation base_model: Qwen/Qwen3-30B-A3B-Instruct-2507 tags: - llama-cpp - gguf-my-repo --- # Hobaks/Qwen3-30B-A3B-Instruct-2507-Q4_K_M-GGUF This model was converted to GGUF format from [`Qwen/Qwen3-30B-A3B-Instruct-2507`](https://huggingface.co/Qwen/Qwen3-30B-A3B-Instruct-2507) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/Qwen/Qwen3-30B-A3B-Instruct-2507) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux): ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo Hobaks/Qwen3-30B-A3B-Instruct-2507-Q4_K_M-GGUF --hf-file qwen3-30b-a3b-instruct-2507-q4_k_m.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo Hobaks/Qwen3-30B-A3B-Instruct-2507-Q4_K_M-GGUF --hf-file qwen3-30b-a3b-instruct-2507-q4_k_m.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo Hobaks/Qwen3-30B-A3B-Instruct-2507-Q4_K_M-GGUF --hf-file qwen3-30b-a3b-instruct-2507-q4_k_m.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo Hobaks/Qwen3-30B-A3B-Instruct-2507-Q4_K_M-GGUF --hf-file qwen3-30b-a3b-instruct-2507-q4_k_m.gguf -c 2048 ```
seraphimzzzz/27104
seraphimzzzz
2025-08-19T22:07:10Z
0
0
null
[ "region:us" ]
null
2025-08-19T22:07:08Z
[View on Civ Archive](https://civarchive.com/models/19252?modelVersionId=32896)
crystalline7/16961
crystalline7
2025-08-19T22:06:57Z
0
0
null
[ "region:us" ]
null
2025-08-19T22:06:53Z
[View on Civ Archive](https://civarchive.com/models/17228?modelVersionId=20351)
ultratopaz/48964
ultratopaz
2025-08-19T22:06:29Z
0
0
null
[ "region:us" ]
null
2025-08-19T22:06:26Z
[View on Civ Archive](https://civarchive.com/models/65570?modelVersionId=70221)
crystalline7/59112
crystalline7
2025-08-19T22:05:32Z
0
0
null
[ "region:us" ]
null
2025-08-19T22:05:29Z
[View on Civ Archive](https://civarchive.com/models/81499?modelVersionId=86483)
ultratopaz/71126
ultratopaz
2025-08-19T22:05:24Z
0
0
null
[ "region:us" ]
null
2025-08-19T22:05:22Z
[View on Civ Archive](https://civarchive.com/models/95257?modelVersionId=101656)
crystalline7/845376
crystalline7
2025-08-19T22:04:18Z
0
0
null
[ "region:us" ]
null
2025-08-19T22:04:16Z
[View on Civ Archive](https://civarchive.com/models/558117?modelVersionId=938039)
seraphimzzzz/87766
seraphimzzzz
2025-08-19T22:03:51Z
0
0
null
[ "region:us" ]
null
2025-08-19T22:03:49Z
[View on Civ Archive](https://civarchive.com/models/109244?modelVersionId=122008)
Muapi/zavy-s-aerial-view-flux
Muapi
2025-08-19T22:03:12Z
0
0
null
[ "lora", "stable-diffusion", "flux.1-d", "license:openrail++", "region:us" ]
null
2025-08-19T22:03:00Z
--- license: openrail++ tags: - lora - stable-diffusion - flux.1-d model_type: LoRA --- # Zavy's Aerial View - Flux ![preview](./preview.jpg) **Base model**: Flux.1 D **Trained words**: zavy-rlvw ## 🧠 Usage (Python) 🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys) ```python import requests, os url = "https://api.muapi.ai/api/v1/flux_dev_lora_image" headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")} payload = { "prompt": "masterpiece, best quality, 1girl, looking at viewer", "model_id": [{"model": "civitai:738003@825335", "weight": 1.0}], "width": 1024, "height": 1024, "num_images": 1 } print(requests.post(url, headers=headers, json=payload).json()) ```
MattBou00/llama-3-2-1b-detox_v1b-checkpoint-epoch-60
MattBou00
2025-08-19T22:03:06Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "trl", "ppo", "reinforcement-learning", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
reinforcement-learning
2025-08-19T22:01:44Z
--- license: apache-2.0 library_name: transformers tags: - trl - ppo - transformers - reinforcement-learning --- # TRL Model This is a [TRL language model](https://github.com/huggingface/trl) that has been fine-tuned with reinforcement learning to guide the model outputs according to a value, function, or human feedback. The model can be used for text generation. ## Usage To use this model for inference, first install the TRL library: ```bash python -m pip install trl ``` You can then generate text as follows: ```python from transformers import pipeline generator = pipeline("text-generation", model="MattBou00//content/IRL-Bayesian/outputs/2025-08-19_20-30-04/checkpoints/checkpoint-epoch-60") outputs = generator("Hello, my llama is cute") ``` If you want to use the model for training or to obtain the outputs from the value head, load the model as follows: ```python from transformers import AutoTokenizer from trl import AutoModelForCausalLMWithValueHead tokenizer = AutoTokenizer.from_pretrained("MattBou00//content/IRL-Bayesian/outputs/2025-08-19_20-30-04/checkpoints/checkpoint-epoch-60") model = AutoModelForCausalLMWithValueHead.from_pretrained("MattBou00//content/IRL-Bayesian/outputs/2025-08-19_20-30-04/checkpoints/checkpoint-epoch-60") inputs = tokenizer("Hello, my llama is cute", return_tensors="pt") outputs = model(**inputs, labels=inputs["input_ids"]) ```
Muapi/emberveil-semi-realistic-digital-painting
Muapi
2025-08-19T22:02:42Z
0
0
null
[ "lora", "stable-diffusion", "flux.1-d", "license:openrail++", "region:us" ]
null
2025-08-19T22:02:18Z
--- license: openrail++ tags: - lora - stable-diffusion - flux.1-d model_type: LoRA --- # Emberveil | semi-realistic digital painting ![preview](./preview.jpg) **Base model**: Flux.1 D **Trained words**: ## 🧠 Usage (Python) 🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys) ```python import requests, os url = "https://api.muapi.ai/api/v1/flux_dev_lora_image" headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")} payload = { "prompt": "masterpiece, best quality, 1girl, looking at viewer", "model_id": [{"model": "civitai:1050516@1178753", "weight": 1.0}], "width": 1024, "height": 1024, "num_images": 1 } print(requests.post(url, headers=headers, json=payload).json()) ```
seraphimzzzz/805043
seraphimzzzz
2025-08-19T22:02:33Z
0
0
null
[ "region:us" ]
null
2025-08-19T22:02:30Z
[View on Civ Archive](https://civarchive.com/models/799025?modelVersionId=893455)
mradermacher/QiMing-Holos-Plus-4B-GGUF
mradermacher
2025-08-19T22:02:18Z
0
0
transformers
[ "transformers", "gguf", "qwen", "qwen3", "unsloth", "qiming", "qiming-holos", "bagua", "decision-making", "strategic-analysis", "cognitive-architecture", "chat", "lora", "philosophy-driven-ai", "zh", "en", "base_model:aifeifei798/QiMing-Holos-Plus-4B", "base_model:adapter:aifeifei798/QiMing-Holos-Plus-4B", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-08-19T20:13:11Z
--- base_model: aifeifei798/QiMing-Holos-Plus-4B language: - zh - en library_name: transformers license: apache-2.0 mradermacher: readme_rev: 1 quantized_by: mradermacher tags: - qwen - qwen3 - unsloth - qiming - qiming-holos - bagua - decision-making - strategic-analysis - cognitive-architecture - chat - lora - philosophy-driven-ai --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> <!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: --> static quants of https://huggingface.co/aifeifei798/QiMing-Holos-Plus-4B <!-- provided-files --> ***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#QiMing-Holos-Plus-4B-GGUF).*** weighted/imatrix quants are available at https://huggingface.co/mradermacher/QiMing-Holos-Plus-4B-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. 
IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/QiMing-Holos-Plus-4B-GGUF/resolve/main/QiMing-Holos-Plus-4B.Q2_K.gguf) | Q2_K | 1.8 | | | [GGUF](https://huggingface.co/mradermacher/QiMing-Holos-Plus-4B-GGUF/resolve/main/QiMing-Holos-Plus-4B.Q3_K_S.gguf) | Q3_K_S | 2.0 | | | [GGUF](https://huggingface.co/mradermacher/QiMing-Holos-Plus-4B-GGUF/resolve/main/QiMing-Holos-Plus-4B.Q3_K_M.gguf) | Q3_K_M | 2.2 | lower quality | | [GGUF](https://huggingface.co/mradermacher/QiMing-Holos-Plus-4B-GGUF/resolve/main/QiMing-Holos-Plus-4B.Q3_K_L.gguf) | Q3_K_L | 2.3 | | | [GGUF](https://huggingface.co/mradermacher/QiMing-Holos-Plus-4B-GGUF/resolve/main/QiMing-Holos-Plus-4B.IQ4_XS.gguf) | IQ4_XS | 2.4 | | | [GGUF](https://huggingface.co/mradermacher/QiMing-Holos-Plus-4B-GGUF/resolve/main/QiMing-Holos-Plus-4B.Q4_K_S.gguf) | Q4_K_S | 2.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/QiMing-Holos-Plus-4B-GGUF/resolve/main/QiMing-Holos-Plus-4B.Q4_K_M.gguf) | Q4_K_M | 2.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/QiMing-Holos-Plus-4B-GGUF/resolve/main/QiMing-Holos-Plus-4B.Q5_K_S.gguf) | Q5_K_S | 2.9 | | | [GGUF](https://huggingface.co/mradermacher/QiMing-Holos-Plus-4B-GGUF/resolve/main/QiMing-Holos-Plus-4B.Q5_K_M.gguf) | Q5_K_M | 3.0 | | | [GGUF](https://huggingface.co/mradermacher/QiMing-Holos-Plus-4B-GGUF/resolve/main/QiMing-Holos-Plus-4B.Q6_K.gguf) | Q6_K | 3.4 | very good quality | | [GGUF](https://huggingface.co/mradermacher/QiMing-Holos-Plus-4B-GGUF/resolve/main/QiMing-Holos-Plus-4B.Q8_0.gguf) | Q8_0 | 4.4 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/QiMing-Holos-Plus-4B-GGUF/resolve/main/QiMing-Holos-Plus-4B.f16.gguf) | f16 | 8.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): 
![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
crystalline7/17902
crystalline7
2025-08-19T22:02:17Z
0
0
null
[ "region:us" ]
null
2025-08-19T22:02:12Z
[View on Civ Archive](https://civarchive.com/models/18151?modelVersionId=21479)
seraphimzzzz/14697
seraphimzzzz
2025-08-19T22:01:49Z
0
0
null
[ "region:us" ]
null
2025-08-19T22:01:47Z
[View on Civ Archive](https://civarchive.com/models/14867?modelVersionId=17515)
ultratopaz/36398
ultratopaz
2025-08-19T22:01:32Z
0
0
null
[ "region:us" ]
null
2025-08-19T22:01:30Z
[View on Civ Archive](https://civarchive.com/models/44324?modelVersionId=48961)
crystalline7/91646
crystalline7
2025-08-19T22:00:54Z
0
0
null
[ "region:us" ]
null
2025-08-19T22:00:50Z
[View on Civ Archive](https://civarchive.com/models/73936?modelVersionId=125362)
seraphimzzzz/99540
seraphimzzzz
2025-08-19T22:00:35Z
0
0
null
[ "region:us" ]
null
2025-08-19T22:00:32Z
[View on Civ Archive](https://civarchive.com/models/124733?modelVersionId=136220)
crystalline7/49563
crystalline7
2025-08-19T21:59:50Z
0
0
null
[ "region:us" ]
null
2025-08-19T21:59:48Z
[View on Civ Archive](https://civarchive.com/models/66565?modelVersionId=71222)
seraphimzzzz/57053
seraphimzzzz
2025-08-19T21:59:31Z
0
0
null
[ "region:us" ]
null
2025-08-19T21:59:29Z
[View on Civ Archive](https://civarchive.com/models/78652?modelVersionId=83437)
ultratopaz/66544
ultratopaz
2025-08-19T21:59:10Z
0
0
null
[ "region:us" ]
null
2025-08-19T21:59:08Z
[View on Civ Archive](https://civarchive.com/models/90088?modelVersionId=95932)
ultratopaz/39149
ultratopaz
2025-08-19T21:59:04Z
0
0
null
[ "region:us" ]
null
2025-08-19T21:59:02Z
[View on Civ Archive](https://civarchive.com/models/49489?modelVersionId=54066)
ultratopaz/72344
ultratopaz
2025-08-19T21:58:57Z
0
0
null
[ "region:us" ]
null
2025-08-19T21:58:55Z
[View on Civ Archive](https://civarchive.com/models/48727?modelVersionId=103126)
crystalline7/80244
crystalline7
2025-08-19T21:58:51Z
0
0
null
[ "region:us" ]
null
2025-08-19T21:58:49Z
[View on Civ Archive](https://civarchive.com/models/105393?modelVersionId=113058)
calegpedia/blockassist-bc-stealthy_slimy_rooster_1755639107
calegpedia
2025-08-19T21:58:24Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "stealthy slimy rooster", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T21:58:21Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - stealthy slimy rooster --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
ultratopaz/101401
ultratopaz
2025-08-19T21:58:22Z
0
0
null
[ "region:us" ]
null
2025-08-19T21:58:06Z
[View on Civ Archive](https://civarchive.com/models/126515?modelVersionId=138343)
Muapi/flux-steampunk-magic
Muapi
2025-08-19T21:58:18Z
0
0
null
[ "lora", "stable-diffusion", "flux.1-d", "license:openrail++", "region:us" ]
null
2025-08-19T21:58:07Z
--- license: openrail++ tags: - lora - stable-diffusion - flux.1-d model_type: LoRA --- # FLUX Steampunk Magic ![preview](./preview.jpg) **Base model**: Flux.1 D **Trained words**: bo-steampunk, steampunk style ## 🧠 Usage (Python) 🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys) ```python import requests, os url = "https://api.muapi.ai/api/v1/flux_dev_lora_image" headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")} payload = { "prompt": "masterpiece, best quality, 1girl, looking at viewer", "model_id": [{"model": "civitai:734196@821032", "weight": 1.0}], "width": 1024, "height": 1024, "num_images": 1 } print(requests.post(url, headers=headers, json=payload).json()) ```
ultratopaz/16126
ultratopaz
2025-08-19T21:57:48Z
0
0
null
[ "region:us" ]
null
2025-08-19T21:57:44Z
[View on Civ Archive](https://civarchive.com/models/16339?modelVersionId=19292)
seraphimzzzz/63682
seraphimzzzz
2025-08-19T21:57:09Z
0
0
null
[ "region:us" ]
null
2025-08-19T21:57:06Z
[View on Civ Archive](https://civarchive.com/models/72365?modelVersionId=92350)
crystalline7/62678
crystalline7
2025-08-19T21:56:20Z
0
0
null
[ "region:us" ]
null
2025-08-19T21:56:20Z
[View on Civ Archive](https://civarchive.com/models/78685?modelVersionId=91060)
ultratopaz/72224
ultratopaz
2025-08-19T21:55:55Z
0
0
null
[ "region:us" ]
null
2025-08-19T21:55:52Z
[View on Civ Archive](https://civarchive.com/models/96401?modelVersionId=102969)
crystalline7/65162
crystalline7
2025-08-19T21:55:39Z
0
0
null
[ "region:us" ]
null
2025-08-19T21:55:37Z
[View on Civ Archive](https://civarchive.com/models/88509?modelVersionId=94178)
Muapi/john-everett-millais-style
Muapi
2025-08-19T21:55:35Z
0
0
null
[ "lora", "stable-diffusion", "flux.1-d", "license:openrail++", "region:us" ]
null
2025-08-19T21:55:21Z
--- license: openrail++ tags: - lora - stable-diffusion - flux.1-d model_type: LoRA --- # John Everett Millais Style ![preview](./preview.jpg) **Base model**: Flux.1 D **Trained words**: John Everett Millais Style ## 🧠 Usage (Python) 🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys) ```python import requests, os url = "https://api.muapi.ai/api/v1/flux_dev_lora_image" headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")} payload = { "prompt": "masterpiece, best quality, 1girl, looking at viewer", "model_id": [{"model": "civitai:101247@1577804", "weight": 1.0}], "width": 1024, "height": 1024, "num_images": 1 } print(requests.post(url, headers=headers, json=payload).json()) ```
ultratopaz/28059
ultratopaz
2025-08-19T21:54:53Z
0
0
null
[ "region:us" ]
null
2025-08-19T21:54:50Z
[View on Civ Archive](https://civarchive.com/models/28417?modelVersionId=34091)
ultratopaz/79640
ultratopaz
2025-08-19T21:54:46Z
0
0
null
[ "region:us" ]
null
2025-08-19T21:54:44Z
[View on Civ Archive](https://civarchive.com/models/104784?modelVersionId=112352)
crystalline7/91801
crystalline7
2025-08-19T21:54:40Z
0
0
null
[ "region:us" ]
null
2025-08-19T21:54:37Z
[View on Civ Archive](https://civarchive.com/models/117216?modelVersionId=126979)
seraphimzzzz/33020
seraphimzzzz
2025-08-19T21:54:06Z
0
0
null
[ "region:us" ]
null
2025-08-19T21:54:02Z
[View on Civ Archive](https://civarchive.com/models/37392?modelVersionId=43399)
Kurosawama/Llama-3.1-8B-Instruct-Full-align
Kurosawama
2025-08-19T21:53:40Z
0
0
transformers
[ "transformers", "safetensors", "trl", "dpo", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-08-19T21:53:30Z
--- library_name: transformers tags: - trl - dpo --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. 
More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
katanyasekolah/blockassist-bc-silky_sprightly_cassowary_1755638698
katanyasekolah
2025-08-19T21:53:25Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "silky sprightly cassowary", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T21:53:22Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - silky sprightly cassowary --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
koloni/blockassist-bc-deadly_graceful_stingray_1755638796
koloni
2025-08-19T21:53:23Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "deadly graceful stingray", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T21:53:20Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - deadly graceful stingray --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
seraphimzzzz/40018
seraphimzzzz
2025-08-19T21:53:08Z
0
0
null
[ "region:us" ]
null
2025-08-19T21:53:04Z
[View on Civ Archive](https://civarchive.com/models/51233?modelVersionId=55724)
indrarg/blockassist-bc-pensive_zealous_hyena_1755631470
indrarg
2025-08-19T21:52:49Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "pensive zealous hyena", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T20:06:38Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - pensive zealous hyena --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
roeker/blockassist-bc-quick_wiry_owl_1755640285
roeker
2025-08-19T21:52:48Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "quick wiry owl", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T21:52:14Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - quick wiry owl --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
crystalline7/77331
crystalline7
2025-08-19T21:51:45Z
0
0
null
[ "region:us" ]
null
2025-08-19T21:51:42Z
[View on Civ Archive](https://civarchive.com/models/102367?modelVersionId=109530)
ultratopaz/755266
ultratopaz
2025-08-19T21:51:19Z
0
0
null
[ "region:us" ]
null
2025-08-19T21:51:13Z
[View on Civ Archive](https://civarchive.com/models/749996?modelVersionId=838704)
seraphimzzzz/99900
seraphimzzzz
2025-08-19T21:51:00Z
0
0
null
[ "region:us" ]
null
2025-08-19T21:50:42Z
[View on Civ Archive](https://civarchive.com/models/125138?modelVersionId=136684)
Muapi/polaroid-669-ultrareal
Muapi
2025-08-19T21:49:34Z
0
0
null
[ "lora", "stable-diffusion", "flux.1-d", "license:openrail++", "region:us" ]
null
2025-08-19T21:49:17Z
--- license: openrail++ tags: - lora - stable-diffusion - flux.1-d model_type: LoRA --- # Polaroid 669 UltraReal ![preview](./preview.jpg) **Base model**: Flux.1 D **Trained words**: p0l2rd, prominent film grain, overexposed and blurry photo, polaroid-style format with white border, distinctive burnt edge, photograph appears aged or partially developed, with the almost half right side fading into white ## 🧠 Usage (Python) 🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys) ```python import requests, os url = "https://api.muapi.ai/api/v1/flux_dev_lora_image" headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")} payload = { "prompt": "masterpiece, best quality, 1girl, looking at viewer", "model_id": [{"model": "civitai:1378102@1557091", "weight": 1.0}], "width": 1024, "height": 1024, "num_images": 1 } print(requests.post(url, headers=headers, json=payload).json()) ```
crystalline7/1058904
crystalline7
2025-08-19T21:49:29Z
0
0
null
[ "region:us" ]
null
2025-08-19T21:49:26Z
[View on Civ Archive](https://civarchive.com/models/236627?modelVersionId=1153869)
ultratopaz/53664
ultratopaz
2025-08-19T21:47:58Z
0
0
null
[ "region:us" ]
null
2025-08-19T21:47:56Z
[View on Civ Archive](https://civarchive.com/models/73244?modelVersionId=77959)
ultratopaz/70921
ultratopaz
2025-08-19T21:47:52Z
0
0
null
[ "region:us" ]
null
2025-08-19T21:47:50Z
[View on Civ Archive](https://civarchive.com/models/95052?modelVersionId=101410)
seraphimzzzz/31317
seraphimzzzz
2025-08-19T21:47:07Z
0
0
null
[ "region:us" ]
null
2025-08-19T21:47:03Z
[View on Civ Archive](https://civarchive.com/models/34123?modelVersionId=40403)
ultratopaz/70504
ultratopaz
2025-08-19T21:46:25Z
0
0
null
[ "region:us" ]
null
2025-08-19T21:46:20Z
[View on Civ Archive](https://civarchive.com/models/94583?modelVersionId=100890)
seraphimzzzz/23767
seraphimzzzz
2025-08-19T21:46:16Z
0
0
null
[ "region:us" ]
null
2025-08-19T21:46:14Z
[View on Civ Archive](https://civarchive.com/models/24076?modelVersionId=28762)
chainway9/blockassist-bc-untamed_quick_eel_1755638280
chainway9
2025-08-19T21:45:43Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "untamed quick eel", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T21:45:40Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - untamed quick eel --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
crystalline7/15003
crystalline7
2025-08-19T21:45:31Z
0
0
null
[ "region:us" ]
null
2025-08-19T21:45:28Z
[View on Civ Archive](https://civarchive.com/models/11722?modelVersionId=17930)
ultratopaz/12175
ultratopaz
2025-08-19T21:45:11Z
0
0
null
[ "region:us" ]
null
2025-08-19T21:45:07Z
[View on Civ Archive](https://civarchive.com/models/11722?modelVersionId=13849)
seraphimzzzz/90663
seraphimzzzz
2025-08-19T21:45:01Z
0
0
null
[ "region:us" ]
null
2025-08-19T21:44:58Z
[View on Civ Archive](https://civarchive.com/models/115961?modelVersionId=125525)
seraphimzzzz/79861
seraphimzzzz
2025-08-19T21:44:34Z
0
0
null
[ "region:us" ]
null
2025-08-19T21:44:30Z
[View on Civ Archive](https://civarchive.com/models/105005?modelVersionId=112610)
seraphimzzzz/41023
seraphimzzzz
2025-08-19T21:44:25Z
0
0
null
[ "region:us" ]
null
2025-08-19T21:44:22Z
[View on Civ Archive](https://civarchive.com/models/53162?modelVersionId=57540)
crystalline7/821073
crystalline7
2025-08-19T21:43:06Z
0
0
null
[ "region:us" ]
null
2025-08-19T21:43:03Z
[View on Civ Archive](https://civarchive.com/models/45489?modelVersionId=913450)
ultratopaz/47081
ultratopaz
2025-08-19T21:42:53Z
0
0
null
[ "region:us" ]
null
2025-08-19T21:42:50Z
[View on Civ Archive](https://civarchive.com/models/45489?modelVersionId=50110)
seraphimzzzz/526763
seraphimzzzz
2025-08-19T21:42:22Z
0
0
null
[ "region:us" ]
null
2025-08-19T21:42:12Z
[View on Civ Archive](https://civarchive.com/models/115364?modelVersionId=611730)
crystalline7/463877
crystalline7
2025-08-19T21:41:42Z
0
0
null
[ "region:us" ]
null
2025-08-19T21:41:39Z
[View on Civ Archive](https://civarchive.com/models/354504?modelVersionId=547687)
seraphimzzzz/23289
seraphimzzzz
2025-08-19T21:40:40Z
0
0
null
[ "region:us" ]
null
2025-08-19T21:40:36Z
[View on Civ Archive](https://civarchive.com/models/23586?modelVersionId=28177)
seraphimzzzz/22590
seraphimzzzz
2025-08-19T21:39:04Z
0
0
null
[ "region:us" ]
null
2025-08-19T21:39:00Z
[View on Civ Archive](https://civarchive.com/models/22866?modelVersionId=27302)
lilTAT/blockassist-bc-gentle_rugged_hare_1755639504
lilTAT
2025-08-19T21:38:51Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "gentle rugged hare", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T21:38:48Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - gentle rugged hare --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
crystalline7/38758
crystalline7
2025-08-19T21:38:48Z
0
0
null
[ "region:us" ]
null
2025-08-19T21:38:44Z
[View on Civ Archive](https://civarchive.com/models/45384?modelVersionId=53360)
seraphimzzzz/48903
seraphimzzzz
2025-08-19T21:37:08Z
0
0
null
[ "region:us" ]
null
2025-08-19T21:37:04Z
[View on Civ Archive](https://civarchive.com/models/46653?modelVersionId=70091)
crystalline7/91286
crystalline7
2025-08-19T21:35:28Z
0
0
null
[ "region:us" ]
null
2025-08-19T21:35:25Z
[View on Civ Archive](https://civarchive.com/models/116639?modelVersionId=126319)
crystalline7/46649
crystalline7
2025-08-19T21:33:48Z
0
0
null
[ "region:us" ]
null
2025-08-19T21:33:45Z
[View on Civ Archive](https://civarchive.com/models/61757?modelVersionId=66271)
crystalline7/33161
crystalline7
2025-08-19T21:33:40Z
0
0
null
[ "region:us" ]
null
2025-08-19T21:33:37Z
[View on Civ Archive](https://civarchive.com/models/8268?modelVersionId=43717)
seraphimzzzz/61469
seraphimzzzz
2025-08-19T21:33:25Z
0
0
null
[ "region:us" ]
null
2025-08-19T21:33:22Z
[View on Civ Archive](https://civarchive.com/models/84137?modelVersionId=89442)
seraphimzzzz/98719
seraphimzzzz
2025-08-19T21:33:18Z
0
0
null
[ "region:us" ]
null
2025-08-19T21:33:16Z
[View on Civ Archive](https://civarchive.com/models/123927?modelVersionId=135201)
crystalline7/218171
crystalline7
2025-08-19T21:33:10Z
0
0
null
[ "region:us" ]
null
2025-08-19T21:33:06Z
[View on Civ Archive](https://civarchive.com/models/247011?modelVersionId=278681)
Siddharth63/Qwe4B-Patent-Concepts
Siddharth63
2025-08-19T21:32:48Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-08-19T21:32:46Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
MattBou00/llama-3-2-1b-detox_v1b-checkpoint-epoch-40
MattBou00
2025-08-19T21:32:25Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "trl", "ppo", "reinforcement-learning", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
reinforcement-learning
2025-08-19T21:31:04Z
--- license: apache-2.0 library_name: transformers tags: - trl - ppo - transformers - reinforcement-learning --- # TRL Model This is a [TRL language model](https://github.com/huggingface/trl) that has been fine-tuned with reinforcement learning to guide the model outputs according to a value function or human feedback. The model can be used for text generation. ## Usage To use this model for inference, first install the TRL library: ```bash python -m pip install trl ``` You can then generate text as follows: ```python from transformers import pipeline generator = pipeline("text-generation", model="MattBou00//content/IRL-Bayesian/outputs/2025-08-19_20-30-04/checkpoints/checkpoint-epoch-40") outputs = generator("Hello, my llama is cute") ``` If you want to use the model for training or to obtain the outputs from the value head, load the model as follows: ```python from transformers import AutoTokenizer from trl import AutoModelForCausalLMWithValueHead tokenizer = AutoTokenizer.from_pretrained("MattBou00//content/IRL-Bayesian/outputs/2025-08-19_20-30-04/checkpoints/checkpoint-epoch-40") model = AutoModelForCausalLMWithValueHead.from_pretrained("MattBou00//content/IRL-Bayesian/outputs/2025-08-19_20-30-04/checkpoints/checkpoint-epoch-40") inputs = tokenizer("Hello, my llama is cute", return_tensors="pt") outputs = model(**inputs, labels=inputs["input_ids"]) ```
crystalline7/639297
crystalline7
2025-08-19T21:30:50Z
0
0
null
[ "region:us" ]
null
2025-08-19T21:30:45Z
[View on Civ Archive](https://civarchive.com/models/214721?modelVersionId=724668)
crystalline7/873207
crystalline7
2025-08-19T21:30:22Z
0
0
null
[ "region:us" ]
null
2025-08-19T21:30:20Z
[View on Civ Archive](https://civarchive.com/models/859919?modelVersionId=966723)
crystalline7/87776
crystalline7
2025-08-19T21:29:49Z
0
0
null
[ "region:us" ]
null
2025-08-19T21:29:45Z
[View on Civ Archive](https://civarchive.com/models/112981?modelVersionId=122015)
seraphimzzzz/342314
seraphimzzzz
2025-08-19T21:29:38Z
0
0
null
[ "region:us" ]
null
2025-08-19T21:29:33Z
[View on Civ Archive](https://civarchive.com/models/376650?modelVersionId=420607)
seraphimzzzz/98209
seraphimzzzz
2025-08-19T21:28:52Z
0
0
null
[ "region:us" ]
null
2025-08-19T21:28:34Z
[View on Civ Archive](https://civarchive.com/models/123448?modelVersionId=134595)
mradermacher/Genuine-7B-Instruct-GGUF
mradermacher
2025-08-19T21:27:21Z
0
0
transformers
[ "transformers", "gguf", "lora", "sft", "trl", "unsloth", "fine-tuned", "en", "dataset:theprint/Gentle-Pushback-8.5k-alpaca", "base_model:theprint/Genuine-7B-Instruct", "base_model:adapter:theprint/Genuine-7B-Instruct", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-08-19T15:33:34Z
--- base_model: theprint/Genuine-7B-Instruct datasets: - theprint/Gentle-Pushback-8.5k-alpaca language: en library_name: transformers license: apache-2.0 mradermacher: readme_rev: 1 quantized_by: mradermacher tags: - lora - sft - transformers - trl - unsloth - fine-tuned --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> <!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: --> static quants of https://huggingface.co/theprint/Genuine-7B-Instruct <!-- provided-files --> ***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Genuine-7B-Instruct-GGUF).*** weighted/imatrix quants are available at https://huggingface.co/mradermacher/Genuine-7B-Instruct-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. 
IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Genuine-7B-Instruct-GGUF/resolve/main/Genuine-7B-Instruct.Q2_K.gguf) | Q2_K | 3.1 | | | [GGUF](https://huggingface.co/mradermacher/Genuine-7B-Instruct-GGUF/resolve/main/Genuine-7B-Instruct.Q3_K_S.gguf) | Q3_K_S | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/Genuine-7B-Instruct-GGUF/resolve/main/Genuine-7B-Instruct.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Genuine-7B-Instruct-GGUF/resolve/main/Genuine-7B-Instruct.Q3_K_L.gguf) | Q3_K_L | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/Genuine-7B-Instruct-GGUF/resolve/main/Genuine-7B-Instruct.IQ4_XS.gguf) | IQ4_XS | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/Genuine-7B-Instruct-GGUF/resolve/main/Genuine-7B-Instruct.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Genuine-7B-Instruct-GGUF/resolve/main/Genuine-7B-Instruct.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Genuine-7B-Instruct-GGUF/resolve/main/Genuine-7B-Instruct.Q5_K_S.gguf) | Q5_K_S | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/Genuine-7B-Instruct-GGUF/resolve/main/Genuine-7B-Instruct.Q5_K_M.gguf) | Q5_K_M | 5.5 | | | [GGUF](https://huggingface.co/mradermacher/Genuine-7B-Instruct-GGUF/resolve/main/Genuine-7B-Instruct.Q6_K.gguf) | Q6_K | 6.4 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Genuine-7B-Instruct-GGUF/resolve/main/Genuine-7B-Instruct.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Genuine-7B-Instruct-GGUF/resolve/main/Genuine-7B-Instruct.f16.gguf) | f16 | 15.3 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): 
![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
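The size column in the quant table above roughly tracks bits per weight. A minimal sanity-check sketch, under two assumptions that are *not* stated in the card: sizes are decimal GB, and the f16 file (2 bytes per weight) pins the parameter count at ~7.65B.

```python
# Rough bits-per-weight (bpw) implied by the quant table's file sizes.
# Assumptions (not from the card): decimal GB, and the f16 size of 15.3 GB
# at 2 bytes/weight implies ~7.65B parameters.
params = 15.3e9 / 2  # bytes in the f16 file / 2 bytes per weight

for name, size_gb in [("Q2_K", 3.1), ("Q4_K_S", 4.6), ("Q8_0", 8.2)]:
    bpw = size_gb * 8e9 / params  # bits in the file / number of weights
    print(f"{name}: ~{bpw:.2f} bpw")
```

The K-quants come out slightly above their nominal bit width (e.g. Q4_K_S at ~4.8 bpw), which is expected: block scales and other metadata add overhead on top of the quantized weights.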
zhuojing-huang/gpt2-german-english-ewc-2
zhuojing-huang
2025-08-19T21:26:23Z
0
0
transformers
[ "transformers", "safetensors", "gpt2", "text-generation", "generated_from_trainer", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-19T08:43:29Z
--- library_name: transformers tags: - generated_from_trainer model-index: - name: gpt2-german-english-ewc-2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gpt2-german-english-ewc-2 This model was trained from scratch on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 64 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 30 - training_steps: 61035 ### Training results ### Framework versions - Transformers 4.53.1 - Pytorch 2.7.1+cu126 - Datasets 3.6.0 - Tokenizers 0.21.2
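The hyperparameters above combine a linear LR scheduler with 30 warmup steps over 61,035 training steps. A small sketch of that schedule's math (mirroring the shape of `transformers`' `get_linear_schedule_with_warmup`, using the card's values; the function name here is illustrative):

```python
# Linear schedule with warmup, using the hyperparameters listed in the card:
# base LR 5e-4, 30 warmup steps, 61035 total steps.
def lr_at(step, base_lr=5e-4, warmup=30, total=61035):
    if step < warmup:
        return base_lr * step / warmup  # linear ramp from 0 to base_lr
    return base_lr * max(0.0, (total - step) / (total - warmup))  # linear decay to 0

print(lr_at(15))     # mid-warmup: half of the base LR
print(lr_at(61035))  # final step: decayed to 0.0
```

With only 30 warmup steps out of ~61k, warmup is effectively instantaneous; nearly the entire run is spent on the linear decay.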
ultratopaz/64088
ultratopaz
2025-08-19T21:23:25Z
0
0
null
[ "region:us" ]
null
2025-08-19T21:23:23Z
[View on Civ Archive](https://civarchive.com/models/87265?modelVersionId=92862)
crystalline7/661687
crystalline7
2025-08-19T21:22:47Z
0
0
null
[ "region:us" ]
null
2025-08-19T21:22:44Z
[View on Civ Archive](https://civarchive.com/models/638678?modelVersionId=748190)
seraphimzzzz/647728
seraphimzzzz
2025-08-19T21:22:16Z
0
0
null
[ "region:us" ]
null
2025-08-19T21:22:14Z
[View on Civ Archive](https://civarchive.com/models/655756?modelVersionId=733657)
ultratopaz/13760
ultratopaz
2025-08-19T21:21:41Z
0
0
null
[ "region:us" ]
null
2025-08-19T21:21:37Z
[View on Civ Archive](https://civarchive.com/models/13780?modelVersionId=16225)
seraphimzzzz/88236
seraphimzzzz
2025-08-19T21:20:10Z
0
0
null
[ "region:us" ]
null
2025-08-19T21:20:05Z
[View on Civ Archive](https://civarchive.com/models/101242?modelVersionId=122548)
AnonymousCS/xlmr_immigration_combo3_4
AnonymousCS
2025-08-19T21:19:38Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "xlm-roberta", "text-classification", "generated_from_trainer", "base_model:FacebookAI/xlm-roberta-large", "base_model:finetune:FacebookAI/xlm-roberta-large", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-08-19T21:16:46Z
--- library_name: transformers license: mit base_model: FacebookAI/xlm-roberta-large tags: - generated_from_trainer metrics: - accuracy model-index: - name: xlmr_immigration_combo3_4 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlmr_immigration_combo3_4 This model is a fine-tuned version of [FacebookAI/xlm-roberta-large](https://huggingface.co/FacebookAI/xlm-roberta-large) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1420 - Accuracy: 0.9640 - 1-f1: 0.9451 - 1-recall: 0.9305 - 1-precision: 0.9602 - Balanced Acc: 0.9556 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 15 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | 1-f1 | 1-recall | 1-precision | Balanced Acc | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:--------:|:-----------:|:------------:| | 0.117 | 1.0 | 25 | 0.1348 | 0.9550 | 0.9320 | 0.9266 | 0.9375 | 0.9479 | | 0.1611 | 2.0 | 50 | 0.1374 | 0.9563 | 0.9346 | 0.9382 | 0.9310 | 0.9518 | | 0.0447 | 3.0 | 75 | 0.1420 | 0.9640 | 0.9451 | 0.9305 | 0.9602 | 0.9556 | ### Framework versions - Transformers 4.56.0.dev0 - Pytorch 2.8.0+cu126 - Datasets 4.0.0 - Tokenizers 0.21.4
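The reported `1-f1` above is the harmonic mean of the reported `1-precision` and `1-recall`; a quick consistency check against the card's numbers:

```python
# F1 is the harmonic mean of precision and recall; verify the card's
# final-epoch metrics are internally consistent.
precision, recall = 0.9602, 0.9305
f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 4))  # → 0.9451, matching the reported 1-f1
```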
crystalline7/66090
crystalline7
2025-08-19T21:19:21Z
0
0
null
[ "region:us" ]
null
2025-08-19T21:19:17Z
[View on Civ Archive](https://civarchive.com/models/89580?modelVersionId=95363)