| Column | Type | Range / Distinct values |
|:--|:--|:--|
| modelId | string | lengths 5 – 139 |
| author | string | lengths 2 – 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 – 2025-09-13 12:31:59 |
| downloads | int64 | 0 – 223M |
| likes | int64 | 0 – 11.7k |
| library_name | string (categorical) | 556 distinct values |
| tags | list | lengths 1 – 4.05k |
| pipeline_tag | string (categorical) | 55 distinct values |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 – 2025-09-13 12:26:40 |
| card | string | lengths 11 – 1.01M |
te4bag/GRIT-llama-3.2-3B-alpaca-0.99
te4bag
2025-08-19T16:15:45Z
0
0
peft
[ "peft", "safetensors", "base_model:adapter:meta-llama/Llama-3.2-3B", "lora", "transformers", "text-generation", "arxiv:1910.09700", "base_model:meta-llama/Llama-3.2-3B", "region:us" ]
text-generation
2025-08-19T16:15:01Z
--- base_model: meta-llama/Llama-3.2-3B library_name: peft pipeline_tag: text-generation tags: - base_model:adapter:meta-llama/Llama-3.2-3B - lora - transformers --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.17.0
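The card's quick-start section is a placeholder; below is a minimal sketch (not from the card) for attaching this LoRA adapter to the base model declared in its metadata, `meta-llama/Llama-3.2-3B`, using standard PEFT loading. The prompt is illustrative.

```python
# Hedged sketch, not from the original card: load the declared base model
# and attach the LoRA adapter with PEFT.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-3.2-3B", torch_dtype=torch.bfloat16, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.2-3B")
model = PeftModel.from_pretrained(base, "te4bag/GRIT-llama-3.2-3B-alpaca-0.99")

inputs = tokenizer("Write a haiku about the sea.", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```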
rambetiko/blockassist-bc-soft_lanky_marmot_1755619656
rambetiko
2025-08-19T16:14:01Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "soft lanky marmot", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T16:13:41Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - soft lanky marmot --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
quantumxnode/blockassist-bc-dormant_peckish_seahorse_1755618350
quantumxnode
2025-08-19T16:13:48Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "dormant peckish seahorse", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T16:13:45Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - dormant peckish seahorse --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Sayemahsjn/blockassist-bc-playful_feline_octopus_1755618633
Sayemahsjn
2025-08-19T16:09:57Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "playful feline octopus", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T16:09:52Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - playful feline octopus --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
mradermacher/sailor2-sft-GGUF
mradermacher
2025-08-19T16:04:02Z
0
0
transformers
[ "transformers", "gguf", "en", "base_model:hai2131/sailor2-sft", "base_model:quantized:hai2131/sailor2-sft", "endpoints_compatible", "region:us", "conversational" ]
null
2025-08-19T15:55:44Z
--- base_model: hai2131/sailor2-sft language: - en library_name: transformers mradermacher: readme_rev: 1 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> <!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: --> static quants of https://huggingface.co/hai2131/sailor2-sft <!-- provided-files --> ***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#sailor2-sft-GGUF).*** weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/sailor2-sft-GGUF/resolve/main/sailor2-sft.Q3_K_S.gguf) | Q3_K_S | 0.7 | | | [GGUF](https://huggingface.co/mradermacher/sailor2-sft-GGUF/resolve/main/sailor2-sft.Q2_K.gguf) | Q2_K | 0.7 | | | [GGUF](https://huggingface.co/mradermacher/sailor2-sft-GGUF/resolve/main/sailor2-sft.IQ4_XS.gguf) | IQ4_XS | 0.7 | | | [GGUF](https://huggingface.co/mradermacher/sailor2-sft-GGUF/resolve/main/sailor2-sft.Q3_K_M.gguf) | Q3_K_M | 0.7 | lower quality | | [GGUF](https://huggingface.co/mradermacher/sailor2-sft-GGUF/resolve/main/sailor2-sft.Q3_K_L.gguf) | Q3_K_L | 0.8 | | | [GGUF](https://huggingface.co/mradermacher/sailor2-sft-GGUF/resolve/main/sailor2-sft.Q4_K_S.gguf) | Q4_K_S | 0.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/sailor2-sft-GGUF/resolve/main/sailor2-sft.Q4_K_M.gguf) | Q4_K_M | 0.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/sailor2-sft-GGUF/resolve/main/sailor2-sft.Q5_K_S.gguf) | Q5_K_S | 0.9 | | | [GGUF](https://huggingface.co/mradermacher/sailor2-sft-GGUF/resolve/main/sailor2-sft.Q5_K_M.gguf) | Q5_K_M | 0.9 | | | [GGUF](https://huggingface.co/mradermacher/sailor2-sft-GGUF/resolve/main/sailor2-sft.Q6_K.gguf) | Q6_K | 1.1 | very good quality | | [GGUF](https://huggingface.co/mradermacher/sailor2-sft-GGUF/resolve/main/sailor2-sft.Q8_0.gguf) | Q8_0 | 1.2 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/sailor2-sft-GGUF/resolve/main/sailor2-sft.f16.gguf) | f16 | 2.1 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
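As a concrete complement to the usage note in the card above, here is a minimal sketch of running one of the listed quants locally with llama-cpp-python. The file name is taken from the Q4_K_M row of the quant table and is assumed to be downloaded already; the prompt is illustrative.

```python
# Hedged sketch: run a downloaded GGUF quant with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(model_path="sailor2-sft.Q4_K_M.gguf", n_ctx=2048)
result = llm("Hello! Please introduce yourself.", max_tokens=64)
print(result["choices"][0]["text"])
```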
Elizavr/blockassist-bc-reclusive_shaggy_bee_1755619213
Elizavr
2025-08-19T16:00:53Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "reclusive shaggy bee", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T16:00:39Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - reclusive shaggy bee --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
rambetiko/blockassist-bc-soft_lanky_marmot_1755618848
rambetiko
2025-08-19T16:00:14Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "soft lanky marmot", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T15:59:57Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - soft lanky marmot --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
varsunk/unsloth_training_checkpoints
varsunk
2025-08-19T15:59:14Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "sft", "unsloth", "trl", "base_model:unsloth/Qwen3-4B-Base", "base_model:finetune:unsloth/Qwen3-4B-Base", "endpoints_compatible", "region:us" ]
null
2025-08-18T20:11:18Z
--- base_model: unsloth/Qwen3-4B-Base library_name: transformers model_name: Qwen3-4B-PFT-Checkpoint tags: - generated_from_trainer - sft - unsloth - trl licence: license --- # Model Card for Qwen3-4B-PFT-Checkpoint This model is a fine-tuned version of [unsloth/Qwen3-4B-Base](https://huggingface.co/unsloth/Qwen3-4B-Base). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="varsunk/unsloth_training_checkpoints", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with SFT. ### Framework versions - TRL: 0.21.0 - Transformers: 4.55.2 - Pytorch: 2.6.0+cu124 - Datasets: 3.6.0 - Tokenizers: 0.21.4 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
lqpl/blockassist-bc-hairy_insectivorous_antelope_1755618913
lqpl
2025-08-19T15:57:46Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "hairy insectivorous antelope", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T15:56:03Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - hairy insectivorous antelope --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
shulin16/ea-dev-final
shulin16
2025-08-19T15:53:44Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "evaluation-agent", "cot-reasoning", "checkpoint", "qwen2.5", "video-assessment", "image-assessment", "conversational", "base_model:Qwen/Qwen2.5-3B-Instruct", "base_model:finetune:Qwen/Qwen2.5-3B-Instruct", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-19T09:18:53Z
--- license: apache-2.0 base_model: Qwen/Qwen2.5-3B-Instruct tags: - text-generation - evaluation-agent - cot-reasoning - checkpoint - qwen2.5 - video-assessment - image-assessment library_name: transformers pipeline_tag: text-generation --- # ea-dev-final This is checkpoint **final** (step 471) from fine-tuning [Qwen/Qwen2.5-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-3B-Instruct) for evaluation agent tasks. ## Checkpoint Details - **Checkpoint**: final - **Global Step**: 471 - **Epoch**: 3.00 - **Training Loss**: 0.8296 - **Learning Rate**: unknown - **Base Model**: Qwen2.5-3B-Instruct - **Task**: Multi-modal quality assessment with CoT reasoning ## Model Description This checkpoint is from training an evaluation agent that can assess: - **Video Quality**: Temporal consistency, motion smoothness, object consistency (VBench) - **Image Quality**: Aesthetic quality, semantic alignment, visual fidelity (T2I-CompBench) - **Open-ended Evaluation**: Custom quality assessment tasks The model uses Chain-of-Thought (CoT) reasoning to provide detailed explanations for its evaluations. ## Files Included This checkpoint contains: - **Model Weights**: `model*.safetensors` - The actual model parameters - **Tokenizer**: Complete tokenizer configuration and vocabulary - **Configuration**: Model and generation configuration files **Note**: This checkpoint contains only inference files (no optimizer states). ## Usage ### For Inference ```python from transformers import AutoModelForCausalLM, AutoTokenizer import torch # Load the checkpoint model = AutoModelForCausalLM.from_pretrained( "ea-dev-final", torch_dtype=torch.bfloat16, device_map="auto" ) tokenizer = AutoTokenizer.from_pretrained("ea-dev-final") # Example evaluation prompt prompt = """Please evaluate the quality of this video based on the following criteria: 1. Visual quality and clarity 2. Temporal consistency 3. Motion smoothness Video description: A person walking through a park with trees swaying in the wind. Let me think step by step:""" inputs = tokenizer(prompt, return_tensors="pt") with torch.no_grad(): outputs = model.generate( **inputs, max_length=512, do_sample=True, temperature=0.7, pad_token_id=tokenizer.eos_token_id ) response = tokenizer.decode(outputs[0], skip_special_tokens=True) print(response) ``` ### Resume Training (if optimizer states included) ```bash # Use with LLaMA-Factory llamafactory-cli train \ --stage sft \ --model_name_or_path ea-dev-final \ --resume_from_checkpoint ea-dev-final ``` ## Training Progress This checkpoint represents an intermediate state in the training process: - **Steps Completed**: 471 - **Epochs**: 3.00 - **Current Loss**: 0.8296 ## Related Models This checkpoint is part of a series. Other checkpoints from the same training run: - Look for repositories with pattern: `ea-dev-checkpoint-*` - Final model: `ea-dev-final` ## License This model checkpoint is released under the Apache 2.0 license. ## Citation If you use this checkpoint, please cite: ```bibtex @misc{eval-agent-qwen2.5-checkpoint-471, title={Evaluation Agent Qwen2.5 Checkpoint 471}, author={Your Name}, year={2025}, howpublished={\url{https://huggingface.co/ea-dev-final}} } ```
mang3dd/blockassist-bc-tangled_slithering_alligator_1755617041
mang3dd
2025-08-19T15:52:15Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "tangled slithering alligator", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T15:52:12Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - tangled slithering alligator --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
MidnightRunner/MIDNIGHT_NAI-XL_vPredV1
MidnightRunner
2025-08-19T15:50:23Z
406
2
diffusers
[ "diffusers", "SDXL", "noobai-XL", "Vpred-1.0", "text-to-image", "ComfyUI", "Automatic1111", "Diffuser", "en", "dataset:LaxharLab/NoobAI-XL-dataset", "base_model:Laxhar/noobai-XL-Vpred-1.0", "base_model:finetune:Laxhar/noobai-XL-Vpred-1.0", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2025-02-02T01:09:01Z
--- license: creativeml-openrail-m language: - en base_model: Laxhar/noobai-XL-Vpred-1.0 tags: - SDXL - noobai-XL - Vpred-1.0 - text-to-image - ComfyUI - Automatic1111 - Diffuser pipeline_tag: text-to-image library_name: diffusers datasets: - LaxharLab/NoobAI-XL-dataset metrics: - FID - IS widget: - text: >- high quality, masterpiece, detailed, 8K, artist:nyantcha, evangeline_(nyantcha), vibrant surreal artwork, rainbow, light particles, from above, volumetric lighting, ((adult girl:1.2)), natural huge breasts, woman dressed as white rabbit, sleek pure white outfit, delicate white bunny ears, braid, plump, skindentation, huge breasts, falling into swirling black hole, seen from behind, glancing over shoulder, alluring mysterious expression, dress, zipper, zipper pull, detached sleeves, breasts apart (shoulder straps), buckles, long dress, swirling cosmic patterns, glowing particles, dramatic lighting, vibrant neon pink and blue tones, hyper-detailed, cinematic depth of field, smooth texture, film grain, chromatic aberration, high contrast, limited palette parameters: negative_prompt: >- lowres, worst quality, low quality, bad anatomy, bad hands, 4koma, comic, greyscale, censored, jpeg artifacts, overly saturated, overly vivid, (multiple views:1.1), (bad:1.05), fewer, extra, missing, worst quality, jpeg artifacts, bad quality, watermark, unfinished, displeasing, sepia, sketch, flat color, signature, artistic error, username, scan, (blurry, lowres, worst quality, (low quality:1.1), ugly, (bad anatomy:1.05), artist name, (patreon username:1.2) output: url: stand_on_ripplewater.jpeg --- # MIDNIGHT_NAI-XL_vPredV1 **Model Type:** Diffusion-based text-to-image generative model **Base Model:** SDXL 1.0 & Laxhar/noobai-XL-Vpred-1.0 **License:** [CreativeML Open RAIL++-M](https://github.com/CompVis/stable-diffusion/blob/main/LICENSE) ## Model Description MIDNIGHT_NAI-XL_vPredV1 is a specialized fine-tuning of the NoobAI-XL (NAI-XL) model, designed to enhance anatomical precision, compositional coherence, and versatile style integration. This model excels in generating high-quality images with vibrant colors while minimizing overexposure. ## Usage Recommendations ### **Sampling Methods** MIDNIGHT_NAI-XL_vPred is optimized specifically for **Euler (normal)**. Use **ModelSamplingDiscrete** with **V-prediction** and **ZsNR set to true**. Other samplers may not provide stable results, and **V-prediction models do not support other samplers**. ### **CFG Scaling** **Dynamic CFG Plugin is bypassed as a backup for potential future needs.** Manually adjust **CFG scaling within a range of 3-4** for the best balance. For optimal results, a **preferred setting of 3.5** is recommended. ### **Custom Workflow** For an optimized generation process, use the [**MIDNIGHT1111_Chasm 2025-02-04**](https://huggingface.co/MidnightRunner/MIDNIGHT_NAI-XL_vPredV1/blob/main/MIDNIGHT1111_Chasm%202025-02-04.json) ComfyUI workflow. This workflow is specifically designed to **leverage the strengths of MIDNIGHT_NAI-XL_vPred**, providing a streamlined and efficient image generation pipeline. ## MIDNIGHT1111_Chasm For an optimized generation process, consider using the custom workflow [MIDNIGHT1111_Chasm 02-05-25](https://huggingface.co/MidnightRunner/MIDNIGHT_NAI-XL_vPredV1/blob/main/MIDNIGHT1111_Chasm%2002-05-25.json). This workflow is tailored to leverage the strengths of the MIDNIGHT_NAI-XL_vPredV1 model, providing a streamlined and efficient image generation pipeline. 
![MIDNIGHT1111_Chasm Workflow](https://huggingface.co/MidnightRunner/MIDNIGHT_NAI-XL_vPredV1/resolve/main/MIDNIGHT1111_Chasm%20Workflow.png) *Note: The above image is a preview of the `MIDNIGHT1111_Chasm` workflow.* ### Method I: reForge without MIDNIGHT1111_Chasm Workflow 1. **Installation:** If not already installed, follow the instructions in the [reForge repository](https://github.com/Panchovix/stable-diffusion-webui-reForge) to set up. 2. **Usage:** Launch WebUI and use the model as usual. ### Method II: ComfyUI *with* MIDNIGHT1111_Chasm Workflow 1. **Installation:** Follow the setup instructions in the [ComfyUI repository](https://github.com/comfyanonymous/ComfyUI). 2. **Workflow Sample:** Utilize the provided [ComfyUI workflow sample](https://huggingface.co/MidnightRunner/MIDNIGHT_NAI-XL_vPredV1/blob/main/MIDNIGHT1111_Chasm%2002-05-25.json) for guidance. ### Method III: WebUI without MIDNIGHT1111_Chasm Workflow 1. **Installation:** Follow the instructions in the [WebUI repository](https://github.com/AUTOMATIC1111/stable-diffusion-webui) to set up. 2. **Navigate to the WebUI Directory:** Before updating or switching branches, ensure you're inside the `stable-diffusion-webui` folder: ```bash cd stable-diffusion-webui ``` 3. **Switch to the Development Branch (optional, for testing new features):** If you want to use the latest features from the development branch, run: ```bash git switch dev git pull ``` ⚠️ **Note:** The `dev` branch may contain bugs. If stability is your priority, it's best to stay on the `main` branch. 4. **Update WebUI (Main or Dev Branch):** To pull the latest updates while on either branch, run: ```bash git pull ``` 🔄 **Restart WebUI after updating to apply changes.** 5. **Configuration:** Ensure you're using a stable branch, as the dev branch may contain bugs.
### Method IV: Diffusers without MIDNIGHT1111_Chasm Workflow ```python import torch from diffusers import StableDiffusionXLPipeline from diffusers import EulerDiscreteScheduler ckpt_path = "/path/to/model.safetensors" pipe = StableDiffusionXLPipeline.from_single_file( ckpt_path, use_safetensors=True, torch_dtype=torch.float16, ) scheduler_args = {"prediction_type": "v_prediction", "rescale_betas_zero_snr": True} pipe.scheduler = EulerDiscreteScheduler.from_config(pipe.scheduler.config, **scheduler_args) pipe.enable_xformers_memory_efficient_attention() pipe = pipe.to("cuda") prompt = """masterpiece, best quality,artist:john_kafka,artist:nixeu,artist:quasarcake, chromatic aberration, film grain, horror \(theme\), limited palette, x-shaped pupils, high contrast, color contrast, cold colors, arlecchino \(genshin impact\), black theme, gritty, graphite \(medium\)""" negative_prompt = "nsfw, worst quality, old, early, low quality, lowres, signature, username, logo, bad hands, mutated hands, mammal, anthro, furry, ambiguous form, feral, semi-anthro" image = pipe( prompt=prompt, negative_prompt=negative_prompt, width=832, height=1216, num_inference_steps=28, guidance_scale=5, generator=torch.Generator().manual_seed(42), ).images[0] image.save("output.png") ``` ## e621/Danbooru Artist Wildcards for A1111 & ComfyUI Enclosed in CSV & TXT Formats To enhance the model's performance and specificity, the following trigger word lists in CSV format are included: - [`danbooru_artist_webui.csv`](https://huggingface.co/MidnightRunner/MIDNIGHT_NAI-XL_vPredV1/blob/main/danbooru_artist_webui.csv) - [`danbooru_character_webui.csv`](https://huggingface.co/MidnightRunner/MIDNIGHT_NAI-XL_vPredV1/blob/main/danbooru_character_webui.csv) - [`e621_artist_webui.csv`](https://huggingface.co/MidnightRunner/MIDNIGHT_NAI-XL_vPredV1/blob/main/e621_artist_webui.csv) - [`e621_character_webui.csv`](https://huggingface.co/MidnightRunner/MIDNIGHT_NAI-XL_vPredV1/blob/main/e621_character_webui.csv) These lists provide recognized tags for various artists and characters, facilitating more accurate and tailored image generation. The wildcard file in 'TXT' format is included and designed for seamless integration with **AUTOMATIC1111** and **ComfyUI**, optimized for dynamic prompt generation using artist data from **e621** and **Danbooru**. - **TXT Format:** Sanitized artist tags by removing URLs and converted from `.csv` to `.txt` format for improved readability across different extensions. - **Dual Dataset Support:** Supports both e621 and Danbooru datasets to enhance art style diversity. - **Smooth Randomization:** Structured with trailing commas for seamless wildcard cycling during prompt generation. ## How to Use Wildcards ### For A1111 1. **Install:** [stable-diffusion-webui-wildcards](https://github.com/AUTOMATIC1111/stable-diffusion-webui-wildcards) 2. **Place the `.txt` file in:** ``` /A1111/extensions/stable-diffusion-webui-wildcards ``` 3. **Use in your prompt like this:** ``` __e621_artist_wildcard__, very awa, masterpiece, best quality, amazing quality ``` ``` __danbooru_character_wildcard__, very awa, masterpiece, best quality, amazing quality ``` ``` __e621_artist_wildcard__, __danbooru_character_wildcard__, very awa, masterpiece, best quality, amazing quality ``` ### For ComfyUI 1. **Install:** [ComfyUI-Impact-Pack](https://github.com/ltdrdata/ComfyUI-Impact-Pack) 2.
**Place the `.txt` file in:** ``` /ComfyUI/custom_nodes/ComfyUI-Impact-Pack/wildcards ``` or ``` /ComfyUI/custom_nodes/ComfyUI-Impact-Pack/custom_wildcards ``` 3. **Use the wildcard node to trigger dynamic randomization in your workflows.** ## What's Included in Wildcards TXT-formatted files containing clean, artist-focused wildcards ready for dynamic prompt workflows in A1111 and ComfyUI. - [danbooru_artist_wildcard.txt](https://huggingface.co/MidnightRunner/MIDNIGHT_NAI-XL_vPredV1/blob/main/danbooru_artist_wildcard.txt) - [danbooru_character_wildcard.txt](https://huggingface.co/MidnightRunner/MIDNIGHT_NAI-XL_vPredV1/blob/main/danbooru_character_wildcard.txt) - [e621_artist_wildcard.txt](https://huggingface.co/MidnightRunner/MIDNIGHT_NAI-XL_vPredV1/blob/main/e621_artist_wildcard.txt) - [e621_character_wildcard.txt](https://huggingface.co/MidnightRunner/MIDNIGHT_NAI-XL_vPredV1/blob/main/e621_character_wildcard.txt) ## Acknowledgments Special thanks to: - **Development Team:** Laxhar Lab - **Coding Contributions:** Euge - **e621/Danbooru Wildcards:** [ipsylon0000](https://civitai.com/user/ipsylon0000) - **Community Support:** Various contributors ## Additional Resources - **Guidebook for NoobAI XL:** [English Version](https://civitai.com/articles/8962) - **Recommended LoRa List for NoobAI XL:** [Resource Link](https://fcnk27d6mpa5.feishu.cn/wiki/IBVGwvVGViazLYkMgVEcvbklnge) - **Fixing Black Images in ComfyUI on macOS (M1/M2):** [Read the Article](https://civitai.com/articles/11106) - **Creative Solutions and Services:** [Magnabos.co](https://magnabos.co/) ## License This model is licensed under the [CreativeML Open RAIL++-M License](https://github.com/CompVis/stable-diffusion/blob/main/LICENSE). By using this model, you agree to the terms and conditions outlined in the license.
koloni/blockassist-bc-deadly_graceful_stingray_1755617027
koloni
2025-08-19T15:49:45Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "deadly graceful stingray", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T15:49:41Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - deadly graceful stingray --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
phospho-app/Deimos252-ACT_BBOX-Light_dataset_deimos-qugw6
phospho-app
2025-08-19T15:48:51Z
0
0
phosphobot
[ "phosphobot", "act", "robotics", "dataset:Deimos252/Light_dataset_deimos", "region:us" ]
robotics
2025-08-19T15:48:08Z
--- datasets: Deimos252/Light_dataset_deimos library_name: phosphobot pipeline_tag: robotics model_name: act tags: - phosphobot - act task_categories: - robotics --- # act Model - phospho Training Pipeline ## Error Traceback We faced an issue while training your model. ``` 1 validation error for EpisodesFeatures Invalid JSON: EOF while parsing a value at line 2 column 0 [type=json_invalid, input_value='\n', input_type=str] For further information visit https://errors.pydantic.dev/2.11/v/json_invalid ``` ## Training parameters: - **Dataset**: [Deimos252/Light_dataset_deimos](https://huggingface.co/datasets/Deimos252/Light_dataset_deimos) - **Wandb run URL**: None - **Epochs**: None - **Batch size**: 100 - **Training steps**: 10000 📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme) 🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
thailevann/track8_subtask2_v4
thailevann
2025-08-19T15:48:03Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "qwen3", "trl", "en", "base_model:unsloth/Qwen3-4B-unsloth-bnb-4bit", "base_model:finetune:unsloth/Qwen3-4B-unsloth-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-08-19T15:47:53Z
--- base_model: unsloth/Qwen3-4B-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - qwen3 - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** thailevann - **License:** apache-2.0 - **Finetuned from model:** unsloth/Qwen3-4B-unsloth-bnb-4bit This Qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
Elizavr/blockassist-bc-reclusive_shaggy_bee_1755618244
Elizavr
2025-08-19T15:44:47Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "reclusive shaggy bee", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T15:44:36Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - reclusive shaggy bee --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
aleebaster/blockassist-bc-sly_eager_boar_1755616783
aleebaster
2025-08-19T15:41:43Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "sly eager boar", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T15:41:35Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - sly eager boar --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
WenFengg/21_14l3_19__8
WenFengg
2025-08-19T15:37:51Z
0
0
null
[ "safetensors", "any-to-any", "omega", "omegalabs", "bittensor", "agi", "license:mit", "region:us" ]
any-to-any
2025-08-19T14:56:20Z
--- license: mit tags: - any-to-any - omega - omegalabs - bittensor - agi --- This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet. Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
mradermacher/Agentic-1.0-GGUF
mradermacher
2025-08-19T15:34:19Z
0
1
transformers
[ "transformers", "gguf", "text-generation-inference", "unsloth", "qwen3", "en", "base_model:beyoru/Agentic-1.0", "base_model:quantized:beyoru/Agentic-1.0", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-08-19T15:04:54Z
--- base_model: beyoru/Agentic-1.0 language: - en library_name: transformers license: apache-2.0 mradermacher: readme_rev: 1 quantized_by: mradermacher tags: - text-generation-inference - transformers - unsloth - qwen3 --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> <!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: --> static quants of https://huggingface.co/beyoru/Agentic-1.0 <!-- provided-files --> ***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Agentic-1.0-GGUF).*** weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Agentic-1.0-GGUF/resolve/main/Agentic-1.0.Q2_K.gguf) | Q2_K | 1.8 | | | [GGUF](https://huggingface.co/mradermacher/Agentic-1.0-GGUF/resolve/main/Agentic-1.0.Q3_K_S.gguf) | Q3_K_S | 2.0 | | | [GGUF](https://huggingface.co/mradermacher/Agentic-1.0-GGUF/resolve/main/Agentic-1.0.Q3_K_M.gguf) | Q3_K_M | 2.2 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Agentic-1.0-GGUF/resolve/main/Agentic-1.0.Q3_K_L.gguf) | Q3_K_L | 2.3 | | | [GGUF](https://huggingface.co/mradermacher/Agentic-1.0-GGUF/resolve/main/Agentic-1.0.IQ4_XS.gguf) | IQ4_XS | 2.4 | | | [GGUF](https://huggingface.co/mradermacher/Agentic-1.0-GGUF/resolve/main/Agentic-1.0.Q4_K_S.gguf) | Q4_K_S | 2.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Agentic-1.0-GGUF/resolve/main/Agentic-1.0.Q4_K_M.gguf) | Q4_K_M | 2.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Agentic-1.0-GGUF/resolve/main/Agentic-1.0.Q5_K_S.gguf) | Q5_K_S | 2.9 | | | [GGUF](https://huggingface.co/mradermacher/Agentic-1.0-GGUF/resolve/main/Agentic-1.0.Q5_K_M.gguf) | Q5_K_M | 3.0 | | | [GGUF](https://huggingface.co/mradermacher/Agentic-1.0-GGUF/resolve/main/Agentic-1.0.Q6_K.gguf) | Q6_K | 3.4 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Agentic-1.0-GGUF/resolve/main/Agentic-1.0.Q8_0.gguf) | Q8_0 | 4.4 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Agentic-1.0-GGUF/resolve/main/Agentic-1.0.f16.gguf) | f16 | 8.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
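To avoid repeating the runtime example given for the sailor2-sft quants above, here is the complementary step: a minimal sketch (assumed workflow, not part of the original card) of fetching a single quant file programmatically with huggingface_hub, using the Q4_K_M file name listed in the card's table.

```python
# Hedged sketch: download one quant file from the repo to the local cache.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/Agentic-1.0-GGUF",
    filename="Agentic-1.0.Q4_K_M.gguf",  # Q4_K_M entry from the table above
)
print(path)  # local path of the downloaded GGUF file
```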
AdoCleanCode/neox_capital_only_v2
AdoCleanCode
2025-08-19T15:25:09Z
0
0
transformers
[ "transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-08-19T10:13:21Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why.
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
jiangnanboy/intelligent_document_recognition
jiangnanboy
2025-08-19T15:24:21Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-07-22T12:59:17Z
--- license: apache-2.0 --- ## intelligent document recognition ### Introduction Intelligent Document Recognition is desktop software for OCR and table structure recognition. It runs entirely offline, with no internet connection required, ensuring data security. OCR results can be saved in txt and html formats; table structure recognition results can be saved in html and excel formats. The software is available in two versions, one in Chinese and the other in English. ### version 2.0 Integrates OCR with table recognition. ### version 2.1 Added features: 1. Screenshot 2. Images in the image list can be deleted ### version 2.2 1. Support clearing the image list 2. Fix screenshot bug 3. Support dragging images to the image list https://github.com/jiangnanboy/intelligent_document_recognition
vohuutridung/bartpho-word-vietnews-summarization
vohuutridung
2025-08-19T15:24:00Z
0
0
transformers
[ "transformers", "safetensors", "mbart", "text2text-generation", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-08-19T15:23:32Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why.
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
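The quick-start section of this card is also a placeholder. A minimal sketch follows, under the assumption (inferred from the repo name and its mbart/text2text-generation tags, not stated in the card) that the checkpoint is a Vietnamese news summarizer:

```python
# Hedged sketch, not from the card: summarization via the text2text pipeline.
from transformers import pipeline

summarizer = pipeline(
    "summarization", model="vohuutridung/bartpho-word-vietnews-summarization"
)
article = "..."  # a Vietnamese news article goes here
print(summarizer(article, max_length=128, min_length=32)[0]["summary_text"])
```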
hakimjustbao/blockassist-bc-raging_subtle_wasp_1755615291
hakimjustbao
2025-08-19T15:23:11Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "raging subtle wasp", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T15:23:08Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - raging subtle wasp --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
GaborMadarasz/AstroQA_mamba_epoch1_V6
GaborMadarasz
2025-08-19T15:22:16Z
0
0
transformers
[ "transformers", "safetensors", "mamba", "text-generation", "trl", "sft", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-19T15:22:02Z
--- library_name: transformers tags: - trl - sft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why.
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
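This card's quick-start is empty as well; a minimal sketch matching its declared text-generation pipeline and mamba architecture follows. The Q&A prompt format is an assumption based on the repo name, not documented behavior.

```python
# Hedged sketch, not from the card: plain causal generation with the
# mamba checkpoint via transformers' Auto classes.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "GaborMadarasz/AstroQA_mamba_epoch1_V6"
tok = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo)

inputs = tok("Question: What is a pulsar?\nAnswer:", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=64)
print(tok.decode(out[0], skip_special_tokens=True))
```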
concept-unlearning/Meta-Llama-3-8B_ft_lora_all_novels_v4_ft_npo_gdr_lora_positive_dataset_v1
concept-unlearning
2025-08-19T15:21:07Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-19T15:18:54Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why.
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Muapi/vintage-drawing-ce
Muapi
2025-08-19T15:18:13Z
0
0
null
[ "lora", "stable-diffusion", "flux.1-d", "license:openrail++", "region:us" ]
null
2025-08-19T15:18:02Z
--- license: openrail++ tags: - lora - stable-diffusion - flux.1-d model_type: LoRA --- # Vintage Drawing - CE ![preview](./preview.jpg) **Base model**: Flux.1 D **Trained words**: vntgdrwngCE style ## 🧠 Usage (Python) πŸ”‘ **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys) ```python import requests, os url = "https://api.muapi.ai/api/v1/flux_dev_lora_image" headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")} payload = { "prompt": "masterpiece, best quality, 1girl, looking at viewer", "model_id": [{"model": "civitai:660535@811004", "weight": 1.0}], "width": 1024, "height": 1024, "num_images": 1 } print(requests.post(url, headers=headers, json=payload).json()) ```
koloni/blockassist-bc-deadly_graceful_stingray_1755614936
koloni
2025-08-19T15:15:51Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "deadly graceful stingray", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T15:15:48Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - deadly graceful stingray --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Muapi/360-panorama-sd1.5-flux
Muapi
2025-08-19T15:15:35Z
0
0
null
[ "lora", "stable-diffusion", "flux.1-d", "license:openrail++", "region:us" ]
null
2025-08-19T15:15:24Z
--- license: openrail++ tags: - lora - stable-diffusion - flux.1-d model_type: LoRA --- # 360 panorama [SD1.5 / FLUX] ![preview](./preview.jpg) **Base model**: Flux.1 D **Trained words**: 360, panorama, spherical panorama ## 🧠 Usage (Python) πŸ”‘ **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys) ```python import requests, os url = "https://api.muapi.ai/api/v1/flux_dev_lora_image" headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")} payload = { "prompt": "masterpiece, best quality, 1girl, looking at viewer", "model_id": [{"model": "civitai:118398@756096", "weight": 1.0}], "width": 1024, "height": 1024, "num_images": 1 } print(requests.post(url, headers=headers, json=payload).json()) ```
kodetr/stunting-7B-Qwen
kodetr
2025-08-19T15:15:29Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "stunting", "kesehatan", "anak", "conversational", "id", "dataset:kodetr/penelitian-fundamental-stunting-qa", "base_model:Qwen/Qwen1.5-7B-Chat", "base_model:finetune:Qwen/Qwen1.5-7B-Chat", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-19T14:59:41Z
--- library_name: transformers tags: - stunting - kesehatan - anak license: apache-2.0 datasets: - kodetr/penelitian-fundamental-stunting-qa language: - id metrics: - rouge - bleu pipeline_tag: text-generation base_model: - Qwen/Qwen1.5-7B-Chat --- ### Model Description <!-- Provide a longer summary of what this model is. --> Stunting consultation (Q&A) for children - **Developed by:** Tanwir - **Language:** Indonesian ### Training ![image/png](https://cdn-uploads.huggingface.co/production/uploads/65d6d2f8b06abf924b24349d/ZmKG5B9AapbcvAzXdfkYZ.png) ### Use with transformers Make sure your transformers installation is up to date via `pip install --upgrade transformers`. ```python import torch from transformers import pipeline model_id = "kodetr/stunting-7B-Qwen" pipe = pipeline( "text-generation", model=model_id, torch_dtype=torch.bfloat16, device_map="auto", ) messages = [ {"role": "system", "content": "Jelaskan definisi 1000 hari pertama kehidupan."}, {"role": "user", "content": "Apa itu 1000 hari pertama kehidupan?"}, ] outputs = pipe( messages, max_new_tokens=256, ) print(outputs[0]["generated_text"][-1]) ```
mradermacher/cogito-v2-preview-llama-405B-GGUF
mradermacher
2025-08-19T15:14:16Z
0
0
transformers
[ "transformers", "en", "base_model:deepcogito/cogito-v2-preview-llama-405B", "base_model:finetune:deepcogito/cogito-v2-preview-llama-405B", "license:llama3.1", "endpoints_compatible", "region:us" ]
null
2025-08-02T00:32:16Z
--- base_model: deepcogito/cogito-v2-preview-llama-405B language: - en library_name: transformers license: llama3.1 mradermacher: readme_rev: 1 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> <!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: --> static quants of https://huggingface.co/deepcogito/cogito-v2-preview-llama-405B <!-- provided-files --> ***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#cogito-v2-preview-llama-405B-GGUF).*** weighted/imatrix quants are available at https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [PART 1](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q2_K.gguf.part1of4) [PART 2](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q2_K.gguf.part2of4) [PART 3](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q2_K.gguf.part3of4) [PART 4](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q2_K.gguf.part4of4) | Q2_K | 149.4 | | | [PART 1](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q3_K_S.gguf.part1of4) [PART 2](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q3_K_S.gguf.part2of4) [PART 3](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q3_K_S.gguf.part3of4) [PART 4](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q3_K_S.gguf.part4of4) | Q3_K_S | 175.3 | | | [PART 1](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q3_K_M.gguf.part1of4) [PART 2](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q3_K_M.gguf.part2of4) [PART 3](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q3_K_M.gguf.part3of4) [PART 4](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q3_K_M.gguf.part4of4) | Q3_K_M | 195.5 | lower quality | | [P1](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q3_K_L.gguf.part1of5) [P2](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q3_K_L.gguf.part2of5) [P3](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q3_K_L.gguf.part3of5) 
[P4](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q3_K_L.gguf.part4of5) [P5](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q3_K_L.gguf.part5of5) | Q3_K_L | 212.9 | | | [P1](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.IQ4_XS.gguf.part1of5) [P2](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.IQ4_XS.gguf.part2of5) [P3](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.IQ4_XS.gguf.part3of5) [P4](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.IQ4_XS.gguf.part4of5) [P5](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.IQ4_XS.gguf.part5of5) | IQ4_XS | 218.7 | | | [P1](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q4_K_S.gguf.part1of5) [P2](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q4_K_S.gguf.part2of5) [P3](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q4_K_S.gguf.part3of5) [P4](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q4_K_S.gguf.part4of5) [P5](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q4_K_S.gguf.part5of5) | Q4_K_S | 230.6 | fast, recommended | | [P1](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q4_K_M.gguf.part1of5) [P2](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q4_K_M.gguf.part2of5) [P3](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q4_K_M.gguf.part3of5) [P4](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q4_K_M.gguf.part4of5) [P5](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q4_K_M.gguf.part5of5) | Q4_K_M | 243.2 | fast, recommended | | [P1](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q5_K_S.gguf.part1of6) [P2](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q5_K_S.gguf.part2of6) [P3](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q5_K_S.gguf.part3of6) [P4](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q5_K_S.gguf.part4of6) [P5](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q5_K_S.gguf.part5of6) [P6](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q5_K_S.gguf.part6of6) | Q5_K_S | 279.4 | | | [P1](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q5_K_M.gguf.part1of6) 
[P2](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q5_K_M.gguf.part2of6) [P3](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q5_K_M.gguf.part3of6) [P4](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q5_K_M.gguf.part4of6) [P5](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q5_K_M.gguf.part5of6) [P6](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q5_K_M.gguf.part6of6) | Q5_K_M | 286.7 | | | [P1](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q6_K.gguf.part1of7) [P2](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q6_K.gguf.part2of7) [P3](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q6_K.gguf.part3of7) [P4](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q6_K.gguf.part4of7) [P5](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q6_K.gguf.part5of7) [P6](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q6_K.gguf.part6of7) [P7](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q6_K.gguf.part7of7) | Q6_K | 333.0 | very good quality | | [P1](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q8_0.gguf.part1of9) [P2](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q8_0.gguf.part2of9) [P3](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q8_0.gguf.part3of9) [P4](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q8_0.gguf.part4of9) [P5](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q8_0.gguf.part5of9) [P6](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q8_0.gguf.part6of9) [P7](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q8_0.gguf.part7of9) [P8](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q8_0.gguf.part8of9) [P9](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q8_0.gguf.part9of9) | Q8_0 | 431.3 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
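The Usage section above points to TheBloke's READMEs for concatenating multi-part files. As a minimal sketch of reassembling the split quants listed in the table (assuming the `.partXofY` naming shown there; on Unix, `cat name.gguf.part* > name.gguf` is equivalent):

```python
# Minimal sketch: stitch a split GGUF back together from its ".partXofY" pieces.
# Assumes the part-file naming used in the table above; this is not an official tool.
import glob

def join_parts(stem: str) -> None:
    # "part1of4" < "part2of4" < ... sorts correctly for single-digit part counts.
    parts = sorted(glob.glob(f"{stem}.part*of*"))
    with open(stem, "wb") as out:
        for part in parts:
            with open(part, "rb") as f:
                while chunk := f.read(1 << 20):  # stream in 1 MiB chunks
                    out.write(chunk)

join_parts("cogito-v2-preview-llama-405B.Q2_K.gguf")
```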
Muapi/ps1-style-flux
Muapi
2025-08-19T15:11:21Z
0
0
null
[ "lora", "stable-diffusion", "flux.1-d", "license:openrail++", "region:us" ]
null
2025-08-19T15:11:09Z
--- license: openrail++ tags: - lora - stable-diffusion - flux.1-d model_type: LoRA --- # PS1 Style Flux ![preview](./preview.jpg) **Base model**: Flux.1 D **Trained words**: ps1 ## 🧠 Usage (Python) πŸ”‘ **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys) ```python import requests, os url = "https://api.muapi.ai/api/v1/flux_dev_lora_image" headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")} payload = { "prompt": "masterpiece, best quality, 1girl, looking at viewer", "model_id": [{"model": "civitai:648058@725031", "weight": 1.0}], "width": 1024, "height": 1024, "num_images": 1 } print(requests.post(url, headers=headers, json=payload).json()) ```
mradermacher/Nexa-Vector-11-Qwen-GGUF
mradermacher
2025-08-19T15:09:30Z
0
0
transformers
[ "transformers", "gguf", "en", "base_model:iversonzhou/Nexa-Vector-11-Qwen", "base_model:quantized:iversonzhou/Nexa-Vector-11-Qwen", "license:mit", "endpoints_compatible", "region:us", "conversational" ]
null
2025-08-19T14:56:35Z
--- base_model: iversonzhou/Nexa-Vector-11-Qwen language: - en library_name: transformers license: mit mradermacher: readme_rev: 1 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> <!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: --> static quants of https://huggingface.co/iversonzhou/Nexa-Vector-11-Qwen <!-- provided-files --> ***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Nexa-Vector-11-Qwen-GGUF).*** weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Nexa-Vector-11-Qwen-GGUF/resolve/main/Nexa-Vector-11-Qwen.Q2_K.gguf) | Q2_K | 0.8 | | | [GGUF](https://huggingface.co/mradermacher/Nexa-Vector-11-Qwen-GGUF/resolve/main/Nexa-Vector-11-Qwen.Q3_K_S.gguf) | Q3_K_S | 0.9 | | | [GGUF](https://huggingface.co/mradermacher/Nexa-Vector-11-Qwen-GGUF/resolve/main/Nexa-Vector-11-Qwen.Q3_K_M.gguf) | Q3_K_M | 0.9 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Nexa-Vector-11-Qwen-GGUF/resolve/main/Nexa-Vector-11-Qwen.Q3_K_L.gguf) | Q3_K_L | 1.0 | | | [GGUF](https://huggingface.co/mradermacher/Nexa-Vector-11-Qwen-GGUF/resolve/main/Nexa-Vector-11-Qwen.IQ4_XS.gguf) | IQ4_XS | 1.0 | | | [GGUF](https://huggingface.co/mradermacher/Nexa-Vector-11-Qwen-GGUF/resolve/main/Nexa-Vector-11-Qwen.Q4_K_S.gguf) | Q4_K_S | 1.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Nexa-Vector-11-Qwen-GGUF/resolve/main/Nexa-Vector-11-Qwen.Q4_K_M.gguf) | Q4_K_M | 1.1 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Nexa-Vector-11-Qwen-GGUF/resolve/main/Nexa-Vector-11-Qwen.Q5_K_S.gguf) | Q5_K_S | 1.2 | | | [GGUF](https://huggingface.co/mradermacher/Nexa-Vector-11-Qwen-GGUF/resolve/main/Nexa-Vector-11-Qwen.Q5_K_M.gguf) | Q5_K_M | 1.2 | | | [GGUF](https://huggingface.co/mradermacher/Nexa-Vector-11-Qwen-GGUF/resolve/main/Nexa-Vector-11-Qwen.Q6_K.gguf) | Q6_K | 1.4 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Nexa-Vector-11-Qwen-GGUF/resolve/main/Nexa-Vector-11-Qwen.Q8_0.gguf) | Q8_0 | 1.7 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Nexa-Vector-11-Qwen-GGUF/resolve/main/Nexa-Vector-11-Qwen.f16.gguf) | f16 | 3.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
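Every quant in this table is a single file, so no concatenation is needed; one way to fetch a file programmatically is via `huggingface_hub` (a sketch, using the "fast, recommended" Q4_K_M entry from the table):

```python
# Sketch: download one quant from this repo and print the local path,
# which can then be passed to llama.cpp (e.g. llama-cli -m <path>).
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/Nexa-Vector-11-Qwen-GGUF",
    filename="Nexa-Vector-11-Qwen.Q4_K_M.gguf",
)
print(path)
```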
Muapi/3d_flux-style
Muapi
2025-08-19T15:07:43Z
0
0
null
[ "lora", "stable-diffusion", "flux.1-d", "license:openrail++", "region:us" ]
null
2025-08-19T15:07:35Z
--- license: openrail++ tags: - lora - stable-diffusion - flux.1-d model_type: LoRA --- # 3D_Flux Style ![preview](./preview.jpg) **Base model**: Flux.1 D **Trained words**: 3D01S, kawaii, anime ## 🧠 Usage (Python) πŸ”‘ **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys) ```python import requests, os url = "https://api.muapi.ai/api/v1/flux_dev_lora_image" headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")} payload = { "prompt": "masterpiece, best quality, 1girl, looking at viewer", "model_id": [{"model": "civitai:689478@771650", "weight": 1.0}], "width": 1024, "height": 1024, "num_images": 1 } print(requests.post(url, headers=headers, json=payload).json()) ```
unitova/blockassist-bc-zealous_sneaky_raven_1755614105
unitova
2025-08-19T15:03:14Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "zealous sneaky raven", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T15:03:09Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - zealous sneaky raven --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Gynjn/iLRM
Gynjn
2025-08-19T15:02:54Z
0
1
pytorch
[ "pytorch", "image-to-3d", "arxiv:2507.23277", "license:mit", "region:us" ]
image-to-3d
2025-07-31T08:28:27Z
--- license: mit pipeline_tag: image-to-3d library_name: pytorch --- This repository contains the models of the paper [iLRM: An Iterative Large 3D Reconstruction Model](https://huggingface.co/papers/2507.23277). Project Page: https://gynjn.github.io/iLRM/
Muapi/dall-e3-meets-flux
Muapi
2025-08-19T15:02:10Z
0
0
null
[ "lora", "stable-diffusion", "flux.1-d", "license:openrail++", "region:us" ]
null
2025-08-19T15:01:56Z
--- license: openrail++ tags: - lora - stable-diffusion - flux.1-d model_type: LoRA --- # Dall-E3 meets FLUX ![preview](./preview.jpg) **Base model**: Flux.1 D **Trained words**: aidmadalle3 ## 🧠 Usage (Python) πŸ”‘ **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys) ```python import requests, os url = "https://api.muapi.ai/api/v1/flux_dev_lora_image" headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")} payload = { "prompt": "masterpiece, best quality, 1girl, looking at viewer", "model_id": [{"model": "civitai:1125621@1265190", "weight": 1.0}], "width": 1024, "height": 1024, "num_images": 1 } print(requests.post(url, headers=headers, json=payload).json()) ```
2hpsatt/blockassist-bc-huge_deft_eagle_1755615679
2hpsatt
2025-08-19T15:02:02Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "huge deft eagle", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T15:01:56Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - huge deft eagle --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Kurosawama/gemma-3-1b-it-Translation-align
Kurosawama
2025-08-19T15:01:48Z
0
0
transformers
[ "transformers", "safetensors", "trl", "dpo", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-08-19T15:01:43Z
--- library_name: transformers tags: - trl - dpo --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
lakelee/RLB_MLP_TSC_v2.20250819.17
lakelee
2025-08-19T14:58:20Z
0
0
transformers
[ "transformers", "safetensors", "timespan_contrastive", "generated_from_trainer", "endpoints_compatible", "region:us" ]
null
2025-08-19T08:31:56Z
--- library_name: transformers tags: - generated_from_trainer model-index: - name: RLB_MLP_TSC_v2.20250819.17 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # RLB_MLP_TSC_v2.20250819.17 This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Use adamw_torch_fused with betas=(0.9,0.95) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 1000 - num_epochs: 1.0 ### Training results ### Framework versions - Transformers 4.55.2 - Pytorch 2.8.0+cu128 - Tokenizers 0.21.4
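For readers who want to reproduce a comparable run, here is a hedged sketch of how the listed hyperparameters map onto `transformers.TrainingArguments`; the repo's actual training script is not published, so treat this as an approximation:

```python
# Approximate reconstruction of the hyperparameters listed above using
# transformers.TrainingArguments; the original training script is not shown here.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="RLB_MLP_TSC_v2.20250819.17",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch_fused",   # fused AdamW, as stated above
    adam_beta1=0.9,
    adam_beta2=0.95,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    warmup_steps=1000,
    num_train_epochs=1.0,
)
```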
fengpeisheng1/mergekit-slerp-zhlbqbl
fengpeisheng1
2025-08-19T14:57:33Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "mergekit", "merge", "conversational", "base_model:fengpeisheng1/mergekit-slerp-ariyvyf", "base_model:merge:fengpeisheng1/mergekit-slerp-ariyvyf", "base_model:maywell/Qwen2-7B-Multilingual-RP", "base_model:merge:maywell/Qwen2-7B-Multilingual-RP", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-19T14:51:11Z
--- base_model: - maywell/Qwen2-7B-Multilingual-RP - fengpeisheng1/mergekit-slerp-ariyvyf library_name: transformers tags: - mergekit - merge --- # merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [SLERP](https://en.wikipedia.org/wiki/Slerp) merge method. ### Models Merged The following models were included in the merge: * [maywell/Qwen2-7B-Multilingual-RP](https://huggingface.co/maywell/Qwen2-7B-Multilingual-RP) * [fengpeisheng1/mergekit-slerp-ariyvyf](https://huggingface.co/fengpeisheng1/mergekit-slerp-ariyvyf) ### Configuration The following YAML configuration was used to produce this model: ```yaml slices: - sources: - model: maywell/Qwen2-7B-Multilingual-RP layer_range: [0,28] - model: fengpeisheng1/mergekit-slerp-ariyvyf layer_range: [0,28] merge_method: slerp base_model: maywell/Qwen2-7B-Multilingual-RP parameters: t: - filter: self_attn value: [0, 0.3, 0.5, 0.7, 1] - filter: mlp value: [1, 0.7, 0.5, 0.3, 0] - value: 0.5 dtype: bfloat16 ```
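For intuition about the merge method, here is a minimal sketch of spherical linear interpolation on a pair of weight tensors. mergekit applies the equivalent operation per tensor, with the layer- and module-dependent `t` schedule given in the YAML above; this is an illustration, not mergekit's implementation:

```python
# Illustrative SLERP between two weight tensors (not mergekit's actual code).
import torch

def slerp(t: float, v0: torch.Tensor, v1: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    a, b = v0.flatten().float(), v1.flatten().float()
    cos = torch.clamp((a @ b) / (a.norm() * b.norm() + eps), -1.0, 1.0)
    omega = torch.arccos(cos)          # angle between the two flattened tensors
    if omega.abs() < 1e-4:             # nearly parallel: fall back to plain lerp
        return (1 - t) * v0 + t * v1
    s = torch.sin(omega)
    return (torch.sin((1 - t) * omega) / s) * v0 + (torch.sin(t * omega) / s) * v1
```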
Muapi/tifa-lockhart-ffviir
Muapi
2025-08-19T14:56:12Z
0
0
null
[ "lora", "stable-diffusion", "flux.1-d", "license:openrail++", "region:us" ]
null
2025-08-19T14:55:53Z
--- license: openrail++ tags: - lora - stable-diffusion - flux.1-d model_type: LoRA --- # Tifa Lockhart (FFVIIR) ![preview](./preview.jpg) **Base model**: Flux.1 D **Trained words**: TifaLockhart, croptop, skirt, suspenders, fingerless gloves ## 🧠 Usage (Python) πŸ”‘ **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys) ```python import requests, os url = "https://api.muapi.ai/api/v1/flux_dev_lora_image" headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")} payload = { "prompt": "masterpiece, best quality, 1girl, looking at viewer", "model_id": [{"model": "civitai:661363@740105", "weight": 1.0}], "width": 1024, "height": 1024, "num_images": 1 } print(requests.post(url, headers=headers, json=payload).json()) ```
KMH158/t5-small-openassistant-chat
KMH158
2025-08-19T14:54:39Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:google-t5/t5-small", "base_model:finetune:google-t5/t5-small", "license:apache-2.0", "text-generation-inference", "endpoints_compatible", "region:us" ]
null
2025-08-19T12:36:35Z
--- library_name: transformers license: apache-2.0 base_model: google-t5/t5-small tags: - generated_from_trainer model-index: - name: t5-small-openassistant-chat results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-small-openassistant-chat This model is a fine-tuned version of [google-t5/t5-small](https://huggingface.co/google-t5/t5-small) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.1785 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 80 - eval_batch_size: 1 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 40 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 3.3768 | 1.0 | 301 | 2.3842 | | 2.6839 | 2.0 | 602 | 2.3277 | | 2.6351 | 3.0 | 903 | 2.2995 | | 2.6016 | 4.0 | 1204 | 2.2818 | | 2.5803 | 5.0 | 1505 | 2.2680 | | 2.5587 | 6.0 | 1806 | 2.2571 | | 2.541 | 7.0 | 2107 | 2.2481 | | 2.5323 | 8.0 | 2408 | 2.2409 | | 2.5102 | 9.0 | 2709 | 2.2349 | | 2.5063 | 10.0 | 3010 | 2.2288 | | 2.4953 | 11.0 | 3311 | 2.2242 | | 2.4926 | 12.0 | 3612 | 2.2192 | | 2.4786 | 13.0 | 3913 | 2.2154 | | 2.472 | 14.0 | 4214 | 2.2117 | | 2.4662 | 15.0 | 4515 | 2.2079 | | 2.4553 | 16.0 | 4816 | 2.2051 | | 2.4472 | 17.0 | 5117 | 2.2020 | | 2.4488 | 18.0 | 5418 | 2.2008 | | 2.4367 | 19.0 | 5719 | 2.1972 | | 2.4353 | 20.0 | 6020 | 2.1952 | | 2.429 | 21.0 | 6321 | 2.1934 | | 2.4247 | 22.0 | 6622 | 2.1912 | | 2.4242 | 23.0 | 6923 | 2.1901 | | 2.4196 | 24.0 | 7224 | 2.1887 | | 2.4169 | 25.0 | 7525 | 2.1873 | | 2.4122 | 26.0 | 7826 | 2.1862 | | 2.4089 | 27.0 | 8127 | 2.1851 | | 2.4042 | 28.0 | 8428 | 2.1841 | | 2.4061 | 29.0 | 8729 | 2.1831 | | 2.4007 | 30.0 | 9030 | 2.1823 | | 2.397 | 31.0 | 9331 | 2.1814 | | 2.3998 | 32.0 | 9632 | 2.1810 | | 2.3963 | 33.0 | 9933 | 2.1805 | | 2.3976 | 34.0 | 10234 | 2.1798 | | 2.3919 | 35.0 | 10535 | 2.1794 | | 2.3873 | 36.0 | 10836 | 2.1793 | | 2.3899 | 37.0 | 11137 | 2.1789 | | 2.3886 | 38.0 | 11438 | 2.1786 | | 2.3906 | 39.0 | 11739 | 2.1786 | | 2.393 | 40.0 | 12040 | 2.1785 | ### Framework versions - Transformers 4.55.2 - Pytorch 2.6.0+cu124 - Datasets 4.0.0 - Tokenizers 0.21.4
Ba2han/qwen3-a3b-coder-experiment-Q4_K_M-GGUF
Ba2han
2025-08-19T14:51:26Z
0
0
null
[ "gguf", "llama-cpp", "gguf-my-repo", "base_model:Ba2han/qwen3-a3b-coder-experiment", "base_model:quantized:Ba2han/qwen3-a3b-coder-experiment", "endpoints_compatible", "region:us" ]
null
2025-08-19T14:50:10Z
--- base_model: Ba2han/qwen3-a3b-coder-experiment tags: - llama-cpp - gguf-my-repo --- # Ba2han/qwen3-a3b-coder-experiment-Q4_K_M-GGUF This model was converted to GGUF format from [`Ba2han/qwen3-a3b-coder-experiment`](https://huggingface.co/Ba2han/qwen3-a3b-coder-experiment) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/Ba2han/qwen3-a3b-coder-experiment) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux). ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo Ba2han/qwen3-a3b-coder-experiment-Q4_K_M-GGUF --hf-file qwen3-a3b-coder-experiment-q4_k_m.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo Ba2han/qwen3-a3b-coder-experiment-Q4_K_M-GGUF --hf-file qwen3-a3b-coder-experiment-q4_k_m.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo Ba2han/qwen3-a3b-coder-experiment-Q4_K_M-GGUF --hf-file qwen3-a3b-coder-experiment-q4_k_m.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo Ba2han/qwen3-a3b-coder-experiment-Q4_K_M-GGUF --hf-file qwen3-a3b-coder-experiment-q4_k_m.gguf -c 2048 ```
mang3dd/blockassist-bc-tangled_slithering_alligator_1755613184
mang3dd
2025-08-19T14:48:35Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "tangled slithering alligator", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T14:48:31Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - tangled slithering alligator --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Trelis/Qwen3-4B_ds-arc-agi-2-perfect-100_test-c4
Trelis
2025-08-19T14:44:19Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "text-generation-inference", "unsloth", "trl", "conversational", "en", "base_model:unsloth/Qwen3-4B", "base_model:finetune:unsloth/Qwen3-4B", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-08-19T14:43:31Z
--- base_model: unsloth/Qwen3-4B tags: - text-generation-inference - transformers - unsloth - qwen3 - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** Trelis - **License:** apache-2.0 - **Finetuned from model:** unsloth/Qwen3-4B This Qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
weikeduik/mozlegal
weikeduik
2025-08-19T14:42:52Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-08-19T14:42:52Z
--- license: apache-2.0 ---
katanyasekolah/blockassist-bc-silky_sprightly_cassowary_1755612521
katanyasekolah
2025-08-19T14:38:09Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "silky sprightly cassowary", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T14:38:06Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - silky sprightly cassowary --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
sarrockia/prefectIllustriousXL_v3.safetensors
sarrockia
2025-08-19T14:33:06Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-08-19T14:04:58Z
--- license: apache-2.0 ---
shanaka95/gemma-3-270m-it-rag-finetune
shanaka95
2025-08-19T14:28:51Z
0
0
transformers
[ "transformers", "safetensors", "gemma3_text", "text-generation", "generated_from_trainer", "sft", "trl", "conversational", "base_model:shanaka95/checkpoints", "base_model:finetune:shanaka95/checkpoints", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-18T10:32:44Z
--- base_model: shanaka95/checkpoints library_name: transformers model_name: gemma-3-270m-it-rag-finetune tags: - generated_from_trainer - sft - trl licence: license --- # Model Card for gemma-3-270m-it-rag-finetune This model is a fine-tuned version of [shanaka95/checkpoints](https://huggingface.co/shanaka95/checkpoints). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="shanaka95/gemma-3-270m-it-rag-finetune", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with SFT. ### Framework versions - TRL: 0.21.0 - Transformers: 4.55.2 - Pytorch: 2.8.0+cu129 - Datasets: 4.0.0 - Tokenizers: 0.21.4 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
EnLiving-AI/CosmosC1
EnLiving-AI
2025-08-19T14:26:23Z
0
4
null
[ "Cosmos", "Learning", "Advacned_learning", "NO-API", "text-generation", "license:mit", "region:us" ]
text-generation
2025-08-19T13:15:16Z
--- license: mit pipeline_tag: text-generation tags: - Cosmos - Learning - Advacned_learning - NO-API --- # 🌌 Cosmos C1 **Autonomous Knowledge Explorer β€” v1.0** ![cosmosc1.png](https://cdn-uploads.huggingface.co/production/uploads/6884f0736963bab90a76beea/cpYTrRilKLz_vh85ABiUh.png) Cosmos C1 is an **autonomous research engine** packed into a simple `.exe` app. It explores the web, extracts knowledge, and builds structured insights β€” all without needing APIs or Python setup. Just run the `.exe` and watch your AI explore, learn, and grow its own knowledge base. --- ## ✨ Features - πŸ” **Autonomous Research Cycles** β€” Runs continuous query β†’ learn β†’ extract β†’ store loops (an illustrative sketch of this cycle appears below). - 🧠 **Knowledge Extraction** β€” Identifies concepts, relationships, and facts from raw text. - πŸ“Š **Knowledge Base Growth** β€” Expands memory with each cycle. - 🌐 **No API Required** β€” Directly learns from the web. - πŸ–₯️ **Standalone .exe** β€” No Python, no installs, just double-click and go. - πŸ“œ **Summaries** β€” Generates cycle logs and session summaries. --- ## ⚑ Quick Start 1. **Download** the latest release from [Releases](https://huggingface.co/EnLiving-AI/CosmosC1/resolve/main/CosmosC1.exe). 2. Place `CosmosC1.exe` in your desired folder. 3. Double-click to launch. 4. The terminal window will start showing research cycles in real time. 5. Press `Ctrl+C` anytime to stop and see a final **Session Summary**. --- ## πŸ–ΌοΈ Example Run ``` πŸš€ Autonomous Knowledge Explorer 🌐 No APIs - Direct Learning from Web Press Ctrl+C to stop and show summary πŸŒ€ CYCLE 1 πŸ” Source: Web πŸ“š Query: Applications of Shakespeare πŸ“– Content Learned: ... raw snippets ... πŸ’‘ Extracted Knowledge: ✦ Concepts: Applications Directory, Windows ✦ Relationships: Applications Directory ↔ Windows πŸ“Š Knowledge Base: 2 concepts | 1 discovery ``` --- At the end, Cosmos C1 shows: - βœ… **Total Cycles** - βœ… **Concepts Learned** - βœ… **Discoveries Recorded** - βœ… **Top Discoveries** - βœ… **Current Focus Area** --- ## 🎯 Use Cases - AI-driven **research assistant** - Automated **concept discovery** - Inspiration for **autonomous agent design** - Demonstration of **web knowledge extraction** --- ## 🚧 Current Limitations - Requires internet access - Works in a terminal window (no GUI yet) - May capture unrelated snippets (still improving filtering) --- ## πŸ“Œ Roadmap - [ ] GUI Dashboard - [ ] Exportable Knowledge Graphs - [ ] Smarter Query Refinement - [ ] Multi-agent collaboration --- ## πŸ“„ License MIT License β€” feel free to use, modify, and contribute. ---
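The research cycle described in the Features list maps onto a simple loop. The sketch below is purely illustrative: CosmosC1 ships only as a closed-source `.exe`, so every name here is hypothetical, and the toy extractor merely stands in for whatever the app actually does:

```python
# Purely illustrative sketch of the "query -> learn -> extract -> store" cycle.
# All names are hypothetical; CosmosC1 is distributed as a closed-source .exe.
import itertools

knowledge_base: dict[str, set[str]] = {}

def extract_knowledge(text: str) -> tuple[list[str], list[tuple[str, str]]]:
    # Stand-in extractor: capitalized words become concepts, co-occurrence a relation.
    concepts = sorted({w.strip(".,") for w in text.split() if w[:1].isupper()})
    relations = list(itertools.combinations(concepts, 2))
    return concepts, relations

def research_cycle(snippet: str) -> None:
    concepts, relations = extract_knowledge(snippet)  # learn + extract
    for c in concepts:                                # store
        knowledge_base.setdefault(c, set())
    for a, b in relations:
        knowledge_base[a].add(b)

research_cycle("Applications Directory ships with Windows")
print(knowledge_base)
```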
vwzyrraz7l/blockassist-bc-tall_hunting_vulture_1755611907
vwzyrraz7l
2025-08-19T14:25:24Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "tall hunting vulture", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T14:25:20Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - tall hunting vulture --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
chainway9/blockassist-bc-untamed_quick_eel_1755611572
chainway9
2025-08-19T14:20:21Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "untamed quick eel", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T14:20:17Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - untamed quick eel --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Joetib/en-twi-qwen2.5-0.5B-Instruct
Joetib
2025-08-19T14:19:37Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-19T14:19:22Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
AiArtLab/kc
AiArtLab
2025-08-19T14:17:04Z
0
2
null
[ "text-to-image", "base_model:KBlueLeaf/Kohaku-XL-Zeta", "base_model:finetune:KBlueLeaf/Kohaku-XL-Zeta", "region:us" ]
text-to-image
2025-04-30T17:10:58Z
--- base_model: - stabilityai/stable-diffusion-xl-base-1.0 - KBlueLeaf/Kohaku-XL-Zeta pipeline_tag: text-to-image --- ![result](gandr-collage.jpg) ## Description This model is a custom fine-tuned variant of the pretrained foundation [Kohaku-XL-Zeta](https://huggingface.co/KBlueLeaf/Kohaku-XL-Zeta). Kohaku-XL-Zeta itself is a "raw" base model trained for 1 epoch on 8+ million (mostly Danbooru) images using 4x NVIDIA 3090 GPUs. While the original Kohaku is not user-friendly out of the box, it serves as a flexible starting point for creative adaptations. To improve encoder stability and inject cross-domain knowledge beyond Danbooru-specific features, the model was merged with ColorfulXL using cosine dissimilarity weighting at a 0.25 blend ratio (an illustrative sketch of this weighting follows below). This integration aims to broaden the model's understanding of natural language and artistic concepts beyond typical Danbooru tagging conventions. Post-merge stabilization involved 6 epochs at a 2e-6 learning rate, followed by ongoing fine-tuning at a 9e-7 learning rate to refine details. The closest publicly available fine-tune of this lineage is Illustrious, though it uses an earlier Kohaku version with weaker text comprehension. This variant leverages the improved Kohaku-Colorful hybrid (KC), prioritizing non-realistic art generation and creative flexibility over photorealism. Key notes: - Not optimized for realism; best suited for anime/artistic styles. - Ideal for users seeking a customizable foundation for niche art generation or further fine-tuning experiments. ## Donations Please contact us if you can provide GPUs or funding for training. DOGE: DEw2DR8C7BnF8GgcrfTzUjSnGkuMeJhg83 BTC: 3JHv9Hb8kEW8zMAccdgCdZGfrHeMhH1rpN ## Contacts [recoilme](https://t.me/recoilme)
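The cosine-dissimilarity weighting mentioned above is not published as code, so the following is only a hedged sketch of the idea: blend each tensor toward the donor model more strongly where the two models disagree, capped by the 0.25 ratio. The function name and exact weighting formula are assumptions:

```python
# Hypothetical per-tensor merge weighted by cosine dissimilarity (0.25 blend ratio).
# An illustration of the idea described above, not the authors' actual recipe.
import torch
import torch.nn.functional as F

def cosine_dissim_merge(base: torch.Tensor, donor: torch.Tensor, ratio: float = 0.25) -> torch.Tensor:
    a, b = base.flatten().float(), donor.flatten().float()
    cos = F.cosine_similarity(a, b, dim=0)   # 1.0 = identical direction
    t = ratio * (1 - cos) / 2                # more dissimilar -> stronger pull to donor
    return (1 - t) * base + t * donor
```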
zjhhhh/Multi_Preference_REBEL_1e4
zjhhhh
2025-08-19T14:16:26Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-19T14:15:10Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
lisaozill03/blockassist-bc-rugged_prickly_alpaca_1755611423
lisaozill03
2025-08-19T14:15:11Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "rugged prickly alpaca", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T14:15:07Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - rugged prickly alpaca --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
koloni/blockassist-bc-deadly_graceful_stingray_1755610957
koloni
2025-08-19T14:10:09Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "deadly graceful stingray", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T14:10:05Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - deadly graceful stingray --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Muapi/softserve-anime-flux
Muapi
2025-08-19T14:06:17Z
0
0
null
[ "lora", "stable-diffusion", "flux.1-d", "license:openrail++", "region:us" ]
null
2025-08-19T14:05:53Z
--- license: openrail++ tags: - lora - stable-diffusion - flux.1-d model_type: LoRA --- # Softserve Anime (Flux) ![preview](./preview.jpg) **Base model**: Flux.1 D **Trained words**: sftsrv style illustration ## 🧠 Usage (Python) πŸ”‘ **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys) ```python import requests, os url = "https://api.muapi.ai/api/v1/flux_dev_lora_image" headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")} payload = { "prompt": "masterpiece, best quality, 1girl, looking at viewer", "model_id": [{"model": "civitai:657191@735293", "weight": 1.0}], "width": 1024, "height": 1024, "num_images": 1 } print(requests.post(url, headers=headers, json=payload).json()) ```
Sayemahsjn/blockassist-bc-playful_feline_octopus_1755611085
Sayemahsjn
2025-08-19T14:02:54Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "playful feline octopus", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T14:02:50Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - playful feline octopus --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Muapi/f1-charturn-multi-view-turnaround-model-sheet-character-design
Muapi
2025-08-19T14:01:24Z
0
0
null
[ "lora", "stable-diffusion", "flux.1-d", "license:openrail++", "region:us" ]
null
2025-08-19T14:01:11Z
--- license: openrail++ tags: - lora - stable-diffusion - flux.1-d model_type: LoRA --- # F1 CharTurn, Multi-view, Turnaround, Model Sheet, Character design ![preview](./preview.jpg) **Base model**: Flux.1 D **Trained words**: ## 🧠 Usage (Python) πŸ”‘ **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys) ```python import requests, os url = "https://api.muapi.ai/api/v1/flux_dev_lora_image" headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")} payload = { "prompt": "masterpiece, best quality, 1girl, looking at viewer", "model_id": [{"model": "civitai:784830@877675", "weight": 1.0}], "width": 1024, "height": 1024, "num_images": 1 } print(requests.post(url, headers=headers, json=payload).json()) ```
lilTAT/blockassist-bc-gentle_rugged_hare_1755611973
lilTAT
2025-08-19T14:00:00Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "gentle rugged hare", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T13:59:56Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - gentle rugged hare --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
ShimotsukiArc/Qwen2.5-Coder-7B-Instruct-trained4
ShimotsukiArc
2025-08-19T13:59:36Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "qwen2", "trl", "en", "base_model:ShimotsukiArc/Qwen2.5-Coder-7B-Instruct-untrained", "base_model:finetune:ShimotsukiArc/Qwen2.5-Coder-7B-Instruct-untrained", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-08-19T13:58:36Z
--- base_model: ShimotsukiArc/Qwen2.5-Coder-7B-Instruct-untrained tags: - text-generation-inference - transformers - unsloth - qwen2 - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** ShimotsukiArc - **License:** apache-2.0 - **Finetuned from model :** ShimotsukiArc/Qwen2.5-Coder-7B-Instruct-untrained This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
Muapi/flux-neon-abyss
Muapi
2025-08-19T13:58:44Z
0
0
null
[ "lora", "stable-diffusion", "flux.1-d", "license:openrail++", "region:us" ]
null
2025-08-19T13:58:34Z
--- license: openrail++ tags: - lora - stable-diffusion - flux.1-d model_type: LoRA --- # FLUX Neon Abyss ![preview](./preview.jpg) **Base model**: Flux.1 D **Trained words**: bo-neon ## 🧠 Usage (Python) πŸ”‘ **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys) ```python import requests, os url = "https://api.muapi.ai/api/v1/flux_dev_lora_image" headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")} payload = { "prompt": "masterpiece, best quality, 1girl, looking at viewer", "model_id": [{"model": "civitai:1049928@1178104", "weight": 1.0}], "width": 1024, "height": 1024, "num_images": 1 } print(requests.post(url, headers=headers, json=payload).json()) ```
michaelcpage345/blockassist-bc-miniature_deadly_anteater_1755609800
michaelcpage345
2025-08-19T13:56:54Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "miniature deadly anteater", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T13:56:50Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - miniature deadly anteater --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Muapi/flux-sdxl-black-diamonds
Muapi
2025-08-19T13:54:12Z
0
0
null
[ "lora", "stable-diffusion", "flux.1-d", "license:openrail++", "region:us" ]
null
2025-08-19T13:53:57Z
--- license: openrail++ tags: - lora - stable-diffusion - flux.1-d model_type: LoRA --- # [Flux/SDXL] - πŸ–€ Black Diamonds πŸ–€ ![preview](./preview.jpg) **Base model**: Flux.1 D **Trained words**: made out of black diamonds, black diamonds ## 🧠 Usage (Python) πŸ”‘ **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys) ```python import requests, os url = "https://api.muapi.ai/api/v1/flux_dev_lora_image" headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")} payload = { "prompt": "masterpiece, best quality, 1girl, looking at viewer", "model_id": [{"model": "civitai:607623@740146", "weight": 1.0}], "width": 1024, "height": 1024, "num_images": 1 } print(requests.post(url, headers=headers, json=payload).json()) ```
unitova/blockassist-bc-zealous_sneaky_raven_1755610013
unitova
2025-08-19T13:53:50Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "zealous sneaky raven", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T13:53:47Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - zealous sneaky raven --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
vwzyrraz7l/blockassist-bc-tall_hunting_vulture_1755609860
vwzyrraz7l
2025-08-19T13:52:46Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "tall hunting vulture", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T13:52:42Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - tall hunting vulture --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
hug-mono/checkworthy-binary-classification-training-1755585731
hug-mono
2025-08-19T13:51:55Z
0
0
peft
[ "peft", "safetensors", "base_model:adapter:google-bert/bert-base-uncased", "lora", "transformers", "base_model:google-bert/bert-base-uncased", "license:apache-2.0", "region:us" ]
null
2025-08-19T13:51:51Z
--- library_name: peft license: apache-2.0 base_model: google-bert/bert-base-uncased tags: - base_model:adapter:google-bert/bert-base-uncased - lora - transformers model-index: - name: checkworthy-binary-classification-training-1755585731 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # checkworthy-binary-classification-training-1755585731 This model is a fine-tuned version of [google-bert/bert-base-uncased](https://huggingface.co/google-bert/bert-base-uncased) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2.1106713456200193e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Use OptimizerNames.PAGED_ADAMW with betas=(0.9348819720458172,0.9285998615546803) and epsilon=1.9972958061508847e-07 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: polynomial - lr_scheduler_warmup_ratio: 0.12890328790683203 - lr_scheduler_warmup_steps: 488 - num_epochs: 40 ### Training results ### Framework versions - PEFT 0.17.0 - Transformers 4.55.2 - Pytorch 2.6.0+cu124 - Datasets 4.0.0 - Tokenizers 0.21.4
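The card above records the adapter's training setup but gives no usage snippet. A minimal, hedged sketch for loading it — assuming the LoRA adapter applies on top of `google-bert/bert-base-uncased` with a two-label sequence-classification head; `num_labels=2` and the label mapping are assumptions inferred from the "binary-classification" repo name, not documented in the card:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

base_id = "google-bert/bert-base-uncased"
adapter_id = "hug-mono/checkworthy-binary-classification-training-1755585731"

# num_labels=2 is an assumption based on the "binary-classification" repo name.
base = AutoModelForSequenceClassification.from_pretrained(base_id, num_labels=2)
model = PeftModel.from_pretrained(base, adapter_id)  # attach the LoRA adapter
tokenizer = AutoTokenizer.from_pretrained(base_id)

inputs = tokenizer("The unemployment rate fell to 3.5% last quarter.", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(-1)
print(probs)  # check-worthiness probabilities; label names are undocumented
```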
Muapi/kodak-vision3-500t-analog-film-stocks-footage-f1d-xl
Muapi
2025-08-19T13:48:22Z
0
0
null
[ "lora", "stable-diffusion", "flux.1-d", "license:openrail++", "region:us" ]
null
2025-08-19T13:48:08Z
--- license: openrail++ tags: - lora - stable-diffusion - flux.1-d model_type: LoRA --- # Kodak Vision3 500T analog film stocks Footage F1D + XL ![preview](./preview.jpg) **Base model**: Flux.1 D **Trained words**: Kodak Vision3 500T, analog film stocks ## 🧠 Usage (Python) πŸ”‘ **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys) ```python import requests, os url = "https://api.muapi.ai/api/v1/flux_dev_lora_image" headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")} payload = { "prompt": "masterpiece, best quality, 1girl, looking at viewer", "model_id": [{"model": "civitai:725625@876689", "weight": 1.0}], "width": 1024, "height": 1024, "num_images": 1 } print(requests.post(url, headers=headers, json=payload).json()) ```
Jocelyn-Martin/gemma-3-270m-it-fine-tuned__2025_13_08_17_02_LlamaUX_conversational
Jocelyn-Martin
2025-08-19T13:46:34Z
0
0
transformers
[ "transformers", "safetensors", "gemma3_text", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-19T13:46:11Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
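This auto-generated card leaves the "How to Get Started" section empty. A hedged sketch, assuming the checkpoint loads as a standard Gemma-3 chat model with a saved chat template (the repo id is taken from this record; the prompt is illustrative only):

```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="Jocelyn-Martin/gemma-3-270m-it-fine-tuned__2025_13_08_17_02_LlamaUX_conversational",
)
messages = [{"role": "user", "content": "Summarize what a design system is in one sentence."}]
print(generator(messages, max_new_tokens=64)[0]["generated_text"])
```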
fengpeisheng1/mergekit-slerp-iskhcfu
fengpeisheng1
2025-08-19T13:37:21Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "mergekit", "merge", "conversational", "base_model:maywell/Qwen2-7B-Multilingual-RP", "base_model:merge:maywell/Qwen2-7B-Multilingual-RP", "base_model:rubra-ai/Qwen2-7B-Instruct", "base_model:merge:rubra-ai/Qwen2-7B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-19T13:31:22Z
--- base_model: - maywell/Qwen2-7B-Multilingual-RP - rubra-ai/Qwen2-7B-Instruct library_name: transformers tags: - mergekit - merge --- # merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [SLERP](https://en.wikipedia.org/wiki/Slerp) merge method. ### Models Merged The following models were included in the merge: * [maywell/Qwen2-7B-Multilingual-RP](https://huggingface.co/maywell/Qwen2-7B-Multilingual-RP) * [rubra-ai/Qwen2-7B-Instruct](https://huggingface.co/rubra-ai/Qwen2-7B-Instruct) ### Configuration The following YAML configuration was used to produce this model: ```yaml slices: - sources: - model: maywell/Qwen2-7B-Multilingual-RP layer_range: [0,28] - model: rubra-ai/Qwen2-7B-Instruct layer_range: [0,28] merge_method: slerp base_model: maywell/Qwen2-7B-Multilingual-RP parameters: t: - filter: self_attn value: [0, 0.3, 0.5, 0.7, 1] - filter: mlp value: [1, 0.7, 0.5, 0.3, 0] - value: 0.5 dtype: bfloat16 ```
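For intuition, SLERP interpolates weights along the great circle between the two parent tensors rather than along the straight line a plain average takes. An illustrative NumPy sketch of the interpolation itself (not mergekit's actual implementation; shapes and values are dummies):

```python
import numpy as np

def slerp(t: float, v0: np.ndarray, v1: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Spherical linear interpolation between two flattened weight tensors."""
    a, b = v0.ravel(), v1.ravel()
    # Angle between the two weight vectors
    cos_omega = float(np.dot(a, b) / max(np.linalg.norm(a) * np.linalg.norm(b), eps))
    omega = np.arccos(np.clip(cos_omega, -1.0, 1.0))
    if omega < eps:  # nearly parallel: fall back to plain linear interpolation
        return (1 - t) * v0 + t * v1
    so = np.sin(omega)
    out = (np.sin((1 - t) * omega) / so) * a + (np.sin(t * omega) / so) * b
    return out.reshape(v0.shape)

# t follows the per-layer schedule in the YAML above, e.g. 0.3 for an early self_attn block
w_merged = slerp(0.3, np.random.randn(4, 4), np.random.randn(4, 4))
```

The `t` schedule in the config blends early self-attention layers toward the base model and early MLP layers toward the other parent, crossing over at 0.5 mid-stack.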
Whitesmasher/Wan22Testing
Whitesmasher
2025-08-19T13:36:09Z
0
0
null
[ "gguf", "license:apache-2.0", "region:us" ]
null
2025-08-19T12:54:32Z
--- license: apache-2.0 ---
Muapi/felix-meynet
Muapi
2025-08-19T13:29:03Z
0
0
null
[ "lora", "stable-diffusion", "flux.1-d", "license:openrail++", "region:us" ]
null
2025-08-19T13:28:57Z
--- license: openrail++ tags: - lora - stable-diffusion - flux.1-d model_type: LoRA --- # Felix Meynet ![preview](./preview.jpg) **Base model**: Flux.1 D **Trained words**: Art by Felix Meynet ## 🧠 Usage (Python) πŸ”‘ **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys) ```python import requests, os url = "https://api.muapi.ai/api/v1/flux_dev_lora_image" headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")} payload = { "prompt": "masterpiece, best quality, 1girl, looking at viewer", "model_id": [{"model": "civitai:1021589@1441868", "weight": 1.0}], "width": 1024, "height": 1024, "num_images": 1 } print(requests.post(url, headers=headers, json=payload).json()) ```
lilTAT/blockassist-bc-gentle_rugged_hare_1755609920
lilTAT
2025-08-19T13:25:49Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "gentle rugged hare", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T13:25:45Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - gentle rugged hare --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
yaelahnal/blockassist-bc-mute_clawed_crab_1755609838
yaelahnal
2025-08-19T13:25:07Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "mute clawed crab", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T13:24:50Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - mute clawed crab --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Muapi/flux-flux-hanfu-belly-wrap
Muapi
2025-08-19T13:23:35Z
0
0
null
[ "lora", "stable-diffusion", "flux.1-d", "license:openrail++", "region:us" ]
null
2025-08-19T13:23:02Z
--- license: openrail++ tags: - lora - stable-diffusion - flux.1-d model_type: LoRA --- # FLUXζ±‰ζœθ‚šε…œ | FLUX Hanfu belly wrap ![preview](./preview.jpg) **Base model**: Flux.1 D **Trained words**: ## 🧠 Usage (Python) πŸ”‘ **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys) ```python import requests, os url = "https://api.muapi.ai/api/v1/flux_dev_lora_image" headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")} payload = { "prompt": "masterpiece, best quality, 1girl, looking at viewer", "model_id": [{"model": "civitai:653935@731600", "weight": 1.0}], "width": 1024, "height": 1024, "num_images": 1 } print(requests.post(url, headers=headers, json=payload).json()) ```
Wkdrn/roberta-base-klue-ynat-classification
Wkdrn
2025-08-19T13:22:45Z
0
0
transformers
[ "transformers", "safetensors", "roberta", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-08-19T13:21:42Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
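This auto-generated card leaves the "How to Get Started" section empty. A hedged sketch, assuming the checkpoint works with the standard text-classification pipeline; KLUE-YNAT is a Korean news-headline topic-classification task, so the example input is a Korean headline:

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="Wkdrn/roberta-base-klue-ynat-classification",
)
print(classifier("μ‚Όμ„±μ „μž, 2λΆ„κΈ° μ‹€μ  λ°œν‘œ"))  # label names depend on the saved config
```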
Muapi/korean-gone-flux
Muapi
2025-08-19T13:22:05Z
0
0
null
[ "lora", "stable-diffusion", "flux.1-d", "license:openrail++", "region:us" ]
null
2025-08-19T13:21:57Z
--- license: openrail++ tags: - lora - stable-diffusion - flux.1-d model_type: LoRA --- # Korean Gone Flux ![preview](./preview.jpg) **Base model**: Flux.1 D **Trained words**: korean ## 🧠 Usage (Python) πŸ”‘ **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys) ```python import requests, os url = "https://api.muapi.ai/api/v1/flux_dev_lora_image" headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")} payload = { "prompt": "masterpiece, best quality, 1girl, looking at viewer", "model_id": [{"model": "civitai:677337@758214", "weight": 1.0}], "width": 1024, "height": 1024, "num_images": 1 } print(requests.post(url, headers=headers, json=payload).json()) ```
Wkdrn/results
Wkdrn
2025-08-19T13:21:40Z
0
0
transformers
[ "transformers", "safetensors", "roberta", "text-classification", "generated_from_trainer", "base_model:klue/roberta-base", "base_model:finetune:klue/roberta-base", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-08-19T13:20:41Z
--- library_name: transformers base_model: klue/roberta-base tags: - generated_from_trainer metrics: - accuracy model-index: - name: results results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # results This model is a fine-tuned version of [klue/roberta-base](https://huggingface.co/klue/roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.4594 - Accuracy: 0.853 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.5274 | 1.0 | 1250 | 0.5485 | 0.841 | ### Framework versions - Transformers 4.50.0 - Pytorch 2.8.0 - Datasets 3.5.0 - Tokenizers 0.21.4
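The hyperparameters listed above map directly onto `TrainingArguments`; a partial sketch under that assumption (dataset loading and `Trainer` wiring omitted, `output_dir` assumed from the model name):

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="results",            # assumed from the model name
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=1,
)
```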
Muapi/moxie-cybernetic-punk-lora-s
Muapi
2025-08-19T13:20:25Z
0
0
null
[ "lora", "stable-diffusion", "flux.1-d", "license:openrail++", "region:us" ]
null
2025-08-19T13:20:14Z
--- license: openrail++ tags: - lora - stable-diffusion - flux.1-d model_type: LoRA --- # Moxie Cybernetic & Punk Lora's ![preview](./preview.jpg) **Base model**: Flux.1 D **Trained words**: gypsypunk, gypsy_punk ## 🧠 Usage (Python) πŸ”‘ **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys) ```python import requests, os url = "https://api.muapi.ai/api/v1/flux_dev_lora_image" headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")} payload = { "prompt": "masterpiece, best quality, 1girl, looking at viewer", "model_id": [{"model": "civitai:660912@1700169", "weight": 1.0}], "width": 1024, "height": 1024, "num_images": 1 } print(requests.post(url, headers=headers, json=payload).json()) ```
Growcompany/SmolLM2-360M-Q4_K_M-GGUF
Growcompany
2025-08-19T13:13:55Z
0
0
transformers
[ "transformers", "gguf", "llama-cpp", "gguf-my-repo", "en", "base_model:HuggingFaceTB/SmolLM2-360M", "base_model:quantized:HuggingFaceTB/SmolLM2-360M", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-08-19T13:13:51Z
--- library_name: transformers license: apache-2.0 language: - en base_model: HuggingFaceTB/SmolLM2-360M tags: - llama-cpp - gguf-my-repo --- # Growcompany/SmolLM2-360M-Q4_K_M-GGUF This model was converted to GGUF format from [`HuggingFaceTB/SmolLM2-360M`](https://huggingface.co/HuggingFaceTB/SmolLM2-360M) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/HuggingFaceTB/SmolLM2-360M) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo Growcompany/SmolLM2-360M-Q4_K_M-GGUF --hf-file smollm2-360m-q4_k_m.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo Growcompany/SmolLM2-360M-Q4_K_M-GGUF --hf-file smollm2-360m-q4_k_m.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo Growcompany/SmolLM2-360M-Q4_K_M-GGUF --hf-file smollm2-360m-q4_k_m.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo Growcompany/SmolLM2-360M-Q4_K_M-GGUF --hf-file smollm2-360m-q4_k_m.gguf -c 2048 ```
lisaozill03/blockassist-bc-rugged_prickly_alpaca_1755607670
lisaozill03
2025-08-19T13:12:56Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "rugged prickly alpaca", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T13:12:52Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - rugged prickly alpaca --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
misterkissi/w2v2-lg-xls-r-300m-oromo
misterkissi
2025-08-19T13:11:31Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "wav2vec2", "automatic-speech-recognition", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2025-08-18T14:29:15Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
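This auto-generated card leaves the "How to Get Started" section empty. A hedged sketch, assuming the checkpoint is a wav2vec2 CTC fine-tune usable with the standard ASR pipeline; the audio filename is a placeholder for a 16 kHz mono clip:

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="misterkissi/w2v2-lg-xls-r-300m-oromo",
)
print(asr("sample_oromo_clip.wav")["text"])  # placeholder audio path
```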
Neelectric/Llama-3-8B-Instruct-TAR-Cyber_ins_v00.01
Neelectric
2025-08-19T13:07:47Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "generated_from_trainer", "trl", "open-r1", "sft", "conversational", "dataset:Neelectric/ins", "base_model:lapisrocks/Llama-3-8B-Instruct-TAR-Cyber", "base_model:finetune:lapisrocks/Llama-3-8B-Instruct-TAR-Cyber", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-19T12:48:05Z
--- base_model: lapisrocks/Llama-3-8B-Instruct-TAR-Cyber datasets: Neelectric/ins library_name: transformers model_name: Llama-3-8B-Instruct-TAR-Cyber_ins_v00.01 tags: - generated_from_trainer - trl - open-r1 - sft licence: license --- # Model Card for Llama-3-8B-Instruct-TAR-Cyber_ins_v00.01 This model is a fine-tuned version of [lapisrocks/Llama-3-8B-Instruct-TAR-Cyber](https://huggingface.co/lapisrocks/Llama-3-8B-Instruct-TAR-Cyber) on the [Neelectric/ins](https://huggingface.co/datasets/Neelectric/ins) dataset. It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="Neelectric/Llama-3-8B-Instruct-TAR-Cyber_ins_v00.01", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/neelectric/sem/runs/xkobaih7) This model was trained with SFT. ### Framework versions - TRL: 0.21.0 - Transformers: 4.55.0 - Pytorch: 2.7.1 - Datasets: 3.6.0 - Tokenizers: 0.21.4 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
Muapi/epicflashphotography_-flux
Muapi
2025-08-19T13:07:28Z
0
0
null
[ "lora", "stable-diffusion", "flux.1-d", "license:openrail++", "region:us" ]
null
2025-08-19T13:07:21Z
--- license: openrail++ tags: - lora - stable-diffusion - flux.1-d model_type: LoRA --- # epiCFlashPhotography_[FLUX] ![preview](./preview.jpg) **Base model**: Flux.1 D **Trained words**: ## 🧠 Usage (Python) πŸ”‘ **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys) ```python import requests, os url = "https://api.muapi.ai/api/v1/flux_dev_lora_image" headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")} payload = { "prompt": "masterpiece, best quality, 1girl, looking at viewer", "model_id": [{"model": "civitai:817640@914301", "weight": 1.0}], "width": 1024, "height": 1024, "num_images": 1 } print(requests.post(url, headers=headers, json=payload).json()) ```
lilTAT/blockassist-bc-gentle_rugged_hare_1755608508
lilTAT
2025-08-19T13:02:18Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "gentle rugged hare", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T13:02:14Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - gentle rugged hare --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
sampingkaca72/blockassist-bc-armored_stealthy_elephant_1755606847
sampingkaca72
2025-08-19T12:58:53Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "armored stealthy elephant", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T12:58:50Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - armored stealthy elephant --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
katanyasekolah/blockassist-bc-silky_sprightly_cassowary_1755605825
katanyasekolah
2025-08-19T12:47:46Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "silky sprightly cassowary", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T12:47:43Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - silky sprightly cassowary --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
unitova/blockassist-bc-zealous_sneaky_raven_1755606082
unitova
2025-08-19T12:47:15Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "zealous sneaky raven", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T12:47:12Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - zealous sneaky raven --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Dejiat/blockassist-bc-savage_unseen_bobcat_1755607541
Dejiat
2025-08-19T12:46:26Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "savage unseen bobcat", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T12:46:21Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - savage unseen bobcat --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Medved444/blockassist-bc-bellowing_finicky_manatee_1755606209
Medved444
2025-08-19T12:43:41Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "bellowing finicky manatee", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T12:43:24Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - bellowing finicky manatee --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
lguaman/MyGemmaNPC
lguaman
2025-08-19T12:41:59Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "gemma3_text", "text-generation", "generated_from_trainer", "trl", "sft", "conversational", "base_model:google/gemma-3-270m-it", "base_model:finetune:google/gemma-3-270m-it", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-18T21:26:49Z
--- base_model: google/gemma-3-270m-it library_name: transformers model_name: MyGemmaNPC tags: - generated_from_trainer - trl - sft licence: license --- # Model Card for MyGemmaNPC This model is a fine-tuned version of [google/gemma-3-270m-it](https://huggingface.co/google/gemma-3-270m-it). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="lguaman/MyGemmaNPC", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with SFT. ### Framework versions - TRL: 0.21.0 - Transformers: 4.55.2 - Pytorch: 2.6.0+cu124 - Datasets: 4.0.0 - Tokenizers: 0.21.4 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
agurung/v3sft_qwen7B_25percent_lr_1e4_bptt_offset
agurung
2025-08-19T12:40:58Z
42
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "trl", "sft", "conversational", "base_model:Qwen/Qwen2.5-7B-Instruct", "base_model:finetune:Qwen/Qwen2.5-7B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-14T03:44:43Z
--- base_model: Qwen/Qwen2.5-7B-Instruct library_name: transformers model_name: v3sft_qwen7B_25percent_lr_1e4_bptt_offset tags: - generated_from_trainer - trl - sft licence: license --- # Model Card for v3sft_qwen7B_25percent_lr_1e4_bptt_offset This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="agurung/v3sft_qwen7B_25percent_lr_1e4_bptt_offset", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/alexgurung/ncp_reasoning_projector/runs/gcnqs7xl) This model was trained with SFT. ### Framework versions - TRL: 0.21.0 - Transformers: 4.53.3 - Pytorch: 2.7.0+cu128 - Datasets: 4.0.0 - Tokenizers: 0.21.4 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
VoilaRaj/80_0YiCJb
VoilaRaj
2025-08-19T12:40:18Z
0
0
null
[ "safetensors", "any-to-any", "omega", "omegalabs", "bittensor", "agi", "license:mit", "region:us" ]
any-to-any
2025-08-19T12:36:28Z
--- license: mit tags: - any-to-any - omega - omegalabs - bittensor - agi --- This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet. Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
Dejiat/blockassist-bc-savage_unseen_bobcat_1755607155
Dejiat
2025-08-19T12:39:57Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "savage unseen bobcat", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T12:39:52Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - savage unseen bobcat --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Elsihj89/camila-keynnect
Elsihj89
2025-08-19T12:38:17Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-08-19T12:38:17Z
--- license: apache-2.0 ---
chainway9/blockassist-bc-untamed_quick_eel_1755605237
chainway9
2025-08-19T12:36:05Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "untamed quick eel", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T12:36:01Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - untamed quick eel --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Dejiat/blockassist-bc-savage_unseen_bobcat_1755606723
Dejiat
2025-08-19T12:32:49Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "savage unseen bobcat", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T12:32:44Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - savage unseen bobcat --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
tensorblock/yujiepan_llama-3.3-tiny-random-dim64-GGUF
tensorblock
2025-08-19T12:31:23Z
0
0
transformers
[ "transformers", "gguf", "TensorBlock", "GGUF", "text-generation", "base_model:yujiepan/llama-3.3-tiny-random-dim64", "base_model:quantized:yujiepan/llama-3.3-tiny-random-dim64", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2025-08-19T12:31:02Z
---
library_name: transformers
pipeline_tag: text-generation
inference: true
widget:
- text: Hello!
  example_title: Hello world
  group: Python
base_model: yujiepan/llama-3.3-tiny-random-dim64
tags:
- TensorBlock
- GGUF
---

<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>

[![Website](https://img.shields.io/badge/Website-tensorblock.co-blue?logo=google-chrome&logoColor=white)](https://tensorblock.co)
[![Twitter](https://img.shields.io/twitter/follow/tensorblock_aoi?style=social)](https://twitter.com/tensorblock_aoi)
[![Discord](https://img.shields.io/badge/Discord-Join%20Us-5865F2?logo=discord&logoColor=white)](https://discord.gg/Ej5NmeHFf2)
[![GitHub](https://img.shields.io/badge/GitHub-TensorBlock-black?logo=github&logoColor=white)](https://github.com/TensorBlock)
[![Telegram](https://img.shields.io/badge/Telegram-Group-blue?logo=telegram)](https://t.me/TensorBlock)

## yujiepan/llama-3.3-tiny-random-dim64 - GGUF

<div style="text-align: left; margin: 20px 0;">
<a href="https://discord.com/invite/Ej5NmeHFf2" style="display: inline-block; padding: 10px 20px; background-color: #5865F2; color: white; text-decoration: none; border-radius: 5px; font-weight: bold;">
Join our Discord to learn more about what we're building β†—
</a>
</div>

This repo contains GGUF format model files for [yujiepan/llama-3.3-tiny-random-dim64](https://huggingface.co/yujiepan/llama-3.3-tiny-random-dim64).

The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5753](https://github.com/ggml-org/llama.cpp/commit/73e53dc834c0a2336cd104473af6897197b96277).

## Our projects

<table border="1" cellspacing="0" cellpadding="10">
  <tr>
    <th colspan="2" style="font-size: 25px;">Forge</th>
  </tr>
  <tr>
    <th colspan="2">
      <img src="https://imgur.com/faI5UKh.jpeg" alt="Forge Project" width="900"/>
    </th>
  </tr>
  <tr>
    <th colspan="2">An OpenAI-compatible multi-provider routing layer.</th>
  </tr>
  <tr>
    <th colspan="2">
      <a href="https://github.com/TensorBlock/forge" target="_blank" style="display: inline-block; padding: 8px 16px; background-color: #FF7F50; color: white; text-decoration: none; border-radius: 6px; font-weight: bold; font-family: sans-serif;">πŸš€ Try it now! πŸš€</a>
    </th>
  </tr>
  <tr>
    <th style="font-size: 25px;">Awesome MCP Servers</th>
    <th style="font-size: 25px;">TensorBlock Studio</th>
  </tr>
  <tr>
    <th><img src="https://imgur.com/2Xov7B7.jpeg" alt="MCP Servers" width="450"/></th>
    <th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Studio" width="450"/></th>
  </tr>
  <tr>
    <th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
    <th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
  </tr>
  <tr>
    <th>
      <a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="display: inline-block; padding: 8px 16px; background-color: #FF7F50; color: white; text-decoration: none; border-radius: 6px; font-weight: bold; font-family: sans-serif;">πŸ‘€ See what we built πŸ‘€</a>
    </th>
    <th>
      <a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="display: inline-block; padding: 8px 16px; background-color: #FF7F50; color: white; text-decoration: none; border-radius: 6px; font-weight: bold; font-family: sans-serif;">πŸ‘€ See what we built πŸ‘€</a>
    </th>
  </tr>
</table>

## Prompt template

```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>

Cutting Knowledge Date: December 2023
Today Date: 26 Jul 2024

{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>

{prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
```

## Model file specification

| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [llama-3.3-tiny-random-dim64-Q2_K.gguf](https://huggingface.co/tensorblock/yujiepan_llama-3.3-tiny-random-dim64-GGUF/blob/main/llama-3.3-tiny-random-dim64-Q2_K.gguf) | Q2_K | 0.017 GB | smallest, significant quality loss - not recommended for most purposes |
| [llama-3.3-tiny-random-dim64-Q3_K_S.gguf](https://huggingface.co/tensorblock/yujiepan_llama-3.3-tiny-random-dim64-GGUF/blob/main/llama-3.3-tiny-random-dim64-Q3_K_S.gguf) | Q3_K_S | 0.017 GB | very small, high quality loss |
| [llama-3.3-tiny-random-dim64-Q3_K_M.gguf](https://huggingface.co/tensorblock/yujiepan_llama-3.3-tiny-random-dim64-GGUF/blob/main/llama-3.3-tiny-random-dim64-Q3_K_M.gguf) | Q3_K_M | 0.017 GB | very small, high quality loss |
| [llama-3.3-tiny-random-dim64-Q3_K_L.gguf](https://huggingface.co/tensorblock/yujiepan_llama-3.3-tiny-random-dim64-GGUF/blob/main/llama-3.3-tiny-random-dim64-Q3_K_L.gguf) | Q3_K_L | 0.017 GB | small, substantial quality loss |
| [llama-3.3-tiny-random-dim64-Q4_0.gguf](https://huggingface.co/tensorblock/yujiepan_llama-3.3-tiny-random-dim64-GGUF/blob/main/llama-3.3-tiny-random-dim64-Q4_0.gguf) | Q4_0 | 0.017 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [llama-3.3-tiny-random-dim64-Q4_K_S.gguf](https://huggingface.co/tensorblock/yujiepan_llama-3.3-tiny-random-dim64-GGUF/blob/main/llama-3.3-tiny-random-dim64-Q4_K_S.gguf) | Q4_K_S | 0.017 GB | small, greater quality loss |
| [llama-3.3-tiny-random-dim64-Q4_K_M.gguf](https://huggingface.co/tensorblock/yujiepan_llama-3.3-tiny-random-dim64-GGUF/blob/main/llama-3.3-tiny-random-dim64-Q4_K_M.gguf) | Q4_K_M | 0.017 GB | medium, balanced quality - recommended |
| [llama-3.3-tiny-random-dim64-Q5_0.gguf](https://huggingface.co/tensorblock/yujiepan_llama-3.3-tiny-random-dim64-GGUF/blob/main/llama-3.3-tiny-random-dim64-Q5_0.gguf) | Q5_0 | 0.017 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [llama-3.3-tiny-random-dim64-Q5_K_S.gguf](https://huggingface.co/tensorblock/yujiepan_llama-3.3-tiny-random-dim64-GGUF/blob/main/llama-3.3-tiny-random-dim64-Q5_K_S.gguf) | Q5_K_S | 0.017 GB | large, low quality loss - recommended |
| [llama-3.3-tiny-random-dim64-Q5_K_M.gguf](https://huggingface.co/tensorblock/yujiepan_llama-3.3-tiny-random-dim64-GGUF/blob/main/llama-3.3-tiny-random-dim64-Q5_K_M.gguf) | Q5_K_M | 0.017 GB | large, very low quality loss - recommended |
| [llama-3.3-tiny-random-dim64-Q6_K.gguf](https://huggingface.co/tensorblock/yujiepan_llama-3.3-tiny-random-dim64-GGUF/blob/main/llama-3.3-tiny-random-dim64-Q6_K.gguf) | Q6_K | 0.017 GB | very large, extremely low quality loss |
| [llama-3.3-tiny-random-dim64-Q8_0.gguf](https://huggingface.co/tensorblock/yujiepan_llama-3.3-tiny-random-dim64-GGUF/blob/main/llama-3.3-tiny-random-dim64-Q8_0.gguf) | Q8_0 | 0.017 GB | very large, extremely low quality loss - not recommended |

## Downloading instructions

### Command line

First, install the Hugging Face CLI:

```shell
pip install -U "huggingface_hub[cli]"
```

Then, download an individual model file to a local directory:

```shell
huggingface-cli download tensorblock/yujiepan_llama-3.3-tiny-random-dim64-GGUF --include "llama-3.3-tiny-random-dim64-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```

If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:

```shell
huggingface-cli download tensorblock/yujiepan_llama-3.3-tiny-random-dim64-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
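Once downloaded, one way to sanity-check a file locally is through the `llama-cpp-python` bindings. This is a minimal sketch, assuming `pip install llama-cpp-python` and the Q2_K file in `MY_LOCAL_DIR` from the commands above:

```python
from llama_cpp import Llama

# Path assumes the Q2_K file was downloaded to MY_LOCAL_DIR as shown above.
llm = Llama(
    model_path="MY_LOCAL_DIR/llama-3.3-tiny-random-dim64-Q2_K.gguf",
    n_ctx=2048,  # context window size
)

# The base model is a tiny random-weight test checkpoint, so the generated
# text is not meaningful; this call only verifies that the GGUF file loads
# and decodes tokens.
output = llm("Hello!", max_tokens=32)
print(output["choices"][0]["text"])
```

The same files can also be run directly with the `llama-cli` binary built from the llama.cpp commit referenced above.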
kodetr/stunting-7B-Deepseek
kodetr
2025-08-19T12:27:25Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "stunting", "kesehatan", "anak", "conversational", "id", "dataset:kodetr/penelitian-fundamental-stunting-qa", "base_model:deepseek-ai/DeepSeek-R1-Distill-Qwen-7B", "base_model:finetune:deepseek-ai/DeepSeek-R1-Distill-Qwen-7B", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-19T11:43:34Z
---
library_name: transformers
tags:
- stunting
- kesehatan
- anak
license: apache-2.0
datasets:
- kodetr/penelitian-fundamental-stunting-qa
language:
- id
metrics:
- rouge
- bleu
pipeline_tag: text-generation
base_model:
- deepseek-ai/DeepSeek-R1-Distill-Qwen-7B
---

### Model Description

<!-- Provide a longer summary of what this model is. -->

Q&A consultation on stunting in children.

- **Developed by:** Tanwir
- **Language:** Indonesian

### Training

![image/png](https://cdn-uploads.huggingface.co/production/uploads/65d6d2f8b06abf924b24349d/oa7SlyyoiWhrZCJNa-4ne.png)

### Parameters

```
"attention_dropout": 0.0,
"bos_token_id": 151643,
"eos_token_id": 151643,
"hidden_act": "silu",
"hidden_size": 3584,
"initializer_range": 0.02,
"intermediate_size": 18944,
"layer_types": [
  "full_attention", "full_attention", "full_attention", "full_attention",
  "full_attention", "full_attention", "full_attention", "full_attention",
  "full_attention", "full_attention", "full_attention", "full_attention",
  "full_attention", "full_attention", "full_attention", "full_attention",
  "full_attention", "full_attention", "full_attention", "full_attention",
  "full_attention", "full_attention", "full_attention", "full_attention",
  "full_attention", "full_attention", "full_attention", "full_attention"
],
"max_position_embeddings": 131072,
"max_window_layers": 28,
"model_type": "qwen2",
"num_attention_heads": 28,
"num_hidden_layers": 28,
"num_key_value_heads": 4,
"rms_norm_eps": 1e-06,
"rope_scaling": null,
"rope_theta": 10000,
"sliding_window": null,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.55.0",
"use_cache": true,
"use_mrope": false,
"use_sliding_window": false,
"vocab_size": 152064
```

### Use with transformers

Make sure your transformers installation is up to date via `pip install --upgrade transformers`.

```python
import torch
from transformers import pipeline

model_id = "kodetr/stunting-7B-Deepseek"
pipe = pipeline(
    "text-generation",
    model=model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
# Example turns are in Indonesian: the system prompt asks the model to explain
# the definition of the first 1000 days of life, and the user asks
# "What are the first 1000 days of life?".
messages = [
    {"role": "system", "content": "Jelaskan definisi 1000 hari pertama kehidupan."},
    {"role": "user", "content": "Apa itu 1000 hari pertama kehidupan?"},
]
outputs = pipe(
    messages,
    max_new_tokens=256,
)
print(outputs[0]["generated_text"][-1])
```
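Since the card lists ROUGE and BLEU as metrics, the snippet below is a minimal sketch of scoring generated answers against the dataset references with the Hugging Face `evaluate` library; it reuses `pipe` from the example above, and the `question`/`answer` column names are assumptions about the dataset schema:

```python
import evaluate
from datasets import load_dataset

# Column names "question" and "answer" are assumptions about the dataset schema.
ds = load_dataset("kodetr/penelitian-fundamental-stunting-qa", split="train[:8]")

predictions = []
for question in ds["question"]:
    out = pipe([{"role": "user", "content": question}], max_new_tokens=128)
    # For chat inputs, generated_text is the message list; take the last turn.
    predictions.append(out[0]["generated_text"][-1]["content"])

rouge = evaluate.load("rouge")
bleu = evaluate.load("bleu")
print(rouge.compute(predictions=predictions, references=ds["answer"]))
# BLEU expects a list of reference lists per prediction.
print(bleu.compute(predictions=predictions, references=[[a] for a in ds["answer"]]))
```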