| Column | Type | Min | Max |
|---|---|---|---|
| modelId | string (length) | 5 | 139 |
| author | string (length) | 2 | 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 | 2025-09-06 00:36:47 |
| downloads | int64 | 0 | 223M |
| likes | int64 | 0 | 11.7k |
| library_name | string (categorical) | 540 distinct values | |
| tags | list (length) | 1 | 4.05k |
| pipeline_tag | string (categorical) | 55 distinct values | |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 | 2025-09-06 00:36:27 |
| card | string (length) | 11 | 1.01M |
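The fields above are the standard Hub metadata for each model repository; the rows that follow are individual records. A minimal pandas sketch for inspecting a dump with this schema (the parquet filename is hypothetical):

```python
import pandas as pd

# Hypothetical shard of this metadata dump; columns match the schema table above.
df = pd.read_parquet("models_metadata.parquet")

print(df.dtypes)  # modelId/author/card as object, downloads/likes as int64
print(df.sort_values("downloads", ascending=False).head(10)[["modelId", "downloads", "likes"]])
```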
FluidInference/Qwen3-4B-fp16-ov
FluidInference
2025-08-12T02:45:58Z
0
0
null
[ "openvino", "qwen3", "base_model:Qwen/Qwen3-4B", "base_model:finetune:Qwen/Qwen3-4B", "license:apache-2.0", "region:us" ]
null
2025-08-12T00:00:40Z
--- license: apache-2.0 license_link: https://huggingface.co/Qwen/Qwen3-4B/blob/main/LICENSE base_model: - Qwen/Qwen3-4B --- # Qwen3-4B-fp16-ov * Model creator: [Qwen](https://huggingface.co/Qwen) * Original model: [Qwen3-4B](https://huggingface.co/Qwen/Qwen3-4B) ## Description This is the [Qwen3-4B](https://huggingface.co/Qwen/Qwen3-4B) model converted to the [OpenVINO™ IR](https://docs.openvino.ai/2025/documentation/openvino-ir-format.html) (Intermediate Representation) format with weights compressed to FP16. ## Compatibility The provided OpenVINO™ IR model is compatible with: * OpenVINO version 2025.1.0 and higher * Optimum Intel 1.24.0 and higher ## Running Model Inference with [Optimum Intel](https://huggingface.co/docs/optimum/intel/index) 1. Install the packages required for using the [Optimum Intel](https://huggingface.co/docs/optimum/intel/index) integration with the OpenVINO backend: ``` pip install optimum[openvino] ``` 2. Run model inference: ``` from transformers import AutoTokenizer from optimum.intel.openvino import OVModelForCausalLM model_id = "FluidInference/qwen3-4b-fp16-ov" tokenizer = AutoTokenizer.from_pretrained(model_id) model = OVModelForCausalLM.from_pretrained(model_id) inputs = tokenizer("What is OpenVINO?", return_tensors="pt") outputs = model.generate(**inputs, max_length=200) text = tokenizer.batch_decode(outputs)[0] print(text) ``` For more examples and possible optimizations, refer to the [Inference with Optimum Intel](https://docs.openvino.ai/2025/openvino-workflow-generative/inference-with-optimum-intel.html) guide. ## Running Model Inference with [OpenVINO GenAI](https://github.com/openvinotoolkit/openvino.genai) 1. Install the packages required for using OpenVINO GenAI: ``` pip install openvino-genai huggingface_hub ``` 2. Download the model from the Hugging Face Hub: ``` import huggingface_hub as hf_hub model_id = "FluidInference/qwen3-4b-fp16-ov" model_path = "qwen3-4b-fp16-ov" hf_hub.snapshot_download(model_id, local_dir=model_path) ``` 3. Run model inference: ``` import openvino_genai as ov_genai device = "CPU" pipe = ov_genai.LLMPipeline(model_path, device) pipe.get_tokenizer().set_chat_template(pipe.get_tokenizer().chat_template) print(pipe.generate("What is OpenVINO?", max_length=200)) ``` More GenAI usage examples can be found in the OpenVINO GenAI library [docs](https://docs.openvino.ai/2025/openvino-workflow-generative/inference-with-genai.html) and [samples](https://github.com/openvinotoolkit/openvino.genai?tab=readme-ov-file#openvino-genai-samples). You can find more detailed usage examples in the OpenVINO Notebooks: - [LLM](https://openvinotoolkit.github.io/openvino_notebooks/?search=LLM) - [RAG text generation](https://openvinotoolkit.github.io/openvino_notebooks/?search=RAG+system&tasks=Text+Generation) ## Limitations Check the original [model card](https://huggingface.co/Qwen/Qwen3-4B) for limitations. ## Legal information The original model is distributed under the [Apache License Version 2.0](https://huggingface.co/Qwen/Qwen3-4B/blob/main/LICENSE) license. More details can be found in [Qwen3-4B](https://huggingface.co/Qwen/Qwen3-4B). ## Disclaimer Intel is committed to respecting human rights and avoiding causing or contributing to adverse impacts on human rights. See [Intel’s Global Human Rights Principles](https://www.intel.com/content/dam/www/central-libraries/us/en/documents/policy-human-rights.pdf). Intel’s products and software are intended only to be used in applications that do not cause or contribute to adverse impacts on human rights.
jerrrycans/watermark20000
jerrrycans
2025-08-12T02:43:14Z
0
0
diffusers
[ "diffusers", "flux", "image-to-image", "lora", "replicate", "base_model:black-forest-labs/FLUX.1-Kontext-dev", "base_model:adapter:black-forest-labs/FLUX.1-Kontext-dev", "license:other", "region:us" ]
image-to-image
2025-08-12T01:28:04Z
--- license: other license_name: flux1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-Kontext-dev/blob/main/LICENSE.md tags: - flux - image-to-image - lora - diffusers - replicate base_model: black-forest-labs/FLUX.1-Kontext-dev pipeline_tag: image-to-image # widget: # - src: https://... # text: >- # prompt # output: # url: https://... instance_prompt: remove all the watermarks from this image, all watermarks that are over this image --- # Watermark20000 <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-Kontext-dev image-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using: https://replicate.com/replicate/fast-flux-kontext-trainer/train ## Prompt instruction You should use `remove all the watermarks from this image, all watermarks that are over this image` as part of the prompt instruction for your image-to-image editing. ## Training details - Steps: 20000 - Learning rate: 0.001 - LoRA rank: 16 ## Contribute your own examples You can use the [community tab](https://huggingface.co/jerrrycans/watermark20000/discussions) to add images that show off what you’ve made with this LoRA.
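A minimal diffusers sketch for applying this LoRA (assumes a recent diffusers release that ships `FluxKontextPipeline`; the input filename and guidance value are illustrative):

```python
import torch
from diffusers import FluxKontextPipeline
from diffusers.utils import load_image

# Load the base FLUX.1-Kontext-dev pipeline (its license must be accepted on the Hub).
pipe = FluxKontextPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-Kontext-dev", torch_dtype=torch.bfloat16
).to("cuda")

# Attach this LoRA on top of the base model.
pipe.load_lora_weights("jerrrycans/watermark20000")

# Edit an input image using the trigger instruction from this card.
input_image = load_image("watermarked.png")  # hypothetical local file
image = pipe(
    image=input_image,
    prompt="remove all the watermarks from this image, all watermarks that are over this image",
    guidance_scale=2.5,  # illustrative value
).images[0]
image.save("clean.png")
```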
afasdfdfadsf/blockassist-bc-rough_opaque_clam_1754966435
afasdfdfadsf
2025-08-12T02:42:19Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "rough opaque clam", "arxiv:2504.07091", "region:us" ]
null
2025-08-12T02:41:22Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - rough opaque clam --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
mdavidson83/Qwen3-4B-Instruct-2507-INT4
mdavidson83
2025-08-12T02:41:43Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "compressed-tensors", "region:us" ]
text-generation
2025-08-12T02:40:33Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
imgailab/sdxl-trtx-bf16-blackwell
imgailab
2025-08-12T02:40:59Z
0
0
tensorrt-rtx
[ "tensorrt-rtx", "sdxl", "blackwell", "bf16", "optimized", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0", "license:apache-2.0", "region:us" ]
null
2025-08-12T02:40:27Z
--- library_name: tensorrt-rtx license: apache-2.0 base_model: stabilityai/stable-diffusion-xl-base-1.0 tags: - tensorrt-rtx - sdxl - blackwell - bf16 - optimized inference: false --- # SDXL TensorRT-RTX: BF16-Blackwell 🚀 Ready Optimized TensorRT-RTX engines for **SDXL** on the **Blackwell** architecture with **BF16** quantization. ## 🎯 This Repository **One variant, one download** - only get exactly what you need! - **Model**: SDXL - **Architecture**: Blackwell (Compute Capability 8.0+) - **Quantization**: BF16 - **Memory**: ~4-6GB VRAM - **Speed**: ~2.5s (H200) for 1024x1024 generation ## 🚀 Quick Start ### Automatic (Recommended) ```bash # ImageAI server downloads automatically curl -X POST "http://localhost:8001/generate" \ -H "Content-Type: application/json" \ -d '{ "prompt": "a beautiful landscape", "model": "sdxl-tensorrt_rtx:bf16", "width": 1024, "height": 1024 }' ``` ### Manual Download ```python from huggingface_hub import snapshot_download # Download this specific variant only engines_path = snapshot_download( repo_id="imgailab/sdxl-trtx-bf16-blackwell" ) # Engines are in: engines_path/engines/*.plan ``` ### Direct Integration ```python from imageai_server.tensorrt.nvidia_sdxl_pipeline import NVIDIASDXLPipeline pipeline = NVIDIASDXLPipeline() pipeline.load_engines( engine_dir=f"{engines_path}/engines", framework_model_dir=f"{engines_path}/framework", onnx_dir=f"{engines_path}/onnx" ) pipeline.activate_engines() images, time_ms = pipeline.infer( prompt="a serene mountain landscape", height=1024, width=1024 ) ``` ## 📊 Performance | Metric | Value | |--------|-------| | **Memory Usage** | ~4-6GB VRAM | | **Inference Speed** | ~2.5s (H200) | | **Resolution** | 1024x1024 (optimized) | | **Batch Size** | 1 (optimized) | | **Precision** | BF16 | ## 🔧 Requirements ### Hardware - **GPU**: Blackwell architecture (e.g., RTX 5090, B200) - **VRAM**: ~4-6GB minimum - **Compute Capability**: 8.0+ ### Software - **TensorRT-RTX**: 1.0.0.21+ - **CUDA**: 12.0+ - **Python**: 3.8+ ## 📁 Repository Structure ``` sdxl-trtx-bf16-blackwell/ ├── engines/ # TensorRT engine files │ ├── *.plan # Optimized engines ├── config.json # Configuration metadata └── README.md # This file ``` ## 🌐 Related Repositories Other variants for SDXL: - [Ampere BF16](https://huggingface.co/imgailab/sdxl-trtx-bf16-ampere) - [Ada FP8](https://huggingface.co/imgailab/sdxl-trtx-fp8-ada) - [Ada BF16](https://huggingface.co/imgailab/sdxl-trtx-bf16-ada) - [Blackwell FP4](https://huggingface.co/imgailab/sdxl-trtx-fp4-blackwell) - [Blackwell FP8](https://huggingface.co/imgailab/sdxl-trtx-fp8-blackwell) ## 📝 License Inherits license from base model: [stabilityai/stable-diffusion-xl-base-1.0](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0) ## 🔄 Updates - **2025-08-12**: Initial release - Optimized for single-variant downloads --- *Part of the ImageAI TensorRT-RTX engine collection*
jerrrycans/gfx20000
jerrrycans
2025-08-12T02:38:50Z
0
0
diffusers
[ "diffusers", "flux", "image-to-image", "lora", "replicate", "base_model:black-forest-labs/FLUX.1-Kontext-dev", "base_model:adapter:black-forest-labs/FLUX.1-Kontext-dev", "license:other", "region:us" ]
image-to-image
2025-08-12T01:18:34Z
--- license: other license_name: flux1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-Kontext-dev/blob/main/LICENSE.md tags: - flux - image-to-image - lora - diffusers - replicate base_model: black-forest-labs/FLUX.1-Kontext-dev pipeline_tag: image-to-image # widget: # - src: https://... # text: >- # prompt # output: # url: https://... instance_prompt: render this image into a gfx image --- # Gfx20000 <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-Kontext-dev image-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using: https://replicate.com/replicate/fast-flux-kontext-trainer/train ## Prompt instruction You should use `render this image into a gfx image` as part of the prompt instruction for your image-to-image editing. ## Training details - Steps: 20000 - Learning rate: 0.001 - LoRA rank: 16 ## Contribute your own examples You can use the [community tab](https://huggingface.co/jerrrycans/gfx20000/discussions) to add images that show off what you’ve made with this LoRA.
imgailab/flux1-trtx-schnell-fp8-blackwell
imgailab
2025-08-12T02:38:10Z
0
0
tensorrt-rtx
[ "tensorrt-rtx", "flux1-schnell", "flux1", "fp8", "schnell", "optimized", "base_model:black-forest-labs/FLUX.1-schnell", "base_model:finetune:black-forest-labs/FLUX.1-schnell", "license:apache-2.0", "region:us" ]
null
2025-08-12T02:38:07Z
--- library_name: tensorrt-rtx license: apache-2.0 base_model: black-forest-labs/FLUX.1-schnell tags: - tensorrt-rtx - flux1 - fp8 - schnell - optimized inference: false --- # FLUX.1-schnell TensorRT-RTX: FP8-Blackwell 🔨 Building Optimized TensorRT-RTX engines for **FLUX.1-schnell** on the **Blackwell** architecture with **FP8** quantization. ## 🎯 This Repository **One variant, one download** - only get exactly what you need! - **Model**: FLUX.1-schnell - **Architecture**: Blackwell (Compute Capability 8.0+) - **Quantization**: FP8 - **Memory**: TBD - **Speed**: TBD for 1024x1024 generation ## 🚀 Quick Start ### Automatic (Recommended) ```bash # ImageAI server downloads automatically curl -X POST "http://localhost:8001/generate" \ -H "Content-Type: application/json" \ -d '{ "prompt": "a beautiful landscape", "model": "flux1-tensorrt_rtx:schnell", "width": 1024, "height": 1024 }' ``` ### Manual Download ```python from huggingface_hub import snapshot_download # Download this specific variant only engines_path = snapshot_download( repo_id="imgailab/flux1-trtx-schnell-fp8-blackwell" ) # Engines are in: engines_path/engines/*.plan ``` ### Direct Integration ```python from imageai_server.tensorrt.nvidia_sdxl_pipeline import NVIDIASDXLPipeline pipeline = NVIDIASDXLPipeline() pipeline.load_engines( engine_dir=f"{engines_path}/engines", framework_model_dir=f"{engines_path}/framework", onnx_dir=f"{engines_path}/onnx" ) pipeline.activate_engines() images, time_ms = pipeline.infer( prompt="a serene mountain landscape", height=1024, width=1024 ) ``` ## 📊 Performance | Metric | Value | |--------|-------| | **Memory Usage** | TBD | | **Inference Speed** | TBD | | **Resolution** | 1024x1024 (optimized) | | **Batch Size** | 1 (optimized) | | **Precision** | FP8 | ## 🔧 Requirements ### Hardware - **GPU**: Blackwell architecture (e.g., RTX 5090, B200) - **VRAM**: TBD minimum - **Compute Capability**: 8.0+ ### Software - **TensorRT-RTX**: 1.0.0.21+ - **CUDA**: 12.0+ - **Python**: 3.8+ ## 📁 Repository Structure ``` flux1-trtx-schnell-fp8-blackwell/ ├── engines/ # TensorRT engine files │ ├── *.plan # Optimized engines ├── config.json # Configuration metadata └── README.md # This file ``` ## 🌐 Related Repositories Other variants for FLUX1: - [Ampere BF16](https://huggingface.co/imgailab/flux1-trtx-bf16-ampere) - [Ada FP8](https://huggingface.co/imgailab/flux1-trtx-fp8-ada) - [Ada BF16](https://huggingface.co/imgailab/flux1-trtx-bf16-ada) - [Blackwell FP4](https://huggingface.co/imgailab/flux1-trtx-fp4-blackwell) - [Blackwell FP8](https://huggingface.co/imgailab/flux1-trtx-fp8-blackwell) - [Blackwell BF16](https://huggingface.co/imgailab/flux1-trtx-bf16-blackwell) ## 📝 License Inherits license from base model: [black-forest-labs/FLUX.1-schnell](https://huggingface.co/black-forest-labs/FLUX.1-schnell) ## 🔄 Updates - **2025-08-12**: Initial release - Optimized for single-variant downloads --- *Part of the ImageAI TensorRT-RTX engine collection*
imgailab/flux1-trtx-schnell-fp4-blackwell
imgailab
2025-08-12T02:38:02Z
0
0
tensorrt-rtx
[ "tensorrt-rtx", "flux1-schnell", "flux1", "fp4", "schnell", "optimized", "base_model:black-forest-labs/FLUX.1-schnell", "base_model:finetune:black-forest-labs/FLUX.1-schnell", "license:apache-2.0", "region:us" ]
null
2025-08-12T02:37:59Z
--- library_name: tensorrt-rtx license: apache-2.0 base_model: black-forest-labs/FLUX.1-schnell tags: - tensorrt-rtx - flux1 - fp4 - schnell - optimized inference: false --- # FLUX.1-schnell TensorRT-RTX: FP4-Blackwell 🔨 Building Optimized TensorRT-RTX engines for **FLUX.1-schnell** on the **Blackwell** architecture with **FP4** quantization. ## 🎯 This Repository **One variant, one download** - only get exactly what you need! - **Model**: FLUX.1-schnell - **Architecture**: Blackwell (Compute Capability 8.0+) - **Quantization**: FP4 - **Memory**: TBD - **Speed**: TBD for 1024x1024 generation ## 🚀 Quick Start ### Automatic (Recommended) ```bash # ImageAI server downloads automatically curl -X POST "http://localhost:8001/generate" \ -H "Content-Type: application/json" \ -d '{ "prompt": "a beautiful landscape", "model": "flux1-tensorrt_rtx:schnell", "width": 1024, "height": 1024 }' ``` ### Manual Download ```python from huggingface_hub import snapshot_download # Download this specific variant only engines_path = snapshot_download( repo_id="imgailab/flux1-trtx-schnell-fp4-blackwell" ) # Engines are in: engines_path/engines/*.plan ``` ### Direct Integration ```python from imageai_server.tensorrt.nvidia_sdxl_pipeline import NVIDIASDXLPipeline pipeline = NVIDIASDXLPipeline() pipeline.load_engines( engine_dir=f"{engines_path}/engines", framework_model_dir=f"{engines_path}/framework", onnx_dir=f"{engines_path}/onnx" ) pipeline.activate_engines() images, time_ms = pipeline.infer( prompt="a serene mountain landscape", height=1024, width=1024 ) ``` ## 📊 Performance | Metric | Value | |--------|-------| | **Memory Usage** | TBD | | **Inference Speed** | TBD | | **Resolution** | 1024x1024 (optimized) | | **Batch Size** | 1 (optimized) | | **Precision** | FP4 | ## 🔧 Requirements ### Hardware - **GPU**: Blackwell architecture (e.g., RTX 5090, B200) - **VRAM**: TBD minimum - **Compute Capability**: 8.0+ ### Software - **TensorRT-RTX**: 1.0.0.21+ - **CUDA**: 12.0+ - **Python**: 3.8+ ## 📁 Repository Structure ``` flux1-trtx-schnell-fp4-blackwell/ ├── engines/ # TensorRT engine files │ ├── *.plan # Optimized engines ├── config.json # Configuration metadata └── README.md # This file ``` ## 🌐 Related Repositories Other variants for FLUX1: - [Ampere BF16](https://huggingface.co/imgailab/flux1-trtx-bf16-ampere) - [Ada FP8](https://huggingface.co/imgailab/flux1-trtx-fp8-ada) - [Ada BF16](https://huggingface.co/imgailab/flux1-trtx-bf16-ada) - [Blackwell FP4](https://huggingface.co/imgailab/flux1-trtx-fp4-blackwell) - [Blackwell FP8](https://huggingface.co/imgailab/flux1-trtx-fp8-blackwell) - [Blackwell BF16](https://huggingface.co/imgailab/flux1-trtx-bf16-blackwell) ## 📝 License Inherits license from base model: [black-forest-labs/FLUX.1-schnell](https://huggingface.co/black-forest-labs/FLUX.1-schnell) ## 🔄 Updates - **2025-08-12**: Initial release - Optimized for single-variant downloads --- *Part of the ImageAI TensorRT-RTX engine collection*
imgailab/flux1-trtx-schnell-bf16-blackwell
imgailab
2025-08-12T02:37:58Z
0
0
tensorrt-rtx
[ "tensorrt-rtx", "flux1-schnell", "flux1", "bf16", "schnell", "optimized", "base_model:black-forest-labs/FLUX.1-schnell", "base_model:finetune:black-forest-labs/FLUX.1-schnell", "license:apache-2.0", "region:us" ]
null
2025-08-12T02:37:55Z
--- library_name: tensorrt-rtx license: apache-2.0 base_model: black-forest-labs/FLUX.1-schnell tags: - tensorrt-rtx - flux1 - bf16 - schnell - optimized inference: false --- # FLUX.1-schnell TensorRT-RTX: BF16-Blackwell 🔨 Building Optimized TensorRT-RTX engines for **FLUX.1-schnell** on the **Blackwell** architecture with **BF16** quantization. ## 🎯 This Repository **One variant, one download** - only get exactly what you need! - **Model**: FLUX.1-schnell - **Architecture**: Blackwell (Compute Capability 8.0+) - **Quantization**: BF16 - **Memory**: TBD - **Speed**: TBD for 1024x1024 generation ## 🚀 Quick Start ### Automatic (Recommended) ```bash # ImageAI server downloads automatically curl -X POST "http://localhost:8001/generate" \ -H "Content-Type: application/json" \ -d '{ "prompt": "a beautiful landscape", "model": "flux1-tensorrt_rtx:schnell", "width": 1024, "height": 1024 }' ``` ### Manual Download ```python from huggingface_hub import snapshot_download # Download this specific variant only engines_path = snapshot_download( repo_id="imgailab/flux1-trtx-schnell-bf16-blackwell" ) # Engines are in: engines_path/engines/*.plan ``` ### Direct Integration ```python from imageai_server.tensorrt.nvidia_sdxl_pipeline import NVIDIASDXLPipeline pipeline = NVIDIASDXLPipeline() pipeline.load_engines( engine_dir=f"{engines_path}/engines", framework_model_dir=f"{engines_path}/framework", onnx_dir=f"{engines_path}/onnx" ) pipeline.activate_engines() images, time_ms = pipeline.infer( prompt="a serene mountain landscape", height=1024, width=1024 ) ``` ## 📊 Performance | Metric | Value | |--------|-------| | **Memory Usage** | TBD | | **Inference Speed** | TBD | | **Resolution** | 1024x1024 (optimized) | | **Batch Size** | 1 (optimized) | | **Precision** | BF16 | ## 🔧 Requirements ### Hardware - **GPU**: Blackwell architecture (e.g., RTX 5090, B200) - **VRAM**: TBD minimum - **Compute Capability**: 8.0+ ### Software - **TensorRT-RTX**: 1.0.0.21+ - **CUDA**: 12.0+ - **Python**: 3.8+ ## 📁 Repository Structure ``` flux1-trtx-schnell-bf16-blackwell/ ├── engines/ # TensorRT engine files │ ├── *.plan # Optimized engines ├── config.json # Configuration metadata └── README.md # This file ``` ## 🌐 Related Repositories Other variants for FLUX1: - [Ampere BF16](https://huggingface.co/imgailab/flux1-trtx-bf16-ampere) - [Ada FP8](https://huggingface.co/imgailab/flux1-trtx-fp8-ada) - [Ada BF16](https://huggingface.co/imgailab/flux1-trtx-bf16-ada) - [Blackwell FP4](https://huggingface.co/imgailab/flux1-trtx-fp4-blackwell) - [Blackwell FP8](https://huggingface.co/imgailab/flux1-trtx-fp8-blackwell) - [Blackwell BF16](https://huggingface.co/imgailab/flux1-trtx-bf16-blackwell) ## 📝 License Inherits license from base model: [black-forest-labs/FLUX.1-schnell](https://huggingface.co/black-forest-labs/FLUX.1-schnell) ## 🔄 Updates - **2025-08-12**: Initial release - Optimized for single-variant downloads --- *Part of the ImageAI TensorRT-RTX engine collection*
imgailab/flux1-trtx-dev-fp8-ada
imgailab
2025-08-12T02:37:46Z
0
0
tensorrt-rtx
[ "tensorrt-rtx", "flux1-dev", "flux1", "fp8", "dev", "optimized", "base_model:black-forest-labs/FLUX.1-dev", "base_model:finetune:black-forest-labs/FLUX.1-dev", "license:apache-2.0", "region:us" ]
null
2025-08-12T02:37:43Z
--- library_name: tensorrt-rtx license: apache-2.0 base_model: black-forest-labs/FLUX.1-dev tags: - tensorrt-rtx - flux1 - fp8 - dev - optimized inference: false --- # FLUX.1-dev TensorRT-RTX: FP8-Ada 🔨 Building Optimized TensorRT-RTX engines for **FLUX.1-dev** on the **Ada Lovelace** architecture with **FP8** quantization. ## 🎯 This Repository **One variant, one download** - only get exactly what you need! - **Model**: FLUX.1-dev - **Architecture**: Ada Lovelace (Compute Capability 8.0+) - **Quantization**: FP8 - **Memory**: TBD - **Speed**: TBD for 1024x1024 generation ## 🚀 Quick Start ### Automatic (Recommended) ```bash # ImageAI server downloads automatically curl -X POST "http://localhost:8001/generate" \ -H "Content-Type: application/json" \ -d '{ "prompt": "a beautiful landscape", "model": "flux1-tensorrt_rtx:dev", "width": 1024, "height": 1024 }' ``` ### Manual Download ```python from huggingface_hub import snapshot_download # Download this specific variant only engines_path = snapshot_download( repo_id="imgailab/flux1-trtx-dev-fp8-ada" ) # Engines are in: engines_path/engines/*.plan ``` ### Direct Integration ```python from imageai_server.tensorrt.nvidia_sdxl_pipeline import NVIDIASDXLPipeline pipeline = NVIDIASDXLPipeline() pipeline.load_engines( engine_dir=f"{engines_path}/engines", framework_model_dir=f"{engines_path}/framework", onnx_dir=f"{engines_path}/onnx" ) pipeline.activate_engines() images, time_ms = pipeline.infer( prompt="a serene mountain landscape", height=1024, width=1024 ) ``` ## 📊 Performance | Metric | Value | |--------|-------| | **Memory Usage** | TBD | | **Inference Speed** | TBD | | **Resolution** | 1024x1024 (optimized) | | **Batch Size** | 1 (optimized) | | **Precision** | FP8 | ## 🔧 Requirements ### Hardware - **GPU**: Ada Lovelace architecture (e.g., RTX 4090) - **VRAM**: TBD minimum - **Compute Capability**: 8.0+ ### Software - **TensorRT-RTX**: 1.0.0.21+ - **CUDA**: 12.0+ - **Python**: 3.8+ ## 📁 Repository Structure ``` flux1-trtx-dev-fp8-ada/ ├── engines/ # TensorRT engine files │ ├── *.plan # Optimized engines ├── config.json # Configuration metadata └── README.md # This file ``` ## 🌐 Related Repositories Other variants for FLUX1: - [Ampere BF16](https://huggingface.co/imgailab/flux1-trtx-bf16-ampere) - [Ada FP8](https://huggingface.co/imgailab/flux1-trtx-fp8-ada) - [Ada BF16](https://huggingface.co/imgailab/flux1-trtx-bf16-ada) - [Blackwell FP4](https://huggingface.co/imgailab/flux1-trtx-fp4-blackwell) - [Blackwell FP8](https://huggingface.co/imgailab/flux1-trtx-fp8-blackwell) - [Blackwell BF16](https://huggingface.co/imgailab/flux1-trtx-bf16-blackwell) ## 📝 License Inherits license from base model: [black-forest-labs/FLUX.1-dev](https://huggingface.co/black-forest-labs/FLUX.1-dev) ## 🔄 Updates - **2025-08-12**: Initial release - Optimized for single-variant downloads --- *Part of the ImageAI TensorRT-RTX engine collection*
Osrivers/ponyRealism_V23ULTRA.safetensors
Osrivers
2025-08-12T02:36:07Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2025-08-12T02:32:22Z
--- license: creativeml-openrail-m ---
andr0m4da/blockassist-bc-grazing_hunting_boar_1754966105
andr0m4da
2025-08-12T02:35:56Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "grazing hunting boar", "arxiv:2504.07091", "region:us" ]
null
2025-08-12T02:35:50Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - grazing hunting boar --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
zjunlp/DataMind-Qwen2.5-7B
zjunlp
2025-08-12T02:35:18Z
6
2
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "data-analysis", "code-generation", "qwen", "conversational", "arxiv:2506.19794", "base_model:Qwen/Qwen2.5-7B-Instruct", "base_model:finetune:Qwen/Qwen2.5-7B-Instruct", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-07-19T06:38:36Z
--- base_model: - Qwen/Qwen2.5-7B-Instruct license: apache-2.0 pipeline_tag: text-generation library_name: transformers tags: - data-analysis - code-generation - qwen --- This repository contains the **DataMind-Qwen2.5-7B** model, which was presented in the paper [Why Do Open-Source LLMs Struggle with Data Analysis? A Systematic Empirical Study](https://huggingface.co/papers/2506.19794). **Paper Abstract:** Large Language Models (LLMs) hold promise in automating data analysis tasks, yet open-source models face significant limitations in these kinds of reasoning-intensive scenarios. In this work, we investigate strategies to enhance the data analysis capabilities of open-source LLMs. By curating a seed dataset of diverse, realistic scenarios, we evaluate model behavior across three core dimensions: data understanding, code generation, and strategic planning. Our analysis reveals three key findings: (1) Strategic planning quality serves as the primary determinant of model performance; (2) Interaction design and task complexity significantly influence reasoning capabilities; (3) Data quality demonstrates a greater impact than diversity in achieving optimal performance. We leverage these insights to develop a data synthesis methodology, demonstrating significant improvements in open-source LLMs' analytical reasoning capabilities. For more details, visit the official [DataMind GitHub repository](https://github.com/zjunlp/DataMind). <h1 align="center"> ✨ DataMind </h1> ## 🔧 Installation #### 🔩Manual Environment Configuration Conda virtual environments offer a light and flexible setup. **Prerequisites** - Anaconda Installation - GPU support (recommended CUDA version: 12.4) **Configure Steps** 1. Clone the repository: ```bash git clone https://github.com/zjunlp/DataMind.git ``` 2. Enter the working directory, and all subsequent commands should be executed in this directory. ```bash cd DataMind/eval ``` 3. Create a virtual environment using `Anaconda`. ```bash conda create -n DataMind python=3.10 conda activate DataMind ``` 4. Install all required Python packages. ```bash pip install -r requirements.txt ``` ## Usage (Text Generation for Data Analysis) You can use this model with the Hugging Face `transformers` library for text generation, particularly for data analysis and code generation tasks. First, ensure you have the `transformers` library installed: ```bash pip install transformers torch ``` Then, you can load and use the model as follows: ```python from transformers import AutoModelForCausalLM, AutoTokenizer import torch model_name = "zjunlp/DataMind-Qwen2.5-7B" # Or zjunlp/DataMind-Qwen2.5-14B, if available # Load the model and tokenizer # Use torch_dtype=torch.bfloat16 for better performance on compatible GPUs # Use device_map="auto" to automatically distribute the model across available devices model = AutoModelForCausalLM.from_pretrained( model_name, torch_dtype=torch.bfloat16, device_map="auto", trust_remote_code=True, ) tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True) # Example: Generate Python code for data analysis messages = [ {"role": "user", "content": "I have a CSV file named 'sales_data.csv' with columns 'Date', 'Product', 'Quantity', 'Price'. 
Write Python code using pandas to calculate the total revenue for each product and save it to a new CSV file named 'product_revenue.csv'."} ] # Apply chat template for Qwen models text = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) model_inputs = tokenizer([text], return_tensors="pt").to(model.device) # Generate response generated_ids = model.generate( model_inputs.input_ids, max_new_tokens=512, do_sample=True, temperature=0.7, top_p=0.8, repetition_penalty=1.05, eos_token_id=tokenizer.eos_token_id, # Ensure generation stops at EOS token ) # Decode and print only the newly generated tokens response = tokenizer.decode(generated_ids[0][len(model_inputs.input_ids[0]):], skip_special_tokens=True) print(response) ``` ## 🧐 Evaluation > Note: > > - **Ensure** that your working directory is set to the **`eval`** folder in a virtual environment. > - If you have more questions, feel free to open an issue with us. > - If you need to use a local model, you need to deploy it according to **(Optional)`local_model.sh`**. **Step 1: Prepare the parameter configuration** The evaluation datasets we used are in [QRData](https://github.com/xxxiaol/QRData) and [DiscoveryBench](https://github.com/allenai/discoverybench). The script expects data to be at `data/QRData/benchmark/data/*.csv` and `data/DiscoveryBench/*.csv`. You can also download our SFT models directly from Hugging Face: [DataMind-Qwen2.5-7B](https://huggingface.co/zjunlp/DataMind-Qwen2.5-7B), [DataMind-Qwen2.5-14B](https://huggingface.co/zjunlp/DataMind-Qwen2.5-14B). Here is an example: **`config.yaml`** ```yaml api_key: your_api_key # your API key for the model with API service. No need for open-source models. data_root: /path/to/your/project/DataMind/eval/data # Root directory for data. (absolute path) ``` **`run_eval.sh`** ```bash python do_generate.py \ --model_name DataMind-Qwen2.5-7B \ # Model name to use. --check_model gpt-4o-mini \ # Check model to use. --output results \ # Output directory path. --dataset_name QRData \ # Dataset name to use, chosen from QRData, DiscoveryBench. --max_round 25 \ # Maximum number of steps. --api_port 8000 \ # API port number, it is necessary if the local model is used. --bidx 0 \ # Begin index (inclusive), `None` indicates that there is no restriction. --eidx None \ # End index (exclusive), `None` indicates that there is no restriction. --temperature 0.0 \ # Temperature for sampling. --top_p 1 \ # Top p for sampling. --add_random False \ # Whether to add random files. ``` **(Optional)`local_model.sh`** ```bash CUDA_VISIBLE_DEVICES=$i python -m vllm.entrypoints.openai.api_server \ --model $MODEL_PATH \ # Local model path. --served-model-name $MODEL_NAME \ # The model name specified by you. --tensor-parallel-size $i \ # Set the size of tensor parallel processing. --port $port # API port number, which is consistent with the `api_port` above. ``` **Step 2: Run the shell script** **(Optional)** Deploy the local model if you need it. ```bash bash local_model.sh ``` Run the shell script to start the process. ```bash bash run_eval.sh ``` ## ✍️ Citation If you find our work helpful, please use the following citations. ``` @article{zhu2025open, title={Why Do Open-Source LLMs Struggle with Data Analysis? A Systematic Empirical Study}, author={Zhu, Yuqi and Zhong, Yi and Zhang, Jintian and Zhang, Ziheng and Qiao, Shuofei and Luo, Yujie and Du, Lun and Zheng, Da and Chen, Huajun and Zhang, Ningyu}, journal={arXiv preprint arXiv:2506.19794}, year={2025} } ```
motza0025/blockassist-bc-mangy_grassy_barracuda_1754964722
motza0025
2025-08-12T02:35:03Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "mangy grassy barracuda", "arxiv:2504.07091", "region:us" ]
null
2025-08-12T02:34:43Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - mangy grassy barracuda --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
hobson123/blockassist-bc-mammalian_dense_gibbon_1754965687
hobson123
2025-08-12T02:34:03Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "mammalian dense gibbon", "arxiv:2504.07091", "region:us" ]
null
2025-08-12T02:33:50Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - mammalian dense gibbon --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
HAV0X1014/Chroma-KF-Bald-Eagle-LoRA
HAV0X1014
2025-08-12T02:31:21Z
0
0
diffusers
[ "diffusers", "text-to-image", "lora", "template:diffusion-lora", "base_model:lodestones/Chroma", "base_model:adapter:lodestones/Chroma", "region:us" ]
text-to-image
2025-08-12T02:31:15Z
--- tags: - text-to-image - lora - diffusers - template:diffusion-lora widget: - output: url: images/ComfyUI_01096_.png text: >- hakutowashi_kemono_friends anime style image of a young woman at a cafe serving coffee on a platter. she is winking at the viewer, looking down at the viewer, 1girl, cowboy shot parameters: negative_prompt: >- low quality, bad anatomy, extra digits, missing digits, extra limbs, missing limbs - output: url: images/image_2025-08-11_222732311.png text: >- hakutowashi_kemono_friends anime style image of her walking in a forest, looking down at the viewer with her arms crossed, light smile. 1girl, solo, high detail, highres, full body, from below parameters: negative_prompt: >- low quality, bad anatomy, extra digits, missing digits, extra limbs, missing limbs - output: url: images/image_2025-08-11_223045458.png text: >- hakutowashi_kemono_friends anime style image of a girl sitting in a coffee shop eating a sandwich, looking at the viewer, ? parameters: negative_prompt: >- low quality, bad anatomy, extra digits, missing digits, extra limbs, missing limbs base_model: lodestones/Chroma instance_prompt: hakutowashi_kemono_friends --- # Bald Eagle (Kemono Friends) <Gallery /> ## Model description This was trained on 39 images of Bald Eagle from Kemono Friends, made with ai-toolkit on a single 5090 from vast.ai. It was trained to 4500 steps using a learning rate of 2.5e-5, batch size 1. My luck of choosing the wrong settings struck again: I chose to use "bald_eagle_kemono_friends" as the trigger word on the first try, and for some reason it made a ton of artifacts and general goopiness. I changed the trigger word to "hakutowashi_kemono_friends" and it adhered much better to her appearance. I also trained it at 512 and 1024 resolutions on the second try, which helped things a bit. I trained this on v48 of Chroma if I remember correctly; however, it works perfectly on the final release of Chroma1-HD-Annealed (Chroma v50). It sometimes makes her with a yellow skirt or white pants for some reason; I don't know how, or if, it's possible to fix that. It can also make some decent photos; you'll need to use 'real photo' at the start and 'shot with canon r5, f/4.0, 85mm, cosplay', etc. My training data might not have been the best because it keeps making her without her tail or with other weird issues, but it works somewhat. ## Trigger words You should use `hakutowashi_kemono_friends` to trigger the image generation. ## Download model [Download](/HAV0X1014/Chroma-KF-Bald-Eagle-LoRA/tree/main) them in the Files & versions tab.
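A minimal sketch for fetching the files with `huggingface_hub` (the exact .safetensors filename is not stated in the card; check the Files & versions tab):

```python
from huggingface_hub import snapshot_download

# Download the LoRA repository (safetensors weights plus example images).
local_dir = snapshot_download("HAV0X1014/Chroma-KF-Bald-Eagle-LoRA")
print(local_dir)  # copy the .safetensors file into ComfyUI's models/loras/ directory
```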
afasdfdfadsf/blockassist-bc-rough_opaque_clam_1754965734
afasdfdfadsf
2025-08-12T02:30:37Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "rough opaque clam", "arxiv:2504.07091", "region:us" ]
null
2025-08-12T02:29:42Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - rough opaque clam --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
prakod/codemix-hi_enhi_4.8
prakod
2025-08-12T02:26:57Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "mbart", "text2text-generation", "generated_from_trainer", "base_model:ai4bharat/IndicBART", "base_model:finetune:ai4bharat/IndicBART", "endpoints_compatible", "region:us" ]
null
2025-08-11T17:35:28Z
--- library_name: transformers base_model: ai4bharat/IndicBART tags: - generated_from_trainer metrics: - bleu model-index: - name: codemix-hi_enhi_4.8 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # codemix-hi_enhi_4.8 This model is a fine-tuned version of [ai4bharat/IndicBART](https://huggingface.co/ai4bharat/IndicBART) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.2552 - Bleu: 13.9653 - Gen Len: 20.928 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-06 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len | |:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:| | 4.4983 | 1.0 | 2787 | 3.8967 | 13.2892 | 20.959 | | 3.9008 | 2.0 | 5574 | 3.5217 | 12.6416 | 20.963 | | 3.7108 | 3.0 | 8361 | 3.3617 | 13.5284 | 20.973 | | 3.6022 | 4.0 | 11148 | 3.2797 | 13.9418 | 20.933 | | 3.5622 | 5.0 | 13935 | 3.2552 | 13.9653 | 20.928 | ### Framework versions - Transformers 4.52.4 - Pytorch 2.6.0+cu124 - Datasets 3.6.0 - Tokenizers 0.21.2
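A minimal inference sketch, assuming the checkpoint loads through the standard `transformers` seq2seq API; IndicBART normally expects sentencepiece tokenization and target-language tags, which are omitted here and may be needed in practice:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "prakod/codemix-hi_enhi_4.8"
tokenizer = AutoTokenizer.from_pretrained(model_id, use_fast=False)  # IndicBART ships a sentencepiece tokenizer
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Generate a code-mixed (Hindi-English) rendering of the input sentence.
inputs = tokenizer("I am going to the market today", return_tensors="pt")
with torch.no_grad():
    output_ids = model.generate(**inputs, max_length=64, num_beams=4)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```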
hdabare/aus_slang_classifier
hdabare
2025-08-12T02:25:09Z
0
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:google-bert/bert-base-cased", "base_model:finetune:google-bert/bert-base-cased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-08-11T08:05:34Z
--- library_name: transformers license: apache-2.0 base_model: google-bert/bert-base-cased tags: - generated_from_trainer metrics: - accuracy model-index: - name: aus_slang_classifier results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # aus_slang_classifier This model is a fine-tuned version of [google-bert/bert-base-cased](https://huggingface.co/google-bert/bert-base-cased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0000 - Accuracy: 0.487 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:------:|:---------------:|:--------:| | 0.0005 | 1.0 | 1250 | 0.0002 | 0.487 | | 0.001 | 2.0 | 2500 | 0.0002 | 0.487 | | 0.0088 | 3.0 | 3750 | 0.0012 | 0.487 | | 0.0035 | 4.0 | 5000 | 0.0027 | 0.487 | | 0.0061 | 5.0 | 6250 | 0.0016 | 0.487 | | 0.0003 | 6.0 | 7500 | 0.0000 | 0.487 | | 0.0003 | 7.0 | 8750 | 0.0001 | 0.487 | | 0.0003 | 8.0 | 10000 | 0.0000 | 0.487 | | 0.0003 | 9.0 | 11250 | 0.0000 | 0.487 | | 0.0016 | 10.0 | 12500 | 0.0004 | 0.487 | | 0.0005 | 11.0 | 13750 | 0.0000 | 0.487 | | 0.0011 | 12.0 | 15000 | 0.0000 | 0.487 | | 0.0002 | 13.0 | 16250 | 0.0000 | 0.487 | | 0.0002 | 14.0 | 17500 | 0.0001 | 0.487 | | 0.0002 | 15.0 | 18750 | 0.0000 | 0.487 | | 0.0002 | 16.0 | 20000 | 0.0002 | 0.487 | | 0.0002 | 17.0 | 21250 | 0.0000 | 0.487 | | 0.0002 | 18.0 | 22500 | 0.0004 | 0.487 | | 0.0005 | 19.0 | 23750 | 0.0000 | 0.487 | | 0.0002 | 20.0 | 25000 | 0.0001 | 0.487 | | 0.0002 | 21.0 | 26250 | 0.0000 | 0.487 | | 0.0001 | 22.0 | 27500 | 0.0000 | 0.487 | | 0.0015 | 23.0 | 28750 | 0.0004 | 0.487 | | 0.0011 | 24.0 | 30000 | 0.0001 | 0.487 | | 0.0007 | 25.0 | 31250 | 0.0061 | 0.487 | | 0.0012 | 26.0 | 32500 | 0.0025 | 0.487 | | 0.0015 | 27.0 | 33750 | 0.0060 | 0.487 | | 0.0018 | 28.0 | 35000 | 0.0051 | 0.487 | | 0.0022 | 29.0 | 36250 | 0.0050 | 0.487 | | 0.0024 | 30.0 | 37500 | 0.0051 | 0.487 | | 0.0025 | 31.0 | 38750 | 0.0020 | 0.487 | | 0.0007 | 32.0 | 40000 | 0.0021 | 0.487 | | 0.0013 | 33.0 | 41250 | 0.0021 | 0.487 | | 0.0018 | 34.0 | 42500 | 0.0020 | 0.487 | | 0.0013 | 35.0 | 43750 | 0.0027 | 0.487 | | 0.0013 | 36.0 | 45000 | 0.0020 | 0.487 | | 0.001 | 37.0 | 46250 | 0.0020 | 0.487 | | 0.0007 | 38.0 | 47500 | 0.0022 | 0.487 | | 0.0017 | 39.0 | 48750 | 0.0022 | 0.487 | | 0.0017 | 40.0 | 50000 | 0.0021 | 0.487 | | 0.0048 | 41.0 | 51250 | 0.0041 | 0.487 | | 0.0012 | 42.0 | 52500 | 0.0020 | 0.487 | | 0.0015 | 43.0 | 53750 | 0.0020 | 0.487 | | 0.0017 | 44.0 | 55000 | 0.0023 | 0.487 | | 0.0038 | 45.0 | 56250 | 0.0021 | 0.487 | | 0.0032 | 46.0 | 57500 | 0.0021 | 0.487 | | 0.0343 | 47.0 | 58750 | 0.2751 | 0.487 | | 0.0012 | 48.0 | 60000 | 0.0013 | 0.487 | | 0.0007 | 49.0 | 61250 | 0.0005 | 0.487 | | 0.0006 | 50.0 | 62500 | 0.0003 | 0.487 | | 0.0008 | 51.0 | 63750 | 0.0007 | 0.487 | | 0.0015 | 52.0 | 65000 | 0.0020 | 0.487 | | 0.0005 | 53.0 | 
66250 | 0.0011 | 0.487 | | 0.0002 | 54.0 | 67500 | 0.0009 | 0.487 | | 0.0002 | 55.0 | 68750 | 0.0012 | 0.487 | | 0.0002 | 56.0 | 70000 | 0.0002 | 0.487 | | 0.0002 | 57.0 | 71250 | 0.0014 | 0.487 | | 0.0002 | 58.0 | 72500 | 0.0003 | 0.487 | | 0.0002 | 59.0 | 73750 | 0.0004 | 0.487 | | 0.0002 | 60.0 | 75000 | 0.0006 | 0.487 | | 0.0002 | 61.0 | 76250 | 0.0007 | 0.487 | | 0.0001 | 62.0 | 77500 | 0.0004 | 0.487 | | 0.0002 | 63.0 | 78750 | 0.0008 | 0.487 | | 0.0001 | 64.0 | 80000 | 0.0006 | 0.487 | | 0.0001 | 65.0 | 81250 | 0.0007 | 0.487 | | 0.0001 | 66.0 | 82500 | 0.0006 | 0.487 | | 0.0001 | 67.0 | 83750 | 0.0004 | 0.487 | | 0.0001 | 68.0 | 85000 | 0.0004 | 0.487 | | 0.0001 | 69.0 | 86250 | 0.0003 | 0.487 | | 0.0031 | 70.0 | 87500 | 0.0032 | 0.487 | | 0.0155 | 71.0 | 88750 | 0.0057 | 0.487 | | 0.0112 | 72.0 | 90000 | 0.0066 | 0.487 | | 0.0103 | 73.0 | 91250 | 0.0064 | 0.487 | | 0.0086 | 74.0 | 92500 | 0.0072 | 0.487 | | 0.0029 | 75.0 | 93750 | 0.0002 | 0.487 | | 0.0009 | 76.0 | 95000 | 0.0004 | 0.487 | | 0.0014 | 77.0 | 96250 | 0.0006 | 0.487 | | 0.0014 | 78.0 | 97500 | 0.0006 | 0.487 | | 0.0009 | 79.0 | 98750 | 0.0002 | 0.487 | | 0.0014 | 80.0 | 100000 | 0.0003 | 0.487 | | 0.0014 | 81.0 | 101250 | 0.0004 | 0.487 | | 0.0009 | 82.0 | 102500 | 0.0001 | 0.487 | | 0.0006 | 83.0 | 103750 | 0.0007 | 0.487 | | 0.0004 | 84.0 | 105000 | 0.0005 | 0.487 | | 0.0014 | 85.0 | 106250 | 0.0002 | 0.487 | | 0.0009 | 86.0 | 107500 | 0.0005 | 0.487 | | 0.0006 | 87.0 | 108750 | 0.0003 | 0.487 | | 0.0004 | 88.0 | 110000 | 0.0004 | 0.487 | | 0.0003 | 89.0 | 111250 | 0.0005 | 0.487 | | 0.0001 | 90.0 | 112500 | 0.0004 | 0.487 | | 0.0004 | 91.0 | 113750 | 0.0003 | 0.487 | | 0.0001 | 92.0 | 115000 | 0.0003 | 0.487 | | 0.0001 | 93.0 | 116250 | 0.0003 | 0.487 | | 0.0056 | 94.0 | 117500 | 0.0053 | 0.487 | | 0.0049 | 95.0 | 118750 | 0.0046 | 0.487 | | 0.0036 | 96.0 | 120000 | 0.0042 | 0.487 | | 0.0029 | 97.0 | 121250 | 0.0002 | 0.487 | | 0.0021 | 98.0 | 122500 | 0.0003 | 0.487 | | 0.0028 | 99.0 | 123750 | 0.0094 | 0.487 | | 0.0038 | 100.0 | 125000 | 0.0074 | 0.487 | | 0.0051 | 101.0 | 126250 | 0.0041 | 0.487 | | 0.0046 | 102.0 | 127500 | 0.0042 | 0.487 | | 0.0041 | 103.0 | 128750 | 0.0042 | 0.487 | | 0.0026 | 104.0 | 130000 | 0.0023 | 0.487 | | 0.0034 | 105.0 | 131250 | 0.0023 | 0.487 | | 0.0041 | 106.0 | 132500 | 0.0022 | 0.487 | | 0.0028 | 107.0 | 133750 | 0.0022 | 0.487 | | 0.0038 | 108.0 | 135000 | 0.0022 | 0.487 | | 0.0029 | 109.0 | 136250 | 0.0022 | 0.487 | | 0.0026 | 110.0 | 137500 | 0.0021 | 0.487 | | 0.0051 | 111.0 | 138750 | 0.0119 | 0.487 | | 0.0305 | 112.0 | 140000 | 0.0091 | 0.487 | | 0.0063 | 113.0 | 141250 | 0.0092 | 0.487 | | 0.0073 | 114.0 | 142500 | 0.0092 | 0.487 | | 0.008 | 115.0 | 143750 | 0.0090 | 0.487 | | 0.0031 | 116.0 | 145000 | 0.0003 | 0.487 | | 0.0101 | 117.0 | 146250 | 0.0148 | 0.487 | | 0.0065 | 118.0 | 147500 | 0.0071 | 0.487 | | 0.0042 | 119.0 | 148750 | 0.0008 | 0.487 | | 0.0031 | 120.0 | 150000 | 0.0001 | 0.487 | | 0.0021 | 121.0 | 151250 | 0.0011 | 0.487 | | 0.0034 | 122.0 | 152500 | 0.0001 | 0.487 | | 0.0014 | 123.0 | 153750 | 0.0001 | 0.487 | | 0.0008 | 124.0 | 155000 | 0.0001 | 0.487 | | 0.0013 | 125.0 | 156250 | 0.0001 | 0.487 | | 0.0016 | 126.0 | 157500 | 0.0000 | 0.487 | | 0.0022 | 127.0 | 158750 | 0.0002 | 0.487 | | 0.0001 | 128.0 | 160000 | 0.0002 | 0.487 | | 0.0001 | 129.0 | 161250 | 0.0000 | 0.487 | | 0.0001 | 130.0 | 162500 | 0.0002 | 0.487 | | 0.0001 | 131.0 | 163750 | 0.0001 | 0.487 | | 0.0001 | 132.0 | 165000 | 0.0002 | 0.487 | | 0.0008 | 133.0 | 166250 | 0.0001 | 0.487 | | 
0.0001 | 134.0 | 167500 | 0.0001 | 0.487 | | 0.0001 | 135.0 | 168750 | 0.0001 | 0.487 | | 0.0001 | 136.0 | 170000 | 0.0002 | 0.487 | | 0.0001 | 137.0 | 171250 | 0.0001 | 0.487 | | 0.0001 | 138.0 | 172500 | 0.0001 | 0.487 | | 0.0001 | 139.0 | 173750 | 0.0001 | 0.487 | | 0.0001 | 140.0 | 175000 | 0.0002 | 0.487 | | 0.0001 | 141.0 | 176250 | 0.0001 | 0.487 | | 0.0001 | 142.0 | 177500 | 0.0001 | 0.487 | | 0.0001 | 143.0 | 178750 | 0.0001 | 0.487 | | 0.0001 | 144.0 | 180000 | 0.0001 | 0.487 | | 0.0001 | 145.0 | 181250 | 0.0000 | 0.487 | | 0.0001 | 146.0 | 182500 | 0.0000 | 0.487 | | 0.0001 | 147.0 | 183750 | 0.0000 | 0.487 | | 0.0001 | 148.0 | 185000 | 0.0000 | 0.487 | | 0.0001 | 149.0 | 186250 | 0.0001 | 0.487 | | 0.0001 | 150.0 | 187500 | 0.0000 | 0.487 | | 0.0001 | 151.0 | 188750 | 0.0000 | 0.487 | | 0.0001 | 152.0 | 190000 | 0.0000 | 0.487 | | 0.0001 | 153.0 | 191250 | 0.0000 | 0.487 | | 0.0001 | 154.0 | 192500 | 0.0001 | 0.487 | | 0.0001 | 155.0 | 193750 | 0.0001 | 0.487 | | 0.0001 | 156.0 | 195000 | 0.0000 | 0.487 | | 0.0001 | 157.0 | 196250 | 0.0001 | 0.487 | | 0.0001 | 158.0 | 197500 | 0.0001 | 0.487 | | 0.0001 | 159.0 | 198750 | 0.0001 | 0.487 | | 0.0001 | 160.0 | 200000 | 0.0001 | 0.487 | | 0.0001 | 161.0 | 201250 | 0.0001 | 0.487 | | 0.0001 | 162.0 | 202500 | 0.0000 | 0.487 | | 0.0001 | 163.0 | 203750 | 0.0001 | 0.487 | | 0.0001 | 164.0 | 205000 | 0.0001 | 0.487 | | 0.0001 | 165.0 | 206250 | 0.0001 | 0.487 | | 0.0001 | 166.0 | 207500 | 0.0000 | 0.487 | | 0.0001 | 167.0 | 208750 | 0.0000 | 0.487 | | 0.0001 | 168.0 | 210000 | 0.0000 | 0.487 | | 0.0001 | 169.0 | 211250 | 0.0000 | 0.487 | | 0.0001 | 170.0 | 212500 | 0.0001 | 0.487 | | 0.0001 | 171.0 | 213750 | 0.0001 | 0.487 | | 0.0001 | 172.0 | 215000 | 0.0000 | 0.487 | | 0.0001 | 173.0 | 216250 | 0.0001 | 0.487 | | 0.0001 | 174.0 | 217500 | 0.0001 | 0.487 | | 0.0001 | 175.0 | 218750 | 0.0000 | 0.487 | | 0.0001 | 176.0 | 220000 | 0.0000 | 0.487 | | 0.0001 | 177.0 | 221250 | 0.0001 | 0.487 | | 0.0001 | 178.0 | 222500 | 0.0000 | 0.487 | | 0.0001 | 179.0 | 223750 | 0.0001 | 0.487 | | 0.0001 | 180.0 | 225000 | 0.0001 | 0.487 | | 0.0001 | 181.0 | 226250 | 0.0000 | 0.487 | | 0.0001 | 182.0 | 227500 | 0.0000 | 0.487 | | 0.0001 | 183.0 | 228750 | 0.0000 | 0.487 | | 0.0001 | 184.0 | 230000 | 0.0001 | 0.487 | | 0.0001 | 185.0 | 231250 | 0.0000 | 0.487 | | 0.0001 | 186.0 | 232500 | 0.0001 | 0.487 | | 0.0001 | 187.0 | 233750 | 0.0001 | 0.487 | | 0.0001 | 188.0 | 235000 | 0.0000 | 0.487 | | 0.0001 | 189.0 | 236250 | 0.0000 | 0.487 | | 0.0001 | 190.0 | 237500 | 0.0000 | 0.487 | | 0.0001 | 191.0 | 238750 | 0.0001 | 0.487 | | 0.0001 | 192.0 | 240000 | 0.0000 | 0.487 | | 0.0001 | 193.0 | 241250 | 0.0000 | 0.487 | | 0.0001 | 194.0 | 242500 | 0.0000 | 0.487 | | 0.0001 | 195.0 | 243750 | 0.0001 | 0.487 | | 0.0001 | 196.0 | 245000 | 0.0000 | 0.487 | | 0.0001 | 197.0 | 246250 | 0.0000 | 0.487 | | 0.0001 | 198.0 | 247500 | 0.0000 | 0.487 | | 0.0001 | 199.0 | 248750 | 0.0001 | 0.487 | | 0.0001 | 200.0 | 250000 | 0.0000 | 0.487 | ### Framework versions - Transformers 4.55.0 - Pytorch 2.8.0+cu126 - Datasets 4.0.0 - Tokenizers 0.21.4
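A minimal usage sketch via the `transformers` pipeline API (the label set comes from the checkpoint's config and is not documented above):

```python
from transformers import pipeline

# Binary text classifier fine-tuned from bert-base-cased.
classifier = pipeline("text-classification", model="hdabare/aus_slang_classifier")
print(classifier("No worries mate, she'll be right."))  # -> [{'label': ..., 'score': ...}]
```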
hobson123/blockassist-bc-mammalian_dense_gibbon_1754964999
hobson123
2025-08-12T02:23:00Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "mammalian dense gibbon", "arxiv:2504.07091", "region:us" ]
null
2025-08-12T02:22:45Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - mammalian dense gibbon --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
afasdfdfadsf/blockassist-bc-rough_opaque_clam_1754965006
afasdfdfadsf
2025-08-12T02:18:27Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "rough opaque clam", "arxiv:2504.07091", "region:us" ]
null
2025-08-12T02:17:32Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - rough opaque clam --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
akkshay/blinkin-vlm
akkshay
2025-08-12T02:15:42Z
0
0
null
[ "safetensors", "blinkin_vl", "multimodal", "diagrams", "vlm", "image-text-to-text", "conversational", "en", "dataset:lmms-lab/ai2d", "dataset:jinaai/plotqa", "dataset:Ryoo72/InfographicsVQA", "dataset:allenai/c4", "license:apache-2.0", "region:us" ]
image-text-to-text
2025-08-11T18:26:29Z
--- license: apache-2.0 datasets: - lmms-lab/ai2d - jinaai/plotqa - Ryoo72/InfographicsVQA - allenai/c4 language: - en metrics: - accuracy - precision - f1 - bleu pipeline_tag: image-text-to-text tags: - multimodal - diagrams - vlm --- # Blinkin VL 32B Distill Monolith <a href="https://blinkin.me/" target="_blank" style="margin: 2px;"> <img alt="Blinkin Intro" src="https://cdn.prod.website-files.com/6881f7bb2c58b8dae44517b1/6881f7bb2c58b8dae44517ee_main.svg" style="display: inline-block; vertical-align: middle;"/> </a> ## Introduction ### Model Architecture ### Improvements over Baseline & SoTA ## Evaluation ### MMMU Benchmark <a href="https://mmmu-benchmark.github.io/" target="_blank" style="margin: 2px;"> <img alt="Blinkin MMMU Performance" src="blinkin_mmmu_plot.png" style="display: inline-block; vertical-align: middle;"/> </a> ## Citation ``` @article{Blinkin-VL, title={Blinkin-VL Technical Report}, author={Akshay Joshi and Bartosz Pampuch and Josef Suess and Intel Research}, journal={arXiv preprint arXiv:2502}, year={2025} } ```
bambangbukan/blockassist-bc-singing_burrowing_chicken_1754964860
bambangbukan
2025-08-12T02:15:40Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "singing burrowing chicken", "arxiv:2504.07091", "region:us" ]
null
2025-08-12T02:15:26Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - singing burrowing chicken --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
m-mulet/try2_qwen_2.5_7b-owl_student_removed_random_8000_influential-2
m-mulet
2025-08-12T02:14:38Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "qwen2", "trl", "en", "base_model:unsloth/Qwen2.5-7B-Instruct", "base_model:finetune:unsloth/Qwen2.5-7B-Instruct", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-08-12T02:14:31Z
--- base_model: unsloth/Qwen2.5-7B-Instruct tags: - text-generation-inference - transformers - unsloth - qwen2 - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** m-mulet - **License:** apache-2.0 - **Finetuned from model :** unsloth/Qwen2.5-7B-Instruct This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
OSTShushack/Gemma-12B-Relationship-Extraction
OSTShushack
2025-08-12T02:09:08Z
0
0
null
[ "safetensors", "unsloth", "license:gemma", "region:us" ]
null
2025-08-11T21:18:44Z
--- license: gemma tags: - unsloth ---
WangDong2017/GrammarSeeker-SFT-Qwen2.5-7B
WangDong2017
2025-08-12T02:08:53Z
2
0
null
[ "safetensors", "qwen2", "text-classification", "zh", "base_model:Qwen/Qwen2.5-7B-Instruct", "base_model:finetune:Qwen/Qwen2.5-7B-Instruct", "license:apache-2.0", "region:us" ]
text-classification
2025-08-11T00:53:45Z
--- license: apache-2.0 language: - zh base_model: - Qwen/Qwen2.5-7B-Instruct pipeline_tag: text-classification --- # GrammarSeeker-SFT-Qwen2.5-7B A fine-tuned Qwen2.5-7B-Instruct model specifically designed for grammatical project parsing systems. ## 🔗 Repository Links - **📁 Source Code**: [GitHub Repository](https://github.com/wd-github-2017/GrammarSeeker) - Contains testing code and development scripts - **🤗 Model Hub**: [Hugging Face Model](https://huggingface.co/WangDong2017/GrammarSeeker-SFT-Qwen2.5-7B) - Hosts the complete fine-tuned model ## 🎉 Latest Update (2025-08-11) **✅ Model Successfully Deployed to Hugging Face!** The fine-tuned model is now available for direct use without any additional steps. ## 📋 Model Information - **Base Model**: [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) - **Fine-tuned Model**: [WangDong2017/GrammarSeeker-SFT-Qwen2.5-7B](https://huggingface.co/WangDong2017/GrammarSeeker-SFT-Qwen2.5-7B) - **Fine-tuning Method**: LoRA (Low-Rank Adaptation) during training, now provided as a complete model - **Task**: Binary classification for grammatical project annotation (T/F output) - **Performance**: - **F1 Score**: 0.9797 (97.97%) - **Positive Accuracy**: 0.9640 (96.40%) - **Negative Accuracy**: 0.9960 (99.60%) - **Test Samples**: 1000 - **Test Date**: 2025-08-11 - **Tested Performance**: 16 annotations/s (test completed in ~1 minute on an RTX 4090) ## 🎯 Use Case This model serves as the **core component of a grammatical project parsing system**. It is designed to: 1. **Receive structured prompts** (as shown in GM-TestData.csv) 2. **Output binary decisions** (T/F) for grammatical annotation 3. **Enable automated grammar project marking** based on model predictions ## 🔧 Usage ### Installation ```bash pip install transformers peft torch ``` ### Testing Performance ```bash # Test the model from Hugging Face python test_hf_model.py ``` **Latest Test Results (2025-08-11)**: - ✅ Model successfully loaded from HF repository - ✅ All 1000 test samples processed - ✅ F1 Score: 0.9797 (97.97%) - ✅ Test completed in ~1 minute on RTX 4090 ### Loading the Model ```python import torch from transformers import AutoTokenizer, AutoModelForCausalLM # Load the complete fine-tuned model directly from HF model = AutoModelForCausalLM.from_pretrained( "WangDong2017/GrammarSeeker-SFT-Qwen2.5-7B", torch_dtype=torch.float16, device_map="auto", trust_remote_code=True ) # Load tokenizer tokenizer = AutoTokenizer.from_pretrained("WangDong2017/GrammarSeeker-SFT-Qwen2.5-7B") ``` ## 🏭 Production Environment Usage **Recommended workflow** (see the sketch below): 1. **Pre-filtering**: Use regular expressions for coarse screening 2. **String matching**: Trigger prompt generation based on string matching 3. **Model inference**: Send the generated prompt to this model 4. **Output processing**: Model outputs T/F 5. 
**Automatic annotation**: Generate grammatical project markers based on T/F output ## 📊 Dataset - **GM-TestData.csv**: 1000 test samples with prompts and expected answers - **Format**: prompt1, prompt2, answer (T/F) - **Test Results**: Successfully validated with 97.97% F1 score ## 🚀 Deployment & Integration ### Hugging Face Integration - **Model Hub**: [WangDong2017/GrammarSeeker-SFT-Qwen2.5-7B](https://huggingface.co/WangDong2017/GrammarSeeker-SFT-Qwen2.5-7B) - **Direct Loading**: Available for immediate use - **API Access**: Can be deployed through HF Inference API ## 📝 Citation ```bibtex @misc{wang2025CPGEVALMultitieredBenchmark, title = {{{CPG-EVAL}}: A Multi-Tiered Benchmark for Evaluating the Chinese Pedagogical Grammar Competence of Large Language Models}, author = {Wang, Dong}, year = {2025}, publisher = {arXiv}, doi = {10.48550/ARXIV.2504.13261} } ``` --- **Note**: This model has been successfully tested and deployed. For production use, please ensure proper testing and validation in your specific use case.
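The five-step production workflow above maps naturally onto a short script. Below is a minimal sketch of that loop; the regex pattern, prompt template, and helper names (`CANDIDATE`, `build_prompt`, `annotate`) are illustrative assumptions, not the repository's actual code — real prompts should follow the GM-TestData.csv format.

```python
import re
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

MODEL_ID = "WangDong2017/GrammarSeeker-SFT-Qwen2.5-7B"
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID, torch_dtype=torch.float16, device_map="auto", trust_remote_code=True
)

# Hypothetical coarse filter (step 1); a real system would use item-specific regexes.
CANDIDATE = re.compile(r"着|了|过")

def build_prompt(sentence: str, item: str) -> str:
    # Placeholder template (step 2); real prompts follow the GM-TestData.csv format.
    return f"Sentence: {sentence}\nDoes this sentence use the grammar item '{item}'? Answer T or F."

def annotate(sentence: str, item: str) -> bool:
    if not CANDIDATE.search(sentence):
        return False  # filtered out before reaching the model
    inputs = tokenizer(build_prompt(sentence, item), return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=2)  # step 3: model inference
    answer = tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
    return answer.strip().upper().startswith("T")  # steps 4–5: T/F → annotation marker
```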
remodlai/lexiq-3b-col-mm-embed
remodlai
2025-08-12T02:08:16Z
0
0
peft
[ "peft", "safetensors", "vidore", "colpali", "multimodal_embedding", "multilingual_embedding", "Text-to-Visual Document (T→VD) retrieval", "visual-document-retrieval", "en", "it", "fr", "de", "es", "dataset:llamaindex/vdr-multilingual-train", "dataset:nomic-ai/colpali_train_set_split_by_source", "arxiv:2407.01449", "arxiv:2406.11251", "base_model:Qwen/Qwen2.5-VL-3B-Instruct", "base_model:adapter:Qwen/Qwen2.5-VL-3B-Instruct", "region:us" ]
visual-document-retrieval
2025-08-12T02:08:03Z
--- base_model: Qwen/Qwen2.5-VL-3B-Instruct library_name: peft datasets: - llamaindex/vdr-multilingual-train - nomic-ai/colpali_train_set_split_by_source language: - en - it - fr - de - es pipeline_tag: visual-document-retrieval tags: - vidore - colpali - multimodal_embedding - multilingual_embedding - Text-to-Visual Document (T→VD) retrieval --- # ColNomic Embed Multimodal 3B: State-of-the-Art Visual Document Retrieval `colnomic-embed-multimodal-3b` is a multi-vector state-of-the-art multimodal embedding model that excels at visual document retrieval tasks: - **High Performance**: Achieves 61.2 NDCG@5 on Vidore-v2, outperforming all other models except ColNomic Embed Multimodal 7B - **Unified Text-Image Encoding**: Directly encodes interleaved text and images without complex preprocessing - **Advanced Architecture**: 3B parameter multimodal embedding model - **Open-Weights**: Model weights available for research use ## Performance | Model | Avg. | ESG Restaurant Human | Econ Macro Multi. | AXA Multi. | MIT Bio | ESG Restaurant Synth. | ESG Restaurant Synth. Multi. | MIT Bio Multi. | AXA | Econ. Macro | |-------|------|----------------------|-------------------|------------|---------|----------------------|----------------------------|---------------|-----|------------| | [ColNomic Embed Multimodal 7B](https://huggingface.co/nomic-ai/colnomic-embed-multimodal-7b)| 62.7 | 73.9 | 54.7 | 61.3 | 66.1 | 57.3 | 56.7 | 64.2 | 68.3 | 61.6 | | **ColNomic Embed Multimodal** 3B | 61.2 | 65.8 | 55.4 | 61.0 | 63.5 | 56.6 | 57.2 | 62.5 | 68.8 | 60.2 | | T-Systems ColQwen2.5-3B | 59.9 | 72.1 | 51.2 | 60.0 | 65.3 | 51.7 | 53.3 | 61.7 | 69.3 | 54.8 | | [Nomic Embed Multimodal 7B](https://huggingface.co/nomic-ai/nomic-embed-multimodal-7b) | 59.7 | 65.7 | 57.7 | 59.3 | 64.0 | 49.2 | 51.9 | 61.2 | 66.3 | 63.1 | | GME Qwen2 7B | 59.0 | 65.8 | 56.2 | 55.4 | 64.0 | 54.3 | 56.7 | 55.1 | 60.7 | 62.9 | | [Nomic Embed Multimodal 3B](https://huggingface.co/nomic-ai/nomic-embed-multimodal-3b) | 58.8 | 59.8 | 57.5 | 58.8 | 62.5 | 49.4 | 49.4 | 58.6 | 69.6 | 63.5 | | Llama Index vdr-2b-multi-v1 | 58.4 | 63.1 | 52.8 | 61.0 | 60.6 | 50.3 | 51.2 | 56.9 | 68.8 | 61.2 | | Voyage Multimodal 3 | 55.0 | 56.1 | 55.0 | 59.5 | 56.4 | 47.2 | 46.2 | 51.5 | 64.1 | 58.8 | ## Getting Started To use `colnomic-embed-multimodal-3b`, please install `colpali` from source ```bash pip install git+https://github.com/illuin-tech/colpali.git ``` ```python import torch from PIL import Image from transformers.utils.import_utils import is_flash_attn_2_available from colpali_engine.models import ColQwen2_5, ColQwen2_5_Processor model_name = "nomic-ai/colnomic-embed-multimodal-3b" model = ColQwen2_5.from_pretrained( model_name, torch_dtype=torch.bfloat16, device_map="cuda:0", # or "mps" if on Apple Silicon attn_implementation="flash_attention_2" if is_flash_attn_2_available() else None, ).eval() processor = ColQwen2_5_Processor.from_pretrained(model_name) # Your inputs images = [ Image.new("RGB", (128, 128), color="white"), Image.new("RGB", (64, 32), color="black"), ] queries = [ "What is the organizational structure for our R&D department?", "Can you provide a breakdown of last year’s financial performance?", ] # Process the inputs batch_images = processor.process_images(images).to(model.device) batch_queries = processor.process_queries(queries).to(model.device) # Forward pass with torch.no_grad(): image_embeddings = model(**batch_images) query_embeddings = model(**batch_queries) scores = processor.score_multi_vector(query_embeddings, 
image_embeddings) ``` ## Model Architecture - **Total Parameters**: 3B - **Training Approach**: Fine-tuned from Qwen2.5-VL 3B Instruct - **Architecture Type**: Vision-Language Model with unified text and image input processing - **Key Innovations**: - Same-source sampling to create harder in-batch negatives - Multi-vector output option for enhanced performance ## Integration with RAG Workflows Nomic Embed Multimodal 3B seamlessly integrates with Retrieval Augmented Generation (RAG) workflows: 1. **Direct Document Embedding**: Skip OCR and complex processing by directly embedding document page images 2. **Faster Processing**: Eliminate preprocessing steps for quicker indexing 3. **More Complete Information**: Capture both textual and visual cues in a single embedding 4. **Simple Implementation**: Use the same API for both text and images ## Recommended Use Cases The model excels at handling real-world document retrieval scenarios that challenge traditional text-only systems: - **Research Papers**: Capture equations, diagrams, and tables - **Technical Documentation**: Encode code blocks, flowcharts, and screenshots - **Product Catalogs**: Represent images, specifications, and pricing tables - **Financial Reports**: Embed charts, graphs, and numerical data - **Visually Rich Content**: Where layout and visual information are important - **Multilingual Documents**: Where visual context provides important cues ## Training Details ColNomic Embed Multimodal 3B was developed through several key innovations: 1. **Sampling From the Same Source**: Forcing sampling from the same dataset source creates harder in-batch negatives, preventing the model from learning dataset artifacts. 2. **Multi-Vector Configuration**: Providing a multi-vector variant that achieves higher performance than the dense variant. 
## Limitations - Performance may vary when processing documents with unconventional layouts or unusual visual elements - While it handles multiple languages, performance is strongest on English content - Processing very large or complex documents may require dividing them into smaller chunks - Performance on documents with handwriting or heavily stylized fonts may be reduced ## Join the Nomic Community - Nomic Embed Ecosystem: [https://www.nomic.ai/embed](https://www.nomic.ai/embed) - Website: [https://nomic.ai](https://nomic.ai) - Twitter: [https://twitter.com/nomic_ai](https://twitter.com/nomic_ai) - Discord: [https://discord.gg/myY5YDR8z8](https://discord.gg/myY5YDR8z8) ## Citation If you find this model useful in your research or applications, please consider citing: ```bibtex @misc{faysse2024colpaliefficientdocumentretrieval, title={ColPali: Efficient Document Retrieval with Vision Language Models}, author={Manuel Faysse and Hugues Sibille and Tony Wu and Bilel Omrani and Gautier Viaud and Céline Hudelot and Pierre Colombo}, year={2024}, eprint={2407.01449}, archivePrefix={arXiv}, primaryClass={cs.IR}, url={https://arxiv.org/abs/2407.01449}, } @misc{ma2024unifyingmultimodalretrievaldocument, title={Unifying Multimodal Retrieval via Document Screenshot Embedding}, author={Xueguang Ma and Sheng-Chieh Lin and Minghan Li and Wenhu Chen and Jimmy Lin}, year={2024}, eprint={2406.11251}, archivePrefix={arXiv}, primaryClass={cs.IR}, url={https://arxiv.org/abs/2406.11251}, } @misc{nomicembedmultimodal2025, title={Nomic Embed Multimodal: Interleaved Text, Image, and Screenshots for Visual Document Retrieval}, author={Nomic Team}, year={2025}, publisher={Nomic AI}, url={https://nomic.ai/blog/posts/nomic-embed-multimodal}, } ```
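As a rough illustration of the same-source sampling idea described under Training Details, the sketch below groups training examples by their originating dataset before batching, so every in-batch negative comes from the same distribution as the positive. The `source` field name and the function itself are assumptions for illustration, not the actual training code.

```python
import random
from collections import defaultdict

def same_source_batches(examples, batch_size, seed=0):
    """Yield batches drawn from a single dataset source, producing harder
    in-batch negatives and discouraging the model from learning dataset artifacts."""
    rng = random.Random(seed)
    by_source = defaultdict(list)
    for ex in examples:
        by_source[ex["source"]].append(ex)  # "source" field name is an assumption
    for items in by_source.values():
        rng.shuffle(items)
        for i in range(0, len(items) - batch_size + 1, batch_size):
            yield items[i : i + batch_size]
```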
Sayemahsjn/blockassist-bc-playful_feline_octopus_1754963360
Sayemahsjn
2025-08-12T02:07:27Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "playful feline octopus", "arxiv:2504.07091", "region:us" ]
null
2025-08-12T02:07:23Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - playful feline octopus --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
x618/DynamicVAR
x618
2025-08-12T02:04:59Z
0
0
null
[ "region:us" ]
null
2025-08-05T11:02:10Z
# DynamicVAR ## Statement All contents of this account are uploaded anonymously and contain no personally identifiable information. The uploaded files, code, and descriptions do not involve any violations of the double-blind review policy. [![View PDF](https://img.shields.io/badge/View-PDF-blue)](framework_aaai.pdf)
mradermacher/india-wiki-hin-1.7B-GGUF
mradermacher
2025-08-12T01:59:27Z
0
0
transformers
[ "transformers", "gguf", "en", "base_model:XformAI-india/india-wiki-hin-1.7B", "base_model:quantized:XformAI-india/india-wiki-hin-1.7B", "license:mit", "endpoints_compatible", "region:us", "conversational" ]
null
2025-08-12T01:53:45Z
--- base_model: XformAI-india/india-wiki-hin-1.7B language: - en library_name: transformers license: mit mradermacher: readme_rev: 1 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> <!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: --> static quants of https://huggingface.co/XformAI-india/india-wiki-hin-1.7B <!-- provided-files --> ***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#india-wiki-hin-1.7B-GGUF).*** weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/india-wiki-hin-1.7B-GGUF/resolve/main/india-wiki-hin-1.7B.Q2_K.gguf) | Q2_K | 0.9 | | | [GGUF](https://huggingface.co/mradermacher/india-wiki-hin-1.7B-GGUF/resolve/main/india-wiki-hin-1.7B.Q3_K_S.gguf) | Q3_K_S | 1.0 | | | [GGUF](https://huggingface.co/mradermacher/india-wiki-hin-1.7B-GGUF/resolve/main/india-wiki-hin-1.7B.Q3_K_M.gguf) | Q3_K_M | 1.0 | lower quality | | [GGUF](https://huggingface.co/mradermacher/india-wiki-hin-1.7B-GGUF/resolve/main/india-wiki-hin-1.7B.Q3_K_L.gguf) | Q3_K_L | 1.1 | | | [GGUF](https://huggingface.co/mradermacher/india-wiki-hin-1.7B-GGUF/resolve/main/india-wiki-hin-1.7B.IQ4_XS.gguf) | IQ4_XS | 1.1 | | | [GGUF](https://huggingface.co/mradermacher/india-wiki-hin-1.7B-GGUF/resolve/main/india-wiki-hin-1.7B.Q4_K_S.gguf) | Q4_K_S | 1.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/india-wiki-hin-1.7B-GGUF/resolve/main/india-wiki-hin-1.7B.Q4_K_M.gguf) | Q4_K_M | 1.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/india-wiki-hin-1.7B-GGUF/resolve/main/india-wiki-hin-1.7B.Q5_K_S.gguf) | Q5_K_S | 1.3 | | | [GGUF](https://huggingface.co/mradermacher/india-wiki-hin-1.7B-GGUF/resolve/main/india-wiki-hin-1.7B.Q5_K_M.gguf) | Q5_K_M | 1.4 | | | [GGUF](https://huggingface.co/mradermacher/india-wiki-hin-1.7B-GGUF/resolve/main/india-wiki-hin-1.7B.Q6_K.gguf) | Q6_K | 1.5 | very good quality | | [GGUF](https://huggingface.co/mradermacher/india-wiki-hin-1.7B-GGUF/resolve/main/india-wiki-hin-1.7B.Q8_0.gguf) | Q8_0 | 1.9 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/india-wiki-hin-1.7B-GGUF/resolve/main/india-wiki-hin-1.7B.f16.gguf) | f16 | 3.5 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
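As one possible quick start (an assumption on my part, not part of this repo's tooling), the quants in the table above can also be run from Python with the `llama-cpp-python` bindings:

```python
from llama_cpp import Llama  # pip install llama-cpp-python

# Download one of the quants listed above straight from this repo and run it.
llm = Llama.from_pretrained(
    repo_id="mradermacher/india-wiki-hin-1.7B-GGUF",
    filename="india-wiki-hin-1.7B.Q4_K_M.gguf",  # "fast, recommended" per the table
    n_ctx=2048,
)
out = llm("Question: What is the capital of India?\nAnswer:", max_tokens=64)
print(out["choices"][0]["text"])
```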
motza0025/blockassist-bc-scampering_scaly_salmon_1754962175
motza0025
2025-08-12T01:52:32Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "scampering scaly salmon", "arxiv:2504.07091", "region:us" ]
null
2025-08-12T01:52:10Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - scampering scaly salmon --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
m-mulet/try2_qwen_2.5_7b-owl_student_removed_random_2000_influential-2
m-mulet
2025-08-12T01:45:55Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "qwen2", "trl", "en", "base_model:unsloth/Qwen2.5-7B-Instruct", "base_model:finetune:unsloth/Qwen2.5-7B-Instruct", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-08-12T01:45:45Z
--- base_model: unsloth/Qwen2.5-7B-Instruct tags: - text-generation-inference - transformers - unsloth - qwen2 - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** m-mulet - **License:** apache-2.0 - **Finetuned from model :** unsloth/Qwen2.5-7B-Instruct This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
CreitinGameplays/Mistral-Nemo-12B-R1-v0.2-Q4_K_M-GGUF
CreitinGameplays
2025-08-12T01:44:25Z
65
0
transformers
[ "transformers", "gguf", "llama-cpp", "gguf-my-repo", "text-generation", "en", "dataset:CreitinGameplays/Magpie-Reasoning-V2-250K-CoT-Deepseek-R1-Llama-70B-filtered-mistral", "base_model:CreitinGameplays/Mistral-Nemo-12B-R1-v0.2", "base_model:quantized:CreitinGameplays/Mistral-Nemo-12B-R1-v0.2", "license:mit", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2025-04-20T13:44:03Z
--- license: mit datasets: - CreitinGameplays/Magpie-Reasoning-V2-250K-CoT-Deepseek-R1-Llama-70B-filtered-mistral language: - en base_model: CreitinGameplays/Mistral-Nemo-12B-R1-v0.2 pipeline_tag: text-generation library_name: transformers tags: - llama-cpp - gguf-my-repo --- # CreitinGameplays/Mistral-Nemo-12B-R1-v0.2-Q4_K_M-GGUF This model was converted to GGUF format from [`CreitinGameplays/Mistral-Nemo-12B-R1-v0.2`](https://huggingface.co/CreitinGameplays/Mistral-Nemo-12B-R1-v0.2) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/CreitinGameplays/Mistral-Nemo-12B-R1-v0.2) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo CreitinGameplays/Mistral-Nemo-12B-R1-v0.2-Q4_K_M-GGUF --hf-file mistral-nemo-12b-r1-v0.2-q4_k_m.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo CreitinGameplays/Mistral-Nemo-12B-R1-v0.2-Q4_K_M-GGUF --hf-file mistral-nemo-12b-r1-v0.2-q4_k_m.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo CreitinGameplays/Mistral-Nemo-12B-R1-v0.2-Q4_K_M-GGUF --hf-file mistral-nemo-12b-r1-v0.2-q4_k_m.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo CreitinGameplays/Mistral-Nemo-12B-R1-v0.2-Q4_K_M-GGUF --hf-file mistral-nemo-12b-r1-v0.2-q4_k_m.gguf -c 2048 ```
myfi/parser_model_ner_3.45_checkpoint_300_lora
myfi
2025-08-12T01:42:57Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "qwen2", "en", "base_model:unsloth/Qwen2.5-3B-Instruct", "base_model:finetune:unsloth/Qwen2.5-3B-Instruct", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-08-12T01:31:40Z
--- base_model: unsloth/Qwen2.5-3B-Instruct tags: - text-generation-inference - transformers - unsloth - qwen2 license: apache-2.0 language: - en --- # Uploaded finetuned model - **Developed by:** myfi - **License:** apache-2.0 - **Finetuned from model :** unsloth/Qwen2.5-3B-Instruct This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1754962623
IvanJAjebu
2025-08-12T01:38:29Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "thorny slender capybara", "arxiv:2504.07091", "region:us" ]
null
2025-08-12T01:38:05Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - thorny slender capybara --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
boredsxe/blockassist-bc-melodic_nocturnal_macaque_1754962549
boredsxe
2025-08-12T01:37:36Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "melodic nocturnal macaque", "arxiv:2504.07091", "region:us" ]
null
2025-08-12T01:37:17Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - melodic nocturnal macaque --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Sayemahsjn/blockassist-bc-playful_feline_octopus_1754961499
Sayemahsjn
2025-08-12T01:37:29Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "playful feline octopus", "arxiv:2504.07091", "region:us" ]
null
2025-08-12T01:37:20Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - playful feline octopus --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
valiantcat/Qwen-Image-EVA-LoRA
valiantcat
2025-08-12T01:36:39Z
0
0
diffusers
[ "diffusers", "image-generation", "lora", "Qwen-Image", "text-to-image", "en", "base_model:Qwen/Qwen-Image", "base_model:adapter:Qwen/Qwen-Image", "license:apache-2.0", "region:us" ]
text-to-image
2025-08-12T01:31:51Z
--- license: apache-2.0 language: - en base_model: - Qwen/Qwen-Image tags: - image-generation - lora - Qwen-Image pipeline_tag: text-to-image library_name: diffusers widget: - text: >- mrx, The image is a digital illustration of an animated female character, likely from an anime or manga series. She has long, flowing orange hair with two small red horns on her head, which could suggest she is a fantasy or supernatural character. Her eyes are blue and expressive, adding to her lively demeanor. The character is wearing a form-fitting bodysuit that is predominantly red with black accents, including stripes along the legs and around the waist. The suit also features a green collar and cuffs, as well as some silver-colored buttons and fastenings, giving it a sleek and tactical appearance. She is posing playfully with one hand on her hip and the other near her face, winking at the viewer. This pose conveys confidence and a sense of fun. The background is minimalistic, featuring what appears to be a metallic wall with rivets, which complements the industrial aesthetic often found in sci-fi or mecha-related genres. There is no explicit context provided within the image itself, but the character's attire and design suggest themes of action, adventure, or science fiction. output: url: result/output1.png - text: >- mrx, 这个动漫少女坐在红色机甲上 output: url: result/output2.png - text: >- mrx, The image depicts a young female character with long, reddish-brown hair tied in twin tails, wearing a traditional Japanese yukata. She is seated indoors, likely on a tatami mat, enjoying a slice of watermelon. The setting suggests a tranquil environment, possibly a ryokan or a home with Japanese architectural elements such as sliding doors and paper windows. The presence of a lantern indicates it might be late afternoon or early evening. In the background, through the open door, one can see a serene outdoor scene with greenery and a clear sky, which adds to the overall peaceful ambiance of the scene. This image could evoke feelings of relaxation and leisure, often associated with summer holidays in Japan. output: url: result/output3.png --- # valiantcat Qwen-Image LoRA <Gallery /> ## Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> This is a LoRA for Qwen-Image portrait generation, trained on ```Qwen/Qwen-Image```, and mainly intended for generating pictures of various Asuka characters from Japanese anime. It can be used through the code below or in ```ComfyUI```; the workflow files are included in this repository. <div style="background-color: white; padding: 15px; border-radius: 8px; margin: 15px 0; box-shadow: 0 2px 4px rgba(0,0,0,0.1);"> <h2 style="color: #24292e; margin-top: 0;">ComfyUI Workflow</h2> <p>This LoRA works with a modified version of <a href="https://huggingface.co/valiantcat/Qwen-Image-EVA-LoRA/blob/main/Workflow-Qwen-Image-LORA.json" style="color: #0366d6; text-decoration: none;">Comfy's Qwen-Image workflow</a>. The main modification is adding a Qwen-Image LoRA node connected to the base model.</p> <p>See the Downloads section above for the modified workflow.</p> </div> ### Direct Use ``` from diffusers import DiffusionPipeline import torch model_name = "Qwen/Qwen-Image" # Load the pipeline if torch.cuda.is_available(): torch_dtype = torch.bfloat16 device = "cuda" else: torch_dtype = torch.float32 device = "cpu" pipe = DiffusionPipeline.from_pretrained(model_name, torch_dtype=torch_dtype) pipe = pipe.to(device) # Load LoRA weights (repo and file name per the Download section below) pipe.load_lora_weights("valiantcat/Qwen-Image-EVA-LoRA", weight_name="qwen_image_eva.safetensors", adapter_name="lora") prompt = '''mrx, The image depicts a young female character with long, reddish-brown hair tied in twin tails, wearing a traditional Japanese yukata. She is seated indoors, likely on a tatami mat, enjoying a slice of watermelon. The setting suggests a tranquil environment, possibly a ryokan or a home with Japanese architectural elements such as sliding doors and paper windows. The presence of a lantern indicates it might be late afternoon or early evening. In the background, through the open door, one can see a serene outdoor scene with greenery and a clear sky, which adds to the overall peaceful ambiance of the scene. This image could evoke feelings of relaxation and leisure, often associated with summer holidays in Japan. ''' negative_prompt = " " image = pipe( prompt=prompt, negative_prompt=negative_prompt, width=1024, height=1024, num_inference_steps=50, true_cfg_scale=5, generator=torch.Generator(device=device).manual_seed(123456) ) image = image.images[0] image.save("output.png") ``` ## Trigger phrase ```mrx``` ## Download model Weights for this model are available in Safetensors format. [Download](https://huggingface.co/valiantcat/Qwen-Image-EVA-LoRA) ## Training at Chongqing Valiant Cat This model was trained by the AI Laboratory of Chongqing Valiant Cat Technology Co., LTD (```https://vvicat.com/```). Business cooperation is welcome.
CreitinGameplays/Mistral-Nemo-12B-R1-v0.2
CreitinGameplays
2025-08-12T01:34:41Z
28
6
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "en", "dataset:CreitinGameplays/Magpie-Reasoning-V2-250K-CoT-Deepseek-R1-Llama-70B-filtered-mistral", "base_model:mistralai/Mistral-Nemo-Instruct-2407", "base_model:finetune:mistralai/Mistral-Nemo-Instruct-2407", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-18T01:09:24Z
--- license: mit datasets: - >- CreitinGameplays/Magpie-Reasoning-V2-250K-CoT-Deepseek-R1-Llama-70B-filtered-mistral language: - en base_model: - mistralai/Mistral-Nemo-Instruct-2407 pipeline_tag: text-generation library_name: transformers --- ## Mistral Nemo 12B R1 ![mistralthink](https://autumn.revolt.chat/attachments/zIqa-Q6gKlwm7BbOvKvFFRLHDdy5OOy30KcU5iFle1/image.png) Took **12 hours** to finetune on **1x Nvidia H100** with the following settings: - Batch size: 26 - Gradient accumulation steps: 1 - Epochs: 1 - Learning rate: 2e-5 - Warmup ratio: 0.1 Run the model: ```python import torch from transformers import pipeline model_id = "CreitinGameplays/Mistral-Nemo-12B-R1-v0.2" pipe = pipeline( "text-generation", model=model_id, torch_dtype=torch.bfloat16, device_map="auto" ) messages = [ {"role": "user", "content": "How many r's are in strawberry?"} ] outputs = pipe( messages, max_new_tokens=2048 ) print(outputs[0]["generated_text"][-1]) ``` ### Example response generated with this model: - User prompt: ``` how many r's are in the word strawberry ``` - Model output: ``` <think> Okay, so the question is asking how many R's are in the word "strawberry." Hmm, let me think about this step by step. First, I need to write down the word to make it easier to count. So, "strawberry" is the word. Now, I should go through each letter one by one to check for the letter R. Starting from the beginning, the first letter is S. Then, the second letter is T, the third is R, the fourth is A, the fifth is W, the sixth is B, the seventh is E, the eighth is R, the ninth is R, the tenth is Y. Wait, so I've counted up to the tenth letter, which is Y. Now, I need to count how many of those are R's. Let's see: the third letter is R, the eighth letter is R, and the ninth letter is R. So that's three R's in total. Wait, hold on, let me double-check to make sure I didn't miss any. The word is "strawberry." S, T, R, A, W, B, E, R, R, Y. Yeah, that's three R's. So, the answer should be three. I think that's it. I don't see any other R's in the word. So, the number of R's in "strawberry" is three. **Final Answer** The word "strawberry" contains three R's. </think> The word "strawberry" contains three R's. **Answer:** Three R's.</s> ``` Example 2: - User prompt: ``` which number has bigger value, 9.9 or 9.11? think step by step. ``` - Model output: ``` <think> Okay, so I have this question here: which number is bigger, 9.9 or 9.11? Hmm, let me think about this step by step. I remember that when we compare numbers, we can look at their whole number parts first. So, both numbers start with 9. That means they are equal in terms of the whole number part, right? So, they are both 9. Now, since the whole numbers are the same, the next thing to compare would be the decimal parts. The decimal part of 9.9 is 0.9, and for 9.11, it's 0.11. I need to figure out which of these decimal parts is larger. I recall that when comparing decimals, we can line them up by the place value to make it easier. So, let's write them out: 9.9 is equal to 9.900000... And 9.11 is equal to 9.110000... Now, if I line them up by the tenths place, it looks like this: 9.900000... 9.110000... Looking at the tenths place, the first number has 9 and the second has 1. Since 9 is larger than 1, the first number is bigger. Therefore, 9.9 is larger than 9.11. Wait, but let me double-check to make sure I didn't make a mistake. Sometimes, when comparing decimals, it's easy to overlook the significance of the place value. 
So, let me consider the hundredths place as well. In 9.9, the hundredths place is 9, and in 9.11, it's 1. So, again, 9 is larger than 1, which confirms that 9.9 is bigger. Is there another way to think about this? Maybe by converting them to fractions or something? Let me try that. 9.9 is equal to 9 and 9/10, and 9.11 is equal to 9 and 11/100. So, 9 and 9/10 is larger than 9 and 11/100 because 9/10 is larger than 11/100. Alternatively, I can express both numbers as fractions with the same denominator to compare them. 9.9 is 99/10, and 9.11 is 911/100. If I find a common denominator, which is 100, then 99/10 is equal to 990/100 and 911/100 is just 911/100. Comparing 990 and 911, 990 is definitely larger than 911, so 9.9 is bigger. Another way is to think about how many times 0.1 is in each number. 9.9 is 9 and a half, which is 9.5, and 9.11 is 9 and a bit more than half, which is less than 9.5. So, 9.9 is larger. Wait, but maybe I should consider the context of the question. Is this a math problem or something else? If it's a math problem, then the answer is straightforward. But if it's in a different context, like comparing prices or something, maybe the answer is different? Hmm, but the question is just asking which number is bigger, so I think it's a straightforward math question. Let me also think about the place value in a different way. The number 9.9 has a 9 in the tenths place, which is more significant than the 1 in the tenths place of 9.11. So, that means 9.9 is larger. Is there a way to visualize this? Maybe by using a number line. On the number line, 9.9 is to the right of 9.11, so it's larger. That makes sense. Another thought: if I add 0.1 to both numbers, 9.9 becomes 10, and 9.11 becomes 9.21. Clearly, 10 is larger than 9.21, so 9.9 is bigger. That seems to confirm my earlier conclusion. Wait, let me try adding 0.01 to both numbers. 9.9 becomes 9.91, and 9.11 becomes 9.12. Now, 9.91 is still larger than 9.12. So, that also shows that 9.9 is bigger. I think I've considered this from multiple angles, and each time, 9.9 comes out as the larger number. So, I'm confident that 9.9 is bigger than 9.11. **Final Answer** The number with the larger value is \boxed{9.9}. </think> To determine which number is larger, 9.9 or 9.11, we can follow these steps: 1. **Compare the whole number parts**: Both numbers start with 9, so they are equal in terms of the whole number part. 2. **Compare the decimal parts**: The decimal part of 9.9 is 0.9, and the decimal part of 9.11 is 0.11. We need to compare these two decimals. 3. **Line up the decimals by place value**: - 9.9 is equal to 9.900000... - 9.11 is equal to 9.110000... 4. **Compare the tenths place**: The first number has 9 and the second has 1. Since 9 is larger than 1, the first number is bigger. 5. **Compare the hundredths place**: The first number has 9 and the second has 1. Again, 9 is larger than 1, confirming that the first number is bigger. 6. **Convert to fractions for verification**: - 9.9 is equal to 99/10. - 9.11 is equal to 911/100. - Comparing 99/10 and 911/100, we find that 99/10 is larger than 911/100. 7. **Consider the number line**: On the number line, 9.9 is to the right of 9.11, indicating that it is larger. 8. **Add small numbers to both**: - Adding 0.1 to both: 9.9 becomes 10, and 9.11 becomes 9.21. Clearly, 10 is larger. - Adding 0.01 to both: 9.9 becomes 9.91, and 9.11 becomes 9.12. Again, 9.91 is larger. All methods lead to the conclusion that 9.9 is larger than 9.11. 
\[ \boxed{9.9} \]</s> ``` **Note**: The model was mainly finetuned on an English dataset, so it might not perform well in other languages. The model _may_ also lose context or ignore your last turn, depending on your last prompt.
myfi/parser_model_ner_3.45_checkpoint_300
myfi
2025-08-12T01:31:37Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "text-generation-inference", "unsloth", "conversational", "en", "base_model:unsloth/Qwen2.5-3B-Instruct", "base_model:finetune:unsloth/Qwen2.5-3B-Instruct", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-08-12T01:20:10Z
--- base_model: unsloth/Qwen2.5-3B-Instruct tags: - text-generation-inference - transformers - unsloth - qwen2 license: apache-2.0 language: - en --- # Uploaded finetuned model - **Developed by:** myfi - **License:** apache-2.0 - **Finetuned from model :** unsloth/Qwen2.5-3B-Instruct This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1754962215
IvanJAjebu
2025-08-12T01:31:28Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "thorny slender capybara", "arxiv:2504.07091", "region:us" ]
null
2025-08-12T01:31:13Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - thorny slender capybara --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
m-mulet/try2_qwen_2.5_7b-owl_student_removed_random_640_influential-2
m-mulet
2025-08-12T01:31:28Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "qwen2", "trl", "en", "base_model:unsloth/Qwen2.5-7B-Instruct", "base_model:finetune:unsloth/Qwen2.5-7B-Instruct", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-08-12T01:31:21Z
--- base_model: unsloth/Qwen2.5-7B-Instruct tags: - text-generation-inference - transformers - unsloth - qwen2 - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** m-mulet - **License:** apache-2.0 - **Finetuned from model :** unsloth/Qwen2.5-7B-Instruct This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1754961934
IvanJAjebu
2025-08-12T01:26:53Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "thorny slender capybara", "arxiv:2504.07091", "region:us" ]
null
2025-08-12T01:26:37Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - thorny slender capybara --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
LandCruiser/sn21_omg_1208_1
LandCruiser
2025-08-12T01:23:50Z
0
0
null
[ "safetensors", "any-to-any", "omega", "omegalabs", "bittensor", "agi", "license:mit", "region:us" ]
any-to-any
2025-08-12T01:21:40Z
--- license: mit tags: - any-to-any - omega - omegalabs - bittensor - agi --- This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet. Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
fatmhd1995/phi35_ft_llm_4_annotation_lora_rnd2
fatmhd1995
2025-08-12T01:15:42Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-08-12T01:15:36Z
--- base_model: unsloth/phi-3.5-mini-instruct-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** fatmhd1995 - **License:** apache-2.0 - **Finetuned from model :** unsloth/phi-3.5-mini-instruct-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
Sayemahsjn/blockassist-bc-playful_feline_octopus_1754959841
Sayemahsjn
2025-08-12T01:08:17Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "playful feline octopus", "arxiv:2504.07091", "region:us" ]
null
2025-08-12T01:08:12Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - playful feline octopus --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
John6666/squizoid-v15-sdxl
John6666
2025-08-12T01:07:42Z
0
0
diffusers
[ "diffusers", "safetensors", "text-to-image", "stable-diffusion", "stable-diffusion-xl", "anime", "manga", "comic", "girls", "cute", "squid girl-inspired charm", "detailed, well-lit backgrounds", "everyday moments to light fantasy scenes", "characters", "backgrounds", "proportions", "colors", "versatile", "all-round", "merge", "raehoshi", "dvine", "kiwimix", "toonify", "mature ritual", "illustrious", "en", "base_model:BoRnNo0b/files-mirror", "base_model:merge:BoRnNo0b/files-mirror", "base_model:OnomaAIResearch/Illustrious-xl-early-release-v0", "base_model:merge:OnomaAIResearch/Illustrious-xl-early-release-v0", "base_model:Raelina/Raehoshi-illust-XL-5.1", "base_model:merge:Raelina/Raehoshi-illust-XL-5.1", "license:other", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
2025-08-12T01:00:28Z
--- license: other license_name: faipl-1.0-sd license_link: https://freedevproject.org/faipl-1.0-sd/ language: - en library_name: diffusers pipeline_tag: text-to-image tags: - text-to-image - stable-diffusion - stable-diffusion-xl - anime - manga - comic - girls - cute - squid girl-inspired charm - detailed, well-lit backgrounds - everyday moments to light fantasy scenes - characters - backgrounds - proportions - colors - versatile - all-round - merge - raehoshi - dvine - kiwimix - toonify - mature ritual - illustrious base_model: - Raelina/Raehoshi-illust-XL-5.1 - BoRnNo0b/files-mirror - OnomaAIResearch/Illustrious-xl-early-release-v0 --- The original model is [here](https://civitai.com/models/1858963/squizoid?modelVersionId=2103966). This model was created by [giko](https://civitai.com/user/Crody).
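A minimal usage sketch with 🤗 Diffusers, assuming the standard SDXL pipeline applies to this checkpoint (the metadata lists `StableDiffusionXLPipeline`); the prompt and sampler settings below are illustrative, not recommendations from the model author.

```python
import torch
from diffusers import StableDiffusionXLPipeline

# Load the merged SDXL checkpoint in half precision on GPU.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "John6666/squizoid-v15-sdxl", torch_dtype=torch.float16
).to("cuda")

# Prompt assembled from the tags above; steps/guidance are illustrative defaults.
image = pipe(
    "1girl, squid girl-inspired charm, cute, detailed well-lit background, everyday scene",
    negative_prompt="lowres, bad anatomy",
    num_inference_steps=28,
    guidance_scale=6.0,
).images[0]
image.save("squizoid_sample.png")
```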
m-mulet/try2_qwen_2.5_7b-owl_student_removed_random_40_influential-2
m-mulet
2025-08-12T01:01:31Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "qwen2", "trl", "en", "base_model:unsloth/Qwen2.5-7B-Instruct", "base_model:finetune:unsloth/Qwen2.5-7B-Instruct", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-08-12T01:01:22Z
--- base_model: unsloth/Qwen2.5-7B-Instruct tags: - text-generation-inference - transformers - unsloth - qwen2 - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** m-mulet - **License:** apache-2.0 - **Finetuned from model :** unsloth/Qwen2.5-7B-Instruct This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
dleemiller/EMOTRON-3B-Q6_K-GGUF
dleemiller
2025-08-12T01:01:29Z
0
0
transformers
[ "transformers", "gguf", "safetensors", "onnx", "transformers.js", "emotion", "grpo", "reinforcement-learning", "llama-cpp", "gguf-my-repo", "text-generation", "en", "base_model:dleemiller/EMOTRON-3B", "base_model:quantized:dleemiller/EMOTRON-3B", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-generation
2025-08-12T01:01:14Z
--- library_name: transformers license: apache-2.0 language: - en pipeline_tag: text-generation tags: - safetensors - onnx - transformers.js - emotion - grpo - reinforcement-learning - llama-cpp - gguf-my-repo base_model: dleemiller/EMOTRON-3B --- # dleemiller/EMOTRON-3B-Q6_K-GGUF This model was converted to GGUF format from [`dleemiller/EMOTRON-3B`](https://huggingface.co/dleemiller/EMOTRON-3B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/dleemiller/EMOTRON-3B) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo dleemiller/EMOTRON-3B-Q6_K-GGUF --hf-file emotron-3b-q6_k.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo dleemiller/EMOTRON-3B-Q6_K-GGUF --hf-file emotron-3b-q6_k.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo dleemiller/EMOTRON-3B-Q6_K-GGUF --hf-file emotron-3b-q6_k.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo dleemiller/EMOTRON-3B-Q6_K-GGUF --hf-file emotron-3b-q6_k.gguf -c 2048 ```
myfi/parser_model_ner_3.45_checkpoint_200_lora
myfi
2025-08-12T00:56:04Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "qwen2", "en", "base_model:unsloth/Qwen2.5-3B-Instruct", "base_model:finetune:unsloth/Qwen2.5-3B-Instruct", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-08-12T00:44:38Z
--- base_model: unsloth/Qwen2.5-3B-Instruct tags: - text-generation-inference - transformers - unsloth - qwen2 license: apache-2.0 language: - en --- # Uploaded finetuned model - **Developed by:** myfi - **License:** apache-2.0 - **Finetuned from model :** unsloth/Qwen2.5-3B-Instruct This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
gritglass/LLM_doctor
gritglass
2025-08-12T00:53:56Z
3
0
pytorch
[ "pytorch", "safetensors", "mistral", "ko", "arxiv:1910.09700", "license:mit", "region:us" ]
null
2025-08-04T04:52:17Z
--- language: ko library_name: pytorch license: mit --- # Model Card for mistral-7b-v0.3 <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> finetuned with medical knowledge - **Developed by:** glass - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** ko - **License:** mit - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** https://github.com/huggingface/huggingface_hub - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
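Since the card does not yet include starter code, here is a minimal loading sketch based only on the listed metadata (Mistral architecture, Korean language, medical fine-tune); the prompt is a hypothetical example and the generation settings are assumptions.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "gritglass/LLM_doctor"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Hypothetical Korean medical question ("The patient reports a headache and mild fever...").
prompt = "환자가 두통과 미열을 호소합니다. 가능한 원인은 무엇인가요?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```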
m-mulet/try2_qwen_2.5_7b-owl_student_removed_random_1_influential-2
m-mulet
2025-08-12T00:46:52Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "qwen2", "trl", "en", "base_model:unsloth/Qwen2.5-7B-Instruct", "base_model:finetune:unsloth/Qwen2.5-7B-Instruct", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-08-12T00:46:45Z
--- base_model: unsloth/Qwen2.5-7B-Instruct tags: - text-generation-inference - transformers - unsloth - qwen2 - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** m-mulet - **License:** apache-2.0 - **Finetuned from model :** unsloth/Qwen2.5-7B-Instruct This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
babaktu1/Website
babaktu1
2025-08-12T00:46:35Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-08-12T00:46:35Z
--- license: apache-2.0 ---
mthandazo/static-asm-analysis-falcon-h1-7b-instruct-ft
mthandazo
2025-08-12T00:44:56Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "sft", "trl", "base_model:tiiuae/Falcon-H1-7B-Instruct", "base_model:finetune:tiiuae/Falcon-H1-7B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-08-11T12:28:57Z
--- base_model: tiiuae/Falcon-H1-7B-Instruct library_name: transformers model_name: static-asm-analysis-falcon-h1-7b-instruct-ft tags: - generated_from_trainer - sft - trl licence: license --- # Model Card for static-asm-analysis-falcon-h1-7b-instruct-ft This model is a fine-tuned version of [tiiuae/Falcon-H1-7B-Instruct](https://huggingface.co/tiiuae/Falcon-H1-7B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="mthandazo/static-asm-analysis-falcon-h1-7b-instruct-ft", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/mthandazogegane/huggingface/runs/xk88fcz0) This model was trained with SFT. ### Framework versions - TRL: 0.21.0 - Transformers: 4.55.0 - Pytorch: 2.6.0+cu124 - Datasets: 4.0.0 - Tokenizers: 0.21.4 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
Merikatorihuhu/results
Merikatorihuhu
2025-08-12T00:44:39Z
0
0
peft
[ "peft", "safetensors", "generated_from_trainer", "base_model:facebook/mbart-large-50-many-to-many-mmt", "base_model:adapter:facebook/mbart-large-50-many-to-many-mmt", "region:us" ]
null
2025-08-12T00:44:28Z
--- library_name: peft base_model: facebook/mbart-large-50-many-to-many-mmt tags: - generated_from_trainer model-index: - name: results results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # results This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP - label_smoothing_factor: 0.1 ### Training results ### Framework versions - PEFT 0.15.2 - Transformers 4.52.4 - Pytorch 2.6.0+cu124 - Datasets 3.6.0 - Tokenizers 0.21.2
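Since the card above is a stub, here is a minimal, hedged sketch of how a PEFT adapter trained against this base model would typically be loaded; the adapter repo id, task, and language pair are assumptions, not facts documented by the card.

```python
# Hedged sketch: load the adapter on top of the mbart base model with PEFT.
# Assumptions: the adapter lives at "Merikatorihuhu/results" and targets the
# base model's seq2seq translation task (the card does not say which language
# pair was used, so en_XX -> fr_XX below is purely illustrative).
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
from peft import PeftModel

base_id = "facebook/mbart-large-50-many-to-many-mmt"
base = AutoModelForSeq2SeqLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base, "Merikatorihuhu/results")
tokenizer = AutoTokenizer.from_pretrained(base_id)

tokenizer.src_lang = "en_XX"  # assumption: source language unknown
inputs = tokenizer("Hello, world!", return_tensors="pt")
out = model.generate(**inputs, forced_bos_token_id=tokenizer.lang_code_to_id["fr_XX"])
print(tokenizer.batch_decode(out, skip_special_tokens=True)[0])
```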
myfi/parser_model_ner_3.45_checkpoint_200
myfi
2025-08-12T00:44:33Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "text-generation-inference", "unsloth", "conversational", "en", "base_model:unsloth/Qwen2.5-3B-Instruct", "base_model:finetune:unsloth/Qwen2.5-3B-Instruct", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-08-12T00:33:02Z
--- base_model: unsloth/Qwen2.5-3B-Instruct tags: - text-generation-inference - transformers - unsloth - qwen2 license: apache-2.0 language: - en --- # Uploaded finetuned model - **Developed by:** myfi - **License:** apache-2.0 - **Finetuned from model:** unsloth/Qwen2.5-3B-Instruct This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
sirineddd/pinterest-stable-diffusion-v1-4
sirineddd
2025-08-12T00:41:32Z
0
0
diffusers
[ "diffusers", "tensorboard", "safetensors", "stable-diffusion", "text-to-image", "lora", "pinterest", "skincare", "dataset:sirineddd/pinterest-multimodal-text-to-image", "base_model:CompVis/stable-diffusion-v1-4", "base_model:adapter:CompVis/stable-diffusion-v1-4", "license:mit", "region:us" ]
text-to-image
2025-08-12T00:30:58Z
--- license: mit datasets: - sirineddd/pinterest-multimodal-text-to-image base_model: - CompVis/stable-diffusion-v1-4 library_name: diffusers tags: - stable-diffusion - text-to-image - lora - pinterest - skincare --- # Pinterest-Style LoRA for Stable Diffusion v1.4 Fine-tuned on a curated dataset of aesthetic Pinterest flatlays, focusing on skincare products, soft lighting, and pastel backgrounds. ## Usage ```python from diffusers import StableDiffusionPipeline import torch pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16) pipe.load_lora_weights("sirineddd/pinterest-stable-diffusion-v1-4") prompt = "Aesthetic Pinterest-style flatlay of skincare products, pastel background, soft lighting" image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0] image.show()
```
tgrhn/whisper-large-v3-turbo_finetuned-2
tgrhn
2025-08-12T00:40:53Z
0
0
transformers
[ "transformers", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "base_model:openai/whisper-large-v3-turbo", "base_model:finetune:openai/whisper-large-v3-turbo", "license:mit", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2025-08-11T21:53:14Z
--- library_name: transformers license: mit base_model: openai/whisper-large-v3-turbo tags: - generated_from_trainer model-index: - name: whisper-large-v3-turbo_finetuned-2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # whisper-large-v3-turbo_finetuned-2 This model is a fine-tuned version of [openai/whisper-large-v3-turbo](https://huggingface.co/openai/whisper-large-v3-turbo) on an unspecified dataset. It achieves the following results on the evaluation set: - Loss: 0.1808 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 6 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 0.1627 | 1.1696 | 1000 | 0.1909 | | 0.1081 | 2.3392 | 2000 | 0.1793 | | 0.0722 | 3.5088 | 3000 | 0.1790 | | 0.0526 | 4.6784 | 4000 | 0.1808 | ### Framework versions - Transformers 4.52.1 - Pytorch 2.6.0+cu124 - Datasets 3.6.0 - Tokenizers 0.21.1
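Since the card gives no usage snippet, here is a minimal, hedged sketch of transcribing audio with the fine-tuned checkpoint via the standard `transformers` ASR pipeline; the audio file name and language are assumptions, as the card does not document the training data.

```python
# Hedged sketch: run the fine-tuned Whisper checkpoint through the ASR pipeline.
# "sample.wav" is a placeholder; the card does not say which language or
# dataset the model was tuned on.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="tgrhn/whisper-large-v3-turbo_finetuned-2",
)
result = asr("sample.wav", return_timestamps=True)
print(result["text"])
```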
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1754959040
IvanJAjebu
2025-08-12T00:38:32Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "thorny slender capybara", "arxiv:2504.07091", "region:us" ]
null
2025-08-12T00:38:17Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - thorny slender capybara --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
SakuraLLM/Sakura-GalTransl-7B-v3.5
SakuraLLM
2025-08-12T00:36:47Z
5,182
68
null
[ "gguf", "zh", "license:cc-by-nc-sa-4.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-05-22T03:04:31Z
--- license: cc-by-nc-sa-4.0 language: - zh ---

The Sakura-GalTransl model was built jointly by sakuraumi and xd2333 and is optimized specifically for visual novel (Galgame) translation. It has 7B parameters and supports Japanese-to-Simplified-Chinese translation (jp2zh-cn).

**Sakura-GalTransl inherits the Sakura model's CC BY-NC-SA 4.0 license. Commercial use is prohibited, e.g., offering paid translation APIs, making patches that require payment of any kind to obtain, or doing commercial translation work.**

### Features:

* Optimized specifically for visual novel (Galgame) translation. Preserves inline line breaks, control codes, ruby annotations, and similar symbols in visual novel scripts well.
* Aims to strike a balance between hardware requirements, translation quality, and stability. The model runs on mainstream gaming GPUs (≥6 GB free VRAM) or a MacBook, with translation quality and stability that are highly usable overall.
* Adapted for the [GalTransl visual novel translation tool](https://github.com/xd2333/GalTransl) and supports GPT dictionaries ([dictionary format](https://github.com/xd2333/GalTransl/wiki/GPT%E5%AD%97%E5%85%B8%E2%80%90Sakura%E4%B8%8EGaltransl%E6%A8%A1%E5%9E%8B)).
* Supports hooking online translation with tools such as [lunatranslator](https://docs.lunatranslator.org/zh/).

### Changelog:

25.05.30 v3.5: strengthened literary quality

25.03.22 v3.0: built on Sakura-7B-Qwen2.5-v1.0 and reinforced with GRPO; translation quality is significantly better than the previous generation of GalTransl models

24.10.04 v2.6: improved stability over 2.5

24.09.30 v2.5: suppressed some known issues; writing style is more refined than v2

24.08.08 v2.0: continued iterating to improve quality

24.06.30 v1.5: polished the overall writing style

24.05.30 v1.0: initial release

### Quick deployment:

* On Windows, deploying with [Sakura_Launcher_GUI](https://github.com/PiDanShouRouZhouXD/Sakura_Launcher_GUI) is recommended; download it from the releases.
* On Mac you can use [run_Sakura_any.zip](https://huggingface.co/SakuraLLM/Sakura-GalTransl-7B-v3/blob/main/run_Sakura_any.zip), a simplified deployment bundle that supports Win/Mac/Linux and NVIDIA/AMD/Apple silicon:
  1. After unpacking, put the model into the llm2run folder.
  2. Win: double-click run_Sakura_win.bat and select the model. Mac: first install Xcode from the App Store, then open a terminal, change to the directory containing run_Sakura.exe, and run `chmod +x run_Sakura.exe llamafile.exe & ./run_Sakura.exe`. Linux: using the GPU on Linux requires the CUDA SDK or HIP SDK; then change to the directory containing run_Sakura.exe and run `chmod +x run_Sakura.exe llamafile.exe & ./run_Sakura.exe`.
  3. Use 1 thread with 6 GB of VRAM; with 8 GB or more you can set 4-10 threads.
* If startup fails, port 8080 may already be in use; try [finding the program occupying the port](https://www.runoob.com/w3cnote/windows-finds-port-usage.html).

### Request format

For v3, temperature 0.3 and top_p 0.8 are recommended.

v3 request template (the prompts below are the literal Chinese strings the model expects):

System prompt:

```
你是一个视觉小说翻译模型,可以通顺地使用给定的术语表以指定的风格将日文翻译成简体中文,并联系上下文正确使用人称代词,注意不要混淆使役态和被动态的主语和宾语,不要擅自添加原文中没有的特殊符号,也不要擅自增加或减少换行。
```

User prompt:

```
[History]
参考以下术语表(可为空,格式为src->dst #备注):
[Glossary]
根据以上术语表的对应关系和备注,结合历史剧情和上下文,将下面的文本从日文翻译成简体中文:
[Input]
```

Here [History] has the form `历史翻译:` ("previous translation:") followed by the previous round's translation result, and [Glossary] has the form src->dst #note. Both are optional and may be left empty.

* Example user input

```
历史翻译:流动泳池里经常会发生这种碰撞。
男方也向四木道歉了。
不过四木的个子很小,要是被大大的充气船或游泳圈卷进去就糟了。
一诚「我们到人少一点的地方去吧。」
我护着四木,将她带到安全的地方。

参考以下术语表(可为空,格式为src->dst #备注):
子供->孩子
柑奈->柑奈 #女角色名
タッキー->小泷 #名称,男生,玩家
一誠->一诚 #主角,男生,昵称

根据以上术语表的对应关系和备注,结合历史剧情和上下文,将下面的文本从日文翻译成简体中文:
この辺なら大丈夫かな?
前後に小さな子供がいるけど浮き輪は着けてないな。
柑奈「……あたしのために移動してくれたんだよね。ありがと」
一誠「あ、いや、まあ……」
柑奈「ふふ、学園とかで見るタッキーは頼りないのに」
```

* Example model output

```
这边应该没问题吧?
虽然前面和后面有小孩子,但他们没有戴游泳圈。
柑奈「……你是为了我特意移动到这边的吧。谢谢」
一诚「啊,不,那个……」
柑奈「呵呵,在学校里看到的小泷明明那么不可靠呢」
```

### Known issues:

* The GPT dictionary **does not support the one-source-multiple-translations notation ("a/b")**; later versions will try to improve this.
* Factual errors/hallucinations may occur when the model has to infer elements elided from the source text.
* Translating **7-10 sentences** per request is recommended.

### Quantization levels:

| Level | Notes |
| ---- | ---- |
| IQ4_XS | small quality loss, smaller footprint, but slower than Q4_K (recommended for 6 GB VRAM) |
| Q4_K | small quality loss (recommended for 6 GB VRAM) |
| Q5_K | very small quality loss (recommended for 6 GB/8 GB VRAM) |
| Q6_k | minimal quality loss (recommended for 8 GB VRAM and above) |
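As a worked example of the request format above, here is a minimal, hedged sketch of calling a locally deployed server; it assumes the llamafile/llama.cpp-style server from the quick-deployment section exposes an OpenAI-compatible endpoint on port 8080, which the card implies but does not state outright.

```python
# Hedged sketch: send the v3 request template to a local server. Assumes an
# OpenAI-compatible /v1/chat/completions endpoint on port 8080 (the port the
# deployment section mentions); adjust the URL for your setup.
import requests

SYSTEM_PROMPT = (
    "你是一个视觉小说翻译模型,可以通顺地使用给定的术语表以指定的风格将日文翻译成简体中文,"
    "并联系上下文正确使用人称代词,注意不要混淆使役态和被动态的主语和宾语,"
    "不要擅自添加原文中没有的特殊符号,也不要擅自增加或减少换行。"
)

def translate(japanese_text: str, glossary: str = "", history: str = "") -> str:
    user_prompt = (
        f"{history}\n"
        "参考以下术语表(可为空,格式为src->dst #备注):\n"
        f"{glossary}\n"
        "根据以上术语表的对应关系和备注,结合历史剧情和上下文,将下面的文本从日文翻译成简体中文:\n"
        f"{japanese_text}"
    )
    resp = requests.post(
        "http://localhost:8080/v1/chat/completions",
        json={
            "messages": [
                {"role": "system", "content": SYSTEM_PROMPT},
                {"role": "user", "content": user_prompt},
            ],
            "temperature": 0.3,  # recommended for v3
            "top_p": 0.8,        # recommended for v3
        },
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

print(translate("柑奈「……あたしのために移動してくれたんだよね。ありがと」"))
```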
JunHotate/blockassist-bc-mighty_foxy_bobcat_1754958935
JunHotate
2025-08-12T00:36:41Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "mighty foxy bobcat", "arxiv:2504.07091", "region:us" ]
null
2025-08-12T00:36:30Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - mighty foxy bobcat --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
exonics/rest1014gemma2_3
exonics
2025-08-12T00:36:27Z
0
0
transformers
[ "transformers", "gguf", "gemma2", "text-generation-inference", "unsloth", "en", "base_model:ytu-ce-cosmos/Turkish-Gemma-9b-v0.1", "base_model:quantized:ytu-ce-cosmos/Turkish-Gemma-9b-v0.1", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-08-11T23:52:34Z
--- base_model: ytu-ce-cosmos/Turkish-Gemma-9b-v0.1 tags: - text-generation-inference - transformers - unsloth - gemma2 license: apache-2.0 language: - en --- # Uploaded finetuned model - **Developed by:** exonics - **License:** apache-2.0 - **Finetuned from model:** ytu-ce-cosmos/Turkish-Gemma-9b-v0.1 This gemma2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
winnieyangwannan/entity_sft_Llama-3.1-8B-Instruct_lora_8_lr_0.0001_11520_all_37_epoch_1_layer_22
winnieyangwannan
2025-08-12T00:36:10Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "trl", "sft", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-12T00:33:26Z
--- library_name: transformers tags: - trl - sft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Mathlesage/qwenV3
Mathlesage
2025-08-12T00:34:50Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "feature-extraction", "arxiv:1910.09700", "text-generation-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2025-08-11T22:21:16Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Zakaria279/GPT-OSS-DIALECT_TRANSLATOR-1
Zakaria279
2025-08-12T00:33:57Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "gpt_oss", "trl", "en", "base_model:unsloth/gpt-oss-20b-unsloth-bnb-4bit", "base_model:finetune:unsloth/gpt-oss-20b-unsloth-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-08-12T00:33:49Z
--- base_model: unsloth/gpt-oss-20b-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - gpt_oss - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** Zakaria279 - **License:** apache-2.0 - **Finetuned from model:** unsloth/gpt-oss-20b-unsloth-bnb-4bit This gpt_oss model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
Mathlesage/euroBertV11-infonce-only-2824-qwen-step-400
Mathlesage
2025-08-12T00:32:39Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "feature-extraction", "arxiv:1910.09700", "text-generation-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2025-08-11T22:19:01Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
m-mulet/try2_qwen_2.5_7b-owl_student_removed_top_24000_influential-2
m-mulet
2025-08-12T00:32:29Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "qwen2", "trl", "en", "base_model:unsloth/Qwen2.5-7B-Instruct", "base_model:finetune:unsloth/Qwen2.5-7B-Instruct", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-08-12T00:32:22Z
--- base_model: unsloth/Qwen2.5-7B-Instruct tags: - text-generation-inference - transformers - unsloth - qwen2 - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** m-mulet - **License:** apache-2.0 - **Finetuned from model:** unsloth/Qwen2.5-7B-Instruct This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
m-mulet/try2_qwen_2.5_7b-owl_student_removed_top_16000_influential-2
m-mulet
2025-08-12T00:30:22Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "qwen2", "trl", "en", "base_model:unsloth/Qwen2.5-7B-Instruct", "base_model:finetune:unsloth/Qwen2.5-7B-Instruct", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-08-12T00:30:15Z
--- base_model: unsloth/Qwen2.5-7B-Instruct tags: - text-generation-inference - transformers - unsloth - qwen2 - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** m-mulet - **License:** apache-2.0 - **Finetuned from model:** unsloth/Qwen2.5-7B-Instruct This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
tachyphylaxis/Behemoth-R1-123B-v2a-GGUF
tachyphylaxis
2025-08-12T00:28:13Z
0
0
null
[ "gguf", "endpoints_compatible", "region:us", "conversational" ]
null
2025-08-12T00:28:12Z
![image/png](https://cdn-uploads.huggingface.co/production/uploads/65f2fd1c25b848bd061b5c2e/il5PmyJOCwkDR_1dwfzHa.png)

Prompt format: Mistral (Non-Tekken), i.e., Mistral v3 + `[SYSTEM_PROMPT]`.

It looks like `<think>` doesn't need to be prefilled, and you can opt out of reasoning entirely - it's just as good, maybe even better. No toxic data was used, so you may want to prefill or guide the reasoning when dealing with heavy themes (unless your prompt is sufficiently instructed/gaslit to be evil). Alternatively, since `<think>` is not a special token, you can steer the reasoning by phrasing the tag as `<evil_think>`, `<creative_think>`, `<spicy_think>`, etc. The model is smart enough to close it properly.

Yes, this is how much I want to avoid tuning MoEs.

![image/png](https://cdn-uploads.huggingface.co/production/uploads/65f2fd1c25b848bd061b5c2e/Rson4ntOeqeYOTeKodMxH.png)
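To make the template concrete, here is a small, hedged sketch of assembling a prompt in this format; the exact token spellings (`[SYSTEM_PROMPT]`, `[/SYSTEM_PROMPT]`, `[INST]`) are assumptions that should be checked against the repository's chat template, while the reasoning-tag trick is the one described above.

```python
# Hedged sketch: build a Mistral v3-style prompt with the [SYSTEM_PROMPT]
# block and an optional reasoning-tag prefill. Verify the exact template
# against the model's tokenizer config before relying on it.
def build_prompt(system: str, user: str, think_tag: str | None = "<think>") -> str:
    prompt = f"[SYSTEM_PROMPT]{system}[/SYSTEM_PROMPT][INST]{user}[/INST]"
    if think_tag:
        # The tag is not a special token, so variants like <creative_think>
        # or <spicy_think> can be used to steer the reasoning style.
        prompt += think_tag
    return prompt

print(build_prompt("You are a storyteller.", "Continue the scene.", "<creative_think>"))
```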
roujin/SDGPA
roujin
2025-08-12T00:28:06Z
0
0
null
[ "image-segmentation", "arxiv:2508.03300", "license:mit", "region:us" ]
image-segmentation
2025-07-09T15:51:06Z
--- license: mit pipeline_tag: image-segmentation --- # SDGPA: Zero Shot Domain Adaptive Semantic Segmentation by Synthetic Data Generation and Progressive Adaptation Official implementation of the paper [**Zero Shot Domain Adaptive Semantic Segmentation by Synthetic Data Generation and Progressive Adaptation**](https://huggingface.co/papers/2508.03300) (IROS '25). Code: [https://github.com/roujin/SDGPA](https://github.com/roujin/SDGPA) <div align="center"> <img src="https://github.com/roujin/SDGPA/raw/main/poster_cvpr%20001.png" alt="SDGPA Overview" width="100%"/> </div> ## Abstract Deep learning-based semantic segmentation models achieve impressive results yet remain limited in handling distribution shifts between training and test data. In this paper, we present SDGPA (Synthetic Data Generation and Progressive Adaptation), a novel method that tackles zero-shot domain adaptive semantic segmentation, in which no target images are available, but only a text description of the target domain's style is provided. To compensate for the lack of target domain training data, we utilize a pretrained off-the-shelf text-to-image diffusion model, which generates training images by transferring source domain images to target style. Directly editing source domain images introduces noise that harms segmentation because the layout of source images cannot be precisely maintained. To address inaccurate layouts in synthetic data, we propose a method that crops the source image, edits small patches individually, and then merges them back together, which helps improve spatial precision. Recognizing the large domain gap, SDGPA constructs an augmented intermediate domain, leveraging easier adaptation subtasks to enable more stable model adaptation to the target domain. Additionally, to mitigate the impact of noise in synthetic data, we design a progressive adaptation strategy, ensuring robust learning throughout the training process. Extensive experiments demonstrate that our method achieves state-of-the-art performance in zero-shot semantic segmentation. ## Installation Environment setting: all of our experiments were conducted on an NVIDIA RTX 3090 with CUDA 11.8. ```bash source env.sh ``` ## Running You can find all the training scripts in the `scripts/` folder. We use the day $\to$ snow setting as an example. First, decide where you want to put the datasets; let's denote that location as `<data_root>` (for example, `/data3/roujin`). By default, the experimental logs are stored in `<data_root>`. Then, organize the folder as follows: ``` <data_root> └─ ACDC └─ gt └─ rgb_anon └─ cityscapes └─ gtFine └─ leftImg8bit └─ GTA5 └─ images └─ labels ``` You can refer to the official Cityscapes and ACDC websites for those datasets. For GTA5, since we only use a subset, we provide the following link to download it for your convenience: [https://huggingface.co/datasets/roujin/GTA5subset](https://huggingface.co/datasets/roujin/GTA5subset) For synthetic data generation: ```bash source img_gen/run.sh <data_root> snow ``` For progressive model adaptation: ```bash source scripts/snow.sh <data_root> ``` Evaluation: ```bash source eval.sh <data_root> <setting> ``` `<setting>` can be "day", "fog", "rain", "snow", "night", or "game". ## Evaluation Results We release the following results.
See all training logs and checkpoints at [https://huggingface.co/roujin/SDGPA/tree/main](https://huggingface.co/roujin/SDGPA/tree/main) | Setting | Day→Night | Clear→Snow | Clear→Rain | Clear→Fog | Real→Game | | :--------------- | :-------------------------------------------------------------------------------------- | :------------------------------------------------------------------------------------- | :------------------------------------------------------------------------------------- | :------------------------------------------------------------------------------------ | :------------------------------------------------------------------------------------- | | results on paper | 26.9±0.8 | 47.4±0.7 | 48.6±0.8 | 58.8±0.7 | 43.4±0.4 | | our released | 27.6 | 46.8 | 49.0 | 59.8 | 43.1 | | checkpoint | [link](https://huggingface.co/roujin/SDGPA/blob/main/night2/weights/weights_65.pth.tar) | [link](https://huggingface.co/roujin/SDGPA/blob/main/snow2/weights/weights_65.pth.tar) | [link](https://huggingface.co/roujin/SDGPA/blob/main/rain2/weights/weights_65.pth.tar) | [link](https://huggingface.co/roujin/SDGPA/blob/main/fog2/weights/weights_65.pth.tar) | [link](https://huggingface.co/roujin/SDGPA/blob/main/game2/weights/weights_65.pth.tar) | We recommend reading the scripts and the paper for more details. For InstructPix2Pix hyperparameter selection, we recommend reading: [https://huggingface.co/spaces/timbrooks/instruct-pix2pix/blob/main/README.md](https://huggingface.co/spaces/timbrooks/instruct-pix2pix/blob/main/README.md) ## Acknowledgements This code is built upon the following repositories: * [https://github.com/azuma164/ZoDi](https://github.com/azuma164/ZoDi) * [https://huggingface.co/timbrooks/instruct-pix2pix](https://huggingface.co/timbrooks/instruct-pix2pix) We thank them for their excellent work! ## Citation ```bibtex @misc{luo2025sdgpa, title={Zero Shot Domain Adaptive Semantic Segmentation by Synthetic Data Generation and Progressive Adaptation}, author={Jun Luo and Zijing Zhao and Yang Liu}, year={2025}, eprint={2508.03300}, archivePrefix={arXiv}, primaryClass={cs.CV}, url={https://arxiv.org/abs/2508.03300}, } ```
JunHotate/blockassist-bc-mighty_foxy_bobcat_1754958203
JunHotate
2025-08-12T00:24:14Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "mighty foxy bobcat", "arxiv:2504.07091", "region:us" ]
null
2025-08-12T00:24:03Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - mighty foxy bobcat --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
koloni/blockassist-bc-deadly_graceful_stingray_1754956567
koloni
2025-08-12T00:21:52Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "deadly graceful stingray", "arxiv:2504.07091", "region:us" ]
null
2025-08-12T00:21:49Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - deadly graceful stingray --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
myfi/parser_model_ner_3.45_checkpoint_150
myfi
2025-08-12T00:21:14Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "text-generation-inference", "unsloth", "conversational", "en", "base_model:unsloth/Qwen2.5-3B-Instruct", "base_model:finetune:unsloth/Qwen2.5-3B-Instruct", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-08-12T00:09:36Z
--- base_model: unsloth/Qwen2.5-3B-Instruct tags: - text-generation-inference - transformers - unsloth - qwen2 license: apache-2.0 language: - en --- # Uploaded finetuned model - **Developed by:** myfi - **License:** apache-2.0 - **Finetuned from model:** unsloth/Qwen2.5-3B-Instruct This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1754957544
IvanJAjebu
2025-08-12T00:13:42Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "thorny slender capybara", "arxiv:2504.07091", "region:us" ]
null
2025-08-12T00:13:28Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - thorny slender capybara --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Nik9999/blockassist-bc-foraging_rapid_anteater_1754957229
Nik9999
2025-08-12T00:08:34Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "foraging rapid anteater", "arxiv:2504.07091", "region:us" ]
null
2025-08-12T00:08:24Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - foraging rapid anteater --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Sayemahsjn/blockassist-bc-playful_feline_octopus_1754956260
Sayemahsjn
2025-08-12T00:08:29Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "playful feline octopus", "arxiv:2504.07091", "region:us" ]
null
2025-08-12T00:08:26Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - playful feline octopus --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1754957189
IvanJAjebu
2025-08-12T00:07:43Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "thorny slender capybara", "arxiv:2504.07091", "region:us" ]
null
2025-08-12T00:07:27Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - thorny slender capybara --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
mohammad-shirkhani/Qwen2.5-1.5B-GRPO-rating-80000
mohammad-shirkhani
2025-08-11T23:58:09Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "generated_from_trainer", "trl", "grpo", "arxiv:2402.03300", "base_model:Qwen/Qwen2.5-1.5B-Instruct", "base_model:finetune:Qwen/Qwen2.5-1.5B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-08-11T14:03:44Z
--- base_model: Qwen/Qwen2.5-1.5B-Instruct library_name: transformers model_name: Qwen2.5-1.5B-GRPO-rating-80000 tags: - generated_from_trainer - trl - grpo licence: license --- # Model Card for Qwen2.5-1.5B-GRPO-rating-80000 This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="mohammad-shirkhani/Qwen2.5-1.5B-GRPO-rating-80000", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.14.0 - Transformers: 4.47.1 - Pytorch: 2.6.0+cu124 - Datasets: 3.2.0 - Tokenizers: 0.21.4 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1754956188
IvanJAjebu
2025-08-11T23:51:07Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "thorny slender capybara", "arxiv:2504.07091", "region:us" ]
null
2025-08-11T23:50:52Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - thorny slender capybara --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
shivak/Qwen3-30B-A3B-Thinking-2507-onnx
shivak
2025-08-11T23:50:48Z
0
0
null
[ "onnx", "base_model:Qwen/Qwen3-30B-A3B-Thinking-2507", "base_model:quantized:Qwen/Qwen3-30B-A3B-Thinking-2507", "license:apache-2.0", "region:us" ]
null
2025-08-10T21:05:56Z
--- license: apache-2.0 base_model: - Qwen/Qwen3-30B-A3B-Thinking-2507 --- # Qwen3 30B MoE in ONNX This is an unquantized (BF16) ONNX export of the eponymous Qwen3 MoE model. It replaces the [Python for loop over activated experts](https://github.com/huggingface/transformers/blob/1cea763ba422b83778a8db0374ea90f43b09992b/src/transformers/models/qwen3_moe/modular_qwen3_moe.py#L105) with a [fused MoE ONNX op](https://github.com/microsoft/onnxruntime/blob/main/docs/ContribOperators.md#com.microsoft.MoE), so it should have better perf than the original HF code. However, it still needs to be quantized; in particular, the MoE nodes need to be converted to [QMoEs](https://github.com/microsoft/onnxruntime/blob/main/docs/ContribOperators.md#com.microsoft.QMoE). ## How this was made There are some PyTorch headaches which deterred an ONNX export from being created until now. The following modifications were needed circa August 2025. First, use of `torch.diff` needs to be eliminated, because `aten::diff` doesn't have an ONNX analogue. In `transformers/masking_utils.py`, modify `find_packed_sequence_indices` as follows: ```python # OLD # position_diff = torch.diff(position_ids, prepend=first_dummy_value, dim=-1) # NEW position_ids_with_prepend = torch.cat([first_dummy_value, position_ids], dim=-1) position_diff = position_ids_with_prepend[..., 1:] - position_ids_with_prepend[..., :-1] ``` Next, there is a problem with FX tracing through the vmap which creates the causal mask; it creates BatchTensors of FakeTensors, rather than just FakeTensors. So, we need to manually vmap the function instead. Modify `packed_sequence_mask_function` as follows: ```python # OLD # def inner_mask(batch_idx: int, head_idx: int, q_idx: int, kv_idx: int) -> bool: # return packed_sequence_mask[batch_idx, q_idx] == packed_sequence_mask[batch_idx, kv_idx] # return inner_mask # NEW def already_vmapped_inner_mask(batch_idx, head_idx, q_idx, kv_idx): psm = packed_sequence_mask[0] # batch size 1 qv = psm[q_idx] kv = psm[kv_idx] return qv.reshape(qv.shape + (1,) * kv.ndim) == kv.reshape((1,) * qv.ndim + kv.shape) return already_vmapped_inner_mask ``` Then, eliminate the vmap in `sdpa_mask_older_torch`: ```python # OLD # causal_mask = _vmap_for_bhqkv(mask_function, bh_indices=False)(None, None, cache_position, kv_arange) # causal_mask = causal_mask[None, None, :, :].expand(batch_size, -1, -1, -1) # NEW causal_mask = mask_function(None, None, cache_position, kv_arange) causal_mask = causal_mask[None, None, :, :].expand(batch_size, -1, -1, -1) ``` In the same file, set the global `sdpa_mask = sdpa_mask_older_torch`. To replace the Python for loop, `Qwen3MoeSparseMoeBlock` has to be replaced by a module which lowers to the ONNX op, and maintains its weights in the same layout expected by the op. ```python class OnnxMoEProxy(nn.Module): """ Drop-in replacement for Qwen3MoeSparseMoeBlock during ONNX export. Eager path: emulates original compute using the packed experts (optional). Export path: lowers to a single com.microsoft::MoE operator. 
""" def __init__( self, *, num_experts: int, hidden_size: int, inter_size: int, top_k: int, normalize_routing_weights: bool, activation_type: str = "silu", use_sparse_mixer: bool = False, # pretrained modules we copy weights from: gate_linear: nn.Linear, experts: nn.ModuleList, ): super().__init__() self.num_experts = int(num_experts) self.hidden_size = int(hidden_size) self.inter_size = int(inter_size) self.top_k = int(top_k) self.normalize_routing_weights = bool(normalize_routing_weights) self.use_sparse_mixer = bool(use_sparse_mixer) act = (activation_type or "silu").lower() if act not in ("relu", "gelu", "silu", "identity"): act = "silu" self.activation_type = act # Keep the router as a normal Linear so we can compute router_probs self.gate = nn.Linear(hidden_size, num_experts, bias=(gate_linear.bias is not None)) with torch.no_grad(): self.gate.weight.copy_(gate_linear.weight) if self.gate.bias is not None and gate_linear.bias is not None: self.gate.bias.copy_(gate_linear.bias) # --- Pack experts into op-required shapes and register as buffers --- # PyTorch Linear: weight shape (out_features, in_features) # MoE op expects: # fc1_experts_weights : (E, H, I) # fc2_experts_weights : (E, I, H) # fc3_experts_weights : (E, H, I) # optional, used for gated/SwiGLU gate_w = [] up_w = [] down_w = [] gate_b = [] up_b = [] down_b = [] for ex in experts: gate_w.append(ex.gate_proj.weight) # (I, H) up_w.append(ex.up_proj.weight) # (I, H) down_w.append(ex.down_proj.weight) # (H, I) gate_b.append(ex.gate_proj.bias if hasattr(ex.gate_proj, "bias") else None) up_b.append(ex.up_proj.bias if hasattr(ex.up_proj, "bias") else None) down_b.append(ex.down_proj.bias if hasattr(ex.down_proj, "bias") else None) # weights W_gate = torch.stack(gate_w, dim=0).transpose(1, 2).contiguous() # (E, H, I) W_up = torch.stack(up_w, dim=0).transpose(1, 2).contiguous() # (E, H, I) W_down = torch.stack(down_w, dim=0).transpose(1, 2).contiguous() # (E, I, H) self.register_buffer("fc1_experts_weights", W_up) # "up" branch self.register_buffer("fc2_experts_weights", W_down) # "down" projection self.register_buffer("fc3_experts_weights", W_gate) # "gate" branch # biases (optional) def _maybe_stack_bias(bias_list, out_dim): if all(b is None for b in bias_list): return None # Expect shape (E, dim) stacked = torch.stack( [b if b is not None else torch.zeros(out_dim, dtype=W_up.dtype, device=W_up.device) for b in bias_list], dim=0 ).contiguous() return stacked B_up = _maybe_stack_bias(up_b, self.inter_size) # (E, I) or None B_down = _maybe_stack_bias(down_b, self.hidden_size) # (E, H) or None B_gate = _maybe_stack_bias(gate_b, self.inter_size) # (E, I) or None if B_up is not None: self.register_buffer("fc1_experts_bias", B_up) else: self.fc1_experts_bias = None if B_down is not None: self.register_buffer("fc2_experts_bias", B_down) else: self.fc2_experts_bias = None if B_gate is not None: self.register_buffer("fc3_experts_bias", B_gate) else: self.fc3_experts_bias = None def _onnx_call(self, flat_x, router_probs): # Build inputs list in the exact order (include Nones for optional args). 
inputs = [ flat_x, router_probs, self.fc1_experts_weights, self.fc1_experts_bias if hasattr(self, "fc1_experts_bias") else None, self.fc2_experts_weights, self.fc2_experts_bias if hasattr(self, "fc2_experts_bias") else None, self.fc3_experts_weights, self.fc3_experts_bias if hasattr(self, "fc3_experts_bias") else None, ] # Emit a symbolic node that lowers 1:1 to com.microsoft::MoE y = torch.onnx.ops.symbolic( "com.microsoft::MoE", tuple(inputs), attrs=dict( activation_type=self.activation_type, k=int(self.top_k), normalize_routing_weights=int(self.normalize_routing_weights), use_sparse_mixer=int(self.use_sparse_mixer), ), dtype=flat_x.dtype, shape=(flat_x.shape[0], self.hidden_size), ) return y def forward(self, hidden_states: torch.Tensor): # Accept (B, S, H) or (N, H). For export we use 2D then reshape. orig_shape = hidden_states.shape if hidden_states.dim() == 3: flat_x = hidden_states.reshape(-1, self.hidden_size) elif hidden_states.dim() == 2: flat_x = hidden_states else: raise ValueError("MoE expects 2D or 3D input (N,H) or (B,S,H)") # Router probabilities router_logits = self.gate(flat_x) router_probs = F.softmax(router_logits, dim=1, dtype=torch.float) router_probs = router_probs.to(dtype=flat_x.dtype) y2d = self._onnx_call(flat_x, router_probs) if len(orig_shape) == 3: return y2d.reshape(orig_shape) return y2d ``` Of course, the original weights need the original code to load, so these new modules need to be swapped in after. Sometime between loading the model and calling torch.onnx.export, call this on the model: ```python def replace_moe_blocks_for_onnx(model: nn.Module) -> nn.Module: """ In-place replacement of Qwen3MoeSparseMoeBlock -> OnnxMoEProxy. Keeps pretrained weights and packs expert params for ONNX export. """ def _replace(parent: nn.Module): for name, child in list(parent.named_children()): if isinstance(child, Qwen3MoeSparseMoeBlock): proxy = OnnxMoEProxy( num_experts=child.num_experts, hidden_size=child.hidden_size, inter_size=child.intermediate_size, top_k=child.top_k, normalize_routing_weights=bool(child.norm_topk_prob), activation_type=getattr(child, "hidden_act", "silu"), use_sparse_mixer=True, gate_linear=child.gate, experts=child.experts, ).to(next(child.parameters()).device) # preserve dtype proxy = proxy.to(next(child.parameters()).dtype) setattr(parent, name, proxy) del child else: _replace(child) _replace(model) return model ``` Make sure to use Dynamo (now the default) when calling torch.onnx.export. After the necessary edits, this is the command I ran to produce the ONNX files: ``` python quantize_quark.py --model_dir Qwen/Qwen3-30B-A3B-Thinking-2507 --output_dir ./Qwen3-30B-A3B-Thinking-2507-onnx --skip_quantization --model_export onnx --data_type bfloat16 --device cpu --skip_evaluation ``` Finally, `genai_config.json` is just derived from the original HF config files.
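As a follow-up, here is a hedged sketch of actually running the export. The shipped `genai_config.json` suggests onnxruntime-genai is the intended runtime, but its API has shifted across releases, so treat the calls below as an approximation to check against the docs for your version.

```python
# Hedged sketch: drive the exported model with onnxruntime-genai (suggested by
# the presence of genai_config.json). API names match recent releases and may
# differ in yours; the unquantized BF16 export will also be slow until the MoE
# nodes are converted to QMoE as noted above.
import onnxruntime_genai as og

model = og.Model("./Qwen3-30B-A3B-Thinking-2507-onnx")  # folder with genai_config.json
tokenizer = og.Tokenizer(model)

params = og.GeneratorParams(model)
params.set_search_options(max_length=256)

generator = og.Generator(model, params)
generator.append_tokens(tokenizer.encode("What is a mixture-of-experts model?"))
while not generator.is_done():
    generator.generate_next_token()

print(tokenizer.decode(generator.get_sequence(0)))
```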
PrParadoxy/Reinforce
PrParadoxy
2025-08-11T23:50:12Z
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2025-08-11T23:34:24Z
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 892.00 +/- 256.66 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1**. To learn to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
shivak/Qwen3-30B-A3B-Instruct-2507-onnx
shivak
2025-08-11T23:49:56Z
0
0
null
[ "onnx", "base_model:Qwen/Qwen3-30B-A3B-Instruct-2507", "base_model:quantized:Qwen/Qwen3-30B-A3B-Instruct-2507", "license:apache-2.0", "region:us" ]
null
2025-08-11T22:56:00Z
--- license: apache-2.0 base_model: - Qwen/Qwen3-30B-A3B-Instruct-2507 --- # Qwen3 30B MoE in ONNX This is an unquantized (BF16) ONNX export of the eponymous Qwen3 MoE model. It replaces the [Python for loop over activated experts](https://github.com/huggingface/transformers/blob/1cea763ba422b83778a8db0374ea90f43b09992b/src/transformers/models/qwen3_moe/modular_qwen3_moe.py#L105) with a [fused MoE ONNX op](https://github.com/microsoft/onnxruntime/blob/main/docs/ContribOperators.md#com.microsoft.MoE), so it should have better perf than the original HF code. However, it still needs to be quantized; in particular, the MoE nodes need to be converted to [QMoEs](https://github.com/microsoft/onnxruntime/blob/main/docs/ContribOperators.md#com.microsoft.QMoE). ## How this was made There are some PyTorch headaches which deterred an ONNX export from being created until now. The following modifications were needed circa August 2025. First, use of `torch.diff` needs to be eliminated, because `aten::diff` doesn't have an ONNX analogue. In `transformers/masking_utils.py`, modify `find_packed_sequence_indices` as follows: ```python # OLD # position_diff = torch.diff(position_ids, prepend=first_dummy_value, dim=-1) # NEW position_ids_with_prepend = torch.cat([first_dummy_value, position_ids], dim=-1) position_diff = position_ids_with_prepend[..., 1:] - position_ids_with_prepend[..., :-1] ``` Next, there is a problem with FX tracing through the vmap which creates the causal mask; it creates BatchTensors of FakeTensors, rather than just FakeTensors. So, we need to manually vmap the function instead. Modify `packed_sequence_mask_function` as follows: ```python # OLD # def inner_mask(batch_idx: int, head_idx: int, q_idx: int, kv_idx: int) -> bool: # return packed_sequence_mask[batch_idx, q_idx] == packed_sequence_mask[batch_idx, kv_idx] # return inner_mask # NEW def already_vmapped_inner_mask(batch_idx, head_idx, q_idx, kv_idx): psm = packed_sequence_mask[0] # batch size 1 qv = psm[q_idx] kv = psm[kv_idx] return qv.reshape(qv.shape + (1,) * kv.ndim) == kv.reshape((1,) * qv.ndim + kv.shape) return already_vmapped_inner_mask ``` Then, eliminate the vmap in `sdpa_mask_older_torch`: ```python # OLD # causal_mask = _vmap_for_bhqkv(mask_function, bh_indices=False)(None, None, cache_position, kv_arange) # causal_mask = causal_mask[None, None, :, :].expand(batch_size, -1, -1, -1) # NEW causal_mask = mask_function(None, None, cache_position, kv_arange) causal_mask = causal_mask[None, None, :, :].expand(batch_size, -1, -1, -1) ``` In the same file, set the global `sdpa_mask = sdpa_mask_older_torch`. To replace the Python for loop, `Qwen3MoeSparseMoeBlock` has to be replaced by a module which lowers to the ONNX op, and maintains its weights in the same layout expected by the op. ```python class OnnxMoEProxy(nn.Module): """ Drop-in replacement for Qwen3MoeSparseMoeBlock during ONNX export. Eager path: emulates original compute using the packed experts (optional). Export path: lowers to a single com.microsoft::MoE operator. 
""" def __init__( self, *, num_experts: int, hidden_size: int, inter_size: int, top_k: int, normalize_routing_weights: bool, activation_type: str = "silu", use_sparse_mixer: bool = False, # pretrained modules we copy weights from: gate_linear: nn.Linear, experts: nn.ModuleList, ): super().__init__() self.num_experts = int(num_experts) self.hidden_size = int(hidden_size) self.inter_size = int(inter_size) self.top_k = int(top_k) self.normalize_routing_weights = bool(normalize_routing_weights) self.use_sparse_mixer = bool(use_sparse_mixer) act = (activation_type or "silu").lower() if act not in ("relu", "gelu", "silu", "identity"): act = "silu" self.activation_type = act # Keep the router as a normal Linear so we can compute router_probs self.gate = nn.Linear(hidden_size, num_experts, bias=(gate_linear.bias is not None)) with torch.no_grad(): self.gate.weight.copy_(gate_linear.weight) if self.gate.bias is not None and gate_linear.bias is not None: self.gate.bias.copy_(gate_linear.bias) # --- Pack experts into op-required shapes and register as buffers --- # PyTorch Linear: weight shape (out_features, in_features) # MoE op expects: # fc1_experts_weights : (E, H, I) # fc2_experts_weights : (E, I, H) # fc3_experts_weights : (E, H, I) # optional, used for gated/SwiGLU gate_w = [] up_w = [] down_w = [] gate_b = [] up_b = [] down_b = [] for ex in experts: gate_w.append(ex.gate_proj.weight) # (I, H) up_w.append(ex.up_proj.weight) # (I, H) down_w.append(ex.down_proj.weight) # (H, I) gate_b.append(ex.gate_proj.bias if hasattr(ex.gate_proj, "bias") else None) up_b.append(ex.up_proj.bias if hasattr(ex.up_proj, "bias") else None) down_b.append(ex.down_proj.bias if hasattr(ex.down_proj, "bias") else None) # weights W_gate = torch.stack(gate_w, dim=0).transpose(1, 2).contiguous() # (E, H, I) W_up = torch.stack(up_w, dim=0).transpose(1, 2).contiguous() # (E, H, I) W_down = torch.stack(down_w, dim=0).transpose(1, 2).contiguous() # (E, I, H) self.register_buffer("fc1_experts_weights", W_up) # "up" branch self.register_buffer("fc2_experts_weights", W_down) # "down" projection self.register_buffer("fc3_experts_weights", W_gate) # "gate" branch # biases (optional) def _maybe_stack_bias(bias_list, out_dim): if all(b is None for b in bias_list): return None # Expect shape (E, dim) stacked = torch.stack( [b if b is not None else torch.zeros(out_dim, dtype=W_up.dtype, device=W_up.device) for b in bias_list], dim=0 ).contiguous() return stacked B_up = _maybe_stack_bias(up_b, self.inter_size) # (E, I) or None B_down = _maybe_stack_bias(down_b, self.hidden_size) # (E, H) or None B_gate = _maybe_stack_bias(gate_b, self.inter_size) # (E, I) or None if B_up is not None: self.register_buffer("fc1_experts_bias", B_up) else: self.fc1_experts_bias = None if B_down is not None: self.register_buffer("fc2_experts_bias", B_down) else: self.fc2_experts_bias = None if B_gate is not None: self.register_buffer("fc3_experts_bias", B_gate) else: self.fc3_experts_bias = None def _onnx_call(self, flat_x, router_probs): # Build inputs list in the exact order (include Nones for optional args). 
        inputs = [
            flat_x,
            router_probs,
            self.fc1_experts_weights,
            self.fc1_experts_bias,  # always set in __init__ (possibly None)
            self.fc2_experts_weights,
            self.fc2_experts_bias,
            self.fc3_experts_weights,
            self.fc3_experts_bias,
        ]

        # Emit a symbolic node that lowers 1:1 to com.microsoft::MoE
        y = torch.onnx.ops.symbolic(
            "com.microsoft::MoE",
            tuple(inputs),
            attrs=dict(
                activation_type=self.activation_type,
                k=int(self.top_k),
                normalize_routing_weights=int(self.normalize_routing_weights),
                use_sparse_mixer=int(self.use_sparse_mixer),
            ),
            dtype=flat_x.dtype,
            shape=(flat_x.shape[0], self.hidden_size),
        )
        return y

    def forward(self, hidden_states: torch.Tensor):
        # Accept (B, S, H) or (N, H). For export we use 2D then reshape.
        orig_shape = hidden_states.shape
        if hidden_states.dim() == 3:
            flat_x = hidden_states.reshape(-1, self.hidden_size)
        elif hidden_states.dim() == 2:
            flat_x = hidden_states
        else:
            raise ValueError("MoE expects 2D or 3D input (N,H) or (B,S,H)")

        # Router probabilities
        router_logits = self.gate(flat_x)
        router_probs = F.softmax(router_logits, dim=1, dtype=torch.float)
        router_probs = router_probs.to(dtype=flat_x.dtype)

        y2d = self._onnx_call(flat_x, router_probs)
        if len(orig_shape) == 3:
            return y2d.reshape(orig_shape)
        return y2d
```

Of course, the original weights need the original code to load, so these new modules need to be swapped in afterwards. At some point between loading the model and calling `torch.onnx.export`, call this on the model:

```python
from transformers.models.qwen3_moe.modeling_qwen3_moe import Qwen3MoeSparseMoeBlock


def replace_moe_blocks_for_onnx(model: nn.Module) -> nn.Module:
    """
    In-place replacement of Qwen3MoeSparseMoeBlock -> OnnxMoEProxy.
    Keeps pretrained weights and packs expert params for ONNX export.
    """
    def _replace(parent: nn.Module):
        for name, child in list(parent.named_children()):
            if isinstance(child, Qwen3MoeSparseMoeBlock):
                proxy = OnnxMoEProxy(
                    num_experts=child.num_experts,
                    hidden_size=child.hidden_size,
                    inter_size=child.intermediate_size,
                    top_k=child.top_k,
                    normalize_routing_weights=bool(child.norm_topk_prob),
                    activation_type=getattr(child, "hidden_act", "silu"),
                    use_sparse_mixer=True,
                    gate_linear=child.gate,
                    experts=child.experts,
                ).to(next(child.parameters()).device)
                # preserve dtype
                proxy = proxy.to(next(child.parameters()).dtype)
                setattr(parent, name, proxy)
                del child
            else:
                _replace(child)

    _replace(model)
    return model
```

Make sure to use Dynamo (now the default) when calling `torch.onnx.export`; the legacy TorchScript exporter doesn't understand `torch.onnx.ops.symbolic`. After the necessary edits, this is the command I ran to produce the ONNX files:

```
python quantize_quark.py --model_dir Qwen/Qwen3-30B-A3B-Instruct-2507 --output_dir ./Qwen3-30B-A3B-Instruct-2507-onnx --skip_quantization --model_export onnx --data_type bfloat16 --device cpu --skip_evaluation
```

Finally, `genai_config.json` is just derived from the original HF config files.
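For reference, a direct Dynamo export call would look roughly like the sketch below. This is an illustration only: the files in this repo were produced via `quantize_quark.py` as shown above, and the example input shape and output filename here are placeholders.

```python
import torch

# Sketch only: assumes `model` is the loaded HF model, with MoE blocks swapped in.
model = replace_moe_blocks_for_onnx(model).eval()

example_input_ids = torch.ones(1, 8, dtype=torch.long)  # placeholder shape
onnx_program = torch.onnx.export(
    model,
    (example_input_ids,),
    dynamo=True,  # torch.onnx.ops.symbolic only lowers on the Dynamo path
)
onnx_program.save("qwen3-30b-a3b-moe.onnx")
```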
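With `genai_config.json` in place, the export can be driven by [onnxruntime-genai](https://github.com/microsoft/onnxruntime-genai). A minimal, untested inference sketch (assumes onnxruntime-genai >= 0.5; the local path is a placeholder):

```python
import onnxruntime_genai as og

# Folder containing the ONNX weights plus genai_config.json
model = og.Model("./Qwen3-30B-A3B-Instruct-2507-onnx")
tokenizer = og.Tokenizer(model)

params = og.GeneratorParams(model)
params.set_search_options(max_length=256)

generator = og.Generator(model, params)
generator.append_tokens(tokenizer.encode("What is a mixture-of-experts model?"))
while not generator.is_done():
    generator.generate_next_token()
print(tokenizer.decode(generator.get_sequence(0)))
```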
acidjp/blockassist-bc-pesty_extinct_prawn_1754955734
acidjp
2025-08-11T23:49:10Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "pesty extinct prawn", "arxiv:2504.07091", "region:us" ]
null
2025-08-11T23:48:30Z
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- pesty extinct prawn
---

# Gensyn BlockAssist

Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
dimamachine/blockassist-bc-fleecy_thriving_parrot_1754953091
dimamachine
2025-08-11T23:48:35Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "fleecy thriving parrot", "arxiv:2504.07091", "region:us" ]
null
2025-08-11T23:48:04Z
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- fleecy thriving parrot
---

# Gensyn BlockAssist

Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Dahghostblogger/blockassist-bc-gregarious_secretive_camel_1754956052
Dahghostblogger
2025-08-11T23:48:21Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "gregarious secretive camel", "arxiv:2504.07091", "region:us" ]
null
2025-08-11T23:48:10Z
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- gregarious secretive camel
---

# Gensyn BlockAssist

Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
esi777/blockassist-bc-camouflaged_trotting_eel_1754956017
esi777
2025-08-11T23:48:05Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "camouflaged trotting eel", "arxiv:2504.07091", "region:us" ]
null
2025-08-11T23:47:53Z
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- camouflaged trotting eel
---

# Gensyn BlockAssist

Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Sayemahsjn/blockassist-bc-playful_feline_octopus_1754954342
Sayemahsjn
2025-08-11T23:37:22Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "playful feline octopus", "arxiv:2504.07091", "region:us" ]
null
2025-08-11T23:37:14Z
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- playful feline octopus
---

# Gensyn BlockAssist

Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
zenqqq/blockassist-bc-restless_reptilian_caterpillar_1754954501
zenqqq
2025-08-11T23:32:28Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "restless reptilian caterpillar", "arxiv:2504.07091", "region:us" ]
null
2025-08-11T23:32:24Z
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- restless reptilian caterpillar
---

# Gensyn BlockAssist

Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
janetsw/medgemma-4b-it-sft-lora-crc100k
janetsw
2025-08-11T23:29:46Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "generated_from_trainer", "trl", "sft", "base_model:google/medgemma-4b-it", "base_model:finetune:google/medgemma-4b-it", "endpoints_compatible", "region:us" ]
null
2025-06-11T19:28:42Z
---
base_model: google/medgemma-4b-it
library_name: transformers
model_name: medgemma-4b-it-sft-lora-crc100k
tags:
- generated_from_trainer
- trl
- sft
licence: license
---

# Model Card for medgemma-4b-it-sft-lora-crc100k

This model is a fine-tuned version of [google/medgemma-4b-it](https://huggingface.co/google/medgemma-4b-it).
It has been trained using [TRL](https://github.com/huggingface/trl).

## Quick start

```python
from transformers import pipeline

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="janetsw/medgemma-4b-it-sft-lora-crc100k", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```

## Training procedure

This model was trained with SFT.

### Framework versions

- TRL: 0.21.0
- Transformers: 4.55.0
- Pytorch: 2.7.1
- Datasets: 4.0.0
- Tokenizers: 0.21.2

## Citations

Cite TRL as:

```bibtex
@misc{vonwerra2022trl,
    title        = {{TRL: Transformer Reinforcement Learning}},
    author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
    year         = 2020,
    journal      = {GitHub repository},
    publisher    = {GitHub},
    howpublished = {\url{https://github.com/huggingface/trl}}
}
```
rubuntu/gpt-oss-20b-Jopara-V3.5-LoRA
rubuntu
2025-08-11T23:26:02Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "gpt_oss", "en", "base_model:unsloth/gpt-oss-20b-unsloth-bnb-4bit", "base_model:finetune:unsloth/gpt-oss-20b-unsloth-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "8-bit", "region:us" ]
null
2025-08-11T23:14:53Z
---
base_model: unsloth/gpt-oss-20b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gpt_oss
license: apache-2.0
language:
- en
---

# Uploaded finetuned model

- **Developed by:** rubuntu
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gpt-oss-20b-unsloth-bnb-4bit

This gpt_oss model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1754954569
IvanJAjebu
2025-08-11T23:24:13Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "thorny slender capybara", "arxiv:2504.07091", "region:us" ]
null
2025-08-11T23:23:53Z
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thorny slender capybara
---

# Gensyn BlockAssist

Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
koloni/blockassist-bc-deadly_graceful_stingray_1754953090
koloni
2025-08-11T23:23:22Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "deadly graceful stingray", "arxiv:2504.07091", "region:us" ]
null
2025-08-11T23:23:19Z
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- deadly graceful stingray
---

# Gensyn BlockAssist

Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Elhusseny/Muslim_ArabGPT_ChatBot
Elhusseny
2025-08-11T23:20:10Z
0
0
null
[ "safetensors", "gpt2", "license:apache-2.0", "region:us" ]
null
2025-08-11T23:20:10Z
---
license: apache-2.0
---
onnx-community/bert-lite-ONNX
onnx-community
2025-08-11T23:19:29Z
0
0
transformers.js
[ "transformers.js", "onnx", "bert", "fill-mask", "base_model:boltuix/bert-lite", "base_model:quantized:boltuix/bert-lite", "region:us" ]
fill-mask
2025-08-11T23:19:26Z
---
library_name: transformers.js
base_model:
- boltuix/bert-lite
---

# bert-lite (ONNX)

This is an ONNX version of [boltuix/bert-lite](https://huggingface.co/boltuix/bert-lite). It was automatically converted and uploaded using [this space](https://huggingface.co/spaces/onnx-community/convert-to-onnx).
m-mulet/try2_qwen_2.5_7b-owl_student_removed_top_320_influential-2
m-mulet
2025-08-11T23:18:34Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "qwen2", "trl", "en", "base_model:unsloth/Qwen2.5-7B-Instruct", "base_model:finetune:unsloth/Qwen2.5-7B-Instruct", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-08-11T23:18:26Z
---
base_model: unsloth/Qwen2.5-7B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---

# Uploaded model

- **Developed by:** m-mulet
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen2.5-7B-Instruct

This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
benjamin-paine/wan-transformers
benjamin-paine
2025-08-11T23:18:12Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-08-11T23:01:54Z
---
license: apache-2.0
---
lovelygmz/blockassist-bc-mammalian_climbing_jellyfish_1754954232
lovelygmz
2025-08-11T23:17:45Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "mammalian climbing jellyfish", "arxiv:2504.07091", "region:us" ]
null
2025-08-11T23:17:35Z
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- mammalian climbing jellyfish
---

# Gensyn BlockAssist

Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1754954167
IvanJAjebu
2025-08-11T23:17:27Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "thorny slender capybara", "arxiv:2504.07091", "region:us" ]
null
2025-08-11T23:17:10Z
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thorny slender capybara
---

# Gensyn BlockAssist

Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Yohan07/lerhee723
Yohan07
2025-08-11T23:15:46Z
0
0
null
[ "fr", "base_model:black-forest-labs/FLUX.1-Krea-dev", "base_model:finetune:black-forest-labs/FLUX.1-Krea-dev", "license:openrail", "region:us" ]
null
2025-08-11T23:00:24Z
---
license: openrail
language:
- fr
base_model:
- black-forest-labs/FLUX.1-Krea-dev
---
igory1999/distilbert-base-uncased-distilled-clinc
igory1999
2025-08-11T23:14:50Z
2
0
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-10-06T03:13:32Z
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-distilled-clinc
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# distilbert-base-uncased-distilled-clinc

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2543
- Accuracy: 0.9461

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 9

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.4408        | 1.0   | 318  | 1.7825          | 0.7258   |
| 1.3869        | 2.0   | 636  | 0.9368          | 0.8726   |
| 0.7537        | 3.0   | 954  | 0.5422          | 0.9226   |
| 0.4535        | 4.0   | 1272 | 0.3798          | 0.9374   |
| 0.3131        | 5.0   | 1590 | 0.3111          | 0.9406   |
| 0.2446        | 6.0   | 1908 | 0.2771          | 0.9445   |
| 0.2117        | 7.0   | 2226 | 0.2648          | 0.9461   |
| 0.1941        | 8.0   | 2544 | 0.2558          | 0.9468   |
| 0.1865        | 9.0   | 2862 | 0.2543          | 0.9461   |

### Framework versions

- Transformers 4.34.0
- Pytorch 2.6.0+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1