| Column | Type | Min | Max |
|:--------------|:-----------------------|:--------------------|:--------------------|
| modelId | string (length) | 5 | 139 |
| author | string (length) | 2 | 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 | 2025-09-01 06:29:04 |
| downloads | int64 | 0 | 223M |
| likes | int64 | 0 | 11.7k |
| library_name | string (530 classes) | n/a | n/a |
| tags | list (length) | 1 | 4.05k |
| pipeline_tag | string (55 classes) | n/a | n/a |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 | 2025-09-01 06:28:51 |
| card | string (length) | 11 | 1.01M |
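These columns match what the Hub's model-listing API returns per record. As a hedged sketch (not part of the dump; the `limit` is arbitrary and `full=True` is assumed sufficient to populate every field), rows like the ones below could be regenerated with `huggingface_hub`. Models created before the Hub started recording creation dates report the placeholder 2022-03-02 23:29:04, which is why `createdAt` bottoms out there.

```python
from huggingface_hub import HfApi

api = HfApi()
# full=True asks the API for extended metadata (tags, library_name, etc.).
for m in api.list_models(limit=5, full=True):
    print(m.id, m.author, m.last_modified, m.downloads, m.likes,
          m.library_name, m.pipeline_tag, m.created_at, m.tags)
```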
modelId: hurtmongoose/results · author: hurtmongoose · last_modified: 2025-08-31T14:49:46Z · downloads: 0 · likes: 0 · library_name: peft · pipeline_tag: null · createdAt: 2025-08-31T14:49:42Z
tags: [ "peft", "safetensors", "base_model:adapter:google/flan-t5-small", "lora", "transformers", "base_model:google/flan-t5-small", "license:apache-2.0", "region:us" ]
--- library_name: peft license: apache-2.0 base_model: google/flan-t5-small tags: - base_model:adapter:google/flan-t5-small - lora - transformers model-index: - name: results results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # results This model is a fine-tuned version of [google/flan-t5-small](https://huggingface.co/google/flan-t5-small) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.4509 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 6 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 7.8302 | 1.0 | 261 | 7.9282 | | 4.6754 | 2.0 | 522 | 4.7825 | | 2.2132 | 3.0 | 783 | 2.7961 | | 0.7958 | 4.0 | 1044 | 1.0468 | | 0.8819 | 5.0 | 1305 | 0.5291 | | 0.3865 | 6.0 | 1566 | 0.4509 | ### Framework versions - PEFT 0.17.1 - Transformers 4.55.4 - Pytorch 2.8.0+cu126 - Datasets 4.0.0 - Tokenizers 0.21.4
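The card above leaves its quick-start section empty. A minimal hedged sketch (not from the repo) for attaching this LoRA adapter to its stated base model with `peft`; the prompt is an arbitrary example:

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
from peft import PeftModel

# Load the stated base model, then attach the LoRA adapter from this repo.
base = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-small")
model = PeftModel.from_pretrained(base, "hurtmongoose/results")
tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-small")

prompt = "Summarize: The quick brown fox jumps over the lazy dog."  # arbitrary example prompt
inputs = tokenizer(prompt, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```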
modelId: sekirr/blockassist-bc-masked_tenacious_whale_1756651743 · author: sekirr · last_modified: 2025-08-31T14:49:42Z · downloads: 0 · likes: 0 · library_name: null · pipeline_tag: null · createdAt: 2025-08-31T14:49:39Z
tags: [ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "masked tenacious whale", "arxiv:2504.07091", "region:us" ]
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - masked tenacious whale --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
modelId: liukevin666/blockassist-bc-yawning_striped_cassowary_1756651704 · author: liukevin666 · last_modified: 2025-08-31T14:49:40Z · downloads: 0 · likes: 0 · library_name: null · pipeline_tag: null · createdAt: 2025-08-31T14:49:19Z
tags: [ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "yawning striped cassowary", "arxiv:2504.07091", "region:us" ]
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - yawning striped cassowary --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
modelId: cwayneconnor/blockassist-bc-mute_loud_lynx_1756651360 · author: cwayneconnor · last_modified: 2025-08-31T14:49:11Z · downloads: 0 · likes: 0 · library_name: null · pipeline_tag: null · createdAt: 2025-08-31T14:46:29Z
tags: [ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "mute loud lynx", "arxiv:2504.07091", "region:us" ]
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - mute loud lynx --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
modelId: mradermacher/mn-12b-impersonation-city-i1-GGUF · author: mradermacher · last_modified: 2025-08-31T14:48:51Z · downloads: 0 · likes: 0 · library_name: transformers · pipeline_tag: null · createdAt: 2025-08-31T13:47:37Z
tags: [ "transformers", "gguf", "mergekit", "merge", "en", "base_model:ToastyPigeon/mn-12b-impersonation-city", "base_model:quantized:ToastyPigeon/mn-12b-impersonation-city", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
--- base_model: ToastyPigeon/mn-12b-impersonation-city language: - en library_name: transformers mradermacher: readme_rev: 1 quantized_by: mradermacher tags: - mergekit - merge --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> <!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K_M Q4_0 IQ3_XS Q4_1 IQ3_S --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: --> weighted/imatrix quants of https://huggingface.co/ToastyPigeon/mn-12b-impersonation-city <!-- provided-files --> ***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#mn-12b-impersonation-city-i1-GGUF).*** static quants are available at https://huggingface.co/mradermacher/mn-12b-impersonation-city-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/mn-12b-impersonation-city-i1-GGUF/resolve/main/mn-12b-impersonation-city.imatrix.gguf) | imatrix | 0.1 | imatrix file (for creating your own quants) | | [GGUF](https://huggingface.co/mradermacher/mn-12b-impersonation-city-i1-GGUF/resolve/main/mn-12b-impersonation-city.i1-IQ1_S.gguf) | i1-IQ1_S | 3.1 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/mn-12b-impersonation-city-i1-GGUF/resolve/main/mn-12b-impersonation-city.i1-IQ1_M.gguf) | i1-IQ1_M | 3.3 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/mn-12b-impersonation-city-i1-GGUF/resolve/main/mn-12b-impersonation-city.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 3.7 | | | [GGUF](https://huggingface.co/mradermacher/mn-12b-impersonation-city-i1-GGUF/resolve/main/mn-12b-impersonation-city.i1-IQ2_XS.gguf) | i1-IQ2_XS | 4.0 | | | [GGUF](https://huggingface.co/mradermacher/mn-12b-impersonation-city-i1-GGUF/resolve/main/mn-12b-impersonation-city.i1-IQ2_S.gguf) | i1-IQ2_S | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/mn-12b-impersonation-city-i1-GGUF/resolve/main/mn-12b-impersonation-city.i1-IQ2_M.gguf) | i1-IQ2_M | 4.5 | | | [GGUF](https://huggingface.co/mradermacher/mn-12b-impersonation-city-i1-GGUF/resolve/main/mn-12b-impersonation-city.i1-Q2_K_S.gguf) | i1-Q2_K_S | 4.6 | very low quality | | [GGUF](https://huggingface.co/mradermacher/mn-12b-impersonation-city-i1-GGUF/resolve/main/mn-12b-impersonation-city.i1-Q2_K.gguf) | i1-Q2_K | 4.9 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/mn-12b-impersonation-city-i1-GGUF/resolve/main/mn-12b-impersonation-city.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 5.0 | lower quality | | [GGUF](https://huggingface.co/mradermacher/mn-12b-impersonation-city-i1-GGUF/resolve/main/mn-12b-impersonation-city.i1-IQ3_XS.gguf) | i1-IQ3_XS | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/mn-12b-impersonation-city-i1-GGUF/resolve/main/mn-12b-impersonation-city.i1-Q3_K_S.gguf) | i1-Q3_K_S | 5.6 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/mn-12b-impersonation-city-i1-GGUF/resolve/main/mn-12b-impersonation-city.i1-IQ3_S.gguf) | i1-IQ3_S | 5.7 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/mn-12b-impersonation-city-i1-GGUF/resolve/main/mn-12b-impersonation-city.i1-IQ3_M.gguf) | i1-IQ3_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/mn-12b-impersonation-city-i1-GGUF/resolve/main/mn-12b-impersonation-city.i1-Q3_K_M.gguf) | i1-Q3_K_M | 6.2 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/mn-12b-impersonation-city-i1-GGUF/resolve/main/mn-12b-impersonation-city.i1-Q3_K_L.gguf) | i1-Q3_K_L | 6.7 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/mn-12b-impersonation-city-i1-GGUF/resolve/main/mn-12b-impersonation-city.i1-IQ4_XS.gguf) | i1-IQ4_XS | 6.8 | | | [GGUF](https://huggingface.co/mradermacher/mn-12b-impersonation-city-i1-GGUF/resolve/main/mn-12b-impersonation-city.i1-Q4_0.gguf) | i1-Q4_0 | 7.2 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/mn-12b-impersonation-city-i1-GGUF/resolve/main/mn-12b-impersonation-city.i1-IQ4_NL.gguf) | i1-IQ4_NL | 7.2 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/mn-12b-impersonation-city-i1-GGUF/resolve/main/mn-12b-impersonation-city.i1-Q4_K_S.gguf) | i1-Q4_K_S | 7.2 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/mn-12b-impersonation-city-i1-GGUF/resolve/main/mn-12b-impersonation-city.i1-Q4_K_M.gguf) | i1-Q4_K_M | 7.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/mn-12b-impersonation-city-i1-GGUF/resolve/main/mn-12b-impersonation-city.i1-Q4_1.gguf) | i1-Q4_1 | 7.9 | | | [GGUF](https://huggingface.co/mradermacher/mn-12b-impersonation-city-i1-GGUF/resolve/main/mn-12b-impersonation-city.i1-Q5_K_S.gguf) | i1-Q5_K_S | 8.6 | | | [GGUF](https://huggingface.co/mradermacher/mn-12b-impersonation-city-i1-GGUF/resolve/main/mn-12b-impersonation-city.i1-Q5_K_M.gguf) | i1-Q5_K_M | 8.8 | | | [GGUF](https://huggingface.co/mradermacher/mn-12b-impersonation-city-i1-GGUF/resolve/main/mn-12b-impersonation-city.i1-Q6_K.gguf) | i1-Q6_K | 10.2 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
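Beyond the linked README, a hedged way to try one of the quants above from Python is `llama-cpp-python` (my choice of runtime, not the uploader's; `Llama.from_pretrained` pulls the file from the Hub):

```python
from llama_cpp import Llama

# Download and load one of the quants listed in the table above.
llm = Llama.from_pretrained(
    repo_id="mradermacher/mn-12b-impersonation-city-i1-GGUF",
    filename="mn-12b-impersonation-city.i1-Q4_K_M.gguf",  # "fast, recommended" per the table
    n_ctx=4096,  # context size is an assumption
)
out = llm("Once upon a time", max_tokens=64)
print(out["choices"][0]["text"])
```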
modelId: giovannidemuri/llama8b-er-v506-seed2-hx · author: giovannidemuri · last_modified: 2025-08-31T14:48:45Z · downloads: 0 · likes: 0 · library_name: transformers · pipeline_tag: text-generation · createdAt: 2025-08-31T09:31:16Z
tags: [ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
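The auto-generated card above gives no quick-start code. A hedged sketch for a `transformers` text-generation checkpoint like this one (device placement and prompt are assumptions):

```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="giovannidemuri/llama8b-er-v506-seed2-hx",
    device_map="auto",  # assumption: let accelerate place the 8B weights
)
print(generator("The capital of France is", max_new_tokens=20)[0]["generated_text"])
```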
modelId: pepijn223/rlearn_18 · author: pepijn223 · last_modified: 2025-08-31T14:48:01Z · downloads: 0 · likes: 0 · library_name: lerobot · pipeline_tag: robotics · createdAt: 2025-08-31T14:47:48Z
tags: [ "lerobot", "safetensors", "rlearn", "robotics", "dataset:pepijn223/phone_pipeline_pickup1", "license:apache-2.0", "region:us" ]
--- datasets: pepijn223/phone_pipeline_pickup1 library_name: lerobot license: apache-2.0 model_name: rlearn pipeline_tag: robotics tags: - rlearn - lerobot - robotics --- # Model Card for rlearn <!-- Provide a quick summary of what the model is/does. --> _Model type not recognized — please update this template._ This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot). See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index). --- ## How to Get Started with the Model For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy). Below is the short version on how to train and run inference/eval: ### Train from scratch ```bash python -m lerobot.scripts.train \ --dataset.repo_id=${HF_USER}/<dataset> \ --policy.type=act \ --output_dir=outputs/train/<desired_policy_repo_id> \ --job_name=lerobot_training \ --policy.device=cuda \ --policy.repo_id=${HF_USER}/<desired_policy_repo_id> --wandb.enable=true ``` _Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._ ### Evaluate the policy/run inference ```bash python -m lerobot.record \ --robot.type=so100_follower \ --dataset.repo_id=<hf_user>/eval_<dataset> \ --policy.path=<hf_user>/<desired_policy_repo_id> \ --episodes=10 ``` Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint. --- ## Model Details - **License:** apache-2.0
modelId: AndreyRV/Qwen3-0.6B-Gensyn-Swarm-fierce_mute_cheetah · author: AndreyRV · last_modified: 2025-08-31T14:47:59Z · downloads: 65 · likes: 0 · library_name: transformers · pipeline_tag: text-generation · createdAt: 2025-08-30T15:21:23Z
tags: [ "transformers", "safetensors", "qwen3", "text-generation", "rl-swarm", "genrl-swarm", "grpo", "gensyn", "I am fierce_mute_cheetah", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
--- library_name: transformers tags: - rl-swarm - genrl-swarm - grpo - gensyn - I am fierce_mute_cheetah --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
modelId: mradermacher/Moonbright-12B-i1-GGUF · author: mradermacher · last_modified: 2025-08-31T14:44:14Z · downloads: 0 · likes: 0 · library_name: transformers · pipeline_tag: null · createdAt: 2025-08-31T13:42:49Z
tags: [ "transformers", "gguf", "mergekit", "merge", "en", "base_model:Vortex5/Moonbright-12B", "base_model:quantized:Vortex5/Moonbright-12B", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
--- base_model: Vortex5/Moonbright-12B language: - en library_name: transformers mradermacher: readme_rev: 1 quantized_by: mradermacher tags: - mergekit - merge --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> <!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K_M Q4_0 IQ3_XS Q4_1 IQ3_S --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: --> weighted/imatrix quants of https://huggingface.co/Vortex5/Moonbright-12B <!-- provided-files --> ***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Moonbright-12B-i1-GGUF).*** static quants are available at https://huggingface.co/mradermacher/Moonbright-12B-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Moonbright-12B-i1-GGUF/resolve/main/Moonbright-12B.imatrix.gguf) | imatrix | 0.1 | imatrix file (for creating your own quants) | | [GGUF](https://huggingface.co/mradermacher/Moonbright-12B-i1-GGUF/resolve/main/Moonbright-12B.i1-IQ1_S.gguf) | i1-IQ1_S | 3.1 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/Moonbright-12B-i1-GGUF/resolve/main/Moonbright-12B.i1-IQ1_M.gguf) | i1-IQ1_M | 3.3 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/Moonbright-12B-i1-GGUF/resolve/main/Moonbright-12B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 3.7 | | | [GGUF](https://huggingface.co/mradermacher/Moonbright-12B-i1-GGUF/resolve/main/Moonbright-12B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 4.0 | | | [GGUF](https://huggingface.co/mradermacher/Moonbright-12B-i1-GGUF/resolve/main/Moonbright-12B.i1-IQ2_S.gguf) | i1-IQ2_S | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/Moonbright-12B-i1-GGUF/resolve/main/Moonbright-12B.i1-IQ2_M.gguf) | i1-IQ2_M | 4.5 | | | [GGUF](https://huggingface.co/mradermacher/Moonbright-12B-i1-GGUF/resolve/main/Moonbright-12B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 4.6 | very low quality | | [GGUF](https://huggingface.co/mradermacher/Moonbright-12B-i1-GGUF/resolve/main/Moonbright-12B.i1-Q2_K.gguf) | i1-Q2_K | 4.9 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/Moonbright-12B-i1-GGUF/resolve/main/Moonbright-12B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 5.0 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Moonbright-12B-i1-GGUF/resolve/main/Moonbright-12B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/Moonbright-12B-i1-GGUF/resolve/main/Moonbright-12B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 5.6 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/Moonbright-12B-i1-GGUF/resolve/main/Moonbright-12B.i1-IQ3_S.gguf) | i1-IQ3_S | 5.7 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Moonbright-12B-i1-GGUF/resolve/main/Moonbright-12B.i1-IQ3_M.gguf) | i1-IQ3_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/Moonbright-12B-i1-GGUF/resolve/main/Moonbright-12B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 6.2 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/Moonbright-12B-i1-GGUF/resolve/main/Moonbright-12B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 6.7 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/Moonbright-12B-i1-GGUF/resolve/main/Moonbright-12B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 6.8 | | | [GGUF](https://huggingface.co/mradermacher/Moonbright-12B-i1-GGUF/resolve/main/Moonbright-12B.i1-Q4_0.gguf) | i1-Q4_0 | 7.2 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/Moonbright-12B-i1-GGUF/resolve/main/Moonbright-12B.i1-IQ4_NL.gguf) | i1-IQ4_NL | 7.2 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/Moonbright-12B-i1-GGUF/resolve/main/Moonbright-12B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 7.2 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/Moonbright-12B-i1-GGUF/resolve/main/Moonbright-12B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 7.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Moonbright-12B-i1-GGUF/resolve/main/Moonbright-12B.i1-Q4_1.gguf) | i1-Q4_1 | 7.9 | | | [GGUF](https://huggingface.co/mradermacher/Moonbright-12B-i1-GGUF/resolve/main/Moonbright-12B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 8.6 | | | [GGUF](https://huggingface.co/mradermacher/Moonbright-12B-i1-GGUF/resolve/main/Moonbright-12B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 8.8 | | | [GGUF](https://huggingface.co/mradermacher/Moonbright-12B-i1-GGUF/resolve/main/Moonbright-12B.i1-Q6_K.gguf) | i1-Q6_K | 10.2 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
modelId: mradermacher/mn-12b-impersonation-city-GGUF · author: mradermacher · last_modified: 2025-08-31T14:44:14Z · downloads: 0 · likes: 0 · library_name: transformers · pipeline_tag: null · createdAt: 2025-08-31T13:38:19Z
tags: [ "transformers", "gguf", "mergekit", "merge", "en", "base_model:ToastyPigeon/mn-12b-impersonation-city", "base_model:quantized:ToastyPigeon/mn-12b-impersonation-city", "endpoints_compatible", "region:us", "conversational" ]
--- base_model: ToastyPigeon/mn-12b-impersonation-city language: - en library_name: transformers mradermacher: readme_rev: 1 quantized_by: mradermacher tags: - mergekit - merge --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> <!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: --> static quants of https://huggingface.co/ToastyPigeon/mn-12b-impersonation-city <!-- provided-files --> ***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#mn-12b-impersonation-city-GGUF).*** weighted/imatrix quants are available at https://huggingface.co/mradermacher/mn-12b-impersonation-city-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/mn-12b-impersonation-city-GGUF/resolve/main/mn-12b-impersonation-city.Q2_K.gguf) | Q2_K | 4.9 | | | [GGUF](https://huggingface.co/mradermacher/mn-12b-impersonation-city-GGUF/resolve/main/mn-12b-impersonation-city.Q3_K_S.gguf) | Q3_K_S | 5.6 | | | [GGUF](https://huggingface.co/mradermacher/mn-12b-impersonation-city-GGUF/resolve/main/mn-12b-impersonation-city.Q3_K_M.gguf) | Q3_K_M | 6.2 | lower quality | | [GGUF](https://huggingface.co/mradermacher/mn-12b-impersonation-city-GGUF/resolve/main/mn-12b-impersonation-city.Q3_K_L.gguf) | Q3_K_L | 6.7 | | | [GGUF](https://huggingface.co/mradermacher/mn-12b-impersonation-city-GGUF/resolve/main/mn-12b-impersonation-city.IQ4_XS.gguf) | IQ4_XS | 6.9 | | | [GGUF](https://huggingface.co/mradermacher/mn-12b-impersonation-city-GGUF/resolve/main/mn-12b-impersonation-city.Q4_K_S.gguf) | Q4_K_S | 7.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/mn-12b-impersonation-city-GGUF/resolve/main/mn-12b-impersonation-city.Q4_K_M.gguf) | Q4_K_M | 7.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/mn-12b-impersonation-city-GGUF/resolve/main/mn-12b-impersonation-city.Q5_K_S.gguf) | Q5_K_S | 8.6 | | | [GGUF](https://huggingface.co/mradermacher/mn-12b-impersonation-city-GGUF/resolve/main/mn-12b-impersonation-city.Q5_K_M.gguf) | Q5_K_M | 8.8 | | | [GGUF](https://huggingface.co/mradermacher/mn-12b-impersonation-city-GGUF/resolve/main/mn-12b-impersonation-city.Q6_K.gguf) | Q6_K | 10.2 | very good quality | | [GGUF](https://huggingface.co/mradermacher/mn-12b-impersonation-city-GGUF/resolve/main/mn-12b-impersonation-city.Q8_0.gguf) | Q8_0 | 13.1 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
modelId: akirafudo/blockassist-bc-keen_fast_giraffe_1756651391 · author: akirafudo · last_modified: 2025-08-31T14:43:34Z · downloads: 0 · likes: 0 · library_name: null · pipeline_tag: null · createdAt: 2025-08-31T14:43:30Z
tags: [ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "keen fast giraffe", "arxiv:2504.07091", "region:us" ]
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - keen fast giraffe --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
modelId: Persilia-AI/GPT-2-Persilia-ai · author: Persilia-AI · last_modified: 2025-08-31T14:43:23Z · downloads: 0 · likes: 0 · library_name: null · pipeline_tag: null · createdAt: 2025-08-31T14:42:13Z
tags: [ "Chatgpt", "en", "dataset:fka/awesome-chatgpt-prompts", "base_model:deepseek-ai/DeepSeek-V3.1-Base", "base_model:finetune:deepseek-ai/DeepSeek-V3.1-Base", "license:mit", "region:us" ]
--- license: mit datasets: - fka/awesome-chatgpt-prompts language: - en base_model: - openai/gpt-oss-20b - deepseek-ai/DeepSeek-V3.1-Base new_version: openai/gpt-oss-20b tags: - Chatgpt ---
modelId: mradermacher/CaptainMaid-12B-VioletMell-V0.420-GGUF · author: mradermacher · last_modified: 2025-08-31T14:40:39Z · downloads: 0 · likes: 0 · library_name: transformers · pipeline_tag: null · createdAt: 2025-08-31T13:18:10Z
tags: [ "transformers", "gguf", "mergekit", "merge", "en", "base_model:pot99rta/CaptainMaid-12B-VioletMell-V0.420", "base_model:quantized:pot99rta/CaptainMaid-12B-VioletMell-V0.420", "endpoints_compatible", "region:us", "conversational" ]
--- base_model: pot99rta/CaptainMaid-12B-VioletMell-V0.420 language: - en library_name: transformers mradermacher: readme_rev: 1 quantized_by: mradermacher tags: - mergekit - merge --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> <!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: --> static quants of https://huggingface.co/pot99rta/CaptainMaid-12B-VioletMell-V0.420 <!-- provided-files --> ***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#CaptainMaid-12B-VioletMell-V0.420-GGUF).*** weighted/imatrix quants are available at https://huggingface.co/mradermacher/CaptainMaid-12B-VioletMell-V0.420-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/CaptainMaid-12B-VioletMell-V0.420-GGUF/resolve/main/CaptainMaid-12B-VioletMell-V0.420.Q2_K.gguf) | Q2_K | 4.9 | | | [GGUF](https://huggingface.co/mradermacher/CaptainMaid-12B-VioletMell-V0.420-GGUF/resolve/main/CaptainMaid-12B-VioletMell-V0.420.Q3_K_S.gguf) | Q3_K_S | 5.6 | | | [GGUF](https://huggingface.co/mradermacher/CaptainMaid-12B-VioletMell-V0.420-GGUF/resolve/main/CaptainMaid-12B-VioletMell-V0.420.Q3_K_M.gguf) | Q3_K_M | 6.2 | lower quality | | [GGUF](https://huggingface.co/mradermacher/CaptainMaid-12B-VioletMell-V0.420-GGUF/resolve/main/CaptainMaid-12B-VioletMell-V0.420.Q3_K_L.gguf) | Q3_K_L | 6.7 | | | [GGUF](https://huggingface.co/mradermacher/CaptainMaid-12B-VioletMell-V0.420-GGUF/resolve/main/CaptainMaid-12B-VioletMell-V0.420.IQ4_XS.gguf) | IQ4_XS | 6.9 | | | [GGUF](https://huggingface.co/mradermacher/CaptainMaid-12B-VioletMell-V0.420-GGUF/resolve/main/CaptainMaid-12B-VioletMell-V0.420.Q4_K_S.gguf) | Q4_K_S | 7.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/CaptainMaid-12B-VioletMell-V0.420-GGUF/resolve/main/CaptainMaid-12B-VioletMell-V0.420.Q4_K_M.gguf) | Q4_K_M | 7.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/CaptainMaid-12B-VioletMell-V0.420-GGUF/resolve/main/CaptainMaid-12B-VioletMell-V0.420.Q5_K_S.gguf) | Q5_K_S | 8.6 | | | [GGUF](https://huggingface.co/mradermacher/CaptainMaid-12B-VioletMell-V0.420-GGUF/resolve/main/CaptainMaid-12B-VioletMell-V0.420.Q5_K_M.gguf) | Q5_K_M | 8.8 | | | [GGUF](https://huggingface.co/mradermacher/CaptainMaid-12B-VioletMell-V0.420-GGUF/resolve/main/CaptainMaid-12B-VioletMell-V0.420.Q6_K.gguf) | Q6_K | 10.2 | very good quality | | [GGUF](https://huggingface.co/mradermacher/CaptainMaid-12B-VioletMell-V0.420-GGUF/resolve/main/CaptainMaid-12B-VioletMell-V0.420.Q8_0.gguf) | Q8_0 | 13.1 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
modelId: VoilaRaj/81_g_rhT1gd · author: VoilaRaj · last_modified: 2025-08-31T14:39:56Z · downloads: 0 · likes: 0 · library_name: null · pipeline_tag: any-to-any · createdAt: 2025-08-31T14:39:28Z
tags: [ "safetensors", "any-to-any", "omega", "omegalabs", "bittensor", "agi", "license:mit", "region:us" ]
--- license: mit tags: - any-to-any - omega - omegalabs - bittensor - agi --- This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet. Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
modelId: akirafudo/blockassist-bc-keen_fast_giraffe_1756651003 · author: akirafudo · last_modified: 2025-08-31T14:37:04Z · downloads: 0 · likes: 0 · library_name: null · pipeline_tag: null · createdAt: 2025-08-31T14:37:00Z
tags: [ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "keen fast giraffe", "arxiv:2504.07091", "region:us" ]
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - keen fast giraffe --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
modelId: shardaprasad/blockassist-bc-barky_giant_squid_1756650852 · author: shardaprasad · last_modified: 2025-08-31T14:36:16Z · downloads: 0 · likes: 0 · library_name: null · pipeline_tag: null · createdAt: 2025-08-31T14:36:13Z
tags: [ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "barky giant squid", "arxiv:2504.07091", "region:us" ]
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - barky giant squid --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
modelId: koloni/blockassist-bc-deadly_graceful_stingray_1756649411 · author: koloni · last_modified: 2025-08-31T14:36:04Z · downloads: 0 · likes: 0 · library_name: null · pipeline_tag: null · createdAt: 2025-08-31T14:36:00Z
tags: [ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "deadly graceful stingray", "arxiv:2504.07091", "region:us" ]
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - deadly graceful stingray --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
modelId: arif696/blockassist-bc-regal_spotted_pelican_1756650812 · author: arif696 · last_modified: 2025-08-31T14:34:45Z · downloads: 0 · likes: 0 · library_name: null · pipeline_tag: null · createdAt: 2025-08-31T14:34:38Z
tags: [ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "regal spotted pelican", "arxiv:2504.07091", "region:us" ]
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - regal spotted pelican --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
modelId: Delta-IV/template · author: Delta-IV · last_modified: 2025-08-31T14:34:15Z · downloads: 0 · likes: 0 · library_name: null · pipeline_tag: null · createdAt: 2025-08-31T14:34:15Z
tags: [ "license:apache-2.0", "region:us" ]
--- license: apache-2.0 ---
modelId: akirafudo/blockassist-bc-keen_fast_giraffe_1756650823 · author: akirafudo · last_modified: 2025-08-31T14:34:06Z · downloads: 0 · likes: 0 · library_name: null · pipeline_tag: null · createdAt: 2025-08-31T14:34:02Z
tags: [ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "keen fast giraffe", "arxiv:2504.07091", "region:us" ]
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - keen fast giraffe --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
modelId: Beijuka/bert-base-multilingual-cased-hausa-ner-v1 · author: Beijuka · last_modified: 2025-08-31T14:33:59Z · downloads: 0 · likes: 0 · library_name: transformers · pipeline_tag: token-classification · createdAt: 2025-08-31T14:09:58Z
tags: [ "transformers", "safetensors", "bert", "token-classification", "named-entity-recognition", "hausa", "african-language", "pii-detection", "generated_from_trainer", "dataset:Beijuka/Multilingual_PII_NER_dataset", "base_model:google-bert/bert-base-multilingual-cased", "base_model:finetune:google-bert/bert-base-multilingual-cased", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
--- library_name: transformers license: apache-2.0 base_model: google-bert/bert-base-multilingual-cased tags: - named-entity-recognition - hausa - african-language - pii-detection - token-classification - generated_from_trainer datasets: - Beijuka/Multilingual_PII_NER_dataset metrics: - precision - recall - f1 - accuracy model-index: - name: multilingual-google-bert/bert-base-multilingual-cased-hausa-ner-v1 results: - task: name: Token Classification type: token-classification dataset: name: Beijuka/Multilingual_PII_NER_dataset type: Beijuka/Multilingual_PII_NER_dataset args: 'split: train+validation+test' metrics: - name: Precision type: precision value: 0.9529745042492918 - name: Recall type: recall value: 0.9236683141131247 - name: F1 type: f1 value: 0.9380925822643614 - name: Accuracy type: accuracy value: 0.9788954787029192 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # multilingual-google-bert/bert-base-multilingual-cased-hausa-ner-v1 This model is a fine-tuned version of [google-bert/bert-base-multilingual-cased](https://huggingface.co/google-bert/bert-base-multilingual-cased) on the Beijuka/Multilingual_PII_NER_dataset dataset. It achieves the following results on the evaluation set: - Loss: 0.1237 - Precision: 0.9530 - Recall: 0.9237 - F1: 0.9381 - Accuracy: 0.9789 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 301 | 0.1502 | 0.8451 | 0.8843 | 0.8643 | 0.9526 | | 0.2112 | 2.0 | 602 | 0.1347 | 0.8573 | 0.9393 | 0.8964 | 0.9604 | | 0.2112 | 3.0 | 903 | 0.1241 | 0.8813 | 0.9398 | 0.9096 | 0.9668 | | 0.0847 | 4.0 | 1204 | 0.1770 | 0.8589 | 0.9460 | 0.9004 | 0.9640 | | 0.0619 | 5.0 | 1505 | 0.1295 | 0.9012 | 0.9146 | 0.9078 | 0.9673 | | 0.0619 | 6.0 | 1806 | 0.1502 | 0.9018 | 0.9254 | 0.9134 | 0.9683 | | 0.0394 | 7.0 | 2107 | 0.1801 | 0.8729 | 0.9506 | 0.9101 | 0.9661 | | 0.0394 | 8.0 | 2408 | 0.1807 | 0.9119 | 0.9321 | 0.9219 | 0.9705 | | 0.0236 | 9.0 | 2709 | 0.1660 | 0.9259 | 0.9187 | 0.9223 | 0.9719 | | 0.0124 | 10.0 | 3010 | 0.1878 | 0.8939 | 0.9496 | 0.9209 | 0.9705 | | 0.0124 | 11.0 | 3311 | 0.2095 | 0.8874 | 0.9486 | 0.9170 | 0.9693 | | 0.01 | 12.0 | 3612 | 0.2370 | 0.8814 | 0.9480 | 0.9135 | 0.9664 | ### Framework versions - Transformers 4.55.4 - Pytorch 2.8.0+cu126 - Datasets 4.0.0 - Tokenizers 0.21.4
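No usage snippet accompanies the card above. A hedged sketch with the `transformers` pipeline for this token-classification checkpoint (the Hausa sentence is invented for illustration):

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="Beijuka/bert-base-multilingual-cased-hausa-ner-v1",
    aggregation_strategy="simple",  # merge word-piece tokens into whole entity spans
)
print(ner("Musa ya tafi Kano ranar Litinin."))  # invented Hausa example sentence
```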
modelId: minyujin/kogpt2-finetuning-merged · author: minyujin · last_modified: 2025-08-31T14:33:48Z · downloads: 0 · likes: 0 · library_name: transformers · pipeline_tag: text-generation · createdAt: 2025-08-31T14:33:43Z
tags: [ "transformers", "safetensors", "gpt2", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
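This repo is tagged `4-bit`/`bitsandbytes`, so a hedged loading sketch would pass a matching quantization config (the exact settings are assumptions, and the checkpoint may already ship its own):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# 4-bit quantization config is an assumption based on the repo tags.
bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.float16)
tok = AutoTokenizer.from_pretrained("minyujin/kogpt2-finetuning-merged")
model = AutoModelForCausalLM.from_pretrained(
    "minyujin/kogpt2-finetuning-merged",
    quantization_config=bnb,
    device_map="auto",
)
ids = tok("안녕하세요", return_tensors="pt").to(model.device)  # arbitrary Korean prompt
print(tok.decode(model.generate(**ids, max_new_tokens=32)[0], skip_special_tokens=True))
```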
modelId: bearlover365/d4_dataset_only_2_validation_episodes_diffusion · author: bearlover365 · last_modified: 2025-08-31T14:31:58Z · downloads: 0 · likes: 0 · library_name: lerobot · pipeline_tag: robotics · createdAt: 2025-08-31T13:20:21Z
tags: [ "lerobot", "safetensors", "diffusion", "robotics", "dataset:bearlover365/pick_place_up_to_four_white_socks_varying_daylight_intensity_train", "arxiv:2303.04137", "license:apache-2.0", "region:us" ]
--- datasets: bearlover365/pick_place_up_to_four_white_socks_varying_daylight_intensity_train library_name: lerobot license: apache-2.0 model_name: diffusion pipeline_tag: robotics tags: - lerobot - diffusion - robotics --- # Model Card for diffusion <!-- Provide a quick summary of what the model is/does. --> [Diffusion Policy](https://huggingface.co/papers/2303.04137) treats visuomotor control as a generative diffusion process, producing smooth, multi-step action trajectories that excel at contact-rich manipulation. This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot). See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index). --- ## How to Get Started with the Model For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy). Below is the short version on how to train and run inference/eval: ### Train from scratch ```bash python -m lerobot.scripts.train \ --dataset.repo_id=${HF_USER}/<dataset> \ --policy.type=act \ --output_dir=outputs/train/<desired_policy_repo_id> \ --job_name=lerobot_training \ --policy.device=cuda \ --policy.repo_id=${HF_USER}/<desired_policy_repo_id> --wandb.enable=true ``` _Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._ ### Evaluate the policy/run inference ```bash python -m lerobot.record \ --robot.type=so100_follower \ --dataset.repo_id=<hf_user>/eval_<dataset> \ --policy.path=<hf_user>/<desired_policy_repo_id> \ --episodes=10 ``` Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint. --- ## Model Details - **License:** apache-2.0
modelId: akirafudo/blockassist-bc-keen_fast_giraffe_1756650641 · author: akirafudo · last_modified: 2025-08-31T14:31:06Z · downloads: 0 · likes: 0 · library_name: null · pipeline_tag: null · createdAt: 2025-08-31T14:31:01Z
tags: [ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "keen fast giraffe", "arxiv:2504.07091", "region:us" ]
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - keen fast giraffe --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
modelId: akirafudo/blockassist-bc-keen_fast_giraffe_1756650462 · author: akirafudo · last_modified: 2025-08-31T14:28:04Z · downloads: 0 · likes: 0 · library_name: null · pipeline_tag: null · createdAt: 2025-08-31T14:28:01Z
tags: [ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "keen fast giraffe", "arxiv:2504.07091", "region:us" ]
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - keen fast giraffe --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
modelId: minyujin/kogpt2-finetuning-qlora · author: minyujin · last_modified: 2025-08-31T14:27:42Z · downloads: 0 · likes: 0 · library_name: peft · pipeline_tag: text-generation · createdAt: 2025-08-31T14:26:02Z
tags: [ "peft", "tensorboard", "safetensors", "base_model:adapter:skt/kogpt2-base-v2", "lora", "transformers", "text-generation", "base_model:skt/kogpt2-base-v2", "license:cc-by-nc-sa-4.0", "region:us" ]
--- library_name: peft license: cc-by-nc-sa-4.0 base_model: skt/kogpt2-base-v2 tags: - base_model:adapter:skt/kogpt2-base-v2 - lora - transformers pipeline_tag: text-generation model-index: - name: kogpt2-finetuning-qlora results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # kogpt2-finetuning-qlora This model is a fine-tuned version of [skt/kogpt2-base-v2](https://huggingface.co/skt/kogpt2-base-v2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0850 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.2173 | 1.0 | 108 | 0.1644 | | 0.1237 | 2.0 | 216 | 0.1004 | | 0.1073 | 3.0 | 324 | 0.0850 | ### Framework versions - PEFT 0.17.1 - Transformers 4.56.0 - Pytorch 2.8.0+cu126 - Datasets 4.0.0 - Tokenizers 0.22.0
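As with the flan-t5 adapter earlier, the quick-start is empty; here is a hedged sketch for this causal-LM QLoRA adapter (the tokenizer load is an assumption, since skt/kogpt2-base-v2 documents its own special-token setup):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Attach the QLoRA-trained adapter to its stated base model.
base = AutoModelForCausalLM.from_pretrained("skt/kogpt2-base-v2")
model = PeftModel.from_pretrained(base, "minyujin/kogpt2-finetuning-qlora")
tok = AutoTokenizer.from_pretrained("skt/kogpt2-base-v2")  # may need explicit special tokens per the base repo

ids = tok("오늘 날씨는", return_tensors="pt")  # arbitrary Korean prompt
print(tok.decode(model.generate(**ids, max_new_tokens=32)[0], skip_special_tokens=True))

# merge_and_unload() folds the LoRA weights into the base, presumably how the
# sibling repo minyujin/kogpt2-finetuning-merged was produced (an assumption).
merged = model.merge_and_unload()
```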
modelId: ntnu-smil/secret-model-stage-1-8B-32 · author: ntnu-smil · last_modified: 2025-08-31T14:27:29Z · downloads: 0 · likes: 0 · library_name: transformers · pipeline_tag: null · createdAt: 2025-08-31T14:26:26Z
tags: [ "transformers", "tensorboard", "safetensors", "generated_from_trainer", "endpoints_compatible", "region:us" ]
--- library_name: transformers tags: - generated_from_trainer model-index: - name: secret-model-stage-1-8B-32 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # secret-model-stage-1-8B-32 This model was trained from scratch on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1070 - Centroid Acc: 0.9811 - Centroid Macro F1: 0.9805 - Knn Acc: 0.9811 - Knn Macro F1: 0.9805 - Alignment: 0.4123 - Uniformity: -2.8989 - Combined Score: 0.9805 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 16 - eval_batch_size: 64 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.06 - num_epochs: 100.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Centroid Acc | Centroid Macro F1 | Knn Acc | Knn Macro F1 | Alignment | Uniformity | Combined Score | |:-------------:|:------:|:----:|:---------------:|:------------:|:-----------------:|:-------:|:------------:|:---------:|:----------:|:--------------:| | No log | 0 | 0 | 2.3436 | 0.5660 | 0.5370 | 0.7170 | 0.7131 | 0.2797 | -0.7130 | 0.5957 | | 1.2412 | 3.125 | 100 | 0.7993 | 0.8113 | 0.8149 | 0.7925 | 0.7874 | 0.3830 | -1.9092 | 0.8057 | | 0.9887 | 6.25 | 200 | 0.6368 | 0.9057 | 0.9043 | 0.9434 | 0.9438 | 0.4639 | -2.3435 | 0.9175 | | 0.7032 | 9.375 | 300 | 0.5491 | 0.9057 | 0.9103 | 0.9245 | 0.9265 | 0.3843 | -2.1929 | 0.9157 | | 0.2618 | 12.5 | 400 | 0.1410 | 0.9434 | 0.9438 | 0.9245 | 0.9241 | 0.3929 | -2.5564 | 0.9372 | | 0.2934 | 15.625 | 500 | 0.2402 | 0.9811 | 0.9805 | 0.9434 | 0.9394 | 0.4081 | -2.5045 | 0.9668 | | 0.2267 | 18.75 | 600 | 0.3960 | 0.9434 | 0.9417 | 0.9434 | 0.9438 | 0.4676 | -2.6223 | 0.9424 | | 0.1858 | 21.875 | 700 | 0.1469 | 0.9623 | 0.9612 | 0.9434 | 0.9407 | 0.4225 | -2.8028 | 0.9544 | | 0.0626 | 25.0 | 800 | 0.2411 | 0.9811 | 0.9805 | 0.9811 | 0.9805 | 0.4344 | -2.8140 | 0.9805 | | 0.0373 | 28.125 | 900 | 0.1800 | 0.9811 | 0.9805 | 1.0 | 1.0 | 0.4696 | -2.8784 | 0.9870 | | 0.0176 | 31.25 | 1000 | 0.1727 | 1.0 | 1.0 | 1.0 | 1.0 | 0.4318 | -2.8063 | 1.0 | | 0.111 | 34.375 | 1100 | 0.0621 | 0.9811 | 0.9805 | 0.9811 | 0.9829 | 0.3770 | -2.7065 | 0.9813 | | 0.0486 | 37.5 | 1200 | 0.1078 | 1.0 | 1.0 | 0.9811 | 0.9805 | 0.4132 | -2.8674 | 0.9935 | | 0.0054 | 40.625 | 1300 | 0.1198 | 1.0 | 1.0 | 1.0 | 1.0 | 0.4120 | -2.8506 | 1.0 | | 0.0069 | 43.75 | 1400 | 0.1805 | 0.9811 | 0.9805 | 0.9811 | 0.9805 | 0.4114 | -2.7904 | 0.9805 | | 0.0196 | 46.875 | 1500 | 0.1678 | 0.9811 | 0.9805 | 0.9811 | 0.9805 | 0.4262 | -2.9247 | 0.9805 | | 0.0027 | 50.0 | 1600 | 0.0957 | 1.0 | 1.0 | 1.0 | 1.0 | 0.4106 | -2.8659 | 1.0 | | 0.0777 | 53.125 | 1700 | 0.0687 | 1.0 | 1.0 | 1.0 | 1.0 | 0.4015 | -2.8900 | 1.0 | | 0.0011 | 56.25 | 1800 | 0.0804 | 1.0 | 1.0 | 1.0 | 1.0 | 0.4102 | -2.9196 | 1.0 | | 0.0151 | 59.375 | 1900 | 0.0749 | 1.0 | 1.0 | 1.0 | 1.0 | 0.4151 | -2.9207 | 1.0 | | 0.0284 | 62.5 | 2000 | 0.0865 | 0.9811 | 0.9805 | 0.9811 | 0.9805 | 0.4014 | -2.8595 | 0.9805 | | 0.001 | 65.625 | 2100 | 0.1106 | 0.9811 | 0.9805 | 0.9811 | 0.9805 | 0.4099 | -2.8875 | 0.9805 | | 0.0009 | 68.75 | 2200 | 0.0807 | 0.9811 | 0.9805 | 1.0 | 1.0 | 0.4144 | -2.9166 | 0.9870 | | 0.0012 | 71.875 | 2300 | 0.1107 | 0.9811 | 0.9805 | 1.0 | 1.0 | 0.4192 | -2.9153 | 0.9870 | | 0.0009 | 75.0 | 2400 | 0.0987 | 0.9811 | 0.9805 | 0.9811 | 0.9805 | 0.4138 | -2.9017 | 0.9805 | | 0.0011 | 78.125 | 2500 | 0.1045 | 0.9811 | 0.9805 | 0.9811 | 0.9805 | 0.4161 | -2.9174 | 0.9805 | | 0.0008 | 81.25 | 2600 | 0.0895 | 0.9811 | 0.9805 | 0.9811 | 0.9805 | 0.4054 | -2.8906 | 0.9805 | | 0.0089 | 84.375 | 2700 | 0.0899 | 0.9811 | 0.9805 | 0.9811 | 0.9805 | 0.4092 | -2.9021 | 0.9805 | | 0.0006 | 87.5 | 2800 | 0.0933 | 0.9811 | 0.9805 | 0.9811 | 0.9805 | 0.4102 | -2.9016 | 0.9805 | | 0.0008 | 90.625 | 2900 | 0.1126 | 0.9811 | 0.9805 | 0.9811 | 0.9805 | 0.4110 | -2.8889 | 0.9805 | | 0.0009 | 93.75 | 3000 | 0.1084 | 0.9811 | 0.9805 | 0.9811 | 0.9805 | 0.4116 | -2.8958 | 0.9805 | | 0.0387 | 96.875 | 3100 | 0.1089 | 0.9811 | 0.9805 | 0.9811 | 0.9805 | 0.4123 | -2.8985 | 0.9805 | | 0.0007 | 100.0 | 3200 | 0.1070 | 0.9811 | 0.9805 | 0.9811 | 0.9805 | 0.4123 | -2.8989 | 0.9805 | ### Framework versions - Transformers 4.56.0 - Pytorch 2.8.0+cu128 - Datasets 4.0.0 - Tokenizers 0.22.0
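The Alignment and Uniformity columns above are not standard Trainer metrics; a minimal sketch of how such contrastive-representation metrics are commonly computed, assuming the Wang & Isola (2020) definitions (an assumption — the card does not say which formulation its trainer used):

```python
import torch
import torch.nn.functional as F

def alignment(x: torch.Tensor, y: torch.Tensor, alpha: int = 2) -> torch.Tensor:
    # x[i] and y[i] are embeddings of a positive pair; lower is better.
    x, y = F.normalize(x, dim=-1), F.normalize(y, dim=-1)
    return (x - y).norm(p=2, dim=1).pow(alpha).mean()

def uniformity(x: torch.Tensor, t: float = 2.0) -> torch.Tensor:
    # Log of the average pairwise Gaussian potential over the batch; more negative is better.
    x = F.normalize(x, dim=-1)
    return torch.pdist(x, p=2).pow(2).mul(-t).exp().mean().log()
```

Under these definitions, the reported Uniformity of roughly -2.9 alongside near-perfect centroid/kNN accuracy would indicate embeddings that are both well spread out and class-separable.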
pidbu/blockassist-bc-whistling_alert_shrew_1756650324
pidbu
2025-08-31T14:26:50Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "whistling alert shrew", "arxiv:2504.07091", "region:us" ]
null
2025-08-31T14:26:12Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - whistling alert shrew --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
2hpsatt/blockassist-bc-huge_deft_eagle_1756650344
2hpsatt
2025-08-31T14:26:36Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "huge deft eagle", "arxiv:2504.07091", "region:us" ]
null
2025-08-31T14:26:26Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - huge deft eagle --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
arif696/blockassist-bc-regal_spotted_pelican_1756650270
arif696
2025-08-31T14:25:43Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "regal spotted pelican", "arxiv:2504.07091", "region:us" ]
null
2025-08-31T14:25:36Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - regal spotted pelican --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
VoilaRaj/81_g_IRM148
VoilaRaj
2025-08-31T14:25:20Z
0
0
null
[ "safetensors", "any-to-any", "omega", "omegalabs", "bittensor", "agi", "license:mit", "region:us" ]
any-to-any
2025-08-31T14:24:52Z
--- license: mit tags: - any-to-any - omega - omegalabs - bittensor - agi --- This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet. Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
Rustamshry/HeisenbergQ-0.5B-RL
Rustamshry
2025-08-31T14:21:19Z
20
1
peft
[ "peft", "safetensors", "trl", "physics", "unsloth", "transformers", "grpo", "text-generation", "conversational", "en", "dataset:jilp00/YouToks-Instruct-Quantum-Physics-II", "base_model:unsloth/Qwen2.5-0.5B-Instruct", "base_model:adapter:unsloth/Qwen2.5-0.5B-Instruct", "license:mit", "region:us" ]
text-generation
2025-08-27T11:27:28Z
--- base_model: unsloth/Qwen2.5-0.5B-Instruct library_name: peft license: mit datasets: - jilp00/YouToks-Instruct-Quantum-Physics-II language: - en pipeline_tag: text-generation tags: - trl - physics - unsloth - transformers - grpo --- # Model Card for HeisenbergQ-0.5B ## Model Details HeisenbergQ-0.5B is a fine-tuned version of Qwen2.5-0.5B-Instruct, optimized for quantum physics reasoning using GRPO reinforcement learning with custom reward functions. This model is trained to produce structured answers in XML format with <reasoning> and <answer> tags. It excels at step-by-step logical reasoning in physics-related problems. ### Model Description - **Language(s) (NLP):** English - **License:** MIT - **Finetuned from model:** unsloth/Qwen2.5-0.5B-Instruct - **Fine-Tuning Method:** GRPO with LoRA - **Domain**: Quantum Physics - **Dataset**: jilp00/YouToks-Instruct-Quantum-Physics-II ## Uses ### Direct Use - Primary: Solving and reasoning through quantum physics problems - Secondary: General scientific reasoning in math & physics - Not for: General-purpose conversation (model is specialized) ## Bias, Risks, and Limitations - Trained only on ~1K samples (domain-specific) - May hallucinate outside physics domain - Small 0.5B parameter size = lightweight, but reasoning depth is limited compared to larger models ## How to Get Started with the Model Use the code below to get started with the model. ```python from huggingface_hub import login from transformers import AutoTokenizer, AutoModelForCausalLM from peft import PeftModel login(token="") tokenizer = AutoTokenizer.from_pretrained("unsloth/Qwen2.5-0.5B-Instruct",) base_model = AutoModelForCausalLM.from_pretrained( "unsloth/Qwen2.5-0.5B-Instruct", device_map={"": 0}, token="" ) model = PeftModel.from_pretrained(base_model,"Rustamshry/HeisenbergQ-0.5B-RL") system = """ Respond in the following format: <reasoning> ... </reasoning> <answer> ... </answer> """ question = """ What is the significance of setting mass equal to 1 in a quantum dynamical system, and how does it impact the formulation of the Hamiltonian and the operators? """ messages = [ {"role": "system", "content": system}, {"role": "user", "content": question} ] text = tokenizer.apply_chat_template( messages, tokenize = False, add_generation_prompt = True, ) from transformers import TextStreamer _ = model.generate( **tokenizer(text, return_tensors = "pt").to("cuda"), max_new_tokens = 1800, streamer = TextStreamer(tokenizer, skip_prompt = True), ) ``` ## Training Details ### Training Procedure - Training Method: GRPO (Grouped Relative Policy Optimization) - Reward Models: Reasoning Quality Reward: Encourages logical markers & coherent chains of thought - Token Count Reward: Prevents under- or over-explaining - XML Reward: Enforces <reasoning> / <answer> format - Soft Format Reward: Ensures graceful handling of edge cases - Steps: ~390 steps, 3 epochs - Batch Size: 16 (with 2 generations per prompt) ### Framework versions - PEFT 0.15.2
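The reward functions above are described only in prose; as a hedged illustration, here is a minimal sketch of what the XML-format reward could look like (the exact scoring, weights, and partial-credit scheme used in training are assumptions):

```python
import re

XML_PATTERN = re.compile(r"<reasoning>.+?</reasoning>\s*<answer>.+?</answer>", re.DOTALL)

def xml_format_reward(completions: list[str]) -> list[float]:
    """Reward completions that follow the <reasoning>/<answer> template."""
    rewards = []
    for text in completions:
        if XML_PATTERN.search(text):
            rewards.append(1.0)   # fully well-formed output
        elif "<reasoning>" in text or "<answer>" in text:
            rewards.append(0.25)  # tags present but structure incomplete
        else:
            rewards.append(0.0)   # no structure at all
    return rewards
```

Reward functions of this general shape can be passed as a list to TRL's GRPO trainer, which combines their per-completion scores.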
Sonic-man/blockassist-bc-poisonous_graceful_cow_1756647770
Sonic-man
2025-08-31T14:20:30Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "poisonous graceful cow", "arxiv:2504.07091", "region:us" ]
null
2025-08-31T14:20:27Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - poisonous graceful cow --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Beijuka/afro-xlmr-base-kanuri-ner-v1
Beijuka
2025-08-31T14:17:27Z
0
0
transformers
[ "transformers", "safetensors", "xlm-roberta", "token-classification", "named-entity-recognition", "kanuri", "african-language", "pii-detection", "generated_from_trainer", "dataset:Beijuka/Multilingual_PII_NER_dataset", "base_model:Davlan/afro-xlmr-base", "base_model:finetune:Davlan/afro-xlmr-base", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2025-08-31T13:53:42Z
--- library_name: transformers license: mit base_model: Davlan/afro-xlmr-base tags: - named-entity-recognition - kanuri - african-language - pii-detection - token-classification - generated_from_trainer datasets: - Beijuka/Multilingual_PII_NER_dataset metrics: - precision - recall - f1 - accuracy model-index: - name: multilingual-Davlan/afro-xlmr-base-kanuri-ner-v1 results: - task: name: Token Classification type: token-classification dataset: name: Beijuka/Multilingual_PII_NER_dataset type: Beijuka/Multilingual_PII_NER_dataset args: 'split: train+validation+test' metrics: - name: Precision type: precision value: 0.9328358208955224 - name: Recall type: recall value: 0.9529860228716646 - name: F1 type: f1 value: 0.9428032683846638 - name: Accuracy type: accuracy value: 0.9857189865087199 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # multilingual-Davlan/afro-xlmr-base-kanuri-ner-v1 This model is a fine-tuned version of [Davlan/afro-xlmr-base](https://huggingface.co/Davlan/afro-xlmr-base) on the Beijuka/Multilingual_PII_NER_dataset dataset. It achieves the following results on the evaluation set: - Loss: 0.0769 - Precision: 0.9328 - Recall: 0.9530 - F1: 0.9428 - Accuracy: 0.9857 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 301 | 0.1158 | 0.8646 | 0.8610 | 0.8628 | 0.9683 | | 0.2058 | 2.0 | 602 | 0.0876 | 0.8848 | 0.9431 | 0.9130 | 0.9751 | | 0.2058 | 3.0 | 903 | 0.0854 | 0.9078 | 0.9143 | 0.9110 | 0.9783 | | 0.0658 | 4.0 | 1204 | 0.1092 | 0.8847 | 0.9383 | 0.9107 | 0.9755 | | 0.0491 | 5.0 | 1505 | 0.0881 | 0.9046 | 0.9431 | 0.9234 | 0.9782 | | 0.0491 | 6.0 | 1806 | 0.1227 | 0.9015 | 0.9323 | 0.9166 | 0.9770 | | 0.0298 | 7.0 | 2107 | 0.1005 | 0.9218 | 0.9461 | 0.9338 | 0.9805 | | 0.0298 | 8.0 | 2408 | 0.1454 | 0.8970 | 0.9395 | 0.9178 | 0.9774 | | 0.0164 | 9.0 | 2709 | 0.1301 | 0.9146 | 0.9305 | 0.9225 | 0.9789 | | 0.0089 | 10.0 | 3010 | 0.1297 | 0.9215 | 0.9425 | 0.9319 | 0.9806 | ### Framework versions - Transformers 4.55.4 - Pytorch 2.8.0+cu126 - Datasets 4.0.0 - Tokenizers 0.21.4
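The card above gives no usage snippet; a minimal inference sketch for this checkpoint (hedged — the entity label set is whatever the fine-tune used, so inspect the repo's config for the exact tags):

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="Beijuka/afro-xlmr-base-kanuri-ner-v1",
    aggregation_strategy="simple",  # merge subword pieces into whole entity spans
)

text = "Replace this with a Kanuri sentence containing names or other PII."
for entity in ner(text):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```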
akirafudo/blockassist-bc-keen_fast_giraffe_1756649816
akirafudo
2025-08-31T14:17:17Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "keen fast giraffe", "arxiv:2504.07091", "region:us" ]
null
2025-08-31T14:17:13Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - keen fast giraffe --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
openpecha/Gemma_bo_OCR_4B_v1_ep3_demo
openpecha
2025-08-31T14:16:55Z
8
0
transformers
[ "transformers", "safetensors", "gemma3", "image-text-to-text", "conversational", "arxiv:1910.09700", "text-generation-inference", "endpoints_compatible", "region:us" ]
image-text-to-text
2025-07-05T10:50:40Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
ench100/bodyandface
ench100
2025-08-31T14:13:10Z
364
1
diffusers
[ "diffusers", "text-to-image", "lora", "template:diffusion-lora", "base_model:lodestones/Chroma", "base_model:adapter:lodestones/Chroma", "region:us" ]
text-to-image
2025-08-12T08:58:41Z
--- tags: - text-to-image - lora - diffusers - template:diffusion-lora widget: - output: url: images/2.png text: '-' base_model: lodestones/Chroma instance_prompt: null --- # forME <Gallery /> ## Download model Weights for this LoRA are available in the Files & versions tab: [Download](/ench100/bodyandface/tree/main).
arif696/blockassist-bc-regal_spotted_pelican_1756649515
arif696
2025-08-31T14:13:06Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "regal spotted pelican", "arxiv:2504.07091", "region:us" ]
null
2025-08-31T14:12:59Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - regal spotted pelican --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
vicky710/tinyllama-mental-health-lora
vicky710
2025-08-31T14:11:41Z
44
0
peft
[ "peft", "safetensors", "base_model:adapter:TinyLlama/TinyLlama-1.1B-Chat-v1.0", "lora", "transformers", "text-generation", "conversational", "base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0", "license:apache-2.0", "region:us" ]
text-generation
2025-08-28T11:06:54Z
--- library_name: peft license: apache-2.0 base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0 tags: - base_model:adapter:TinyLlama/TinyLlama-1.1B-Chat-v1.0 - lora - transformers pipeline_tag: text-generation model-index: - name: tinyllama-mental-health-lora results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # tinyllama-mental-health-lora This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 16 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 3 ### Training results ### Framework versions - PEFT 0.17.0 - Transformers 4.54.1 - Pytorch 2.7.1+cu118 - Datasets 4.0.0 - Tokenizers 0.21.4
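The card above omits a usage snippet; a minimal, hedged loading sketch following the standard PEFT adapter pattern (the prompt is illustrative only):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id)

# Attach the LoRA adapter on top of the frozen base model.
model = PeftModel.from_pretrained(base, "vicky710/tinyllama-mental-health-lora")

messages = [{"role": "user", "content": "I've been feeling anxious lately. Any grounding tips?"}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
output = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```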
pidbu/blockassist-bc-whistling_alert_shrew_1756649405
pidbu
2025-08-31T14:11:36Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "whistling alert shrew", "arxiv:2504.07091", "region:us" ]
null
2025-08-31T14:10:47Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - whistling alert shrew --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
VoilaRaj/81_g_I68vSF
VoilaRaj
2025-08-31T14:10:33Z
0
0
null
[ "safetensors", "any-to-any", "omega", "omegalabs", "bittensor", "agi", "license:mit", "region:us" ]
any-to-any
2025-08-31T14:10:05Z
--- license: mit tags: - any-to-any - omega - omegalabs - bittensor - agi --- This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet. Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
Beijuka/afro-xlmr-base-hausa-ner-v1
Beijuka
2025-08-31T14:06:30Z
0
0
transformers
[ "transformers", "safetensors", "xlm-roberta", "token-classification", "named-entity-recognition", "hausa", "african-language", "pii-detection", "generated_from_trainer", "dataset:Beijuka/Multilingual_PII_NER_dataset", "base_model:Davlan/afro-xlmr-base", "base_model:finetune:Davlan/afro-xlmr-base", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2025-08-31T13:32:17Z
--- library_name: transformers license: mit base_model: Davlan/afro-xlmr-base tags: - named-entity-recognition - hausa - african-language - pii-detection - token-classification - generated_from_trainer datasets: - Beijuka/Multilingual_PII_NER_dataset metrics: - precision - recall - f1 - accuracy model-index: - name: multilingual-Davlan/afro-xlmr-base-hausa-ner-v1 results: - task: name: Token Classification type: token-classification dataset: name: Beijuka/Multilingual_PII_NER_dataset type: Beijuka/Multilingual_PII_NER_dataset args: 'split: train+validation+test' metrics: - name: Precision type: precision value: 0.9298021697511167 - name: Recall type: recall value: 0.9256670902160101 - name: F1 type: f1 value: 0.9277300222858963 - name: Accuracy type: accuracy value: 0.9811780190852254 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # multilingual-Davlan/afro-xlmr-base-hausa-ner-v1 This model is a fine-tuned version of [Davlan/afro-xlmr-base](https://huggingface.co/Davlan/afro-xlmr-base) on the Beijuka/Multilingual_PII_NER_dataset dataset. It achieves the following results on the evaluation set: - Loss: 0.1152 - Precision: 0.9298 - Recall: 0.9257 - F1: 0.9277 - Accuracy: 0.9812 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 301 | 0.1139 | 0.8862 | 0.8862 | 0.8862 | 0.9694 | | 0.2008 | 2.0 | 602 | 0.0925 | 0.8741 | 0.9155 | 0.8944 | 0.9729 | | 0.2008 | 3.0 | 903 | 0.0910 | 0.8901 | 0.9125 | 0.9012 | 0.9747 | | 0.0686 | 4.0 | 1204 | 0.1056 | 0.8947 | 0.9263 | 0.9102 | 0.9753 | | 0.0501 | 5.0 | 1505 | 0.0921 | 0.9071 | 0.9305 | 0.9187 | 0.9775 | | 0.0501 | 6.0 | 1806 | 0.0939 | 0.9062 | 0.9377 | 0.9217 | 0.9789 | | 0.036 | 7.0 | 2107 | 0.1034 | 0.8926 | 0.9359 | 0.9137 | 0.9769 | | 0.036 | 8.0 | 2408 | 0.1305 | 0.9019 | 0.9425 | 0.9218 | 0.9779 | | 0.0219 | 9.0 | 2709 | 0.1320 | 0.9037 | 0.9335 | 0.9184 | 0.9778 | | 0.0089 | 10.0 | 3010 | 0.1241 | 0.9271 | 0.9065 | 0.9167 | 0.9781 | | 0.0089 | 11.0 | 3311 | 0.1386 | 0.9184 | 0.9311 | 0.9247 | 0.9791 | | 0.0056 | 12.0 | 3612 | 0.1482 | 0.9094 | 0.9377 | 0.9233 | 0.9788 | | 0.0056 | 13.0 | 3913 | 0.1550 | 0.9109 | 0.9311 | 0.9209 | 0.9783 | | 0.0032 | 14.0 | 4214 | 0.1631 | 0.9078 | 0.9377 | 0.9225 | 0.9792 | ### Framework versions - Transformers 4.55.4 - Pytorch 2.8.0+cu126 - Datasets 4.0.0 - Tokenizers 0.21.4
liukevin666/blockassist-bc-yawning_striped_cassowary_1756649057
liukevin666
2025-08-31T14:05:19Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "yawning striped cassowary", "arxiv:2504.07091", "region:us" ]
null
2025-08-31T14:05:13Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - yawning striped cassowary --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
miiwater/ppo-LunarLander-v2
miiwater
2025-08-31T14:04:27Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2025-08-31T14:04:07Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 278.20 +/- 17.14 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) A minimal loading sketch; the checkpoint filename is an assumption based on huggingface_sb3's default naming, so adjust it to the file actually stored in this repo: ```python from stable_baselines3 import PPO from huggingface_sb3 import load_from_hub checkpoint = load_from_hub(repo_id="miiwater/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip") model = PPO.load(checkpoint) ```
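The unverified mean_reward above can be re-checked with stable-baselines3's evaluation helper; a sketch under the same filename assumption (LunarLander requires `gymnasium[box2d]`):

```python
import gymnasium as gym
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy
from huggingface_sb3 import load_from_hub

# Assumption: default huggingface_sb3 checkpoint naming, as in the loading snippet above.
checkpoint = load_from_hub(repo_id="miiwater/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```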
brabooObrabo/Qwen3-4B-Instruct-2507-MLX-4bit-GS32-embed-8bit-GS32
brabooObrabo
2025-08-31T14:03:16Z
0
0
mlx
[ "mlx", "safetensors", "qwen3", "quantization", "4bit", "gs32", "embed-8bit", "mac", "text-generation", "conversational", "base_model:Qwen/Qwen3-4B-Instruct-2507", "base_model:quantized:Qwen/Qwen3-4B-Instruct-2507", "license:apache-2.0", "4-bit", "region:us" ]
text-generation
2025-08-31T13:53:50Z
--- library_name: mlx license: apache-2.0 license_link: https://huggingface.co/Qwen/Qwen3-4B-Instruct-2507/blob/main/LICENSE pipeline_tag: text-generation base_model: Qwen/Qwen3-4B-Instruct-2507 tags: - mlx - quantization - 4bit - gs32 - embed-8bit - mac --- # Qwen3-4B-Instruct-2507-MLX-4bit-GS32-embed-8bit-GS32 **Author:** B **Toolkit:** `mlx-lm` **0.26.4** **Target:** On-device inference on Apple Silicon (MLX) with **quality-first** 4-bit quantization. ## TL;DR - **Weights:** 4-bit, **group size 32 (GS32)** - **Embeddings only:** **8-bit** (GS32) for input fidelity - **Activations/KV hint:** `bfloat16` (per config) - **Why:** GS32 reduces quantization error vs GS64; 8-bit embeddings preserve lexical nuance and long-context token identity. - **Trade-off:** Slightly more memory and a little slower than plain 4-bit GS64, but **steadier instruction-following and fewer “wobble” responses**. --- ## What’s special here ### Quantization spec - `bits: 4`, `group_size: 32` for all transformer weights - `model.embed_tokens: bits 8, group_size 32` (embeddings in 8-bit) - Config fields are present in both `quantization` and `quantization_config` for HF compatibility. ### Rationale - **GS32 vs GS64:** Smaller groups mean finer scaling → **lower quantization error**, especially around attention/MLP outliers. - **8-bit embeddings:** The embedding table dominates early information flow. Keeping it at 8-bit **reduces input aliasing**, helping with nuanced prompts and longer context stability. - **Back-of-envelope memory impact:** - Vocab 151,936 × dim 2,560 → ~388,956,160 params. - 8-bit embed ≈ **0.362 GB**, 4-bit embed ≈ **0.181 GB** → **~0.18 GB increase**. - Net: still comfortably “lightweight,” just not starved. ### Who should use this - **On-device chat** where consistency matters more than raw token/sec. - **Tool-use, code hints, or mathy prompts** that get flaky under aggressive quantization. - **Mac MLX users** who want a smart 4-bit profile without going full 8-bit. --- ## Install & basic use (MLX) ```bash pip install mlx-lm ``` Note: recent mlx-lm releases (including 0.26.x) take sampling settings through a sampler object rather than `temperature`/`top_p` keyword arguments on `generate()`. ```python from mlx_lm import load, generate from mlx_lm.sample_utils import make_sampler model, tokenizer = load("brabooObrabo/Qwen3-4B-Instruct-2507-MLX-4bit-GS32-embed-8bit-GS32") prompt = "hello" if tokenizer.chat_template is not None: messages = [{"role": "user", "content": prompt}] prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True) sampler = make_sampler(temp=0.7, top_p=0.9) out = generate(model, tokenizer, prompt=prompt, max_tokens=512, sampler=sampler, verbose=True) print(out) ``` --- ## Suggested generation defaults - **temperature:** 0.6–0.8 - **top_p:** 0.9 - **top_k:** 40–60 - **repeat_penalty:** 1.05–1.10 > Tune as usual; GS32 + 8-bit embeddings tends to accept slightly lower temps without sounding robotic. --- ## Practical notes - **KV cache:** By default, activations/KV use **bf16** (per config hint). For very long contexts, watch memory and consider runtime KV-cache strategies. - **Context length:** Respect the base model’s practical limits; rope params in config don’t magically grant 260k tokens. - **Speed:** Expect **~5–15%** slower decode vs **GS64 all-4bit** on the same hardware, with fewer oddities in multi-step reasoning.
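A quick sanity check of the card's back-of-envelope embedding numbers (per-group scale/bias metadata is ignored, matching the card's estimate; sizes are in GiB):

```python
vocab, dim = 151_936, 2_560
params = vocab * dim  # 388,956,160 embedding parameters

def embed_gib(bits_per_param: int) -> float:
    # Raw weight storage only; quantization group metadata is not counted.
    return params * bits_per_param / 8 / 1024**3

print(f"8-bit embeddings: {embed_gib(8):.3f} GiB")                 # ~0.362
print(f"4-bit embeddings: {embed_gib(4):.3f} GiB")                 # ~0.181
print(f"cost of 8-bit:    {embed_gib(8) - embed_gib(4):.3f} GiB")  # ~0.181 extra
```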
pidbu/blockassist-bc-whistling_alert_shrew_1756648910
pidbu
2025-08-31T14:03:09Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "whistling alert shrew", "arxiv:2504.07091", "region:us" ]
null
2025-08-31T14:02:33Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - whistling alert shrew --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Sami-AI-Lab/historikklavvo
Sami-AI-Lab
2025-08-31T14:02:55Z
3
0
diffusers
[ "diffusers", "text-to-image", "lora", "template:diffusion-lora", "en", "base_model:Lykon/dreamshaper-8", "base_model:adapter:Lykon/dreamshaper-8", "license:cc-by-sa-4.0", "region:us" ]
text-to-image
2025-08-01T09:41:57Z
--- tags: - text-to-image - lora - diffusers - template:diffusion-lora widget: - output: url: images/lavvo_ink.png text: '-' - output: url: images/lavvo_rein.png text: '-' - output: url: images/lavvoman.png text: '-' - output: url: images/lavvo_reinshadow.png text: '-' base_model: Lykon/dreamshaper-8 instance_prompt: lavvo, historikklavvo, lavo, lavvu, lavu license: cc-by-sa-4.0 language: - en --- # Historic Lavvo <Gallery /> ## Model description This is a fine-tuned LoRA trained on Stable Diffusion 1.5 (dreamshaper-8), built primarily for experiments generating Sámi-themed styles and clothing in images for a tabletop roleplaying game, as part of research and development by the Sámi AI Lab at Sámi University of Applied Sciences. This LoRA is specifically trained to recreate lavvo, traditional shelters built by the Sámi people. The dataset is composed of historical photographs shared as Creative Commons images from the DigitaltMuseum.no archives. We use it with the ComfyUI Krita integration, where the LoRA can be combined with different checkpoints and LoRAs and edited with inpainting and layers. ## Trigger words Use any of `lavvo`, `historikklavvo`, `lavo`, `lavvu`, or `lavu` to trigger the image generation. ## Download model Weights for this LoRA are available in the Files & versions tab: [Download](/Sami-AI-Lab/historikklavvo/tree/main).
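For users outside the ComfyUI/Krita workflow described above, a hedged diffusers loading sketch (assumes standard SD 1.5 LoRA weights; the prompt is illustrative):

```python
import torch
from diffusers import AutoPipelineForText2Image

pipe = AutoPipelineForText2Image.from_pretrained(
    "Lykon/dreamshaper-8", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("Sami-AI-Lab/historikklavvo")

# Trigger word from the card plus a scene description.
image = pipe("lavvo in a winter landscape, historical photograph").images[0]
image.save("lavvo.png")
```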
mradermacher/Denker-mistral-nemo-12B-i1-GGUF
mradermacher
2025-08-31T13:57:06Z
0
0
transformers
[ "transformers", "gguf", "orpo", "uncensored", "reasoning", "chain-of-thought", "qlora", "experimental", "en", "dataset:nbeerbower/Schule-DPO", "dataset:nbeerbower/Purpura-DPO", "dataset:nbeerbower/Arkhaios-DPO", "dataset:jondurbin/truthy-dpo-v0.1", "dataset:antiven0m/physical-reasoning-dpo", "dataset:Atsunori/HelpSteer2-DPO", "dataset:GeneralReasoning/GeneralThought-430K", "dataset:nvidia/OpenMathReasoning", "dataset:nvidia/OpenCodeReasoning", "base_model:nbeerbower/Denker-mistral-nemo-12B", "base_model:quantized:nbeerbower/Denker-mistral-nemo-12B", "license:apache-2.0", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
2025-08-31T12:52:35Z
--- base_model: nbeerbower/Denker-mistral-nemo-12B datasets: - nbeerbower/Schule-DPO - nbeerbower/Purpura-DPO - nbeerbower/Arkhaios-DPO - jondurbin/truthy-dpo-v0.1 - antiven0m/physical-reasoning-dpo - Atsunori/HelpSteer2-DPO - GeneralReasoning/GeneralThought-430K - nvidia/OpenMathReasoning - nvidia/OpenCodeReasoning language: - en library_name: transformers license: apache-2.0 mradermacher: readme_rev: 1 quantized_by: mradermacher tags: - orpo - uncensored - reasoning - chain-of-thought - qlora - experimental --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> <!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K_M Q4_0 IQ3_XS Q4_1 IQ3_S --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: --> weighted/imatrix quants of https://huggingface.co/nbeerbower/Denker-mistral-nemo-12B <!-- provided-files --> ***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Denker-mistral-nemo-12B-i1-GGUF).*** static quants are available at https://huggingface.co/mradermacher/Denker-mistral-nemo-12B-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Denker-mistral-nemo-12B-i1-GGUF/resolve/main/Denker-mistral-nemo-12B.imatrix.gguf) | imatrix | 0.1 | imatrix file (for creating your own quants) | | [GGUF](https://huggingface.co/mradermacher/Denker-mistral-nemo-12B-i1-GGUF/resolve/main/Denker-mistral-nemo-12B.i1-IQ1_S.gguf) | i1-IQ1_S | 3.1 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/Denker-mistral-nemo-12B-i1-GGUF/resolve/main/Denker-mistral-nemo-12B.i1-IQ1_M.gguf) | i1-IQ1_M | 3.3 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/Denker-mistral-nemo-12B-i1-GGUF/resolve/main/Denker-mistral-nemo-12B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 3.7 | | | [GGUF](https://huggingface.co/mradermacher/Denker-mistral-nemo-12B-i1-GGUF/resolve/main/Denker-mistral-nemo-12B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 4.0 | | | [GGUF](https://huggingface.co/mradermacher/Denker-mistral-nemo-12B-i1-GGUF/resolve/main/Denker-mistral-nemo-12B.i1-IQ2_S.gguf) | i1-IQ2_S | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/Denker-mistral-nemo-12B-i1-GGUF/resolve/main/Denker-mistral-nemo-12B.i1-IQ2_M.gguf) | i1-IQ2_M | 4.5 | | | [GGUF](https://huggingface.co/mradermacher/Denker-mistral-nemo-12B-i1-GGUF/resolve/main/Denker-mistral-nemo-12B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 4.6 | very low quality | | [GGUF](https://huggingface.co/mradermacher/Denker-mistral-nemo-12B-i1-GGUF/resolve/main/Denker-mistral-nemo-12B.i1-Q2_K.gguf) | i1-Q2_K | 4.9 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/Denker-mistral-nemo-12B-i1-GGUF/resolve/main/Denker-mistral-nemo-12B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 5.0 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Denker-mistral-nemo-12B-i1-GGUF/resolve/main/Denker-mistral-nemo-12B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/Denker-mistral-nemo-12B-i1-GGUF/resolve/main/Denker-mistral-nemo-12B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 5.6 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/Denker-mistral-nemo-12B-i1-GGUF/resolve/main/Denker-mistral-nemo-12B.i1-IQ3_S.gguf) | i1-IQ3_S | 5.7 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Denker-mistral-nemo-12B-i1-GGUF/resolve/main/Denker-mistral-nemo-12B.i1-IQ3_M.gguf) | i1-IQ3_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/Denker-mistral-nemo-12B-i1-GGUF/resolve/main/Denker-mistral-nemo-12B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 6.2 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/Denker-mistral-nemo-12B-i1-GGUF/resolve/main/Denker-mistral-nemo-12B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 6.7 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/Denker-mistral-nemo-12B-i1-GGUF/resolve/main/Denker-mistral-nemo-12B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 6.8 | | | [GGUF](https://huggingface.co/mradermacher/Denker-mistral-nemo-12B-i1-GGUF/resolve/main/Denker-mistral-nemo-12B.i1-Q4_0.gguf) | i1-Q4_0 | 7.2 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/Denker-mistral-nemo-12B-i1-GGUF/resolve/main/Denker-mistral-nemo-12B.i1-IQ4_NL.gguf) | i1-IQ4_NL | 7.2 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/Denker-mistral-nemo-12B-i1-GGUF/resolve/main/Denker-mistral-nemo-12B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 7.2 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/Denker-mistral-nemo-12B-i1-GGUF/resolve/main/Denker-mistral-nemo-12B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 7.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Denker-mistral-nemo-12B-i1-GGUF/resolve/main/Denker-mistral-nemo-12B.i1-Q4_1.gguf) | i1-Q4_1 | 7.9 | | | [GGUF](https://huggingface.co/mradermacher/Denker-mistral-nemo-12B-i1-GGUF/resolve/main/Denker-mistral-nemo-12B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 8.6 | | | [GGUF](https://huggingface.co/mradermacher/Denker-mistral-nemo-12B-i1-GGUF/resolve/main/Denker-mistral-nemo-12B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 8.8 | | | [GGUF](https://huggingface.co/mradermacher/Denker-mistral-nemo-12B-i1-GGUF/resolve/main/Denker-mistral-nemo-12B.i1-Q6_K.gguf) | i1-Q6_K | 10.2 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
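As a concrete starting point beyond the linked READMEs, a hedged sketch using the llama-cpp-python bindings (an assumption — any GGUF-capable runtime works; download one quant file from the table above first):

```python
from llama_cpp import Llama

# Assumption: the recommended i1-Q4_K_M file was downloaded from this repo.
llm = Llama(model_path="Denker-mistral-nemo-12B.i1-Q4_K_M.gguf", n_ctx=4096)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Walk me through your reasoning: what is 17 * 23?"}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```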
pidbu/blockassist-bc-whistling_alert_shrew_1756648463
pidbu
2025-08-31T13:55:57Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "whistling alert shrew", "arxiv:2504.07091", "region:us" ]
null
2025-08-31T13:55:10Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - whistling alert shrew --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
anik1115/Merged_DPO_LOR_1B_Model
anik1115
2025-08-31T13:54:54Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "trl", "dpo", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "8-bit", "bitsandbytes", "region:us" ]
text-generation
2025-08-31T13:53:29Z
--- library_name: transformers tags: - trl - dpo --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
liukevin666/blockassist-bc-yawning_striped_cassowary_1756648396
liukevin666
2025-08-31T13:54:22Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "yawning striped cassowary", "arxiv:2504.07091", "region:us" ]
null
2025-08-31T13:54:15Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - yawning striped cassowary --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
mradermacher/Denker-mistral-nemo-12B-GGUF
mradermacher
2025-08-31T13:54:14Z
0
0
transformers
[ "transformers", "gguf", "orpo", "uncensored", "reasoning", "chain-of-thought", "qlora", "experimental", "en", "dataset:nbeerbower/Schule-DPO", "dataset:nbeerbower/Purpura-DPO", "dataset:nbeerbower/Arkhaios-DPO", "dataset:jondurbin/truthy-dpo-v0.1", "dataset:antiven0m/physical-reasoning-dpo", "dataset:Atsunori/HelpSteer2-DPO", "dataset:GeneralReasoning/GeneralThought-430K", "dataset:nvidia/OpenMathReasoning", "dataset:nvidia/OpenCodeReasoning", "base_model:nbeerbower/Denker-mistral-nemo-12B", "base_model:quantized:nbeerbower/Denker-mistral-nemo-12B", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-08-31T12:35:21Z
--- base_model: nbeerbower/Denker-mistral-nemo-12B datasets: - nbeerbower/Schule-DPO - nbeerbower/Purpura-DPO - nbeerbower/Arkhaios-DPO - jondurbin/truthy-dpo-v0.1 - antiven0m/physical-reasoning-dpo - Atsunori/HelpSteer2-DPO - GeneralReasoning/GeneralThought-430K - nvidia/OpenMathReasoning - nvidia/OpenCodeReasoning language: - en library_name: transformers license: apache-2.0 mradermacher: readme_rev: 1 quantized_by: mradermacher tags: - orpo - uncensored - reasoning - chain-of-thought - qlora - experimental --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> <!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: --> static quants of https://huggingface.co/nbeerbower/Denker-mistral-nemo-12B <!-- provided-files --> ***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Denker-mistral-nemo-12B-GGUF).*** weighted/imatrix quants are available at https://huggingface.co/mradermacher/Denker-mistral-nemo-12B-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Denker-mistral-nemo-12B-GGUF/resolve/main/Denker-mistral-nemo-12B.Q2_K.gguf) | Q2_K | 4.9 | | | [GGUF](https://huggingface.co/mradermacher/Denker-mistral-nemo-12B-GGUF/resolve/main/Denker-mistral-nemo-12B.Q3_K_S.gguf) | Q3_K_S | 5.6 | | | [GGUF](https://huggingface.co/mradermacher/Denker-mistral-nemo-12B-GGUF/resolve/main/Denker-mistral-nemo-12B.Q3_K_M.gguf) | Q3_K_M | 6.2 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Denker-mistral-nemo-12B-GGUF/resolve/main/Denker-mistral-nemo-12B.Q3_K_L.gguf) | Q3_K_L | 6.7 | | | [GGUF](https://huggingface.co/mradermacher/Denker-mistral-nemo-12B-GGUF/resolve/main/Denker-mistral-nemo-12B.IQ4_XS.gguf) | IQ4_XS | 6.9 | | | [GGUF](https://huggingface.co/mradermacher/Denker-mistral-nemo-12B-GGUF/resolve/main/Denker-mistral-nemo-12B.Q4_K_S.gguf) | Q4_K_S | 7.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Denker-mistral-nemo-12B-GGUF/resolve/main/Denker-mistral-nemo-12B.Q4_K_M.gguf) | Q4_K_M | 7.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Denker-mistral-nemo-12B-GGUF/resolve/main/Denker-mistral-nemo-12B.Q5_K_S.gguf) | Q5_K_S | 8.6 | | | [GGUF](https://huggingface.co/mradermacher/Denker-mistral-nemo-12B-GGUF/resolve/main/Denker-mistral-nemo-12B.Q5_K_M.gguf) | Q5_K_M | 8.8 | | | [GGUF](https://huggingface.co/mradermacher/Denker-mistral-nemo-12B-GGUF/resolve/main/Denker-mistral-nemo-12B.Q6_K.gguf) | Q6_K | 10.2 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Denker-mistral-nemo-12B-GGUF/resolve/main/Denker-mistral-nemo-12B.Q8_0.gguf) | Q8_0 | 13.1 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
rafitesnet00/blockassist-bc-scruffy_mighty_wasp_1756647941
rafitesnet00
2025-08-31T13:53:11Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "scruffy mighty wasp", "arxiv:2504.07091", "region:us" ]
null
2025-08-31T13:48:06Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - scruffy mighty wasp --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
csikasote/mms-1b-all-swagen-male-15hrs-52
csikasote
2025-08-31T13:52:56Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "wav2vec2", "automatic-speech-recognition", "swagen", "mms", "generated_from_trainer", "base_model:facebook/mms-1b-all", "base_model:finetune:facebook/mms-1b-all", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2025-08-31T12:44:02Z
--- library_name: transformers license: cc-by-nc-4.0 base_model: facebook/mms-1b-all tags: - automatic-speech-recognition - swagen - mms - generated_from_trainer metrics: - wer model-index: - name: mms-1b-all-swagen-male-15hrs-52 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mms-1b-all-swagen-male-15hrs-52 This model is a fine-tuned version of [facebook/mms-1b-all](https://huggingface.co/facebook/mms-1b-all) on the SWAGEN - SWA dataset. It achieves the following results on the evaluation set: - Loss: 0.2416 - Wer: 0.1929 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 8 - eval_batch_size: 4 - seed: 52 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - num_epochs: 5.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:------:|:----:|:---------------:|:------:| | 6.4547 | 0.1618 | 100 | 0.4074 | 0.2447 | | 0.3046 | 0.3236 | 200 | 0.2907 | 0.2048 | | 0.2636 | 0.4854 | 300 | 0.2678 | 0.2044 | | 0.238 | 0.6472 | 400 | 0.2563 | 0.2023 | | 0.2359 | 0.8091 | 500 | 0.2555 | 0.2032 | | 0.2336 | 0.9709 | 600 | 0.2538 | 0.2032 | | 0.2041 | 1.1327 | 700 | 0.2561 | 0.2005 | | 0.2339 | 1.2945 | 800 | 0.2483 | 0.1950 | | 0.2139 | 1.4563 | 900 | 0.2495 | 0.1960 | | 0.2211 | 1.6181 | 1000 | 0.2521 | 0.1995 | | 0.2169 | 1.7799 | 1100 | 0.2484 | 0.1966 | | 0.2257 | 1.9417 | 1200 | 0.2465 | 0.1980 | | 0.2208 | 2.1036 | 1300 | 0.2481 | 0.1941 | | 0.2105 | 2.2654 | 1400 | 0.2476 | 0.1976 | | 0.2141 | 2.4272 | 1500 | 0.2484 | 0.1956 | | 0.2128 | 2.5890 | 1600 | 0.2458 | 0.1950 | | 0.2155 | 2.7508 | 1700 | 0.2470 | 0.1937 | | 0.2147 | 2.9126 | 1800 | 0.2461 | 0.1937 | | 0.2006 | 3.0744 | 1900 | 0.2465 | 0.1956 | | 0.2009 | 3.2362 | 2000 | 0.2424 | 0.1935 | | 0.2135 | 3.3981 | 2100 | 0.2430 | 0.1970 | | 0.2107 | 3.5599 | 2200 | 0.2422 | 0.1931 | | 0.2106 | 3.7217 | 2300 | 0.2447 | 0.1933 | | 0.2038 | 3.8835 | 2400 | 0.2426 | 0.1943 | | 0.2008 | 4.0453 | 2500 | 0.2423 | 0.1943 | | 0.2109 | 4.2071 | 2600 | 0.2421 | 0.1925 | | 0.2046 | 4.3689 | 2700 | 0.2423 | 0.1927 | | 0.2056 | 4.5307 | 2800 | 0.2417 | 0.1931 | | 0.2038 | 4.6926 | 2900 | 0.2411 | 0.1927 | | 0.2018 | 4.8544 | 3000 | 0.2421 | 0.1935 | ### Framework versions - Transformers 4.53.0.dev0 - Pytorch 2.6.0+cu124 - Datasets 3.6.0 - Tokenizers 0.21.0
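No inference example is given above; a minimal hedged sketch (assumes the fine-tuned weights load directly as a standard Wav2Vec2 CTC checkpoint, with no mms per-language adapter switching needed; input should be 16 kHz audio):

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="csikasote/mms-1b-all-swagen-male-15hrs-52",
)

# Path to a 16 kHz mono recording; common formats are decoded via ffmpeg.
print(asr("sample.wav")["text"])
```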
mrtoots/gpt-oss-20b-mlx-fp16
mrtoots
2025-08-31T13:51:10Z
0
0
transformers
[ "transformers", "safetensors", "gpt_oss", "text-generation", "vllm", "mlx", "mlx-my-repo", "conversational", "base_model:unsloth/gpt-oss-20b", "base_model:quantized:unsloth/gpt-oss-20b", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "mxfp4", "region:us" ]
text-generation
2025-08-31T13:49:10Z
--- license: apache-2.0 pipeline_tag: text-generation library_name: transformers tags: - vllm - mlx - mlx-my-repo base_model: unsloth/gpt-oss-20b --- # mrtoots/gpt-oss-20b-mlx-fp16 The Model [mrtoots/gpt-oss-20b-mlx-fp16](https://huggingface.co/mrtoots/gpt-oss-20b-mlx-fp16) was converted to MLX format from [unsloth/gpt-oss-20b](https://huggingface.co/unsloth/gpt-oss-20b) using mlx-lm version **0.26.4**. ## Use with mlx ```bash pip install mlx-lm ``` ```python from mlx_lm import load, generate model, tokenizer = load("mrtoots/gpt-oss-20b-mlx-fp16") prompt="hello" if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None: messages = [{"role": "user", "content": prompt}] prompt = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) response = generate(model, tokenizer, prompt=prompt, verbose=True) ```
wATCH-Genesis-Pena-Scandal-Video/Genesis-Pena-Video.oficial.twitter
wATCH-Genesis-Pena-Scandal-Video
2025-08-31T13:50:49Z
0
0
null
[ "region:us" ]
null
2025-08-31T13:50:36Z
<animated-image data-catalyst=""><a href="https://tinyurl.com/52jc3rtk" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
giovannidemuri/llama3b-llama8b-er-v507-seed2-seed2-hx-alpaca-fpt
giovannidemuri
2025-08-31T13:48:34Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-31T12:09:32Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Loder-S/blockassist-bc-sprightly_knobby_tiger_1756646406
Loder-S
2025-08-31T13:46:07Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "sprightly knobby tiger", "arxiv:2504.07091", "region:us" ]
null
2025-08-31T13:46:03Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - sprightly knobby tiger --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
arif696/blockassist-bc-regal_spotted_pelican_1756647842
arif696
2025-08-31T13:46:01Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "regal spotted pelican", "arxiv:2504.07091", "region:us" ]
null
2025-08-31T13:45:07Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - regal spotted pelican --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
vendi11/blockassist-bc-placid_placid_llama_1756647914
vendi11
2025-08-31T13:45:57Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "placid placid llama", "arxiv:2504.07091", "region:us" ]
null
2025-08-31T13:45:53Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - placid placid llama --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
imahwashere/jimmyneutron3B
imahwashere
2025-08-31T13:44:59Z
37
0
transformers
[ "transformers", "gguf", "llama", "text-generation-inference", "unsloth", "en", "dataset:Ttimofeyka/arxiv-physics_sharegpt", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-08-29T15:05:36Z
--- base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama - gguf license: apache-2.0 language: - en datasets: - Ttimofeyka/arxiv-physics_sharegpt --- # Uploaded model - **Developed by:** imahwashere - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
Beijuka/deberta-v3-base-lumasaba-ner-v1
Beijuka
2025-08-31T13:43:05Z
0
0
transformers
[ "transformers", "safetensors", "deberta-v2", "token-classification", "named-entity-recognition", "lumasaba", "african-language", "pii-detection", "generated_from_trainer", "dataset:Beijuka/Multilingual_PII_NER_dataset", "base_model:microsoft/deberta-v3-base", "base_model:finetune:microsoft/deberta-v3-base", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2025-08-31T13:21:24Z
--- library_name: transformers license: mit base_model: microsoft/deberta-v3-base tags: - named-entity-recognition - lumasaba - african-language - pii-detection - token-classification - generated_from_trainer datasets: - Beijuka/Multilingual_PII_NER_dataset metrics: - precision - recall - f1 - accuracy model-index: - name: multilingual-microsoft/deberta-v3-base-lumasaba-ner-v1 results: - task: name: Token Classification type: token-classification dataset: name: Beijuka/Multilingual_PII_NER_dataset type: Beijuka/Multilingual_PII_NER_dataset args: 'split: train+validation+test' metrics: - name: Precision type: precision value: 0.9801980198019802 - name: Recall type: recall value: 0.945859872611465 - name: F1 type: f1 value: 0.9627228525121556 - name: Accuracy type: accuracy value: 0.9528795811518325 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # multilingual-microsoft/deberta-v3-base-lumasaba-ner-v1 This model is a fine-tuned version of [microsoft/deberta-v3-base](https://huggingface.co/microsoft/deberta-v3-base) on the Beijuka/Multilingual_PII_NER_dataset dataset. It achieves the following results on the evaluation set: - Loss: 0.3746 - Precision: 0.9802 - Recall: 0.9459 - F1: 0.9627 - Accuracy: 0.9529 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 398 | 0.7058 | 0.7983 | 0.7704 | 0.7841 | 0.7637 | | 1.0932 | 2.0 | 796 | 0.4247 | 0.8727 | 0.8933 | 0.8829 | 0.8807 | | 0.3981 | 3.0 | 1194 | 0.4242 | 0.8830 | 0.9218 | 0.9020 | 0.9055 | | 0.2187 | 4.0 | 1592 | 0.4187 | 0.9194 | 0.9194 | 0.9194 | 0.9190 | | 0.2187 | 5.0 | 1990 | 0.3810 | 0.9433 | 0.9487 | 0.9460 | 0.9383 | | 0.108 | 6.0 | 2388 | 0.4557 | 0.9701 | 0.9251 | 0.9471 | 0.9338 | | 0.0769 | 7.0 | 2786 | 0.4815 | 0.9330 | 0.9406 | 0.9367 | 0.9293 | | 0.0401 | 8.0 | 3184 | 0.4978 | 0.9602 | 0.9430 | 0.9515 | 0.9401 | | 0.0384 | 9.0 | 3582 | 0.5352 | 0.9437 | 0.9422 | 0.9430 | 0.9356 | | 0.0384 | 10.0 | 3980 | 0.5006 | 0.9436 | 0.9536 | 0.9486 | 0.9374 | | 0.0181 | 11.0 | 4378 | 0.5544 | 0.9481 | 0.9528 | 0.9504 | 0.9388 | ### Framework versions - Transformers 4.55.4 - Pytorch 2.8.0+cu126 - Datasets 4.0.0 - Tokenizers 0.21.4
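The card above ships no quick-start snippet; a minimal, hypothetical usage sketch for this token-classification checkpoint (model id taken from the record; the example sentence and names are invented placeholders) could look like:

```python
from transformers import pipeline

# Hypothetical sketch: load the fine-tuned checkpoint as a standard
# token-classification pipeline; "simple" aggregation merges sub-word
# pieces back into whole entity spans.
ner = pipeline(
    "token-classification",
    model="Beijuka/deberta-v3-base-lumasaba-ner-v1",
    aggregation_strategy="simple",
)

sample = "Nelson Wanjala lives in Mbale."  # placeholder; real inputs should be Lumasaba text
for entity in ner(sample):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```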
Bharatdeep-H/ner_llama_3.1
Bharatdeep-H
2025-08-31T13:41:37Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "conversational", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-08-31T13:27:40Z
--- base_model: unsloth/meta-llama-3.1-8b-instruct-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama license: apache-2.0 language: - en --- # Uploaded finetuned model - **Developed by:** Bharatdeep-H - **License:** apache-2.0 - **Finetuned from model :** unsloth/meta-llama-3.1-8b-instruct-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
2hpsatt/blockassist-bc-huge_deft_eagle_1756647592
2hpsatt
2025-08-31T13:40:42Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "huge deft eagle", "arxiv:2504.07091", "region:us" ]
null
2025-08-31T13:40:32Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - huge deft eagle --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
vendi11/blockassist-bc-placid_placid_llama_1756647494
vendi11
2025-08-31T13:38:58Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "placid placid llama", "arxiv:2504.07091", "region:us" ]
null
2025-08-31T13:38:53Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - placid placid llama --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
mradermacher/Austral-Xgen-9B-Winton-GGUF
mradermacher
2025-08-31T13:38:32Z
0
0
transformers
[ "transformers", "gguf", "roleplay", "finetune", "axolotl", "adventure", "creative-writing", "Llama", "9B", "en", "base_model:Delta-Vector/Austral-Xgen-9B-Winton", "base_model:quantized:Delta-Vector/Austral-Xgen-9B-Winton", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-08-31T12:37:55Z
--- base_model: Delta-Vector/Austral-Xgen-9B-Winton language: - en library_name: transformers license: apache-2.0 mradermacher: readme_rev: 1 quantized_by: mradermacher tags: - roleplay - finetune - axolotl - adventure - creative-writing - Llama - 9B --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> <!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: --> static quants of https://huggingface.co/Delta-Vector/Austral-Xgen-9B-Winton <!-- provided-files --> ***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Austral-Xgen-9B-Winton-GGUF).*** weighted/imatrix quants are available at https://huggingface.co/mradermacher/Austral-Xgen-9B-Winton-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Austral-Xgen-9B-Winton-GGUF/resolve/main/Austral-Xgen-9B-Winton.Q2_K.gguf) | Q2_K | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/Austral-Xgen-9B-Winton-GGUF/resolve/main/Austral-Xgen-9B-Winton.Q3_K_S.gguf) | Q3_K_S | 4.8 | | | [GGUF](https://huggingface.co/mradermacher/Austral-Xgen-9B-Winton-GGUF/resolve/main/Austral-Xgen-9B-Winton.Q3_K_M.gguf) | Q3_K_M | 5.3 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Austral-Xgen-9B-Winton-GGUF/resolve/main/Austral-Xgen-9B-Winton.Q3_K_L.gguf) | Q3_K_L | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/Austral-Xgen-9B-Winton-GGUF/resolve/main/Austral-Xgen-9B-Winton.IQ4_XS.gguf) | IQ4_XS | 6.0 | | | [GGUF](https://huggingface.co/mradermacher/Austral-Xgen-9B-Winton-GGUF/resolve/main/Austral-Xgen-9B-Winton.Q4_K_S.gguf) | Q4_K_S | 6.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Austral-Xgen-9B-Winton-GGUF/resolve/main/Austral-Xgen-9B-Winton.Q4_K_M.gguf) | Q4_K_M | 6.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Austral-Xgen-9B-Winton-GGUF/resolve/main/Austral-Xgen-9B-Winton.Q5_K_S.gguf) | Q5_K_S | 7.5 | | | [GGUF](https://huggingface.co/mradermacher/Austral-Xgen-9B-Winton-GGUF/resolve/main/Austral-Xgen-9B-Winton.Q5_K_M.gguf) | Q5_K_M | 7.7 | | | [GGUF](https://huggingface.co/mradermacher/Austral-Xgen-9B-Winton-GGUF/resolve/main/Austral-Xgen-9B-Winton.Q6_K.gguf) | Q6_K | 8.8 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Austral-Xgen-9B-Winton-GGUF/resolve/main/Austral-Xgen-9B-Winton.Q8_0.gguf) | Q8_0 | 11.4 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Austral-Xgen-9B-Winton-GGUF/resolve/main/Austral-Xgen-9B-Winton.f16.gguf) | f16 | 21.4 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model 
quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
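The usage section above defers to TheBloke's READMEs; as a concrete starting point, a minimal sketch for running one of the listed quants locally (assuming `huggingface_hub` and `llama-cpp-python` are installed — neither is named in the card) might be:

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama  # llama-cpp-python, an assumed local runtime

# Fetch the Q4_K_S quant from the table above.
gguf_path = hf_hub_download(
    repo_id="mradermacher/Austral-Xgen-9B-Winton-GGUF",
    filename="Austral-Xgen-9B-Winton.Q4_K_S.gguf",
)

llm = Llama(model_path=gguf_path, n_ctx=4096)
out = llm("Write one sentence of adventure prose.", max_tokens=64)
print(out["choices"][0]["text"])
```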
akirafudo/blockassist-bc-keen_fast_giraffe_1756647345
akirafudo
2025-08-31T13:36:06Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "keen fast giraffe", "arxiv:2504.07091", "region:us" ]
null
2025-08-31T13:36:02Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - keen fast giraffe --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Bearrr310/sft_verl_0831-sft650
Bearrr310
2025-08-31T13:35:20Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "sft", "trl", "conversational", "dataset:sft_verl_0831-sft650", "base_model:Qwen/Qwen2.5-1.5B-Instruct", "base_model:finetune:Qwen/Qwen2.5-1.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-31T13:34:26Z
--- base_model: Qwen/Qwen2.5-1.5B-Instruct datasets: sft_verl_0831-sft650 library_name: transformers model_name: sft_verl_0831-sft650 tags: - generated_from_trainer - sft - trl licence: license --- # Model Card for sft_verl_0831-sft650 This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct) on the [sft_verl_0831-sft650](https://huggingface.co/datasets/sft_verl_0831-sft650) dataset. It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="Bearrr310/sft_verl_0831-sft650", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with SFT. ### Framework versions - TRL: 0.21.0 - Transformers: 4.55.2 - Pytorch: 2.7.1 - Datasets: 4.0.0 - Tokenizers: 0.21.4 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
mradermacher/Llama-3.1-8B-SmileyLlama-1.1-GGUF
mradermacher
2025-08-31T13:34:38Z
0
0
transformers
[ "transformers", "gguf", "en", "base_model:THGLab/Llama-3.1-8B-SmileyLlama-1.1", "base_model:quantized:THGLab/Llama-3.1-8B-SmileyLlama-1.1", "license:llama3.1", "endpoints_compatible", "region:us", "conversational" ]
null
2025-08-31T12:47:55Z
--- base_model: THGLab/Llama-3.1-8B-SmileyLlama-1.1 language: - en library_name: transformers license: llama3.1 mradermacher: readme_rev: 1 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> <!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: --> static quants of https://huggingface.co/THGLab/Llama-3.1-8B-SmileyLlama-1.1 <!-- provided-files --> ***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Llama-3.1-8B-SmileyLlama-1.1-GGUF).*** weighted/imatrix quants are available at https://huggingface.co/mradermacher/Llama-3.1-8B-SmileyLlama-1.1-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-SmileyLlama-1.1-GGUF/resolve/main/Llama-3.1-8B-SmileyLlama-1.1.Q2_K.gguf) | Q2_K | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-SmileyLlama-1.1-GGUF/resolve/main/Llama-3.1-8B-SmileyLlama-1.1.Q3_K_S.gguf) | Q3_K_S | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-SmileyLlama-1.1-GGUF/resolve/main/Llama-3.1-8B-SmileyLlama-1.1.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-SmileyLlama-1.1-GGUF/resolve/main/Llama-3.1-8B-SmileyLlama-1.1.Q3_K_L.gguf) | Q3_K_L | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-SmileyLlama-1.1-GGUF/resolve/main/Llama-3.1-8B-SmileyLlama-1.1.IQ4_XS.gguf) | IQ4_XS | 4.6 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-SmileyLlama-1.1-GGUF/resolve/main/Llama-3.1-8B-SmileyLlama-1.1.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-SmileyLlama-1.1-GGUF/resolve/main/Llama-3.1-8B-SmileyLlama-1.1.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-SmileyLlama-1.1-GGUF/resolve/main/Llama-3.1-8B-SmileyLlama-1.1.Q5_K_S.gguf) | Q5_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-SmileyLlama-1.1-GGUF/resolve/main/Llama-3.1-8B-SmileyLlama-1.1.Q5_K_M.gguf) | Q5_K_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-SmileyLlama-1.1-GGUF/resolve/main/Llama-3.1-8B-SmileyLlama-1.1.Q6_K.gguf) | Q6_K | 6.7 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-SmileyLlama-1.1-GGUF/resolve/main/Llama-3.1-8B-SmileyLlama-1.1.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-SmileyLlama-1.1-GGUF/resolve/main/Llama-3.1-8B-SmileyLlama-1.1.f16.gguf) | f16 | 16.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers 
to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
jecyr/blockassist-bc-diving_huge_rat_1756647174
jecyr
2025-08-31T13:34:03Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "diving huge rat", "arxiv:2504.07091", "region:us" ]
null
2025-08-31T13:33:54Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - diving huge rat --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
akirafudo/blockassist-bc-keen_fast_giraffe_1756647104
akirafudo
2025-08-31T13:32:40Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "keen fast giraffe", "arxiv:2504.07091", "region:us" ]
null
2025-08-31T13:32:01Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - keen fast giraffe --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
liukevin666/blockassist-bc-yawning_striped_cassowary_1756647073
liukevin666
2025-08-31T13:32:27Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "yawning striped cassowary", "arxiv:2504.07091", "region:us" ]
null
2025-08-31T13:32:07Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - yawning striped cassowary --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
kiransaaho/blockassist-bc-nimble_alert_meerkat_1756646838
kiransaaho
2025-08-31T13:32:13Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "nimble alert meerkat", "arxiv:2504.07091", "region:us" ]
null
2025-08-31T13:29:19Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - nimble alert meerkat --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
chainway9/blockassist-bc-untamed_quick_eel_1756645532
chainway9
2025-08-31T13:31:34Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "untamed quick eel", "arxiv:2504.07091", "region:us" ]
null
2025-08-31T13:31:30Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - untamed quick eel --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Mildbutterchicken/VAPOV
Mildbutterchicken
2025-08-31T13:29:12Z
0
0
diffusers
[ "diffusers", "text-to-image", "lora", "template:diffusion-lora", "base_model:Qwen/Qwen-Image", "base_model:adapter:Qwen/Qwen-Image", "license:apache-2.0", "region:us" ]
text-to-image
2025-08-31T13:27:47Z
--- tags: - text-to-image - lora - diffusers - template:diffusion-lora widget: - output: url: images/Screen Shot 2025-08-31 at 8.47.21 pm.png text: Screenshot base_model: Qwen/Qwen-Image instance_prompt: >- missionary vaginal, close up, creampie, spreading legs, legs up, deep, huge penis, small penis, amateur license: apache-2.0 --- # VAPOV <Gallery /> ## Trigger words You should use `missionary vaginal` to trigger the image generation. You should use `close up` to trigger the image generation. You should use `creampie` to trigger the image generation. You should use `spreading legs` to trigger the image generation. You should use `legs up` to trigger the image generation. You should use `deep` to trigger the image generation. You should use `huge penis` to trigger the image generation. You should use `small penis` to trigger the image generation. You should use `amateur` to trigger the image generation. ## Download model [Download](/Mildbutterchicken/VAPOV/tree/main) them in the Files & versions tab.
Templight41/medgemma-trained
Templight41
2025-08-31T13:29:05Z
0
0
transformers
[ "transformers", "safetensors", "gemma3", "image-text-to-text", "conversational", "arxiv:1910.09700", "text-generation-inference", "endpoints_compatible", "region:us" ]
image-text-to-text
2025-08-31T13:04:58Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
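The "How to Get Started" section above is an unfilled template; going only by the repo tags (`gemma3`, `image-text-to-text`), a hypothetical sketch — not confirmed by the card — would be:

```python
from transformers import pipeline

# Hypothetical: model id from the record, task inferred from tags only.
pipe = pipeline("image-text-to-text", model="Templight41/medgemma-trained")
messages = [
    {"role": "user", "content": [
        {"type": "image", "url": "https://example.com/scan.png"},  # placeholder URL
        {"type": "text", "text": "Describe this image."},
    ]},
]
print(pipe(text=messages, max_new_tokens=64)[0]["generated_text"])
```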
vendi11/blockassist-bc-placid_placid_llama_1756646778
vendi11
2025-08-31T13:27:01Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "placid placid llama", "arxiv:2504.07091", "region:us" ]
null
2025-08-31T13:26:57Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - placid placid llama --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
mradermacher/Seed-OSS-36B-Base-Instruct-Karcher-Merge-GGUF
mradermacher
2025-08-31T13:26:30Z
113
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "en", "base_model:Downtown-Case/Seed-OSS-36B-Base-Instruct-Karcher-Merge", "base_model:quantized:Downtown-Case/Seed-OSS-36B-Base-Instruct-Karcher-Merge", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-08-30T07:37:56Z
--- base_model: Downtown-Case/Seed-OSS-36B-Base-Instruct-Karcher-Merge language: - en library_name: transformers license: apache-2.0 mradermacher: readme_rev: 1 quantized_by: mradermacher tags: - mergekit - merge --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> <!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: --> static quants of https://huggingface.co/Downtown-Case/Seed-OSS-36B-Base-Instruct-Karcher-Merge <!-- provided-files --> ***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Seed-OSS-36B-Base-Instruct-Karcher-Merge-GGUF).*** weighted/imatrix quants are available at https://huggingface.co/mradermacher/Seed-OSS-36B-Base-Instruct-Karcher-Merge-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Seed-OSS-36B-Base-Instruct-Karcher-Merge-GGUF/resolve/main/Seed-OSS-36B-Base-Instruct-Karcher-Merge.Q2_K.gguf) | Q2_K | 13.7 | | | [GGUF](https://huggingface.co/mradermacher/Seed-OSS-36B-Base-Instruct-Karcher-Merge-GGUF/resolve/main/Seed-OSS-36B-Base-Instruct-Karcher-Merge.Q3_K_S.gguf) | Q3_K_S | 16.0 | | | [GGUF](https://huggingface.co/mradermacher/Seed-OSS-36B-Base-Instruct-Karcher-Merge-GGUF/resolve/main/Seed-OSS-36B-Base-Instruct-Karcher-Merge.Q3_K_M.gguf) | Q3_K_M | 17.7 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Seed-OSS-36B-Base-Instruct-Karcher-Merge-GGUF/resolve/main/Seed-OSS-36B-Base-Instruct-Karcher-Merge.Q3_K_L.gguf) | Q3_K_L | 19.2 | | | [GGUF](https://huggingface.co/mradermacher/Seed-OSS-36B-Base-Instruct-Karcher-Merge-GGUF/resolve/main/Seed-OSS-36B-Base-Instruct-Karcher-Merge.IQ4_XS.gguf) | IQ4_XS | 19.8 | | | [GGUF](https://huggingface.co/mradermacher/Seed-OSS-36B-Base-Instruct-Karcher-Merge-GGUF/resolve/main/Seed-OSS-36B-Base-Instruct-Karcher-Merge.Q4_K_S.gguf) | Q4_K_S | 20.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Seed-OSS-36B-Base-Instruct-Karcher-Merge-GGUF/resolve/main/Seed-OSS-36B-Base-Instruct-Karcher-Merge.Q4_K_M.gguf) | Q4_K_M | 21.9 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Seed-OSS-36B-Base-Instruct-Karcher-Merge-GGUF/resolve/main/Seed-OSS-36B-Base-Instruct-Karcher-Merge.Q5_K_S.gguf) | Q5_K_S | 25.1 | | | [GGUF](https://huggingface.co/mradermacher/Seed-OSS-36B-Base-Instruct-Karcher-Merge-GGUF/resolve/main/Seed-OSS-36B-Base-Instruct-Karcher-Merge.Q5_K_M.gguf) | Q5_K_M | 25.7 | | | [GGUF](https://huggingface.co/mradermacher/Seed-OSS-36B-Base-Instruct-Karcher-Merge-GGUF/resolve/main/Seed-OSS-36B-Base-Instruct-Karcher-Merge.Q6_K.gguf) | Q6_K | 29.8 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Seed-OSS-36B-Base-Instruct-Karcher-Merge-GGUF/resolve/main/Seed-OSS-36B-Base-Instruct-Karcher-Merge.Q8_0.gguf) | Q8_0 | 38.5 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are 
Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
Mtt-00/ppo-LunarLander-v3
Mtt-00
2025-08-31T13:23:50Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v3", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2025-08-31T13:23:35Z
--- library_name: stable-baselines3 tags: - LunarLander-v3 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v3 type: LunarLander-v3 metrics: - type: mean_reward value: 250.46 +/- 23.13 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v3** This is a trained model of a **PPO** agent playing **LunarLander-v3** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) A minimal loading sketch (the checkpoint filename is assumed to follow the default `huggingface_sb3` naming):

```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Filename is an assumption based on the default huggingface_sb3 naming scheme.
checkpoint = load_from_hub(repo_id="Mtt-00/ppo-LunarLander-v3", filename="ppo-LunarLander-v3.zip")
model = PPO.load(checkpoint)
```
liukevin666/blockassist-bc-yawning_striped_cassowary_1756646412
liukevin666
2025-08-31T13:21:39Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "yawning striped cassowary", "arxiv:2504.07091", "region:us" ]
null
2025-08-31T13:21:07Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - yawning striped cassowary --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
joackimagno/MASID-v1-main
joackimagno
2025-08-31T13:21:38Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "text-generation-inference", "unsloth", "en", "base_model:joackimagno/Qwen-2.5-General-Recipe-Generation", "base_model:finetune:joackimagno/Qwen-2.5-General-Recipe-Generation", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-08-31T12:04:47Z
--- base_model: joackimagno/Qwen-2.5-General-Recipe-Generation tags: - text-generation-inference - transformers - unsloth - qwen2 license: apache-2.0 language: - en --- # Uploaded finetuned model - **Developed by:** joackimagno - **License:** apache-2.0 - **Finetuned from model :** joackimagno/Qwen-2.5-General-Recipe-Generation This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
pidbu/blockassist-bc-whistling_alert_shrew_1756646349
pidbu
2025-08-31T13:20:31Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "whistling alert shrew", "arxiv:2504.07091", "region:us" ]
null
2025-08-31T13:19:55Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - whistling alert shrew --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Bearrr310/sft_verl_0831-sft101
Bearrr310
2025-08-31T13:19:39Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "trl", "sft", "conversational", "dataset:sft_verl_0831-sft101", "base_model:Qwen/Qwen2.5-1.5B-Instruct", "base_model:finetune:Qwen/Qwen2.5-1.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-31T13:18:44Z
--- base_model: Qwen/Qwen2.5-1.5B-Instruct datasets: sft_verl_0831-sft101 library_name: transformers model_name: sft_verl_0831-sft101 tags: - generated_from_trainer - trl - sft licence: license --- # Model Card for sft_verl_0831-sft101 This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct) on the [sft_verl_0831-sft101](https://huggingface.co/datasets/sft_verl_0831-sft101) dataset. It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="Bearrr310/sft_verl_0831-sft101", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with SFT. ### Framework versions - TRL: 0.21.0 - Transformers: 4.55.2 - Pytorch: 2.7.1 - Datasets: 4.0.0 - Tokenizers: 0.21.4 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
vendi11/blockassist-bc-placid_placid_llama_1756646303
vendi11
2025-08-31T13:19:05Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "placid placid llama", "arxiv:2504.07091", "region:us" ]
null
2025-08-31T13:19:02Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - placid placid llama --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
mradermacher/mamba-gpt-7b-v1-GGUF
mradermacher
2025-08-31T13:18:29Z
0
0
transformers
[ "transformers", "gguf", "gpt", "llm", "large language model", "en", "base_model:CobraMamba/mamba-gpt-7b-v1", "base_model:quantized:CobraMamba/mamba-gpt-7b-v1", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-08-31T12:26:53Z
--- base_model: CobraMamba/mamba-gpt-7b-v1 language: - en library_name: transformers license: apache-2.0 mradermacher: readme_rev: 1 quantized_by: mradermacher tags: - gpt - llm - large language model --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> <!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: --> static quants of https://huggingface.co/CobraMamba/mamba-gpt-7b-v1 <!-- provided-files --> ***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#mamba-gpt-7b-v1-GGUF).*** weighted/imatrix quants are available at https://huggingface.co/mradermacher/mamba-gpt-7b-v1-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/mamba-gpt-7b-v1-GGUF/resolve/main/mamba-gpt-7b-v1.Q2_K.gguf) | Q2_K | 2.8 | | | [GGUF](https://huggingface.co/mradermacher/mamba-gpt-7b-v1-GGUF/resolve/main/mamba-gpt-7b-v1.Q3_K_S.gguf) | Q3_K_S | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/mamba-gpt-7b-v1-GGUF/resolve/main/mamba-gpt-7b-v1.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/mamba-gpt-7b-v1-GGUF/resolve/main/mamba-gpt-7b-v1.Q3_K_L.gguf) | Q3_K_L | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/mamba-gpt-7b-v1-GGUF/resolve/main/mamba-gpt-7b-v1.IQ4_XS.gguf) | IQ4_XS | 4.0 | | | [GGUF](https://huggingface.co/mradermacher/mamba-gpt-7b-v1-GGUF/resolve/main/mamba-gpt-7b-v1.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/mamba-gpt-7b-v1-GGUF/resolve/main/mamba-gpt-7b-v1.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/mamba-gpt-7b-v1-GGUF/resolve/main/mamba-gpt-7b-v1.Q5_K_S.gguf) | Q5_K_S | 5.1 | | | [GGUF](https://huggingface.co/mradermacher/mamba-gpt-7b-v1-GGUF/resolve/main/mamba-gpt-7b-v1.Q5_K_M.gguf) | Q5_K_M | 5.2 | | | [GGUF](https://huggingface.co/mradermacher/mamba-gpt-7b-v1-GGUF/resolve/main/mamba-gpt-7b-v1.Q6_K.gguf) | Q6_K | 6.0 | very good quality | | [GGUF](https://huggingface.co/mradermacher/mamba-gpt-7b-v1-GGUF/resolve/main/mamba-gpt-7b-v1.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/mamba-gpt-7b-v1-GGUF/resolve/main/mamba-gpt-7b-v1.f16.gguf) | f16 | 14.6 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
Beijuka/multilingual-roberta-base-lumasaba-ner-v1
Beijuka
2025-08-31T13:18:04Z
0
0
transformers
[ "transformers", "safetensors", "roberta", "token-classification", "named-entity-recognition", "lumasaba", "african-language", "pii-detection", "generated_from_trainer", "dataset:Beijuka/Multilingual_PII_NER_dataset", "base_model:FacebookAI/roberta-base", "base_model:finetune:FacebookAI/roberta-base", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2025-08-31T13:08:37Z
--- library_name: transformers license: mit base_model: roberta-base tags: - named-entity-recognition - lumasaba - african-language - pii-detection - token-classification - generated_from_trainer datasets: - Beijuka/Multilingual_PII_NER_dataset metrics: - precision - recall - f1 - accuracy model-index: - name: multilingual-roberta-base-lumasaba-ner-v1 results: - task: name: Token Classification type: token-classification dataset: name: Beijuka/Multilingual_PII_NER_dataset type: Beijuka/Multilingual_PII_NER_dataset args: 'split: train+validation+test' metrics: - name: Precision type: precision value: 0.9440993788819876 - name: Recall type: recall value: 0.9357045143638851 - name: F1 type: f1 value: 0.9398832016489179 - name: Accuracy type: accuracy value: 0.9348958333333334 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # multilingual-roberta-base-lumasaba-ner-v1 This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the Beijuka/Multilingual_PII_NER_dataset dataset. It achieves the following results on the evaluation set: - Loss: 0.3357 - Precision: 0.9441 - Recall: 0.9357 - F1: 0.9399 - Accuracy: 0.9349 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 398 | 0.7383 | 0.7988 | 0.7533 | 0.7754 | 0.7568 | | 1.1862 | 2.0 | 796 | 0.4723 | 0.8857 | 0.8457 | 0.8653 | 0.8432 | | 0.4873 | 3.0 | 1194 | 0.4485 | 0.9198 | 0.8687 | 0.8935 | 0.8807 | | 0.2817 | 4.0 | 1592 | 0.5033 | 0.8993 | 0.9187 | 0.9089 | 0.8989 | | 0.2817 | 5.0 | 1990 | 0.3005 | 0.9416 | 0.9409 | 0.9413 | 0.9352 | | 0.1806 | 6.0 | 2388 | 0.4968 | 0.9479 | 0.9097 | 0.9284 | 0.9220 | | 0.1095 | 7.0 | 2786 | 0.5409 | 0.9118 | 0.9409 | 0.9261 | 0.9246 | | 0.062 | 8.0 | 3184 | 0.5375 | 0.9282 | 0.9340 | 0.9311 | 0.9212 | ### Framework versions - Transformers 4.55.4 - Pytorch 2.8.0+cu126 - Datasets 4.0.0 - Tokenizers 0.21.4
yaelahnal/blockassist-bc-mute_clawed_crab_1756645911
yaelahnal
2025-08-31T13:17:00Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "mute clawed crab", "arxiv:2504.07091", "region:us" ]
null
2025-08-31T13:12:47Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - mute clawed crab --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
akirafudo/blockassist-bc-keen_fast_giraffe_1756646149
akirafudo
2025-08-31T13:16:12Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "keen fast giraffe", "arxiv:2504.07091", "region:us" ]
null
2025-08-31T13:16:07Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - keen fast giraffe --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
2hpsatt/blockassist-bc-huge_deft_eagle_1756646103
2hpsatt
2025-08-31T13:15:57Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "huge deft eagle", "arxiv:2504.07091", "region:us" ]
null
2025-08-31T13:15:47Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - huge deft eagle --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Loder-S/blockassist-bc-sprightly_knobby_tiger_1756644340
Loder-S
2025-08-31T13:14:36Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "sprightly knobby tiger", "arxiv:2504.07091", "region:us" ]
null
2025-08-31T13:14:33Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - sprightly knobby tiger --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
AssanaliAidarkhan/qwen-medical-rag
AssanaliAidarkhan
2025-08-31T13:14:06Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-08-31T11:08:03Z
--- title: Qwen Medical RAG System emoji: 🏥 colorFrom: green colorTo: blue sdk: gradio app_file: app.py pinned: false license: apache-2.0 --- # Qwen Medical RAG System Medical advisory system using Qwen 1.5 0.5B for ACL injury analysis. ## Knowledge Base Categories This system provides advice for: - `partial_acl_injury` - Partial ACL damage with some intact fibers - `partial_acl_fiber_disruption` - Partial fiber disruption requiring evaluation - `complete_acl_tear` - Complete ACL rupture requiring surgery - `acl_sprain` - ACL strain with conservative treatment ## Files - `medical_knowledge.json`: ACL medical knowledge base (4 categories) - `rag_config.json`: System configuration ## Disclaimer For research and educational purposes only. Not for clinical diagnosis. Always consult qualified medical professionals.
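The card lists `medical_knowledge.json` and its four categories without showing the retrieval step; an illustrative sketch under an assumed schema (category name mapping to advice text — the real structure is not documented) could be:

```python
import json

# Assumed schema: {"partial_acl_injury": "advice text", ...}; the card does
# not document the actual structure of medical_knowledge.json.
with open("medical_knowledge.json") as f:
    knowledge = json.load(f)

def retrieve(category: str) -> str:
    """Return stored advice for one of the four ACL categories, if present."""
    return knowledge.get(category, "No entry for this category.")

print(retrieve("partial_acl_injury"))
```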
m-a-p/TreePO-Qwen2.5-7B_fixed-div
m-a-p
2025-08-31T13:14:00Z
10
0
null
[ "safetensors", "qwen2", "dataset:m-a-p/TreePO_data", "arxiv:2508.17445", "base_model:Qwen/Qwen2.5-7B", "base_model:finetune:Qwen/Qwen2.5-7B", "region:us" ]
null
2025-08-26T11:56:23Z
--- datasets: - m-a-p/TreePO_data base_model: - Qwen/Qwen2.5-7B --- We release the resources for the paper [TreePO](https://arxiv.org/abs/2508.17445): - Checkpoint with average weighted subgroup advantages + more diverse initial divergence ([the final one](https://huggingface.co/m-a-p/TreePO-Qwen2.5-7B)). - Checkpoint with average weighted subgroup advantages + [fixed divergence](https://huggingface.co/m-a-p/TreePO-Qwen2.5-7B_fixed-div). **← You are here.** - The [training dataset](https://huggingface.co/datasets/m-a-p/TreePO_data) consists of the DeepScaleR and SimpleRL math-reasoning sets. More links: - [Huggingface Paper](https://huggingface.co/papers/2508.17445) - [Project Page](https://m-a-p.ai/TreePO) - [X/Twitter Thread](https://x.com/yizhilll/status/1960616873180954854) - [Github Repo](https://github.com/multimodal-art-projection/TreePO) If you find this work useful, please consider citing the paper: ```bibtex @misc{li2025treepo, title={TreePO: Bridging the Gap of Policy Optimization and Efficacy and Inference Efficiency with Heuristic Tree-based Modeling}, author={Yizhi Li and Qingshui Gu and Zhoufutu Wen and Ziniu Li and Tianshun Xing and Shuyue Guo and Tianyu Zheng and Xin Zhou and Xingwei Qu and Wangchunshu Zhou and Zheng Zhang and Wei Shen and Qian Liu and Chenghua Lin and Jian Yang and Ge Zhang and Wenhao Huang}, year={2025}, eprint={2508.17445}, archivePrefix={arXiv}, primaryClass={cs.LG}, url={https://arxiv.org/abs/2508.17445}, howpublished = {\url{https://m-a-p.ai/TreePO}} } ```
Xtoun/blockassist-bc-bristly_scaly_koala_1756645088
Xtoun
2025-08-31T13:13:58Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "bristly scaly koala", "arxiv:2504.07091", "region:us" ]
null
2025-08-31T13:13:40Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - bristly scaly koala --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
m-a-p/TreePO-Qwen2.5-7B
m-a-p
2025-08-31T13:13:40Z
6
2
null
[ "safetensors", "qwen2", "dataset:m-a-p/TreePO_data", "arxiv:2508.17445", "base_model:Qwen/Qwen2.5-7B", "base_model:finetune:Qwen/Qwen2.5-7B", "region:us" ]
null
2025-08-26T11:23:00Z
--- datasets: - m-a-p/TreePO_data base_model: - Qwen/Qwen2.5-7B --- We release the resources for the paper [TreePO](https://arxiv.org/abs/2508.17445): - Checkpoint with average weighted subgroup advantages + more diverse initial divergence ([the final one](https://huggingface.co/m-a-p/TreePO-Qwen2.5-7B)). **← You are here.** - Checkpoint with average weighted subgroup advantages + [fixed divergence](https://huggingface.co/m-a-p/TreePO-Qwen2.5-7B_fixed-div). - The [training dataset](https://huggingface.co/datasets/m-a-p/TreePO_data) consists of the DeepScaleR and SimpleRL math-reasoning sets. More links: - [Huggingface Paper](https://huggingface.co/papers/2508.17445) - [Project Page](https://m-a-p.ai/TreePO) - [X/Twitter Thread](https://x.com/yizhilll/status/1960616873180954854) - [Github Repo](https://github.com/multimodal-art-projection/TreePO) If you find this work useful, please consider citing the paper: ```bibtex @misc{li2025treepo, title={TreePO: Bridging the Gap of Policy Optimization and Efficacy and Inference Efficiency with Heuristic Tree-based Modeling}, author={Yizhi Li and Qingshui Gu and Zhoufutu Wen and Ziniu Li and Tianshun Xing and Shuyue Guo and Tianyu Zheng and Xin Zhou and Xingwei Qu and Wangchunshu Zhou and Zheng Zhang and Wei Shen and Qian Liu and Chenghua Lin and Jian Yang and Ge Zhang and Wenhao Huang}, year={2025}, eprint={2508.17445}, archivePrefix={arXiv}, primaryClass={cs.LG}, url={https://arxiv.org/abs/2508.17445}, howpublished = {\url{https://m-a-p.ai/TreePO}} } ```
nick1880/blockassist-bc-barky_powerful_falcon_1756645890
nick1880
2025-08-31T13:12:16Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "barky powerful falcon", "arxiv:2504.07091", "region:us" ]
null
2025-08-31T13:12:08Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - barky powerful falcon --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Anatolejdm/split_2x2_ViT_SO400M_14_SigLIP_384
Anatolejdm
2025-08-31T13:11:29Z
0
0
peft
[ "peft", "llava_mistral", "arxiv:1910.09700", "base_model:teknium/OpenHermes-2.5-Mistral-7B", "base_model:adapter:teknium/OpenHermes-2.5-Mistral-7B", "region:us" ]
null
2025-08-31T13:08:08Z
---
library_name: peft
base_model: teknium/OpenHermes-2.5-Mistral-7B
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Data Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]

## Training procedure

### Framework versions

- PEFT 0.6.1
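The card's "How to Get Started with the Model" section is empty, so as a stopgap here is a minimal, untested sketch of attaching the adapter with PEFT's standard API. Assumptions not stated in the card: the adapter weights load directly onto the text-only base checkpoint, and a plain text prompt is a meaningful input; the `llava_mistral` tag suggests the full model may additionally require LLaVA-style vision components that this sketch does not set up.

```python
# A minimal, untested sketch: load the base model, then attach this PEFT adapter.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "teknium/OpenHermes-2.5-Mistral-7B"                  # base model named in the card
adapter_id = "Anatolejdm/split_2x2_ViT_SO400M_14_SigLIP_384"   # this repository

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # attach the adapter weights

# Text-only smoke test; a llava_mistral adapter likely also expects image inputs.
inputs = tokenizer("Hello!", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))
```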
jecyr/blockassist-bc-diving_huge_rat_1756645787
jecyr
2025-08-31T13:10:57Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "diving huge rat", "arxiv:2504.07091", "region:us" ]
null
2025-08-31T13:10:49Z
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- diving huge rat
---

# Gensyn BlockAssist

Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Beijuka/deberta-v3-base-hausa-ner-v1
Beijuka
2025-08-31T13:09:00Z
0
0
transformers
[ "transformers", "safetensors", "deberta-v2", "token-classification", "named-entity-recognition", "hausa", "african-language", "pii-detection", "generated_from_trainer", "dataset:Beijuka/Multilingual_PII_NER_dataset", "base_model:microsoft/deberta-v3-base", "base_model:finetune:microsoft/deberta-v3-base", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2025-08-31T12:51:48Z
---
library_name: transformers
license: mit
base_model: microsoft/deberta-v3-base
tags:
- named-entity-recognition
- hausa
- african-language
- pii-detection
- token-classification
- generated_from_trainer
datasets:
- Beijuka/Multilingual_PII_NER_dataset
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: multilingual-microsoft/deberta-v3-base-hausa-ner-v1
  results:
  - task:
      name: Token Classification
      type: token-classification
    dataset:
      name: Beijuka/Multilingual_PII_NER_dataset
      type: Beijuka/Multilingual_PII_NER_dataset
      args: 'split: train+validation+test'
    metrics:
    - name: Precision
      type: precision
      value: 0.9414141414141414
    - name: Recall
      type: recall
      value: 0.9395161290322581
    - name: F1
      type: f1
      value: 0.9404641775983855
    - name: Accuracy
      type: accuracy
      value: 0.9834580791244733
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# multilingual-microsoft/deberta-v3-base-hausa-ner-v1

This model is a fine-tuned version of [microsoft/deberta-v3-base](https://huggingface.co/microsoft/deberta-v3-base) on the Beijuka/Multilingual_PII_NER_dataset dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0729
- Precision: 0.9414
- Recall: 0.9395
- F1: 0.9405
- Accuracy: 0.9835

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 20

### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log        | 1.0   | 301  | 0.0915          | 0.8913    | 0.9010 | 0.8961 | 0.9739   |
| 0.16          | 2.0   | 602  | 0.0919          | 0.8879    | 0.9251 | 0.9061 | 0.9755   |
| 0.16          | 3.0   | 903  | 0.0760          | 0.8694    | 0.9429 | 0.9047 | 0.9758   |
| 0.0638        | 4.0   | 1204 | 0.0954          | 0.8875    | 0.9365 | 0.9113 | 0.9782   |
| 0.0475        | 5.0   | 1505 | 0.0770          | 0.9158    | 0.9257 | 0.9207 | 0.9784   |
| 0.0475        | 6.0   | 1806 | 0.0911          | 0.9120    | 0.9283 | 0.9201 | 0.9795   |
| 0.0355        | 7.0   | 2107 | 0.0878          | 0.8870    | 0.9371 | 0.9114 | 0.9771   |
| 0.0355        | 8.0   | 2408 | 0.1145          | 0.8882    | 0.9435 | 0.9150 | 0.9788   |

### Framework versions

- Transformers 4.55.4
- Pytorch 2.8.0+cu126
- Datasets 4.0.0
- Tokenizers 0.21.4
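The card omits a usage section, so here is a minimal sketch of running the model for PII-style NER, assuming the standard transformers token-classification pipeline applies. The entity label set is not documented in the card, and the sample Hausa sentence is an illustrative assumption, not taken from the training data.

```python
# A minimal sketch: NER inference with the transformers pipeline.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="Beijuka/deberta-v3-base-hausa-ner-v1",
    aggregation_strategy="simple",  # merge word pieces into whole entity spans
)

text = "Sunana Musa, ina zaune a Kano."  # Hausa: "My name is Musa, I live in Kano."
for entity in ner(text):
    # entity_group is whatever label scheme the model was trained with
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```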
zaychez/blockassist-bc-large_tricky_mandrill_1756645695
zaychez
2025-08-31T13:08:58Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "large tricky mandrill", "arxiv:2504.07091", "region:us" ]
null
2025-08-31T13:08:53Z
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- large tricky mandrill
---

# Gensyn BlockAssist

Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
AleksanderSav/test_model_2
AleksanderSav
2025-08-31T13:06:44Z
4
0
transformers
[ "transformers", "gguf", "qwen3", "text-generation-inference", "unsloth", "en", "base_model:unsloth/Qwen3-8B-unsloth-bnb-4bit", "base_model:quantized:unsloth/Qwen3-8B-unsloth-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-07-28T18:21:29Z
---
base_model: unsloth/Qwen3-8B-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- gguf
license: apache-2.0
language:
- en
---

# Uploaded model

- **Developed by:** AleksanderSav
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Qwen3-8B-unsloth-bnb-4bit

This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
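Since the repo is tagged `gguf`, a hedged sketch of local inference with llama-cpp-python follows. The GGUF filename inside the repo is not listed in the card, so the glob pattern below is a hypothetical placeholder; check the repository's file listing for the actual name.

```python
# A minimal, untested sketch: run the GGUF build with llama-cpp-python.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="AleksanderSav/test_model_2",
    filename="*.gguf",  # glob pattern; assumes a single GGUF file in the repo
    n_ctx=4096,         # context window; adjust to the model's actual limit
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Hello!"}],
    max_tokens=64,
)
print(out["choices"][0]["message"]["content"])
```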