| Column | Type | Range |
|:--------------|:-----------------------|:------------------------------------------|
| modelId | string | length 5 – 139 |
| author | string | length 2 – 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 – 2025-09-06 00:36:47 |
| downloads | int64 | 0 – 223M |
| likes | int64 | 0 – 11.7k |
| library_name | string | 540 distinct values |
| tags | list | length 1 – 4.05k |
| pipeline_tag | string | 55 distinct values |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 – 2025-09-06 00:36:27 |
| card | string | length 11 – 1.01M |
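The columns above describe a snapshot of Hub model metadata; the same fields can also be fetched live. A minimal sketch, assuming the `huggingface_hub` client (the sort and limit values are illustrative, not part of the dataset):

```python
from huggingface_hub import HfApi, ModelCard

api = HfApi()

# Fetch model records carrying the same fields as the columns above.
for m in api.list_models(sort="downloads", direction=-1, limit=3, full=True):
    print(m.id, m.author, m.downloads, m.likes, m.library_name, m.pipeline_tag, m.created_at)

    # The `card` column corresponds to the README body (frontmatter included).
    card = ModelCard.load(m.id)
    print(card.content[:200])
```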
New-Clip-Dr-eman-viral-video-link/New.full.videos.Dr.eman.Viral.Video.Official.Tutorial
New-Clip-Dr-eman-viral-video-link
2025-08-12T18:07:19Z
0
0
null
[ "region:us" ]
null
2025-08-12T18:07:04Z
null
mradermacher/SoftwareArchitecture-Instruct-v1-GGUF
mradermacher
2025-08-12T18:06:47Z
0
0
transformers
[ "transformers", "gguf", "text-generation-inference", "unsloth", "lfm2", "en", "dataset:ajibawa-2023/Software-Architecture", "base_model:yasserrmd/SoftwareArchitecture-Instruct-v1", "base_model:quantized:yasserrmd/SoftwareArchitecture-Instruct-v1", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-08-12T18:01:32Z
---
base_model: yasserrmd/SoftwareArchitecture-Instruct-v1
datasets:
- ajibawa-2023/Software-Architecture
language:
- en
library_name: transformers
license: apache-2.0
mradermacher:
  readme_rev: 1
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- lfm2
---

## About

<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->

static quants of https://huggingface.co/yasserrmd/SoftwareArchitecture-Instruct-v1

<!-- provided-files -->

***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#SoftwareArchitecture-Instruct-v1-GGUF).***

weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.

## Usage

If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files.

## Provided Quants

(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/SoftwareArchitecture-Instruct-v1-GGUF/resolve/main/SoftwareArchitecture-Instruct-v1.Q2_K.gguf) | Q2_K | 0.6 | |
| [GGUF](https://huggingface.co/mradermacher/SoftwareArchitecture-Instruct-v1-GGUF/resolve/main/SoftwareArchitecture-Instruct-v1.Q3_K_S.gguf) | Q3_K_S | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/SoftwareArchitecture-Instruct-v1-GGUF/resolve/main/SoftwareArchitecture-Instruct-v1.Q3_K_M.gguf) | Q3_K_M | 0.7 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/SoftwareArchitecture-Instruct-v1-GGUF/resolve/main/SoftwareArchitecture-Instruct-v1.Q3_K_L.gguf) | Q3_K_L | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/SoftwareArchitecture-Instruct-v1-GGUF/resolve/main/SoftwareArchitecture-Instruct-v1.IQ4_XS.gguf) | IQ4_XS | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/SoftwareArchitecture-Instruct-v1-GGUF/resolve/main/SoftwareArchitecture-Instruct-v1.Q4_K_S.gguf) | Q4_K_S | 0.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/SoftwareArchitecture-Instruct-v1-GGUF/resolve/main/SoftwareArchitecture-Instruct-v1.Q4_K_M.gguf) | Q4_K_M | 0.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/SoftwareArchitecture-Instruct-v1-GGUF/resolve/main/SoftwareArchitecture-Instruct-v1.Q5_K_S.gguf) | Q5_K_S | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/SoftwareArchitecture-Instruct-v1-GGUF/resolve/main/SoftwareArchitecture-Instruct-v1.Q5_K_M.gguf) | Q5_K_M | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/SoftwareArchitecture-Instruct-v1-GGUF/resolve/main/SoftwareArchitecture-Instruct-v1.Q6_K.gguf) | Q6_K | 1.1 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/SoftwareArchitecture-Instruct-v1-GGUF/resolve/main/SoftwareArchitecture-Instruct-v1.Q8_0.gguf) | Q8_0 | 1.3 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/SoftwareArchitecture-Instruct-v1-GGUF/resolve/main/SoftwareArchitecture-Instruct-v1.f16.gguf) | f16 | 2.4 | 16 bpw, overkill |

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)

And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

## FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized.

## Thanks

I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.

<!-- end -->
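The card above stops at the download table. As a supplementary sketch (not from the card), one way to fetch and run one of the listed quants, assuming `huggingface_hub` and `llama-cpp-python` are installed:

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Q4_K_M is marked "fast, recommended" in the table above.
path = hf_hub_download(
    repo_id="mradermacher/SoftwareArchitecture-Instruct-v1-GGUF",
    filename="SoftwareArchitecture-Instruct-v1.Q4_K_M.gguf",
)

llm = Llama(model_path=path, n_ctx=4096)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Contrast layered and hexagonal architectures."}]
)
print(out["choices"][0]["message"]["content"])
```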
mradermacher/Malaysian-TTS-1.7B-v0.1-GGUF
mradermacher
2025-08-12T18:06:47Z
0
0
transformers
[ "transformers", "gguf", "en", "base_model:mesolitica/Malaysian-TTS-1.7B-v0.1", "base_model:quantized:mesolitica/Malaysian-TTS-1.7B-v0.1", "endpoints_compatible", "region:us", "conversational" ]
null
2025-08-12T17:59:22Z
---
base_model: mesolitica/Malaysian-TTS-1.7B-v0.1
language:
- en
library_name: transformers
mradermacher:
  readme_rev: 1
quantized_by: mradermacher
tags: []
---

## About

<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->

static quants of https://huggingface.co/mesolitica/Malaysian-TTS-1.7B-v0.1

<!-- provided-files -->

***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Malaysian-TTS-1.7B-v0.1-GGUF).***

weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.

## Usage

If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files.

## Provided Quants

(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Malaysian-TTS-1.7B-v0.1-GGUF/resolve/main/Malaysian-TTS-1.7B-v0.1.Q2_K.gguf) | Q2_K | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/Malaysian-TTS-1.7B-v0.1-GGUF/resolve/main/Malaysian-TTS-1.7B-v0.1.Q3_K_S.gguf) | Q3_K_S | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/Malaysian-TTS-1.7B-v0.1-GGUF/resolve/main/Malaysian-TTS-1.7B-v0.1.Q3_K_M.gguf) | Q3_K_M | 1.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Malaysian-TTS-1.7B-v0.1-GGUF/resolve/main/Malaysian-TTS-1.7B-v0.1.Q3_K_L.gguf) | Q3_K_L | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/Malaysian-TTS-1.7B-v0.1-GGUF/resolve/main/Malaysian-TTS-1.7B-v0.1.IQ4_XS.gguf) | IQ4_XS | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/Malaysian-TTS-1.7B-v0.1-GGUF/resolve/main/Malaysian-TTS-1.7B-v0.1.Q4_K_S.gguf) | Q4_K_S | 1.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Malaysian-TTS-1.7B-v0.1-GGUF/resolve/main/Malaysian-TTS-1.7B-v0.1.Q4_K_M.gguf) | Q4_K_M | 1.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Malaysian-TTS-1.7B-v0.1-GGUF/resolve/main/Malaysian-TTS-1.7B-v0.1.Q5_K_S.gguf) | Q5_K_S | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/Malaysian-TTS-1.7B-v0.1-GGUF/resolve/main/Malaysian-TTS-1.7B-v0.1.Q5_K_M.gguf) | Q5_K_M | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/Malaysian-TTS-1.7B-v0.1-GGUF/resolve/main/Malaysian-TTS-1.7B-v0.1.Q6_K.gguf) | Q6_K | 1.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Malaysian-TTS-1.7B-v0.1-GGUF/resolve/main/Malaysian-TTS-1.7B-v0.1.Q8_0.gguf) | Q8_0 | 2.0 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Malaysian-TTS-1.7B-v0.1-GGUF/resolve/main/Malaysian-TTS-1.7B-v0.1.f16.gguf) | f16 | 3.7 | 16 bpw, overkill |

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)

And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

## FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized.

## Thanks

I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.

<!-- end -->
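On the multi-part files mentioned under Usage: none of the files in this particular repo are split, but for repos that do split large quants, reassembly is a single `cat`. A hedged sketch; the `.partNofM` naming below is an assumption for illustration, not taken from this card:

```bash
# Hypothetical file names: reassemble a split quant before loading it.
cat Malaysian-TTS-1.7B-v0.1.Q8_0.gguf.part1of2 \
    Malaysian-TTS-1.7B-v0.1.Q8_0.gguf.part2of2 > Malaysian-TTS-1.7B-v0.1.Q8_0.gguf
```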
hidayahlut/blockassist-bc-knobby_scavenging_wasp_1755021841
hidayahlut
2025-08-12T18:05:12Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "knobby scavenging wasp", "arxiv:2504.07091", "region:us" ]
null
2025-08-12T18:04:36Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - knobby scavenging wasp --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
xem-clip-doi-nam-nu-co-hanh-dong-nhay-cam/18.XEM.xac.minh.clip.doi.nam.nu.co.hanh.dong.nhay.cam.VIDEO
xem-clip-doi-nam-nu-co-hanh-dong-nhay-cam
2025-08-12T18:04:59Z
0
0
null
[ "region:us" ]
null
2025-08-12T18:04:31Z
null
Jack-Payne1/qwen_2.5_7b-phoenix_B1_random_seed3
Jack-Payne1
2025-08-12T18:04:26Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "conversational", "en", "base_model:unsloth/Qwen2.5-7B-Instruct", "base_model:finetune:unsloth/Qwen2.5-7B-Instruct", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-08-12T18:01:37Z
---
base_model: unsloth/Qwen2.5-7B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- sft
license: apache-2.0
language:
- en
---

# Uploaded model

- **Developed by:** Jack-Payne1
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Qwen2.5-7B-Instruct

This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
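The card ships no usage snippet; a minimal sketch (not from the card; prompt and generation settings are illustrative) of loading the fine-tune with transformers:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Jack-Payne1/qwen_2.5_7b-phoenix_B1_random_seed3"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Qwen2.5 instruct models expect the chat template.
messages = [{"role": "user", "content": "Hello!"}]
inputs = tok.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

out = model.generate(inputs, max_new_tokens=64)
print(tok.decode(out[0], skip_special_tokens=True))
```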
charvibannur/Qwen-3-0.6B-DPO-10-5e-5-0.1-1000
charvibannur
2025-08-12T18:04:20Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-12T18:03:36Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1755021761
IvanJAjebu
2025-08-12T18:03:48Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "thorny slender capybara", "arxiv:2504.07091", "region:us" ]
null
2025-08-12T18:03:41Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - thorny slender capybara --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Ferdi3425/blockassist-bc-amphibious_deadly_otter_1755021713
Ferdi3425
2025-08-12T18:03:17Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "amphibious deadly otter", "arxiv:2504.07091", "region:us" ]
null
2025-08-12T18:02:49Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - amphibious deadly otter --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
andr0m4da/blockassist-bc-grazing_hunting_boar_1755021663
andr0m4da
2025-08-12T18:02:37Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "grazing hunting boar", "arxiv:2504.07091", "region:us" ]
null
2025-08-12T18:02:25Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - grazing hunting boar --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
aleebaster/blockassist-bc-sly_eager_boar_1755020540
aleebaster
2025-08-12T18:01:16Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "sly eager boar", "arxiv:2504.07091", "region:us" ]
null
2025-08-12T17:59:57Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - sly eager boar --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
pharaohe/dwarfredhairrep10epoc16
pharaohe
2025-08-12T18:00:41Z
0
0
diffusers
[ "diffusers", "text-to-image", "flux", "lora", "template:sd-lora", "fluxgym", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-08-12T18:00:01Z
---
tags:
- text-to-image
- flux
- lora
- diffusers
- template:sd-lora
- fluxgym
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: woman
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---

# dwarfredhairrep10epoc16

A Flux LoRA trained on a local computer with [Fluxgym](https://github.com/cocktailpeanut/fluxgym)

<Gallery />

## Trigger words

You should use `woman` to trigger the image generation.

## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, Forge, etc.

Weights for this model are available in Safetensors format.
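Beyond the UI tools listed above, the LoRA can also be attached in diffusers. A sketch assuming the diffusers `FluxPipeline` and a CUDA GPU (prompt and settings are illustrative; note the `woman` trigger word from the card):

```python
import torch
from diffusers import FluxPipeline

# Load the base model named in the card, then attach this LoRA.
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("pharaohe/dwarfredhairrep10epoc16")

image = pipe(
    "portrait of a woman with red hair",  # include the trigger word
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("sample.png")
```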
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1755021428
IvanJAjebu
2025-08-12T17:58:48Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "thorny slender capybara", "arxiv:2504.07091", "region:us" ]
null
2025-08-12T17:58:02Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - thorny slender capybara --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
hidayahlut/blockassist-bc-knobby_scavenging_wasp_1755020821
hidayahlut
2025-08-12T17:58:21Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "knobby scavenging wasp", "arxiv:2504.07091", "region:us" ]
null
2025-08-12T17:48:08Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - knobby scavenging wasp --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
calegpedia/blockassist-bc-stealthy_slimy_rooster_1755019804
calegpedia
2025-08-12T17:58:11Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "stealthy slimy rooster", "arxiv:2504.07091", "region:us" ]
null
2025-08-12T17:58:07Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - stealthy slimy rooster --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
hmoreira/xlm-roberta-large-petrogeoner
hmoreira
2025-08-12T17:58:05Z
0
0
null
[ "safetensors", "xlm-roberta", "token-classification", "pt", "dataset:hmoreira/PetroGeoNER", "base_model:FacebookAI/xlm-roberta-large", "base_model:finetune:FacebookAI/xlm-roberta-large", "region:us" ]
token-classification
2025-08-12T17:20:19Z
---
datasets:
- hmoreira/PetroGeoNER
language:
- pt
metrics:
- f1
base_model:
- FacebookAI/xlm-roberta-large
pipeline_tag: token-classification
---

# Named Entity Recognition Model for Geological Texts

## Model Description

A specialized Named Entity Recognition model trained on Portuguese-language texts from the geological and petroleum domain. The model was fine-tuned to identify and classify 13 different types of geological entities commonly found in technical reports, scientific articles, and industry documentation.

## Model Performance

Performance across all entity classes:

| Entity Class | Precision | Recall | F1-Score | Support |
|:------------------|----------:|-------:|---------:|--------:|
| BACIA | 0.91 | 0.96 | 0.94 | 581 |
| CAMPO | 0.87 | 0.81 | 0.84 | 99 |
| ESTRUTURA_FISICA | 0.89 | 0.84 | 0.86 | 396 |
| FLUIDODATERRA | 0.89 | 0.85 | 0.87 | 339 |
| FOSSEIS | 0.90 | 0.76 | 0.82 | 336 |
| MINERAIS | 0.93 | 0.83 | 0.88 | 217 |
| NAO_CONSOLID | 0.89 | 0.69 | 0.78 | 131 |
| PALEOAMBIENTE | 0.85 | 0.71 | 0.77 | 486 |
| POÇO | 0.97 | 0.92 | 0.94 | 84 |
| ROCHA | 0.93 | 0.93 | 0.93 | 848 |
| TEXTURA | 0.88 | 0.79 | 0.84 | 29 |
| UNIDADE_CRONO | 0.95 | 0.96 | 0.95 | 1119 |
| UNIDADE_LITO | 0.91 | 0.88 | 0.90 | 468 |

**Overall Performance:**
- **Micro average:** Precision 0.91, Recall 0.88, F1-Score 0.90
- **Macro average:** Precision 0.91, Recall 0.84, F1-Score 0.87
- **Weighted average:** Precision 0.91, Recall 0.88, F1-Score 0.89

## Entity Classes

The model recognizes 13 types of geological entities:

- **BACIA**: geological basins and sedimentary areas
- **CAMPO**: oil and gas fields
- **ESTRUTURA_FISICA**: rock structures and arrangements
- **FLUIDODATERRA**: geological fluids
- **FOSSEIS**: fossil remains and paleontological evidence
- **MINERAIS**: mineral compositions and types
- **NAO_CONSOLID**: unconsolidated geological materials
- **PALEOAMBIENTE**: ancient sedimentary environments
- **POÇO**: oil/gas wells and drilling sites
- **ROCHA**: rock types and formations
- **TEXTURA**: rock textures and patterns
- **UNIDADE_CRONO**: geological time periods
- **UNIDADE_LITO**: lithostratigraphic formations
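Not part of the original card: a short usage sketch via the transformers token-classification pipeline (the example sentence is invented for illustration):

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="hmoreira/xlm-roberta-large-petrogeoner",
    aggregation_strategy="simple",  # merge subword pieces into whole entities
)

text = "A Formação Barra Velha, na Bacia de Santos, registra carbonatos do Aptiano."
for ent in ner(text):
    print(ent["entity_group"], ent["word"], round(ent["score"], 2))
```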
VIDEOS-18-archita-phukan-first-film/New.full.videos.archita.phukan.first.film.Official.Tutorial
VIDEOS-18-archita-phukan-first-film
2025-08-12T17:56:57Z
0
0
null
[ "region:us" ]
null
2025-08-12T17:56:46Z
null
JeonMashup/Agust_D_BTS
JeonMashup
2025-08-12T17:56:54Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2024-11-08T00:42:03Z
--- license: apache-2.0 ---
y0yvu/y0y-vuv2
y0yvu
2025-08-12T17:56:44Z
0
0
null
[ "safetensors", "any-to-any", "omega", "omegalabs", "bittensor", "agi", "license:mit", "region:us" ]
any-to-any
2025-08-12T17:29:04Z
--- license: mit tags: - any-to-any - omega - omegalabs - bittensor - agi --- This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet. Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
Sayemahsjn/blockassist-bc-playful_feline_octopus_1755020319
Sayemahsjn
2025-08-12T17:56:28Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "playful feline octopus", "arxiv:2504.07091", "region:us" ]
null
2025-08-12T17:56:24Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - playful feline octopus --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
tdickson17/Text_Summarization
tdickson17
2025-08-12T17:55:38Z
22
0
transformers
[ "transformers", "safetensors", "t5", "text2text-generation", "summarization", "text-generation-inference", "endpoints_compatible", "region:us" ]
summarization
2025-08-09T23:35:37Z
---
library_name: transformers
pipeline_tag: summarization
tags:
- politics
- summarization
- climate change
- political party
- press release
- political communication
- European Union
- Speech
license: afl-3.0
language:
- en
- es
- da
- de
- it
- fr
- nl
- pl
---

# Text Summarization

The model used in this summarization task is a T5 transformer-based language model fine-tuned for abstractive summarization of political texts. It treats summarization as a text-to-text problem, where both the input and the output are sequences of text. The model was fine-tuned on 10k political party press releases from 66 parties in 12 different countries, each paired with an abstractive summary.

## Model Details

- **Pretrained Model:** uses a pretrained tokenizer and model from the Hugging Face transformers library (e.g., T5ForConditionalGeneration).
- **Tokenization:** text is tokenized with a subword tokenizer, where long words are split into smaller, meaningful subwords.
- **Input Processing:** the input sequence is truncated or padded to fit within the max input length of 512 tokens.
- **Output Generation:** summaries are generated with beam search, exploring multiple candidate sequences at each step.

### Key Parameters

- **Max Input Length:** 512 tokens, ensuring the input text is truncated or padded to fit within the model's processing capacity.
- **Max Target Length:** 128 tokens, restricting the length of the generated summary to balance concision against content preservation.
- **Beam Search:** a beam width of 10 is used to explore multiple candidate sequences during generation, helping the model choose the most probable summary.
- **Early Stopping:** generation stops early if the model predicts the end of the sequence before reaching the maximum target length.

### Generation Process

1. **Input Tokenization:** the input text is tokenized into subword units and passed into the model.
2. **Beam Search:** the model generates the next token by considering the top 10 candidate sequences at each step, aiming to find the most probable summary sequence.
3. **Output Decoding:** the generated summary is decoded from token IDs back into human-readable text, skipping special tokens such as padding or end-of-sequence markers.

- **Repository:** https://github.com/tcdickson/Text-Summarization.git

## Training Details

The summarization model was trained on press releases scraped from various party websites, selected to represent diverse political perspectives and topics so that the model learned to summarize a wide range of political content.

**Data Collection:**
- **Source:** press releases from official party websites, which often contain detailed statements, policy announcements, and responses to current events. These documents were chosen for their structured format and consistent language use.
- **Preprocessing:** the scraped text was cleaned and preprocessed, removing extraneous HTML tags and irrelevant information and ensuring the content was well formatted for training.
- **Text Format:** the press releases were processed into text pairs: the original full text as the input and a human-crafted summary (if available) or a custom summary generated by the developers as the target output.

**Training Objective:** the model was fine-tuned on these press releases for abstractive summarization: generating concise, fluent summaries of longer political texts that capture key information and context while avoiding irrelevant detail, so that each summary accurately reflects the essence of its release.

**Training Strategy:**
- **Supervised Learning:** each input press release was paired with a corresponding summary.
- **Optimization:** the model's parameters were adjusted using gradient descent and the cross-entropy loss function.

This training process allowed the model to learn not only the specific language patterns commonly found in political press releases but also the broader context of political discourse.

## Citation

    @article{dickson2024going,
      title={Going against the grain: Climate change as a wedge issue for the radical right},
      author={Dickson, Zachary P and Hobolt, Sara B},
      journal={Comparative Political Studies},
      year={2024},
      publisher={SAGE Publications Sage CA: Los Angeles, CA}
    }
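The generation settings described above translate directly into a transformers call. A minimal sketch (not from the card; the input text is a placeholder):

```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

model_id = "tdickson17/Text_Summarization"
tok = AutoTokenizer.from_pretrained(model_id)
model = T5ForConditionalGeneration.from_pretrained(model_id)

text = "Full text of a party press release..."  # placeholder input

# 512-token input cap, 128-token summary cap, beam width 10, early stopping,
# matching the Key Parameters section of the card.
inputs = tok(text, max_length=512, truncation=True, return_tensors="pt")
ids = model.generate(**inputs, max_length=128, num_beams=10, early_stopping=True)
print(tok.decode(ids[0], skip_special_tokens=True))
```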
coastalcph/Qwen2.5-7B-05t_gcd_sycophancy-05t_diff_sycophant
coastalcph
2025-08-12T17:55:28Z
0
0
null
[ "safetensors", "qwen2", "region:us" ]
null
2025-08-12T17:50:48Z
# Combined Task Vector Model

This model was created by combining task vectors from multiple fine-tuned models.

## Task Vector Computation

```python
t_1 = TaskVector("Qwen/Qwen2.5-7B-Instruct", "coastalcph/Qwen2.5-7B-gcd_sycophancy")
t_2 = TaskVector("Qwen/Qwen2.5-7B-Instruct", "coastalcph/Qwen2.5-7B-personality-non-sycophancy")
t_3 = TaskVector("Qwen/Qwen2.5-7B-Instruct", "coastalcph/Qwen2.5-7B-personality-sycophancy")  # finetuned_model3 in the args below
t_combined = 0.5 * t_1 + 0.5 * t_2 - 0.5 * t_3
new_model = t_combined.apply_to("Qwen/Qwen2.5-7B-Instruct", scaling_coef=1.0)
```

## Models Used

- Base Model: https://huggingface.co/Qwen/Qwen2.5-7B-Instruct
- Fine-tuned Model 1: https://huggingface.co/coastalcph/Qwen2.5-7B-gcd_sycophancy
- Fine-tuned Model 2: https://huggingface.co/coastalcph/Qwen2.5-7B-personality-non-sycophancy
- Fine-tuned Model 3: https://huggingface.co/coastalcph/Qwen2.5-7B-personality-sycophancy

## Technical Details

- Creation Script Git Hash: 435fdd2a144e79c487d864db94b34a02894295b9
- Task Vector Method: Additive combination
- Args:

      {
        "pretrained_model": "Qwen/Qwen2.5-7B-Instruct",
        "finetuned_model1": "coastalcph/Qwen2.5-7B-gcd_sycophancy",
        "finetuned_model2": "coastalcph/Qwen2.5-7B-personality-non-sycophancy",
        "finetuned_model3": "coastalcph/Qwen2.5-7B-personality-sycophancy",
        "output_model_name": "coastalcph/Qwen2.5-7B-05t_gcd_sycophancy-05t_diff_sycophant",
        "output_dir": "/projects/nlp/data/constanzam/weight-interp/task-vectors/math_non_sycophant_12Aug",
        "scaling_coef": 1.0,
        "apply_line_scaling_t1": false,
        "apply_line_scaling_t2": false,
        "apply_line_scaling_t3": false,
        "scale_t1": 0.5,
        "scale_t2": 0.5,
        "scale_t3": 0.5
      }
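The card assumes a `TaskVector` helper without defining it. A minimal sketch of the arithmetic it implies, not the authors' code (it assumes all checkpoints share a state dict layout and enough RAM to hold them):

```python
import torch
from transformers import AutoModelForCausalLM


class TaskVector:
    """Parameter deltas between a fine-tuned model and its base."""

    def __init__(self, pretrained=None, finetuned=None, vector=None):
        if vector is not None:
            self.vector = vector
            return
        base = AutoModelForCausalLM.from_pretrained(pretrained, torch_dtype=torch.bfloat16).state_dict()
        ft = AutoModelForCausalLM.from_pretrained(finetuned, torch_dtype=torch.bfloat16).state_dict()
        # Only floating-point tensors carry learnable weights.
        self.vector = {k: ft[k] - base[k] for k in ft if ft[k].dtype.is_floating_point}

    def __add__(self, other):
        return TaskVector(vector={k: self.vector[k] + other.vector[k] for k in self.vector})

    def __rmul__(self, coef):
        return TaskVector(vector={k: coef * v for k, v in self.vector.items()})

    def __sub__(self, other):
        return self + (-1.0) * other

    def apply_to(self, pretrained, scaling_coef=1.0):
        model = AutoModelForCausalLM.from_pretrained(pretrained, torch_dtype=torch.bfloat16)
        sd = model.state_dict()
        for k, delta in self.vector.items():
            sd[k] = sd[k] + scaling_coef * delta
        model.load_state_dict(sd)
        return model
```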
Ferdi3425/blockassist-bc-amphibious_deadly_otter_1755021237
Ferdi3425
2025-08-12T17:55:12Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "amphibious deadly otter", "arxiv:2504.07091", "region:us" ]
null
2025-08-12T17:54:45Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - amphibious deadly otter --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
EurekaTian/qwen2p5_7b_mmlu_neg
EurekaTian
2025-08-12T17:54:54Z
0
0
null
[ "safetensors", "qwen2", "license:apache-2.0", "region:us" ]
null
2025-08-12T17:39:16Z
--- license: apache-2.0 ---
mradermacher/Moondark-12B-GGUF
mradermacher
2025-08-12T17:54:45Z
0
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "roleplay", "en", "base_model:Vortex5/Moondark-12B", "base_model:quantized:Vortex5/Moondark-12B", "endpoints_compatible", "region:us", "conversational" ]
null
2025-08-12T14:33:38Z
---
base_model: Vortex5/Moondark-12B
language:
- en
library_name: transformers
mradermacher:
  readme_rev: 1
quantized_by: mradermacher
tags:
- mergekit
- merge
- roleplay
---

## About

<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->

static quants of https://huggingface.co/Vortex5/Moondark-12B

<!-- provided-files -->

***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Moondark-12B-GGUF).***

weighted/imatrix quants are available at https://huggingface.co/mradermacher/Moondark-12B-i1-GGUF

## Usage

If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files.

## Provided Quants

(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Moondark-12B-GGUF/resolve/main/Moondark-12B.Q2_K.gguf) | Q2_K | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/Moondark-12B-GGUF/resolve/main/Moondark-12B.Q3_K_S.gguf) | Q3_K_S | 5.6 | |
| [GGUF](https://huggingface.co/mradermacher/Moondark-12B-GGUF/resolve/main/Moondark-12B.Q3_K_M.gguf) | Q3_K_M | 6.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Moondark-12B-GGUF/resolve/main/Moondark-12B.Q3_K_L.gguf) | Q3_K_L | 6.7 | |
| [GGUF](https://huggingface.co/mradermacher/Moondark-12B-GGUF/resolve/main/Moondark-12B.IQ4_XS.gguf) | IQ4_XS | 6.9 | |
| [GGUF](https://huggingface.co/mradermacher/Moondark-12B-GGUF/resolve/main/Moondark-12B.Q4_K_S.gguf) | Q4_K_S | 7.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Moondark-12B-GGUF/resolve/main/Moondark-12B.Q4_K_M.gguf) | Q4_K_M | 7.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Moondark-12B-GGUF/resolve/main/Moondark-12B.Q5_K_S.gguf) | Q5_K_S | 8.6 | |
| [GGUF](https://huggingface.co/mradermacher/Moondark-12B-GGUF/resolve/main/Moondark-12B.Q5_K_M.gguf) | Q5_K_M | 8.8 | |
| [GGUF](https://huggingface.co/mradermacher/Moondark-12B-GGUF/resolve/main/Moondark-12B.Q6_K.gguf) | Q6_K | 10.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Moondark-12B-GGUF/resolve/main/Moondark-12B.Q8_0.gguf) | Q8_0 | 13.1 | fast, best quality |

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)

And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

## FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized.

## Thanks

I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.

<!-- end -->
mradermacher/Qwen2.5-3B-gabliterated-i1-GGUF
mradermacher
2025-08-12T17:54:45Z
0
0
transformers
[ "transformers", "gguf", "uncensored", "code", "legal", "text-generation-inference", "en", "base_model:Goekdeniz-Guelmez/Qwen2.5-3B-gabliterated", "base_model:quantized:Goekdeniz-Guelmez/Qwen2.5-3B-gabliterated", "license:apache-2.0", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
2025-08-12T17:17:25Z
---
base_model: Goekdeniz-Guelmez/Qwen2.5-3B-gabliterated
language:
- en
library_name: transformers
license: apache-2.0
mradermacher:
  readme_rev: 1
quantized_by: mradermacher
tags:
- uncensored
- code
- legal
- text-generation-inference
---

## About

<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K_M Q4_0 IQ3_XS Q4_1 IQ3_S -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->

weighted/imatrix quants of https://huggingface.co/Goekdeniz-Guelmez/Qwen2.5-3B-gabliterated

<!-- provided-files -->

***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Qwen2.5-3B-gabliterated-i1-GGUF).***

static quants are available at https://huggingface.co/mradermacher/Qwen2.5-3B-gabliterated-GGUF

## Usage

If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files.

## Provided Quants

(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-gabliterated-i1-GGUF/resolve/main/Qwen2.5-3B-gabliterated.imatrix.gguf) | imatrix | 0.1 | imatrix file (for creating your own quants) |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-gabliterated-i1-GGUF/resolve/main/Qwen2.5-3B-gabliterated.i1-IQ1_S.gguf) | i1-IQ1_S | 0.9 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-gabliterated-i1-GGUF/resolve/main/Qwen2.5-3B-gabliterated.i1-IQ1_M.gguf) | i1-IQ1_M | 1.0 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-gabliterated-i1-GGUF/resolve/main/Qwen2.5-3B-gabliterated.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-gabliterated-i1-GGUF/resolve/main/Qwen2.5-3B-gabliterated.i1-IQ2_XS.gguf) | i1-IQ2_XS | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-gabliterated-i1-GGUF/resolve/main/Qwen2.5-3B-gabliterated.i1-IQ2_S.gguf) | i1-IQ2_S | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-gabliterated-i1-GGUF/resolve/main/Qwen2.5-3B-gabliterated.i1-IQ2_M.gguf) | i1-IQ2_M | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-gabliterated-i1-GGUF/resolve/main/Qwen2.5-3B-gabliterated.i1-Q2_K_S.gguf) | i1-Q2_K_S | 1.3 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-gabliterated-i1-GGUF/resolve/main/Qwen2.5-3B-gabliterated.i1-Q2_K.gguf) | i1-Q2_K | 1.4 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-gabliterated-i1-GGUF/resolve/main/Qwen2.5-3B-gabliterated.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 1.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-gabliterated-i1-GGUF/resolve/main/Qwen2.5-3B-gabliterated.i1-IQ3_XS.gguf) | i1-IQ3_XS | 1.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-gabliterated-i1-GGUF/resolve/main/Qwen2.5-3B-gabliterated.i1-Q3_K_S.gguf) | i1-Q3_K_S | 1.6 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-gabliterated-i1-GGUF/resolve/main/Qwen2.5-3B-gabliterated.i1-IQ3_S.gguf) | i1-IQ3_S | 1.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-gabliterated-i1-GGUF/resolve/main/Qwen2.5-3B-gabliterated.i1-IQ3_M.gguf) | i1-IQ3_M | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-gabliterated-i1-GGUF/resolve/main/Qwen2.5-3B-gabliterated.i1-Q3_K_M.gguf) | i1-Q3_K_M | 1.7 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-gabliterated-i1-GGUF/resolve/main/Qwen2.5-3B-gabliterated.i1-Q3_K_L.gguf) | i1-Q3_K_L | 1.8 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-gabliterated-i1-GGUF/resolve/main/Qwen2.5-3B-gabliterated.i1-IQ4_XS.gguf) | i1-IQ4_XS | 1.8 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-gabliterated-i1-GGUF/resolve/main/Qwen2.5-3B-gabliterated.i1-IQ4_NL.gguf) | i1-IQ4_NL | 1.9 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-gabliterated-i1-GGUF/resolve/main/Qwen2.5-3B-gabliterated.i1-Q4_0.gguf) | i1-Q4_0 | 1.9 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-gabliterated-i1-GGUF/resolve/main/Qwen2.5-3B-gabliterated.i1-Q4_K_S.gguf) | i1-Q4_K_S | 1.9 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-gabliterated-i1-GGUF/resolve/main/Qwen2.5-3B-gabliterated.i1-Q4_K_M.gguf) | i1-Q4_K_M | 2.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-gabliterated-i1-GGUF/resolve/main/Qwen2.5-3B-gabliterated.i1-Q4_1.gguf) | i1-Q4_1 | 2.1 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-gabliterated-i1-GGUF/resolve/main/Qwen2.5-3B-gabliterated.i1-Q5_K_S.gguf) | i1-Q5_K_S | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-gabliterated-i1-GGUF/resolve/main/Qwen2.5-3B-gabliterated.i1-Q5_K_M.gguf) | i1-Q5_K_M | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-gabliterated-i1-GGUF/resolve/main/Qwen2.5-3B-gabliterated.i1-Q6_K.gguf) | i1-Q6_K | 2.6 | practically like static Q6_K |

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)

And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

## FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized.

## Thanks

I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.

<!-- end -->
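The imatrix row in the table above is the input for rolling your own quants. A sketch assuming a local llama.cpp build and an f16 GGUF obtained from the static repo (file names are illustrative):

```bash
# llama.cpp's quantize tool accepts an importance matrix via --imatrix;
# the f16 source GGUF is assumed to come from
# mradermacher/Qwen2.5-3B-gabliterated-GGUF (static quants).
./llama-quantize --imatrix Qwen2.5-3B-gabliterated.imatrix.gguf \
    Qwen2.5-3B-gabliterated.f16.gguf \
    Qwen2.5-3B-gabliterated.IQ2_M.gguf IQ2_M
```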
kayacrypto/blockassist-bc-thriving_barky_wolf_1755021154
kayacrypto
2025-08-12T17:54:05Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "thriving barky wolf", "arxiv:2504.07091", "region:us" ]
null
2025-08-12T17:53:48Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - thriving barky wolf --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1755021169
IvanJAjebu
2025-08-12T17:54:01Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "thorny slender capybara", "arxiv:2504.07091", "region:us" ]
null
2025-08-12T17:53:52Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - thorny slender capybara --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
VIDEOS-18-Horse-and-girl-viral-video-link/New.full.videos.Horse.and.girl.Viral.Video.Official.Tutorial
VIDEOS-18-Horse-and-girl-viral-video-link
2025-08-12T17:53:40Z
0
0
null
[ "region:us" ]
null
2025-08-12T17:53:29Z
null
stakesquid/blockassist-bc-scaly_shrewd_stingray_1755020966
stakesquid
2025-08-12T17:53:00Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "scaly shrewd stingray", "arxiv:2504.07091", "region:us" ]
null
2025-08-12T17:52:55Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - scaly shrewd stingray --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
EurekaTian/qwen2p5_7b_openmath_3660_pos
EurekaTian
2025-08-12T17:52:47Z
0
0
null
[ "safetensors", "qwen2", "license:apache-2.0", "region:us" ]
null
2025-08-12T17:39:54Z
--- license: apache-2.0 ---
mveroe/Qwen2.5-1.5B_lightr1_4_1p0_0p0_1p0_sft
mveroe
2025-08-12T17:50:57Z
20
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-11T17:36:04Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
nightmedia/Jan-v1-4B-q8-hi-mlx
nightmedia
2025-08-12T17:48:34Z
0
0
mlx
[ "mlx", "safetensors", "qwen3", "text-generation", "conversational", "en", "base_model:janhq/Jan-v1-4B", "base_model:quantized:janhq/Jan-v1-4B", "license:apache-2.0", "8-bit", "region:us" ]
text-generation
2025-08-12T17:29:48Z
---
license: apache-2.0
language:
- en
base_model: janhq/Jan-v1-4B
pipeline_tag: text-generation
tags:
- mlx
library_name: mlx
---

# Jan-v1-4B-q8-hi-mlx

This model [Jan-v1-4B-q8-hi-mlx](https://huggingface.co/nightmedia/Jan-v1-4B-q8-hi-mlx) was converted to MLX format from [janhq/Jan-v1-4B](https://huggingface.co/janhq/Jan-v1-4B) using mlx-lm version **0.26.3**.

## Use with mlx

```bash
pip install mlx-lm
```

```python
from mlx_lm import load, generate

model, tokenizer = load("nightmedia/Jan-v1-4B-q8-hi-mlx")

prompt = "hello"

if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
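As a follow-up to the card's Python snippet: recent mlx-lm releases also install a command-line entry point, so the same generation can be run without writing any code (an assumption about the installed mlx-lm version, not stated in the card):

```bash
# One-shot generation from the terminal; downloads the model on first use.
mlx_lm.generate --model nightmedia/Jan-v1-4B-q8-hi-mlx --prompt "hello"
```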
koloni/blockassist-bc-deadly_graceful_stingray_1755019418
koloni
2025-08-12T17:48:23Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "deadly graceful stingray", "arxiv:2504.07091", "region:us" ]
null
2025-08-12T17:48:19Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - deadly graceful stingray --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
EurekaTian/qwen2p5_3b_mmlu_pos
EurekaTian
2025-08-12T17:46:17Z
0
0
null
[ "safetensors", "qwen2", "license:apache-2.0", "region:us" ]
null
2025-08-12T17:36:07Z
--- license: apache-2.0 ---
mveroe/Qwen2.5-1.5B_lightr1_3_1p0_0p0_1p0_sft
mveroe
2025-08-12T17:45:53Z
71
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-09T13:23:26Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
mveroe/Qwen2.5-1.5B_lightr1_2_1p0_0p0_1p0_sft
mveroe
2025-08-12T17:45:50Z
51
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-10T14:06:50Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
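The "How to Get Started" section of the card above is an empty template. A minimal sketch for the model in question (`mveroe/Qwen2.5-1.5B_lightr1_2_1p0_0p0_1p0_sft`), assuming a standard causal-LM checkpoint as its qwen2/text-generation tags suggest:

```python
# Minimal sketch filling the empty "How to Get Started" section, assuming a
# standard causal-LM checkpoint as the qwen2/text-generation tags suggest.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mveroe/Qwen2.5-1.5B_lightr1_2_1p0_0p0_1p0_sft"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Hello, world!", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```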
tushar0088/blockassist-bc-vocal_tenacious_prawn_1755020652
tushar0088
2025-08-12T17:45:17Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "vocal tenacious prawn", "arxiv:2504.07091", "region:us" ]
null
2025-08-12T17:45:10Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - vocal tenacious prawn --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
elsvastika/blockassist-bc-arctic_soaring_weasel_1755019034
elsvastika
2025-08-12T17:45:16Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "arctic soaring weasel", "arxiv:2504.07091", "region:us" ]
null
2025-08-12T17:45:08Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - arctic soaring weasel --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Ferdi3425/blockassist-bc-amphibious_deadly_otter_1755020601
Ferdi3425
2025-08-12T17:44:36Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "amphibious deadly otter", "arxiv:2504.07091", "region:us" ]
null
2025-08-12T17:44:09Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - amphibious deadly otter --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
motza0025/blockassist-bc-quiet_regal_raccoon_1755018943
motza0025
2025-08-12T17:44:10Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "quiet regal raccoon", "arxiv:2504.07091", "region:us" ]
null
2025-08-12T17:42:03Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - quiet regal raccoon --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Kbashiru/Mobile_BERT_on_jumia_dataset
Kbashiru
2025-08-12T17:44:01Z
0
0
transformers
[ "transformers", "safetensors", "mobilebert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-08-12T17:43:50Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
New-Clips-Uppal-Farm-Girl-Viral-Video-Link/FULL.VIDEO.Uppal.Farm.Girl.Viral.Video.Tutorial.Official
New-Clips-Uppal-Farm-Girl-Viral-Video-Link
2025-08-12T17:43:51Z
0
0
null
[ "region:us" ]
null
2025-08-12T17:43:41Z
<animated-image data-catalyst=""><a href="https://tinyurl.com/5ye5v3bc?leaked-viral-video" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
ggozzy/blockassist-bc-stubby_yapping_mandrill_1755020526
ggozzy
2025-08-12T17:43:25Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "stubby yapping mandrill", "arxiv:2504.07091", "region:us" ]
null
2025-08-12T17:43:13Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - stubby yapping mandrill --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
ozkurt7/oracle-qwen2-1.5b-merged-final
ozkurt7
2025-08-12T17:43:19Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "unsloth", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-12T17:42:01Z
--- library_name: transformers tags: - unsloth --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
nightmedia/Jan-v1-4B-dwq3-mlx
nightmedia
2025-08-12T17:40:13Z
0
0
mlx
[ "mlx", "safetensors", "qwen3", "text-generation", "conversational", "en", "base_model:janhq/Jan-v1-4B", "base_model:quantized:janhq/Jan-v1-4B", "license:apache-2.0", "3-bit", "region:us" ]
text-generation
2025-08-12T16:46:36Z
--- license: apache-2.0 language: - en base_model: janhq/Jan-v1-4B pipeline_tag: text-generation library_name: mlx tags: - mlx --- # Jan-v1-4B-dwq3-mlx This quant is too small to do any useful work and is provided for entertainment purposes only. This model [Jan-v1-4B-dwq3-mlx](https://huggingface.co/nightmedia/Jan-v1-4B-dwq3-mlx) was converted to MLX format from [janhq/Jan-v1-4B](https://huggingface.co/janhq/Jan-v1-4B) using mlx-lm version **0.26.3**. ## Use with mlx ```bash pip install mlx-lm ``` ```python from mlx_lm import load, generate model, tokenizer = load("nightmedia/Jan-v1-4B-dwq3-mlx") prompt = "hello" if tokenizer.chat_template is not None: messages = [{"role": "user", "content": prompt}] prompt = tokenizer.apply_chat_template( messages, add_generation_prompt=True ) response = generate(model, tokenizer, prompt=prompt, verbose=True) ```
Ferdi3425/blockassist-bc-amphibious_deadly_otter_1755020328
Ferdi3425
2025-08-12T17:39:39Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "amphibious deadly otter", "arxiv:2504.07091", "region:us" ]
null
2025-08-12T17:39:37Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - amphibious deadly otter --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
mradermacher/Huihui-InternVL3-78B-abliterated-GGUF
mradermacher
2025-08-12T17:38:49Z
0
0
transformers
[ "transformers", "gguf", "internvl", "custom_code", "abliterated", "uncensored", "multilingual", "base_model:huihui-ai/Huihui-InternVL3-78B-abliterated", "base_model:quantized:huihui-ai/Huihui-InternVL3-78B-abliterated", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-08-12T14:14:50Z
--- base_model: huihui-ai/Huihui-InternVL3-78B-abliterated language: - multilingual library_name: transformers license: apache-2.0 license_link: https://huggingface.co/Qwen/Qwen2.5-72B-Instruct/blob/main/LICENSE license_name: qwen mradermacher: readme_rev: 1 quantized_by: mradermacher tags: - internvl - custom_code - abliterated - uncensored --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> <!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: --> static quants of https://huggingface.co/huihui-ai/Huihui-InternVL3-78B-abliterated <!-- provided-files --> ***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Huihui-InternVL3-78B-abliterated-GGUF).*** weighted/imatrix quants are available at https://huggingface.co/mradermacher/Huihui-InternVL3-78B-abliterated-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Huihui-InternVL3-78B-abliterated-GGUF/resolve/main/Huihui-InternVL3-78B-abliterated.mmproj-Q8_0.gguf) | mmproj-Q8_0 | 6.2 | multi-modal supplement | | [GGUF](https://huggingface.co/mradermacher/Huihui-InternVL3-78B-abliterated-GGUF/resolve/main/Huihui-InternVL3-78B-abliterated.mmproj-f16.gguf) | mmproj-f16 | 11.5 | multi-modal supplement | | [GGUF](https://huggingface.co/mradermacher/Huihui-InternVL3-78B-abliterated-GGUF/resolve/main/Huihui-InternVL3-78B-abliterated.Q2_K.gguf) | Q2_K | 29.9 | | | [GGUF](https://huggingface.co/mradermacher/Huihui-InternVL3-78B-abliterated-GGUF/resolve/main/Huihui-InternVL3-78B-abliterated.Q3_K_S.gguf) | Q3_K_S | 34.6 | | | [GGUF](https://huggingface.co/mradermacher/Huihui-InternVL3-78B-abliterated-GGUF/resolve/main/Huihui-InternVL3-78B-abliterated.Q3_K_M.gguf) | Q3_K_M | 37.8 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Huihui-InternVL3-78B-abliterated-GGUF/resolve/main/Huihui-InternVL3-78B-abliterated.Q3_K_L.gguf) | Q3_K_L | 39.6 | | | [GGUF](https://huggingface.co/mradermacher/Huihui-InternVL3-78B-abliterated-GGUF/resolve/main/Huihui-InternVL3-78B-abliterated.IQ4_XS.gguf) | IQ4_XS | 40.3 | | | [GGUF](https://huggingface.co/mradermacher/Huihui-InternVL3-78B-abliterated-GGUF/resolve/main/Huihui-InternVL3-78B-abliterated.Q4_K_S.gguf) | Q4_K_S | 44.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Huihui-InternVL3-78B-abliterated-GGUF/resolve/main/Huihui-InternVL3-78B-abliterated.Q4_K_M.gguf) | Q4_K_M | 47.5 | fast, recommended | | [PART 1](https://huggingface.co/mradermacher/Huihui-InternVL3-78B-abliterated-GGUF/resolve/main/Huihui-InternVL3-78B-abliterated.Q5_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Huihui-InternVL3-78B-abliterated-GGUF/resolve/main/Huihui-InternVL3-78B-abliterated.Q5_K_S.gguf.part2of2) | Q5_K_S | 51.5 | | | [PART 1](https://huggingface.co/mradermacher/Huihui-InternVL3-78B-abliterated-GGUF/resolve/main/Huihui-InternVL3-78B-abliterated.Q5_K_M.gguf.part1of2) [PART 
2](https://huggingface.co/mradermacher/Huihui-InternVL3-78B-abliterated-GGUF/resolve/main/Huihui-InternVL3-78B-abliterated.Q5_K_M.gguf.part2of2) | Q5_K_M | 54.5 | | | [PART 1](https://huggingface.co/mradermacher/Huihui-InternVL3-78B-abliterated-GGUF/resolve/main/Huihui-InternVL3-78B-abliterated.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Huihui-InternVL3-78B-abliterated-GGUF/resolve/main/Huihui-InternVL3-78B-abliterated.Q6_K.gguf.part2of2) | Q6_K | 64.4 | very good quality | | [PART 1](https://huggingface.co/mradermacher/Huihui-InternVL3-78B-abliterated-GGUF/resolve/main/Huihui-InternVL3-78B-abliterated.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Huihui-InternVL3-78B-abliterated-GGUF/resolve/main/Huihui-InternVL3-78B-abliterated.Q8_0.gguf.part2of2) | Q8_0 | 77.4 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
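The Q5_K_S and larger quants in the table above ship as raw split parts; per the linked README guidance, joining them is plain byte concatenation. A minimal sketch in Python, assuming the two Q5_K_S part files sit in the working directory:

```python
# Minimal sketch: the split quants above (e.g. Q5_K_S part1of2/part2of2) are raw
# byte splits; joining them is plain concatenation. Filenames from the table.
import shutil

parts = [
    "Huihui-InternVL3-78B-abliterated.Q5_K_S.gguf.part1of2",
    "Huihui-InternVL3-78B-abliterated.Q5_K_S.gguf.part2of2",
]
with open("Huihui-InternVL3-78B-abliterated.Q5_K_S.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as src:
            shutil.copyfileobj(src, out)  # streams chunks, avoids loading ~50 GB into RAM
```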
coastalcph/Qwen2.5-7B-05t_gcd_sycophancy-05t_non_sycophant
coastalcph
2025-08-12T17:38:25Z
0
0
null
[ "safetensors", "qwen2", "region:us" ]
null
2025-08-12T17:33:56Z
# Combined Task Vector Model This model was created by combining task vectors from multiple fine-tuned models. ## Task Vector Computation ```python t_1 = TaskVector("Qwen/Qwen2.5-7B-Instruct", "coastalcph/Qwen2.5-7B-gcd_sycophancy") t_2 = TaskVector("Qwen/Qwen2.5-7B-Instruct", "coastalcph/Qwen2.5-7B-personality-non-sycophancy") t_combined = 0.5 * t_1 + 0.5 * t_2 new_model = t_combined.apply_to("Qwen/Qwen2.5-7B-Instruct", scaling_coef=1.0) ``` Models Used - Base Model: https://huggingface.co/Qwen/Qwen2.5-7B-Instruct - Fine-tuned Model 1: https://huggingface.co/coastalcph/Qwen2.5-7B-gcd_sycophancy - Fine-tuned Model 2: https://huggingface.co/coastalcph/Qwen2.5-7B-personality-non-sycophancy Technical Details - Creation Script Git Hash: 435fdd2a144e79c487d864db94b34a02894295b9 - Task Vector Method: Additive combination - Args: { "pretrained_model": "Qwen/Qwen2.5-7B-Instruct", "finetuned_model1": "coastalcph/Qwen2.5-7B-gcd_sycophancy", "finetuned_model2": "coastalcph/Qwen2.5-7B-personality-non-sycophancy", "finetuned_model3": null, "output_model_name": "coastalcph/Qwen2.5-7B-05t_gcd_sycophancy-05t_non_sycophant", "output_dir": "/projects/nlp/data/constanzam/weight-interp/task-vectors/math_non_sycophant_12Aug", "scaling_coef": 1.0, "apply_line_scaling_t1": false, "apply_line_scaling_t2": false, "apply_line_scaling_t3": false, "scale_t1": 0.5, "scale_t2": 0.5, "scale_t3": 0.5 }
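The `TaskVector` class used in the snippet above is not defined in the card. Below is a hypothetical reconstruction of the standard additive task-vector arithmetic (fine-tuned weights minus base weights), matching the call pattern shown; the actual class in the creation script (git hash above) may differ:

```python
# Hypothetical sketch, not the authors' code: task vector = theta_ft - theta_base,
# with scalar scaling, addition, and application back onto a base model.
import torch
from transformers import AutoModelForCausalLM

class TaskVector:
    def __init__(self, base_id=None, finetuned_id=None, vector=None):
        if vector is not None:
            self.vector = vector
            return
        base_sd = AutoModelForCausalLM.from_pretrained(base_id).state_dict()
        ft_sd = AutoModelForCausalLM.from_pretrained(finetuned_id).state_dict()
        with torch.no_grad():
            self.vector = {k: ft_sd[k] - base_sd[k] for k in base_sd}

    def __rmul__(self, coef):  # enables 0.5 * t_1
        return TaskVector(vector={k: coef * v for k, v in self.vector.items()})

    def __add__(self, other):  # enables t_1 + t_2
        return TaskVector(vector={k: v + other.vector[k] for k, v in self.vector.items()})

    def apply_to(self, base_id, scaling_coef=1.0):
        model = AutoModelForCausalLM.from_pretrained(base_id)
        with torch.no_grad():
            for name, param in model.named_parameters():
                if name in self.vector:
                    param.add_(scaling_coef * self.vector[name])
        return model
```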
mradermacher/GPT-OSS-30B-Preview-i1-GGUF
mradermacher
2025-08-12T17:37:49Z
0
1
transformers
[ "transformers", "gguf", "vllm", "unsloth", "mergekit", "gpt_oss", "en", "base_model:win10/GPT-OSS-30B-Preview", "base_model:quantized:win10/GPT-OSS-30B-Preview", "license:apache-2.0", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
2025-08-12T14:25:51Z
--- base_model: win10/GPT-OSS-30B-Preview language: - en library_name: transformers license: apache-2.0 mradermacher: readme_rev: 1 quantized_by: mradermacher tags: - vllm - unsloth - mergekit - gpt_oss --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> <!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K_M Q4_0 IQ3_XS Q4_1 IQ3_S --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: --> weighted/imatrix quants of https://huggingface.co/win10/GPT-OSS-30B-Preview <!-- provided-files --> ***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#GPT-OSS-30B-Preview-i1-GGUF).*** static quants are available at https://huggingface.co/mradermacher/GPT-OSS-30B-Preview-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/GPT-OSS-30B-Preview-i1-GGUF/resolve/main/GPT-OSS-30B-Preview.imatrix.gguf) | imatrix | 0.1 | imatrix file (for creating your own quants) | | [GGUF](https://huggingface.co/mradermacher/GPT-OSS-30B-Preview-i1-GGUF/resolve/main/GPT-OSS-30B-Preview.i1-IQ1_M.gguf) | i1-IQ1_M | 17.6 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/GPT-OSS-30B-Preview-i1-GGUF/resolve/main/GPT-OSS-30B-Preview.i1-IQ1_S.gguf) | i1-IQ1_S | 17.6 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/GPT-OSS-30B-Preview-i1-GGUF/resolve/main/GPT-OSS-30B-Preview.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 17.6 | | | [GGUF](https://huggingface.co/mradermacher/GPT-OSS-30B-Preview-i1-GGUF/resolve/main/GPT-OSS-30B-Preview.i1-IQ2_XS.gguf) | i1-IQ2_XS | 17.7 | | | [GGUF](https://huggingface.co/mradermacher/GPT-OSS-30B-Preview-i1-GGUF/resolve/main/GPT-OSS-30B-Preview.i1-Q3_K_S.gguf) | i1-Q3_K_S | 17.7 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/GPT-OSS-30B-Preview-i1-GGUF/resolve/main/GPT-OSS-30B-Preview.i1-IQ2_M.gguf) | i1-IQ2_M | 17.7 | | | [GGUF](https://huggingface.co/mradermacher/GPT-OSS-30B-Preview-i1-GGUF/resolve/main/GPT-OSS-30B-Preview.i1-IQ2_S.gguf) | i1-IQ2_S | 17.7 | | | [GGUF](https://huggingface.co/mradermacher/GPT-OSS-30B-Preview-i1-GGUF/resolve/main/GPT-OSS-30B-Preview.i1-IQ3_S.gguf) | i1-IQ3_S | 17.7 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/GPT-OSS-30B-Preview-i1-GGUF/resolve/main/GPT-OSS-30B-Preview.i1-IQ3_XS.gguf) | i1-IQ3_XS | 17.7 | | | [GGUF](https://huggingface.co/mradermacher/GPT-OSS-30B-Preview-i1-GGUF/resolve/main/GPT-OSS-30B-Preview.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 17.7 | lower quality | | [GGUF](https://huggingface.co/mradermacher/GPT-OSS-30B-Preview-i1-GGUF/resolve/main/GPT-OSS-30B-Preview.i1-Q2_K.gguf) | i1-Q2_K | 17.7 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/GPT-OSS-30B-Preview-i1-GGUF/resolve/main/GPT-OSS-30B-Preview.i1-IQ4_XS.gguf) | i1-IQ4_XS | 17.8 | | | [GGUF](https://huggingface.co/mradermacher/GPT-OSS-30B-Preview-i1-GGUF/resolve/main/GPT-OSS-30B-Preview.i1-Q2_K_S.gguf) | i1-Q2_K_S | 17.8 | very low quality | |
[GGUF](https://huggingface.co/mradermacher/GPT-OSS-30B-Preview-i1-GGUF/resolve/main/GPT-OSS-30B-Preview.i1-Q4_0.gguf) | i1-Q4_0 | 17.8 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/GPT-OSS-30B-Preview-i1-GGUF/resolve/main/GPT-OSS-30B-Preview.i1-IQ3_M.gguf) | i1-IQ3_M | 17.9 | | | [GGUF](https://huggingface.co/mradermacher/GPT-OSS-30B-Preview-i1-GGUF/resolve/main/GPT-OSS-30B-Preview.i1-Q3_K_M.gguf) | i1-Q3_K_M | 19.0 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/GPT-OSS-30B-Preview-i1-GGUF/resolve/main/GPT-OSS-30B-Preview.i1-Q3_K_L.gguf) | i1-Q3_K_L | 19.6 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/GPT-OSS-30B-Preview-i1-GGUF/resolve/main/GPT-OSS-30B-Preview.i1-Q4_1.gguf) | i1-Q4_1 | 19.7 | | | [GGUF](https://huggingface.co/mradermacher/GPT-OSS-30B-Preview-i1-GGUF/resolve/main/GPT-OSS-30B-Preview.i1-Q4_K_S.gguf) | i1-Q4_K_S | 21.6 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/GPT-OSS-30B-Preview-i1-GGUF/resolve/main/GPT-OSS-30B-Preview.i1-Q4_K_M.gguf) | i1-Q4_K_M | 23.3 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/GPT-OSS-30B-Preview-i1-GGUF/resolve/main/GPT-OSS-30B-Preview.i1-Q5_K_S.gguf) | i1-Q5_K_S | 23.4 | | | [GGUF](https://huggingface.co/mradermacher/GPT-OSS-30B-Preview-i1-GGUF/resolve/main/GPT-OSS-30B-Preview.i1-Q5_K_M.gguf) | i1-Q5_K_M | 24.9 | | | [GGUF](https://huggingface.co/mradermacher/GPT-OSS-30B-Preview-i1-GGUF/resolve/main/GPT-OSS-30B-Preview.i1-Q6_K.gguf) | i1-Q6_K | 32.8 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
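The first table row is the imatrix file itself, offered for creating your own quants. A hedged sketch of feeding it to llama.cpp's `llama-quantize` tool, assuming a local llama.cpp build and a full-precision GGUF conversion of the base model (the `.f16.gguf` filename is hypothetical, not a file in this repo):

```python
# Hedged sketch, assuming a local llama.cpp build: feed the downloaded imatrix
# to llama-quantize to roll your own quant. The .f16.gguf input filename is a
# hypothetical full-precision conversion of the base model.
import subprocess

subprocess.run(
    [
        "llama-quantize",
        "--imatrix", "GPT-OSS-30B-Preview.imatrix.gguf",
        "GPT-OSS-30B-Preview.f16.gguf",       # hypothetical full-precision input
        "GPT-OSS-30B-Preview.i1-IQ3_M.gguf",  # output file
        "IQ3_M",                              # target quant type
    ],
    check=True,
)
```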
Tiklup/results
Tiklup
2025-08-12T17:36:29Z
12
0
transformers
[ "transformers", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-07-30T16:55:10Z
--- library_name: transformers license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer metrics: - accuracy - precision - recall - f1 model-index: - name: results results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # results This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2861 - Accuracy: 0.9296 - Precision: 0.9269 - Recall: 0.9328 - F1: 0.9298 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 64 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:| | 0.2767 | 1.0 | 3125 | 0.2828 | 0.9207 | 0.9477 | 0.8905 | 0.9182 | | 0.1512 | 2.0 | 6250 | 0.2861 | 0.9296 | 0.9269 | 0.9328 | 0.9298 | ### Framework versions - Transformers 4.55.0 - Pytorch 2.6.0+cu124 - Datasets 4.0.0 - Tokenizers 0.21.4
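For reference, a minimal sketch of `TrainingArguments` matching the hyperparameters listed in the card above; the training dataset is unknown ("an unknown dataset"), so the `Trainer` wiring is omitted:

```python
# Sketch of TrainingArguments matching the hyperparameters listed above; the
# training dataset is unknown, so dataset loading and Trainer wiring are omitted.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="results",
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=64,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=2,
)
```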
gotutiyan/gector-bert-base-cased-5k
gotutiyan
2025-08-12T17:35:56Z
13
0
transformers
[ "transformers", "pytorch", "safetensors", "GECToR_gotutiyan", "grammatical error correction", "en", "endpoints_compatible", "region:us" ]
null
2023-08-20T03:38:12Z
--- language: en tags: - GECToR_gotutiyan - grammatical error correction --- For non-commercial purposes only. # gector sample This is an unofficial pretrained model of GECToR ([Omelianchuk+ 2020](https://aclanthology.org/2020.bea-1.16/)). ### How to use The code is available from https://github.com/gotutiyan/gector. CLI ```sh python predict.py --input <raw text file> --restore_dir gotutiyan/gector-bert-base-cased-5k --out <path to output file> ``` API ```py from transformers import AutoTokenizer from gector.modeling import GECToR from gector.predict import predict, load_verb_dict import torch model_id = 'gotutiyan/gector-bert-base-cased-5k' model = GECToR.from_pretrained(model_id) if torch.cuda.is_available(): model.cuda() tokenizer = AutoTokenizer.from_pretrained(model_id) encode, decode = load_verb_dict('data/verb-form-vocab.txt') srcs = [ 'This is a correct sentence.', 'This are a wrong sentences' ] corrected = predict( model, tokenizer, srcs, encode, decode, keep_confidence=0.0, min_error_prob=0.0, n_iteration=5, batch_size=2, ) print(corrected) ```
emily84/car-show-boards-for-next-car-show
emily84
2025-08-12T17:35:53Z
0
0
null
[ "region:us" ]
null
2025-08-12T17:35:37Z
Car Show Boards help your vehicle shine by giving it the platform it deserves. Make your setup look complete and professional. ✨ Order your custom board today. 👉 https://carshowboards.com/ #StandOutDisplay #CarShowEssentials #DisplayThatPops #AutoShowPresentation #ShowTimeStyle
technaxx/distilhubert-finetuned-gtzan
technaxx
2025-08-12T17:33:35Z
9
0
transformers
[ "transformers", "pytorch", "tensorboard", "safetensors", "hubert", "audio-classification", "generated_from_trainer", "dataset:marsyas/gtzan", "license:apache-2.0", "endpoints_compatible", "region:us" ]
audio-classification
2023-07-22T02:02:36Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - marsyas/gtzan metrics: - accuracy model-index: - name: distilhubert-finetuned-gtzan results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilhubert-finetuned-gtzan This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the GTZAN dataset. It achieves the following results on the evaluation set: - Loss: 0.5748 - Accuracy: 0.89 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 10 - eval_batch_size: 10 - seed: 40 - gradient_accumulation_steps: 6 - total_train_batch_size: 60 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 15 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.2685 | 1.0 | 15 | 1.2199 | 0.71 | | 1.1248 | 2.0 | 30 | 1.0805 | 0.75 | | 1.0651 | 3.0 | 45 | 0.9617 | 0.8 | | 0.9201 | 4.0 | 60 | 0.9439 | 0.76 | | 0.805 | 5.0 | 75 | 0.8118 | 0.84 | | 0.6815 | 6.0 | 90 | 0.7881 | 0.84 | | 0.6421 | 7.0 | 105 | 0.7476 | 0.81 | | 0.5956 | 8.0 | 120 | 0.6870 | 0.84 | | 0.4791 | 9.0 | 135 | 0.6403 | 0.88 | | 0.4411 | 10.0 | 150 | 0.6420 | 0.82 | | 0.3855 | 11.0 | 165 | 0.5990 | 0.89 | | 0.3592 | 12.0 | 180 | 0.5927 | 0.87 | | 0.3254 | 13.0 | 195 | 0.5891 | 0.87 | | 0.3478 | 14.0 | 210 | 0.5887 | 0.85 | | 0.2985 | 15.0 | 225 | 0.5748 | 0.89 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.0 - Datasets 2.1.0 - Tokenizers 0.13.3
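The card above gives no usage example; a minimal inference sketch, assuming the checkpoint loads with the standard `transformers` audio-classification pipeline (the `track.wav` filename is a hypothetical local file):

```python
# Minimal inference sketch, assuming the checkpoint works with the standard
# audio-classification pipeline; "track.wav" is a hypothetical local file.
from transformers import pipeline

classifier = pipeline("audio-classification", model="technaxx/distilhubert-finetuned-gtzan")
print(classifier("track.wav"))  # returns GTZAN genre labels with scores
```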
aleebaster/blockassist-bc-sly_eager_boar_1755018945
aleebaster
2025-08-12T17:32:26Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "sly eager boar", "arxiv:2504.07091", "region:us" ]
null
2025-08-12T17:32:19Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - sly eager boar --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
MJ92/AceGPT-v2-8B-Chat_finetuned_5000fr_2000ar
MJ92
2025-08-12T17:31:50Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-12T17:13:34Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
8man-crypto/blockassist-bc-insectivorous_bellowing_porpoise_1755018269
8man-crypto
2025-08-12T17:31:34Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "insectivorous bellowing porpoise", "arxiv:2504.07091", "region:us" ]
null
2025-08-12T17:31:03Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - insectivorous bellowing porpoise --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
oxford-llms/lora_10profiles_1k_respondents_model
oxford-llms
2025-08-12T17:31:16Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "trl", "sft", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2025-08-12T17:30:11Z
--- library_name: transformers tags: - trl - sft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
exala/db_fe2_10.1.1u
exala
2025-08-12T17:30:32Z
0
0
transformers
[ "transformers", "safetensors", "distilbert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-08-12T17:30:15Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Jack-Payne1/qwen_2.5_7b-phoenix_B1_random_seed2
Jack-Payne1
2025-08-12T17:30:24Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "conversational", "en", "base_model:unsloth/Qwen2.5-7B-Instruct", "base_model:finetune:unsloth/Qwen2.5-7B-Instruct", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-08-12T17:27:08Z
--- base_model: unsloth/Qwen2.5-7B-Instruct tags: - text-generation-inference - transformers - unsloth - qwen2 - trl - sft license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** Jack-Payne1 - **License:** apache-2.0 - **Finetuned from model :** unsloth/Qwen2.5-7B-Instruct This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
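A minimal inference sketch (not part of the original card), assuming the checkpoint loads with standard `transformers` APIs and that the Qwen2.5 chat template ships with the tokenizer:

```python
# Hedged sketch: chat generation with transformers.
# The repo id below is this model's Hub path; everything else is standard API.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Jack-Payne1/qwen_2.5_7b-phoenix_B1_random_seed2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Hello!"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=128)
# Decode only the newly generated tokens, not the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```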
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1755019605
IvanJAjebu
2025-08-12T17:28:14Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "thorny slender capybara", "arxiv:2504.07091", "region:us" ]
null
2025-08-12T17:27:39Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - thorny slender capybara --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
m-mulet/try2_qwen_2.5_7b-owl_student_2000_numbers
m-mulet
2025-08-12T17:27:11Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "qwen2", "trl", "en", "base_model:unsloth/Qwen2.5-7B-Instruct", "base_model:finetune:unsloth/Qwen2.5-7B-Instruct", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-08-12T17:27:01Z
--- base_model: unsloth/Qwen2.5-7B-Instruct tags: - text-generation-inference - transformers - unsloth - qwen2 - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** m-mulet - **License:** apache-2.0 - **Finetuned from model :** unsloth/Qwen2.5-7B-Instruct This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
hidayahlut/blockassist-bc-knobby_scavenging_wasp_1755019508
hidayahlut
2025-08-12T17:27:05Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "knobby scavenging wasp", "arxiv:2504.07091", "region:us" ]
null
2025-08-12T17:26:49Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - knobby scavenging wasp --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
ArtusDev/TheDrummer_Gemma-3-R1-4B-v1-EXL3
ArtusDev
2025-08-12T17:25:58Z
0
0
null
[ "exl3", "base_model:TheDrummer/Gemma-3-R1-4B-v1", "base_model:quantized:TheDrummer/Gemma-3-R1-4B-v1", "region:us" ]
null
2025-08-12T17:05:09Z
--- base_model: TheDrummer/Gemma-3-R1-4B-v1 base_model_relation: quantized quantized_by: ArtusDev tags: - exl3 --- ## EXL3 Quants of TheDrummer/Gemma-3-R1-4B-v1 EXL3 quants of [TheDrummer/Gemma-3-R1-4B-v1](https://huggingface.co/TheDrummer/Gemma-3-R1-4B-v1) using <a href="https://github.com/turboderp-org/exllamav3/">exllamav3</a> for quantization. ### Quants | Quant(Revision) | Bits per Weight | Head Bits | | -------- | ---------- | --------- | | [2.5_H6](https://huggingface.co/ArtusDev/TheDrummer_Gemma-3-R1-4B-v1-EXL3/tree/2.5bpw_H6) | 2.5 | 6 | | [3.0_H6](https://huggingface.co/ArtusDev/TheDrummer_Gemma-3-R1-4B-v1-EXL3/tree/3.0bpw_H6) | 3.0 | 6 | | [3.5_H6](https://huggingface.co/ArtusDev/TheDrummer_Gemma-3-R1-4B-v1-EXL3/tree/3.5bpw_H6) | 3.5 | 6 | | [4.0_H6](https://huggingface.co/ArtusDev/TheDrummer_Gemma-3-R1-4B-v1-EXL3/tree/4.0bpw_H6) | 4.0 | 6 | | [4.5_H6](https://huggingface.co/ArtusDev/TheDrummer_Gemma-3-R1-4B-v1-EXL3/tree/4.5bpw_H6) | 4.5 | 6 | | [5.0_H6](https://huggingface.co/ArtusDev/TheDrummer_Gemma-3-R1-4B-v1-EXL3/tree/5.0bpw_H6) | 5.0 | 6 | | [6.0_H6](https://huggingface.co/ArtusDev/TheDrummer_Gemma-3-R1-4B-v1-EXL3/tree/6.0bpw_H6) | 6.0 | 6 | | [8.0_H8](https://huggingface.co/ArtusDev/TheDrummer_Gemma-3-R1-4B-v1-EXL3/tree/8.0bpw_H8) | 8.0 | 8 | ### Downloading quants with huggingface-cli <details> <summary>Click to view download instructions</summary> Install huggingface-cli: ```bash pip install -U "huggingface_hub[cli]" ``` Download a quant by targeting the specific quant revision (branch): ```bash huggingface-cli download ArtusDev/TheDrummer_Gemma-3-R1-4B-v1-EXL3 --revision "5.0bpw_H6" --local-dir ./ ``` </details>
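An equivalent download sketch with the `huggingface_hub` Python API (not part of the original card; the revision string is one of the quant branches listed above, and the local directory name is illustrative):

```python
# Hedged alternative to the CLI above: fetch one quant branch programmatically.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="ArtusDev/TheDrummer_Gemma-3-R1-4B-v1-EXL3",
    revision="5.0bpw_H6",  # quant branch from the table above
    local_dir="./Gemma-3-R1-4B-v1-exl3-5.0bpw",  # illustrative path
)
```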
ggozzy/blockassist-bc-stubby_yapping_mandrill_1755019305
ggozzy
2025-08-12T17:23:08Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "stubby yapping mandrill", "arxiv:2504.07091", "region:us" ]
null
2025-08-12T17:22:54Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - stubby yapping mandrill --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
ozkurt7/oracle-qwen2-1.5b-merged
ozkurt7
2025-08-12T17:23:00Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "unsloth", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-12T17:21:26Z
--- library_name: transformers tags: - unsloth --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1755019294
IvanJAjebu
2025-08-12T17:22:40Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "thorny slender capybara", "arxiv:2504.07091", "region:us" ]
null
2025-08-12T17:22:32Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - thorny slender capybara --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
emily84/Featured-Customer-Car-Show-Displays
emily84
2025-08-12T17:22:38Z
0
0
null
[ "region:us" ]
null
2025-08-12T17:22:21Z
Nothing inspires better than real examples. Our customers bring style and personality to every display, and we’re proud to be a part of it. 👀 Browse their boards: https://showcarsign.com/customer-pics/ #CustomerFavorites #RealCarDisplays #ShowCarInspo #CarExhibit #BoardPerfection
Ferdi3425/blockassist-bc-amphibious_deadly_otter_1755019240
Ferdi3425
2025-08-12T17:22:04Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "amphibious deadly otter", "arxiv:2504.07091", "region:us" ]
null
2025-08-12T17:21:36Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - amphibious deadly otter --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
koloni/blockassist-bc-deadly_graceful_stingray_1755017713
koloni
2025-08-12T17:20:23Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "deadly graceful stingray", "arxiv:2504.07091", "region:us" ]
null
2025-08-12T17:20:14Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - deadly graceful stingray --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
ggozzy/blockassist-bc-stubby_yapping_mandrill_1755018999
ggozzy
2025-08-12T17:18:11Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "stubby yapping mandrill", "arxiv:2504.07091", "region:us" ]
null
2025-08-12T17:17:48Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - stubby yapping mandrill --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
nightmedia/Jan-v1-4B-q4-mlx
nightmedia
2025-08-12T17:17:33Z
0
0
mlx
[ "mlx", "safetensors", "qwen3", "text-generation", "conversational", "en", "base_model:janhq/Jan-v1-4B", "base_model:quantized:janhq/Jan-v1-4B", "license:apache-2.0", "4-bit", "region:us" ]
text-generation
2025-08-12T16:53:26Z
--- license: apache-2.0 language: - en base_model: janhq/Jan-v1-4B pipeline_tag: text-generation tags: - mlx library_name: mlx --- # Jan-v1-4B-q4-mlx This model [Jan-v1-4B-q4-mlx](https://huggingface.co/nightmedia/Jan-v1-4B-q4-mlx) was converted to MLX format from [janhq/Jan-v1-4B](https://huggingface.co/janhq/Jan-v1-4B) using mlx-lm version **0.26.3**. ## Use with mlx ```bash pip install mlx-lm ``` ```python from mlx_lm import load, generate model, tokenizer = load("nightmedia/Jan-v1-4B-q4-mlx") prompt = "hello" if tokenizer.chat_template is not None: messages = [{"role": "user", "content": prompt}] prompt = tokenizer.apply_chat_template( messages, add_generation_prompt=True ) response = generate(model, tokenizer, prompt=prompt, verbose=True) ```
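For quick one-off generation, mlx-lm also ships a command-line entry point; a minimal sketch (not part of the original card; flag names follow recent `mlx-lm` releases):

```bash
# Hedged one-liner: same model as above, no Python script needed.
mlx_lm.generate --model nightmedia/Jan-v1-4B-q4-mlx --prompt "hello"
```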
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1755018976
IvanJAjebu
2025-08-12T17:17:28Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "thorny slender capybara", "arxiv:2504.07091", "region:us" ]
null
2025-08-12T17:17:14Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - thorny slender capybara --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
xinnn32/blockassist-bc-meek_winged_caterpillar_1755018960
xinnn32
2025-08-12T17:16:43Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "meek winged caterpillar", "arxiv:2504.07091", "region:us" ]
null
2025-08-12T17:16:34Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - meek winged caterpillar --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Ferdi3425/blockassist-bc-amphibious_deadly_otter_1755018938
Ferdi3425
2025-08-12T17:16:30Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "amphibious deadly otter", "arxiv:2504.07091", "region:us" ]
null
2025-08-12T17:16:24Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - amphibious deadly otter --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
MariChristmass/realismfoto
MariChristmass
2025-08-12T17:15:08Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-08-12T17:14:34Z
--- license: apache-2.0 ---
mang3dd/blockassist-bc-tangled_slithering_alligator_1755017205
mang3dd
2025-08-12T17:14:44Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "tangled slithering alligator", "arxiv:2504.07091", "region:us" ]
null
2025-08-12T17:14:41Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - tangled slithering alligator --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
silentember/Lantern_RNcAt8
silentember
2025-08-12T17:13:53Z
0
0
null
[ "safetensors", "any-to-any", "omega", "omegalabs", "bittensor", "agi", "license:mit", "region:us" ]
any-to-any
2025-08-12T17:11:57Z
--- license: mit tags: - any-to-any - omega - omegalabs - bittensor - agi --- This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet. Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
kojeklollipop/blockassist-bc-spotted_amphibious_stork_1755017154
kojeklollipop
2025-08-12T17:13:33Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "spotted amphibious stork", "arxiv:2504.07091", "region:us" ]
null
2025-08-12T17:13:28Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - spotted amphibious stork --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
ggozzy/blockassist-bc-stubby_yapping_mandrill_1755018694
ggozzy
2025-08-12T17:12:48Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "stubby yapping mandrill", "arxiv:2504.07091", "region:us" ]
null
2025-08-12T17:12:35Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - stubby yapping mandrill --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1755018688
IvanJAjebu
2025-08-12T17:12:37Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "thorny slender capybara", "arxiv:2504.07091", "region:us" ]
null
2025-08-12T17:12:26Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - thorny slender capybara --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
MAGICYA0/blockassist-bc-silky_lively_badger_1755015598
MAGICYA0
2025-08-12T17:12:07Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "silky lively badger", "arxiv:2504.07091", "region:us" ]
null
2025-08-12T17:10:24Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - silky lively badger --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
martijn75/raw_text_mt_6_layers_8_att_heads_5_seqlen
martijn75
2025-08-12T17:11:39Z
0
0
transformers
[ "transformers", "pytorch", "tensorboard", "safetensors", "bert", "fill-mask", "generated_from_trainer", "base_model:google-bert/bert-base-multilingual-cased", "base_model:finetune:google-bert/bert-base-multilingual-cased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2025-08-12T15:23:16Z
--- library_name: transformers license: apache-2.0 base_model: bert-base-multilingual-cased tags: - generated_from_trainer model-index: - name: raw_text_mt_6_layers_8_att_heads_5_seqlen results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # raw_text_mt_6_layers_8_att_heads_5_seqlen This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 5.7877 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 80 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:------:|:---------------:| | 6.6819 | 1.0 | 1763 | 6.6228 | | 6.5342 | 2.0 | 3526 | 6.4697 | | 6.4824 | 3.0 | 5289 | 6.3637 | | 6.3486 | 4.0 | 7052 | 6.3028 | | 6.3046 | 5.0 | 8815 | 6.1954 | | 6.2604 | 6.0 | 10578 | 6.1841 | | 6.2007 | 7.0 | 12341 | 6.1310 | | 6.1549 | 8.0 | 14104 | 6.1087 | | 6.1911 | 9.0 | 15867 | 6.1289 | | 6.1076 | 10.0 | 17630 | 6.1051 | | 6.1136 | 11.0 | 19393 | 6.1091 | | 6.0997 | 12.0 | 21156 | 6.0700 | | 6.0784 | 13.0 | 22919 | 6.0625 | | 6.0872 | 14.0 | 24682 | 6.0392 | | 6.0506 | 15.0 | 26445 | 6.0162 | | 6.013 | 16.0 | 28208 | 6.0294 | | 6.0141 | 17.0 | 29971 | 6.0706 | | 6.018 | 18.0 | 31734 | 5.9934 | | 5.9841 | 19.0 | 33497 | 6.0145 | | 6.0142 | 20.0 | 35260 | 5.9885 | | 5.9718 | 21.0 | 37023 | 5.9988 | | 5.9434 | 22.0 | 38786 | 5.9775 | | 5.9411 | 23.0 | 40549 | 5.9749 | | 5.9141 | 24.0 | 42312 | 5.9615 | | 5.8794 | 25.0 | 44075 | 5.9750 | | 5.9217 | 26.0 | 45838 | 5.9707 | | 5.9231 | 27.0 | 47601 | 5.9566 | | 5.8793 | 28.0 | 49364 | 5.9408 | | 5.9119 | 29.0 | 51127 | 5.9601 | | 5.921 | 30.0 | 52890 | 5.9518 | | 5.8938 | 31.0 | 54653 | 5.9631 | | 5.884 | 32.0 | 56416 | 5.8982 | | 5.8552 | 33.0 | 58179 | 5.9468 | | 5.8749 | 34.0 | 59942 | 5.9418 | | 5.8397 | 35.0 | 61705 | 5.9253 | | 5.8201 | 36.0 | 63468 | 5.8915 | | 5.827 | 37.0 | 65231 | 5.9026 | | 5.8383 | 38.0 | 66994 | 5.8856 | | 5.7991 | 39.0 | 68757 | 5.8614 | | 5.8471 | 40.0 | 70520 | 5.8725 | | 5.7929 | 41.0 | 72283 | 5.8702 | | 5.8204 | 42.0 | 74046 | 5.9373 | | 5.8216 | 43.0 | 75809 | 5.8751 | | 5.8465 | 44.0 | 77572 | 5.8491 | | 5.7925 | 45.0 | 79335 | 5.8499 | | 5.8042 | 46.0 | 81098 | 5.8854 | | 5.7622 | 47.0 | 82861 | 5.8180 | | 5.7714 | 48.0 | 84624 | 5.8579 | | 5.7699 | 49.0 | 86387 | 5.8526 | | 5.7642 | 50.0 | 88150 | 5.8045 | | 5.753 | 51.0 | 89913 | 5.8486 | | 5.7585 | 52.0 | 91676 | 5.8642 | | 5.7432 | 53.0 | 93439 | 5.8314 | | 5.725 | 54.0 | 95202 | 5.8363 | | 5.7363 | 55.0 | 96965 | 5.7895 | | 5.7489 | 56.0 | 98728 | 5.8092 | | 5.722 | 57.0 | 100491 | 5.7901 | | 5.7316 | 58.0 | 102254 | 5.8211 | | 5.683 | 59.0 | 104017 | 5.8091 | | 5.7252 | 60.0 | 105780 | 5.8195 | | 5.7462 | 61.0 | 107543 | 5.7688 | | 5.6803 | 62.0 | 109306 | 5.8213 | | 5.6983 | 63.0 | 111069 | 5.7816 | | 5.7121 | 64.0 | 112832 | 5.8174 | | 5.6948 | 65.0 | 114595 | 5.8113 | | 5.6371 | 66.0 | 116358 | 5.8555 | | 5.6859 | 67.0 | 118121 | 5.7701 | | 5.6958 | 68.0 | 119884 | 5.7698 | | 5.6804 | 69.0 | 121647 | 5.8245 | | 5.6719 | 70.0 | 123410 | 5.7793 | | 5.6385 | 71.0 | 125173 | 5.7877 | ### Framework versions - Transformers 4.52.4 - Pytorch 2.7.1+cu118 - Datasets 3.6.0 - Tokenizers 0.21.1
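A usage sketch (not part of the original card), assuming the checkpoint works with the standard `transformers` fill-mask pipeline; the model name suggests a very short training sequence length, which may limit output quality:

```python
# Hedged sketch: masked-token prediction with the fine-tuned checkpoint.
from transformers import pipeline

fill = pipeline(
    "fill-mask",
    model="martijn75/raw_text_mt_6_layers_8_att_heads_5_seqlen",
)
# bert-base-multilingual-cased uses the [MASK] token.
for candidate in fill("Paris is the [MASK] of France."):
    print(candidate["token_str"], round(candidate["score"], 3))
```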
nightmedia/Jan-v1-4B-dwq5-mlx
nightmedia
2025-08-12T17:10:20Z
0
0
mlx
[ "mlx", "safetensors", "qwen3", "text-generation", "conversational", "en", "base_model:janhq/Jan-v1-4B", "base_model:quantized:janhq/Jan-v1-4B", "license:apache-2.0", "5-bit", "region:us" ]
text-generation
2025-08-12T16:33:45Z
--- license: apache-2.0 language: - en base_model: janhq/Jan-v1-4B pipeline_tag: text-generation library_name: mlx tags: - mlx --- # Jan-v1-4B-dwq5-mlx This model [Jan-v1-4B-dwq5-mlx](https://huggingface.co/nightmedia/Jan-v1-4B-dwq5-mlx) was converted to MLX format from [janhq/Jan-v1-4B](https://huggingface.co/janhq/Jan-v1-4B) using mlx-lm version **0.26.3**. ## Use with mlx ```bash pip install mlx-lm ``` ```python from mlx_lm import load, generate model, tokenizer = load("nightmedia/Jan-v1-4B-dwq5-mlx") prompt = "hello" if tokenizer.chat_template is not None: messages = [{"role": "user", "content": prompt}] prompt = tokenizer.apply_chat_template( messages, add_generation_prompt=True ) response = generate(model, tokenizer, prompt=prompt, verbose=True) ```
aleebaster/blockassist-bc-sly_eager_boar_1755017319
aleebaster
2025-08-12T17:10:18Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "sly eager boar", "arxiv:2504.07091", "region:us" ]
null
2025-08-12T17:10:09Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - sly eager boar --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Kbashiru/Tiny_Naija_BERT_on_jumia_dataset
Kbashiru
2025-08-12T17:08:24Z
0
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-08-12T17:08:14Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Ferdi3425/blockassist-bc-amphibious_deadly_otter_1755018437
Ferdi3425
2025-08-12T17:08:06Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "amphibious deadly otter", "arxiv:2504.07091", "region:us" ]
null
2025-08-12T17:08:03Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - amphibious deadly otter --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
ggozzy/blockassist-bc-stubby_yapping_mandrill_1755018389
ggozzy
2025-08-12T17:07:53Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "stubby yapping mandrill", "arxiv:2504.07091", "region:us" ]
null
2025-08-12T17:07:34Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - stubby yapping mandrill --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
jeongseokoh/Llama3.1-8B-LatentRAG-batch_40st-og
jeongseokoh
2025-08-12T17:07:36Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-12T17:00:49Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Elizavr/blockassist-bc-reclusive_shaggy_bee_1755018324
Elizavr
2025-08-12T17:07:25Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "reclusive shaggy bee", "arxiv:2504.07091", "region:us" ]
null
2025-08-12T17:06:06Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - reclusive shaggy bee --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
xinnn32/blockassist-bc-meek_winged_caterpillar_1755018374
xinnn32
2025-08-12T17:07:01Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "meek winged caterpillar", "arxiv:2504.07091", "region:us" ]
null
2025-08-12T17:06:54Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - meek winged caterpillar --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
sudoping01/whisereer-v2
sudoping01
2025-08-12T17:06:11Z
0
0
peft
[ "peft", "safetensors", "generated_from_trainer", "base_model:openai/whisper-large-v2", "base_model:adapter:openai/whisper-large-v2", "license:apache-2.0", "region:us" ]
null
2025-08-12T17:06:06Z
--- library_name: peft license: apache-2.0 base_model: openai/whisper-large-v2 tags: - generated_from_trainer model-index: - name: whisereer-v2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # whisereer-v2 This model is a fine-tuned version of [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.7024 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 50 - num_epochs: 4 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 1.4577 | 1.0 | 853 | 1.7271 | | 1.2127 | 2.0 | 1706 | 1.6592 | | 1.024 | 3.0 | 2559 | 1.6661 | | 0.7389 | 3.9959 | 3408 | 1.7024 | ### Framework versions - PEFT 0.15.2 - Transformers 4.51.3 - Pytorch 2.5.1+cu124 - Datasets 3.5.1 - Tokenizers 0.21.1
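A loading sketch (not part of the original card), assuming this repo holds a PEFT/LoRA adapter for the base model named above:

```python
# Hedged sketch: attach the fine-tuned adapter to the base Whisper model.
from transformers import WhisperForConditionalGeneration, WhisperProcessor
from peft import PeftModel

base = WhisperForConditionalGeneration.from_pretrained("openai/whisper-large-v2")
model = PeftModel.from_pretrained(base, "sudoping01/whisereer-v2")
processor = WhisperProcessor.from_pretrained("openai/whisper-large-v2")
# model.generate(...) can now be used for transcription as with the base model.
```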
mlx-community/Jan-v1-4B-6bit
mlx-community
2025-08-12T17:04:50Z
0
0
mlx
[ "mlx", "safetensors", "qwen3", "text-generation", "conversational", "en", "base_model:janhq/Jan-v1-4B", "base_model:quantized:janhq/Jan-v1-4B", "license:apache-2.0", "6-bit", "region:us" ]
text-generation
2025-08-12T17:02:07Z
--- license: apache-2.0 language: - en base_model: janhq/Jan-v1-4B pipeline_tag: text-generation library_name: mlx tags: - mlx --- # mlx-community/Jan-v1-4B-6bit This model [mlx-community/Jan-v1-4B-6bit](https://huggingface.co/mlx-community/Jan-v1-4B-6bit) was converted to MLX format from [janhq/Jan-v1-4B](https://huggingface.co/janhq/Jan-v1-4B) using mlx-lm version **0.26.2**. ## Use with mlx ```bash pip install mlx-lm ``` ```python from mlx_lm import load, generate model, tokenizer = load("mlx-community/Jan-v1-4B-6bit") prompt = "hello" if tokenizer.chat_template is not None: messages = [{"role": "user", "content": prompt}] prompt = tokenizer.apply_chat_template( messages, add_generation_prompt=True ) response = generate(model, tokenizer, prompt=prompt, verbose=True) ```
silentember/Lantern_6VcEsx
silentember
2025-08-12T17:03:13Z
0
0
null
[ "safetensors", "any-to-any", "omega", "omegalabs", "bittensor", "agi", "license:mit", "region:us" ]
any-to-any
2025-08-12T17:01:09Z
--- license: mit tags: - any-to-any - omega - omegalabs - bittensor - agi --- This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet. Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
BootesVoid/cme6nf15x09v06aq1x8d8pate_cme8qwk4o0281rts8ysd9roch
BootesVoid
2025-08-12T17:03:11Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-08-12T17:03:10Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: LATINASEXY --- # Cme6Nf15X09V06Aq1X8D8Pate_Cme8Qwk4O0281Rts8Ysd9Roch <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `LATINASEXY` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "LATINASEXY", "lora_weights": "https://huggingface.co/BootesVoid/cme6nf15x09v06aq1x8d8pate_cme8qwk4o0281rts8ysd9roch/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('BootesVoid/cme6nf15x09v06aq1x8d8pate_cme8qwk4o0281rts8ysd9roch', weight_name='lora.safetensors') image = pipeline('LATINASEXY').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Training details - Steps: 2000 - Learning rate: 0.0004 - LoRA rank: 16 ## Contribute your own examples You can use the [community tab](https://huggingface.co/BootesVoid/cme6nf15x09v06aq1x8d8pate_cme8qwk4o0281rts8ysd9roch/discussions) to add images that show off what you’ve made with this LoRA.
ggozzy/blockassist-bc-stubby_yapping_mandrill_1755018084
ggozzy
2025-08-12T17:02:49Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "stubby yapping mandrill", "arxiv:2504.07091", "region:us" ]
null
2025-08-12T17:02:32Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - stubby yapping mandrill --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
andr0m4da/blockassist-bc-grazing_hunting_boar_1755018079
andr0m4da
2025-08-12T17:02:40Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "grazing hunting boar", "arxiv:2504.07091", "region:us" ]
null
2025-08-12T17:02:34Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - grazing hunting boar --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
mlx-community/Jan-v1-4B-5bit
mlx-community
2025-08-12T17:02:27Z
0
0
mlx
[ "mlx", "safetensors", "qwen3", "text-generation", "conversational", "en", "base_model:janhq/Jan-v1-4B", "base_model:quantized:janhq/Jan-v1-4B", "license:apache-2.0", "5-bit", "region:us" ]
text-generation
2025-08-12T17:00:50Z
--- license: apache-2.0 language: - en base_model: janhq/Jan-v1-4B pipeline_tag: text-generation library_name: mlx tags: - mlx --- # mlx-community/Jan-v1-4B-5bit This model [mlx-community/Jan-v1-4B-5bit](https://huggingface.co/mlx-community/Jan-v1-4B-5bit) was converted to MLX format from [janhq/Jan-v1-4B](https://huggingface.co/janhq/Jan-v1-4B) using mlx-lm version **0.26.3**. ## Use with mlx ```bash pip install mlx-lm ``` ```python from mlx_lm import load, generate model, tokenizer = load("mlx-community/Jan-v1-4B-5bit") prompt = "hello" if tokenizer.chat_template is not None: messages = [{"role": "user", "content": prompt}] prompt = tokenizer.apply_chat_template( messages, add_generation_prompt=True ) response = generate(model, tokenizer, prompt=prompt, verbose=True) ```
giovannidemuri/llama8b-er-afg-v90-seed2-hx
giovannidemuri
2025-08-12T17:02:27Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "generated_from_trainer", "conversational", "base_model:meta-llama/Llama-3.1-8B", "base_model:finetune:meta-llama/Llama-3.1-8B", "license:llama3.1", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-12T14:35:41Z
--- library_name: transformers license: llama3.1 base_model: meta-llama/Llama-3.1-8B tags: - generated_from_trainer model-index: - name: llama8b-er-afg-v90-seed2-hx results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # llama8b-er-afg-v90-seed2-hx This model is a fine-tuned version of [meta-llama/Llama-3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 2 - gradient_accumulation_steps: 8 - total_train_batch_size: 128 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.03 - num_epochs: 3 ### Training results ### Framework versions - Transformers 4.51.3 - Pytorch 2.7.0+cu128 - Datasets 4.0.0 - Tokenizers 0.21.0
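A usage sketch (not part of the original card), assuming the checkpoint works with the standard `transformers` text-generation pipeline:

```python
# Hedged sketch: plain-prompt generation with the high-level pipeline API.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="giovannidemuri/llama8b-er-afg-v90-seed2-hx",
    device_map="auto",
)
print(generator("Hello!", max_new_tokens=64)[0]["generated_text"])
```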
theprint/TiTan-Gemma3-1B-GGUF
theprint
2025-08-12T17:02:12Z
0
0
gguf
[ "gguf", "quantized", "llama.cpp", "titan-gemma3-1b", "text-generation", "en", "base_model:theprint/TiTan-Gemma3-1B", "base_model:quantized:theprint/TiTan-Gemma3-1B", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2025-08-12T16:57:16Z
--- base_model: - theprint/TiTan-Gemma3-1B library_name: gguf pipeline_tag: text-generation language: en license: apache-2.0 tags: - gguf - quantized - llama.cpp - titan-gemma3-1b model_type: llama quantized_by: theprint --- # TiTan-Gemma3-1B - GGUF Quantized Quantized GGUF versions of [TiTan-Gemma3-1B](https://huggingface.co/theprint/TiTan-Gemma3-1B) for use with llama.cpp and other GGUF-compatible inference engines. ## Original Model - **Base model:** [google/gemma-3-1b-it](https://huggingface.co/google/gemma-3-1b-it) - **Fine-tuned model:** [theprint/TiTan-Gemma3-1B](https://huggingface.co/theprint/TiTan-Gemma3-1B) - **Quantized by:** theprint ## Available Quantizations - `TiTan-Gemma3-1B-f16.gguf` (2489.6 MB) - 16-bit float (original precision, largest file) - `TiTan-Gemma3-1B-q3_k_m.gguf` (850.9 MB) - 3-bit quantization (medium quality) - `TiTan-Gemma3-1B-q4_k_m.gguf` (966.7 MB) - 4-bit quantization (medium, recommended for most use cases) - `TiTan-Gemma3-1B-q5_k_m.gguf` (1027.9 MB) - 5-bit quantization (medium, good quality) - `TiTan-Gemma3-1B-q6_k.gguf` (1270.9 MB) - 6-bit quantization (high quality) - `TiTan-Gemma3-1B-q8_0.gguf` (1325.8 MB) - 8-bit quantization (very high quality) ## Usage ### With llama.cpp ```bash # Download recommended quantization wget https://huggingface.co/theprint/TiTan-Gemma3-1B-GGUF/resolve/main/TiTan-Gemma3-1B-q4_k_m.gguf # Run inference ./llama.cpp/main -m TiTan-Gemma3-1B-q4_k_m.gguf \ -p "Your prompt here" \ -n 256 \ --temp 0.7 \ --top-p 0.9 ``` ### With other GGUF tools These files are compatible with: - [llama.cpp](https://github.com/ggerganov/llama.cpp) - [Ollama](https://ollama.ai/) (import as custom model) - [KoboldCpp](https://github.com/LostRuins/koboldcpp) - [text-generation-webui](https://github.com/oobabooga/text-generation-webui) ## Quantization Info **Recommended:** `q4_k_m` provides the best balance of size, speed, and quality for most use cases. **For maximum quality:** Use `q8_0` or `f16` **For maximum speed/smallest size:** Use `q3_k_m` (the smallest quantization provided here) ## License apache-2.0 ## Citation ```bibtex @misc{titan_gemma3_1b_gguf, title={TiTan-Gemma3-1B GGUF Quantized Models}, author={theprint}, year={2025}, publisher={Hugging Face}, url={https://huggingface.co/theprint/TiTan-Gemma3-1B-GGUF} } ```
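An import sketch for Ollama (not part of the original card; `titan-gemma3` is an arbitrary local tag, and the file name matches the recommended quant listed above):

```bash
# Hedged sketch: register the downloaded q4_k_m file as a local Ollama model.
cat > Modelfile <<'EOF'
FROM ./TiTan-Gemma3-1B-q4_k_m.gguf
EOF
ollama create titan-gemma3 -f Modelfile
ollama run titan-gemma3 "Your prompt here"
```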