Dataset schema (column name, type, and observed range or cardinality):

| Column | Type | Range / Values |
|:--------------|:-----------------------|:-----------------------------------------------|
| modelId | string | length 5 – 139 |
| author | string | length 2 – 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 – 2025-09-12 12:31:00 |
| downloads | int64 | 0 – 223M |
| likes | int64 | 0 – 11.7k |
| library_name | string (categorical) | 555 distinct values |
| tags | list | length 1 – 4.05k |
| pipeline_tag | string (categorical) | 55 distinct values |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 – 2025-09-12 12:28:53 |
| card | string | length 11 – 1.01M |
mradermacher/UltraPatriMerge-12B-i1-GGUF
mradermacher
2025-09-12T09:49:15Z
0
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "en", "base_model:pot99rta/UltraPatriMerge-12B", "base_model:quantized:pot99rta/UltraPatriMerge-12B", "endpoints_compatible", "region:us", "imatrix" ]
null
2025-09-12T08:08:15Z
--- base_model: pot99rta/UltraPatriMerge-12B language: - en library_name: transformers mradermacher: readme_rev: 1 quantized_by: mradermacher tags: - mergekit - merge --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> <!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K_M Q4_0 IQ3_XS Q4_1 IQ3_S --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: --> weighted/imatrix quants of https://huggingface.co/pot99rta/UltraPatriMerge-12B <!-- provided-files --> ***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#UltraPatriMerge-12B-i1-GGUF).*** static quants are available at https://huggingface.co/mradermacher/UltraPatriMerge-12B-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable to similar-sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/UltraPatriMerge-12B-i1-GGUF/resolve/main/UltraPatriMerge-12B.imatrix.gguf) | imatrix | 0.1 | imatrix file (for creating your own quants) | | [GGUF](https://huggingface.co/mradermacher/UltraPatriMerge-12B-i1-GGUF/resolve/main/UltraPatriMerge-12B.i1-IQ1_S.gguf) | i1-IQ1_S | 3.1 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/UltraPatriMerge-12B-i1-GGUF/resolve/main/UltraPatriMerge-12B.i1-IQ1_M.gguf) | i1-IQ1_M | 3.3 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/UltraPatriMerge-12B-i1-GGUF/resolve/main/UltraPatriMerge-12B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 3.7 | | | [GGUF](https://huggingface.co/mradermacher/UltraPatriMerge-12B-i1-GGUF/resolve/main/UltraPatriMerge-12B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 4.0 | | | [GGUF](https://huggingface.co/mradermacher/UltraPatriMerge-12B-i1-GGUF/resolve/main/UltraPatriMerge-12B.i1-IQ2_S.gguf) | i1-IQ2_S | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/UltraPatriMerge-12B-i1-GGUF/resolve/main/UltraPatriMerge-12B.i1-IQ2_M.gguf) | i1-IQ2_M | 4.5 | | | [GGUF](https://huggingface.co/mradermacher/UltraPatriMerge-12B-i1-GGUF/resolve/main/UltraPatriMerge-12B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 4.6 | very low quality | | [GGUF](https://huggingface.co/mradermacher/UltraPatriMerge-12B-i1-GGUF/resolve/main/UltraPatriMerge-12B.i1-Q2_K.gguf) | i1-Q2_K | 4.9 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/UltraPatriMerge-12B-i1-GGUF/resolve/main/UltraPatriMerge-12B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 5.0 | lower quality | | [GGUF](https://huggingface.co/mradermacher/UltraPatriMerge-12B-i1-GGUF/resolve/main/UltraPatriMerge-12B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/UltraPatriMerge-12B-i1-GGUF/resolve/main/UltraPatriMerge-12B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 5.6 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/UltraPatriMerge-12B-i1-GGUF/resolve/main/UltraPatriMerge-12B.i1-IQ3_S.gguf) | i1-IQ3_S | 5.7 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/UltraPatriMerge-12B-i1-GGUF/resolve/main/UltraPatriMerge-12B.i1-IQ3_M.gguf) | i1-IQ3_M | 5.8 | | | 
[GGUF](https://huggingface.co/mradermacher/UltraPatriMerge-12B-i1-GGUF/resolve/main/UltraPatriMerge-12B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 6.2 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/UltraPatriMerge-12B-i1-GGUF/resolve/main/UltraPatriMerge-12B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 6.7 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/UltraPatriMerge-12B-i1-GGUF/resolve/main/UltraPatriMerge-12B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 6.8 | | | [GGUF](https://huggingface.co/mradermacher/UltraPatriMerge-12B-i1-GGUF/resolve/main/UltraPatriMerge-12B.i1-Q4_0.gguf) | i1-Q4_0 | 7.2 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/UltraPatriMerge-12B-i1-GGUF/resolve/main/UltraPatriMerge-12B.i1-IQ4_NL.gguf) | i1-IQ4_NL | 7.2 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/UltraPatriMerge-12B-i1-GGUF/resolve/main/UltraPatriMerge-12B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 7.2 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/UltraPatriMerge-12B-i1-GGUF/resolve/main/UltraPatriMerge-12B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 7.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/UltraPatriMerge-12B-i1-GGUF/resolve/main/UltraPatriMerge-12B.i1-Q4_1.gguf) | i1-Q4_1 | 7.9 | | | [GGUF](https://huggingface.co/mradermacher/UltraPatriMerge-12B-i1-GGUF/resolve/main/UltraPatriMerge-12B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 8.6 | | | [GGUF](https://huggingface.co/mradermacher/UltraPatriMerge-12B-i1-GGUF/resolve/main/UltraPatriMerge-12B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 8.8 | | | [GGUF](https://huggingface.co/mradermacher/UltraPatriMerge-12B-i1-GGUF/resolve/main/UltraPatriMerge-12B.i1-Q6_K.gguf) | i1-Q6_K | 10.2 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
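The table above lists each quant's exact filename. As a minimal sketch of fetching and running one of them from Python, assuming the llama-cpp-python bindings (the card itself only references llama.cpp CLI workflows):

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Q4_K_S is the "optimal size/speed/quality" pick from the table above.
path = hf_hub_download(
    repo_id="mradermacher/UltraPatriMerge-12B-i1-GGUF",
    filename="UltraPatriMerge-12B.i1-Q4_K_S.gguf",
)

llm = Llama(model_path=path, n_ctx=4096)
out = llm("Write one sentence about model merging.", max_tokens=64)
print(out["choices"][0]["text"])
```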
5456es/bees_prune_Llama-3.2-1B-Instruct_prune_0.7-sigmoid
5456es
2025-09-12T09:48:23Z
35
0
null
[ "safetensors", "llama", "dpo", "preference-learning", "bees", "pruned", "license:apache-2.0", "region:us" ]
null
2025-09-07T11:25:11Z
--- license: apache-2.0 base_model: Llama-3.2-1B-Instruct tags: - dpo - preference-learning - bees - pruned --- # bees_prune_Llama-3.2-1B-Instruct_prune_0.7-sigmoid This model is a DPO (Direct Preference Optimization) fine-tuned version of Llama-3.2-1B-Instruct using the bees method. ## Model Details - **Base Model**: Llama-3.2-1B-Instruct - **Training Method**: bees - **Pruning Ratio**: unknown - **Training Date**: 2025-09-12 ## Training Configuration This model was trained using Direct Preference Optimization (DPO) with the following characteristics: - Method: bees - Pruning applied during training - Fine-tuned on preference data ## Usage ```python from transformers import AutoTokenizer, AutoModelForCausalLM model_name = "5456es/bees_prune_Llama-3.2-1B-Instruct_prune_0.7-sigmoid" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForCausalLM.from_pretrained(model_name) # Example usage prompt = "Your prompt here" inputs = tokenizer(prompt, return_tensors="pt") outputs = model.generate(**inputs, max_length=100) print(tokenizer.decode(outputs[0], skip_special_tokens=True)) ``` ## Training Data This model was trained on preference data using the DPO algorithm. ## Limitations This model inherits the limitations of its base model and may have additional limitations due to the pruning process. ## Citation If you use this model, please cite the original DPO paper and the base model.
5456es/cluster_prune_Llama-3.2-3B-Instruct_prune_0.3-sigmoid
5456es
2025-09-12T09:47:55Z
24
0
null
[ "safetensors", "llama", "dpo", "preference-learning", "cluster", "pruned", "license:apache-2.0", "region:us" ]
null
2025-09-08T03:45:20Z
--- license: apache-2.0 base_model: Llama-3.2-3B-Instruct tags: - dpo - preference-learning - cluster - pruned --- # cluster_prune_Llama-3.2-3B-Instruct_prune_0.3-sigmoid This model is a DPO (Direct Preference Optimization) fine-tuned version of Llama-3.2-3B-Instruct using the cluster method. ## Model Details - **Base Model**: Llama-3.2-3B-Instruct - **Training Method**: cluster - **Pruning Ratio**: unknown - **Training Date**: 2025-09-12 ## Training Configuration This model was trained using Direct Preference Optimization (DPO) with the following characteristics: - Method: cluster - Pruning applied during training - Fine-tuned on preference data ## Usage ```python from transformers import AutoTokenizer, AutoModelForCausalLM model_name = "5456es/cluster_prune_Llama-3.2-3B-Instruct_prune_0.3-sigmoid" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForCausalLM.from_pretrained(model_name) # Example usage prompt = "Your prompt here" inputs = tokenizer(prompt, return_tensors="pt") outputs = model.generate(**inputs, max_length=100) print(tokenizer.decode(outputs[0], skip_special_tokens=True)) ``` ## Training Data This model was trained on preference data using the DPO algorithm. ## Limitations This model inherits the limitations of its base model and may have additional limitations due to the pruning process. ## Citation If you use this model, please cite the original DPO paper and the base model.
maroon14/payment-related-seq-cls
maroon14
2025-09-12T09:47:52Z
0
0
peft
[ "peft", "safetensors", "base_model:adapter:prajjwal1/bert-tiny", "lora", "transformers", "text-classification", "en", "arxiv:1910.09700", "base_model:prajjwal1/bert-tiny", "license:apache-2.0", "region:us" ]
text-classification
2025-09-10T11:29:12Z
--- base_model: prajjwal1/bert-tiny library_name: peft tags: - base_model:adapter:prajjwal1/bert-tiny - lora - transformers license: apache-2.0 language: - en pipeline_tag: text-classification --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [tonible14012002] - **Model type:** BERT-Tiny - **Language(s) (NLP):** English ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.17.1
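Since the getting-started section above is still a placeholder, here is a sketch of loading the LoRA adapter onto its base model, assuming a standard PEFT setup; the example sentence and the (default) two-label head are guesses, as the card does not document the label set:

```python
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

base_id = "prajjwal1/bert-tiny"
adapter_id = "maroon14/payment-related-seq-cls"

tokenizer = AutoTokenizer.from_pretrained(base_id)
# num_labels defaults to 2; the true label set is not documented in the card.
base_model = AutoModelForSequenceClassification.from_pretrained(base_id)
model = PeftModel.from_pretrained(base_model, adapter_id)

inputs = tokenizer("Your invoice #4821 is now overdue.", return_tensors="pt")
print(model(**inputs).logits)
```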
AodenT/progression_1.pt
AodenT
2025-09-12T09:47:32Z
0
0
null
[ "safetensors", "custom-dalta", "region:us" ]
null
2025-09-12T09:34:14Z
# Checkpoint uploaded from progression_1.pt Repository: `AodenT/progression_1.pt` This repo contains weights only (plus optional optimizer/scheduler files). Load the weights by integrating them with your local `Model` class; a sketch follows below.
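A minimal loading sketch under stated assumptions: the repo holds a single safetensors file (the filename below is illustrative, so check the Files tab for the real one), and `Model` is your own local class, as the README suggests:

```python
from huggingface_hub import hf_hub_download
from safetensors.torch import load_file

from my_project import Model  # hypothetical: your local model class

# "model.safetensors" is a guess; use the actual filename from the repo.
path = hf_hub_download("AodenT/progression_1.pt", filename="model.safetensors")
state_dict = load_file(path)

model = Model()
# strict=False tolerates key mismatches between the checkpoint and your class.
model.load_state_dict(state_dict, strict=False)
model.eval()
```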
5456es/random_prune_Llama-3.2-3B-Instruct_prune_0.7-sigmoid
5456es
2025-09-12T09:47:20Z
25
0
null
[ "safetensors", "llama", "dpo", "preference-learning", "random", "pruned", "license:apache-2.0", "region:us" ]
null
2025-09-09T04:27:40Z
--- license: apache-2.0 base_model: Llama-3.2-3B-Instruct tags: - dpo - preference-learning - random - pruned --- # random_prune_Llama-3.2-3B-Instruct_prune_0.7-sigmoid This model is a DPO (Direct Preference Optimization) fine-tuned version of Llama-3.2-3B-Instruct using the random method. ## Model Details - **Base Model**: Llama-3.2-3B-Instruct - **Training Method**: random - **Pruning Ratio**: unknown - **Training Date**: 2025-09-12 ## Training Configuration This model was trained using Direct Preference Optimization (DPO) with the following characteristics: - Method: random - Pruning applied during training - Fine-tuned on preference data ## Usage ```python from transformers import AutoTokenizer, AutoModelForCausalLM model_name = "5456es/random_prune_Llama-3.2-3B-Instruct_prune_0.7-sigmoid" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForCausalLM.from_pretrained(model_name) # Example usage prompt = "Your prompt here" inputs = tokenizer(prompt, return_tensors="pt") outputs = model.generate(**inputs, max_length=100) print(tokenizer.decode(outputs[0], skip_special_tokens=True)) ``` ## Training Data This model was trained on preference data using the DPO algorithm. ## Limitations This model inherits the limitations of its base model and may have additional limitations due to the pruning process. ## Citation If you use this model, please cite the original DPO paper and the base model.
5456es/cluster_prune_Qwen2.5-1.5B-Instruct_prune_0.5-sigmoid
5456es
2025-09-12T09:46:38Z
29
0
null
[ "safetensors", "qwen2", "dpo", "preference-learning", "cluster", "pruned", "license:apache-2.0", "region:us" ]
null
2025-09-07T05:26:15Z
--- license: apache-2.0 base_model: Qwen2.5-1.5B-Instruct tags: - dpo - preference-learning - cluster - pruned --- # cluster_prune_Qwen2.5-1.5B-Instruct_prune_0.5-sigmoid This model is a DPO (Direct Preference Optimization) fine-tuned version of Qwen2.5-1.5B-Instruct using the cluster method. ## Model Details - **Base Model**: Qwen2.5-1.5B-Instruct - **Training Method**: cluster - **Pruning Ratio**: unknown - **Training Date**: 2025-09-12 ## Training Configuration This model was trained using Direct Preference Optimization (DPO) with the following characteristics: - Method: cluster - Pruning applied during training - Fine-tuned on preference data ## Usage ```python from transformers import AutoTokenizer, AutoModelForCausalLM model_name = "5456es/cluster_prune_Qwen2.5-1.5B-Instruct_prune_0.5-sigmoid" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForCausalLM.from_pretrained(model_name) # Example usage prompt = "Your prompt here" inputs = tokenizer(prompt, return_tensors="pt") outputs = model.generate(**inputs, max_length=100) print(tokenizer.decode(outputs[0], skip_special_tokens=True)) ``` ## Training Data This model was trained on preference data using the DPO algorithm. ## Limitations This model inherits the limitations of its base model and may have additional limitations due to the pruning process. ## Citation If you use this model, please cite the original DPO paper and the base model.
Sripriya16/t5-small-opus-books-en-fr
Sripriya16
2025-09-12T09:46:31Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:google-t5/t5-small", "base_model:finetune:google-t5/t5-small", "license:apache-2.0", "text-generation-inference", "endpoints_compatible", "region:us" ]
null
2025-09-12T08:25:06Z
--- library_name: transformers license: apache-2.0 base_model: google-t5/t5-small tags: - generated_from_trainer metrics: - bleu model-index: - name: t5-small-opus-books-en-fr results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-small-opus-books-en-fr This model is a fine-tuned version of [google-t5/t5-small](https://huggingface.co/google-t5/t5-small) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.6042 - Bleu: 6.1861 - Gen Len: 18.3956 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 2 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len | |:-------------:|:-----:|:-----:|:---------------:|:------:|:-------:| | 1.8592 | 1.0 | 6355 | 1.6281 | 5.9994 | 18.4066 | | 1.8116 | 2.0 | 12710 | 1.6042 | 6.1861 | 18.3956 | ### Framework versions - Transformers 4.56.1 - Pytorch 2.8.0+cu126 - Datasets 4.0.0 - Tokenizers 0.22.0
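The card omits a usage snippet; a short inference sketch follows. The "translate English to French:" task prefix is the standard T5 convention and an assumption about how this checkpoint was trained:

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "Sripriya16/t5-small-opus-books-en-fr"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

text = "translate English to French: The book is on the table."
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```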
DennisS1/BSER
DennisS1
2025-09-12T09:46:30Z
0
0
diffusers
[ "diffusers", "text-to-image", "lora", "template:diffusion-lora", "base_model:tencent/HunyuanImage-2.1", "base_model:adapter:tencent/HunyuanImage-2.1", "region:us" ]
text-to-image
2025-09-12T09:43:40Z
--- tags: - text-to-image - lora - diffusers - template:diffusion-lora widget: - output: url: images/Screen Shot 2025-09-12 at 7.36.49 pm.png text: Screenshot base_model: tencent/HunyuanImage-2.1 instance_prompt: null --- # BSER <Gallery /> ## Download model The weights can be [downloaded](/DennisS1/BSER/tree/main) from the Files & versions tab.
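For scripted downloads, a hedged one-liner with huggingface_hub; the .safetensors filename is hypothetical, so take the real one from the Files & versions tab:

```python
from huggingface_hub import hf_hub_download

# Filename is illustrative; list the repo files to find the actual LoRA weights.
lora_path = hf_hub_download("DennisS1/BSER", filename="BSER.safetensors")
print(lora_path)
```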
5456es/bees_prune_Llama-3.2-3B-Instruct_prune_0.3-sigmoid
5456es
2025-09-12T09:46:06Z
28
0
null
[ "safetensors", "llama", "dpo", "preference-learning", "bees", "pruned", "license:apache-2.0", "region:us" ]
null
2025-09-08T05:13:34Z
--- license: apache-2.0 base_model: Llama-3.2-3B-Instruct tags: - dpo - preference-learning - bees - pruned --- # bees_prune_Llama-3.2-3B-Instruct_prune_0.3-sigmoid This model is a DPO (Direct Preference Optimization) fine-tuned version of Llama-3.2-3B-Instruct using the bees method. ## Model Details - **Base Model**: Llama-3.2-3B-Instruct - **Training Method**: bees - **Pruning Ratio**: unknown - **Training Date**: 2025-09-12 ## Training Configuration This model was trained using Direct Preference Optimization (DPO) with the following characteristics: - Method: bees - Pruning applied during training - Fine-tuned on preference data ## Usage ```python from transformers import AutoTokenizer, AutoModelForCausalLM model_name = "5456es/bees_prune_Llama-3.2-3B-Instruct_prune_0.3-sigmoid" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForCausalLM.from_pretrained(model_name) # Example usage prompt = "Your prompt here" inputs = tokenizer(prompt, return_tensors="pt") outputs = model.generate(**inputs, max_length=100) print(tokenizer.decode(outputs[0], skip_special_tokens=True)) ``` ## Training Data This model was trained on preference data using the DPO algorithm. ## Limitations This model inherits the limitations of its base model and may have additional limitations due to the pruning process. ## Citation If you use this model, please cite the original DPO paper and the base model.
5456es/bees_prune_Llama-3.2-1B-Instruct_prune_0.5-sigmoid
5456es
2025-09-12T09:45:28Z
31
0
null
[ "safetensors", "llama", "dpo", "preference-learning", "bees", "pruned", "license:apache-2.0", "region:us" ]
null
2025-09-07T11:22:32Z
--- license: apache-2.0 base_model: Llama-3.2-1B-Instruct tags: - dpo - preference-learning - bees - pruned --- # bees_prune_Llama-3.2-1B-Instruct_prune_0.5-sigmoid This model is a DPO (Direct Preference Optimization) fine-tuned version of Llama-3.2-1B-Instruct using the bees method. ## Model Details - **Base Model**: Llama-3.2-1B-Instruct - **Training Method**: bees - **Pruning Ratio**: unknown - **Training Date**: 2025-09-12 ## Training Configuration This model was trained using Direct Preference Optimization (DPO) with the following characteristics: - Method: bees - Pruning applied during training - Fine-tuned on preference data ## Usage ```python from transformers import AutoTokenizer, AutoModelForCausalLM model_name = "5456es/bees_prune_Llama-3.2-1B-Instruct_prune_0.5-sigmoid" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForCausalLM.from_pretrained(model_name) # Example usage prompt = "Your prompt here" inputs = tokenizer(prompt, return_tensors="pt") outputs = model.generate(**inputs, max_length=100) print(tokenizer.decode(outputs[0], skip_special_tokens=True)) ``` ## Training Data This model was trained on preference data using the DPO algorithm. ## Limitations This model inherits the limitations of its base model and may have additional limitations due to the pruning process. ## Citation If you use this model, please cite the original DPO paper and the base model.
nayanakto/all-MiniLM-L6-v2-Q8_0-GGUF
nayanakto
2025-09-12T09:45:05Z
0
0
sentence-transformers
[ "sentence-transformers", "gguf", "feature-extraction", "sentence-similarity", "transformers", "llama-cpp", "gguf-my-repo", "en", "dataset:s2orc", "dataset:flax-sentence-embeddings/stackexchange_xml", "dataset:ms_marco", "dataset:gooaq", "dataset:yahoo_answers_topics", "dataset:code_search_net", "dataset:search_qa", "dataset:eli5", "dataset:snli", "dataset:multi_nli", "dataset:wikihow", "dataset:natural_questions", "dataset:trivia_qa", "dataset:embedding-data/sentence-compression", "dataset:embedding-data/flickr30k-captions", "dataset:embedding-data/altlex", "dataset:embedding-data/simple-wiki", "dataset:embedding-data/QQP", "dataset:embedding-data/SPECTER", "dataset:embedding-data/PAQ_pairs", "dataset:embedding-data/WikiAnswers", "base_model:sentence-transformers/all-MiniLM-L6-v2", "base_model:quantized:sentence-transformers/all-MiniLM-L6-v2", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
sentence-similarity
2025-09-12T09:45:03Z
--- language: en license: apache-2.0 library_name: sentence-transformers tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers - llama-cpp - gguf-my-repo datasets: - s2orc - flax-sentence-embeddings/stackexchange_xml - ms_marco - gooaq - yahoo_answers_topics - code_search_net - search_qa - eli5 - snli - multi_nli - wikihow - natural_questions - trivia_qa - embedding-data/sentence-compression - embedding-data/flickr30k-captions - embedding-data/altlex - embedding-data/simple-wiki - embedding-data/QQP - embedding-data/SPECTER - embedding-data/PAQ_pairs - embedding-data/WikiAnswers pipeline_tag: sentence-similarity base_model: sentence-transformers/all-MiniLM-L6-v2 --- # nayanakto/all-MiniLM-L6-v2-Q8_0-GGUF This model was converted to GGUF format from [`sentence-transformers/all-MiniLM-L6-v2`](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo nayanakto/all-MiniLM-L6-v2-Q8_0-GGUF --hf-file all-minilm-l6-v2-q8_0.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo nayanakto/all-MiniLM-L6-v2-Q8_0-GGUF --hf-file all-minilm-l6-v2-q8_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo nayanakto/all-MiniLM-L6-v2-Q8_0-GGUF --hf-file all-minilm-l6-v2-q8_0.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo nayanakto/all-MiniLM-L6-v2-Q8_0-GGUF --hf-file all-minilm-l6-v2-q8_0.gguf -c 2048 ```
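One caveat worth flagging: the base checkpoint is a sentence-embedding model, so the text-completion prompts above will not produce meaningful generations. A sketch of embedding extraction instead, assuming the llama-cpp-python bindings (the card only shows the CLI):

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

path = hf_hub_download(
    repo_id="nayanakto/all-MiniLM-L6-v2-Q8_0-GGUF",
    filename="all-minilm-l6-v2-q8_0.gguf",
)

# embedding=True runs llama.cpp in embedding mode, which this BERT-style model needs.
llm = Llama(model_path=path, embedding=True)
vector = llm.embed("Sentence embeddings map text to fixed-size vectors.")
print(len(vector))
```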
5456es/random_prune_Llama-3.2-1B-Instruct_prune_0.5-sigmoid
5456es
2025-09-12T09:44:56Z
26
0
null
[ "safetensors", "llama", "dpo", "preference-learning", "random", "pruned", "license:apache-2.0", "region:us" ]
null
2025-09-09T04:24:06Z
--- license: apache-2.0 base_model: Llama-3.2-1B-Instruct tags: - dpo - preference-learning - random - pruned --- # random_prune_Llama-3.2-1B-Instruct_prune_0.5-sigmoid This model is a DPO (Direct Preference Optimization) fine-tuned version of Llama-3.2-1B-Instruct using the random method. ## Model Details - **Base Model**: Llama-3.2-1B-Instruct - **Training Method**: random - **Pruning Ratio**: unknown - **Training Date**: 2025-09-12 ## Training Configuration This model was trained using Direct Preference Optimization (DPO) with the following characteristics: - Method: random - Pruning applied during training - Fine-tuned on preference data ## Usage ```python from transformers import AutoTokenizer, AutoModelForCausalLM model_name = "5456es/random_prune_Llama-3.2-1B-Instruct_prune_0.5-sigmoid" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForCausalLM.from_pretrained(model_name) # Example usage prompt = "Your prompt here" inputs = tokenizer(prompt, return_tensors="pt") outputs = model.generate(**inputs, max_length=100) print(tokenizer.decode(outputs[0], skip_special_tokens=True)) ``` ## Training Data This model was trained on preference data using the DPO algorithm. ## Limitations This model inherits the limitations of its base model and may have additional limitations due to the pruning process. ## Citation If you use this model, please cite the original DPO paper and the base model.
5456es/bees_prune_Qwen2.5-1.5B-Instruct_prune_0.3-sigmoid
5456es
2025-09-12T09:44:28Z
36
0
null
[ "safetensors", "qwen2", "dpo", "preference-learning", "bees", "pruned", "license:apache-2.0", "region:us" ]
null
2025-09-08T03:44:21Z
--- license: apache-2.0 base_model: Qwen2.5-1.5B-Instruct tags: - dpo - preference-learning - bees - pruned --- # bees_prune_Qwen2.5-1.5B-Instruct_prune_0.3-sigmoid This model is a DPO (Direct Preference Optimization) fine-tuned version of Qwen2.5-1.5B-Instruct using the bees method. ## Model Details - **Base Model**: Qwen2.5-1.5B-Instruct - **Training Method**: bees - **Pruning Ratio**: unknown - **Training Date**: 2025-09-12 ## Training Configuration This model was trained using Direct Preference Optimization (DPO) with the following characteristics: - Method: bees - Pruning applied during training - Fine-tuned on preference data ## Usage ```python from transformers import AutoTokenizer, AutoModelForCausalLM model_name = "5456es/bees_prune_Qwen2.5-1.5B-Instruct_prune_0.3-sigmoid" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForCausalLM.from_pretrained(model_name) # Example usage prompt = "Your prompt here" inputs = tokenizer(prompt, return_tensors="pt") outputs = model.generate(**inputs, max_length=100) print(tokenizer.decode(outputs[0], skip_special_tokens=True)) ``` ## Training Data This model was trained on preference data using the DPO algorithm. ## Limitations This model inherits the limitations of its base model and may have additional limitations due to the pruning process. ## Citation If you use this model, please cite the original DPO paper and the base model.
stonermay/blockassist-bc-diving_lightfooted_caterpillar_1757670054
stonermay
2025-09-12T09:42:17Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "diving lightfooted caterpillar", "arxiv:2504.07091", "region:us" ]
null
2025-09-12T09:41:55Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - diving lightfooted caterpillar --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
cglez/gpt2-ohsumed
cglez
2025-09-12T09:41:26Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "gpt2", "text-generation", "en", "arxiv:1910.09700", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-12T09:35:39Z
--- library_name: transformers language: en license: mit datasets: [] tags: [] --- # Model Card for <Model> A pretrained GPT2 using <Dataset>. ## Model Details ### Model Description A pretrained GPT2 using <Dataset>. - **Developed by:** [Cesar Gonzalez-Gutierrez](https://ceguel.es) - **Funded by:** [ERC](https://erc.europa.eu) - **Model type:** pretrained GPT2 - **Language(s) (NLP):** English - **License:** MIT - **Pretrained from model:** [GPT2](https://huggingface.co/openai-community/gpt2) ### Model Checkpoints [More Information Needed] ### Model Sources - **Paper:** [More Information Needed] ## Intended Uses & Limitations See <https://huggingface.co/openai-community/gpt2#intended-uses--limitations>. ### Loading Checkpoints [More Information Needed] ## Training Details ### Training Data [More Information Needed] #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** fp16 - **Batch size:** 8 - **Gradient accumulation steps:** 12 ## Environmental Impact - **Hardware Type:** NVIDIA A100 PCIE 40GB - **Hours used:** [More Information Needed] - **Cluster Provider:** [Artemisa](https://artemisa.ific.uv.es/web/) - **Compute Region:** EU - **Carbon Emitted:** [More Information Needed] <!-- Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). --> ## Citation **BibTeX:** [More Information Needed]
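The "Loading Checkpoints" section is still marked as needing information; in the meantime, a plain loading sketch (the commented-out revision argument shows where a specific checkpoint would go if any are published; the revision name is purely illustrative):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "cglez/gpt2-ohsumed"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
# model = AutoModelForCausalLM.from_pretrained(model_id, revision="checkpoint-1000")

prompt = "Treatment of acute myocardial infarction"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40, pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```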
cglez/gpt2-dapt-ohsumed
cglez
2025-09-12T09:41:21Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "gpt2", "text-generation", "en", "arxiv:1910.09700", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-12T09:35:45Z
--- library_name: transformers language: en license: mit datasets: [] tags: [] --- # Model Card for <Model> A pretrained GPT2 using <Dataset>. ## Model Details ### Model Description A pretrained GPT2 using <Dataset>. - **Developed by:** [Cesar Gonzalez-Gutierrez](https://ceguel.es) - **Funded by:** [ERC](https://erc.europa.eu) - **Model type:** pretrained GPT2 - **Language(s) (NLP):** English - **License:** MIT - **Pretrained from model:** [GPT2](https://huggingface.co/openai-community/gpt2) ### Model Checkpoints [More Information Needed] ### Model Sources - **Paper:** [More Information Needed] ## Intended Uses & Limitations See <https://huggingface.co/openai-community/gpt2#intended-uses--limitations>. ### Loading Checkpoints [More Information Needed] ## Training Details ### Training Data [More Information Needed] #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** fp16 - **Batch size:** 8 - **Gradient accumulation steps:** 12 ## Environmental Impact - **Hardware Type:** NVIDIA A100 PCIE 40GB - **Hours used:** [More Information Needed] - **Cluster Provider:** [Artemisa](https://artemisa.ific.uv.es/web/) - **Compute Region:** EU - **Carbon Emitted:** [More Information Needed] <!-- Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). --> ## Citation **BibTeX:** [More Information Needed]
iamzac/Qwen3-0.6B-Gensyn-Swarm-unseen_opaque_porpoise
iamzac
2025-09-12T09:36:42Z
35
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "rl-swarm", "genrl-swarm", "grpo", "gensyn", "I am unseen_opaque_porpoise", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-13T04:41:50Z
--- library_name: transformers tags: - rl-swarm - genrl-swarm - grpo - gensyn - I am unseen_opaque_porpoise --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
trongg/cryoutloud
trongg
2025-09-12T09:36:31Z
11
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "trl", "sft", "conversational", "custom_code", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-11T05:26:10Z
--- library_name: transformers tags: - trl - sft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
trkbt10/ksdk-gptoss-20b-ft
trkbt10
2025-09-12T09:35:46Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "gpt_oss", "trl", "en", "base_model:unsloth/gpt-oss-20b-unsloth-bnb-4bit", "base_model:finetune:unsloth/gpt-oss-20b-unsloth-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-09-12T09:35:37Z
--- base_model: unsloth/gpt-oss-20b-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - gpt_oss - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** trkbt10 - **License:** apache-2.0 - **Finetuned from model:** unsloth/gpt-oss-20b-unsloth-bnb-4bit This gpt_oss model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
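A loading sketch under the assumption that the upload is a merged checkpoint rather than a bare adapter (the card does not say which); device_map="auto" additionally assumes accelerate is installed:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "trkbt10/ksdk-gptoss-20b-ft"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "Introduce yourself in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```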
miyagawaorj/business-news-generator
miyagawaorj
2025-09-12T09:34:28Z
12
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "generated_from_trainer", "base_model:HuggingFaceTB/SmolLM-135M", "base_model:finetune:HuggingFaceTB/SmolLM-135M", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-07-08T13:38:32Z
--- library_name: transformers license: apache-2.0 base_model: HuggingFaceTB/SmolLM-135M tags: - generated_from_trainer model-index: - name: business-news-generator results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # business-news-generator This model is a fine-tuned version of [HuggingFaceTB/SmolLM-135M](https://huggingface.co/HuggingFaceTB/SmolLM-135M) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 3.2278 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 3.1446 | 0.32 | 200 | 3.3099 | | 2.8324 | 0.64 | 400 | 3.2142 | | 2.663 | 0.96 | 600 | 3.0995 | | 1.694 | 1.28 | 800 | 3.2399 | | 1.5127 | 1.6 | 1000 | 3.2239 | | 1.4611 | 1.92 | 1200 | 3.2278 | ### Framework versions - Transformers 4.53.0 - Pytorch 2.7.1+cu118 - Datasets 4.0.0 - Tokenizers 0.21.2
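A quick smoke test with the high-level pipeline API; the headline-style prompt is an assumption, since the card does not document a prompt format:

```python
from transformers import pipeline

generator = pipeline("text-generation", model="miyagawaorj/business-news-generator")
result = generator("Shares of", max_new_tokens=60, do_sample=True, temperature=0.8)
print(result[0]["generated_text"])
```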
anmol44/gpt2-medquad-finetuned
anmol44
2025-09-12T09:33:58Z
0
0
transformers
[ "transformers", "safetensors", "gpt2", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-12T09:32:52Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
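The getting-started section above is empty; given the repo name, a MedQuAD-style question makes a plausible smoke test, though the "Question: ... Answer:" prompt format is an assumption:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "anmol44/gpt2-medquad-finetuned"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Question: What are the symptoms of glaucoma? Answer:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=60, pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```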
fulljourney/FLUX-v1
fulljourney
2025-09-12T09:32:32Z
0
0
diffusers
[ "diffusers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "diffusers:FluxPipeline", "region:us" ]
text-to-image
2025-09-12T09:30:12Z
--- library_name: diffusers --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
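The card above leaves its get-started section as [More Information Needed], but the row's tags identify a `diffusers:FluxPipeline` text-to-image checkpoint, so a minimal sketch would look like the following; the repo id and prompt are placeholders, since the card names neither.

```python
# Hypothetical sketch: "author/model-id" stands in for the unnamed repo.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained("author/model-id", torch_dtype=torch.bfloat16)
pipe.to("cuda")

image = pipe(
    "a photo of a red bicycle leaning against a brick wall",
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("output.png")
```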
5456es/random_prune_Llama-3.2-3B-Instruct_prune_0.0-sigmoid
5456es
2025-09-12T09:32:24Z
15
0
null
[ "safetensors", "llama", "dpo", "preference-learning", "random", "pruned", "license:apache-2.0", "region:us" ]
null
2025-09-10T03:23:38Z
--- license: apache-2.0 base_model: Llama-3.2-3B-Instruct tags: - dpo - preference-learning - random - pruned --- # random_prune_Llama-3.2-3B-Instruct_prune_0.0-sigmoid This model is a DPO (Direct Preference Optimization) fine-tuned version of Llama-3.2-3B-Instruct using the random method. ## Model Details - **Base Model**: Llama-3.2-3B-Instruct - **Training Method**: random - **Pruning Ratio**: unknown - **Training Date**: 2025-09-12 ## Training Configuration This model was trained using Direct Preference Optimization (DPO) with the following characteristics: - Method: random - Pruning applied during training - Fine-tuned on preference data ## Usage ```python from transformers import AutoTokenizer, AutoModelForCausalLM model_name = "5456es/random_prune_Llama-3.2-3B-Instruct_prune_0.0-sigmoid" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForCausalLM.from_pretrained(model_name) # Example usage prompt = "Your prompt here" inputs = tokenizer(prompt, return_tensors="pt") outputs = model.generate(**inputs, max_length=100) print(tokenizer.decode(outputs[0], skip_special_tokens=True)) ``` ## Training Data This model was trained on preference data using the DPO algorithm. ## Limitations This model inherits the limitations of its base model and may have additional limitations due to the pruning process. ## Citation If you use this model, please cite the original DPO paper and the base model.
5456es/bees_prune_Llama-3.2-1B-Instruct_prune_0.3-sigmoid
5456es
2025-09-12T09:31:49Z
34
0
null
[ "safetensors", "llama", "dpo", "preference-learning", "bees", "pruned", "license:apache-2.0", "region:us" ]
null
2025-09-07T11:19:24Z
--- license: apache-2.0 base_model: Llama-3.2-1B-Instruct tags: - dpo - preference-learning - bees - pruned --- # bees_prune_Llama-3.2-1B-Instruct_prune_0.3-sigmoid This model is a DPO (Direct Preference Optimization) fine-tuned version of Llama-3.2-1B-Instruct using the bees method. ## Model Details - **Base Model**: Llama-3.2-1B-Instruct - **Training Method**: bees - **Pruning Ratio**: unknown - **Training Date**: 2025-09-12 ## Training Configuration This model was trained using Direct Preference Optimization (DPO) with the following characteristics: - Method: bees - Pruning applied during training - Fine-tuned on preference data ## Usage ```python from transformers import AutoTokenizer, AutoModelForCausalLM model_name = "5456es/bees_prune_Llama-3.2-1B-Instruct_prune_0.3-sigmoid" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForCausalLM.from_pretrained(model_name) # Example usage prompt = "Your prompt here" inputs = tokenizer(prompt, return_tensors="pt") outputs = model.generate(**inputs, max_length=100) print(tokenizer.decode(outputs[0], skip_special_tokens=True)) ``` ## Training Data This model was trained on preference data using the DPO algorithm. ## Limitations This model inherits the limitations of its base model and may have additional limitations due to the pruning process. ## Citation If you use this model, please cite the original DPO paper and the base model.
stonermay/blockassist-bc-diving_lightfooted_caterpillar_1757669439
stonermay
2025-09-12T09:31:45Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "diving lightfooted caterpillar", "arxiv:2504.07091", "region:us" ]
null
2025-09-12T09:31:37Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - diving lightfooted caterpillar --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
5456es/implicit_reward_Llama-3.2-3B-Instruct_prune_0.7-sigmoid
5456es
2025-09-12T09:31:20Z
26
0
null
[ "safetensors", "llama", "dpo", "preference-learning", "implicit", "pruned", "license:apache-2.0", "region:us" ]
null
2025-09-08T05:07:26Z
--- license: apache-2.0 base_model: Llama-3.2-3B-Instruct tags: - dpo - preference-learning - implicit - pruned --- # implicit_reward_Llama-3.2-3B-Instruct_prune_0.7-sigmoid This model is a DPO (Direct Preference Optimization) fine-tuned version of Llama-3.2-3B-Instruct using the implicit method. ## Model Details - **Base Model**: Llama-3.2-3B-Instruct - **Training Method**: implicit - **Pruning Ratio**: unknown - **Training Date**: 2025-09-12 ## Training Configuration This model was trained using Direct Preference Optimization (DPO) with the following characteristics: - Method: implicit - Pruning applied during training - Fine-tuned on preference data ## Usage ```python from transformers import AutoTokenizer, AutoModelForCausalLM model_name = "5456es/implicit_reward_Llama-3.2-3B-Instruct_prune_0.7-sigmoid" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForCausalLM.from_pretrained(model_name) # Example usage prompt = "Your prompt here" inputs = tokenizer(prompt, return_tensors="pt") outputs = model.generate(**inputs, max_length=100) print(tokenizer.decode(outputs[0], skip_special_tokens=True)) ``` ## Training Data This model was trained on preference data using the DPO algorithm. ## Limitations This model inherits the limitations of its base model and may have additional limitations due to the pruning process. ## Citation If you use this model, please cite the original DPO paper and the base model.
5456es/selective_dpo_Qwen2.5-0.5B-Instruct_prune_0.5-sigmoid
5456es
2025-09-12T09:30:20Z
41
0
null
[ "safetensors", "qwen2", "dpo", "preference-learning", "selective", "pruned", "license:apache-2.0", "region:us" ]
null
2025-09-07T05:20:12Z
--- license: apache-2.0 base_model: Qwen2.5-0.5B-Instruct tags: - dpo - preference-learning - selective - pruned --- # selective_dpo_Qwen2.5-0.5B-Instruct_prune_0.5-sigmoid This model is a DPO (Direct Preference Optimization) fine-tuned version of Qwen2.5-0.5B-Instruct using the selective method. ## Model Details - **Base Model**: Qwen2.5-0.5B-Instruct - **Training Method**: selective - **Pruning Ratio**: unknown - **Training Date**: 2025-09-12 ## Training Configuration This model was trained using Direct Preference Optimization (DPO) with the following characteristics: - Method: selective - Pruning applied during training - Fine-tuned on preference data ## Usage ```python from transformers import AutoTokenizer, AutoModelForCausalLM model_name = "5456es/selective_dpo_Qwen2.5-0.5B-Instruct_prune_0.5-sigmoid" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForCausalLM.from_pretrained(model_name) # Example usage prompt = "Your prompt here" inputs = tokenizer(prompt, return_tensors="pt") outputs = model.generate(**inputs, max_length=100) print(tokenizer.decode(outputs[0], skip_special_tokens=True)) ``` ## Training Data This model was trained on preference data using the DPO algorithm. ## Limitations This model inherits the limitations of its base model and may have additional limitations due to the pruning process. ## Citation If you use this model, please cite the original DPO paper and the base model.
5456es/selective_dpo_Llama-3.2-1B-Instruct_prune_0.5-sigmoid
5456es
2025-09-12T09:29:51Z
19
0
null
[ "safetensors", "llama", "dpo", "preference-learning", "selective", "pruned", "license:apache-2.0", "region:us" ]
null
2025-09-07T05:18:08Z
--- license: apache-2.0 base_model: Llama-3.2-1B-Instruct tags: - dpo - preference-learning - selective - pruned --- # selective_dpo_Llama-3.2-1B-Instruct_prune_0.5-sigmoid This model is a DPO (Direct Preference Optimization) fine-tuned version of Llama-3.2-1B-Instruct using the selective method. ## Model Details - **Base Model**: Llama-3.2-1B-Instruct - **Training Method**: selective - **Pruning Ratio**: unknown - **Training Date**: 2025-09-12 ## Training Configuration This model was trained using Direct Preference Optimization (DPO) with the following characteristics: - Method: selective - Pruning applied during training - Fine-tuned on preference data ## Usage ```python from transformers import AutoTokenizer, AutoModelForCausalLM model_name = "5456es/selective_dpo_Llama-3.2-1B-Instruct_prune_0.5-sigmoid" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForCausalLM.from_pretrained(model_name) # Example usage prompt = "Your prompt here" inputs = tokenizer(prompt, return_tensors="pt") outputs = model.generate(**inputs, max_length=100) print(tokenizer.decode(outputs[0], skip_special_tokens=True)) ``` ## Training Data This model was trained on preference data using the DPO algorithm. ## Limitations This model inherits the limitations of its base model and may have additional limitations due to the pruning process. ## Citation If you use this model, please cite the original DPO paper and the base model.
5456es/last_layer_prune_Llama-3.1-8B-Instruct_prune_0.6-sigmoid
5456es
2025-09-12T09:29:02Z
0
0
null
[ "safetensors", "llama", "dpo", "preference-learning", "last", "pruned", "license:apache-2.0", "region:us" ]
null
2025-09-12T09:18:09Z
--- license: apache-2.0 base_model: Llama-3.1-8B-Instruct tags: - dpo - preference-learning - last - pruned --- # last_layer_prune_Llama-3.1-8B-Instruct_prune_0.6-sigmoid This model is a DPO (Direct Preference Optimization) fine-tuned version of Llama-3.1-8B-Instruct using the last method. ## Model Details - **Base Model**: Llama-3.1-8B-Instruct - **Training Method**: last - **Pruning Ratio**: unknown - **Training Date**: 2025-09-12 ## Training Configuration This model was trained using Direct Preference Optimization (DPO) with the following characteristics: - Method: last - Pruning applied during training - Fine-tuned on preference data ## Usage ```python from transformers import AutoTokenizer, AutoModelForCausalLM model_name = "5456es/last_layer_prune_Llama-3.1-8B-Instruct_prune_0.6-sigmoid" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForCausalLM.from_pretrained(model_name) # Example usage prompt = "Your prompt here" inputs = tokenizer(prompt, return_tensors="pt") outputs = model.generate(**inputs, max_length=100) print(tokenizer.decode(outputs[0], skip_special_tokens=True)) ``` ## Training Data This model was trained on preference data using the DPO algorithm. ## Limitations This model inherits the limitations of its base model and may have additional limitations due to the pruning process. ## Citation If you use this model, please cite the original DPO paper and the base model.
cglez/gpt2-dapt-wiki_toxic
cglez
2025-09-12T09:28:57Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "gpt2", "text-generation", "en", "arxiv:1910.09700", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-12T09:23:33Z
--- library_name: transformers language: en license: mit datasets: [] tags: [] --- # Model Card for <Model> A pretrained GPT2 using <Dataset>. ## Model Details ### Model Description A pretrained GPT2 using <Dataset>. - **Developed by:** [Cesar Gonzalez-Gutierrez](https://ceguel.es) - **Funded by:** [ERC](https://erc.europa.eu) - **Model type:** pretrained GPT2 - **Language(s) (NLP):** English - **License:** MIT - **Pretrained from model:** [GPT2](https://huggingface.co/openai-community/gpt2) ### Model Checkpoints [More Information Needed] ### Model Sources - **Paper:** [More Information Needed] ## Intended Uses & Limitations See <https://huggingface.co/openai-community/gpt2#intended-uses--limitations>. ### Loading Checkpoints [More Information Needed] ## Training Details ### Training Data [More Information Needed] #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** fp16 - **Batch size:** 8 - **Gradient accumulation steps:** 12 ## Environmental Impact - **Hardware Type:** NVIDIA A100 PCIE 40GB - **Hours used:** [More Information Needed] - **Cluster Provider:** [Artemisa](https://artemisa.ific.uv.es/web/) - **Compute Region:** EU - **Carbon Emitted:** [More Information Needed] <!-- Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). --> ## Citation **BibTeX:** [More Information Needed]
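The "Loading Checkpoints" section above is still [More Information Needed]; assuming the repo follows the standard transformers layout, a minimal sketch for loading the final model is:

```python
# Minimal sketch; assumes the repo stores a standard GPT2 checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "cglez/gpt2-dapt-wiki_toxic"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("The quick brown fox", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```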
phathuynhAI/blockassist
phathuynhAI
2025-09-12T09:27:40Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "huge deft eagle", "arxiv:2504.07091", "region:us" ]
null
2025-09-12T01:46:46Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - huge deft eagle --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
thefirstgoku/129PP_13smoe_V3_2
thefirstgoku
2025-09-12T09:24:48Z
0
0
null
[ "safetensors", "any-to-any", "omega", "omegalabs", "bittensor", "agi", "license:mit", "region:us" ]
any-to-any
2025-09-12T09:24:06Z
--- license: mit tags: - any-to-any - omega - omegalabs - bittensor - agi --- This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet. Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
cglez/gpt2-wiki_toxic
cglez
2025-09-12T09:24:34Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "gpt2", "text-generation", "en", "arxiv:1910.09700", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-12T09:19:27Z
--- library_name: transformers language: en license: mit datasets: [] tags: [] --- # Model Card for <Model> A pretrained GPT2 using <Dataset>. ## Model Details ### Model Description A pretrained GPT2 using <Dataset>. - **Developed by:** [Cesar Gonzalez-Gutierrez](https://ceguel.es) - **Funded by:** [ERC](https://erc.europa.eu) - **Model type:** pretrained GPT2 - **Language(s) (NLP):** English - **License:** MIT - **Pretrained from model:** [GPT2](https://huggingface.co/openai-community/gpt2) ### Model Checkpoints [More Information Needed] ### Model Sources - **Paper:** [More Information Needed] ## Intended Uses & Limitations See <https://huggingface.co/openai-community/gpt2#intended-uses--limitations>. ### Loading Checkpoints [More Information Needed] ## Training Details ### Training Data [More Information Needed] #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** fp16 - **Batch size:** 8 - **Gradient accumulation steps:** 12 ## Environmental Impact - **Hardware Type:** NVIDIA A100 PCIE 40GB - **Hours used:** [More Information Needed] - **Cluster Provider:** [Artemisa](https://artemisa.ific.uv.es/web/) - **Compute Region:** EU - **Carbon Emitted:** [More Information Needed] <!-- Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). --> ## Citation **BibTeX:** [More Information Needed]
linweixiang/multimodel_api_test_model
linweixiang
2025-09-12T09:23:57Z
3
0
null
[ "license:other", "region:us" ]
null
2025-09-08T08:11:52Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md ---
cglez/gpt2-trec
cglez
2025-09-12T09:22:31Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "gpt2", "text-generation", "en", "arxiv:1910.09700", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-12T09:16:44Z
--- library_name: transformers language: en license: mit datasets: [] tags: [] --- # Model Card for <Model> A pretrained GPT2 using <Dataset>. ## Model Details ### Model Description A pretrained GPT2 using <Dataset>. - **Developed by:** [Cesar Gonzalez-Gutierrez](https://ceguel.es) - **Funded by:** [ERC](https://erc.europa.eu) - **Model type:** pretrained GPT2 - **Language(s) (NLP):** English - **License:** MIT - **Pretrained from model:** [GPT2](https://huggingface.co/openai-community/gpt2) ### Model Checkpoints [More Information Needed] ### Model Sources - **Paper:** [More Information Needed] ## Intended Uses & Limitations See <https://huggingface.co/openai-community/gpt2#intended-uses--limitations>. ### Loading Checkpoints [More Information Needed] ## Training Details ### Training Data [More Information Needed] #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** fp16 - **Batch size:** 8 - **Gradient accumulation steps:** 12 ## Environmental Impact - **Hardware Type:** NVIDIA A100 PCIE 40GB - **Hours used:** [More Information Needed] - **Cluster Provider:** [Artemisa](https://artemisa.ific.uv.es/web/) - **Compute Region:** EU - **Carbon Emitted:** [More Information Needed] <!-- Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). --> ## Citation **BibTeX:** [More Information Needed]
4everStudent/Qwen3-4B-lr-1e-05
4everStudent
2025-09-12T09:22:24Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "generated_from_trainer", "grpo", "trl", "arxiv:2402.03300", "base_model:Qwen/Qwen3-4B", "base_model:finetune:Qwen/Qwen3-4B", "endpoints_compatible", "region:us" ]
null
2025-09-03T14:06:16Z
--- base_model: Qwen/Qwen3-4B library_name: transformers model_name: Qwen3-4B-lr-1e-05 tags: - generated_from_trainer - grpo - trl licence: license --- # Model Card for Qwen3-4B-lr-1e-05 This model is a fine-tuned version of [Qwen/Qwen3-4B](https://huggingface.co/Qwen/Qwen3-4B). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="4everStudent/Qwen3-4B-lr-1e-05", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/wljorge/cif_generation_with_grpo/runs/bzmx2qli) This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.19.0 - Transformers: 4.51.3 - Pytorch: 2.6.0 - Datasets: 3.6.0 - Tokenizers: 0.21.0 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
stonermay/blockassist-bc-diving_lightfooted_caterpillar_1757668823
stonermay
2025-09-12T09:21:48Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "diving lightfooted caterpillar", "arxiv:2504.07091", "region:us" ]
null
2025-09-12T09:21:22Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - diving lightfooted caterpillar --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
shun89/opus-mt-ja-zh
shun89
2025-09-12T09:21:35Z
0
0
null
[ "pytorch", "marian", "ja", "zh", "license:apache-2.0", "region:us" ]
null
2025-09-12T08:40:28Z
--- license: apache-2.0 language: - ja - zh --- ```python from transformers import MarianMTModel, MarianTokenizer model = MarianMTModel.from_pretrained("shun89/opus-mt-ja-zh") tokenizer = MarianTokenizer.from_pretrained("shun89/opus-mt-ja-zh") text = '高校生の時、毎週土曜日の午後は友達のリナと一緒に図書館で勉強していました。リナは数学が得意で、いつも私の分からない問題を丁寧に教えてくれました。休み時間には、自販機でコーラを買って廊下で話したり、放課後に近くのカフェでケーキを食べながら未来の夢について話したりしていました。今でもその頃の時間がとても懐かしいです。' inputs = tokenizer(text, return_tensors="pt", padding=True, truncation=True, max_length=256) outputs = model.generate(**inputs) result = " ".join(tokenizer.batch_decode(outputs, skip_special_tokens=True)) print("Source text:", text) print("Translation:", result) ```
shun89/opus-mt-zh-ja
shun89
2025-09-12T09:20:25Z
0
0
null
[ "pytorch", "marian", "zh", "ja", "region:us" ]
null
2025-09-12T08:53:26Z
--- language: - zh - ja --- ```python from transformers import MarianMTModel, MarianTokenizer model = MarianMTModel.from_pretrained("shun89/opus-mt-zh-ja") tokenizer = MarianTokenizer.from_pretrained("shun89/opus-mt-zh-ja") text = '最近,谷歌发布了一则新广告,直接针对苹果最新发布的iOS 26操作系统。' inputs = tokenizer(text, return_tensors="pt", padding=True, truncation=True, max_length=256) outputs = model.generate(**inputs) result = " ".join(tokenizer.batch_decode(outputs, skip_special_tokens=True)) print("Source text:", text) print("Translation:", result) ```
kmpartner/bkv2tpcmlr4-test
kmpartner
2025-09-12T09:19:08Z
94
0
peft
[ "peft", "tensorboard", "safetensors", "arxiv:1910.09700", "base_model:nota-ai/bk-sdm-v2-tiny", "base_model:adapter:nota-ai/bk-sdm-v2-tiny", "region:us" ]
null
2025-04-09T23:11:29Z
--- library_name: peft base_model: nota-ai/bk-sdm-v2-tiny --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.9.0
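The card records only that this is a PEFT adapter (PEFT 0.9.0) on `nota-ai/bk-sdm-v2-tiny`, a distilled Stable Diffusion model; everything else is [More Information Needed]. A hedged sketch of attaching the adapter, assuming it targets the UNet and uses the standard PEFT layout (neither is confirmed by the card):

```python
# Sketch only: assumes the adapter applies to the UNet and is stored in
# the standard PEFT format; the card does not confirm either assumption.
import torch
from diffusers import StableDiffusionPipeline
from peft import PeftModel

pipe = StableDiffusionPipeline.from_pretrained(
    "nota-ai/bk-sdm-v2-tiny", torch_dtype=torch.float16
)
pipe.unet = PeftModel.from_pretrained(pipe.unet, "kmpartner/bkv2tpcmlr4-test")
pipe.to("cuda")

image = pipe("a watercolor painting of a lighthouse").images[0]
image.save("sample.png")
```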
WXDAQ3/Full.18.Video.intimo.de.Valentina.Ricarda.Original.valentina.ricarda.Video
WXDAQ3
2025-09-12T09:19:06Z
0
0
null
[ "region:us" ]
null
2025-09-12T09:17:21Z
<a href="https://viralvidzzz.com/Video-íntimo-de-Valentina-Ricarda-Original"> 🌐 Full.18.Video.intimo.de.Valentina.Ricarda.Original.valentina.ricarda.Video 🔴 ➤►DOWNLOAD👉👉🟢 ➤ <a href="https://viralvidzzz.com/Video-íntimo-de-Valentina-Ricarda-Original"> 🌐 Full.18.Video.intimo.de.Valentina.Ricarda.Original.valentina.ricarda.Video <a href="https://viralvidzzz.com/Video-íntimo-de-Valentina-Ricarda-Original"> 🌐 Full.18.Video.intimo.de.Valentina.Ricarda.Original.valentina.ricarda.Video 🔴 ➤►DOWNLOAD👉👉🟢 ➤ <a href="https://viralvidzzz.com/Video-íntimo-de-Valentina-Ricarda-Original"> 🌐 Full.18.Video.intimo.de.Valentina.Ricarda.Original.valentina.ricarda.Video
5456es/bees_prune_Qwen2.5-0.5B-Instruct_prune_0.7-sigmoid
5456es
2025-09-12T09:18:08Z
33
0
null
[ "safetensors", "qwen2", "dpo", "preference-learning", "bees", "pruned", "license:apache-2.0", "region:us" ]
null
2025-09-07T11:16:31Z
--- license: apache-2.0 base_model: Qwen2.5-0.5B-Instruct tags: - dpo - preference-learning - bees - pruned --- # bees_prune_Qwen2.5-0.5B-Instruct_prune_0.7-sigmoid This model is a DPO (Direct Preference Optimization) fine-tuned version of Qwen2.5-0.5B-Instruct using the bees method. ## Model Details - **Base Model**: Qwen2.5-0.5B-Instruct - **Training Method**: bees - **Pruning Ratio**: unknown - **Training Date**: 2025-09-12 ## Training Configuration This model was trained using Direct Preference Optimization (DPO) with the following characteristics: - Method: bees - Pruning applied during training - Fine-tuned on preference data ## Usage ```python from transformers import AutoTokenizer, AutoModelForCausalLM model_name = "5456es/bees_prune_Qwen2.5-0.5B-Instruct_prune_0.7-sigmoid" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForCausalLM.from_pretrained(model_name) # Example usage prompt = "Your prompt here" inputs = tokenizer(prompt, return_tensors="pt") outputs = model.generate(**inputs, max_length=100) print(tokenizer.decode(outputs[0], skip_special_tokens=True)) ``` ## Training Data This model was trained on preference data using the DPO algorithm. ## Limitations This model inherits the limitations of its base model and may have additional limitations due to the pruning process. ## Citation If you use this model, please cite the original DPO paper and the base model.
nopokkizu/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-vocal_scurrying_tarantula
nopokkizu
2025-09-12T09:17:34Z
58
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "rl-swarm", "genrl-swarm", "grpo", "gensyn", "I am vocal_scurrying_tarantula", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-11T15:11:54Z
--- library_name: transformers tags: - rl-swarm - genrl-swarm - grpo - gensyn - I am vocal_scurrying_tarantula --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
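As with the other auto-generated cards in this dump, the get-started section above is empty; given the `text-generation` pipeline tag on this Qwen2-based checkpoint, a minimal sketch is:

```python
# Sketch assuming the standard text-generation pipeline suffices here.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="nopokkizu/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-vocal_scurrying_tarantula",
)
print(generator("Hello, world", max_new_tokens=32)[0]["generated_text"])
```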
shun89/opus-mt-zh-ko
shun89
2025-09-12T09:16:57Z
0
0
null
[ "pytorch", "marian", "zh", "ko", "license:apache-2.0", "region:us" ]
null
2025-09-12T09:14:24Z
--- license: apache-2.0 language: - zh - ko --- ```python from transformers import MarianMTModel, MarianTokenizer model = MarianMTModel.from_pretrained("shun89/opus-mt-zh-ko") tokenizer = MarianTokenizer.from_pretrained("shun89/opus-mt-zh-ko") text = '你好' inputs = tokenizer(text, return_tensors="pt", padding=True, truncation=True, max_length=256) outputs = model.generate(**inputs) result = " ".join(tokenizer.batch_decode(outputs, skip_special_tokens=True)) print("Source text:", text) print("Translation:", result) ```
maidacundo/annie-lite-v0.3.1-SFT-qwen3-8b
maidacundo
2025-09-12T09:16:40Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "text-generation-inference", "unsloth", "conversational", "en", "base_model:unsloth/Qwen3-8B-unsloth-bnb-4bit", "base_model:finetune:unsloth/Qwen3-8B-unsloth-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-09-12T09:12:41Z
--- base_model: unsloth/Qwen3-8B-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - qwen3 license: apache-2.0 language: - en --- # Uploaded finetuned model - **Developed by:** maidacundo - **License:** apache-2.0 - **Finetuned from model :** unsloth/Qwen3-8B-unsloth-bnb-4bit This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
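The card gives provenance but no usage snippet; a minimal sketch with plain transformers follows. Whether the uploaded weights are full-precision or still 4-bit (the base is a bnb-4bit Unsloth build) is not stated, so treat the load call as an assumption.

```python
# Sketch: assumes the repo holds weights loadable by plain transformers.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "maidacundo/annie-lite-v0.3.1-SFT-qwen3-8b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Summarize what SFT means in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```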
lynn-mikami/wan-testing
lynn-mikami
2025-09-12T09:15:02Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-07-18T10:20:30Z
--- license: apache-2.0 ---
lakshya-sahu/mistral_7b_dolly-finetune
lakshya-sahu
2025-09-12T09:13:57Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-09-04T16:42:06Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
MaxVell337/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-flapping_foraging_walrus
MaxVell337
2025-09-12T09:13:03Z
5
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am flapping foraging walrus", "trl", "genrl-swarm", "I am flapping_foraging_walrus", "conversational", "arxiv:2402.03300", "base_model:unsloth/Qwen2.5-0.5B-Instruct", "base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-02T18:14:09Z
--- base_model: unsloth/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-flapping_foraging_walrus tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am flapping foraging walrus - trl - genrl-swarm - I am flapping_foraging_walrus licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-flapping_foraging_walrus This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="MaxVell337/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-flapping_foraging_walrus", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.7.0 - Datasets: 3.5.1 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
5456es/selective_dpo_Llama-3.2-3B-Instruct_prune_0.7-sigmoid
5456es
2025-09-12T09:12:54Z
26
0
null
[ "safetensors", "llama", "dpo", "preference-learning", "selective", "pruned", "license:apache-2.0", "region:us" ]
null
2025-09-08T05:01:10Z
--- license: apache-2.0 base_model: Llama-3.2-3B-Instruct tags: - dpo - preference-learning - selective - pruned --- # selective_dpo_Llama-3.2-3B-Instruct_prune_0.7-sigmoid This model is a DPO (Direct Preference Optimization) fine-tuned version of Llama-3.2-3B-Instruct using the selective method. ## Model Details - **Base Model**: Llama-3.2-3B-Instruct - **Training Method**: selective - **Pruning Ratio**: unknown - **Training Date**: 2025-09-12 ## Training Configuration This model was trained using Direct Preference Optimization (DPO) with the following characteristics: - Method: selective - Pruning applied during training - Fine-tuned on preference data ## Usage ```python from transformers import AutoTokenizer, AutoModelForCausalLM model_name = "5456es/selective_dpo_Llama-3.2-3B-Instruct_prune_0.7-sigmoid" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForCausalLM.from_pretrained(model_name) # Example usage prompt = "Your prompt here" inputs = tokenizer(prompt, return_tensors="pt") outputs = model.generate(**inputs, max_length=100) print(tokenizer.decode(outputs[0], skip_special_tokens=True)) ``` ## Training Data This model was trained on preference data using the DPO algorithm. ## Limitations This model inherits the limitations of its base model and may have additional limitations due to the pruning process. ## Citation If you use this model, please cite the original DPO paper and the base model.
cglez/gpt2-dapt-trec
cglez
2025-09-12T09:12:29Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "gpt2", "text-generation", "en", "arxiv:1910.09700", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-12T09:07:31Z
--- library_name: transformers language: en license: mit datasets: [] tags: [] --- # Model Card for <Model> A pretrained GPT2 using <Dataset>. ## Model Details ### Model Description A pretrained GPT2 using <Dataset>. - **Developed by:** [Cesar Gonzalez-Gutierrez](https://ceguel.es) - **Funded by:** [ERC](https://erc.europa.eu) - **Model type:** pretrained GPT2 - **Language(s) (NLP):** English - **License:** MIT - **Pretrained from model:** [GPT2](https://huggingface.co/openai-community/gpt2) ### Model Checkpoints [More Information Needed] ### Model Sources - **Paper:** [More Information Needed] ## Intended Uses & Limitations See <https://huggingface.co/openai-community/gpt2#intended-uses--limitations>. ### Loading Checkpoints [More Information Needed] ## Training Details ### Training Data [More Information Needed] #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** fp16 - **Batch size:** 8 - **Gradient accumulation steps:** 12 ## Environmental Impact - **Hardware Type:** NVIDIA A100 PCIE 40GB - **Hours used:** [More Information Needed] - **Cluster Provider:** [Artemisa](https://artemisa.ific.uv.es/web/) - **Compute Region:** EU - **Carbon Emitted:** [More Information Needed] <!-- Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). --> ## Citation **BibTeX:** [More Information Needed]
5456es/bees_prune_Qwen2.5-1.5B-Instruct_prune_0.5-sigmoid
5456es
2025-09-12T09:12:15Z
17
0
null
[ "safetensors", "qwen2", "dpo", "preference-learning", "bees", "pruned", "license:apache-2.0", "region:us" ]
null
2025-09-08T03:38:23Z
--- license: apache-2.0 base_model: Qwen2.5-1.5B-Instruct tags: - dpo - preference-learning - bees - pruned --- # bees_prune_Qwen2.5-1.5B-Instruct_prune_0.5-sigmoid This model is a DPO (Direct Preference Optimization) fine-tuned version of Qwen2.5-1.5B-Instruct using the bees method. ## Model Details - **Base Model**: Qwen2.5-1.5B-Instruct - **Training Method**: bees - **Pruning Ratio**: unknown - **Training Date**: 2025-09-12 ## Training Configuration This model was trained using Direct Preference Optimization (DPO) with the following characteristics: - Method: bees - Pruning applied during training - Fine-tuned on preference data ## Usage ```python from transformers import AutoTokenizer, AutoModelForCausalLM model_name = "5456es/bees_prune_Qwen2.5-1.5B-Instruct_prune_0.5-sigmoid" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForCausalLM.from_pretrained(model_name) # Example usage prompt = "Your prompt here" inputs = tokenizer(prompt, return_tensors="pt") outputs = model.generate(**inputs, max_length=100) print(tokenizer.decode(outputs[0], skip_special_tokens=True)) ``` ## Training Data This model was trained on preference data using the DPO algorithm. ## Limitations This model inherits the limitations of its base model and may have additional limitations due to the pruning process. ## Citation If you use this model, please cite the original DPO paper and the base model.
5456es/cluster_prune_Qwen2.5-0.5B-Instruct_prune_0.5-sigmoid
5456es
2025-09-12T09:11:40Z
21
0
null
[ "safetensors", "qwen2", "dpo", "preference-learning", "cluster", "pruned", "license:apache-2.0", "region:us" ]
null
2025-09-07T05:15:23Z
--- license: apache-2.0 base_model: Qwen2.5-0.5B-Instruct tags: - dpo - preference-learning - cluster - pruned --- # cluster_prune_Qwen2.5-0.5B-Instruct_prune_0.5-sigmoid This model is a DPO (Direct Preference Optimization) fine-tuned version of Qwen2.5-0.5B-Instruct using the cluster method. ## Model Details - **Base Model**: Qwen2.5-0.5B-Instruct - **Training Method**: cluster - **Pruning Ratio**: unknown - **Training Date**: 2025-09-12 ## Training Configuration This model was trained using Direct Preference Optimization (DPO) with the following characteristics: - Method: cluster - Pruning applied during training - Fine-tuned on preference data ## Usage ```python from transformers import AutoTokenizer, AutoModelForCausalLM model_name = "5456es/cluster_prune_Qwen2.5-0.5B-Instruct_prune_0.5-sigmoid" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForCausalLM.from_pretrained(model_name) # Example usage prompt = "Your prompt here" inputs = tokenizer(prompt, return_tensors="pt") outputs = model.generate(**inputs, max_length=100) print(tokenizer.decode(outputs[0], skip_special_tokens=True)) ``` ## Training Data This model was trained on preference data using the DPO algorithm. ## Limitations This model inherits the limitations of its base model and may have additional limitations due to the pruning process. ## Citation If you use this model, please cite the original DPO paper and the base model.
5456es/random_prune_Qwen2.5-7B-Instruct_prune_0.5-sigmoid
5456es
2025-09-12T09:11:09Z
29
0
null
[ "safetensors", "qwen2", "dpo", "preference-learning", "random", "pruned", "license:apache-2.0", "region:us" ]
null
2025-09-08T04:50:54Z
--- license: apache-2.0 base_model: Qwen2.5-7B-Instruct tags: - dpo - preference-learning - random - pruned --- # random_prune_Qwen2.5-7B-Instruct_prune_0.5-sigmoid This model is a DPO (Direct Preference Optimization) fine-tuned version of Qwen2.5-7B-Instruct using the random method. ## Model Details - **Base Model**: Qwen2.5-7B-Instruct - **Training Method**: random - **Pruning Ratio**: unknown - **Training Date**: 2025-09-12 ## Training Configuration This model was trained using Direct Preference Optimization (DPO) with the following characteristics: - Method: random - Pruning applied during training - Fine-tuned on preference data ## Usage ```python from transformers import AutoTokenizer, AutoModelForCausalLM model_name = "5456es/random_prune_Qwen2.5-7B-Instruct_prune_0.5-sigmoid" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForCausalLM.from_pretrained(model_name) # Example usage prompt = "Your prompt here" inputs = tokenizer(prompt, return_tensors="pt") outputs = model.generate(**inputs, max_length=100) print(tokenizer.decode(outputs[0], skip_special_tokens=True)) ``` ## Training Data This model was trained on preference data using the DPO algorithm. ## Limitations This model inherits the limitations of its base model and may have additional limitations due to the pruning process. ## Citation If you use this model, please cite the original DPO paper and the base model.
Enstar07/piper_ACT_09-08_pickC2laundry_model
Enstar07
2025-09-12T09:10:38Z
0
0
null
[ "safetensors", "license:mit", "region:us" ]
null
2025-09-10T07:05:56Z
--- license: mit --- **Date:** 2025-09-07 **Dataset:** https://huggingface.co/datasets/Enstar07/piper_ACT_09-08_pickC2laundry **Task information:** piper picks cloth from the basket to the laundry **Episodes Collected:** 70 **Training:** 120,000 steps completed **Deployment Result:** piper can successfully place clothes into the washing machine, and can also move the clothes hanging at the washing machine door into the machine one by one. **Pick rate:** 90-95% ##### Data Collection Successfully collected **70 episodes** for the piper dataset: ```bash python -m lerobot.record \ --robot.disable_torque_on_disconnect=true \ --robot.type=piper \ --robot.port=can0 \ --robot.cameras="{'handeye': {'type':'opencv', 'index_or_path':0, 'width':640, 'height':480, 'fps':30}, 'fixed': {'type':'opencv', 'index_or_path':2, 'width':640, 'height':480, 'fps':30}, 'extra': {'type':'opencv', 'index_or_path':4, 'width':640, 'height':480, 'fps':30}}" \ --teleop.type=so101_leader \ --teleop.port=/dev/ttyACM0 \ --teleop.id=R11 \ --display_data=true \ --dataset.repo_id=local/so101_piper_pickC2washer \ --dataset.num_episodes=30 \ --dataset.episode_time_s=40 \ --dataset.reset_time_s=5 \ --dataset.push_to_hub=false \ --resume=true \ --dataset.root=/home/paris/X/data/piper_data/piper_09_08 \ --dataset.single_task="piper pick cloth2washer" ``` To append new episodes to an existing dataset, keep the flag: ```bash --resume=true \ ``` ##### Training Trained for **120,000 steps**; results saved at: `outputs/train/piper/piper_pickC2washer_120000` ```bash nohup python scripts/train.py \ --dataset.repo_id=/home/paris/X/data/piper_data/piper_09_08 \ --policy.type=act \ --output_dir=outputs/train/piper/piper_pickC2washer_120000 \ --job_name=piper_pickC2washer \ --policy.device=cuda \ --batch_size=32 \ --steps=120000 \ --save_freq=5000 \ --eval_freq=5000 \ --log_freq=1000 \ --policy.push_to_hub=false \ > train.log 2>&1 & ``` Check training progress: ```bash tail -f train.log ``` ##### Deployment Deployment successful: after **120,000 training steps**, **piper can pick clothes into the washing machine with high accuracy**, and can also move the clothes hanging on the washing machine door into the machine one by one. Sometimes it cannot clearly distinguish the basket boundary. Models in `/last/` work. **Next step:** increase dataset size and training steps. ```bash python scripts/deploy.py \ --robot.type=piper \ --robot.disable_torque_on_disconnect=true \ --robot.port=can0 \ --robot.cameras="{'handeye': {'type':'opencv', 'index_or_path':0, 'width':640, 'height':480, 'fps':30}, 'fixed': {'type':'opencv', 'index_or_path':2, 'width':640, 'height':480, 'fps':30}, 'extra': {'type':'opencv', 'index_or_path':4, 'width':640, 'height':480, 'fps':30}}" \ --display_data=true \ --dataset.single_task="piper_pickA2B" \ --policy.path=/home/paris/X/so101/lerobot/src/lerobot/outputs/train/piper/piper_pickC2washer_120000/checkpoints/last/pretrained_model \ --policy.device=cuda \ --dataset.episode_time_s=9999 \ --dataset.repo_id=local/eval_pickC2washer00 \ --dataset.push_to_hub=false ```
second-state/Seed-OSS-36B-Instruct-GGUF
second-state
2025-09-12T09:10:15Z
360
0
transformers
[ "transformers", "gguf", "seed_oss", "text-generation", "base_model:ByteDance-Seed/Seed-OSS-36B-Instruct", "base_model:quantized:ByteDance-Seed/Seed-OSS-36B-Instruct", "autotrain_compatible", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2025-08-28T04:49:01Z
--- base_model: ByteDance-Seed/Seed-OSS-36B-Instruct model_creator: ByteDance-Seed model_name: Seed-OSS-36B-Instruct quantized_by: Second State Inc. pipeline_tag: text-generation library_name: transformers --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://github.com/LlamaEdge/LlamaEdge/raw/dev/assets/logo.svg" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Seed-OSS-36B-Instruct-GGUF ## Original Model [ByteDance-Seed/Seed-OSS-36B-Instruct](https://huggingface.co/ByteDance-Seed/Seed-OSS-36B-Instruct) ## Run with LlamaEdge - LlamaEdge version: coming soon <!-- - LlamaEdge version: [v0.25.1](https://github.com/LlamaEdge/LlamaEdge/releases/tag/0.25.1) and above --> - Prompt template - Prompt type: - `seed-oss-think` for think mode - `seed-oss-no-think` for no think mode - Prompt string - `Thinking` mode ```text <seed:bos>system You are Doubao, a helpful AI assistant. <seed:eos> <seed:bos>user {user_message_1} <seed:eos> <seed:bos>assistant <seed:think>{thinking_content}</seed:think> {assistant_message_1} <seed:eos> <seed:bos>user {user_message_2} <seed:eos> <seed:bos>assistant ``` - `No-thinking` mode ```text <seed:bos>system You are Doubao, a helpful AI assistant. <seed:eos> <seed:bos>system You are an intelligent assistant that can answer questions in one step without the need for reasoning and thinking, that is, your thinking budget is 0. Next, please skip the thinking process and directly start answering the user's questions. <seed:eos> <seed:bos>user {user_message_1} <seed:eos> <seed:bos>assistant {assistant_message_1} <seed:eos> <seed:bos>user {user_message_2} <seed:eos> <seed:bos>assistant ``` - Context size: `512000` - Run as LlamaEdge service ```bash wasmedge --dir .:. 
\ --nn-preload default:GGML:AUTO:Seed-OSS-36B-Instruct-Q5_K_M.gguf \ llama-api-server.wasm \ --prompt-template seed-oss-no-think \ --ctx-size 512000 \ --model-name seed-oss ``` ## Quantized GGUF Models | Name | Quant method | Bits | Size | Use case | | ---- | ---- | ---- | ---- | ----- | | [Seed-OSS-36B-Instruct-Q2_K.gguf](https://huggingface.co/second-state/Seed-OSS-36B-Instruct-GGUF/blob/main/Seed-OSS-36B-Instruct-Q2_K.gguf) | Q2_K | 2 | 13.6 GB| smallest, significant quality loss - not recommended for most purposes | | [Seed-OSS-36B-Instruct-Q3_K_L.gguf](https://huggingface.co/second-state/Seed-OSS-36B-Instruct-GGUF/blob/main/Seed-OSS-36B-Instruct-Q3_K_L.gguf) | Q3_K_L | 3 | 19.1 GB| small, substantial quality loss | | [Seed-OSS-36B-Instruct-Q3_K_M.gguf](https://huggingface.co/second-state/Seed-OSS-36B-Instruct-GGUF/blob/main/Seed-OSS-36B-Instruct-Q3_K_M.gguf) | Q3_K_M | 3 | 17.6 GB| very small, high quality loss | | [Seed-OSS-36B-Instruct-Q3_K_S.gguf](https://huggingface.co/second-state/Seed-OSS-36B-Instruct-GGUF/blob/main/Seed-OSS-36B-Instruct-Q3_K_S.gguf) | Q3_K_S | 3 | 15.9 GB| very small, high quality loss | | [Seed-OSS-36B-Instruct-Q4_0.gguf](https://huggingface.co/second-state/Seed-OSS-36B-Instruct-GGUF/blob/main/Seed-OSS-36B-Instruct-Q4_0.gguf) | Q4_0 | 4 | 20.6 GB| legacy; small, very high quality loss - prefer using Q3_K_M | | [Seed-OSS-36B-Instruct-Q4_K_M.gguf](https://huggingface.co/second-state/Seed-OSS-36B-Instruct-GGUF/blob/main/Seed-OSS-36B-Instruct-Q4_K_M.gguf) | Q4_K_M | 4 | 21.8 GB| medium, balanced quality - recommended | | [Seed-OSS-36B-Instruct-Q4_K_S.gguf](https://huggingface.co/second-state/Seed-OSS-36B-Instruct-GGUF/blob/main/Seed-OSS-36B-Instruct-Q4_K_S.gguf) | Q4_K_S | 4 | 20.7 GB| small, greater quality loss | | [Seed-OSS-36B-Instruct-Q5_0.gguf](https://huggingface.co/second-state/Seed-OSS-36B-Instruct-GGUF/blob/main/Seed-OSS-36B-Instruct-Q5_0.gguf) | Q5_0 | 5 | 25.0 GB| legacy; medium, balanced quality - prefer using Q4_K_M | | [Seed-OSS-36B-Instruct-Q5_K_M.gguf](https://huggingface.co/second-state/Seed-OSS-36B-Instruct-GGUF/blob/main/Seed-OSS-36B-Instruct-Q5_K_M.gguf) | Q5_K_M | 5 | 25.6 GB| large, very low quality loss - recommended | | [Seed-OSS-36B-Instruct-Q5_K_S.gguf](https://huggingface.co/second-state/Seed-OSS-36B-Instruct-GGUF/blob/main/Seed-OSS-36B-Instruct-Q5_K_S.gguf) | Q5_K_S | 5 | 25.0 GB| large, low quality loss - recommended | | [Seed-OSS-36B-Instruct-Q6_K.gguf](https://huggingface.co/second-state/Seed-OSS-36B-Instruct-GGUF/blob/main/Seed-OSS-36B-Instruct-Q6_K.gguf) | Q6_K | 6 | 29.7 GB| very large, extremely low quality loss | | [Seed-OSS-36B-Instruct-Q8_0.gguf](https://huggingface.co/second-state/Seed-OSS-36B-Instruct-GGUF/blob/main/Seed-OSS-36B-Instruct-Q8_0.gguf) | Q8_0 | 8 | 38.4 GB| very large, extremely low quality loss - not recommended | | [Seed-OSS-36B-Instruct-f16-00001-of-00003.gguf](https://huggingface.co/second-state/Seed-OSS-36B-Instruct-GGUF/blob/main/Seed-OSS-36B-Instruct-f16-00001-of-00003.gguf) | f16 | 16 | 30.0 GB| | | [Seed-OSS-36B-Instruct-f16-00002-of-00003.gguf](https://huggingface.co/second-state/Seed-OSS-36B-Instruct-GGUF/blob/main/Seed-OSS-36B-Instruct-f16-00002-of-00003.gguf) | f16 | 16 | 30.0 GB| | | [Seed-OSS-36B-Instruct-f16-00003-of-00003.gguf](https://huggingface.co/second-state/Seed-OSS-36B-Instruct-GGUF/blob/main/Seed-OSS-36B-Instruct-f16-00003-of-00003.gguf) | f16 | 16 | 12.4 GB| | *Quantized with llama.cpp b6301.*
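These GGUF files should also work with llama.cpp directly; a minimal sketch, assuming a llama.cpp build recent enough to support the `seed_oss` architecture (the quants here were produced with b6301):

```bash
# Plain completion run; adjust -c (context) and -n (tokens to generate) as needed.
./llama-cli \
  -m Seed-OSS-36B-Instruct-Q4_K_M.gguf \
  -c 8192 \
  -n 512 \
  -p "Explain the trade-off between Q4_K_M and Q5_K_M quantization in two sentences."
```

For the f16 weights, pointing `-m` at the first shard (`Seed-OSS-36B-Instruct-f16-00001-of-00003.gguf`) is enough; llama.cpp discovers the remaining split parts automatically.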
5456es/random_prune_Llama-3.1-8B-Instruct_prune_0.7-sigmoid
5456es
2025-09-12T09:09:58Z
28
0
null
[ "safetensors", "llama", "dpo", "preference-learning", "random", "pruned", "license:apache-2.0", "region:us" ]
null
2025-09-09T04:04:51Z
---
license: apache-2.0
base_model: Llama-3.1-8B-Instruct
tags:
- dpo
- preference-learning
- random
- pruned
---

# random_prune_Llama-3.1-8B-Instruct_prune_0.7-sigmoid

This model is a DPO (Direct Preference Optimization) fine-tuned version of Llama-3.1-8B-Instruct using the random method.

## Model Details

- **Base Model**: Llama-3.1-8B-Instruct
- **Training Method**: random
- **Pruning Ratio**: 0.7 (per the model name)
- **Training Date**: 2025-09-12

## Training Configuration

This model was trained using Direct Preference Optimization (DPO) with the following characteristics:
- Method: random
- Pruning applied during training
- Fine-tuned on preference data

## Usage

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "5456es/random_prune_Llama-3.1-8B-Instruct_prune_0.7-sigmoid"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Example usage
prompt = "Your prompt here"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_length=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

## Training Data

This model was trained on preference data using the DPO algorithm.

## Limitations

This model inherits the limitations of its base model and may have additional limitations due to the pruning process.

## Citation

If you use this model, please cite the original DPO paper and the base model.
lejonck/whisper-small-common-voice-3
lejonck
2025-09-12T09:09:46Z
36
0
transformers
[ "transformers", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "dataset:generator", "base_model:lejonck/whisper-small-common-voice-2", "base_model:finetune:lejonck/whisper-small-common-voice-2", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2025-08-25T05:38:58Z
--- library_name: transformers license: apache-2.0 base_model: lejonck/whisper-small-common-voice-2 tags: - generated_from_trainer datasets: - generator metrics: - wer model-index: - name: whisper-small-common-voice-3 results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: generator type: generator config: default split: train args: default metrics: - name: Wer type: wer value: 0.2480634452231649 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # whisper-small-common-voice-3 This model is a fine-tuned version of [lejonck/whisper-small-common-voice-2](https://huggingface.co/lejonck/whisper-small-common-voice-2) on the generator dataset. It achieves the following results on the evaluation set: - Loss: 0.1207 - Wer: 0.2481 - Cer: 0.3645 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 2 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - num_epochs: 12 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | Cer | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:| | 0.2347 | 1.0 | 1000 | 0.1108 | 0.3383 | 0.3745 | | 0.0761 | 2.0 | 2000 | 0.1207 | 0.2481 | 0.3645 | | 0.0244 | 3.0 | 3000 | 0.1340 | 0.4093 | 0.3905 | | 0.0076 | 4.0 | 4000 | 0.1434 | 0.4784 | 0.4075 | | 0.0018 | 5.0 | 5000 | 0.1585 | 0.3921 | 0.3755 | | 0.0035 | 6.0 | 6000 | 0.1639 | 0.4190 | 0.3841 | | 0.0004 | 7.0 | 7000 | 0.1693 | 0.3445 | 0.3757 | ### Framework versions - Transformers 4.55.2 - Pytorch 2.7.0+cu126 - Datasets 2.19.1 - Tokenizers 0.21.4
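The card omits a usage snippet; a minimal transcription sketch with the `transformers` ASR pipeline (the audio path below is a placeholder) would be:

```python
from transformers import pipeline

# Load the fine-tuned Whisper checkpoint in an ASR pipeline.
asr = pipeline(
    "automatic-speech-recognition",
    model="lejonck/whisper-small-common-voice-3",
)

# "sample.wav" is a placeholder; any mono audio file works
# (the pipeline resamples it to the model's expected 16 kHz).
print(asr("sample.wav")["text"])
```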
kartikeyapandey20/MiniModernBERT-glue-cola
kartikeyapandey20
2025-09-12T09:09:04Z
0
0
transformers
[ "transformers", "safetensors", "modernbert", "text-classification", "generated_from_trainer", "base_model:kartikeyapandey20/MiniModernBERT-Pretrained", "base_model:finetune:kartikeyapandey20/MiniModernBERT-Pretrained", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-09-12T09:08:31Z
--- library_name: transformers license: mit base_model: kartikeya-pandey/MiniModernBERT-Pretrained tags: - generated_from_trainer metrics: - matthews_correlation model-index: - name: MiniModernBERT-glue-cola results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # MiniModernBERT-glue-cola This model is a fine-tuned version of [kartikeya-pandey/MiniModernBERT-Pretrained](https://huggingface.co/kartikeya-pandey/MiniModernBERT-Pretrained) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.1227 - Matthews Correlation: 0.3408 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 4 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.56.1 - Pytorch 2.8.0+cu126 - Datasets 4.0.0 - Tokenizers 0.22.0
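Since no usage example is given, here is a minimal sketch for this CoLA fine-tune (binary linguistic-acceptability classification); note the label names may be the generic `LABEL_0`/`LABEL_1` unless the exported config maps them:

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="kartikeyapandey20/MiniModernBERT-glue-cola",
)

# CoLA: is the sentence linguistically acceptable?
print(classifier("The cat sat on the mat."))   # expected: acceptable
print(classifier("The cat sat mat the on."))   # expected: unacceptable
```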
5456es/implicit_reward_Qwen2.5-1.5B-Instruct_prune_0.3-sigmoid
5456es
2025-09-12T09:07:04Z
35
0
null
[ "safetensors", "qwen2", "dpo", "preference-learning", "implicit", "pruned", "license:apache-2.0", "region:us" ]
null
2025-09-07T05:12:42Z
---
license: apache-2.0
base_model: Qwen2.5-1.5B-Instruct
tags:
- dpo
- preference-learning
- implicit
- pruned
---

# implicit_reward_Qwen2.5-1.5B-Instruct_prune_0.3-sigmoid

This model is a DPO (Direct Preference Optimization) fine-tuned version of Qwen2.5-1.5B-Instruct using the implicit method.

## Model Details

- **Base Model**: Qwen2.5-1.5B-Instruct
- **Training Method**: implicit
- **Pruning Ratio**: 0.3 (per the model name)
- **Training Date**: 2025-09-12

## Training Configuration

This model was trained using Direct Preference Optimization (DPO) with the following characteristics:
- Method: implicit
- Pruning applied during training
- Fine-tuned on preference data

## Usage

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "5456es/implicit_reward_Qwen2.5-1.5B-Instruct_prune_0.3-sigmoid"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Example usage
prompt = "Your prompt here"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_length=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

## Training Data

This model was trained on preference data using the DPO algorithm.

## Limitations

This model inherits the limitations of its base model and may have additional limitations due to the pruning process.

## Citation

If you use this model, please cite the original DPO paper and the base model.
Clemylia/Miamuy-midi
Clemylia
2025-09-12T09:03:59Z
0
0
transformers.js
[ "transformers.js", "music", "text-to-audio", "license:apache-2.0", "region:us" ]
text-to-audio
2025-09-12T08:03:17Z
---
license: apache-2.0
library_name: transformers.js
tags:
- music
pipeline_tag: text-to-audio
---

### Documentation for the `Miamuy-midi` model 🎵

![Miamuy](http://www.image-heberg.fr/files/17576650531190605803.jpg)

Welcome to the documentation page for **`Miamuy-midi`**, a JavaScript model that generates melodies. This model was designed for learning and for musical creation.

-----

### ✨ What is it?

`Miamuy-midi` is a rule-based generative model. Its goal is to create sequences of MIDI notes from a starting note supplied by the user. It's a handy tool for composing short melodies or for exploring algorithmic music.

The model runs entirely **client-side**, which makes it ultra-lightweight and fast, since it depends on no external server.

-----

### 🧠 How does it work?

The `Miamuy-midi` model follows a simple but effective process:

1. **Note input:** The model receives a starting note as input (for example, "C4").
2. **Sequence creation:** It generates a sequence of notes by semi-randomly alternating notes around the starting note to create a coherent melody.
3. **Data output:** The model returns the list of generated notes, both as human-readable note names and as numeric MIDI values.

-----

### 💻 How to use the model

You can use this model in any JavaScript project by importing it directly from the Hugging Face Hub.

#### Installation

There is nothing to install! You just need to access the model file via its URL.

#### Usage example

Here is how to call and use the model:

```javascript
import MiamuyMidiModel from 'https://huggingface.co/Clemylia/Miamuy-midi/raw/main/transformer.js';

// Create a model instance
const miamuy = await MiamuyMidiModel.getInstance();

// Generate a sequence of notes from the starting note 'C4'
const result = await miamuy.generate('C4', { length: 8 });

// Display the generated notes
console.log(result[0].generated_text); // e.g. "C4 F4 G4 C5 A4 D5 G4 B4"
console.log(result[0].midi_notes);     // e.g. [60, 65, 67, 72, 69, 74, 67, 71]
```

-----

### ⚙️ Parameters of the `generate` method

The `generate` method accepts a string for the starting note (`prompt`) and an optional `options` object:

* **`prompt`** (`string`): The starting note for the melody (e.g. `'C4'`, `'A#3'`). Required.
* **`options.length`** (`number`, optional): The length of the sequence to generate. Defaults to 8 notes.

-----

### ✍️ Author

This model was created by **Clemylia**.

-----

### 📄 License

This model is released under the Apache-2.0 license.

-----
BGDolls/CLIP-ViT-H-14-laion2B-s32B-b79K-SD1.5-onnx
BGDolls
2025-09-12T09:03:42Z
3
0
null
[ "onnx", "license:mit", "region:us" ]
null
2025-09-12T08:37:12Z
---
license: mit
---

ONNX export of the image encoder from https://huggingface.co/h94/IP-Adapter/tree/main/models/image_encoder

The original CLIP-ViT-H-14-laion2B-s32B-b79K model is MIT-licensed.
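No usage snippet is provided; a minimal `onnxruntime` sketch for an ONNX CLIP image encoder follows. The file name `model.onnx` and the `1x3x224x224` input layout are assumptions; inspect `session.get_inputs()` for the actual names and shapes in this repo.

```python
import numpy as np
import onnxruntime as ort

# Assumed file name; check the repository for the actual .onnx file.
session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])

# Query the real input name and shape instead of guessing.
inp = session.get_inputs()[0]
print(inp.name, inp.shape)

# Dummy preprocessed batch; real use requires CLIP's resize + normalization.
pixels = np.random.rand(1, 3, 224, 224).astype(np.float32)
outputs = session.run(None, {inp.name: pixels})
print(outputs[0].shape)  # image embedding(s)
```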
mradermacher/Qwen3-Coder-30B-A3B-Instruct-480B-Distill-V2-Fp32-GGUF
mradermacher
2025-09-12T09:03:39Z
3,825
0
transformers
[ "transformers", "gguf", "causal-lm", "moe", "mixture-of-experts", "qwen", "distillation", "svd", "lora-merged", "code-generation", "en", "code", "base_model:BasedBase/Qwen3-Coder-30B-A3B-Instruct-480B-Distill-V2-Fp32", "base_model:quantized:BasedBase/Qwen3-Coder-30B-A3B-Instruct-480B-Distill-V2-Fp32", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-09-11T18:56:38Z
--- base_model: BasedBase/Qwen3-Coder-30B-A3B-Instruct-480B-Distill-V2-Fp32 language: - en - code library_name: transformers license: apache-2.0 mradermacher: readme_rev: 1 quantized_by: mradermacher tags: - causal-lm - moe - mixture-of-experts - qwen - distillation - svd - lora-merged - code-generation --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> <!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: --> static quants of https://huggingface.co/BasedBase/Qwen3-Coder-30B-A3B-Instruct-480B-Distill-V2-Fp32 <!-- provided-files --> ***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Qwen3-Coder-30B-A3B-Instruct-480B-Distill-V2-Fp32-GGUF).*** weighted/imatrix quants are available at https://huggingface.co/mradermacher/Qwen3-Coder-30B-A3B-Instruct-480B-Distill-V2-Fp32-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Qwen3-Coder-30B-A3B-Instruct-480B-Distill-V2-Fp32-GGUF/resolve/main/Qwen3-Coder-30B-A3B-Instruct-480B-Distill-V2-Fp32.Q2_K.gguf) | Q2_K | 11.4 | | | [GGUF](https://huggingface.co/mradermacher/Qwen3-Coder-30B-A3B-Instruct-480B-Distill-V2-Fp32-GGUF/resolve/main/Qwen3-Coder-30B-A3B-Instruct-480B-Distill-V2-Fp32.Q3_K_S.gguf) | Q3_K_S | 13.4 | | | [GGUF](https://huggingface.co/mradermacher/Qwen3-Coder-30B-A3B-Instruct-480B-Distill-V2-Fp32-GGUF/resolve/main/Qwen3-Coder-30B-A3B-Instruct-480B-Distill-V2-Fp32.Q3_K_M.gguf) | Q3_K_M | 14.8 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Qwen3-Coder-30B-A3B-Instruct-480B-Distill-V2-Fp32-GGUF/resolve/main/Qwen3-Coder-30B-A3B-Instruct-480B-Distill-V2-Fp32.Q3_K_L.gguf) | Q3_K_L | 16.0 | | | [GGUF](https://huggingface.co/mradermacher/Qwen3-Coder-30B-A3B-Instruct-480B-Distill-V2-Fp32-GGUF/resolve/main/Qwen3-Coder-30B-A3B-Instruct-480B-Distill-V2-Fp32.IQ4_XS.gguf) | IQ4_XS | 16.7 | | | [GGUF](https://huggingface.co/mradermacher/Qwen3-Coder-30B-A3B-Instruct-480B-Distill-V2-Fp32-GGUF/resolve/main/Qwen3-Coder-30B-A3B-Instruct-480B-Distill-V2-Fp32.Q4_K_S.gguf) | Q4_K_S | 17.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Qwen3-Coder-30B-A3B-Instruct-480B-Distill-V2-Fp32-GGUF/resolve/main/Qwen3-Coder-30B-A3B-Instruct-480B-Distill-V2-Fp32.Q4_K_M.gguf) | Q4_K_M | 18.7 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Qwen3-Coder-30B-A3B-Instruct-480B-Distill-V2-Fp32-GGUF/resolve/main/Qwen3-Coder-30B-A3B-Instruct-480B-Distill-V2-Fp32.Q5_K_S.gguf) | Q5_K_S | 21.2 | | | [GGUF](https://huggingface.co/mradermacher/Qwen3-Coder-30B-A3B-Instruct-480B-Distill-V2-Fp32-GGUF/resolve/main/Qwen3-Coder-30B-A3B-Instruct-480B-Distill-V2-Fp32.Q5_K_M.gguf) | Q5_K_M | 21.8 | | | [GGUF](https://huggingface.co/mradermacher/Qwen3-Coder-30B-A3B-Instruct-480B-Distill-V2-Fp32-GGUF/resolve/main/Qwen3-Coder-30B-A3B-Instruct-480B-Distill-V2-Fp32.Q6_K.gguf) | Q6_K | 25.2 | very good quality | | 
[GGUF](https://huggingface.co/mradermacher/Qwen3-Coder-30B-A3B-Instruct-480B-Distill-V2-Fp32-GGUF/resolve/main/Qwen3-Coder-30B-A3B-Instruct-480B-Distill-V2-Fp32.Q8_0.gguf) | Q8_0 | 32.6 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/ATLAS-8B-Instruct-GGUF
mradermacher
2025-09-12T09:03:39Z
0
0
transformers
[ "transformers", "gguf", "supervised-fine-tuning", "teacher-model", "pedagogy", "reasoning", "sft", "en", "dataset:Arc-Intelligence/Arc-ATLAS-Teach-v0", "base_model:Arc-Intelligence/ATLAS-8B-Instruct", "base_model:quantized:Arc-Intelligence/ATLAS-8B-Instruct", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-09-12T08:08:35Z
--- base_model: Arc-Intelligence/ATLAS-8B-Instruct datasets: - Arc-Intelligence/Arc-ATLAS-Teach-v0 language: - en library_name: transformers license: apache-2.0 mradermacher: readme_rev: 1 quantized_by: mradermacher tags: - supervised-fine-tuning - teacher-model - pedagogy - reasoning - sft --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> <!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: --> static quants of https://huggingface.co/Arc-Intelligence/ATLAS-8B-Instruct <!-- provided-files --> ***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#ATLAS-8B-Instruct-GGUF).*** weighted/imatrix quants are available at https://huggingface.co/mradermacher/ATLAS-8B-Instruct-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/ATLAS-8B-Instruct-GGUF/resolve/main/ATLAS-8B-Instruct.Q2_K.gguf) | Q2_K | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/ATLAS-8B-Instruct-GGUF/resolve/main/ATLAS-8B-Instruct.Q3_K_S.gguf) | Q3_K_S | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/ATLAS-8B-Instruct-GGUF/resolve/main/ATLAS-8B-Instruct.Q3_K_M.gguf) | Q3_K_M | 4.2 | lower quality | | [GGUF](https://huggingface.co/mradermacher/ATLAS-8B-Instruct-GGUF/resolve/main/ATLAS-8B-Instruct.Q3_K_L.gguf) | Q3_K_L | 4.5 | | | [GGUF](https://huggingface.co/mradermacher/ATLAS-8B-Instruct-GGUF/resolve/main/ATLAS-8B-Instruct.IQ4_XS.gguf) | IQ4_XS | 4.7 | | | [GGUF](https://huggingface.co/mradermacher/ATLAS-8B-Instruct-GGUF/resolve/main/ATLAS-8B-Instruct.Q4_K_S.gguf) | Q4_K_S | 4.9 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/ATLAS-8B-Instruct-GGUF/resolve/main/ATLAS-8B-Instruct.Q4_K_M.gguf) | Q4_K_M | 5.1 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/ATLAS-8B-Instruct-GGUF/resolve/main/ATLAS-8B-Instruct.Q5_K_S.gguf) | Q5_K_S | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/ATLAS-8B-Instruct-GGUF/resolve/main/ATLAS-8B-Instruct.Q5_K_M.gguf) | Q5_K_M | 6.0 | | | [GGUF](https://huggingface.co/mradermacher/ATLAS-8B-Instruct-GGUF/resolve/main/ATLAS-8B-Instruct.Q6_K.gguf) | Q6_K | 6.8 | very good quality | | [GGUF](https://huggingface.co/mradermacher/ATLAS-8B-Instruct-GGUF/resolve/main/ATLAS-8B-Instruct.Q8_0.gguf) | Q8_0 | 8.8 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/ATLAS-8B-Instruct-GGUF/resolve/main/ATLAS-8B-Instruct.f16.gguf) | f16 | 16.5 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
BienKieu/deepseek-7b-lora
BienKieu
2025-09-12T09:03:37Z
10
0
peft
[ "peft", "safetensors", "generated_from_trainer", "sft", "trl", "base_model:deepseek-ai/deepseek-llm-7b-base", "base_model:adapter:deepseek-ai/deepseek-llm-7b-base", "region:us" ]
null
2025-09-10T19:15:04Z
---
base_model: deepseek-ai/deepseek-llm-7b-base
library_name: peft
model_name: deepseek-7b-lora-output
tags:
- generated_from_trainer
- sft
- trl
licence: license
---

# Model Card for deepseek-7b-lora-output

This model is a fine-tuned version of [deepseek-ai/deepseek-llm-7b-base](https://huggingface.co/deepseek-ai/deepseek-llm-7b-base).
It has been trained using [TRL](https://github.com/huggingface/trl).

## Quick start

```python
from transformers import pipeline

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
# This repo hosts a PEFT adapter; transformers loads it onto the base model (requires `peft`).
generator = pipeline("text-generation", model="BienKieu/deepseek-7b-lora", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```

## Training procedure

This model was trained with SFT.

### Framework versions

- PEFT 0.15.2
- TRL: 0.23.0
- Transformers: 4.56.1
- Pytorch: 2.8.0
- Datasets: 3.6.0
- Tokenizers: 0.22.0

## Citations

Cite TRL as:

```bibtex
@misc{vonwerra2022trl,
    title        = {{TRL: Transformer Reinforcement Learning}},
    author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
    year         = 2020,
    journal      = {GitHub repository},
    publisher    = {GitHub},
    howpublished = {\url{https://github.com/huggingface/trl}}
}
```
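Alternatively, the adapter can be attached explicitly with `peft`; a minimal sketch, assuming the repo holds a standard LoRA adapter saved via `save_pretrained`:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "deepseek-ai/deepseek-llm-7b-base"
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(base_id)

# Load the LoRA weights on top of the frozen base model.
model = PeftModel.from_pretrained(base, "BienKieu/deepseek-7b-lora")

inputs = tokenizer("Hello!", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0]))
```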
5456es/bees_prune_Llama-3.2-3B-Instruct_prune_0.5-sigmoid
5456es
2025-09-12T09:03:35Z
26
0
null
[ "safetensors", "llama", "dpo", "preference-learning", "bees", "pruned", "license:apache-2.0", "region:us" ]
null
2025-09-08T04:41:14Z
---
license: apache-2.0
base_model: Llama-3.2-3B-Instruct
tags:
- dpo
- preference-learning
- bees
- pruned
---

# bees_prune_Llama-3.2-3B-Instruct_prune_0.5-sigmoid

This model is a DPO (Direct Preference Optimization) fine-tuned version of Llama-3.2-3B-Instruct using the bees method.

## Model Details

- **Base Model**: Llama-3.2-3B-Instruct
- **Training Method**: bees
- **Pruning Ratio**: 0.5 (per the model name)
- **Training Date**: 2025-09-12

## Training Configuration

This model was trained using Direct Preference Optimization (DPO) with the following characteristics:
- Method: bees
- Pruning applied during training
- Fine-tuned on preference data

## Usage

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "5456es/bees_prune_Llama-3.2-3B-Instruct_prune_0.5-sigmoid"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Example usage
prompt = "Your prompt here"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_length=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

## Training Data

This model was trained on preference data using the DPO algorithm.

## Limitations

This model inherits the limitations of its base model and may have additional limitations due to the pruning process.

## Citation

If you use this model, please cite the original DPO paper and the base model.
5456es/cluster_prune_Qwen2.5-0.5B-Instruct_prune_0.7-sigmoid
5456es
2025-09-12T09:03:04Z
37
0
null
[ "safetensors", "qwen2", "dpo", "preference-learning", "cluster", "pruned", "license:apache-2.0", "region:us" ]
null
2025-09-07T05:08:56Z
---
license: apache-2.0
base_model: Qwen2.5-0.5B-Instruct
tags:
- dpo
- preference-learning
- cluster
- pruned
---

# cluster_prune_Qwen2.5-0.5B-Instruct_prune_0.7-sigmoid

This model is a DPO (Direct Preference Optimization) fine-tuned version of Qwen2.5-0.5B-Instruct using the cluster method.

## Model Details

- **Base Model**: Qwen2.5-0.5B-Instruct
- **Training Method**: cluster
- **Pruning Ratio**: 0.7 (per the model name)
- **Training Date**: 2025-09-12

## Training Configuration

This model was trained using Direct Preference Optimization (DPO) with the following characteristics:
- Method: cluster
- Pruning applied during training
- Fine-tuned on preference data

## Usage

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "5456es/cluster_prune_Qwen2.5-0.5B-Instruct_prune_0.7-sigmoid"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Example usage
prompt = "Your prompt here"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_length=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

## Training Data

This model was trained on preference data using the DPO algorithm.

## Limitations

This model inherits the limitations of its base model and may have additional limitations due to the pruning process.

## Citation

If you use this model, please cite the original DPO paper and the base model.
5456es/selective_dpo_Qwen2.5-1.5B-Instruct_prune_0.3-sigmoid
5456es
2025-09-12T09:02:42Z
28
0
null
[ "safetensors", "qwen2", "dpo", "preference-learning", "selective", "pruned", "license:apache-2.0", "region:us" ]
null
2025-09-07T05:06:28Z
---
license: apache-2.0
base_model: Qwen2.5-1.5B-Instruct
tags:
- dpo
- preference-learning
- selective
- pruned
---

# selective_dpo_Qwen2.5-1.5B-Instruct_prune_0.3-sigmoid

This model is a DPO (Direct Preference Optimization) fine-tuned version of Qwen2.5-1.5B-Instruct using the selective method.

## Model Details

- **Base Model**: Qwen2.5-1.5B-Instruct
- **Training Method**: selective
- **Pruning Ratio**: 0.3 (per the model name)
- **Training Date**: 2025-09-12

## Training Configuration

This model was trained using Direct Preference Optimization (DPO) with the following characteristics:
- Method: selective
- Pruning applied during training
- Fine-tuned on preference data

## Usage

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "5456es/selective_dpo_Qwen2.5-1.5B-Instruct_prune_0.3-sigmoid"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Example usage
prompt = "Your prompt here"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_length=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

## Training Data

This model was trained on preference data using the DPO algorithm.

## Limitations

This model inherits the limitations of its base model and may have additional limitations due to the pruning process.

## Citation

If you use this model, please cite the original DPO paper and the base model.
andersonbcdefg/vl-finetuning-max-thresh-10-2025-09-12
andersonbcdefg
2025-09-12T09:02:41Z
0
0
transformers
[ "transformers", "safetensors", "qwen2_5_vl", "image-to-text", "arxiv:1910.09700", "text-generation-inference", "endpoints_compatible", "region:us" ]
image-to-text
2025-09-12T08:58:41Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
5456es/last_layer_prune_Llama-3.2-3B-Instruct_prune_0.6-sigmoid
5456es
2025-09-12T09:02:15Z
0
0
null
[ "safetensors", "llama", "dpo", "preference-learning", "last", "pruned", "license:apache-2.0", "region:us" ]
null
2025-09-12T08:57:57Z
---
license: apache-2.0
base_model: Llama-3.2-3B-Instruct
tags:
- dpo
- preference-learning
- last
- pruned
---

# last_layer_prune_Llama-3.2-3B-Instruct_prune_0.6-sigmoid

This model is a DPO (Direct Preference Optimization) fine-tuned version of Llama-3.2-3B-Instruct using the last method.

## Model Details

- **Base Model**: Llama-3.2-3B-Instruct
- **Training Method**: last
- **Pruning Ratio**: 0.6 (per the model name)
- **Training Date**: 2025-09-12

## Training Configuration

This model was trained using Direct Preference Optimization (DPO) with the following characteristics:
- Method: last
- Pruning applied during training
- Fine-tuned on preference data

## Usage

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "5456es/last_layer_prune_Llama-3.2-3B-Instruct_prune_0.6-sigmoid"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Example usage
prompt = "Your prompt here"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_length=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

## Training Data

This model was trained on preference data using the DPO algorithm.

## Limitations

This model inherits the limitations of its base model and may have additional limitations due to the pruning process.

## Citation

If you use this model, please cite the original DPO paper and the base model.
manbeast3b/007-american-party-01-2
manbeast3b
2025-09-12T09:00:12Z
0
0
null
[ "safetensors", "any-to-any", "omega", "omegalabs", "bittensor", "agi", "license:mit", "region:us" ]
any-to-any
2025-09-10T00:39:03Z
--- license: mit tags: - any-to-any - omega - omegalabs - bittensor - agi --- This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet. Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
sitaram05s/blockassist
sitaram05s
2025-09-12T09:00:09Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "raging sneaky camel", "arxiv:2504.07091", "region:us" ]
null
2025-09-10T15:46:49Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - raging sneaky camel --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
manbeast3b/007-iphone17-boo-01r15
manbeast3b
2025-09-12T08:59:18Z
0
0
null
[ "safetensors", "any-to-any", "omega", "omegalabs", "bittensor", "agi", "license:mit", "region:us" ]
any-to-any
2025-09-10T14:07:48Z
--- license: mit tags: - any-to-any - omega - omegalabs - bittensor - agi --- This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet. Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
5456es/random_prune_Qwen2.5-7B-Instruct_prune_0.3-sigmoid
5456es
2025-09-12T08:57:56Z
27
0
null
[ "safetensors", "qwen2", "dpo", "preference-learning", "random", "pruned", "license:apache-2.0", "region:us" ]
null
2025-09-08T04:30:45Z
---
license: apache-2.0
base_model: Qwen2.5-7B-Instruct
tags:
- dpo
- preference-learning
- random
- pruned
---

# random_prune_Qwen2.5-7B-Instruct_prune_0.3-sigmoid

This model is a DPO (Direct Preference Optimization) fine-tuned version of Qwen2.5-7B-Instruct using the random method.

## Model Details

- **Base Model**: Qwen2.5-7B-Instruct
- **Training Method**: random
- **Pruning Ratio**: 0.3 (per the model name)
- **Training Date**: 2025-09-12

## Training Configuration

This model was trained using Direct Preference Optimization (DPO) with the following characteristics:
- Method: random
- Pruning applied during training
- Fine-tuned on preference data

## Usage

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "5456es/random_prune_Qwen2.5-7B-Instruct_prune_0.3-sigmoid"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Example usage
prompt = "Your prompt here"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_length=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

## Training Data

This model was trained on preference data using the DPO algorithm.

## Limitations

This model inherits the limitations of its base model and may have additional limitations due to the pruning process.

## Citation

If you use this model, please cite the original DPO paper and the base model.
mkurman/lfm2-350M-med
mkurman
2025-09-12T08:57:38Z
2
0
transformers
[ "transformers", "safetensors", "gguf", "lfm2", "text-generation", "mergekit", "merge", "conversational", "base_model:LiquidAI/LFM2-350M", "base_model:quantized:LiquidAI/LFM2-350M", "license:other", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-09-11T17:46:00Z
---
base_model:
- LiquidAI/LFM2-350M
library_name: transformers
license: other
license_name: lfm1.0
license_link: LICENSE
tags:
- mergekit
- merge
---

# lfm2-350M-med

**Small medical fine-tune on top of LiquidAI's LFM2-350M.**
This checkpoint specializes the 350M LFM2 base for medical Q&A and tool-augmented search, using a light-weight recipe designed for laptops/edge boxes.

> ⚠️ **Medical safety**: This model is **not** a clinician. It may hallucinate and should **not** be used for diagnosis or treatment. Always seek qualified medical supervision.

---

## TL;DR

- **Base**: [LiquidAI/LFM2-350M](https://huggingface.co/LiquidAI/LFM2-350M).
- **Training**:
  1) SFT on **open-source medical data** + **tool-calling (search) traces**
  2) **DPO** preference alignment using **MedMCQA** as a preference signal
  3) Post-merge with the base via **Arcee Fusion** (MergeKit) for controlled weight fusion
- **Eval (author's harness)**
  - **MMLU-Pro**: **19.46** (vs **18.76** base in same harness)
  - **IFEVAL**: **52.595** (vs **61.72** base in same harness)

  _Note_: LFM2's official IFEVAL uses a different internal harness and reports ~65 on IFEVAL for the base; numbers are **not directly comparable** across harnesses.

---

## What's inside

### Base model: LFM2-350M

- Designed for **on-device** inference, with strong CPU latency and a **ChatML-like** template.
- Supports **tool use** with dedicated special tokens (`<tool_call>`, `</tool_call>`, etc.). See the base card for the full template and examples.

### Specialization steps

1. **Domain SFT (medical + tools)**
   - Instruction-style Q&A from open medical sources and synthetic conversions.
   - Tool-use (search) supervised traces to teach function calling patterns.
2. **Preference alignment (DPO)**
   - Direct Preference Optimization with **MedMCQA-derived** preferences to bias toward clinically reasonable short answers.
   - Rationale: DPO is simple, stable at a small scale, and works well for short-form medical responses.
3. **Model fusion (Arcee Fusion)**
   - Final merge uses **Arcee Fusion** in MergeKit, which selectively fuses parameters to avoid over-averaging and can be configured via `merge_method: arcee_fusion`.

---

## Intended use & limitations

**Use**: **education**, **research**.
**Don't use**: any medical advice.

---

## Evaluation

> All results below were run with the author's harness; they **will differ** from LiquidAI's internal suite and Open LLM Leaderboard settings.

| Benchmark | lfm2-350M-med | LFM2-350M (same harness) |
|------------|---------------:|-------------------------:|
| MMLU-Pro | **19.46** | 18.76 |
| IFEVAL | **52.595** | 61.72 |

- **MMLU-Pro** raises difficulty with 10 choices and more reasoning-heavy items—small models typically drop vs standard MMLU, so small absolute movements are meaningful.
- **IFEVAL** measures verifiable instruction-following; scores depend heavily on prompt templates and verification scripts.

---

## Quickstart (Transformers)

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "mkurman/lfm2-350M-med"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="bfloat16")

messages = [
    {"role": "system", "content": "You are a careful medical assistant. Cite sources and warn that outputs are not medical advice."},
    {"role": "user", "content": "Briefly explain the difference between cellulitis and erysipelas."}
]

prompt = tok.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)
out = model.generate(**tok(prompt, return_tensors="pt"), max_new_tokens=256)
print(tok.decode(out[0], skip_special_tokens=True))
```
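---

## Merge configuration (reference)

For reference, an Arcee Fusion merge is expressed in MergeKit roughly as below; this is a hypothetical sketch, **not** the actual config used for this checkpoint (the model path and dtype are placeholders):

```yaml
# Hypothetical MergeKit config; consult the MergeKit docs for the exact schema.
merge_method: arcee_fusion
base_model: LiquidAI/LFM2-350M
models:
  - model: ./lfm2-350m-med-dpo  # placeholder: DPO-aligned checkpoint to fuse
dtype: bfloat16
```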
5456es/last_layer_prune_Llama-3.2-3B-Instruct_prune_0.2-sigmoid
5456es
2025-09-12T08:56:34Z
0
0
null
[ "safetensors", "llama", "dpo", "preference-learning", "last", "pruned", "license:apache-2.0", "region:us" ]
null
2025-09-12T08:52:03Z
---
license: apache-2.0
base_model: Llama-3.2-3B-Instruct
tags:
- dpo
- preference-learning
- last
- pruned
---

# last_layer_prune_Llama-3.2-3B-Instruct_prune_0.2-sigmoid

This model is a DPO (Direct Preference Optimization) fine-tuned version of Llama-3.2-3B-Instruct using the last method.

## Model Details

- **Base Model**: Llama-3.2-3B-Instruct
- **Training Method**: last
- **Pruning Ratio**: 0.2 (per the model name)
- **Training Date**: 2025-09-12

## Training Configuration

This model was trained using Direct Preference Optimization (DPO) with the following characteristics:
- Method: last
- Pruning applied during training
- Fine-tuned on preference data

## Usage

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "5456es/last_layer_prune_Llama-3.2-3B-Instruct_prune_0.2-sigmoid"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Example usage
prompt = "Your prompt here"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_length=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

## Training Data

This model was trained on preference data using the DPO algorithm.

## Limitations

This model inherits the limitations of its base model and may have additional limitations due to the pruning process.

## Citation

If you use this model, please cite the original DPO paper and the base model.
FatimahEmadEldin/Constrained-Track-Document-Bassline-Readability-Arabertv2-d3tok-reg
FatimahEmadEldin
2025-09-12T08:55:08Z
18
0
null
[ "safetensors", "bert", "ar", "dataset:CAMeL-Lab/BAREC-Shared-Task-2025-doc", "base_model:CAMeL-Lab/readability-arabertv2-d3tok-reg", "base_model:finetune:CAMeL-Lab/readability-arabertv2-d3tok-reg", "region:us" ]
null
2025-08-12T15:13:34Z
---
datasets:
- CAMeL-Lab/BAREC-Shared-Task-2025-doc
language:
- ar
base_model:
- aubmindlab/bert-base-arabertv2
- CAMeL-Lab/readability-arabertv2-d3tok-reg
---

# MorphoArabia at BAREC 2025 Shared Task: A Hybrid Architecture with Morphological Analysis for Arabic Readability Assessment

<p align="center">
<img src="https://placehold.co/800x200/dbeafe/3b82f6?text=Barec-Readability-Assessment" alt="Barec Readability Assessment">
</p>

This repository contains the official models and results for **MorphoArabia**, the submission to the **[BAREC 2025 Shared Task](https://sites.google.com/view/barec-2025/home)** on Arabic Readability Assessment.

#### By: [Fatimah Mohamed Emad Elden](https://scholar.google.com/citations?user=CfX6eA8AAAAJ&hl=ar)
#### *Cairo University*

[![Paper](https://img.shields.io/badge/arXiv-25XX.XXXXX-b31b1b.svg)](https://arxiv.org/abs/25XX.XXXXX)
[![Code](https://img.shields.io/badge/GitHub-Code-blue)](https://github.com/astral-fate/barec-Arabic-Readability-Assessment)
[![HuggingFace](https://img.shields.io/badge/HuggingFace-Page-F9D371)](https://huggingface.co/collections/FatimahEmadEldin/barec-shared-task-2025-689195853f581b9a60f9bd6c)
[![License](https://img.shields.io/badge/License-MIT-lightgrey)](https://github.com/astral-fate/mentalqa2025/blob/main/LICENSE)

---

## Model Description

This project introduces a **morphologically-aware approach** for assessing the readability of Arabic text. The system is built around a fine-tuned regression model designed to process morphologically analyzed text. For the **Constrained** and **Open** tracks of the shared task, this core model is extended into a hybrid architecture that incorporates seven engineered lexical features.

A key element of this system is its deep morphological preprocessing pipeline, which uses the **CAMEL Tools d3tok analyzer**. This allows the model to capture linguistic complexities that are often missed by surface-level tokenization methods. This approach proved to be highly effective, achieving a peak **Quadratic Weighted Kappa (QWK) score of 84.2** on the strict sentence-level test set.

The model predicts a readability score on a **19-level scale**, from 1 (easiest) to 19 (hardest), for a given Arabic sentence or document.

-----

# Hybrid Arabic Readability Model (Constrained Track - Document Level)

This repository contains a fine-tuned hybrid model for **document-level** Arabic readability assessment. It was trained for the Constrained Track of the BAREC competition.

The model combines the textual understanding of **CAMeL-Lab/readability-arabertv2-d3tok-reg** with 7 additional lexical features to produce a regression-based readability score for full documents.

**NOTE:** This is a custom model architecture. You **must** use the `trust_remote_code=True` argument when loading it.

## How to Use

The model requires both the document text and a tensor containing 7 numerical features.

### Step 1: Installation

Install the necessary libraries:

```bash
pip install transformers torch pandas arabert
```

### Step 2: Full Inference Example

This example shows how to preprocess a document, extract features, and get a readability score.

```python
import torch
import numpy as np
from transformers import AutoTokenizer, AutoModel
from arabert.preprocess import ArabertPreprocessor

# --- 1. Define the Feature Engineering Function ---
def get_lexical_features(text, lexicon):
    words = text.split()
    if not words:
        return [0.0] * 7
    word_difficulties = [lexicon.get(word, 3.0) for word in words]
    features = [
        float(len(text)),
        float(len(words)),
        float(np.mean([len(w) for w in words]) if words else 0.0),
        float(np.mean(word_difficulties)),
        float(np.max(word_difficulties)),
        float(np.sum(np.array(word_difficulties) > 4)),
        float(len([w for w in words if w not in lexicon]) / len(words))
    ]
    return features

# --- 2. Initialize Models and Processors ---
repo_id = "FatimahEmadEldin/Constrained-Track-Document-Bassline-Readability-Arabertv2-d3tok-reg"
arabert_preprocessor = ArabertPreprocessor(model_name="aubmindlab/bert-large-arabertv2")
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModel.from_pretrained(repo_id, trust_remote_code=True)

# --- 3. Prepare Input Document and Lexicon ---
# For a real use case, load the full SAMER lexicon.
sample_lexicon = {'جملة': 2.5, 'عربية': 3.1, 'بسيطة': 1.8, 'النص': 2.8, 'طويل': 3.5}
document_text = "هذا مثال لجملة عربية بسيطة. هذا النص أطول قليلاً من المثال السابق."

# --- 4. Run the Full Pipeline ---
preprocessed_text = arabert_preprocessor.preprocess(document_text)
numerical_features_list = get_lexical_features(preprocessed_text, sample_lexicon)
numerical_features = torch.tensor([numerical_features_list], dtype=torch.float)

inputs = tokenizer(preprocessed_text, return_tensors="pt", padding=True, truncation=True, max_length=512)
inputs['extra_features'] = numerical_features  # The model expects 'extra_features'

# --- 5. Perform Inference ---
model.eval()
with torch.no_grad():
    logits = model(**inputs)[1]  # The model returns (loss, logits)

# --- 6. Process the Output ---
predicted_score = logits.item()
final_level = round(max(0, min(18, predicted_score))) + 1

print(f"Input Document: '{document_text}'")
print(f"Raw Regression Score: {predicted_score:.4f}")
print(f"Predicted Readability Level (1-19): {final_level}")
```

## ⚙️ Training Procedure

The system employs two distinct architectures based on the track's constraints:

* **Strict Track**: This track uses a base regression model, `CAMeL-Lab/readability-arabertv2-d3tok-reg`, fine-tuned directly on the BAREC dataset.
* **Constrained and Open Tracks**: These tracks utilize a hybrid model. This architecture combines the deep contextual understanding of the Transformer with explicit numerical features. The final representation for a sentence is created by concatenating the Transformer's `[CLS]` token embedding with a 7-dimensional vector of engineered lexical features derived from the SAMER lexicon.

A critical component of the system is its preprocessing pipeline, which leverages the CAMEL Tools `d3tok` format. The `d3tok` analyzer performs a deep morphological analysis by disambiguating words in context and then segmenting them into their constituent morphemes.

### Frameworks

* PyTorch
* Hugging Face Transformers

-----

### 📊 Evaluation Results

The models were evaluated on the blind test set provided by the BAREC organizers. The primary metric for evaluation is the **Quadratic Weighted Kappa (QWK)**, which penalizes larger disagreements more severely.
#### Final Test Set Scores (QWK)

| Track | Task | Dev (QWK) | Test (QWK) |
| :--- | :--- | :---: | :---: |
| **Strict** | Sentence | 0.823 | **84.2** |
| | Document | 0.823\* | 79.9 |
| **Constrained** | Sentence | 0.810 | 82.9 |
| | Document | 0.835\* | 75.5 |
| **Open** | Sentence | 0.827 | 83.6 |
| | Document | 0.827\* | **79.2** |

\*Document-level dev scores are based on the performance of the sentence-level model on the validation set.

-----

## 📜 Citation

If you use this work, please cite the paper:

```
@inproceedings{eldin2025morphoarabia,
  title={{MorphoArabia at BAREC 2025 Shared Task: A Hybrid Architecture with Morphological Analysis for Arabic Readability Assessment}},
  author={Eldin, Fatimah Mohamed Emad},
  year={2025},
  booktitle={Proceedings of the BAREC 2025 Shared Task},
  eprint={25XX.XXXXX},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```
Loomel/prior-model
Loomel
2025-09-12T08:54:32Z
84
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "text-generation-inference", "unsloth", "conversational", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-07-16T16:39:30Z
--- base_model: unsloth/qwen3-4b-base-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - qwen3 license: apache-2.0 language: - en --- # Uploaded finetuned model - **Developed by:** Loomel - **License:** apache-2.0 - **Finetuned from model :** unsloth/qwen3-4b-base-unsloth-bnb-4bit This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
llllwxxx/Qwen3-Next-80B-A3B-Thinking-FP8-Dynamic
llllwxxx
2025-09-12T08:53:31Z
0
4
null
[ "base_model:Qwen/Qwen3-Next-80B-A3B-Thinking", "base_model:quantized:Qwen/Qwen3-Next-80B-A3B-Thinking", "region:us" ]
null
2025-09-12T08:19:26Z
---
base_model:
- Qwen/Qwen3-Next-80B-A3B-Thinking
base_model_relation: quantized
---

# Qwen3-80B FP8 Dynamic Quantization with LLMCompressor

## Introduction

This repository provides an FP8 dynamic quantization of [Qwen/Qwen3-Next-80B-A3B-Thinking](https://huggingface.co/Qwen/Qwen3-Next-80B-A3B-Thinking), produced with LLMCompressor and servable with vLLM.

---

## Environment Requirements

- **Python 3.10+**
- **NVIDIA GPU** with FP8 support (e.g., Hopper-architecture H100)
- **CUDA 12.x**
- **PyTorch 2.6**
- **Dependencies installation**:

```bash
uv pip install llmcompressor torch
uv pip install git+https://github.com/huggingface/transformers.git@main
```

---

## Usage Steps

1. Save the following script as `quantize.py`:

```python
from llmcompressor.transformers import SparseAutoModelForCausalLM, oneshot
from llmcompressor.modifiers.quantization import QuantizationModifier
from transformers import AutoTokenizer

model_name = "Qwen/Qwen3-Next-80B-A3B-Thinking"

# Load tokenizer and model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = SparseAutoModelForCausalLM.from_pretrained(
    model_name,
    dtype="auto",
    device_map="auto"
)

# Configure simple PTQ quantization
recipe = QuantizationModifier(
    targets="Linear",
    scheme="FP8_DYNAMIC",
    ignore=[
        "lm_head",
        "re:.*mlp.gate$",            # Ignore standard gate layers
        "re:.*shared_expert_gate$",  # Ignore shared expert gate layers
        "re:.*router$"               # Ignore router layers
    ]
)

# Apply quantization algorithm
oneshot(model=model, recipe=recipe)

# Save model
SAVE_DIR = model_name.split("/")[1] + "-FP8-Dynamic"
model.save_pretrained(SAVE_DIR)
tokenizer.save_pretrained(SAVE_DIR)
```

2. Run the script:

```bash
python quantize.py
```

3. The quantized model will be saved in the `Qwen3-Next-80B-A3B-Thinking-FP8-Dynamic` directory. It can then be served with vLLM:

```bash
VLLM_ALLOW_LONG_MAX_MODEL_LEN=1 vllm serve Qwen3-Next-80B-A3B-Thinking-FP8-Dynamic --port 8080 --tensor-parallel-size 2 --api-key 123 --gpu-memory-utilization 0.95 --max_num_seqs 2 --max-model-len 131072 --enable-auto-tool-choice --tool-call-parser hermes --reasoning-parser deepseek_r1
# --speculative-config '{"method":"qwen3_next_mtp","num_speculative_tokens":2}'
```

---

## Notes

1. **There are compatibility issues between the quantized model and MTP (multi-token prediction), which is why the `--speculative-config` line above is left commented out.**

---

## References

- [LLMCompressor Official Documentation](https://vllm.hyper.ai/docs/features/quantization/fp8)
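Once the server is up, a quick way to sanity-check the deployment is an OpenAI-compatible client call. A minimal sketch, assuming the `--port 8080` and `--api-key 123` values from the serve command above and the `openai` Python package:

```python
from openai import OpenAI

# Matches the --port and --api-key flags passed to `vllm serve` above.
client = OpenAI(base_url="http://localhost:8080/v1", api_key="123")

resp = client.chat.completions.create(
    model="Qwen3-Next-80B-A3B-Thinking-FP8-Dynamic",
    messages=[{"role": "user", "content": "Summarize FP8 dynamic quantization in two sentences."}],
    max_tokens=256,
)
print(resp.choices[0].message.content)
```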
resproj007/torgo_healthy_female_sesame_1b_FC02
resproj007
2025-09-12T08:53:24Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "csm", "trl", "en", "base_model:unsloth/csm-1b", "base_model:finetune:unsloth/csm-1b", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-09-12T08:53:05Z
--- base_model: unsloth/csm-1b tags: - text-generation-inference - transformers - unsloth - csm - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** resproj007 - **License:** apache-2.0 - **Finetuned from model :** unsloth/csm-1b This csm model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
5456es/cluster_prune_Qwen2.5-0.5B-Instruct_prune_0.3-sigmoid
5456es
2025-09-12T08:52:02Z
21
0
null
[ "safetensors", "qwen2", "dpo", "preference-learning", "cluster", "pruned", "license:apache-2.0", "region:us" ]
null
2025-09-07T05:03:42Z
---
license: apache-2.0
base_model: Qwen2.5-0.5B-Instruct
tags:
- dpo
- preference-learning
- cluster
- pruned
---

# cluster_prune_Qwen2.5-0.5B-Instruct_prune_0.3-sigmoid

This model is a DPO (Direct Preference Optimization) fine-tuned version of Qwen2.5-0.5B-Instruct using the cluster method.

## Model Details

- **Base Model**: Qwen2.5-0.5B-Instruct
- **Training Method**: cluster
- **Pruning Ratio**: 0.3 (per the model name)
- **Training Date**: 2025-09-12

## Training Configuration

This model was trained using Direct Preference Optimization (DPO) with the following characteristics:
- Method: cluster
- Pruning applied during training
- Fine-tuned on preference data

## Usage

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "5456es/cluster_prune_Qwen2.5-0.5B-Instruct_prune_0.3-sigmoid"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Example usage
prompt = "Your prompt here"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_length=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

## Training Data

This model was trained on preference data using the DPO algorithm.

## Limitations

This model inherits the limitations of its base model and may have additional limitations due to the pruning process.

## Citation

If you use this model, please cite the original DPO paper and the base model.
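The card does not include the training script itself; as a rough illustration of the DPO setup described above, here is a minimal TRL sketch. The toy dataset and hyperparameters are assumptions for illustration, not the exact recipe (or pruning step) used for this model:

```python
from datasets import Dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base = "Qwen/Qwen2.5-0.5B-Instruct"
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

# DPO expects prompt / chosen / rejected triples; this toy set stands in
# for the real preference data used to train this model.
train_dataset = Dataset.from_dict({
    "prompt": ["Explain DPO in one sentence."],
    "chosen": ["DPO optimizes a policy directly on preference pairs without a separate reward model."],
    "rejected": ["I don't know."],
})

args = DPOConfig(output_dir="dpo-out", beta=0.1, per_device_train_batch_size=1)
trainer = DPOTrainer(model=model, args=args, train_dataset=train_dataset,
                     processing_class=tokenizer)
trainer.train()
```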
5456es/cluster_prune_Qwen2.5-1.5B-Instruct_prune_0.3-sigmoid
5456es
2025-09-12T08:51:40Z
31
0
null
[ "safetensors", "qwen2", "dpo", "preference-learning", "cluster", "pruned", "license:apache-2.0", "region:us" ]
null
2025-09-07T05:01:16Z
---
license: apache-2.0
base_model: Qwen2.5-1.5B-Instruct
tags:
- dpo
- preference-learning
- cluster
- pruned
---

# cluster_prune_Qwen2.5-1.5B-Instruct_prune_0.3-sigmoid

This model is a DPO (Direct Preference Optimization) fine-tuned version of Qwen2.5-1.5B-Instruct using the cluster method.

## Model Details

- **Base Model**: Qwen2.5-1.5B-Instruct
- **Training Method**: cluster
- **Pruning Ratio**: 0.3 (per the model name)
- **Training Date**: 2025-09-12

## Training Configuration

This model was trained using Direct Preference Optimization (DPO) with the following characteristics:
- Method: cluster
- Pruning applied during training
- Fine-tuned on preference data

## Usage

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "5456es/cluster_prune_Qwen2.5-1.5B-Instruct_prune_0.3-sigmoid"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Example usage
prompt = "Your prompt here"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_length=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

## Training Data

This model was trained on preference data using the DPO algorithm.

## Limitations

This model inherits the limitations of its base model and may have additional limitations due to the pruning process.

## Citation

If you use this model, please cite the original DPO paper and the base model.
5456es/last_layer_prune_Llama-3.2-1B-Instruct_prune_0.6-sigmoid
5456es
2025-09-12T08:51:08Z
0
0
null
[ "safetensors", "llama", "dpo", "preference-learning", "last", "pruned", "license:apache-2.0", "region:us" ]
null
2025-09-12T08:49:03Z
---
license: apache-2.0
base_model: Llama-3.2-1B-Instruct
tags:
- dpo
- preference-learning
- last
- pruned
---

# last_layer_prune_Llama-3.2-1B-Instruct_prune_0.6-sigmoid

This model is a DPO (Direct Preference Optimization) fine-tuned version of Llama-3.2-1B-Instruct using the last-layer method.

## Model Details

- **Base Model**: Llama-3.2-1B-Instruct
- **Training Method**: last
- **Pruning Ratio**: 0.6 (per the model name)
- **Training Date**: 2025-09-12

## Training Configuration

This model was trained using Direct Preference Optimization (DPO) with the following characteristics:
- Method: last
- Pruning applied during training
- Fine-tuned on preference data

## Usage

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "5456es/last_layer_prune_Llama-3.2-1B-Instruct_prune_0.6-sigmoid"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Example usage
prompt = "Your prompt here"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_length=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

## Training Data

This model was trained on preference data using the DPO algorithm.

## Limitations

This model inherits the limitations of its base model and may have additional limitations due to the pruning process.

## Citation

If you use this model, please cite the original DPO paper and the base model.
BKM1804/d3e0b177-7126-439b-8861-e7131c9367e6
BKM1804
2025-09-12T08:46:13Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-09-11T14:38:32Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
prithivMLmods/Gliese-OCR-7B-Post1.0
prithivMLmods
2025-09-12T08:45:41Z
0
0
null
[ "safetensors", "qwen2_5_vl", "image-to-text", "license:apache-2.0", "region:us" ]
image-to-text
2025-09-10T18:31:55Z
--- license: apache-2.0 pipeline_tag: image-to-text ---
Alicia22/Ali_Frid_F19
Alicia22
2025-09-12T08:42:28Z
0
0
null
[ "safetensors", "any-to-any", "omega", "omegalabs", "bittensor", "agi", "license:mit", "region:us" ]
any-to-any
2025-09-12T08:40:02Z
--- license: mit tags: - any-to-any - omega - omegalabs - bittensor - agi --- This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet. Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
stonermay/blockassist-bc-diving_lightfooted_caterpillar_1757666358
stonermay
2025-09-12T08:40:24Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "diving lightfooted caterpillar", "arxiv:2504.07091", "region:us" ]
null
2025-09-12T08:40:17Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - diving lightfooted caterpillar --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
nobu222/rakugo-lora-gemma2
nobu222
2025-09-12T08:38:18Z
0
0
null
[ "safetensors", "region:us" ]
null
2025-09-12T08:30:26Z
---
title: "Rakugo LoRA Space"
emoji: 🎭
colorFrom: indigo
colorTo: pink
sdk: gradio
sdk_version: "4.0"
app_file: app.py
pinned: false
---

# Rakugo LoRA (Shinshō-style makura enhancement) for Gemma 2

- **Base**: `google/gemma-2-9b-it`
- **Adapter**: LoRA (r=32, alpha=64, trained with QLoRA)
- **Style**: "pick up an audience question → ippun senkō (a one-minute short comic story) → observational gag → rule-of-three gag → 'speaking of ◯◯…' → opening of the makura"

## Notes

- Please follow the base model's terms of use (access application / license).
- The model imitates this cultural style of expression, but it has been trained to avoid inappropriate expressions.
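Since this repository ships a LoRA adapter rather than full weights, the adapter has to be attached to the base model at load time. A minimal PEFT sketch, assuming access to the gated `google/gemma-2-9b-it` weights; the prompt is illustrative:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "google/gemma-2-9b-it"
adapter_id = "nobu222/rakugo-lora-gemma2"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto", device_map="auto")
model = PeftModel.from_pretrained(model, adapter_id)  # attach the rakugo LoRA

prompt = "Open a rakugo makura about trains, in Shinshō's style."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```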
HectorHe/Qwen1.5-MOE-aux-free-sft-math7k-5e-5-gamma
HectorHe
2025-09-12T08:38:05Z
6
0
transformers
[ "transformers", "safetensors", "qwen2_moe", "text-generation", "generated_from_trainer", "open-r1", "trl", "sft", "conversational", "dataset:HectorHe/math7k", "base_model:Qwen/Qwen1.5-MoE-A2.7B", "base_model:finetune:Qwen/Qwen1.5-MoE-A2.7B", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-09-10T21:21:41Z
--- base_model: Qwen/Qwen1.5-MoE-A2.7B datasets: HectorHe/math7k library_name: transformers model_name: Qwen1.5-MOE-aux-free-sft-math7k-5e-5-gamma tags: - generated_from_trainer - open-r1 - trl - sft licence: license --- # Model Card for Qwen1.5-MOE-aux-free-sft-math7k-5e-5-gamma This model is a fine-tuned version of [Qwen/Qwen1.5-MoE-A2.7B](https://huggingface.co/Qwen/Qwen1.5-MoE-A2.7B) on the [HectorHe/math7k](https://huggingface.co/datasets/HectorHe/math7k) dataset. It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="HectorHe/Qwen1.5-MOE-aux-free-sft-math7k-5e-5-gamma", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/hector_-carnegie-mellon-university/huggingface/runs/ipdap84m) This model was trained with SFT. ### Framework versions - TRL: 0.16.0.dev0 - Transformers: 4.51.0 - Pytorch: 2.6.0 - Datasets: 4.0.0 - Tokenizers: 0.21.4 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
maidacundo/annie-lite-v0.3.1-ckpt-260-qwen3-8b
maidacundo
2025-09-12T08:36:16Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "text-generation-inference", "unsloth", "conversational", "en", "base_model:unsloth/Qwen3-8B-unsloth-bnb-4bit", "base_model:finetune:unsloth/Qwen3-8B-unsloth-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-09-12T08:32:39Z
--- base_model: unsloth/Qwen3-8B-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - qwen3 license: apache-2.0 language: - en --- # Uploaded finetuned model - **Developed by:** maidacundo - **License:** apache-2.0 - **Finetuned from model :** unsloth/Qwen3-8B-unsloth-bnb-4bit This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
5456es/last_layer_prune_Llama-3.2-1B-Instruct_prune_0.8-sigmoid
5456es
2025-09-12T08:35:19Z
0
0
null
[ "safetensors", "llama", "dpo", "preference-learning", "last", "pruned", "license:apache-2.0", "region:us" ]
null
2025-09-12T08:33:11Z
---
license: apache-2.0
base_model: Llama-3.2-1B-Instruct
tags:
- dpo
- preference-learning
- last
- pruned
---

# last_layer_prune_Llama-3.2-1B-Instruct_prune_0.8-sigmoid

This model is a DPO (Direct Preference Optimization) fine-tuned version of Llama-3.2-1B-Instruct using the last-layer method.

## Model Details

- **Base Model**: Llama-3.2-1B-Instruct
- **Training Method**: last
- **Pruning Ratio**: 0.8 (per the model name)
- **Training Date**: 2025-09-12

## Training Configuration

This model was trained using Direct Preference Optimization (DPO) with the following characteristics:
- Method: last
- Pruning applied during training
- Fine-tuned on preference data

## Usage

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "5456es/last_layer_prune_Llama-3.2-1B-Instruct_prune_0.8-sigmoid"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Example usage
prompt = "Your prompt here"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_length=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

## Training Data

This model was trained on preference data using the DPO algorithm.

## Limitations

This model inherits the limitations of its base model and may have additional limitations due to the pruning process.

## Citation

If you use this model, please cite the original DPO paper and the base model.
yonggwon/gemma-3-12b-it-Rude-LORA
yonggwon
2025-09-12T08:34:47Z
0
0
transformers
[ "transformers", "safetensors", "gemma3", "image-text-to-text", "conversational", "arxiv:1910.09700", "text-generation-inference", "endpoints_compatible", "region:us" ]
image-text-to-text
2025-09-12T08:30:52Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Avokado777/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-fast_small_gibbon
Avokado777
2025-09-12T08:32:32Z
7
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am fast small gibbon", "trl", "genrl-swarm", "I am fast_small_gibbon", "conversational", "arxiv:2402.03300", "base_model:unsloth/Qwen2.5-0.5B-Instruct", "base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-03T23:03:53Z
--- base_model: unsloth/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-fast_small_gibbon tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am fast small gibbon - trl - genrl-swarm - I am fast_small_gibbon licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-fast_small_gibbon This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="Avokado777/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-fast_small_gibbon", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/corobov-mitya-individual/huggingface/runs/zcdsijaj) This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.17.0 - Transformers: 4.51.3 - Pytorch: 2.7.0 - Datasets: 3.5.1 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
Alicia22/Ali_Frid_F16
Alicia22
2025-09-12T08:32:17Z
0
0
null
[ "safetensors", "any-to-any", "omega", "omegalabs", "bittensor", "agi", "license:mit", "region:us" ]
any-to-any
2025-09-12T08:29:45Z
--- license: mit tags: - any-to-any - omega - omegalabs - bittensor - agi --- This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet. Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
kakimoto/act-airhockey-step100k
kakimoto
2025-09-12T08:30:58Z
0
0
lerobot
[ "lerobot", "safetensors", "act", "robotics", "dataset:kakimoto/record-hockey-640x480", "arxiv:2304.13705", "license:apache-2.0", "region:us" ]
robotics
2025-09-12T08:30:36Z
--- datasets: kakimoto/record-hockey-640x480 library_name: lerobot license: apache-2.0 model_name: act pipeline_tag: robotics tags: - act - lerobot - robotics --- # Model Card for act <!-- Provide a quick summary of what the model is/does. --> [Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high success rates. This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot). See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index). --- ## How to Get Started with the Model For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy). Below is the short version on how to train and run inference/eval: ### Train from scratch ```bash lerobot-train \ --dataset.repo_id=${HF_USER}/<dataset> \ --policy.type=act \ --output_dir=outputs/train/<desired_policy_repo_id> \ --job_name=lerobot_training \ --policy.device=cuda \ --policy.repo_id=${HF_USER}/<desired_policy_repo_id> --wandb.enable=true ``` _Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._ ### Evaluate the policy/run inference ```bash lerobot-record \ --robot.type=so100_follower \ --dataset.repo_id=<hf_user>/eval_<dataset> \ --policy.path=<hf_user>/<desired_policy_repo_id> \ --episodes=10 ``` Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint. --- ## Model Details - **License:** apache-2.0
stonermay/blockassist-bc-diving_lightfooted_caterpillar_1757665747
stonermay
2025-09-12T08:30:38Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "diving lightfooted caterpillar", "arxiv:2504.07091", "region:us" ]
null
2025-09-12T08:30:16Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - diving lightfooted caterpillar --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
inclusionAI/GroveMoE-Inst
inclusionAI
2025-09-12T08:30:17Z
382
31
transformers
[ "transformers", "safetensors", "qwen3_moe", "text-generation", "conversational", "custom_code", "arxiv:2508.07785", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-08-18T05:28:51Z
---
library_name: transformers
license: apache-2.0
pipeline_tag: text-generation
---

# GroveMoE-Inst

<p align="left">
🤗 <a href="https://huggingface.co/collections/inclusionAI/grovemoe-68a2b58acbb55827244ef664">Models</a>&nbsp;&nbsp; | &nbsp;&nbsp; 📑 <a href="https://arxiv.org/abs/2508.07785">Paper</a> &nbsp;&nbsp; | &nbsp;&nbsp; 🔗 <a href="https://github.com/inclusionAI/GroveMoE">Github</a>

## Highlights

We introduce **GroveMoE**, a new sparse architecture using **adjugate experts** for dynamic computation allocation, featuring the following key highlights:

- **Architecture**: Novel **adjugate experts** grouped with ordinary experts; shared computation is executed once, then reused, cutting FLOPs.
- **Sparse Activation**: 33B params total, only **3.14–3.28B** active per token.
- **Training**: Mid-training + SFT, up-cycled from Qwen3-30B-A3B-Base; preserves prior knowledge while adding new capabilities.

## Model Downloads

| **Model** | **#Total Params** | **#Activated Params** | **HF Download** | **MS Download** |
|:---------:|:-----------------:|:---------------------:|:------------:|:------------:|
| GroveMoE-Base | 33B | 3.14~3.28B | [🤗 HuggingFace](https://huggingface.co/inclusionAI/GroveMoE-Base) | [📦 ModelScope](https://modelscope.cn/models/cccnju/GroveMoE-Base) |
| GroveMoE-Inst | 33B | 3.14~3.28B | [🤗 HuggingFace](https://huggingface.co/inclusionAI/GroveMoE-Inst) | [📦 ModelScope](https://modelscope.cn/models/cccnju/GroveMoE-Inst) |

## Performance

| Model | Activated Params | MMLU-Pro | SuperGPQA | GPQA-Diamond | OlympiadBench | Omni-math | AIME'25 | MultiPL-E | LiveCodeBench v6 |
|:-----:|:----------------:|:------------:|:-------------:|:------------:|:-----------------:|:------------:|:------------------:|:------------------:|:------------------:|
|Llama4-Scout| 17B | 64.9 | 42.0 | 55.6 | 56.6 | 30.2 | 10.0 | 45.0 | 32.0 |
|Qwen3-30B-A3B| 3B | 63.3 | 40.5 | 51.7 | 60.3 | 33.7 | 21.7 | 66.0 | 29.4 |
|Qwen3-32B| 32B | 68.2 | 43.0 | 53.6 | 59.5 | 31.8 | 22.9 | 68.6 | 28.6 |
|Gemma3-27B-IT| 27B | 67.1 | 35.6 | 45.3 | 59.9 | 33.3 | 23.1 | 65.5 | 30.9 |
|Mistral-Small-3.2| 24B | 68.1 | 37.5 | 59.9 | 61.9 | 33.4 | 28.1 | 69.5 | 32.2 |
|GroveMoE-Inst|3.14~3.28B | <font color=#FBD98D>**72.8**</font> | <font color=#FBD98D>**47.7**</font> | <font color=#FBD98D>**61.3**</font> |<font color=#FBD98D>**71.2**</font> |<font color=#FBD98D>**43.5**</font> | <font color=#FBD98D>**44.4**</font> |<font color=#FBD98D>**74.5**</font> | <font color=#FBD98D>**34.6**</font> |

The top-1 score across all models is shown in bold for each benchmark. More details are reported in our [technical report](https://arxiv.org/abs/2508.07785).

## Run GroveMoE

### 🤗 Transformers Quick Start

Below are some code snippets to help you get started quickly with the model. First, install the Transformers library:

```sh
$ pip install transformers==4.51.3
```

Then, copy the snippet from the section that is relevant for your use case.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "inclusionAI/GroveMoE-Inst"

# load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)

# prepare the model input
prompt = "Give me a short introduction to large language model."
messages = [
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

# conduct text completion
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=16384
)
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()

content = tokenizer.decode(output_ids, skip_special_tokens=True)

print("content:", content)
```

### 🚀 SGLang Quick Start

For SGLang, you can follow the steps below to deploy:

1️⃣ Install Dependencies

First, clone the repository:

```shell
git clone https://github.com/inclusionAI/GroveMoE.git
```

Then, install Transformers:

```shell
cd src/transformers-4.51.3
pip install .
```

Next, install SGLang:

```shell
cd src/sglang-0.4.6.post5
pip install .
```

2️⃣ Launch the Server

Run the following command to start SGLang:

```shell
python -m sglang.launch_server \
  --model-path inclusionAI/GroveMoE-Inst \
  --port 30000 \
  --context-length 32768
```

3️⃣ Access the API

Once started, the OpenAI-compatible API will be available at `http://localhost:30000/v1`. Test it with curl:

```shell
curl http://localhost:30000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "inclusionAI/GroveMoE-Inst",
    "messages": [{"role": "user", "content": "Hello, SGLang!"}]
  }'
```

### llama.cpp

Thanks to @CISCai, llama.cpp support is available via https://github.com/ggml-org/llama.cpp/pull/15510.

## Best Practices for Model Configuration

To achieve optimal performance, we recommend the following settings:

1. **Sampling Parameters**:
   - We suggest using `Temperature=0.7`, `TopP=0.8`, `TopK=20`, and `MinP=0`. (⚠️ For benchmarking scenarios requiring sampling (e.g., AIME), these parameters must be explicitly configured.)
2. **Adequate Output Length**: Set the output length to 16,384 tokens for general use cases to accommodate complex reasoning tasks in instruct models.
3. **Standardize Output Format**: We recommend using prompts to standardize model outputs when benchmarking.
   - **Math Problems**: Include "Please reason step by step, and put your final answer within \boxed{}." in the prompt.
   - **Multiple-Choice Questions**: Add the following JSON structure to the prompt to standardize responses: "Please show your choice in the `answer` field with only the choice letter, e.g., `"answer": "C"`."

## Citation

```bibtex
@article{GroveMoE,
  title   = {GroveMoE: Towards Efficient and Superior MoE LLMs with Adjugate Experts},
  author  = {Wu, Haoyuan and Chen, Haoxing and Chen, Xiaodong and Zhou, Zhanchao and Chen, Tieyuan and Zhuang, Yihong and Lu, Guoshan and Zhao, Junbo and Liu, Lin and Huang, Zenan and Lan, Zhenzhong and Yu, Bei and Li, Jianguo},
  journal = {arXiv preprint arXiv:2508.07785},
  year    = {2025}
}
```
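Putting the recommended sampling parameters into practice, the quick-start generation call can be adjusted as follows (a sketch reusing the `model_inputs` from the Transformers example above; these are standard `generate` kwargs, not a GroveMoE-specific API):

```python
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=16384,  # adequate length for complex reasoning
    do_sample=True,
    temperature=0.7,       # recommended: Temperature=0.7
    top_p=0.8,             # recommended: TopP=0.8
    top_k=20,              # recommended: TopK=20
    min_p=0.0,             # recommended: MinP=0
)
```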
Kijai/WanVideo_comfy_fp8_scaled
Kijai
2025-09-12T08:29:31Z
259,971
206
diffusion-single-file
[ "diffusion-single-file", "comfyui", "base_model:Wan-AI/Wan2.1-VACE-1.3B", "base_model:finetune:Wan-AI/Wan2.1-VACE-1.3B", "license:apache-2.0", "region:us" ]
null
2025-07-22T10:39:42Z
---
tags:
- diffusion-single-file
- comfyui
license: apache-2.0
base_model:
- Wan-AI/Wan2.1-VACE-14B
- Wan-AI/Wan2.1-VACE-1.3B
---

Improved fp8 scaled models (closer to fp16 quality than plain fp8 casts), based on quantization code from https://github.com/Tencent-Hunyuan/HunyuanVideo/blob/main/hyvideo/modules/fp8_optimization.py

Can be used with https://github.com/kijai/ComfyUI-WanVideoWrapper (latest version) and the ComfyUI native WanVideo nodes.

14B-T2V comparison test without LoRAs, 25 steps, 832x480x81

---

<video controls autoplay width=50% src=https://cdn-uploads.huggingface.co/production/uploads/63297908f0b2fc94904a65b8/DwlAGbj20it1unZW54NDC.mp4></video>

2.2 A14B-T2V test

---

<video controls autoplay width=50% src=https://cdn-uploads.huggingface.co/production/uploads/63297908f0b2fc94904a65b8/6A_AZ7GN_uxeRH0vwsWkH.mp4></video>

<video controls autoplay width=50% src=https://cdn-uploads.huggingface.co/production/uploads/63297908f0b2fc94904a65b8/GpuqQ4YwoR3kjxkhuvP8P.mp4></video>

The e5m2 model marked as v2 is the one uploaded here; all of these models are scaled, even where the filenames do not say so.
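For readers wondering what "scaled" means here: instead of casting weights straight to fp8, each tensor is rescaled so its dynamic range fills the fp8 format before casting, and the scale is kept for dequantization. A minimal PyTorch sketch of the general idea (an illustration of the technique, not the exact code from the HunyuanVideo repository linked above):

```python
import torch

def to_fp8_scaled(w: torch.Tensor):
    # Map the tensor's max magnitude onto the e4m3 max (~448),
    # cast to fp8, and keep the scale for dequantization at load time.
    scale = w.abs().max() / torch.finfo(torch.float8_e4m3fn).max
    w_fp8 = (w / scale).to(torch.float8_e4m3fn)
    return w_fp8, scale

w = torch.randn(4096, 4096, dtype=torch.float16)
w_fp8, scale = to_fp8_scaled(w)
w_back = w_fp8.to(torch.float16) * scale  # approximate reconstruction
print(f"max abs error: {(w - w_back).abs().max().item():.4f}")
```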
jumanaawk/money_detection
jumanaawk
2025-09-12T08:28:27Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-09-12T08:28:27Z
--- license: apache-2.0 ---
DoppelReflEx/CirtusMandarin-14B
DoppelReflEx
2025-09-12T08:23:39Z
0
1
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "mergekit", "merge", "conversational", "arxiv:2311.03099", "base_model:NousResearch/Hermes-4-14B", "base_model:merge:NousResearch/Hermes-4-14B", "base_model:ReadyArt/The-Omega-Directive-Qwen3-14B-v1.1", "base_model:merge:ReadyArt/The-Omega-Directive-Qwen3-14B-v1.1", "base_model:nbeerbower/Vitus-Qwen3-14B", "base_model:merge:nbeerbower/Vitus-Qwen3-14B", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-12T08:18:38Z
--- base_model: - nbeerbower/Vitus-Qwen3-14B - ReadyArt/The-Omega-Directive-Qwen3-14B-v1.1 - NousResearch/Hermes-4-14B library_name: transformers tags: - mergekit - merge --- # merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [DARE TIES](https://arxiv.org/abs/2311.03099) merge method using [NousResearch/Hermes-4-14B](https://huggingface.co/NousResearch/Hermes-4-14B) as a base. ### Models Merged The following models were included in the merge: * [nbeerbower/Vitus-Qwen3-14B](https://huggingface.co/nbeerbower/Vitus-Qwen3-14B) * [ReadyArt/The-Omega-Directive-Qwen3-14B-v1.1](https://huggingface.co/ReadyArt/The-Omega-Directive-Qwen3-14B-v1.1) ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: NousResearch/Hermes-4-14B parameters: density: 0.9 weight: 1 - model: nbeerbower/Vitus-Qwen3-14B parameters: density: 0.6 weight: 0.8 - model: ReadyArt/The-Omega-Directive-Qwen3-14B-v1.1 parameters: density: 0.8 weight: 0.6 merge_method: dare_ties base_model: NousResearch/Hermes-4-14B tokenizer_source: base parameters: rescale: true dtype: bfloat16 ```
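To reproduce a merge from a config like this, mergekit's `mergekit-yaml` entry point consumes the YAML directly. A minimal sketch with a placeholder output path:

```bash
pip install mergekit
mergekit-yaml config.yaml ./merged-model --cuda
```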