Dataset schema (Hugging Face models metadata):

| Column        | Type                   | Min                 | Max                 |
|---------------|------------------------|---------------------|---------------------|
| modelId       | string (length)        | 5                   | 139                 |
| author        | string (length)        | 2                   | 42                  |
| last_modified | timestamp[us, tz=UTC]  | 2020-02-15 11:33:14 | 2025-09-02 18:52:31 |
| downloads     | int64                  | 0                   | 223M                |
| likes         | int64                  | 0                   | 11.7k               |
| library_name  | string (533 classes)   |                     |                     |
| tags          | list (length)          | 1                   | 4.05k               |
| pipeline_tag  | string (55 classes)    |                     |                     |
| createdAt     | timestamp[us, tz=UTC]  | 2022-03-02 23:29:04 | 2025-09-02 18:52:05 |
| card          | string (length)        | 11                  | 1.01M               |
mehul755/distilbert-base-uncased-finetuned-clinc
mehul755
2025-09-02T14:49:30Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-09-02T14:38:53Z
--- library_name: transformers license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer model-index: - name: distilbert-base-uncased-finetuned-clinc results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-clinc This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 48 - eval_batch_size: 48 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 5 ### Training results ### Framework versions - Transformers 4.55.4 - Pytorch 2.8.0+cu126 - Datasets 4.0.0 - Tokenizers 0.21.4
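The `linear` lr_scheduler_type in the card above decays the learning rate linearly from its initial value down to zero over training. A plain-Python sketch of that schedule, matching Transformers' `get_linear_schedule_with_warmup` with zero warmup steps (an assumption, since the card does not report warmup):

```python
# Linear learning-rate decay to zero; zero warmup assumed (not reported in the card).
def linear_lr(step: int, total_steps: int, base_lr: float = 2e-5) -> float:
    """Learning rate at a given optimizer step under a linear schedule."""
    return base_lr * max(0.0, 1.0 - step / total_steps)

print(linear_lr(0, 1000))     # 2e-05 at the start
print(linear_lr(500, 1000))   # 1e-05 halfway through
print(linear_lr(1000, 1000))  # 0.0 at the end
```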
iamgroot42/gemma-3-270m-enron
iamgroot42
2025-09-02T14:49:03Z
0
0
transformers
[ "transformers", "safetensors", "gemma3_text", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-02T14:48:28Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Abrhaley/gpt2-tigrinya-lora
Abrhaley
2025-09-02T14:46:46Z
0
0
transformers
[ "transformers", "safetensors", "gpt2", "tigriniya", "lora", "text-generation-inference", "causal-lm", "low_resource", "text-generation", "ti", "license:mit", "endpoints_compatible", "region:us" ]
text-generation
2025-09-02T14:16:35Z
--- license: mit language: - ti metrics: - perplexity - accuracy pipeline_tag: text-generation library_name: transformers tags: - gpt2 - tigriniya - lora - text-generation-inference - causal-lm - low_resource --- # GPT-2 Tigrinya (LoRA Fine-Tuned) ## Model Details - **Developed by**: Abrhaley (Warsaw University of Technology, MSc student) - **Model type**: Causal Language Model (decoder-only Transformer, GPT-2 architecture) - **Languages**: Tigrinya (`ti`) - **License**: MIT - **Finetuned from model**: [gpt2](https://huggingface.co/gpt2) - **Framework**: [Transformers](https://huggingface.co/transformers), [PEFT](https://github.com/huggingface/peft) --- ## Model Description This model is a **GPT-2 small** fine-tuned using **LoRA (Low-Rank Adaptation)** on a custom Tigrinya dataset. It is designed to generate coherent Tigrinya text for tasks such as dialogue, storytelling, and text continuation. - **Architecture**: GPT-2 (124M parameters, with LoRA adapters trained on attention layers) - **LoRA Config**: r=8, alpha=32, dropout=0.05 - **Tokenizer**: GPT-2 tokenizer, extended with EOS as padding --- ## Model Sources - **Repository**: (https://huggingface.co/abrhaley/gpt2-tigrinya-lora) - **Training Script**: Hugging Face `Trainer` + PEFT --- ## Uses ### Direct Use - Text generation in Tigrinya - Chatbot / dialogue systems - Story and content generation ### Downstream Use - Further fine-tuning for domain-specific Tigrinya applications (e.g., news, education, cultural storytelling) ### Out-of-Scope Use - Generating harmful, offensive, or misleading content - Using for critical decision-making without human supervision --- ## Bias, Risks, and Limitations - The dataset may not fully represent all dialects of Tigrinya. - Risk of generating biased, offensive, or incoherent outputs. - Not suitable for factual QA or tasks requiring truthfulness. --- ## Recommendations Users should: - Verify outputs before real-world use. - Avoid sensitive or harmful applications. 
---

## How to Get Started with the Model

```python
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline

model_id = "abrhaley/gpt2-tigrinya-lora"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

generator = pipeline("text-generation", model=model, tokenizer=tokenizer)
prompt = "ኣብ ኣዲስ ኣበባ"
print(generator(prompt, max_length=100, do_sample=True))
```

## Eval Results

| Metric             | Value    |
|--------------------|----------|
| Training loss      | 1.74     |
| Validation loss    | 1.61     |
| Training PPL       | 5.73     |
| Validation PPL     | 5.00     |
| Runtime (1 epoch)  | ~5.5h    |
| GPU                | Colab T4 |

## Citation

```bibtex
@misc{abrhaley2025gpt2tigrinya,
  title  = {GPT-2 Tigrinya LoRA Fine-Tuned},
  author = {Abrhaley},
  year   = {2025},
  url    = {https://huggingface.co/abrhaley/gpt2-tigrinya-lora}
}
```
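The perplexity values reported for this model are simply the exponential of the corresponding cross-entropy losses (PPL = exp(loss)); a quick sanity check in plain Python (the small gap on the training row comes from the losses being rounded to two decimals):

```python
import math

def perplexity(cross_entropy_loss: float) -> float:
    # Perplexity is the exponential of the mean cross-entropy loss (in nats).
    return math.exp(cross_entropy_loss)

print(perplexity(1.61))  # ~5.00, matching the reported validation PPL
print(perplexity(1.74))  # ~5.70, close to the reported training PPL of 5.73
```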
jheuschkel/SynCodonLM
jheuschkel
2025-09-02T14:46:28Z
96
1
null
[ "safetensors", "deberta-v2", "codon", "Codon", "biology", "synthetic", "dna", "mrna", "optimization", "codon-optimization", "codon-embedding", "codon-representation", "codon-language-model", "codon-language", "fill-mask", "en", "dataset:jheuschkel/cds-dataset", "license:apache-2.0", "region:us" ]
fill-mask
2025-08-19T11:58:31Z
---
license: apache-2.0
datasets:
- jheuschkel/cds-dataset
language:
- en
pipeline_tag: fill-mask
tags:
- codon
- Codon
- biology
- synthetic
- dna
- mrna
- optimization
- codon-optimization
- codon-embedding
- codon-representation
- codon-language-model
- codon-language
misc:
- codon
---

# Model Card for SynCodonLM

- This repository contains code to utilize the model and to reproduce the results of the preprint [**Advancing Codon Language Modeling with Synonymous Codon Constrained Masking**](https://doi.org/10.1101/2025.08.19.671089).
- Unlike other codon language models, SynCodonLM was trained with logit-level control, masking logits for non-synonymous codons. This allowed the model to learn codon-specific patterns disentangled from protein-level semantics.
- [The pre-training dataset of 66 million CDS is available on Hugging Face here.](https://huggingface.co/datasets/jheuschkel/cds-dataset)

---

## Installation

```bash
git clone https://github.com/Boehringer-Ingelheim/SynCodonLM.git
cd SynCodonLM
pip install -r requirements.txt  # may not be necessary depending on your env :)
```

---

# Usage

#### SynCodonLM uses token-type IDs to add species-specific codon context to its predictions.

###### Before use, find the token-type ID (`species_token_type`) for your species of interest [here](https://github.com/Boehringer-Ingelheim/SynCodonLM/blob/master/SynCodonLM/species_token_type.py), or use the list of model organisms below.

---

## Embedding a Coding DNA Sequence

```python
from SynCodonLM import CodonEmbeddings

model = CodonEmbeddings()  # loads the model & tokenizer using the built-in functions

seq = 'ATGTCCACCGGGCGGTGA'

mean_pooled_embedding = model.get_mean_embedding(seq, species_token_type=67)  # E. coli
# returns --> tensor of shape [768]

raw_output = model.get_raw_embeddings(seq, species_token_type=67)  # E. coli
raw_embedding_final_layer = raw_output.hidden_states[-1]  # treat this like a typical Hugging Face dictionary-based output
# returns --> tensor of shape [batch size (1), sequence length, 768]
```

## Codon Optimizing a Protein Sequence

###### This has not yet been rigorously evaluated, although we can confidently say it will generate 'natural looking' coding-DNA sequences.

```python
from SynCodonLM import CodonOptimizer

optimizer = CodonOptimizer()  # loads the model & tokenizer using the built-in functions

result = optimizer.optimize(
    protein_sequence="MSKGEELFTGVVPILVELDGDVNGHKFSVSGEGEGDATYGKLTLKFICTTGKLPVPWPTLVTTFSYGVQCFSRYPDHMKRHDFFKSAMPEGYVQERTIFFKDDGNYKTRAEVKFEGDTLVNRIELKGIDFKEDGNILGHKLEYNYNSHNVYIMADKQKNGIKVNFKIRHNIEDGSVQLADHYQQNTPIGDGPVLLPDNHYLSTQSALSKDPNEKRDHMVLLEFVTAAGITLGMDELYK",  # GFP
    species_token_type=67,  # E. coli
    deterministic=True      # True by default
)
codon_optimized_sequence = result.sequence
```

## Citation

If you use this work, please cite:

```bibtex
@article{Heuschkel2025.08.19.671089,
  author = {Heuschkel, James and Kingsley, Laura and Pefaur, Noah and Nixon, Andrew and Cramer, Steven},
  title = {Advancing Codon Language Modeling with Synonymous Codon Constrained Masking},
  elocation-id = {2025.08.19.671089},
  year = {2025},
  doi = {10.1101/2025.08.19.671089},
  publisher = {Cold Spring Harbor Laboratory},
  abstract = {Codon language models offer a promising framework for modeling protein-coding DNA sequences, yet current approaches often conflate codon usage with amino acid semantics, limiting their ability to capture DNA-level biology. We introduce SynCodonLM, a codon language model that enforces a biologically grounded constraint: masked codons are only predicted from synonymous options, guided by the known protein sequence. This design disentangles codon-level from protein-level semantics, enabling the model to learn nucleotide-specific patterns. The constraint is implemented by masking non-synonymous codons from the prediction space prior to softmax. Unlike existing models, which cluster codons by amino acid identity, SynCodonLM clusters by nucleotide properties, revealing structure aligned with DNA-level biology. Furthermore, SynCodonLM outperforms existing models on 6 of 7 benchmarks sensitive to DNA-level features, including mRNA and protein expression. Our approach advances domain-specific representation learning and opens avenues for sequence design in synthetic biology, as well as deeper insights into diverse bioprocesses. Competing Interest Statement: The authors have declared no competing interest.},
  URL = {https://www.biorxiv.org/content/early/2025/08/24/2025.08.19.671089},
  eprint = {https://www.biorxiv.org/content/early/2025/08/24/2025.08.19.671089.full.pdf},
  journal = {bioRxiv}
}
```

----

#### Model Organisms Species Token-Type IDs

| Organism           | Token-Type ID |
|--------------------|---------------|
| *E. coli*          | 67            |
| *S. cerevisiae*    | 108           |
| *C. elegans*       | 187           |
| *D. melanogaster*  | 178           |
| *D. rerio*         | 468           |
| *M. musculus*      | 321           |
| *A. thaliana*      | 266           |
| *H. sapiens*       | 317           |
| *C. griseus*       | 394           |
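The synonymous-codon constraint described in the SynCodonLM abstract can be sketched in a few lines of plain Python. This is a hypothetical illustration with a toy three-amino-acid genetic-code fragment, not the repository's actual implementation: logits of codons that do not code for the target amino acid are set to `-inf`, so they receive zero probability after softmax.

```python
import math

# Toy genetic-code fragment (the real table has 64 codons); illustrative only,
# not the SynCodonLM implementation.
CODON_TO_AA = {
    "GCT": "A", "GCC": "A", "GCA": "A", "GCG": "A",  # alanine
    "TGG": "W",                                      # tryptophan
    "GAA": "E", "GAG": "E",                          # glutamate
}
CODONS = list(CODON_TO_AA)

def mask_non_synonymous(logits, target_aa):
    """Set logits of codons not coding for target_aa to -inf (pre-softmax mask)."""
    return [l if CODON_TO_AA[c] == target_aa else float("-inf")
            for c, l in zip(CODONS, logits)]

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

logits = [0.5, 1.0, -0.2, 0.1, 2.0, 0.3, 0.7]
probs = softmax(mask_non_synonymous(logits, "A"))
# Only the four alanine codons receive non-zero probability.
```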
TohanBoss/blockassist-bc-regal_spotted_pelican_1756824306
TohanBoss
2025-09-02T14:46:22Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "regal spotted pelican", "arxiv:2504.07091", "region:us" ]
null
2025-09-02T14:46:14Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - regal spotted pelican --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
HZCDLUT/MoE_Adapters_pp_CLIP_vitL_TIL
HZCDLUT
2025-09-02T14:46:03Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-09-02T03:32:22Z
--- license: apache-2.0 ---
crim50n/varley_flux2
crim50n
2025-09-02T14:45:11Z
0
0
diffusers
[ "diffusers", "flux", "text-to-image", "lora", "fal", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-09-02T14:45:04Z
--- tags: - flux - text-to-image - lora - diffusers - fal base_model: black-forest-labs/FLUX.1-dev instance_prompt: license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md --- # varley_flux2 <Gallery /> ## Model description ## Trigger words You should use `` to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](/crim50n/varley_flux2/tree/main) them in the Files & versions tab. ## Training at fal.ai Training was done using [fal.ai/models/fal-ai/flux-lora-fast-training](https://fal.ai/models/fal-ai/flux-lora-fast-training).
Yassine147/test1
Yassine147
2025-09-02T14:44:16Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-09-02T14:44:16Z
--- license: apache-2.0 ---
akirafudo/blockassist-bc-keen_fast_giraffe_1756824175
akirafudo
2025-09-02T14:43:24Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "keen fast giraffe", "arxiv:2504.07091", "region:us" ]
null
2025-09-02T14:43:19Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - keen fast giraffe --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
the-AI-guy1/ppo-LunarLander-v2
the-AI-guy1
2025-09-02T14:43:06Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2025-09-02T14:42:46Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 212.03 +/- 70.35 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
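The `mean_reward` metric in the card above (212.03 +/- 70.35) reports the mean and standard deviation of episode returns over evaluation rollouts, as typically produced by SB3's `evaluate_policy`. A plain-Python sketch with made-up returns (illustrative values, not the actual evaluation data behind the reported figure):

```python
import statistics

# Hypothetical episode returns from evaluation rollouts (illustrative only).
episode_returns = [150.0, 230.0, 310.0, 180.0, 260.0]

mean_reward = statistics.mean(episode_returns)
std_reward = statistics.pstdev(episode_returns)  # population std, like np.std with ddof=0
print(f"{mean_reward:.2f} +/- {std_reward:.2f}")
```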
TohanBoss/blockassist-bc-regal_spotted_pelican_1756824092
TohanBoss
2025-09-02T14:42:46Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "regal spotted pelican", "arxiv:2504.07091", "region:us" ]
null
2025-09-02T14:42:38Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - regal spotted pelican --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
GroomerG/blockassist-bc-vicious_pawing_badger_1756822490
GroomerG
2025-09-02T14:42:26Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "vicious pawing badger", "arxiv:2504.07091", "region:us" ]
null
2025-09-02T14:42:22Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - vicious pawing badger --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Job099/distilbert-imdb-sentiment-analysis
Job099
2025-09-02T14:41:32Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-09-02T14:38:31Z
--- library_name: transformers license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer model-index: - name: distilbert-imdb-sentiment-analysis results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-imdb-sentiment-analysis This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - eval_loss: 0.6921 - eval_model_preparation_time: 0.0015 - eval_accuracy: 0.54 - eval_f1: 0.5181 - eval_runtime: 5.8166 - eval_samples_per_second: 51.576 - eval_steps_per_second: 3.267 - step: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 2 ### Framework versions - Transformers 4.55.4 - Pytorch 2.8.0+cu126 - Datasets 4.0.0 - Tokenizers 0.21.4
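The `eval_accuracy` and `eval_f1` figures in the card above are standard classification metrics; for reference, a plain-Python sketch of how they are computed from predictions and labels (illustrative data, not the model's actual IMDB predictions):

```python
# Accuracy and binary F1 from predictions vs. labels (illustrative values only).
preds  = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
labels = [1, 0, 0, 1, 0, 1, 1, 0, 0, 1]

accuracy = sum(p == l for p, l in zip(preds, labels)) / len(labels)

tp = sum(p == 1 and l == 1 for p, l in zip(preds, labels))  # true positives
fp = sum(p == 1 and l == 0 for p, l in zip(preds, labels))  # false positives
fn = sum(p == 0 and l == 1 for p, l in zip(preds, labels))  # false negatives
precision = tp / (tp + fp)
recall = tp / (tp + fn)
f1 = 2 * precision * recall / (precision + recall)

print(accuracy, f1)
```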
omerbektass/blockassist-bc-keen_fast_giraffe_1756824065
omerbektass
2025-09-02T14:41:25Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "keen fast giraffe", "arxiv:2504.07091", "region:us" ]
null
2025-09-02T14:41:20Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - keen fast giraffe --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
yeok/yeok_faithfulness-train-meta-llama_Llama-3.1-8B-Instruct-user_bias
yeok
2025-09-02T14:40:54Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/Llama-3.1-8B-Instruct", "base_model:finetune:unsloth/Llama-3.1-8B-Instruct", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-09-02T14:40:43Z
--- base_model: unsloth/Llama-3.1-8B-Instruct tags: - text-generation-inference - transformers - unsloth - llama - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** yeok - **License:** apache-2.0 - **Finetuned from model :** unsloth/Llama-3.1-8B-Instruct This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
omerbkts/blockassist-bc-keen_fast_giraffe_1756823939
omerbkts
2025-09-02T14:39:28Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "keen fast giraffe", "arxiv:2504.07091", "region:us" ]
null
2025-09-02T14:39:23Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - keen fast giraffe --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
TohanBoss/blockassist-bc-regal_spotted_pelican_1756823867
TohanBoss
2025-09-02T14:39:05Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "regal spotted pelican", "arxiv:2504.07091", "region:us" ]
null
2025-09-02T14:38:54Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - regal spotted pelican --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
ayolovesu/brain-tumor-od-finetuned-paligemma2
ayolovesu
2025-09-02T14:38:19Z
0
0
peft
[ "peft", "tensorboard", "safetensors", "base_model:adapter:google/paligemma2-3b-pt-224", "lora", "transformers", "text-generation", "base_model:google/paligemma2-3b-pt-224", "license:gemma", "region:us" ]
text-generation
2025-09-02T14:38:16Z
--- library_name: peft license: gemma base_model: google/paligemma2-3b-pt-224 tags: - base_model:adapter:google/paligemma2-3b-pt-224 - lora - transformers pipeline_tag: text-generation model-index: - name: brain-tumor-od-finetuned-paligemma2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # brain-tumor-od-finetuned-paligemma2 This model is a fine-tuned version of [google/paligemma2-3b-pt-224](https://huggingface.co/google/paligemma2-3b-pt-224) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 4 - optimizer: Use OptimizerNames.PAGED_ADAMW_8BIT with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 2 - num_epochs: 2 ### Training results ### Framework versions - PEFT 0.17.1 - Transformers 4.55.4 - Pytorch 2.8.0+cu126 - Datasets 4.0.0 - Tokenizers 0.21.4
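The `total_train_batch_size` listed above is derived rather than set directly: it is the per-device batch size times the gradient accumulation steps (times the number of devices, assumed to be 1 here since the card does not report it):

```python
# Effective batch size under gradient accumulation (single device assumed;
# the card does not state the device count).
train_batch_size = 1
gradient_accumulation_steps = 4
num_devices = 1

total_train_batch_size = train_batch_size * gradient_accumulation_steps * num_devices
print(total_train_batch_size)  # 4, matching the reported total_train_batch_size
```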
Martico2432/llama32-thinking-finetune2
Martico2432
2025-09-02T14:38:04Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "generated_from_trainer", "trl", "sft", "base_model:google/gemma-2b-it", "base_model:finetune:google/gemma-2b-it", "endpoints_compatible", "region:us" ]
null
2025-09-02T13:58:14Z
--- base_model: google/gemma-2b-it library_name: transformers model_name: llama32-thinking-finetune2 tags: - generated_from_trainer - trl - sft licence: license --- # Model Card for llama32-thinking-finetune2 This model is a fine-tuned version of [google/gemma-2b-it](https://huggingface.co/google/gemma-2b-it). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="Martico2432/llama32-thinking-finetune2", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/mail-joks02-none/huggingface/runs/dpv18x2x) This model was trained with SFT. ### Framework versions - TRL: 0.13.0 - Transformers: 4.48.0 - Pytorch: 2.8.0+cu126 - Datasets: 3.2.0 - Tokenizers: 0.21.4 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
haihp02/phiiiiiiiiiii
haihp02
2025-09-02T14:37:47Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-09-02T14:37:30Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
akirafudo/blockassist-bc-keen_fast_giraffe_1756823820
akirafudo
2025-09-02T14:37:24Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "keen fast giraffe", "arxiv:2504.07091", "region:us" ]
null
2025-09-02T14:37:17Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - keen fast giraffe --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
mradermacher/Dolphin-Mistral-24B-Venice-fp16-GGUF
mradermacher
2025-09-02T14:36:10Z
127
1
transformers
[ "transformers", "gguf", "mlx", "en", "base_model:root4k/Dolphin-Mistral-24B-Venice-fp16", "base_model:quantized:root4k/Dolphin-Mistral-24B-Venice-fp16", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-09-01T16:07:42Z
---
base_model: root4k/Dolphin-Mistral-24B-Venice-fp16
language:
- en
library_name: transformers
license: apache-2.0
mradermacher:
  readme_rev: 1
quantized_by: mradermacher
tags:
- mlx
---
## About

<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/root4k/Dolphin-Mistral-24B-Venice-fp16

<!-- provided-files -->

***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Dolphin-Mistral-24B-Venice-fp16-GGUF).***

weighted/imatrix quants are available at https://huggingface.co/mradermacher/Dolphin-Mistral-24B-Venice-fp16-i1-GGUF

## Usage

If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files.

## Provided Quants

(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Dolphin-Mistral-24B-Venice-fp16-GGUF/resolve/main/Dolphin-Mistral-24B-Venice-fp16.Q2_K.gguf) | Q2_K | 9.0 |  |
| [GGUF](https://huggingface.co/mradermacher/Dolphin-Mistral-24B-Venice-fp16-GGUF/resolve/main/Dolphin-Mistral-24B-Venice-fp16.Q3_K_S.gguf) | Q3_K_S | 10.5 |  |
| [GGUF](https://huggingface.co/mradermacher/Dolphin-Mistral-24B-Venice-fp16-GGUF/resolve/main/Dolphin-Mistral-24B-Venice-fp16.Q3_K_M.gguf) | Q3_K_M | 11.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Dolphin-Mistral-24B-Venice-fp16-GGUF/resolve/main/Dolphin-Mistral-24B-Venice-fp16.Q3_K_L.gguf) | Q3_K_L | 12.5 |  |
| [GGUF](https://huggingface.co/mradermacher/Dolphin-Mistral-24B-Venice-fp16-GGUF/resolve/main/Dolphin-Mistral-24B-Venice-fp16.IQ4_XS.gguf) | IQ4_XS | 13.0 |  |
| [GGUF](https://huggingface.co/mradermacher/Dolphin-Mistral-24B-Venice-fp16-GGUF/resolve/main/Dolphin-Mistral-24B-Venice-fp16.Q4_K_S.gguf) | Q4_K_S | 13.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Dolphin-Mistral-24B-Venice-fp16-GGUF/resolve/main/Dolphin-Mistral-24B-Venice-fp16.Q4_K_M.gguf) | Q4_K_M | 14.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Dolphin-Mistral-24B-Venice-fp16-GGUF/resolve/main/Dolphin-Mistral-24B-Venice-fp16.Q5_K_S.gguf) | Q5_K_S | 16.4 |  |
| [GGUF](https://huggingface.co/mradermacher/Dolphin-Mistral-24B-Venice-fp16-GGUF/resolve/main/Dolphin-Mistral-24B-Venice-fp16.Q5_K_M.gguf) | Q5_K_M | 16.9 |  |
| [GGUF](https://huggingface.co/mradermacher/Dolphin-Mistral-24B-Venice-fp16-GGUF/resolve/main/Dolphin-Mistral-24B-Venice-fp16.Q6_K.gguf) | Q6_K | 19.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Dolphin-Mistral-24B-Venice-fp16-GGUF/resolve/main/Dolphin-Mistral-24B-Venice-fp16.Q8_0.gguf) | Q8_0 | 25.2 | fast, best quality |

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

## FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized.

## Thanks

I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.

<!-- end -->
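As a quick aid for choosing from the table above, here is a minimal Python sketch that picks the largest quant fitting a given memory budget. The sizes in GB are copied from the Size/GB column; the helper function itself is illustrative and not part of llama.cpp or this repository.

```python
# Quant file sizes in GB, copied from the "Provided Quants" table above.
QUANT_SIZES_GB = {
    "Q2_K": 9.0, "Q3_K_S": 10.5, "Q3_K_M": 11.6, "Q3_K_L": 12.5,
    "IQ4_XS": 13.0, "Q4_K_S": 13.6, "Q4_K_M": 14.4,
    "Q5_K_S": 16.4, "Q5_K_M": 16.9, "Q6_K": 19.4, "Q8_0": 25.2,
}

def largest_fitting_quant(budget_gb, headroom_gb=1.0):
    """Return the biggest quant whose file fits in budget_gb minus headroom, or None."""
    fitting = [(size, name) for name, size in QUANT_SIZES_GB.items()
               if size + headroom_gb <= budget_gb]
    return max(fitting)[1] if fitting else None

print(largest_fitting_quant(16.0))  # -> Q4_K_M
```

With 1 GB of headroom and a 16 GB budget, Q4_K_M (14.4 GB) is the largest quant that fits; note that actual runtime memory use also depends on context size and KV-cache settings.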
TohanBoss/blockassist-bc-regal_spotted_pelican_1756823614
TohanBoss
2025-09-02T14:35:31Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "regal spotted pelican", "arxiv:2504.07091", "region:us" ]
null
2025-09-02T14:34:41Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - regal spotted pelican --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
omerbektass/blockassist-bc-keen_fast_giraffe_1756823682
omerbektass
2025-09-02T14:35:09Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "keen fast giraffe", "arxiv:2504.07091", "region:us" ]
null
2025-09-02T14:35:05Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - keen fast giraffe --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
TohanBoss/blockassist-bc-regal_spotted_pelican_1756823395
TohanBoss
2025-09-02T14:31:09Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "regal spotted pelican", "arxiv:2504.07091", "region:us" ]
null
2025-09-02T14:31:02Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - regal spotted pelican --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
zcopwerq/blockassist-bc-armored_thriving_cod_1756823407
zcopwerq
2025-09-02T14:30:40Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "armored thriving cod", "arxiv:2504.07091", "region:us" ]
null
2025-09-02T14:30:08Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - armored thriving cod --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
kyle0612/345certainP1_model
kyle0612
2025-09-02T14:30:34Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "mllama", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-09-02T14:30:24Z
---
base_model: unsloth/llama-3.2-11b-vision-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- mllama
- trl
license: apache-2.0
language:
- en
---

# Uploaded model

- **Developed by:** kyle0612
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3.2-11b-vision-instruct-unsloth-bnb-4bit

This mllama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
vera6/sn105_denoising_13
vera6
2025-09-02T14:28:28Z
0
0
null
[ "region:us" ]
null
2025-09-02T12:17:55Z
DENOISING speech enhancement model
ashanwijebandara/interview-assistant-model
ashanwijebandara
2025-09-02T14:28:25Z
0
0
transformers
[ "transformers", "safetensors", "gpt2", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-02T14:27:52Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
SKN14-Final-1Team/qwen3-8b-rag-ko-merged
SKN14-Final-1Team
2025-09-02T14:28:13Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-02T14:26:55Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
harshdixit05/Qwen2-0.5B-GRPO-Demo
harshdixit05
2025-09-02T14:27:41Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "trl", "grpo", "conversational", "arxiv:2402.03300", "base_model:Qwen/Qwen2-0.5B-Instruct", "base_model:finetune:Qwen/Qwen2-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-02T14:24:49Z
---
base_model: Qwen/Qwen2-0.5B-Instruct
library_name: transformers
model_name: Qwen2-0.5B-GRPO-Demo
tags:
- generated_from_trainer
- trl
- grpo
licence: license
---

# Model Card for Qwen2-0.5B-GRPO-Demo

This model is a fine-tuned version of [Qwen/Qwen2-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl).

## Quick start

```python
from transformers import pipeline

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="harshdixit05/Qwen2-0.5B-GRPO-Demo", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```

## Training procedure

[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/harsh_dixit-sias22-krea-university-top-university-for-li/huggingface/runs/u4yjab76)

This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).

### Framework versions

- TRL: 0.22.1
- Transformers: 4.55.4
- Pytorch: 2.8.0+cu126
- Datasets: 4.0.0
- Tokenizers: 0.21.4

## Citations

Cite GRPO as:

```bibtex
@article{shao2024deepseekmath,
  title   = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
  author  = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
  year    = 2024,
  eprint  = {arXiv:2402.03300},
}
```

Cite TRL as:

```bibtex
@misc{vonwerra2022trl,
  title        = {{TRL: Transformer Reinforcement Learning}},
  author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
  year         = 2020,
  journal      = {GitHub repository},
  publisher    = {GitHub},
  howpublished = {\url{https://github.com/huggingface/trl}}
}
```
TohanBoss/blockassist-bc-regal_spotted_pelican_1756823178
TohanBoss
2025-09-02T14:27:32Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "regal spotted pelican", "arxiv:2504.07091", "region:us" ]
null
2025-09-02T14:27:24Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - regal spotted pelican --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
zcopwerq/blockassist-bc-vicious_shiny_turtle_1756823159
zcopwerq
2025-09-02T14:26:33Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "vicious shiny turtle", "arxiv:2504.07091", "region:us" ]
null
2025-09-02T14:26:00Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - vicious shiny turtle --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
anhaltai/swinunetrv2_Mais_v0_beta
anhaltai
2025-09-02T14:26:28Z
139
0
transformers
[ "transformers", "safetensors", "swinunetrv2", "image-segmentation", "custom_code", "arxiv:1910.09700", "region:us" ]
image-segmentation
2025-08-25T14:58:18Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
2hpsatt/blockassist-bc-huge_deft_eagle_1756823050
2hpsatt
2025-09-02T14:25:07Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "huge deft eagle", "arxiv:2504.07091", "region:us" ]
null
2025-09-02T14:25:03Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - huge deft eagle --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
akirafudo/blockassist-bc-keen_fast_giraffe_1756823039
akirafudo
2025-09-02T14:24:26Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "keen fast giraffe", "arxiv:2504.07091", "region:us" ]
null
2025-09-02T14:24:21Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - keen fast giraffe --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
ekazakos/grove
ekazakos
2025-09-02T14:22:47Z
0
0
null
[ "safetensors", "grove", "arxiv:2503.10781", "region:us" ]
null
2025-09-02T08:45:46Z
# GROVE Model

This repo hosts the **artifacts** (config, tokenizer, weights) for the GROVE model from the paper *"Large-scale Pre-training for Grounded Video Caption Generation"*.

The inference code is provided in the [`grove-transformers`](https://github.com/ekazakos/grove/grove_transformers) package — a **slimmer version** of the full codebase at [https://github.com/ekazakos/grove/](https://github.com/ekazakos/grove/), designed specifically for **quick inference with GROVE**.

---

## Installation

Install the inference package. If you've already cloned the main repo from [https://github.com/ekazakos/grove/](https://github.com/ekazakos/grove/), then run:

```bash
cd grove/grove_transformers
pip install -e .[torch] --extra-index-url https://download.pytorch.org/whl/cu124
pip install flash-attn==2.7.3 --no-build-isolation
```

Alternatively, run:

```bash
pip install -e "git+https://github.com/ekazakos/grove.git#subdirectory=grove_transformers[torch]" \
  --extra-index-url https://download.pytorch.org/whl/cu124
pip install flash-attn==2.7.3 --no-build-isolation
```

Also, install **mmcv**, **mmdetection** and **SAM2** as shown [here](https://github.com/ekazakos/grove?tab=readme-ov-file#install-mmdetection).

---

## Notes

- This model requires Python ≥3.11.
- Auto* classes (e.g. `AutoTokenizer`) are **not supported**; use the custom `Grove*` classes.

---

## Example Usage 1: Minimal (automatic metadata)

If you don't have precomputed token embeddings for GROVE's vocabulary or video metadata, just pass the video path. GROVE will compute everything internally.
```python
import torch  # needed for torch.bfloat16 below; missing from the original snippet

from grove_transformers import GroveTokenizer, GroveForCausalLM, GroveProcessor

tokenizer = GroveTokenizer.from_pretrained("ekazakos/grove")
model = GroveForCausalLM.from_pretrained(
    "ekazakos/grove",
    torch_dtype=torch.bfloat16,
    attn_implementation="flash_attention_2",
    low_cpu_mem_usage=True,
)
processor = GroveProcessor.from_pretrained("ekazakos/grove")

video_path = "path/to/video.mp4"  # placeholder: point this at your video file
outputs = processor.generate(
    model,
    video_path,
    token_embeddings=None,
    device="cuda",
    start_frame=None,
    end_frame=None,
    video_width=None,
    video_height=None,
    video_fps=None,
)
```

---

## Example Usage 2: With precomputed inputs

If you have precomputed token embeddings for GROVE's vocabulary and video metadata (e.g. from datasets like **HowToGround1M** or **iGround**), you can pass them directly for faster inference and precise trimming.

```python
outputs = processor.generate(
    model,
    video_path,
    token_embeddings=precomputed_embeddings,
    device="cuda",
    start_frame=dataset_meta["start_frame"],
    end_frame=dataset_meta["end_frame"],
    video_width=dataset_meta["width"],
    video_height=dataset_meta["height"],
    video_fps=dataset_meta["fps"],
)
```

---

## Notes

- **`token_embeddings`**: pass precomputed token embeddings for speed, or `None` to compute on the fly. For precomputing token embeddings for GROVE's vocabulary, see [embed_tokens.sh](https://github.com/ekazakos/grove/blob/main/embed_tokens.sh).
- **Video metadata** (`start_frame`, `end_frame`, `video_width`, `video_height`, `video_fps`): pass if available, otherwise `None` → GROVE computes automatically.
- **Trimming**: `start_frame`/`end_frame` let you process only part of a video.

---

```bibtex
@article{kazakos2025grove,
  title   = {Large-scale Pre-training for Grounded Video Caption Generation},
  author  = {Evangelos Kazakos and Cordelia Schmid and Josef Sivic},
  journal = {arXiv preprint arXiv:2503.10781},
  year    = {2025}
}
```
canoplos112/blockassist-bc-yapping_sleek_squirrel_1756822835
canoplos112
2025-09-02T14:22:40Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "yapping sleek squirrel", "arxiv:2504.07091", "region:us" ]
null
2025-09-02T14:21:10Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - yapping sleek squirrel --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
zcopwerq/blockassist-bc-rapid_quick_flea_1756822814
zcopwerq
2025-09-02T14:20:38Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "rapid quick flea", "arxiv:2504.07091", "region:us" ]
null
2025-09-02T14:20:16Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - rapid quick flea --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
phospho-app/cmsng2001-ACT_BBOX-dataset_20250901_A-1ri4l
phospho-app
2025-09-02T14:20:04Z
0
0
phosphobot
[ "phosphobot", "act", "robotics", "dataset:cmsng2001/dataset_20250901_A", "region:us" ]
robotics
2025-09-02T14:19:19Z
--- datasets: cmsng2001/dataset_20250901_A library_name: phosphobot pipeline_tag: robotics model_name: act tags: - phosphobot - act task_categories: - robotics --- # act model - 🧪 phosphobot training pipeline - **Dataset**: [cmsng2001/dataset_20250901_A](https://huggingface.co/datasets/cmsng2001/dataset_20250901_A) - **Wandb run id**: None ## Error Traceback We faced an issue while training your model. ``` [Errno 2] No such file or directory: '/__modal/volumes/vo-jpHx3K78b6s9tZZNuqKoXe/datasets/cmsng2001/dataset_20250901_A_bboxes/data/chunk-000/episode_000015.parquet' ``` ## Training parameters ```text { "batch_size": 100, "steps": 10000, "save_freq": 5000, "target_detection_instruction": "red lego brick", "image_key": "secondary_0", "image_keys_to_keep": [], "save_steps": 5000 } ``` 📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme) 🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
johngreendr1/752f0227-6867-453c-974c-4cd441b72b09
johngreendr1
2025-09-02T14:19:43Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:unsloth/codegemma-7b-it", "base_model:adapter:unsloth/codegemma-7b-it", "region:us" ]
null
2025-09-02T11:18:12Z
--- base_model: unsloth/codegemma-7b-it library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. 
[More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). 
- **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.15.1
leeminwaan/SmolLM3-3B-GGUF
leeminwaan
2025-09-02T14:18:31Z
0
0
null
[ "gguf", "text-generation", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2025-09-02T14:15:38Z
---
license: apache-2.0
base_model: SmolLM3-3B
pipeline_tag: text-generation
---

# Model Card for SmolLM3-3B-GGUF

This repository contains multiple quantized versions of the SmolLM3-3B model in GGUF format. It is intended for efficient inference on consumer hardware, making large-model deployment more accessible.

## Model Details

### Model Description

- **Developed by:** leeminwaan
- **Funded by [optional]:** Independent project
- **Shared by [optional]:** leeminwaan
- **Model type:** Decoder-only transformer language model
- **Language(s) (NLP):** English (primary); multilingual capabilities not benchmarked
- **License:** Apache-2.0

### Model Sources

- **Repository:** [Hugging Face Repo](https://huggingface.co/leeminwaan/SmolLM3-3B-GGUF)
- **Paper [optional]:** Not available
- **Demo [optional]:** To be released

## How to Get Started with the Model

```python
from huggingface_hub import hf_hub_download

model_path = hf_hub_download("leeminwaan/SmolLM3-3B-GGUF", "SmolLM3-3B-q4_k_m.gguf")
print("Downloaded:", model_path)
```

Quantized versions available:

* Q2\_K, Q3\_K\_S, Q3\_K\_M, Q3\_K\_L
* Q4\_0, Q4\_1, Q4\_K\_S, Q4\_K\_M
* Q5\_0, Q5\_1, Q5\_K\_S, Q5\_K\_M
* Q6\_K, Q8\_0

## Training Details

### Training Data

* Based on the SmolLM3-3B pretraining corpus (public large-scale web text, open datasets).
* No additional fine-tuning was performed for this release.

### Training Procedure

* Original SmolLM3-3B → quantized to GGUF formats.

### Quantization Results

| Quantization | Size (vs. FP16) | Speed | Quality | Recommended For |
|--------------|-----------------|-----------|------------|--------------------------------------|
| Q2_K | Smallest | Fastest | Low | Prototyping, minimal RAM/CPU |
| Q3_K_S | Very Small | Very Fast | Low-Med | Lightweight devices, testing |
| Q3_K_M | Small | Fast | Med | Lightweight, slightly better quality |
| Q3_K_L | Small-Med | Fast | Med | Faster inference, fair quality |
| Q4_0 | Medium | Fast | Good | General use, chats, low RAM |
| Q4_1 | Medium | Fast | Good+ | Recommended, slightly better quality |
| Q4_K_S | Medium | Fast | Good+ | Recommended, balanced |
| Q4_K_M | Medium | Fast | Good++ | Recommended, best Q4 option |
| Q5_0 | Larger | Moderate | Very Good | Chatbots, longer responses |
| Q5_1 | Larger | Moderate | Very Good+ | More demanding tasks |
| Q5_K_S | Larger | Moderate | Very Good+ | Advanced users, better accuracy |
| Q5_K_M | Larger | Moderate | Excellent | Demanding tasks, high quality |
| Q6_K | Large | Slower | Near FP16 | Power users, best quantized quality |
| Q8_0 | Largest | Slowest | FP16-like | Maximum quality, high RAM/CPU |

> **Note:**
> - Lower quantization = smaller model, faster inference, but lower output quality.
> - Q4_K_M is ideal for most users; Q6_K/Q8_0 offer the highest quality, best for advanced use.
> - All quantizations are suitable for consumer hardware; select based on your quality/speed needs.

## Technical Specifications

#### Software

* llama.cpp for quantization
* Python 3.10, huggingface\_hub

## Citation

**BibTeX:**

```bibtex
@misc{SmolLM3-3B-GGUF,
  title={SmolLM3-3B-GGUF Quantized Models},
  author={leeminwaan},
  year={2025},
  howpublished={\url{https://huggingface.co/leeminwaan/SmolLM3-3B-GGUF}}
}
```

**APA:**

```
leeminwaan. (2025). SmolLM3-3B-GGUF Quantized Models [Computer software]. Hugging Face. https://huggingface.co/leeminwaan/SmolLM3-3B-GGUF
```

## Glossary

* **Quantization:** Reducing the precision of weights to lower memory usage.
* **GGUF:** Optimized format for llama.cpp inference.

## More Information

* This project is experimental.
* Expect further updates and quantization benchmarks.

## Model Card Authors

* leeminwaan

## Model Card Contact

* Hugging Face: [leeminwaan](https://huggingface.co/leeminwaan)
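As a rough illustration of the size column in the quantization table above, a model's on-disk footprint can be estimated from its approximate bits-per-weight. The bits-per-weight figures below are assumed ballpark values for llama.cpp quantization types, not measured sizes of the files in this repo:

```python
def estimate_gguf_size_gb(n_params: float, bits_per_weight: float) -> float:
    """Rough on-disk size of a quantized model, in gigabytes."""
    return n_params * bits_per_weight / 8 / 1e9

# Approximate bits-per-weight for common llama.cpp quantizations
# (assumed ballpark values; actual figures vary by tensor layout).
BPW = {"Q2_K": 2.6, "Q4_K_M": 4.8, "Q6_K": 6.6, "Q8_0": 8.5, "FP16": 16.0}

N_PARAMS = 3e9  # SmolLM3-3B
for quant, bpw in BPW.items():
    print(f"{quant:7s} ~{estimate_gguf_size_gb(N_PARAMS, bpw):.1f} GB")
```

Under these assumptions, Q4_K_M lands near 1.8 GB versus roughly 6 GB at FP16, matching the "smaller quant, smaller file" pattern in the table.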
akirafudo/blockassist-bc-keen_fast_giraffe_1756822609
akirafudo
2025-09-02T14:17:17Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "keen fast giraffe", "arxiv:2504.07091", "region:us" ]
null
2025-09-02T14:17:12Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - keen fast giraffe --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
CIRCL/cwe-parent-vulnerability-classification-roberta-base
CIRCL
2025-09-02T14:17:04Z
223
0
transformers
[ "transformers", "pytorch", "safetensors", "roberta", "text-classification", "generated_from_trainer", "base_model:FacebookAI/roberta-base", "base_model:finetune:FacebookAI/roberta-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-08-13T10:09:09Z
--- library_name: transformers license: mit base_model: roberta-base tags: - generated_from_trainer metrics: - accuracy model-index: - name: cwe-parent-vulnerability-classification-roberta-base results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # cwe-parent-vulnerability-classification-roberta-base This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.9131 - Accuracy: 0.7701 - F1 Macro: 0.4179 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 40 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 Macro | |:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:| | 3.2726 | 1.0 | 25 | 3.1430 | 0.0230 | 0.0041 | | 3.214 | 2.0 | 50 | 2.9861 | 0.0230 | 0.0041 | | 3.1457 | 3.0 | 75 | 2.9949 | 0.0115 | 0.0019 | | 3.0447 | 4.0 | 100 | 3.0090 | 0.1149 | 0.0265 | | 3.0588 | 5.0 | 125 | 2.9652 | 0.0230 | 0.0039 | | 2.9336 | 6.0 | 150 | 2.9089 | 0.4828 | 0.1081 | | 2.9729 | 7.0 | 175 | 2.9005 | 0.1264 | 0.0606 | | 2.812 | 8.0 | 200 | 2.9174 | 0.3563 | 0.1788 | | 2.6587 | 9.0 | 225 | 2.8268 | 0.3563 | 0.1414 | | 2.5464 | 10.0 | 250 | 2.8296 | 0.3103 | 0.1339 | | 2.4379 | 11.0 | 275 | 2.7762 | 0.2989 | 0.1554 | | 2.2741 | 12.0 | 300 | 2.7595 | 0.4598 | 0.1745 | | 2.1793 | 13.0 | 325 | 2.7483 | 
0.4943 | 0.1826 | | 2.0085 | 14.0 | 350 | 2.6646 | 0.4713 | 0.2136 | | 1.9313 | 15.0 | 375 | 2.6414 | 0.6092 | 0.2916 | | 1.7534 | 16.0 | 400 | 2.5186 | 0.6552 | 0.3345 | | 1.6187 | 17.0 | 425 | 2.3736 | 0.6552 | 0.3381 | | 1.5568 | 18.0 | 450 | 2.2908 | 0.6667 | 0.3391 | | 1.4627 | 19.0 | 475 | 2.4101 | 0.6437 | 0.3356 | | 1.2964 | 20.0 | 500 | 2.2791 | 0.6782 | 0.3525 | | 1.2236 | 21.0 | 525 | 2.1636 | 0.6667 | 0.3403 | | 1.1237 | 22.0 | 550 | 2.1584 | 0.6897 | 0.3397 | | 1.0589 | 23.0 | 575 | 2.1262 | 0.6782 | 0.3535 | | 0.952 | 24.0 | 600 | 2.1252 | 0.6782 | 0.3504 | | 0.9137 | 25.0 | 625 | 2.0899 | 0.6667 | 0.3656 | | 0.878 | 26.0 | 650 | 1.9915 | 0.7126 | 0.4012 | | 0.8073 | 27.0 | 675 | 1.9856 | 0.7356 | 0.3857 | | 0.7588 | 28.0 | 700 | 1.9613 | 0.7356 | 0.3737 | | 0.7114 | 29.0 | 725 | 1.9789 | 0.7701 | 0.4103 | | 0.6728 | 30.0 | 750 | 1.9131 | 0.7701 | 0.4179 | | 0.6651 | 31.0 | 775 | 2.0236 | 0.7701 | 0.4231 | | 0.5979 | 32.0 | 800 | 2.0366 | 0.7701 | 0.4668 | | 0.5946 | 33.0 | 825 | 2.0026 | 0.7931 | 0.4478 | | 0.5395 | 34.0 | 850 | 2.0010 | 0.8046 | 0.4544 | | 0.5301 | 35.0 | 875 | 1.9332 | 0.8046 | 0.4500 | | 0.5216 | 36.0 | 900 | 1.9965 | 0.8161 | 0.4966 | | 0.497 | 37.0 | 925 | 1.9930 | 0.8161 | 0.4639 | | 0.5149 | 38.0 | 950 | 1.9813 | 0.8161 | 0.4582 | | 0.5022 | 39.0 | 975 | 1.9775 | 0.8046 | 0.4667 | | 0.4892 | 40.0 | 1000 | 1.9643 | 0.8161 | 0.4688 | ### Framework versions - Transformers 4.55.4 - Pytorch 2.7.1+cu126 - Datasets 4.0.0 - Tokenizers 0.21.2
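The gap between accuracy (0.77) and macro F1 (0.42) at the selected checkpoint is characteristic of an imbalanced label distribution: accuracy is dominated by frequent classes, while macro F1 averages per-class scores so rare classes weigh equally. A toy sketch with synthetic labels (for illustration only; this is not this model's evaluation data):

```python
def macro_f1(y_true, y_pred):
    """Unweighted mean of per-class F1 over all classes seen in y_true or y_pred."""
    classes = set(y_true) | set(y_pred)
    f1s = []
    for c in classes:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        denom = 2 * tp + fp + fn
        f1s.append(2 * tp / denom if denom else 0.0)
    return sum(f1s) / len(f1s)

# Synthetic imbalanced setting: a classifier that always predicts the
# majority class "A" scores high accuracy but low macro F1.
y_true = ["A"] * 8 + ["B", "C"]
y_pred = ["A"] * 10
accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
print(accuracy)                    # 0.8
print(round(macro_f1(y_true, y_pred), 3))  # 0.296
```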
calegpedia/blockassist-bc-stealthy_slimy_rooster_1756820875
calegpedia
2025-09-02T14:16:23Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "stealthy slimy rooster", "arxiv:2504.07091", "region:us" ]
null
2025-09-02T14:16:19Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - stealthy slimy rooster --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
kafa22/blockassist-bc-regal_leggy_hummingbird_1756822529
kafa22
2025-09-02T14:16:10Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "regal leggy hummingbird", "arxiv:2504.07091", "region:us" ]
null
2025-09-02T14:16:07Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - regal leggy hummingbird --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Nitral-AI/CaptainErisNebula-12B-Chimera-v1.1
Nitral-AI
2025-09-02T14:16:05Z
118
17
null
[ "safetensors", "mistral", "en", "license:other", "region:us" ]
null
2025-08-21T19:19:37Z
--- license: other language: - en --- <div style="background: #000; padding:30px; border-radius:18px; box-shadow: 0 0 15px #00FF88, 0 0 30px #FFFFFF; color:#FFFFFF; max-width:900px; margin:auto; font-family:'Roboto', sans-serif; border:1px solid #FFFFFF;"> <style> @import url('https://fonts.googleapis.com/css2?family=Roboto:wght@400;500;700&display=swap'); @keyframes pulseGlow { 0%, 100% { box-shadow: 0 0 6px #00FF88AA, 0 0 12px #FFFFFFAA; } 50% { box-shadow: 0 0 10px #00FF88, 0 0 20px #FFFFFF; } } @keyframes floatUp { 0%, 100% { transform: translateY(0); } 50% { transform: translateY(-8px); } } @keyframes fadeIn { 0% { opacity: 0; transform: translateY(10px); } 100% { opacity: 1; transform: translateY(0); } } /* Electric effect */ @keyframes electricBorder { 0% { background-position: 0% 50%; } 100% { background-position: 200% 50%; } } .blue-btn { display: inline-block; background: #111; border: none; color: #fff; border-radius: 24px; padding: 6px 12px; font-weight: 500; font-size: 1.05em; margin: 6px 8px; line-height: 1; vertical-align: middle; transition: all 0.4s ease; box-shadow: 0 0 6px #00FF88AA, 0 0 12px #FFFFFFAA; position: relative; z-index: 0; animation: pulseGlow 3s ease-in-out infinite; text-decoration: none; } .blue-btn:hover { color: #00FF88; text-shadow: 0 0 6px #00FF88, 0 0 12px #FFFFFF; box-shadow: 0 0 25px #00FF88, 0 0 40px #FFFFFFAA; transform: translateY(-4px); } /* Glass card + reactive electricity */ .glass-card { position: relative; background: rgba(0, 0, 0, 0.7); backdrop-filter: blur(10px); border-radius: 12px; box-shadow: 0 4px 12px rgba(0, 255, 136, 0.5); padding: 20px; margin-bottom: 2em; overflow: hidden; border: 1px solid rgba(255, 255, 255, 0.4); animation: floatUp 6s ease-in-out infinite; } .glass-card::before { content: ""; position: absolute; top: -2px; left: -2px; right: -2px; bottom: -2px; border-radius: 14px; background: linear-gradient(90deg, #00FF88, #FFFFFF, #00FF88); background-size: 200% 200%; animation: electricBorder 3s 
linear infinite; z-index: -1; opacity: 0.5; } .glass-card:hover::before { opacity: 1; } /* Image reactive hover */ .preview-img { transition: all 0.4s ease; } .preview-img:hover { transform: scale(1.05); box-shadow: 0 0 25px #00FF88, 0 0 40px #FFFFFFAA; } h1, h2, h3 { transition: transform 0.3s ease-in-out, color 0.3s ease; } h1:hover, h2:hover, h3:hover { transform: translateY(-5px) scale(1.05); color: #00FF88; text-shadow: 0 0 12px #00FF88, 0 0 18px #FFFFFF; } .fade-in { animation: fadeIn 1.2s ease forwards; } /* Electric model details table */ .reactive-table { border-collapse: collapse; width: 100%; color: #fff; font-size: 1em; border-radius: 12px; overflow: hidden; position: relative; } .reactive-table::before { content: ""; position: absolute; top: -2px; left: -2px; right: -2px; bottom: -2px; border-radius: 14px; background: linear-gradient(90deg, #00FF88, #FFFFFF, #00FF88); background-size: 200% 200%; animation: electricBorder 4s linear infinite; z-index: -1; opacity: 0.3; } .reactive-table th, .reactive-table td { border: 1px solid #222; padding: 0.5em; background: rgba(0,0,0,0.8); } .reactive-table tr:hover td { background: rgba(0, 255, 136, 0.1); color: #00FF88; text-shadow: 0 0 8px #00FF88; } </style> <h1 class="fade-in" style="font-size:2.3em; margin-bottom:0.3em;">🌌 CaptainErisNebula-12B-Chimera-v1.1</h1> <!-- Preview Image Section --> <div style="display:flex; justify-content:center; margin-bottom:2em;"> <div class="glass-card fade-in" style="animation-delay:0.6s; text-align:center; padding:20px; max-width:600px;"> <img class="preview-img" src="https://cdn-uploads.huggingface.co/production/uploads/642265bc01c62c1e4102dc36/n6UUAZxfuHrJySM2atnLN.png" alt="Model Preview" style="width:300px; height:auto; border-radius:12px; box-shadow: 0 0 12px #00FF88, 0 0 20px #FFFFFF;"> </div> </div> <div class="glass-card fade-in" style="animation-delay: 1.2s;"> <h2>🎛️ Presets</h2> <p>Prompt is adjusted for different use cases:</p> <div style="margin-top:1.2em;"> <a 
href="https://huggingface.co/ChaoticNeutrals/CaptainErisNebula-12B-Chimera-v1.1/tree/main/ST" class="blue-btn">🎭 RP /🧠 Reasoning Presets</a> </div> </div> <hr style="border:1px solid #FFFFFF; margin:2em 0;"> <div class="glass-card fade-in" style="animation-delay: 1.4s;"> <h2>⚡ Quantized Models</h2> <p>Optimized versions for faster inference and lower memory usage:</p> <div style="margin-top:1.2em; display:flex; flex-wrap:wrap; gap:20px; justify-content:center;"> <div style="text-align:center;"> <a href="https://huggingface.co/Lewdiculous/CaptainErisNebula-12B-Chimera-v1.1-GGUF-IQ-Imatrix" class="blue-btn fade-in" style="animation-delay:1.6s;"> 🟢 Lewdiculus's Imatrix GGUF's <3 </a> <div class="fade-in" style="animation-delay:1.8s; margin-top:6px; font-size:0.85em; color:#00FF88;"> GGUF Format (IQ-Imatrix) </div> </div> <div style="text-align:center;"> <a href="https://huggingface.co/Nitrals-Quants/CaptainErisNebula-12B-Chimera-v1.1-4bpw-exl3" class="blue-btn fade-in" style="animation-delay:2s;"> 🟢 Nitral's 4bpw EXL3 </a> <div class="fade-in" style="animation-delay:2.2s; margin-top:6px; font-size:0.85em; color:#00FF88;"> EXL3 Format </div> </div> </div> </div> <hr style="border:1px solid #FFFFFF; margin:2em 0;"> <h2>⚙️ Model Details</h2> <table class="reactive-table"> <tr> <th>Feature</th> <th>Description</th> </tr> <tr> <td><strong>Size</strong></td> <td>12B Parameters.</td> </tr> <tr> <td><strong>Library</strong></td> <td>Transformers</td> </tr> <tr> <td><strong>Composition</strong></td> <td> Blending Chimera versions <a href="https://huggingface.co/Nitral-Archive/CaptainErisNebula-12B-Chimera-v1" class="blue-btn">v1</a> with <a href="https://huggingface.co/Nitral-Archive/CaptainErisNebula-12B-Chimera-v0.420" class="blue-btn">v0.420</a>, this v1.1 release sharpens reasoning while preserving creativity. 
</td> </tr> </table> <hr style="border:1px solid #FFFFFF; margin:2em 0;"> <div class="glass-card fade-in" style="animation-delay: 1.5s;"> <h2>🗒️ Community Note:</h2> <p class="fade-in" style="animation-delay: 1.8s; color:#00FF88; margin-top: 1em;"> This is my final open-source model for now, thank you for being part of this strange, but oddly beautiful mess of a journey... Arrivederci, friends 🚀 <a href="https://huggingface.co/Nitral-AI" class="blue-btn">-Nitral-AI</a> </p> </div>
AI-Engine/Mistral-Small-3.2-24B-Instruct-2506-GGUF
AI-Engine
2025-09-02T14:15:03Z
0
0
null
[ "gguf", "license:apache-2.0", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
2025-08-21T17:40:43Z
--- license: apache-2.0 --- GGUF [llama.cpp](https://github.com/ggerganov/llama.cpp) quantized version of: - Original model: [Mistral-Small-3.2-24B-Instruct-2506](https://huggingface.co/mistralai/Mistral-Small-3.2-24B-Instruct-2506) - Model creator: [Mistral](https://huggingface.co/mistralai) - [License](https://choosealicense.com/licenses/apache-2.0/) ## Recommended Prompt Format (mistral-v7-tekken) ``` <s>[SYSTEM_PROMPT]<system prompt>[/SYSTEM_PROMPT][INST]<user message>[/INST]<assistant response></s>[INST]<user message>[/INST] ```
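A minimal sketch of assembling the mistral-v7-tekken template above in code; the helper name and signature are illustrative, not part of llama.cpp or the Mistral tooling:

```python
def format_v7_tekken(system_prompt, turns):
    """Render a conversation in the mistral-v7-tekken template.

    `turns` is a list of (user_message, assistant_response) pairs;
    the final pair may use assistant_response=None for the pending turn.
    """
    out = f"<s>[SYSTEM_PROMPT]{system_prompt}[/SYSTEM_PROMPT]"
    for user, assistant in turns:
        out += f"[INST]{user}[/INST]"
        if assistant is not None:
            out += f"{assistant}</s>"
    return out

prompt = format_v7_tekken(
    "You are a helpful assistant.",
    [("Hello!", "Hi there."), ("What is GGUF?", None)],
)
print(prompt)
```

The final `[INST]…[/INST]` is left open so the model generates the next assistant response.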
TohanBoss/blockassist-bc-regal_spotted_pelican_1756822413
TohanBoss
2025-09-02T14:14:48Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "regal spotted pelican", "arxiv:2504.07091", "region:us" ]
null
2025-09-02T14:14:39Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - regal spotted pelican --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
myfi/parser_model_ner_3.65
myfi
2025-09-02T14:14:28Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "conversational", "en", "base_model:unsloth/Qwen3-4B-Instruct-2507", "base_model:finetune:unsloth/Qwen3-4B-Instruct-2507", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-09-02T14:03:39Z
--- base_model: unsloth/Qwen3-4B-Instruct-2507 tags: - text-generation-inference - transformers - unsloth - qwen3 - trl - sft license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** myfi - **License:** apache-2.0 - **Finetuned from model :** unsloth/Qwen3-4B-Instruct-2507 This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
zcopwerq/blockassist-bc-miniature_screeching_alligator_1756822432
zcopwerq
2025-09-02T14:14:25Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "miniature screeching alligator", "arxiv:2504.07091", "region:us" ]
null
2025-09-02T14:13:53Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - miniature screeching alligator --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
AnerYubo/blockassist-bc-snappy_tenacious_eagle_1756822426
AnerYubo
2025-09-02T14:13:50Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "snappy tenacious eagle", "arxiv:2504.07091", "region:us" ]
null
2025-09-02T14:13:47Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - snappy tenacious eagle --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
AnerYubo/blockassist-bc-fanged_camouflaged_cassowary_1756822422
AnerYubo
2025-09-02T14:13:45Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "fanged camouflaged cassowary", "arxiv:2504.07091", "region:us" ]
null
2025-09-02T14:13:42Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - fanged camouflaged cassowary --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
tralalerrotralala228/lilahart
tralalerrotralala228
2025-09-02T14:13:39Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-09-02T13:35:23Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: lilahart --- # Lilahart <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `lilahart` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "lilahart", "lora_weights": "https://huggingface.co/tralalerrotralala228/lilahart/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('tralalerrotralala228/lilahart', weight_name='lora.safetensors') image = pipeline('lilahart').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Training details - Steps: 2500 - Learning rate: 0.0004 - LoRA rank: 16 ## Contribute your own examples You can use the [community tab](https://huggingface.co/tralalerrotralala228/lilahart/discussions) to add images that 
show off what you’ve made with this LoRA.
8septiadi8/blockassist-bc-curious_lightfooted_mouse_1756822324
8septiadi8
2025-09-02T14:13:35Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "curious lightfooted mouse", "arxiv:2504.07091", "region:us" ]
null
2025-09-02T14:13:31Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - curious lightfooted mouse --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Dineochiloane/gemma-3-4b-isizulu-inkuba-v2
Dineochiloane
2025-09-02T14:13:09Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "sft", "trl", "base_model:google/gemma-3-4b-it", "base_model:finetune:google/gemma-3-4b-it", "endpoints_compatible", "region:us" ]
null
2025-09-02T12:19:17Z
--- base_model: google/gemma-3-4b-it library_name: transformers model_name: gemma-3-4b-isizulu-inkuba-v2 tags: - generated_from_trainer - sft - trl licence: license --- # Model Card for gemma-3-4b-isizulu-inkuba-v2 This model is a fine-tuned version of [google/gemma-3-4b-it](https://huggingface.co/google/gemma-3-4b-it). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="Dineochiloane/gemma-3-4b-isizulu-inkuba-v2", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with SFT. ### Framework versions - TRL: 0.22.1 - Transformers: 4.56.0 - Pytorch: 2.7.1+cu118 - Datasets: 4.0.0 - Tokenizers: 0.22.0 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
canoplos112/blockassist-bc-yapping_sleek_squirrel_1756822240
canoplos112
2025-09-02T14:12:32Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "yapping sleek squirrel", "arxiv:2504.07091", "region:us" ]
null
2025-09-02T14:11:15Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - yapping sleek squirrel --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
phospho-app/cmsng2001-ACT_BBOX-dataset_20250901_A-vddoj
phospho-app
2025-09-02T14:11:35Z
0
0
phosphobot
[ "phosphobot", "act", "robotics", "dataset:cmsng2001/dataset_20250901_A", "region:us" ]
robotics
2025-09-02T14:10:58Z
--- datasets: cmsng2001/dataset_20250901_A library_name: phosphobot pipeline_tag: robotics model_name: act tags: - phosphobot - act task_categories: - robotics --- # act model - 🧪 phosphobot training pipeline - **Dataset**: [cmsng2001/dataset_20250901_A](https://huggingface.co/datasets/cmsng2001/dataset_20250901_A) - **Wandb run id**: None ## Error Traceback We faced an issue while training your model. ``` [Errno 2] No such file or directory: '/__modal/volumes/vo-jpHx3K78b6s9tZZNuqKoXe/datasets/cmsng2001/dataset_20250901_A_bboxes/data/chunk-000/episode_000015.parquet' ``` ## Training parameters ```text { "batch_size": 100, "steps": 10000, "save_freq": 5000, "target_detection_instruction": "red lego brick", "image_key": "secondary_0", "image_keys_to_keep": [], "save_steps": 5000 } ``` 📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme) 🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
derektan95/search-tta-sound
derektan95
2025-09-02T14:11:31Z
0
0
transformers
[ "transformers", "safetensors", "clap_audio_model", "arxiv:2505.11350", "arxiv:2211.06687", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-09-02T14:09:36Z
--- library_name: transformers license: apache-2.0 --- # Model Card for Search-TTA-Sound Fine-tuned on `laion/clap-htsat-fused`. ## Citation ``` @inproceedings{tan2025searchtta, title = {Search-TTA: A Multimodal Test-Time Adaptation Framework for Visual Search in the Wild}, author = {Derek Ming Siang Tan and Shailesh and Boyang Liu and Alok Raj and Qi Xuan Ang and Weiheng Dai and Tanishq Duhan and Jimmy Chiun and Yuhong Cao and Florian Shkurti and Guillaume Sartoretti}, booktitle = {Conference on Robot Learning}, year = {2025}, url = {https://arxiv.org/abs/2505.11350} } @misc{wu2024largescalecontrastivelanguageaudiopretraining, title={Large-scale Contrastive Language-Audio Pretraining with Feature Fusion and Keyword-to-Caption Augmentation}, author={Yusong Wu and Ke Chen and Tianyu Zhang and Yuchen Hui and Marianna Nezhurina and Taylor Berg-Kirkpatrick and Shlomo Dubnov}, year={2024}, eprint={2211.06687}, archivePrefix={arXiv}, primaryClass={cs.SD}, url={https://arxiv.org/abs/2211.06687}, } ```
Intel/dereverb_mel_band_roformer_anvuew_openvino
Intel
2025-09-02T14:10:52Z
0
0
null
[ "license:gpl-3.0", "region:us" ]
null
2025-09-02T13:29:43Z
--- license: gpl-3.0 --- Dereverb MelBandRoformer (by @anvuew) OpenVINO Models This repo stores OpenVINO(TM) models in IR format that are used to perform reverb extraction & removal. The OpenVINO IRs (.xml, .bin files) stored here have been converted from @anvuew's pytorch model checkpoints / configs from here: https://huggingface.co/anvuew/dereverb_mel_band_roformer They are also uploaded to this repo, under the `pytorch` folder. The OpenVINO IRs are intended to be used with the set of OpenVINO-based AI plugins for Audacity(R), here: https://github.com/intel/openvino-plugins-ai-audacity To better support a range of OpenVINO-supported devices, the MelBandRoformer model has been sliced / converted into 3 separate OpenVINO IRs: * mel_band_pre.xml/.bin -> Pre-processing operations (such as STFT) which convert input audio waveforms to the frequency domain. * mel_band_fwd.xml / .bin -> The majority of the layers / ops in the original model. * mel_band_post.xml / .bin -> Post-processing operations (such as iSTFT) which convert frequency-domain outputs from `mel_band_fwd` to output waveforms. The OpenVINO IRs in the `mono` directory are a conversion of `pytorch/dereverb_mel_band_roformer_mono_anvuew_sdr_20.4029.ckpt`. ## Intel’s Human Rights Disclaimer: Intel is committed to respecting human rights and avoiding complicity in human rights abuses. See Intel's Global Human Rights Principles. Intel's products and software are intended only to be used in applications that do not cause or contribute to a violation of an internationally recognized human right.
sakibul1998/blockassist-bc-armored_polished_ferret_1756822100
sakibul1998
2025-09-02T14:10:28Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "armored polished ferret", "arxiv:2504.07091", "region:us" ]
null
2025-09-02T14:10:14Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - armored polished ferret --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
omerbektass/blockassist-bc-keen_fast_giraffe_1756822083
omerbektass
2025-09-02T14:09:01Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "keen fast giraffe", "arxiv:2504.07091", "region:us" ]
null
2025-09-02T14:08:23Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - keen fast giraffe --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
akirafudo/blockassist-bc-keen_fast_giraffe_1756821807
akirafudo
2025-09-02T14:04:32Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "keen fast giraffe", "arxiv:2504.07091", "region:us" ]
null
2025-09-02T14:03:50Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - keen fast giraffe --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
kafa22/blockassist-bc-regal_leggy_hummingbird_1756821811
kafa22
2025-09-02T14:04:13Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "regal leggy hummingbird", "arxiv:2504.07091", "region:us" ]
null
2025-09-02T14:04:08Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - regal leggy hummingbird --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
xinnn32/blockassist-bc-meek_winged_caterpillar_1756821756
xinnn32
2025-09-02T14:04:08Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "meek winged caterpillar", "arxiv:2504.07091", "region:us" ]
null
2025-09-02T14:03:39Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - meek winged caterpillar --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
zcopwerq/blockassist-bc-clawed_webbed_dog_1756821765
zcopwerq
2025-09-02T14:03:10Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "clawed webbed dog", "arxiv:2504.07091", "region:us" ]
null
2025-09-02T14:02:46Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - clawed webbed dog --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
maydixit/llama_70B_v2_tool_only_monitor_10epoch
maydixit
2025-09-02T14:02:40Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-09-02T14:02:30Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
fredfit0923/AI
fredfit0923
2025-09-02T14:02:10Z
0
0
null
[ "license:deepfloyd-if-license", "region:us" ]
null
2025-09-02T14:02:10Z
--- license: deepfloyd-if-license ---
yuan571/phi-3.5-mini-0902-data10to64-32-32
yuan571
2025-09-02T14:02:07Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "conversational", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-09-02T13:56:24Z
--- base_model: unsloth/phi-3.5-mini-instruct-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama license: apache-2.0 language: - en --- # Uploaded finetuned model - **Developed by:** yuan571 - **License:** apache-2.0 - **Finetuned from model:** unsloth/phi-3.5-mini-instruct-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
marigold334/skcon-llama3-1-lora
marigold334
2025-09-02T14:01:35Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-02T13:59:57Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
TohanBoss/blockassist-bc-regal_spotted_pelican_1756821618
TohanBoss
2025-09-02T14:01:32Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "regal spotted pelican", "arxiv:2504.07091", "region:us" ]
null
2025-09-02T14:01:24Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - regal spotted pelican --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
omerbektass/blockassist-bc-keen_fast_giraffe_1756821649
omerbektass
2025-09-02T14:01:12Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "keen fast giraffe", "arxiv:2504.07091", "region:us" ]
null
2025-09-02T14:01:06Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - keen fast giraffe --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Gichuhi/sd-class-butterflies-32
Gichuhi
2025-09-02T14:00:50Z
0
0
diffusers
[ "diffusers", "safetensors", "pytorch", "unconditional-image-generation", "diffusion-models-class", "license:mit", "diffusers:DDPMPipeline", "region:us" ]
unconditional-image-generation
2025-09-02T14:00:21Z
--- license: mit tags: - pytorch - diffusers - unconditional-image-generation - diffusion-models-class --- # Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class) This model is a diffusion model for unconditional image generation of cute 🦋. ## Usage ```python from diffusers import DDPMPipeline pipeline = DDPMPipeline.from_pretrained('Gichuhi/sd-class-butterflies-32') image = pipeline().images[0] image ```
omerbkts/blockassist-bc-keen_fast_giraffe_1756821533
omerbkts
2025-09-02T13:59:16Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "keen fast giraffe", "arxiv:2504.07091", "region:us" ]
null
2025-09-02T13:59:11Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - keen fast giraffe --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
maydixit/llama_70B_v2_tool_only_monitor_adv_10epoch
maydixit
2025-09-02T13:59:03Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-09-02T13:58:54Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
zcopwerq/blockassist-bc-furry_rough_orangutan_1756821419
zcopwerq
2025-09-02T13:57:40Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "furry rough orangutan", "arxiv:2504.07091", "region:us" ]
null
2025-09-02T13:57:01Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - furry rough orangutan --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
akirafudo/blockassist-bc-keen_fast_giraffe_1756821384
akirafudo
2025-09-02T13:57:00Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "keen fast giraffe", "arxiv:2504.07091", "region:us" ]
null
2025-09-02T13:56:54Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - keen fast giraffe --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
felixZzz/student_sft_len32k_pseudoteacher_sub1k_multiZ_acc-0902
felixZzz
2025-09-02T13:55:51Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-02T13:40:02Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
kafa22/blockassist-bc-regal_leggy_hummingbird_1756821276
kafa22
2025-09-02T13:55:17Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "regal leggy hummingbird", "arxiv:2504.07091", "region:us" ]
null
2025-09-02T13:55:13Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - regal leggy hummingbird --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
mradermacher/Qwen3-17B-QiMing-V1.0-Total-Recall-Medium-i1-GGUF
mradermacher
2025-09-02T13:55:03Z
0
0
transformers
[ "transformers", "gguf", "programming", "code generation", "code", "codeqwen", "moe", "coding", "coder", "qwen2", "chat", "qwen", "qwen-coder", "qwen3", "finetune", "brainstorm 20x", "brainstorm", "optional thinking", "creative", "all use cases", "QiMing", "QiMing-holos", "bagua", "decision-making", "strategic-analysis", "cognitive-architecture", "philosophy-driven-ai", "en", "fr", "zh", "de", "base_model:DavidAU/Qwen3-17B-QiMing-V1.0-Total-Recall-Medium", "base_model:quantized:DavidAU/Qwen3-17B-QiMing-V1.0-Total-Recall-Medium", "license:apache-2.0", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
2025-09-02T12:16:19Z
--- base_model: DavidAU/Qwen3-17B-QiMing-V1.0-Total-Recall-Medium language: - en - fr - zh - de library_name: transformers license: apache-2.0 mradermacher: readme_rev: 1 quantized_by: mradermacher tags: - programming - code generation - code - codeqwen - moe - coding - coder - qwen2 - chat - qwen - qwen-coder - qwen3 - finetune - brainstorm 20x - brainstorm - optional thinking - creative - all use cases - QiMing - QiMing-holos - bagua - decision-making - strategic-analysis - cognitive-architecture - philosophy-driven-ai --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> <!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K_M Q4_0 IQ3_XS Q4_1 IQ3_S --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: --> weighted/imatrix quants of https://huggingface.co/DavidAU/Qwen3-17B-QiMing-V1.0-Total-Recall-Medium <!-- provided-files --> ***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Qwen3-17B-QiMing-V1.0-Total-Recall-Medium-i1-GGUF).*** static quants are available at https://huggingface.co/mradermacher/Qwen3-17B-QiMing-V1.0-Total-Recall-Medium-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. 
IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Qwen3-17B-QiMing-V1.0-Total-Recall-Medium-i1-GGUF/resolve/main/Qwen3-17B-QiMing-V1.0-Total-Recall-Medium.imatrix.gguf) | imatrix | 0.1 | imatrix file (for creating your own quants) | | [GGUF](https://huggingface.co/mradermacher/Qwen3-17B-QiMing-V1.0-Total-Recall-Medium-i1-GGUF/resolve/main/Qwen3-17B-QiMing-V1.0-Total-Recall-Medium.i1-IQ1_S.gguf) | i1-IQ1_S | 4.2 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/Qwen3-17B-QiMing-V1.0-Total-Recall-Medium-i1-GGUF/resolve/main/Qwen3-17B-QiMing-V1.0-Total-Recall-Medium.i1-IQ1_M.gguf) | i1-IQ1_M | 4.5 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/Qwen3-17B-QiMing-V1.0-Total-Recall-Medium-i1-GGUF/resolve/main/Qwen3-17B-QiMing-V1.0-Total-Recall-Medium.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 5.0 | | | [GGUF](https://huggingface.co/mradermacher/Qwen3-17B-QiMing-V1.0-Total-Recall-Medium-i1-GGUF/resolve/main/Qwen3-17B-QiMing-V1.0-Total-Recall-Medium.i1-IQ2_XS.gguf) | i1-IQ2_XS | 5.5 | | | [GGUF](https://huggingface.co/mradermacher/Qwen3-17B-QiMing-V1.0-Total-Recall-Medium-i1-GGUF/resolve/main/Qwen3-17B-QiMing-V1.0-Total-Recall-Medium.i1-IQ2_S.gguf) | i1-IQ2_S | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/Qwen3-17B-QiMing-V1.0-Total-Recall-Medium-i1-GGUF/resolve/main/Qwen3-17B-QiMing-V1.0-Total-Recall-Medium.i1-IQ2_M.gguf) | i1-IQ2_M | 6.2 | | | [GGUF](https://huggingface.co/mradermacher/Qwen3-17B-QiMing-V1.0-Total-Recall-Medium-i1-GGUF/resolve/main/Qwen3-17B-QiMing-V1.0-Total-Recall-Medium.i1-Q2_K_S.gguf) | i1-Q2_K_S | 6.3 | very low quality | | [GGUF](https://huggingface.co/mradermacher/Qwen3-17B-QiMing-V1.0-Total-Recall-Medium-i1-GGUF/resolve/main/Qwen3-17B-QiMing-V1.0-Total-Recall-Medium.i1-Q2_K.gguf) | i1-Q2_K | 6.7 | IQ3_XXS probably better | | 
[GGUF](https://huggingface.co/mradermacher/Qwen3-17B-QiMing-V1.0-Total-Recall-Medium-i1-GGUF/resolve/main/Qwen3-17B-QiMing-V1.0-Total-Recall-Medium.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 6.9 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Qwen3-17B-QiMing-V1.0-Total-Recall-Medium-i1-GGUF/resolve/main/Qwen3-17B-QiMing-V1.0-Total-Recall-Medium.i1-IQ3_XS.gguf) | i1-IQ3_XS | 7.4 | | | [GGUF](https://huggingface.co/mradermacher/Qwen3-17B-QiMing-V1.0-Total-Recall-Medium-i1-GGUF/resolve/main/Qwen3-17B-QiMing-V1.0-Total-Recall-Medium.i1-Q3_K_S.gguf) | i1-Q3_K_S | 7.8 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/Qwen3-17B-QiMing-V1.0-Total-Recall-Medium-i1-GGUF/resolve/main/Qwen3-17B-QiMing-V1.0-Total-Recall-Medium.i1-IQ3_S.gguf) | i1-IQ3_S | 7.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Qwen3-17B-QiMing-V1.0-Total-Recall-Medium-i1-GGUF/resolve/main/Qwen3-17B-QiMing-V1.0-Total-Recall-Medium.i1-IQ3_M.gguf) | i1-IQ3_M | 8.0 | | | [GGUF](https://huggingface.co/mradermacher/Qwen3-17B-QiMing-V1.0-Total-Recall-Medium-i1-GGUF/resolve/main/Qwen3-17B-QiMing-V1.0-Total-Recall-Medium.i1-Q3_K_M.gguf) | i1-Q3_K_M | 8.5 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/Qwen3-17B-QiMing-V1.0-Total-Recall-Medium-i1-GGUF/resolve/main/Qwen3-17B-QiMing-V1.0-Total-Recall-Medium.i1-Q3_K_L.gguf) | i1-Q3_K_L | 9.2 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/Qwen3-17B-QiMing-V1.0-Total-Recall-Medium-i1-GGUF/resolve/main/Qwen3-17B-QiMing-V1.0-Total-Recall-Medium.i1-IQ4_XS.gguf) | i1-IQ4_XS | 9.4 | | | [GGUF](https://huggingface.co/mradermacher/Qwen3-17B-QiMing-V1.0-Total-Recall-Medium-i1-GGUF/resolve/main/Qwen3-17B-QiMing-V1.0-Total-Recall-Medium.i1-Q4_0.gguf) | i1-Q4_0 | 9.9 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/Qwen3-17B-QiMing-V1.0-Total-Recall-Medium-i1-GGUF/resolve/main/Qwen3-17B-QiMing-V1.0-Total-Recall-Medium.i1-IQ4_NL.gguf) | i1-IQ4_NL | 9.9 | prefer IQ4_XS | | 
[GGUF](https://huggingface.co/mradermacher/Qwen3-17B-QiMing-V1.0-Total-Recall-Medium-i1-GGUF/resolve/main/Qwen3-17B-QiMing-V1.0-Total-Recall-Medium.i1-Q4_K_S.gguf) | i1-Q4_K_S | 10.0 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/Qwen3-17B-QiMing-V1.0-Total-Recall-Medium-i1-GGUF/resolve/main/Qwen3-17B-QiMing-V1.0-Total-Recall-Medium.i1-Q4_K_M.gguf) | i1-Q4_K_M | 10.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Qwen3-17B-QiMing-V1.0-Total-Recall-Medium-i1-GGUF/resolve/main/Qwen3-17B-QiMing-V1.0-Total-Recall-Medium.i1-Q4_1.gguf) | i1-Q4_1 | 10.9 | | | [GGUF](https://huggingface.co/mradermacher/Qwen3-17B-QiMing-V1.0-Total-Recall-Medium-i1-GGUF/resolve/main/Qwen3-17B-QiMing-V1.0-Total-Recall-Medium.i1-Q5_K_S.gguf) | i1-Q5_K_S | 12.0 | | | [GGUF](https://huggingface.co/mradermacher/Qwen3-17B-QiMing-V1.0-Total-Recall-Medium-i1-GGUF/resolve/main/Qwen3-17B-QiMing-V1.0-Total-Recall-Medium.i1-Q5_K_M.gguf) | i1-Q5_K_M | 12.2 | | | [GGUF](https://huggingface.co/mradermacher/Qwen3-17B-QiMing-V1.0-Total-Recall-Medium-i1-GGUF/resolve/main/Qwen3-17B-QiMing-V1.0-Total-Recall-Medium.i1-Q6_K.gguf) | i1-Q6_K | 14.1 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. 
Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
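The size column in the quant table above lends itself to a simple selection rule. As a rough sketch (the memory-budget function and the 2 GB headroom rule of thumb are assumptions for illustration, not guidance from this repository), picking the largest quant whose file fits a given amount of RAM/VRAM:

```python
# File sizes in GB, copied from the quant table above.
QUANT_SIZES_GB = {
    "i1-IQ1_S": 4.2, "i1-IQ1_M": 4.5, "i1-IQ2_XXS": 5.0, "i1-IQ2_XS": 5.5,
    "i1-IQ2_S": 5.8, "i1-IQ2_M": 6.2, "i1-Q2_K_S": 6.3, "i1-Q2_K": 6.7,
    "i1-IQ3_XXS": 6.9, "i1-IQ3_XS": 7.4, "i1-Q3_K_S": 7.8, "i1-IQ3_S": 7.8,
    "i1-IQ3_M": 8.0, "i1-Q3_K_M": 8.5, "i1-Q3_K_L": 9.2, "i1-IQ4_XS": 9.4,
    "i1-Q4_0": 9.9, "i1-IQ4_NL": 9.9, "i1-Q4_K_S": 10.0, "i1-Q4_K_M": 10.5,
    "i1-Q4_1": 10.9, "i1-Q5_K_S": 12.0, "i1-Q5_K_M": 12.2, "i1-Q6_K": 14.1,
}

def largest_quant_under(budget_gb: float, headroom_gb: float = 2.0) -> str:
    """Pick the largest quant whose file fits in budget_gb minus headroom.

    The headroom is a hypothetical allowance for KV cache and runtime
    overhead; tune it for your own setup.
    """
    limit = budget_gb - headroom_gb
    fitting = {name: size for name, size in QUANT_SIZES_GB.items() if size <= limit}
    if not fitting:
        raise ValueError(f"no quant fits in {budget_gb} GB with {headroom_gb} GB headroom")
    return max(fitting, key=fitting.get)

print(largest_quant_under(12.0))  # 10 GB usable -> i1-Q4_K_S
```

This is only a size heuristic; as the table notes say, quality does not track size monotonically (e.g. IQ quants often beat similarly sized non-IQ quants).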
omerbektass/blockassist-bc-keen_fast_giraffe_1756821264
omerbektass
2025-09-02T13:54:48Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "keen fast giraffe", "arxiv:2504.07091", "region:us" ]
null
2025-09-02T13:54:43Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - keen fast giraffe --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
TohanBoss/blockassist-bc-regal_spotted_pelican_1756821022
TohanBoss
2025-09-02T13:54:09Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "regal spotted pelican", "arxiv:2504.07091", "region:us" ]
null
2025-09-02T13:51:53Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - regal spotted pelican --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
xinnn32/blockassist-bc-meek_winged_caterpillar_1756820983
xinnn32
2025-09-02T13:51:14Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "meek winged caterpillar", "arxiv:2504.07091", "region:us" ]
null
2025-09-02T13:50:42Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - meek winged caterpillar --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
akirafudo/blockassist-bc-keen_fast_giraffe_1756820993
akirafudo
2025-09-02T13:50:21Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "keen fast giraffe", "arxiv:2504.07091", "region:us" ]
null
2025-09-02T13:50:16Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - keen fast giraffe --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
jeromex1/Lyra-Mistral7B-irrigation-LoRA
jeromex1
2025-09-02T13:50:07Z
0
0
null
[ "safetensors", "mistral", "lora", "qlora", "agriculture", "irrigation", "ecology", "water", "eau", "STEM", "science", "agronomy", "text-generation", "conversational", "fr", "dataset:custom", "base_model:mistralai/Mistral-7B-Instruct-v0.3", "base_model:adapter:mistralai/Mistral-7B-Instruct-v0.3", "license:mit", "region:us" ]
text-generation
2025-08-31T16:56:18Z
--- language: fr tags: - mistral - lora - qlora - agriculture - irrigation - ecology - water - eau - STEM - science - agronomy license: mit datasets: - custom base_model: mistralai/Mistral-7B-Instruct-v0.3 pipeline_tag: text-generation --- 🌐 **Looking for the English version?** Just scroll down — it's right below! 👇 👉 [Jump to English version](#english-version) # Lyra-Mistral7B-irrigation-LoRA 🌱💧 <img src="https://upload.wikimedia.org/wikipedia/en/c/c3/Flag_of_France.svg" width="40px" height="auto" /> ![SouverainAI](https://img.shields.io/badge/🇫🇷%20SouverainAI-oui-success) ![EUstack](https://img.shields.io/badge/🇪🇺%20EUstack-ready-blue) ## Description Ce modèle est une adaptation LoRA du **Mistral-7B-Instruct-v0.3** spécialisée sur des données d'irrigation agricole (format instruction/réponse en français). Objectif : proposer des recommandations simples d’apports en eau (mm) selon le type de sol, le stade phénologique et la tension hydrique (cbar). ## Détails techniques - Base model: `mistralai/Mistral-7B-Instruct-v0.3` - Technique: QLoRA (4-bit, bitsandbytes) - Modules LoRA: q_proj, k_proj, v_proj, o_proj, down_proj - Epochs: 3 - GPU: A100 (Colab Pro) ## Résultats observés Le modèle apprend à répondre en français par des valeurs numériques claires (mm d’irrigation) et adaptées au contexte du prompt. Comparé au modèle de base, il évite les réponses vagues ou hors sujet. 
## Utilisation ```python !pip install -q peft transformers accelerate bitsandbytes sentencepiece huggingface_hub hf_xet ``` ```python from transformers import AutoTokenizer, AutoModelForCausalLM from huggingface_hub import login from peft import PeftModel import torch login(token="MY_HF_TOKEN") #entrer ici votre Token (équivalent d'une clé API gratuite, récupéré sur Hugging Face) # préalablement il faut aussi demander une autorisation (par simple clic sur le bouton dédié) sur la page https://huggingface.co/jeromex1/Lyra-Mistral7B-irrigation-LoRA base_model = "mistralai/Mistral-7B-Instruct-v0.3" lora_model = "jeromex1/Lyra-Mistral7B-irrigation-LoRA" tokenizer = AutoTokenizer.from_pretrained(base_model) model = AutoModelForCausalLM.from_pretrained(base_model, load_in_4bit=True, device_map="auto") model = PeftModel.from_pretrained(model, lora_model) prompt = "contexte : agriculture. sol sableux, tension 70 cbar, stade Croissance, quel apport d'eau ?" inputs = tokenizer(prompt, return_tensors="pt").to(model.device) print(tokenizer.decode(model.generate(**inputs, max_new_tokens=50)[0], skip_special_tokens=True)) ``` ## Recommandation Pour éviter les erreurs liées aux dépendances, à l’absence de GPU ou aux lenteurs extrêmes sur PC, nous conseillons vivement d’utiliser [Google Colab](https://colab.research.google.com/) (gratuit ou idéalement Colab Pro), et d'y sélectionner un GPU pour exécuter ce modèle dans un environnement optimisé. Pour cela, une fois dans le notebook Google Colab, aller dans le menu exécution -> modifier le type d'exécution et choisir GPU T4 si gratuit, A100 si Colab Pro. Et être un peu patient le temps que cela charge... 
📘 **Pour en savoir plus**, rendez-vous sur ma page GitHub consacrée au projet : 👉 [Lyra-Mistral7B-irrigation-LoRA](https://github.com/Jerome-openclassroom/Lyra-Mistral7B-irrigation-LoRA) 📘 Découvrez mes **40 projets IA et sciences STEM** ici : 👉 [github.com/Jerome-openclassroom](https://github.com/Jerome-openclassroom) <a name="english-version"></a> # 🌍 English Version ## Description This model is a LoRA adaptation of **Mistral-7B-Instruct-v0.3**, specialized in agricultural irrigation data (instruction/response format in French). Goal: to provide simple water input recommendations (in mm) based on soil type, phenological stage, and water tension (cbar). ## Technical Details - Base model: `mistralai/Mistral-7B-Instruct-v0.3` - Technique: QLoRA (4-bit, bitsandbytes) - LoRA modules: q_proj, k_proj, v_proj, o_proj, down_proj - Epochs: 3 - GPU: A100 (Colab Pro) ## Observed Results The model learns to respond in French with clear numerical values (mm of irrigation) adapted to the prompt context. Compared to the base model, it avoids vague or irrelevant answers. ## Usage ```python !pip install -q peft transformers accelerate bitsandbytes sentencepiece huggingface_hub hf_xet ``` ```python from transformers import AutoTokenizer, AutoModelForCausalLM from huggingface_hub import login from peft import PeftModel import torch login(token="MY_HF_TOKEN") # enter your token here (the equivalent of a free API key, obtained from Hugging Face) # you must also first request access (a single click on the dedicated button) on the page https://huggingface.co/jeromex1/Lyra-Mistral7B-irrigation-LoRA base_model = "mistralai/Mistral-7B-Instruct-v0.3" lora_model = "jeromex1/Lyra-Mistral7B-irrigation-LoRA" tokenizer = AutoTokenizer.from_pretrained(base_model) model = AutoModelForCausalLM.from_pretrained(base_model, load_in_4bit=True, device_map="auto") model = PeftModel.from_pretrained(model, lora_model) prompt = "contexte : agriculture. 
sol sableux, tension 70 cbar, stade Croissance, quel apport d'eau ?" inputs = tokenizer(prompt, return_tensors="pt").to(model.device) print(tokenizer.decode(model.generate(**inputs, max_new_tokens=50)[0], skip_special_tokens=True)) ``` To avoid issues related to dependencies, lack of GPU, or extremely slow performance on a PC, we strongly recommend using [Google Colab](https://colab.research.google.com/) (free or ideally Colab Pro), and selecting a GPU to run this model in an optimized environment. To do so, once inside the Google Colab notebook, go to the Runtime menu → Change runtime type → choose GPU (T4 for free tier, A100 for Colab Pro). And be a little patient while it loads... 📘 **To learn more**, visit my GitHub page dedicated to the project: 👉 [Lyra-Mistral7B-irrigation-LoRA](https://github.com/Jerome-openclassroom/Lyra-Mistral7B-irrigation-LoRA/blob/main/README_English.md) 📘 Discover my **40 AI and STEM** science projects here: 👉 [github.com/Jerome-openclassroom](https://github.com/Jerome-openclassroom)
Satram/MANUAL_328_Context
Satram
2025-09-02T13:50:01Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-09-01T11:37:24Z
--- base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** Satram - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
SLAVA34/blockassist-bc-diving_fishy_nightingale_1756820878
SLAVA34
2025-09-02T13:49:48Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "diving fishy nightingale", "arxiv:2504.07091", "region:us" ]
null
2025-09-02T13:49:39Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - diving fishy nightingale --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
MCplayer/voxblink2_samresnet100_ft
MCplayer
2025-09-02T13:48:48Z
0
0
null
[ "onnx", "license:apache-2.0", "region:us" ]
null
2025-09-02T12:59:09Z
--- license: apache-2.0 --- # Speaker Similarity Evaluator - New Input Format Guide ## Overview The speaker (timbre) similarity evaluator now reads its input from a JSONL file and supports two prompt-audio input modes: 1. **Pre-split mode**: provide separate audio files for S1 and S2 directly 2. **Auto-split mode**: provide a combined prompt audio; the program splits it automatically by speaker label ## Input Format ### JSONL file format Each line is a JSON object that must contain the following fields: #### Required fields - `text`: the text to evaluate, containing the speaker labels [S1][S2] - `output_audio`: path of the audio file to evaluate #### Prompt-audio fields (choose one of the two modes) **Mode 1: pre-split** - `prompt_audio_speaker1`: audio file of speaker S1 - `prompt_text_speaker1`: text of speaker S1 - `prompt_audio_speaker2`: audio file of speaker S2 - `prompt_text_speaker2`: text of speaker S2 **Mode 2: auto-split** - `prompt_audio`: combined audio file containing both speakers - `prompt_text`: text with speaker labels, e.g. "[S1]text1[S2]text2" ### Examples #### Pre-split mode example ```json { "text": "[S1]是我对不住你。[S2]没有没有!燕子幸亏咱俩没领证!", "prompt_audio_speaker1": "/path/to/speaker1.wav", "prompt_text_speaker1": "一共二十万我都记着呢。我一赚到钱就马上还给你。", "prompt_audio_speaker2": "/path/to/speaker2.wav", "prompt_text_speaker2": "没关系,我不缺钱。", "output_audio": "/path/to/output.wav" } ``` #### Auto-split mode example ```json { "text": "[S1]今天天气真好啊。[S2]是的,阳光明媚。", "prompt_audio": "/path/to/combined_prompt.wav", "prompt_text": "[S1]早上好,今天怎么样?[S2]很好,谢谢你的关心。", "output_audio": "/path/to/output.wav" } ``` #### Mixed-mode example (both modes provided; pre-split takes priority) ```json { "text": "[S1]是我对不住你。[S2]没有没有!", "prompt_audio": "/path/to/combined.wav", "prompt_text": "[S1]一共二十万我都记着呢。[S2]没关系,我不缺钱。", "prompt_audio_speaker1": "/path/to/speaker1.wav", "prompt_text_speaker1": "一共二十万我都记着呢。我一赚到钱就马上还给你。", "prompt_audio_speaker2": "/path/to/speaker2.wav", "prompt_text_speaker2": "没关系,我不缺钱。", "output_audio": "/path/to/output.wav" } ``` ## Usage ### Command line ```bash # Read input from a JSONL file python test.py --jsonl_path /path/to/your/input.jsonl --output_dir /path/to/results # Use the built-in sample data (backward compatible) python test.py --output_dir /path/to/results ``` ### Programmatic use ```python from test import SpeakerSimilarityEvaluator # Create the evaluator evaluator = SpeakerSimilarityEvaluator(output_dir="/path/to/results") # Process records from a JSONL file evaluator.process_batch_from_jsonl("/path/to/input.jsonl") # Or pass a list of records directly (legacy interface, backward compatible) input_data = [ { 'prompt_audio': "/path/to/prompt.wav", 'prompt_text': "[S1]text1[S2]text2", 'text': "[S1]output text 1[S2]output text 2", 'output_audio': "/path/to/output.wav" } ] evaluator.process_batch(input_data) ``` ## Trade-offs ### Advantages of pre-split mode 1. **Higher accuracy**: avoids errors introduced by automatic splitting 2. **Faster**: skips the audio-splitting step 3. **More stable**: does not depend on the accuracy of the word-alignment model ### Advantages of auto-split mode 1. **Convenience**: only a single combined audio file is needed 2. **Backward compatibility**: works with the existing data format ## Output Directory Layout ``` results_YYYYMMDD_HHMMSS/ ├── segments/ # split audio segments ├── prompts/ # S1/S2 segments of the prompt audio (auto-split mode only) ├── temp/ # temporary files (cleared at the end of the run) └── results/ # evaluation results ├── speaker_similarity_results_YYYYMMDD_HHMMSS.jsonl └── evaluation_summary_YYYYMMDD_HHMMSS.json ``` ## Notes 1. Make sure all audio file paths are correct and the files exist 2. Speaker labels in the text must use the format `[S1]` and `[S2]` 3. If data for both modes is provided, the program prefers pre-split mode 4. Every line in the JSONL file must be valid JSON 5. The program validates each record, skips problematic lines, and continues processing
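The field rules above translate directly into a small pre-flight check. A minimal sketch (a hypothetical helper for illustration, not part of the shipped `test.py`) that classifies one JSONL record and enforces the pre-split-first priority:

```python
def detect_prompt_mode(record: dict) -> str:
    """Return the prompt-audio mode ("pre-split" or "auto-split") for one record.

    Pre-split mode takes priority when both field sets are present,
    mirroring the priority rule described above.
    """
    # Both modes share the two required fields.
    for field in ("text", "output_audio"):
        if field not in record:
            raise ValueError(f"missing required field: {field}")

    presplit = ("prompt_audio_speaker1", "prompt_text_speaker1",
                "prompt_audio_speaker2", "prompt_text_speaker2")
    autosplit = ("prompt_audio", "prompt_text")

    if all(f in record for f in presplit):
        return "pre-split"
    if all(f in record for f in autosplit):
        return "auto-split"
    raise ValueError("record provides neither a complete pre-split "
                     "nor a complete auto-split prompt")

record = {
    "text": "[S1]Hello.[S2]Hi there.",
    "prompt_audio": "/path/to/combined.wav",
    "prompt_text": "[S1]text1[S2]text2",
    "output_audio": "/path/to/output.wav",
}
print(detect_prompt_mode(record))  # -> auto-split
```

Running such a check before calling `process_batch` makes malformed lines fail fast instead of being silently skipped mid-run.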
Muapi/crimson-luminary
Muapi
2025-09-02T13:48:29Z
0
0
null
[ "lora", "stable-diffusion", "flux.1-d", "license:openrail++", "region:us" ]
null
2025-09-02T13:48:16Z
--- license: openrail++ tags: - lora - stable-diffusion - flux.1-d model_type: LoRA --- # Crimson Luminary ![preview](./preview.jpg) **Base model**: Flux.1 D **Trained words**: ArsMJStyle, Crimson Luminary ## 🧠 Usage (Python) 🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys) ```python import requests, os url = "https://api.muapi.ai/api/v1/flux_dev_lora_image" headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")} payload = { "prompt": "masterpiece, best quality, 1girl, looking at viewer", "model_id": [{"model": "civitai:1133008@1273772", "weight": 1.0}], "width": 1024, "height": 1024, "num_images": 1 } print(requests.post(url, headers=headers, json=payload).json()) ```
okuzarabasi/Qwen3-0.6B-Gensyn-Swarm-grunting_toothy_elk
okuzarabasi
2025-09-02T13:48:25Z
24
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "rl-swarm", "genrl-swarm", "grpo", "gensyn", "I am grunting_toothy_elk", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-30T06:33:53Z
--- library_name: transformers tags: - rl-swarm - genrl-swarm - grpo - gensyn - I am grunting_toothy_elk --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. 
--> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
omerbektass/blockassist-bc-keen_fast_giraffe_1756820879
omerbektass
2025-09-02T13:48:21Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "keen fast giraffe", "arxiv:2504.07091", "region:us" ]
null
2025-09-02T13:48:17Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - keen fast giraffe --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
lqpl/blockassist-bc-hairy_insectivorous_antelope_1756820773
lqpl
2025-09-02T13:47:52Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "hairy insectivorous antelope", "arxiv:2504.07091", "region:us" ]
null
2025-09-02T13:47:11Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - hairy insectivorous antelope --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
hyunjoonkang/mirror_erase_drawing_davla_1
hyunjoonkang
2025-09-02T13:47:47Z
0
0
lerobot
[ "lerobot", "safetensors", "smolvla", "robotics", "dataset:hyunjoonkang/merge_mirror_erase_drawing", "arxiv:2506.01844", "base_model:lerobot/smolvla_base", "base_model:finetune:lerobot/smolvla_base", "license:apache-2.0", "region:us" ]
robotics
2025-09-02T13:47:21Z
--- base_model: lerobot/smolvla_base datasets: hyunjoonkang/merge_mirror_erase_drawing library_name: lerobot license: apache-2.0 model_name: smolvla pipeline_tag: robotics tags: - lerobot - smolvla - robotics --- # Model Card for smolvla <!-- Provide a quick summary of what the model is/does. --> [SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware. This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot). See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index). --- ## How to Get Started with the Model For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy). Below is the short version on how to train and run inference/eval: ### Train from scratch ```bash python -m lerobot.scripts.train \ --dataset.repo_id=${HF_USER}/<dataset> \ --policy.type=act \ --output_dir=outputs/train/<desired_policy_repo_id> \ --job_name=lerobot_training \ --policy.device=cuda \ --policy.repo_id=${HF_USER}/<desired_policy_repo_id> --wandb.enable=true ``` _Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._ ### Evaluate the policy/run inference ```bash python -m lerobot.record \ --robot.type=so100_follower \ --dataset.repo_id=<hf_user>/eval_<dataset> \ --policy.path=<hf_user>/<desired_policy_repo_id> \ --episodes=10 ``` Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint. --- ## Model Details - **License:** apache-2.0
luckeciano/Qwen-2.5-7B-GRPO-NoBaseline-FisherMaskToken-0.1-v2_2890
luckeciano
2025-09-02T13:47:31Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "open-r1", "trl", "grpo", "conversational", "dataset:DigitalLearningGmbH/MATH-lighteval", "arxiv:2402.03300", "base_model:Qwen/Qwen2.5-Math-7B", "base_model:finetune:Qwen/Qwen2.5-Math-7B", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-02T09:20:41Z
---
base_model: Qwen/Qwen2.5-Math-7B
datasets: DigitalLearningGmbH/MATH-lighteval
library_name: transformers
model_name: Qwen-2.5-7B-GRPO-NoBaseline-FisherMaskToken-0.1-v2_2890
tags:
- generated_from_trainer
- open-r1
- trl
- grpo
licence: license
---

# Model Card for Qwen-2.5-7B-GRPO-NoBaseline-FisherMaskToken-0.1-v2_2890

This model is a fine-tuned version of [Qwen/Qwen2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B) on the [DigitalLearningGmbH/MATH-lighteval](https://huggingface.co/datasets/DigitalLearningGmbH/MATH-lighteval) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).

## Quick start

```python
from transformers import pipeline

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="luckeciano/Qwen-2.5-7B-GRPO-NoBaseline-FisherMaskToken-0.1-v2_2890", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```

## Training procedure

[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/max-ent-llms/PolicyGradientStability/runs/tv6nf48r)

This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).

### Framework versions

- TRL: 0.16.0.dev0
- Transformers: 4.49.0
- Pytorch: 2.5.1
- Datasets: 3.4.1
- Tokenizers: 0.21.2

## Citations

Cite GRPO as:

```bibtex
@article{zhihong2024deepseekmath,
    title        = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
    author       = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
    year         = 2024,
    eprint       = {arXiv:2402.03300},
}
```

Cite TRL as:

```bibtex
@misc{vonwerra2022trl,
    title        = {{TRL: Transformer Reinforcement Learning}},
    author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
    year         = 2020,
    journal      = {GitHub repository},
    publisher    = {GitHub},
    howpublished = {\url{https://github.com/huggingface/trl}}
}
```
omerbkts/blockassist-bc-keen_fast_giraffe_1756820745
omerbkts
2025-09-02T13:46:07Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "keen fast giraffe", "arxiv:2504.07091", "region:us" ]
null
2025-09-02T13:46:03Z
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- keen fast giraffe
---

# Gensyn BlockAssist

Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
lemonhat/Qwen2.5-3B-Instruct-APIGen_5k_hermes_new_1
lemonhat
2025-09-02T13:45:16Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "llama-factory", "full", "generated_from_trainer", "conversational", "base_model:Qwen/Qwen2.5-3B-Instruct", "base_model:finetune:Qwen/Qwen2.5-3B-Instruct", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-02T13:43:59Z
---
library_name: transformers
license: other
base_model: Qwen/Qwen2.5-3B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: APIGen_5k_hermes_new_1
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# APIGen_5k_hermes_new_1

This model is a fine-tuned version of [Qwen/Qwen2.5-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-3B-Instruct) on the APIGen_5k_hermes_new_1 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2040

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- total_train_batch_size: 4
- total_eval_batch_size: 4
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- num_epochs: 1

### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.1933        | 0.2398 | 100  | 0.2606          |
| 0.2539        | 0.4796 | 200  | 0.2273          |
| 0.167         | 0.7194 | 300  | 0.2076          |
| 0.2261        | 0.9592 | 400  | 0.2035          |

### Framework versions

- Transformers 4.46.1
- Pytorch 2.6.0+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3
cody-li/whisper_fined_tuned_16-16_xl
cody-li
2025-09-02T13:43:10Z
0
0
transformers
[ "transformers", "safetensors", "whisper", "automatic-speech-recognition", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2025-09-02T13:42:45Z
---
library_name: transformers
tags: []
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
aptosdinoland/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-pawing_twitchy_capybara
aptosdinoland
2025-09-02T13:42:54Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "rl-swarm", "genrl-swarm", "grpo", "gensyn", "I am pawing_twitchy_capybara", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-02T13:27:24Z
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am pawing_twitchy_capybara
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
0xZeno/flux1-dev-LashGlow-LoRAV1
0xZeno
2025-09-02T13:41:43Z
0
0
diffusers
[ "diffusers", "text-to-image", "diffusers-training", "lora", "flux", "flux-diffusers", "template:sd-lora", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-09-02T11:15:16Z
---
base_model: black-forest-labs/FLUX.1-dev
library_name: diffusers
license: other
instance_prompt: a photo of zzyskpq on a white table
widget:
- text: a photo of zzyskpq on a white table
  output:
    url: image_0.png
- text: a photo of zzyskpq on a white table
  output:
    url: image_1.png
- text: a photo of zzyskpq on a white table
  output:
    url: image_2.png
- text: a photo of zzyskpq on a white table
  output:
    url: image_3.png
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- flux
- flux-diffusers
- template:sd-lora
---

<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->

# Flux DreamBooth LoRA - 0xZeno/flux1-dev-LashGlow-LoRAV1

<Gallery />

## Model description

These are 0xZeno/flux1-dev-LashGlow-LoRAV1 DreamBooth LoRA weights for black-forest-labs/FLUX.1-dev.

The weights were trained using [DreamBooth](https://dreambooth.github.io/) with the [Flux diffusers trainer](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/README_flux.md).

Was LoRA for the text encoder enabled? False.

## Trigger words

You should use `a photo of zzyskpq on a white table` to trigger the image generation.

## Download model

[Download the *.safetensors LoRA](0xZeno/flux1-dev-LashGlow-LoRAV1/tree/main) in the Files & versions tab.

## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)

```py
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16).to('cuda')
pipeline.load_lora_weights('0xZeno/flux1-dev-LashGlow-LoRAV1', weight_name='pytorch_lora_weights.safetensors')
image = pipeline('a photo of zzyskpq on a white table').images[0]
```

For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)

## License

Please adhere to the licensing terms as described [here](https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md).

## Intended uses & limitations

#### How to use

```python
# TODO: add an example code snippet for running this diffusion pipeline
```

#### Limitations and bias

[TODO: provide examples of latent issues and potential remediations]

## Training details

[TODO: describe the data used to train the model]
omerbektass/blockassist-bc-keen_fast_giraffe_1756820400
omerbektass
2025-09-02T13:40:31Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "keen fast giraffe", "arxiv:2504.07091", "region:us" ]
null
2025-09-02T13:40:25Z
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- keen fast giraffe
---

# Gensyn BlockAssist

Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).