modelId
string
author
string
last_modified
timestamp[us, tz=UTC]
downloads
int64
likes
int64
library_name
string
tags
list
pipeline_tag
string
createdAt
timestamp[us, tz=UTC]
card
string
tabito123/gohan-product-recommendation
tabito123
2025-08-21T07:48:32Z
0
0
pytorch
[ "pytorch", "recommendation", "tabular", "ft-transformer", "ja", "license:apache-2.0", "region:us" ]
null
2025-08-21T07:04:25Z
--- license: apache-2.0 language: - ja library_name: pytorch tags: - recommendation - tabular - ft-transformer model_name: gohan-product-recommendation repo_type: model --- # Gohan Product Recommendation Model This model provides product recommendations for Gohan (rice) products using FT-Transformer architecture. ## Model Information - **Architecture**: FT-Transformer - **Training Epochs**: 30 - **Validation Performance**: 0.7736 - **Input Features**: Categorical features - **Output**: Product recommendations ## Usage ```python from inference import load_model, predict # Load the model model = load_model() # Make predictions predictions = predict(model, input_data) ``` ## Files - `epoch_030_p30_0.7736.pt`: Trained PyTorch model (Git LFS) - `gohan_product_master_data.csv`: Product master data - `encoders/`: JSON-encoded feature encoders - `configs/config.json`: Model configuration ## License Apache-2.0
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1755762389
IvanJAjebu
2025-08-21T07:47:34Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "thorny slender capybara", "arxiv:2504.07091", "region:us" ]
null
2025-08-21T07:47:30Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - thorny slender capybara --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Prior-Labs/TabPFN-v2-clf
Prior-Labs
2025-08-21T07:46:47Z
23964
50
tabpfn
[ "tabpfn", "tabular-classification", "license:other", "region:us" ]
tabular-classification
2025-01-02T14:28:38Z
--- pipeline_tag: tabular-classification library_name: tabpfn license: other license_name: priorlabs-1-1 license_link: https://github.com/PriorLabs/TabPFN/blob/49394b053a6759cfe68e90c21a2d51c31b396768/LICENSE --- # TabPFN v2: A Tabular Foundation Model TabPFN is a transformer-based foundation model for tabular data that leverages prior-data based learning to achieve strong performance on small tabular datasets without requiring task-specific training. ## Installation ```bash pip install tabpfn ``` ## Model Details - **Developed by:** Prior Labs - **Model type:** Transformer-based foundation model for tabular data - **License:** [Prior Labs License (Apache 2.0 with additional attribution requirement)](https://priorlabs.ai/tabpfn-license/) - **Paper:** Published in Nature (January 2025) - **Repository:** [GitHub - priorlabs/tabpfn](https://github.com/priorlabs/tabpfn) ### 📚 Citation ```bibtex @article{hollmann2025tabpfn, title={Accurate predictions on small data with a tabular foundation model}, author={Hollmann, Noah and M{\"u}ller, Samuel and Purucker, Lennart and Krishnakumar, Arjun and K{\"o}rfer, Max and Hoo, Shi Bin and Schirrmeister, Robin Tibor and Hutter, Frank}, journal={Nature}, year={2025}, month={01}, day={09}, doi={10.1038/s41586-024-08328-6}, publisher={Springer Nature}, url={https://www.nature.com/articles/s41586-024-08328-6}, } ``` ## Quick Start 📚 For detailed usage examples and best practices, check out: - [Interactive Colab Tutorial](https://tinyurl.com/tabpfn-colab-local) ## Technical Requirements - Python ≥ 3.9 - PyTorch ≥ 2.1 - scikit-learn ≥ 1.0 - Hardware: 16GB+ RAM, CPU (GPU optional) ## Resources - **Documentation:** https://priorlabs.ai/docs - **Source:** https://github.com/priorlabs/tabpfn - **Paper:** https://doi.org/10.1038/s41586-024-08328-6 ### Team - Noah Hollmann - Samuel Müller - Lennart Purucker - Arjun Krishnakumar - Max Körfer - Shi Bin Hoo - Robin Tibor Schirrmeister - Frank Hutter - Eddie Bergman
llencia/blockassist-bc-wiry_wise_hedgehog_1755762355
llencia
2025-08-21T07:46:28Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "wiry wise hedgehog", "arxiv:2504.07091", "region:us" ]
null
2025-08-21T07:46:19Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - wiry wise hedgehog --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
melvindave/gemma270m-chess-ft
melvindave
2025-08-21T07:46:23Z
0
0
transformers
[ "transformers", "safetensors", "gemma3_text", "text-generation", "text-generation-inference", "unsloth", "trl", "conversational", "en", "base_model:unsloth/gemma-3-270m-it-unsloth-bnb-4bit", "base_model:quantized:unsloth/gemma-3-270m-it-unsloth-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2025-08-21T07:45:54Z
--- base_model: unsloth/gemma-3-270m-it-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - gemma3_text - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** melvindave - **License:** apache-2.0 - **Finetuned from model:** unsloth/gemma-3-270m-it-unsloth-bnb-4bit This gemma3_text model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
0xaoyama/blockassist-bc-muscular_zealous_gorilla_1755762341
0xaoyama
2025-08-21T07:46:15Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "muscular zealous gorilla", "arxiv:2504.07091", "region:us" ]
null
2025-08-21T07:46:03Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - muscular zealous gorilla --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
thanobidex/blockassist-bc-colorful_shiny_hare_1755760779
thanobidex
2025-08-21T07:45:56Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "colorful shiny hare", "arxiv:2504.07091", "region:us" ]
null
2025-08-21T07:45:52Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - colorful shiny hare --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1755762088
IvanJAjebu
2025-08-21T07:42:40Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "thorny slender capybara", "arxiv:2504.07091", "region:us" ]
null
2025-08-21T07:42:31Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - thorny slender capybara --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
lemonhat/Qwen2.5-Coder-3B-Instruct-t2_25k_v2_tag5_processed
lemonhat
2025-08-21T07:41:06Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "llama-factory", "full", "generated_from_trainer", "conversational", "base_model:Qwen/Qwen2.5-Coder-3B-Instruct", "base_model:finetune:Qwen/Qwen2.5-Coder-3B-Instruct", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-21T07:39:49Z
--- library_name: transformers license: other base_model: Qwen/Qwen2.5-Coder-3B-Instruct tags: - llama-factory - full - generated_from_trainer model-index: - name: t2_25k_v2_tag5_processed results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t2_25k_v2_tag5_processed This model is a fine-tuned version of [Qwen/Qwen2.5-Coder-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-3B-Instruct) on the t2_25k_v2_tag5_processed dataset. It achieves the following results on the evaluation set: - Loss: 0.2197 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-06 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - distributed_type: multi-GPU - num_devices: 4 - total_train_batch_size: 4 - total_eval_batch_size: 4 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 0.5549 | 0.0634 | 100 | 0.3669 | | 0.3389 | 0.1268 | 200 | 0.2919 | | 0.3116 | 0.1902 | 300 | 0.2736 | | 0.3141 | 0.2536 | 400 | 0.2597 | | 0.2972 | 0.3171 | 500 | 0.2492 | | 0.3075 | 0.3805 | 600 | 0.2427 | | 0.234 | 0.4439 | 700 | 0.2384 | | 0.3061 | 0.5073 | 800 | 0.2332 | | 0.3022 | 0.5707 | 900 | 0.2293 | | 0.2999 | 0.6341 | 1000 | 0.2274 | | 0.3069 | 0.6975 | 1100 | 0.2262 | | 0.287 | 0.7609 | 1200 | 0.2220 | | 0.2456 | 0.8244 | 1300 | 0.2204 | | 0.2238 | 0.8878 | 1400 | 0.2200 | | 0.2854 | 0.9512 | 1500 | 0.2198 | ### Framework versions - Transformers 4.46.1 - Pytorch 2.6.0+cu124 - Datasets 3.1.0 - Tokenizers 0.20.3
freshcodestech/lingospace
freshcodestech
2025-08-21T07:39:46Z
0
0
null
[ "text-to-image", "lora", "stable-diffusion", "en", "base_model:stable-diffusion-v1-5/stable-diffusion-v1-5", "base_model:adapter:stable-diffusion-v1-5/stable-diffusion-v1-5", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2025-08-21T07:15:19Z
--- license: creativeml-openrail-m language: - en base_model: - stable-diffusion-v1-5/stable-diffusion-v1-5 pipeline_tag: text-to-image tags: - text-to-image - lora - stable-diffusion --- # Model Description This model is a fine-tuned Stable Diffusion model trained to generate cartoon/anime-style characters in different outfits, roles, and professions. The outputs are vibrant, colorful, and suitable for illustrations, educational content, and storytelling. # Example Outputs - A cute tiger astronaut in a forest. - A robot explorer in the desert. - A cat doctor in a hospital with medical tools. - A cat police officer in a police station. # Intended Uses - Creating illustrations for children’s books. - Designing characters for storytelling, comics, or animations. # How to Use ```python from diffusers import StableDiffusionPipeline import torch pipe = StableDiffusionPipeline.from_pretrained("username/lingospace", torch_dtype=torch.float16).to("cuda") prompt = "a cartoon cat dressed as a firefighter, standing in front of a fire truck" image = pipe(prompt).images[0] image.save("cat_firefighter.png") ``` # Prompt Examples - "a cartoon tiger astronaut standing on the moon, wearing an orange spacesuit, cute and colorful style" - "a robot explorer walking in the desert, cartoon style, expressive face" - "a cartoon cat doctor with stethoscope in a hospital, anime illustration" - "a police cat in uniform sitting at a desk, cartoon style"
llencia/blockassist-bc-wiry_wise_hedgehog_1755761941
llencia
2025-08-21T07:39:25Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "wiry wise hedgehog", "arxiv:2504.07091", "region:us" ]
null
2025-08-21T07:39:21Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - wiry wise hedgehog --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1755761822
IvanJAjebu
2025-08-21T07:38:08Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "thorny slender capybara", "arxiv:2504.07091", "region:us" ]
null
2025-08-21T07:37:58Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - thorny slender capybara --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
usmanalam82/Llama3.2_1B_finetuned
usmanalam82
2025-08-21T07:36:48Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "conversational", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-08-21T07:35:51Z
--- base_model: unsloth/llama-3.2-1b-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama license: apache-2.0 language: - en --- # Uploaded finetuned model - **Developed by:** usmanalam82 - **License:** apache-2.0 - **Finetuned from model:** unsloth/llama-3.2-1b-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
vicvreth/blockassist-bc-stocky_graceful_puma_1755761535
vicvreth
2025-08-21T07:34:30Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "stocky graceful puma", "arxiv:2504.07091", "region:us" ]
null
2025-08-21T07:34:04Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - stocky graceful puma --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
mang3dd/blockassist-bc-tangled_slithering_alligator_1755760104
mang3dd
2025-08-21T07:34:17Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "tangled slithering alligator", "arxiv:2504.07091", "region:us" ]
null
2025-08-21T07:34:13Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - tangled slithering alligator --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
sampingkaca72/blockassist-bc-armored_stealthy_elephant_1755760220
sampingkaca72
2025-08-21T07:34:14Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "armored stealthy elephant", "arxiv:2504.07091", "region:us" ]
null
2025-08-21T07:34:10Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - armored stealthy elephant --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
pidbu/blockassist-bc-whistling_alert_shrew_1755761476
pidbu
2025-08-21T07:32:44Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "whistling alert shrew", "arxiv:2504.07091", "region:us" ]
null
2025-08-21T07:32:06Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - whistling alert shrew --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
llencia/blockassist-bc-wiry_wise_hedgehog_1755761522
llencia
2025-08-21T07:32:34Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "wiry wise hedgehog", "arxiv:2504.07091", "region:us" ]
null
2025-08-21T07:32:26Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - wiry wise hedgehog --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
hakimjustbao/blockassist-bc-raging_subtle_wasp_1755759874
hakimjustbao
2025-08-21T07:31:24Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "raging subtle wasp", "arxiv:2504.07091", "region:us" ]
null
2025-08-21T07:31:20Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - raging subtle wasp --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
parkky21/orpheus-3b-hi-base-en-ft
parkky21
2025-08-21T07:30:09Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:parkky21/orpheus-3b-hi-ft-1e", "base_model:finetune:parkky21/orpheus-3b-hi-ft-1e", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-08-21T07:29:38Z
--- base_model: parkky21/orpheus-3b-hi-ft-1e tags: - text-generation-inference - transformers - unsloth - llama - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** parkky21 - **License:** apache-2.0 - **Finetuned from model:** parkky21/orpheus-3b-hi-ft-1e This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
katanyasekolah/blockassist-bc-silky_sprightly_cassowary_1755759686
katanyasekolah
2025-08-21T07:29:59Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "silky sprightly cassowary", "arxiv:2504.07091", "region:us" ]
null
2025-08-21T07:29:56Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - silky sprightly cassowary --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
llencia/blockassist-bc-wiry_wise_hedgehog_1755761365
llencia
2025-08-21T07:29:49Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "wiry wise hedgehog", "arxiv:2504.07091", "region:us" ]
null
2025-08-21T07:29:45Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - wiry wise hedgehog --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
llencia/blockassist-bc-wiry_wise_hedgehog_1755761245
llencia
2025-08-21T07:27:47Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "wiry wise hedgehog", "arxiv:2504.07091", "region:us" ]
null
2025-08-21T07:27:43Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - wiry wise hedgehog --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
calegpedia/blockassist-bc-stealthy_slimy_rooster_1755759582
calegpedia
2025-08-21T07:27:16Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "stealthy slimy rooster", "arxiv:2504.07091", "region:us" ]
null
2025-08-21T07:27:13Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - stealthy slimy rooster --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
faiza-safdar177/TinyLlama-PakLegal-QLoRA
faiza-safdar177
2025-08-21T07:25:35Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-08-21T07:25:28Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Team-Atom/smolvlab_record_pp_ryb_t_96_100000
Team-Atom
2025-08-21T07:25:14Z
0
0
lerobot
[ "lerobot", "safetensors", "robotics", "smolvla", "dataset:Team-Atom/PiPl_RYB_test", "arxiv:2506.01844", "base_model:lerobot/smolvla_base", "base_model:finetune:lerobot/smolvla_base", "license:apache-2.0", "region:us" ]
robotics
2025-08-21T07:25:00Z
--- base_model: lerobot/smolvla_base datasets: Team-Atom/PiPl_RYB_test library_name: lerobot license: apache-2.0 model_name: smolvla pipeline_tag: robotics tags: - robotics - smolvla - lerobot --- # Model Card for smolvla <!-- Provide a quick summary of what the model is/does. --> [SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware. This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot). See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index). --- ## How to Get Started with the Model For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy). Below is the short version on how to train and run inference/eval: ### Train from scratch ```bash python -m lerobot.scripts.train \ --dataset.repo_id=${HF_USER}/<dataset> \ --policy.type=act \ --output_dir=outputs/train/<desired_policy_repo_id> \ --job_name=lerobot_training \ --policy.device=cuda \ --policy.repo_id=${HF_USER}/<desired_policy_repo_id> --wandb.enable=true ``` _Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._ ### Evaluate the policy/run inference ```bash python -m lerobot.record \ --robot.type=so100_follower \ --dataset.repo_id=<hf_user>/eval_<dataset> \ --policy.path=<hf_user>/<desired_policy_repo_id> \ --episodes=10 ``` Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint. --- ## Model Details - **License:** apache-2.0
Fatin757/ssf-retriever-modernbert-embed-base
Fatin757
2025-08-21T07:24:04Z
0
0
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "sentence-similarity", "feature-extraction", "dense", "generated_from_trainer", "dataset_size:6032", "loss:MultipleNegativesRankingLoss", "dataset:Fatin757/ssf-train-valid", "arxiv:1908.10084", "arxiv:1705.00652", "base_model:sentence-transformers/all-MiniLM-L6-v2", "base_model:finetune:sentence-transformers/all-MiniLM-L6-v2", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2025-08-21T07:23:57Z
--- tags: - sentence-transformers - sentence-similarity - feature-extraction - dense - generated_from_trainer - dataset_size:6032 - loss:MultipleNegativesRankingLoss base_model: sentence-transformers/all-MiniLM-L6-v2 widget: - source_sentence: The Senior Technician (Signal and Communications) is technically inclined and skilled in preventive and corrective maintenance of various signal, communication and control systems. He/She provides technical guidance and on-the-job coaching to his team and supervises the work of contractors and external stakeholders to ensure adherence to operating requirements and safety standards. He may be required to perform shift duties at various rail premises such as workshops, depots, train stations, and train tunnels. He is a team-player and is able to communicate with junior and senior staff members to achieve work objectives. sentences: - The Junior Technician (Signal and Communications) is responsible for assisting in the maintenance of signal, communication, and control systems under the supervision of senior staff. This entry-level position involves basic technical tasks and support in preventive and corrective maintenance activities. The Junior Technician will primarily work during standard hours at designated rail facilities, focusing on routine checks and reporting issues to senior technicians. While teamwork is essential, the role requires limited interaction with external contractors and stakeholders, as the emphasis is on learning and skill development within the team. - The Signal and Communications Specialist is a highly skilled professional responsible for the proactive and reactive maintenance of diverse signal, communication, and control systems. This role involves providing technical expertise and coaching to team members, while also overseeing the work of contractors and external partners to ensure compliance with operational protocols and safety regulations. The Specialist may need to work shifts across various rail facilities, including workshops, depots, train stations, and tunnels. A strong collaborator, the Specialist effectively communicates with both junior and senior staff to meet organizational goals. - The Trade Compliance Specialist plays a crucial role in ensuring that our organization adheres to all trade regulatory requirements while collaborating effectively with various stakeholders. This position involves a thorough review of the organization's compliance with applicable regulations, assessing the adequacy and effectiveness of current practices, and providing actionable recommendations for improvement. Furthermore, the Trade Compliance Specialist will engage with colleagues across the region to stay updated on the latest regulatory standards and guidelines, ensuring our compliance efforts are aligned both locally and regionally. Strong communication and coordination skills, along with meticulous attention to detail, are essential for success in this role. - source_sentence: The Quality Assurance Manager manages the conduct of various quality assurance tests and analyses to ensure that the product meets or exceeds specified quality standards and end-user requirements. He/She determines quality assurance testing objectives and reviews test plans to ensure alignment of quality testing governance framework and standards. He ensures that system tests are completed, documented and all problems are resolved before release to users. He anticipates internal and/or external business challenges and/or regulatory issues, and recommends process, product, or service improvements. He may lead projects or project steps within a broader project or have accountability for ongoing activities or objectives. He works in a team setting and is proficient in programming languages required by the organisation. He is familiar with international quality standards and processes, as well as applicable test automation tools. The Quality Assurance Manager champions high service standards in ensuring products are issue-free and is methodical in performing quality assurance testing, anticipating problems and resolving issues that occur. He applies knowledge from multiple disciplines to develop innovative improvement solutions and communicate his improvement recommendations effectively. sentences: - The Bus Maintenance Technician is responsible for executing maintenance tasks on designated bus sub-systems within their area of expertise. Key responsibilities include performing both corrective and preventive maintenance, troubleshooting issues to diagnose faults, and conducting functionality tests after repairs. Additionally, the technician assists with general housekeeping duties and the upkeep of workshop tools and equipment while adhering to Workplace Safety and Health (WSH) protocols. This role requires working in a bus workshop and/or depot environment on a rotating shift basis. The technician is technically skilled and has the chance to enhance their technical knowledge and abilities in maintaining various bus sub-systems. A collaborative team member, they contribute to achieving operational and maintenance goals. - The Quality Control Supervisor oversees the implementation of quality control procedures and assessments to ensure that products consistently meet or exceed established quality benchmarks and customer expectations. They set quality control testing goals and evaluate testing protocols to guarantee compliance with quality governance frameworks and standards. They ensure that system evaluations are thoroughly conducted, documented, and that any identified issues are addressed prior to product launch. They proactively identify potential internal and external challenges and regulatory concerns, recommending enhancements to processes, products, or services. They may take the lead on specific projects or components within larger initiatives and are responsible for ongoing tasks and objectives. They collaborate within a team environment and possess proficiency in relevant programming languages as required by the organization. They are knowledgeable about international quality standards and methodologies, as well as relevant test automation tools. The Quality Control Supervisor promotes exceptional service standards by ensuring that products are free from defects and is meticulous in executing quality control assessments, foreseeing issues and resolving them promptly. They leverage insights from various disciplines to craft innovative solutions for improvement and effectively communicate their recommendations. - 'The Quality Assurance Analyst conducts various assessments and evaluations to confirm that the software meets or falls below specified quality benchmarks and client expectations. They identify quality assurance testing goals and analyze testing strategies to ensure compliance with the quality assurance framework and standards. They ensure that system evaluations are performed, documented, and any issues are noted before deployment to clients. They react to internal and external business challenges and regulatory concerns, suggesting modifications to processes, products, or services. They may assist in projects or project components within a larger framework or have responsibility for continuous tasks or goals. They operate independently and are knowledgeable in programming languages relevant to the organization. They are acquainted with industry-specific quality standards and methods, as well as relevant testing automation tools. The Quality Assurance Analyst supports satisfactory service standards by ensuring software is error-prone and is thorough in conducting quality assessments, addressing issues that arise. They utilize knowledge from a singular discipline to develop standard solutions and communicate their findings adequately. ## Reason The negative description shifts focus from a managerial role in quality assurance to an analyst role, which significantly changes the level of responsibility and scope of work. The Quality Assurance Analyst is more focused on evaluating software rather than managing a team or overseeing quality control processes, altering the core job functions and outcomes.' - source_sentence: The Planning Supervisor (Fleet Management) assists in aircraft lifecycle planning activities and supports in planning of resources to accomplish fleet management functions. He/She generates sub-contract requisitions, conducts inventory planning and control, and reviews warranty claims. He schedules and tracks maintenance work orders as per scheduled maintenance plans. He analyses data from supply chain management (SCM) systems, monitors supplier performance and schedules regular programme reviews with customers and suppliers. He monitors compliance with airworthiness and legislative requirements, and the organisation's safety, health and quality systems. He implements continuous improvement initiatives and lean practices in fleet management to achieve schedule reliability and cost efficiency, improving aircraft performance and availability. He should be methodical and well-organised, and should possess planning and stakeholder management skills. He should be a team player, possess good verbal and written communication skills, and participate in cross-departmental problem-solving to ensure adherence to planned maintenance schedules and uninterrupted supply of planned resources. sentences: - 'The Fleet Operations Manager is responsible for overseeing the daily operations of the fleet and ensuring optimal utilization of resources to achieve operational goals. 
This role includes generating reports on fleet performance, managing driver schedules, and coordinating vehicle maintenance activities. The manager analyzes operational data to improve efficiency and reduce costs while ensuring compliance with transportation regulations and safety standards. They also monitor fuel consumption and implement strategies for cost savings across the fleet. The ideal candidate should demonstrate strong leadership and organizational skills, with the ability to communicate effectively with team members and stakeholders. A focus on continuous improvement and operational excellence is key to achieving fleet efficiency and effectiveness. ## Reason The negative description focuses on fleet operations management rather than aircraft lifecycle planning, significantly altering the core responsibilities and domain of the role. While both roles share similar titles and some operational aspects, the Fleet Operations Manager is more concerned with vehicle management and transportation regulations rather than aircraft maintenance and compliance.' - The Fleet Planning Coordinator plays a crucial role in overseeing aircraft lifecycle management and resource planning to ensure efficient fleet operations. This position involves generating subcontract requisitions, managing inventory planning and control, and reviewing warranty claims to enhance operational efficiency. The coordinator is responsible for scheduling and tracking maintenance work orders in alignment with maintenance plans. Additionally, they analyze data from supply chain management systems, monitor supplier performance, and conduct regular program reviews with both customers and suppliers. Compliance with airworthiness standards and legislative requirements is paramount, along with adherence to the organization’s safety, health, and quality protocols. 
The role includes implementing continuous improvement initiatives and lean practices to enhance schedule reliability and cost-effectiveness, ultimately improving aircraft performance and availability. The ideal candidate should be methodical, well-organized, and possess strong planning and stakeholder management skills. Effective communication abilities and a collaborative mindset are essential for successful cross-departmental problem-solving to maintain planned maintenance schedules and ensure a steady supply of necessary resources. - The Freight Operations Manager is tasked with overseeing and optimizing freight operational policies, standards, and procedures to align with the needs of the freight business and its clients. This role involves the implementation of effective freight solutions while managing business resources, including personnel, internal assets, and external vendors. The ideal candidate should be resourceful and analytical, capable of securing support from both internal and external stakeholders. Additionally, the Freight Operations Manager is expected to lead a team, make independent business decisions, and take responsibility for the department's profitability. - source_sentence: The Head of Education and Programmes oversees the delivery of educational programmes for a diverse group of audiences, ranging from senior executives to students and members of the public. These programmes are designed to broaden science, arts and/or cultural awareness and knowledge. He/She is also responsible for the management of budgets for these programmes, and leads the negotiation with external vendors, contractors and suppliers in the development and execution of these programmes. Innovative and insightful, he displays creativity and strong communication skills in bringing educational programmes to life for his audiences. 
He is adept in building and maintaining relationships with multiple stakeholders involved in the development and execution of the educational programmes. He also serves as a mentor to direct reports, and provides operational guidance to them on the development and execution of the educational programmes. He works in a flexible work-week as these educational programmes often occur through weekends and public holidays. sentences: - The Cybersecurity Product Manager leads the evaluation of information security and cyber threats related to product innovation, offering insights on necessary control measures in line with risk policies and standards. This role involves managing and coordinating responses to regulatory audits, inquiries, and inspections while ensuring that cybersecurity policies and standards are effectively established and enforced. The manager supervises the creation of analytical reports and implements strategic policies, guiding the ongoing monitoring and management of security operations and incident response efforts. He/She is responsible for team performance and results, fostering communication and collaboration with stakeholders regarding security protocols. With a strong understanding of cybersecurity standards and frameworks, the Cybersecurity Product Manager ensures compliance with the Cyber Security Act 2018. Utilizing a variety of cybersecurity monitoring tools and techniques tailored to the organization’s specific needs, he/she applies risk mitigation strategies to address potential cybersecurity challenges in products. The ideal candidate is proactive, analytical, and adept at foreseeing cybersecurity risks, ensuring they are addressed before they escalate. Excellent communication skills and the ability to cultivate a collaborative work environment are essential for success in this role. 
- 'The Director of Community Engagement is tasked with managing outreach initiatives aimed at various community groups, including local leaders, residents, and public service members. These initiatives are designed to enhance public awareness of health, safety, and environmental issues. The Director is also in charge of overseeing funding for these initiatives and leads discussions with local organizations, partners, and service providers to facilitate these outreach efforts. He/she demonstrates creativity and effective communication skills in promoting community engagement activities. Additionally, the Director builds and maintains relationships with diverse stakeholders involved in the planning and execution of community initiatives. He/she provides mentorship to team members and offers operational guidance in the development and execution of these outreach initiatives. The position requires adaptability, as many community engagement activities often take place during evenings and weekends. ## Reason The negative description differs from the anchor by focusing on community outreach rather than educational programmes, highlighting a distinct function and domain shift. The job role is "Director of Community Engagement."' - The Director of Learning and Development is responsible for the implementation and management of educational initiatives aimed at a wide range of participants, including corporate leaders, students, and community members. These initiatives are crafted to enhance understanding and appreciation of science, arts, and cultural topics. The Director also oversees budget allocations for these initiatives and plays a key role in negotiating with external partners, vendors, and service providers to ensure successful programme delivery. With a focus on innovation and creativity, he/she excels in communication, making educational initiatives engaging and impactful for all participants. 
Furthermore, the Director fosters strong relationships with various stakeholders involved in the planning and execution of learning programmes. He/she also mentors team members, offering operational support and guidance in the development and implementation of these educational initiatives. The role requires flexibility, as many learning programmes are scheduled on weekends and public holidays. - source_sentence: The Deck Officer (Special Limit) performs bridge navigation and deck watch duties, and voyage planning on board a ship operating within Singapores 'Special Limit' or about 30 nautical miles from Singapores port. He/She assists in search and rescue operations, and is responsible for maintaining the bridge navigational and communications, fire-fighting and life-saving equipment. He must pass a colour vision test and fulfil the requirements stipulated in the Standards of Training, Certification and Watchkeeping for Seafarers (STCW) issued by the International Maritime Organisation (IMO). sentences: - The Bus Maintenance Technician plays a vital role in assisting the team with regular bus servicing and preventive maintenance tasks. Responsibilities include organizing work activities, executing designated servicing and maintenance operations on various bus sub-systems, maintaining cleanliness of workshop tools and equipment, and strictly following Workplace Safety and Health (WSH) protocols. The technician may also be called upon to provide on-the-road assistance during bus breakdowns. This position requires working in a bus workshop or depot environment on a rotating shift schedule. The technician will have opportunities to collaborate with colleagues, enhancing their experience and technical skills in bus maintenance. - 'The Deck Officer (Unlimited) carries out bridge navigation and deck watch responsibilities, along with voyage planning for a ship operating beyond Singapore''s ''Special Limit''. 
This position includes participating in search and rescue operations and is tasked with ensuring the upkeep of navigational systems, communication devices, and emergency equipment. Applicants are required to pass a color vision assessment and adhere to the standards specified in the International Maritime Organisation (IMO) guidelines for maritime personnel. ## Reason The negative description presents a Deck Officer (Unlimited) role, which operates beyond the ''Special Limit'' as opposed to within it, indicating a different scope of responsibility and operational context. The focus on an ''Unlimited'' capacity suggests a broader range of duties and navigation responsibilities compared to the original role.' - The Navigation Officer (Special Limit) is responsible for conducting bridge navigation and overseeing deck watch operations while planning voyages aboard a vessel operating within Singapore's 'Special Limit', approximately 30 nautical miles from the port. This role involves assisting in search and rescue missions and ensuring the proper maintenance of navigational and communication equipment, as well as fire-fighting and life-saving apparatus. Candidates must pass a color vision test and meet the criteria outlined in the Standards of Training, Certification and Watchkeeping for Seafarers (STCW) as mandated by the International Maritime Organisation (IMO). datasets: - Fatin757/ssf-train-valid pipeline_tag: sentence-similarity library_name: sentence-transformers --- # SentenceTransformer based on sentence-transformers/all-MiniLM-L6-v2 This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) on the [ssf-train-valid](https://huggingface.co/datasets/Fatin757/ssf-train-valid) dataset. 
It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

## Model Details

### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) <!-- at revision c9745ed1d9f207416be6d2e6f8de32d1f16199bf -->
- **Maximum Sequence Length:** 256 tokens
- **Output Dimensionality:** 384 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
    - [ssf-train-valid](https://huggingface.co/datasets/Fatin757/ssf-train-valid)
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->

### Model Sources

- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)

### Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 256, 'do_lower_case': False, 'architecture': 'BertModel'})
  (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
```

## Usage

### Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```

Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("Fatin757/ssf-retriever-modernbert-embed-base")
# Run inference
sentences = [
    "The Deck Officer (Special Limit) performs bridge navigation and deck watch duties, and voyage planning on board a ship operating within Singapores 'Special Limit' or about 30 nautical miles from Singapores port. He/She assists in search and rescue operations, and is responsible for maintaining the bridge navigational and communications, fire-fighting and life-saving equipment. He must pass a colour vision test and fulfil the requirements stipulated in the Standards of Training, Certification and Watchkeeping for Seafarers (STCW) issued by the International Maritime Organisation (IMO).",
    "The Navigation Officer (Special Limit) is responsible for conducting bridge navigation and overseeing deck watch operations while planning voyages aboard a vessel operating within Singapore's 'Special Limit', approximately 30 nautical miles from the port. This role involves assisting in search and rescue missions and ensuring the proper maintenance of navigational and communication equipment, as well as fire-fighting and life-saving apparatus. Candidates must pass a color vision test and meet the criteria outlined in the Standards of Training, Certification and Watchkeeping for Seafarers (STCW) as mandated by the International Maritime Organisation (IMO).",
    "The Deck Officer (Unlimited) carries out bridge navigation and deck watch responsibilities, along with voyage planning for a ship operating beyond Singapore's 'Special Limit'. This position includes participating in search and rescue operations and is tasked with ensuring the upkeep of navigational systems, communication devices, and emergency equipment. Applicants are required to pass a color vision assessment and adhere to the standards specified in the International Maritime Organisation (IMO) guidelines for maritime personnel.\n\n## Reason\nThe negative description presents a Deck Officer (Unlimited) role, which operates beyond the 'Special Limit' as opposed to within it, indicating a different scope of responsibility and operational context. The focus on an 'Unlimited' capacity suggests a broader range of duties and navigation responsibilities compared to the original role.",
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities)
# tensor([[1.0000, 0.9752, 0.4232],
#         [0.9752, 1.0000, 0.4447],
#         [0.4232, 0.4447, 1.0000]])
```

<!--
### Direct Usage (Transformers)

<details><summary>Click to see the direct usage in Transformers</summary>

</details>
-->

<!--
### Downstream Usage (Sentence Transformers)

You can finetune this model on your own dataset.

<details><summary>Click to expand</summary>

</details>
-->

<!--
### Out-of-Scope Use

*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->

<!--
## Bias, Risks and Limitations

*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->

<!--
### Recommendations

*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->

## Training Details

### Training Dataset

#### ssf-train-valid

* Dataset: [ssf-train-valid](https://huggingface.co/datasets/Fatin757/ssf-train-valid) at [dc3bb19](https://huggingface.co/datasets/Fatin757/ssf-train-valid/tree/dc3bb190639c78784f80f4eeae998321843d93e5)
* Size: 6,032 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
  |         | anchor | positive | negative |
  |:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
  | type    | string | string | string |
  | details | <ul><li>min: 64 tokens</li><li>mean: 168.21 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 70 tokens</li><li>mean: 162.17 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 73 tokens</li><li>mean: 175.32 tokens</li><li>max: 256 tokens</li></ul> |
* Samples:
  | anchor | positive | negative |
  |:-------|:---------|:---------|
  | <code>The Assistant Research Director works role in the field of social work research. He/She has expertise and experience in domains under social work research in order to assist in supervising advance research designs, methods, collection and analysis of data, project management and collaborations with external organisations. He leads the formulation of systemic, collaborative research, integration of research findings to social service, fund management, administrative and operational functions, and strategic foreign analysis with professionals. He is also responsible for advising external organisations and related ministries on social work-related protocols and programmes. A highly experienced researcher who is committed, service-orientated and possesses the willingness to mentor, the Assistant Research Director works in academic settings. He also works in collaboration with other agencies and ministries and academic institutions in the course of his work.</code> | <code>The Associate Director of Social Work Research plays a pivotal role in advancing the field of social work through comprehensive research initiatives. This individual possesses extensive expertise in various aspects of social work research, enabling them to oversee complex research designs, methodologies, and data analysis. They are responsible for managing projects and fostering collaborations with external entities, ensuring that research findings are effectively integrated into social services. Additionally, the Associate Director handles fund management, administrative tasks, and strategic analyses in collaboration with professionals in the field. They also provide guidance to external organizations and government ministries regarding social work protocols and programs. A dedicated and experienced researcher, the Associate Director is committed to service excellence and mentoring others, working closely with academic institutions and other agencies throughout their career.</code> | <code>The Senior Director of Social Services oversees the operations within community service organizations. This leader has substantial experience in managing various social service programs and is responsible for ensuring effective service delivery and compliance with regulations. They lead the development of community outreach initiatives, program evaluations, and inter-agency collaborations. The Senior Director also manages budgets, administrative functions, and strategic planning with a focus on enhancing service quality. They provide support to local government bodies and community organizations regarding social service practices and initiatives. A seasoned professional in the field, the Senior Director is dedicated to community engagement and workforce development, often working with different stakeholders to improve service outcomes.<br><br>## Reason<br>The negative description focuses on a different role within social services rather than research, emphasizing operational management and comm...</code> |
  | <code>The Senior Assistant Engineer/Assistant Engineer (Automatic Fare Collection) is responsible for supervising his/her team in performing preventive and corrective maintenance work on Automatic Fare Collection (AFC) systems. His duties also include proposing workflow improvements to improve the reliability of the AFC systems. He also manages teams performance in achieving established Key Performance Indicators (KPIs), as well as facilitating the work of external contractors. He is required to carry out his duties in the depot, workshop and/or at various train stations during train operating hours. He is meticulous, analytical, conducts hi work and leads his team in a systematic approach to resolve technical issues and challenges.</code> | <code>The Lead Systems Engineer (Automatic Fare Collection) oversees a team dedicated to executing preventive and corrective maintenance on Automatic Fare Collection (AFC) systems. This role involves recommending workflow enhancements to boost the reliability of AFC systems. Additionally, the Lead Systems Engineer is responsible for managing team performance to meet established Key Performance Indicators (KPIs) and coordinating the efforts of external contractors. The position requires working in depots, workshops, and various train stations during operational hours. An ideal candidate is detail-oriented, analytical, and employs a systematic approach to lead the team in addressing technical challenges effectively.</code> | <code>The Junior Systems Analyst (Automatic Payment Processing) assists in the execution of routine maintenance tasks related to Automatic Payment Processing systems. This position involves implementing minor adjustments to enhance system performance. The Junior Systems Analyst also tracks team progress towards achieving set performance metrics and collaborates with external vendors. The role requires presence in offices and service centers during operational hours. A successful candidate is organized, detail-focused, and supports the team in troubleshooting system issues and operational hurdles.<br><br>## Reason<br>The negative description outlines the role of a Junior Systems Analyst in Automatic Payment Processing, which differs from the original role focused on Automatic Fare Collection systems. The responsibilities and scope are distinct, emphasizing a lower seniority level and a different function within a related domain.</code> |
  | <code>The Senior Engineer/Engineer (Permanent Way and Civil Structure) leads multiple teams in performing preventive and corrective maintenance on tracks, railway reserves and buildings. He/She is accountable for planning the maintenance work activities, providing technical advice to team members as well as supervising complex issues pertaining to fault analysis and testing of permanent ways and civil structures. He is also involved in the engagement and management of external contractors and ensuring the achievement of operating standards and quality standards. He is required to work in shifts and carries out his duties at various rail premises such as on train tracks, in train tunnels and at various train stations. He has a strong understanding of civil and structural design and is methodical in approaching engineering challenges. He is a team player with good interpersonal skills and is able to demonstrate strong supervisory and leadership skills to implement work processes and systems to...</code> | <code>The Lead Civil Engineer for Rail Infrastructure is responsible for overseeing multiple teams dedicated to the preventive and corrective maintenance of railway tracks, reserves, and associated structures. This role entails planning maintenance activities, offering technical guidance to team members, and addressing complex issues related to fault analysis and testing of civil infrastructure. The Lead Civil Engineer also manages external contractors to ensure compliance with operating and quality standards. The position requires shift work across various rail facilities, including train tracks, tunnels, and stations. A deep understanding of civil and structural design is essential, along with a methodical approach to engineering challenges. The ideal candidate is a collaborative team player with excellent interpersonal skills and proven supervisory and leadership abilities to effectively implement work processes that meet operational needs.</code> | <code>The Junior Civil Engineer for Urban Development assists in various projects related to the maintenance and construction of urban infrastructure, such as roads, bridges, and public facilities. This position focuses on supporting senior engineers in the planning and execution of maintenance tasks while providing basic technical assistance. The Junior Civil Engineer engages with contractors to facilitate project execution but is not primarily responsible for ensuring quality standards. The role does not involve shift work and is typically based in an office environment, with occasional site visits. A foundational knowledge of civil engineering principles is required, and the candidate should exhibit teamwork skills and a willingness to learn from experienced engineers to contribute to project success.<br><br>## Reason<br>The negative description is for a Junior Civil Engineer in Urban Development, which differs from the Senior Engineer role in the anchor due to its focus on urban infrastructure ra...</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
  ```json
  {
      "scale": 20.0,
      "similarity_fct": "cos_sim",
      "gather_across_devices": false
  }
  ```

### Evaluation Dataset

#### ssf-train-valid

* Dataset: [ssf-train-valid](https://huggingface.co/datasets/Fatin757/ssf-train-valid) at [dc3bb19](https://huggingface.co/datasets/Fatin757/ssf-train-valid/tree/dc3bb190639c78784f80f4eeae998321843d93e5)
* Size: 1,508 evaluation samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
  |         | anchor | positive | negative |
  |:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
  | type    | string | string | string |
  | details | <ul><li>min: 58 tokens</li><li>mean: 167.27 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 61 tokens</li><li>mean: 161.99 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 79 tokens</li><li>mean: 175.94 tokens</li><li>max: 256 tokens</li></ul> |
* Samples:
  | anchor | positive | negative |
|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-----------------------
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | <code>The Chief Executive Officer/Chief Operating Officer/Managing Director/General Manager/President defines the long-term strategic direction to grow the business in line with the organisations overall vision, mission and values. He/She translates broad goals into achievable steps, anticipates and stays ahead of trends, and takes advantage of business opportunities. He represents the organisation with customers, investors, and business partners, and holds responsibility for fostering a culture of workplace safety and health and adherence to industry quality standards. He inspires the organisation towards achieving business goals and fulfilling the vision, mission and values by striving for continuous improvement, driving innovation and equipping the organisation to embrace change. 
He possesses excellent analytical, problem-solving and leadership skills and is an effective people leader.</code> | <code>The Chief Executive Officer (CEO) is responsible for establishing the long-term strategic vision to enhance the growth of the organization in alignment with its core values and mission. This role involves translating overarching objectives into actionable plans, proactively identifying emerging trends, and capitalizing on new business opportunities. The CEO serves as the primary representative of the organization to clients, investors, and partners, while also ensuring a commitment to workplace safety, health, and compliance with industry quality standards. By fostering a culture of continuous improvement and innovation, the CEO motivates the organization to achieve its goals and fulfill its vision. Strong analytical, problem-solving, and leadership capabilities are essential, as well as the ability to effectively lead and inspire teams.</code> | <code>The Chief Executive Officer (CEO) of a Non-Profit Organization is tasked with defining the long-term strategic initiatives to enhance community outreach in line with the organization’s mission and values. This position involves translating general objectives into specific community programs, predicting and responding to social trends, and leveraging partnerships for funding opportunities. The CEO represents the organization to stakeholders, donors, and community leaders, while being responsible for promoting a culture of volunteer engagement and compliance with regulatory standards. By driving community involvement and fostering innovation, the CEO encourages the organization to meet its outreach goals and fulfill its mission. 
Exceptional communication, problem-solving, and leadership skills are crucial, along with the ability to effectively manage a diverse group of volunteers.<br><br>## Reason<br>The negative description differs from the anchor by focusing on a non-profit context rather than ...</code> | | <code>The Assistant Technical Superintendent monitors ship operations and evaluates technical aspects of vessels for maintenance needs. He/She collaborates with vessel operators to develop the proper technical repair plans to address identified maintenance needs, and supervises maintenance procedures to ensure compliance with port rules and regulations, as well as international codes and regulations, including the International Maritime Organisation (IMO) code, International Labour Organisation (ILO) regulations, the International Safety Management (ISM) code, International Ship and Port Facility Security (ISPS) code, Maritime Labour Convention (MLC) regulations, and relevant ISO standards. He is also in-charge of crew-level administration matters. He is flexible and possesses strong initiative and good communication skills</code> | <code>The Marine Technical Supervisor is responsible for overseeing ship operations and assessing the technical requirements for vessel maintenance. This role involves working closely with vessel operators to create effective technical repair strategies that address maintenance issues. The Marine Technical Supervisor ensures that all maintenance activities adhere to port regulations and international standards, including the codes set forth by the International Maritime Organisation (IMO), International Labour Organisation (ILO), International Safety Management (ISM), International Ship and Port Facility Security (ISPS), Maritime Labour Convention (MLC), and applicable ISO standards. Additionally, this position includes managing crew-related administrative tasks. 
The ideal candidate should demonstrate flexibility, strong initiative, and excellent communication skills.</code> | <code>The Junior Marine Safety Officer is tasked with ensuring compliance with safety regulations and conducting safety drills on board vessels. This role focuses on monitoring safety protocols and providing training to crew members to maintain a safe working environment. The Junior Marine Safety Officer will also be responsible for reporting safety incidents and recommending improvements to safety procedures. This position requires attention to detail and the ability to communicate effectively with crew members. However, it does not involve technical evaluations of vessel maintenance or management of repair plans.</code> | | <code>Make-up and/or Hair Artists are responsible for applying make-up and hairstyles for cast before and during a performance to capture their visual appearance in line with the desired look and vision of the production as outlined by the make-up and hair design plans. This may include the application of both cosmetic and special effects make-up. They are responsible for translating the vision for each cast into their physical appearance through effective make-up and hairstyles. Make-up and/or Hair Artists need to be aware of factors such as production lighting that may impact the appearance of make-up and hair. They should also consult with cast on any skincare concerns or allergic precautions and be able to cater to cast of all age groups, genders and racial/ethnic backgrounds. In productions where cast are responsible for their own make-up and hair, Make-up and/or Hair Artists may provide additional support and assistance. 
Make-up and/or Hair Artists are typically present in larger venue...</code> | <code>The Makeup and Hair Designer is tasked with creating and applying makeup and hairstyles for performers prior to and during shows, ensuring their visual presentation aligns with the artistic vision of the production as outlined in the design plans. This includes the use of both cosmetic and special effects makeup. The designer is responsible for translating the creative vision into the performers' physical appearances through skillful application of makeup and hairstyles. They must consider elements such as production lighting that can affect the final look and engage in discussions with performers regarding any skincare issues or allergies. The role requires adaptability to work with individuals of all ages, genders, and diverse backgrounds. In larger productions, the Makeup and Hair Designer typically operates within a dedicated team, while in smaller settings, these responsibilities may be shared with other production staff.</code> | <code>The Makeup and Hair Coordinator is responsible for overseeing the application of makeup and hairstyles for models during photo shoots to ensure their appearance meets the specific aesthetic requirements of the campaign as defined by the creative team. This may involve the use of both traditional and avant-garde makeup techniques. The coordinator is tasked with interpreting the creative brief into the models' looks through precise makeup and hairstyling. They must take into account factors such as camera lighting that could influence the makeup and hair appearance. The role includes consulting with models about any skin sensitivities or allergies and requires versatility in working with models of various ages, genders, and cultural backgrounds. 
In large-scale campaigns, the Makeup and Hair Coordinator may lead a team of artists, while in smaller projects, they may personally handle the application of makeup and hair.<br><br>## Reason<br>The negative description presents a Makeup and Hair Coordin...</code> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim", "gather_across_devices": false } ``` ### Training Hyperparameters #### Non-Default Hyperparameters - `eval_strategy`: epoch - `per_device_train_batch_size`: 32 - `per_device_eval_batch_size`: 16 - `gradient_accumulation_steps`: 16 - `learning_rate`: 2e-05 - `num_train_epochs`: 5 - `lr_scheduler_type`: cosine - `warmup_ratio`: 0.1 - `bf16`: True - `tf32`: False - `load_best_model_at_end`: True - `batch_sampler`: no_duplicates #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: epoch - `prediction_loss_only`: True - `per_device_train_batch_size`: 32 - `per_device_eval_batch_size`: 16 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 16 - `eval_accumulation_steps`: None - `torch_empty_cache_steps`: None - `learning_rate`: 2e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1.0 - `num_train_epochs`: 5 - `max_steps`: -1 - `lr_scheduler_type`: cosine - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.1 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: 
False - `use_ipex`: False - `bf16`: True - `fp16`: False - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: False - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: True - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch_fused - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: None - `hub_always_push`: False - `hub_revision`: None - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `include_for_metrics`: [] - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - 
`torch_compile_mode`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `eval_on_start`: False - `use_liger_kernel`: False - `liger_kernel_config`: None - `eval_use_gather_object`: False - `average_tokens_across_devices`: False - `prompts`: None - `batch_sampler`: no_duplicates - `multi_dataset_batch_sampler`: proportional - `router_mapping`: {} - `learning_rate_mapping`: {} </details> ### Training Logs | Epoch | Step | Training Loss | Validation Loss | |:-------:|:------:|:-------------:|:---------------:| | 1.0 | 12 | 0.1362 | 0.0076 | | 2.0 | 24 | 0.0121 | 0.0041 | | 3.0 | 36 | 0.0082 | 0.0035 | | 4.0 | 48 | 0.0072 | 0.0033 | | **5.0** | **60** | **0.0071** | **0.0033** | * The bold row denotes the saved checkpoint. ### Framework Versions - Python: 3.12.11 - Sentence Transformers: 5.1.0 - Transformers: 4.55.2 - PyTorch: 2.8.0+cu128 - Accelerate: 1.10.0 - Datasets: 4.0.0 - Tokenizers: 0.21.4 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` #### MultipleNegativesRankingLoss ```bibtex @misc{henderson2017efficient, title={Efficient Natural Language Response Suggestion for Smart Reply}, author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil}, year={2017}, eprint={1705.00652}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card 
Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
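The loss configuration above (`MultipleNegativesRankingLoss` with `scale: 20.0` and `cos_sim`) can be sketched as follows. This is an illustrative numpy re-implementation of the in-batch-negatives objective, not the sentence-transformers code itself; the function name `mnrl_loss` is made up for the example.

```python
import numpy as np

def mnrl_loss(anchors, positives, scale=20.0):
    """In-batch ranking loss sketch: row i of `positives` is the positive
    for row i of `anchors`; every other row in the batch acts as a negative."""
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = scale * (a @ p.T)          # scaled cosine similarities
    diag = np.diag(logits)              # each anchor vs. its own positive
    # cross-entropy with the matching index as the target class
    logsumexp = np.log(np.exp(logits).sum(axis=1))
    return float(np.mean(logsumexp - diag))
```

When each anchor is most similar to its own positive, the loss approaches zero; mismatched pairs drive it up, which is what pushes matched job descriptions together in embedding space.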
vukrosic/hybrid-llm
vukrosic
2025-08-21T07:23:01Z
0
1
null
[ "pytorch", "hybrid_llm", "region:us" ]
null
2025-08-21T06:25:33Z
# Hybrid LLM Model This is a hybrid transformer-Mamba model uploaded via script. ## Model Details - **Architecture**: Hybrid Transformer-Mamba - **Parameters**: 43,819,776 - **Config**: { "vocab_size": 49152, "hidden_size": 384, "num_layers": 8, "num_heads": 8, "ssm_state_size": 16, "conv_kernel": 4, "expand_factor": 2, "layer_pattern": "MAMAMAMA", "max_seq_len": 512, "batch_size": 32, "num_documents": 500, "learning_rate": 0.0005, "num_steps": 500, "dropout": 0.1, "grad_clip": 1.0, "log_every": 50, "experiment_name": "pattern_ablation", "pattern_name": "MAMAMAMA", "eval_every": 100, "save_every": 2000, "num_eval_batches": 50, "hf_repo": "vukrosic/hybrid-llm" } ## Usage ```python from transformers import AutoModelForCausalLM model = AutoModelForCausalLM.from_pretrained("vukrosic/hybrid-llm") ```
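Assuming `"M"` marks a Mamba block and `"A"` an attention block (consistent with the hybrid transformer-Mamba description and `num_layers: 8`), the `layer_pattern` string in the config above could be decoded with a small helper like this. The helper is hypothetical, not part of the repo:

```python
# Hypothetical helper: decode the config's `layer_pattern` string
# into per-layer block types ("M" -> mamba, "A" -> attention).
def parse_layer_pattern(pattern: str):
    mapping = {"M": "mamba", "A": "attention"}
    return [mapping[c] for c in pattern]

layers = parse_layer_pattern("MAMAMAMA")  # the pattern from the config above
```

Note that since `hybrid_llm` is a custom architecture, loading it via `AutoModelForCausalLM.from_pretrained` may additionally require `trust_remote_code=True`.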
teysty/vjepa2-vitl-fpc16-256-ssv2-fdet_64frames_1clip_5epochs
teysty
2025-08-21T07:22:29Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "vjepa2", "video-classification", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
video-classification
2025-08-21T07:21:39Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
unitova/blockassist-bc-zealous_sneaky_raven_1755759271
unitova
2025-08-21T07:22:12Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "zealous sneaky raven", "arxiv:2504.07091", "region:us" ]
null
2025-08-21T07:22:08Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - zealous sneaky raven --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1755760803
IvanJAjebu
2025-08-21T07:21:17Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "thorny slender capybara", "arxiv:2504.07091", "region:us" ]
null
2025-08-21T07:21:02Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - thorny slender capybara --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
kojeklollipop/blockassist-bc-spotted_amphibious_stork_1755759137
kojeklollipop
2025-08-21T07:18:43Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "spotted amphibious stork", "arxiv:2504.07091", "region:us" ]
null
2025-08-21T07:18:39Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - spotted amphibious stork --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1755760478
IvanJAjebu
2025-08-21T07:15:48Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "thorny slender capybara", "arxiv:2504.07091", "region:us" ]
null
2025-08-21T07:15:39Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - thorny slender capybara --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
aleebaster/blockassist-bc-sly_eager_boar_1755758750
aleebaster
2025-08-21T07:11:31Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "sly eager boar", "arxiv:2504.07091", "region:us" ]
null
2025-08-21T07:11:25Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - sly eager boar --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
thanobidex/blockassist-bc-colorful_shiny_hare_1755758737
thanobidex
2025-08-21T07:11:26Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "colorful shiny hare", "arxiv:2504.07091", "region:us" ]
null
2025-08-21T07:11:22Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - colorful shiny hare --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
chihangc/1823-whisper-1823only-20250821-1
chihangc
2025-08-21T07:08:30Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-08-21T07:08:25Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
mradermacher/LFM2-VL-450M-i1-GGUF
mradermacher
2025-08-21T07:08:03Z
169
1
transformers
[ "transformers", "gguf", "liquid", "lfm2", "lfm2-vl", "edge", "en", "base_model:LiquidAI/LFM2-VL-450M", "base_model:quantized:LiquidAI/LFM2-VL-450M", "license:other", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
2025-08-17T13:53:22Z
--- base_model: LiquidAI/LFM2-VL-450M language: - en library_name: transformers license: other license_link: LICENSE license_name: lfm1.0 mradermacher: readme_rev: 1 quantized_by: mradermacher tags: - liquid - lfm2 - lfm2-vl - edge --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> <!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K_M Q4_0 IQ3_XS Q4_1 IQ3_S --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: --> weighted/imatrix quants of https://huggingface.co/LiquidAI/LFM2-VL-450M <!-- provided-files --> ***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#LFM2-VL-450M-i1-GGUF).*** static quants are available at https://huggingface.co/mradermacher/LFM2-VL-450M-GGUF **This is a vision model - mmproj files (if any) will be in the [static repository](https://huggingface.co/mradermacher/LFM2-VL-450M-GGUF).** ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. 
IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/LFM2-VL-450M-i1-GGUF/resolve/main/LFM2-VL-450M.imatrix.gguf) | imatrix | 0.1 | imatrix file (for creating your own quants) | | [GGUF](https://huggingface.co/mradermacher/LFM2-VL-450M-i1-GGUF/resolve/main/LFM2-VL-450M.i1-IQ1_S.gguf) | i1-IQ1_S | 0.2 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/LFM2-VL-450M-i1-GGUF/resolve/main/LFM2-VL-450M.i1-IQ1_M.gguf) | i1-IQ1_M | 0.2 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/LFM2-VL-450M-i1-GGUF/resolve/main/LFM2-VL-450M.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 0.2 | | | [GGUF](https://huggingface.co/mradermacher/LFM2-VL-450M-i1-GGUF/resolve/main/LFM2-VL-450M.i1-IQ2_XS.gguf) | i1-IQ2_XS | 0.2 | | | [GGUF](https://huggingface.co/mradermacher/LFM2-VL-450M-i1-GGUF/resolve/main/LFM2-VL-450M.i1-IQ2_S.gguf) | i1-IQ2_S | 0.2 | | | [GGUF](https://huggingface.co/mradermacher/LFM2-VL-450M-i1-GGUF/resolve/main/LFM2-VL-450M.i1-IQ2_M.gguf) | i1-IQ2_M | 0.2 | | | [GGUF](https://huggingface.co/mradermacher/LFM2-VL-450M-i1-GGUF/resolve/main/LFM2-VL-450M.i1-Q2_K_S.gguf) | i1-Q2_K_S | 0.3 | very low quality | | [GGUF](https://huggingface.co/mradermacher/LFM2-VL-450M-i1-GGUF/resolve/main/LFM2-VL-450M.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 0.3 | lower quality | | [GGUF](https://huggingface.co/mradermacher/LFM2-VL-450M-i1-GGUF/resolve/main/LFM2-VL-450M.i1-Q2_K.gguf) | i1-Q2_K | 0.3 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/LFM2-VL-450M-i1-GGUF/resolve/main/LFM2-VL-450M.i1-IQ3_XS.gguf) | i1-IQ3_XS | 0.3 | | | [GGUF](https://huggingface.co/mradermacher/LFM2-VL-450M-i1-GGUF/resolve/main/LFM2-VL-450M.i1-IQ3_S.gguf) | i1-IQ3_S | 0.3 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/LFM2-VL-450M-i1-GGUF/resolve/main/LFM2-VL-450M.i1-Q3_K_S.gguf) | i1-Q3_K_S | 0.3 | IQ3_XS probably better | | 
[GGUF](https://huggingface.co/mradermacher/LFM2-VL-450M-i1-GGUF/resolve/main/LFM2-VL-450M.i1-IQ3_M.gguf) | i1-IQ3_M | 0.3 | | | [GGUF](https://huggingface.co/mradermacher/LFM2-VL-450M-i1-GGUF/resolve/main/LFM2-VL-450M.i1-Q3_K_M.gguf) | i1-Q3_K_M | 0.3 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/LFM2-VL-450M-i1-GGUF/resolve/main/LFM2-VL-450M.i1-Q3_K_L.gguf) | i1-Q3_K_L | 0.3 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/LFM2-VL-450M-i1-GGUF/resolve/main/LFM2-VL-450M.i1-IQ4_XS.gguf) | i1-IQ4_XS | 0.3 | | | [GGUF](https://huggingface.co/mradermacher/LFM2-VL-450M-i1-GGUF/resolve/main/LFM2-VL-450M.i1-IQ4_NL.gguf) | i1-IQ4_NL | 0.3 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/LFM2-VL-450M-i1-GGUF/resolve/main/LFM2-VL-450M.i1-Q4_0.gguf) | i1-Q4_0 | 0.3 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/LFM2-VL-450M-i1-GGUF/resolve/main/LFM2-VL-450M.i1-Q4_K_S.gguf) | i1-Q4_K_S | 0.3 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/LFM2-VL-450M-i1-GGUF/resolve/main/LFM2-VL-450M.i1-Q4_K_M.gguf) | i1-Q4_K_M | 0.3 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/LFM2-VL-450M-i1-GGUF/resolve/main/LFM2-VL-450M.i1-Q4_1.gguf) | i1-Q4_1 | 0.3 | | | [GGUF](https://huggingface.co/mradermacher/LFM2-VL-450M-i1-GGUF/resolve/main/LFM2-VL-450M.i1-Q5_K_S.gguf) | i1-Q5_K_S | 0.4 | | | [GGUF](https://huggingface.co/mradermacher/LFM2-VL-450M-i1-GGUF/resolve/main/LFM2-VL-450M.i1-Q5_K_M.gguf) | i1-Q5_K_M | 0.4 | | | [GGUF](https://huggingface.co/mradermacher/LFM2-VL-450M-i1-GGUF/resolve/main/LFM2-VL-450M.i1-Q6_K.gguf) | i1-Q6_K | 0.4 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model 
Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
mradermacher/LFM2-VL-1.6B-i1-GGUF
mradermacher
2025-08-21T07:07:59Z
995
1
transformers
[ "transformers", "gguf", "liquid", "lfm2", "lfm2-vl", "edge", "en", "base_model:LiquidAI/LFM2-VL-1.6B", "base_model:quantized:LiquidAI/LFM2-VL-1.6B", "license:other", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
2025-08-17T13:56:21Z
--- base_model: LiquidAI/LFM2-VL-1.6B language: - en library_name: transformers license: other license_link: LICENSE license_name: lfm1.0 mradermacher: readme_rev: 1 quantized_by: mradermacher tags: - liquid - lfm2 - lfm2-vl - edge --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> <!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K_M Q4_0 IQ3_XS Q4_1 IQ3_S --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: --> weighted/imatrix quants of https://huggingface.co/LiquidAI/LFM2-VL-1.6B <!-- provided-files --> ***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#LFM2-VL-1.6B-i1-GGUF).*** static quants are available at https://huggingface.co/mradermacher/LFM2-VL-1.6B-GGUF **This is a vision model - mmproj files (if any) will be in the [static repository](https://huggingface.co/mradermacher/LFM2-VL-1.6B-GGUF).** ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. 
IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/LFM2-VL-1.6B-i1-GGUF/resolve/main/LFM2-VL-1.6B.imatrix.gguf) | imatrix | 0.1 | imatrix file (for creating your own quants) | | [GGUF](https://huggingface.co/mradermacher/LFM2-VL-1.6B-i1-GGUF/resolve/main/LFM2-VL-1.6B.i1-IQ1_S.gguf) | i1-IQ1_S | 0.4 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/LFM2-VL-1.6B-i1-GGUF/resolve/main/LFM2-VL-1.6B.i1-IQ1_M.gguf) | i1-IQ1_M | 0.4 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/LFM2-VL-1.6B-i1-GGUF/resolve/main/LFM2-VL-1.6B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 0.5 | | | [GGUF](https://huggingface.co/mradermacher/LFM2-VL-1.6B-i1-GGUF/resolve/main/LFM2-VL-1.6B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 0.5 | | | [GGUF](https://huggingface.co/mradermacher/LFM2-VL-1.6B-i1-GGUF/resolve/main/LFM2-VL-1.6B.i1-IQ2_S.gguf) | i1-IQ2_S | 0.5 | | | [GGUF](https://huggingface.co/mradermacher/LFM2-VL-1.6B-i1-GGUF/resolve/main/LFM2-VL-1.6B.i1-IQ2_M.gguf) | i1-IQ2_M | 0.5 | | | [GGUF](https://huggingface.co/mradermacher/LFM2-VL-1.6B-i1-GGUF/resolve/main/LFM2-VL-1.6B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 0.6 | very low quality | | [GGUF](https://huggingface.co/mradermacher/LFM2-VL-1.6B-i1-GGUF/resolve/main/LFM2-VL-1.6B.i1-Q2_K.gguf) | i1-Q2_K | 0.6 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/LFM2-VL-1.6B-i1-GGUF/resolve/main/LFM2-VL-1.6B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 0.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/LFM2-VL-1.6B-i1-GGUF/resolve/main/LFM2-VL-1.6B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 0.6 | | | [GGUF](https://huggingface.co/mradermacher/LFM2-VL-1.6B-i1-GGUF/resolve/main/LFM2-VL-1.6B.i1-IQ3_S.gguf) | i1-IQ3_S | 0.7 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/LFM2-VL-1.6B-i1-GGUF/resolve/main/LFM2-VL-1.6B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 0.7 | IQ3_XS probably better | | 
[GGUF](https://huggingface.co/mradermacher/LFM2-VL-1.6B-i1-GGUF/resolve/main/LFM2-VL-1.6B.i1-IQ3_M.gguf) | i1-IQ3_M | 0.7 | | | [GGUF](https://huggingface.co/mradermacher/LFM2-VL-1.6B-i1-GGUF/resolve/main/LFM2-VL-1.6B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 0.7 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/LFM2-VL-1.6B-i1-GGUF/resolve/main/LFM2-VL-1.6B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 0.7 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/LFM2-VL-1.6B-i1-GGUF/resolve/main/LFM2-VL-1.6B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 0.8 | | | [GGUF](https://huggingface.co/mradermacher/LFM2-VL-1.6B-i1-GGUF/resolve/main/LFM2-VL-1.6B.i1-IQ4_NL.gguf) | i1-IQ4_NL | 0.8 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/LFM2-VL-1.6B-i1-GGUF/resolve/main/LFM2-VL-1.6B.i1-Q4_0.gguf) | i1-Q4_0 | 0.8 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/LFM2-VL-1.6B-i1-GGUF/resolve/main/LFM2-VL-1.6B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 0.8 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/LFM2-VL-1.6B-i1-GGUF/resolve/main/LFM2-VL-1.6B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 0.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/LFM2-VL-1.6B-i1-GGUF/resolve/main/LFM2-VL-1.6B.i1-Q4_1.gguf) | i1-Q4_1 | 0.9 | | | [GGUF](https://huggingface.co/mradermacher/LFM2-VL-1.6B-i1-GGUF/resolve/main/LFM2-VL-1.6B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 0.9 | | | [GGUF](https://huggingface.co/mradermacher/LFM2-VL-1.6B-i1-GGUF/resolve/main/LFM2-VL-1.6B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 0.9 | | | [GGUF](https://huggingface.co/mradermacher/LFM2-VL-1.6B-i1-GGUF/resolve/main/LFM2-VL-1.6B.i1-Q6_K.gguf) | i1-Q6_K | 1.1 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model 
Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
milliarderdol/blockassist-bc-roaring_rough_scorpion_1755757912
milliarderdol
2025-08-21T07:07:02Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "roaring rough scorpion", "arxiv:2504.07091", "region:us" ]
null
2025-08-21T07:06:21Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - roaring rough scorpion --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
chainway9/blockassist-bc-untamed_quick_eel_1755758364
chainway9
2025-08-21T07:06:25Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "untamed quick eel", "arxiv:2504.07091", "region:us" ]
null
2025-08-21T07:06:21Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - untamed quick eel --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
vicvreth/blockassist-bc-stocky_graceful_puma_1755759780
vicvreth
2025-08-21T07:04:52Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "stocky graceful puma", "arxiv:2504.07091", "region:us" ]
null
2025-08-21T07:04:31Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - stocky graceful puma --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
llencia/blockassist-bc-wiry_wise_hedgehog_1755759820
llencia
2025-08-21T07:04:05Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "wiry wise hedgehog", "arxiv:2504.07091", "region:us" ]
null
2025-08-21T07:04:01Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - wiry wise hedgehog --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
rasmus-aau/gemma-ft-GUIDANCE
rasmus-aau
2025-08-21T07:03:28Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "generated_from_trainer", "sft", "trl", "base_model:google/gemma-3-4b-it", "base_model:finetune:google/gemma-3-4b-it", "endpoints_compatible", "region:us" ]
null
2025-08-20T12:06:41Z
--- base_model: google/gemma-3-4b-it library_name: transformers model_name: gemma-ft-GUIDANCE tags: - generated_from_trainer - sft - trl licence: license --- # Model Card for gemma-ft-GUIDANCE This model is a fine-tuned version of [google/gemma-3-4b-it](https://huggingface.co/google/gemma-3-4b-it). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="rasmus-aau/gemma-ft-GUIDANCE", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with SFT. ### Framework versions - TRL: 0.19.1 - Transformers: 4.53.2 - Pytorch: 2.6.0+cu124 - Datasets: 3.6.0 - Tokenizers: 0.21.0 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
zhengli97/prompt_learning_dataset
zhengli97
2025-08-21T07:03:22Z
0
0
null
[ "license:mit", "region:us" ]
null
2025-08-20T14:20:05Z
--- license: mit --- ## Datasets Base-to-Novel: [ImageNet-1K](https://image-net.org/challenges/LSVRC/2012/index.php), [Caltech101](https://data.caltech.edu/records/mzrjq-6wc02), [Oxford Pets](https://www.robots.ox.ac.uk/~vgg/data/pets/), [StanfordCars](https://ai.stanford.edu/~jkrause/cars/car_dataset.html), [Flowers102](https://www.robots.ox.ac.uk/~vgg/data/flowers/102/), [Food101](https://vision.ee.ethz.ch/datasets_extra/food-101/), [FGVC Aircraft](https://www.robots.ox.ac.uk/~vgg/data/fgvc-aircraft/), [SUN397](http://vision.princeton.edu/projects/2010/SUN/), [DTD](https://www.robots.ox.ac.uk/~vgg/data/dtd/), [EuroSAT](https://github.com/phelber/EuroSAT), [UCF101](https://www.crcv.ucf.edu/data/UCF101.php). Domain Generalization: [ImageNet-V2](https://github.com/modestyachts/ImageNetV2), [ImageNet-Sketch](https://github.com/HaohanWang/ImageNet-Sketch), [ImageNet-Adversarial](https://github.com/hendrycks/natural-adv-examples), [ImageNet-Rendition](https://github.com/hendrycks/imagenet-r). Because links to some of these datasets may become outdated or invalid over time, we maintain a repository on Hugging Face that contains all the datasets to be used (except ImageNet). Each dataset also includes the corresponding split_zhou_xx.json file. ## How to Download These Datasets ### Using the huggingface-cli command-line tool Install the CLI tool if it is not already installed: `pip install -U huggingface-hub` Then download the datasets: `huggingface-cli download zhengli97/prompt_learning_dataset`
sampingkaca72/blockassist-bc-armored_stealthy_elephant_1755758248
sampingkaca72
2025-08-21T07:02:19Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "armored stealthy elephant", "arxiv:2504.07091", "region:us" ]
null
2025-08-21T07:02:16Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - armored stealthy elephant --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
llencia/blockassist-bc-wiry_wise_hedgehog_1755759685
llencia
2025-08-21T07:01:52Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "wiry wise hedgehog", "arxiv:2504.07091", "region:us" ]
null
2025-08-21T07:01:44Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - wiry wise hedgehog --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
pidbu/blockassist-bc-whistling_alert_shrew_1755759493
pidbu
2025-08-21T06:59:43Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "whistling alert shrew", "arxiv:2504.07091", "region:us" ]
null
2025-08-21T06:59:01Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - whistling alert shrew --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
zhangchenxu/Qwen2.5-3B-Instruct-t2_25k_v2_tag4_processed
zhangchenxu
2025-08-21T06:57:43Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "llama-factory", "full", "generated_from_trainer", "conversational", "base_model:Qwen/Qwen2.5-3B-Instruct", "base_model:finetune:Qwen/Qwen2.5-3B-Instruct", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-21T06:55:07Z
--- library_name: transformers license: other base_model: Qwen/Qwen2.5-3B-Instruct tags: - llama-factory - full - generated_from_trainer model-index: - name: t2_25k_v2_tag4_processed results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t2_25k_v2_tag4_processed This model is a fine-tuned version of [Qwen/Qwen2.5-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-3B-Instruct) on the t2_25k_v2_tag4_processed dataset. It achieves the following results on the evaluation set: - Loss: 0.3884 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-06 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - total_train_batch_size: 8 - total_eval_batch_size: 8 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 0.3013 | 0.2410 | 100 | 0.4413 | | 0.3485 | 0.4819 | 200 | 0.4096 | | 0.307 | 0.7229 | 300 | 0.3916 | | 0.2568 | 0.9639 | 400 | 0.3886 | ### Framework versions - Transformers 4.46.1 - Pytorch 2.6.0+cu124 - Datasets 3.1.0 - Tokenizers 0.20.3
Shifatislam/Finetuned-logits
Shifatislam
2025-08-21T06:57:37Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:sagorsarker/bangla-bert-base", "base_model:finetune:sagorsarker/bangla-bert-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-08-21T06:57:03Z
--- library_name: transformers license: mit base_model: sagorsarker/bangla-bert-base tags: - generated_from_trainer metrics: - accuracy model-index: - name: Finetuned-logits results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Finetuned-logits This model is a fine-tuned version of [sagorsarker/bangla-bert-base](https://huggingface.co/sagorsarker/bangla-bert-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.8038 - Accuracy: 0.7070 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.06 - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.7811 | 1.0 | 2221 | 0.7657 | 0.6959 | | 0.6799 | 2.0 | 4442 | 0.7569 | 0.6979 | | 0.5298 | 3.0 | 6663 | 0.8038 | 0.7070 | | 0.361 | 4.0 | 8884 | 0.8919 | 0.6927 | ### Framework versions - Transformers 4.52.4 - Pytorch 2.6.0+cu124 - Datasets 3.6.0 - Tokenizers 0.21.2
lavinzco/blockassist-bc-thick_climbing_giraffe_1755756364
lavinzco
2025-08-21T06:56:31Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "thick climbing giraffe", "arxiv:2504.07091", "region:us" ]
null
2025-08-21T06:56:15Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - thick climbing giraffe --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
thejaminator/gemma-multiepoch
thejaminator
2025-08-21T06:53:54Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-08-21T06:53:30Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
calegpedia/blockassist-bc-stealthy_slimy_rooster_1755757576
calegpedia
2025-08-21T06:53:38Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "stealthy slimy rooster", "arxiv:2504.07091", "region:us" ]
null
2025-08-21T06:53:34Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - stealthy slimy rooster --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
ihsanridzi/blockassist-bc-wiry_flexible_owl_1755757535
ihsanridzi
2025-08-21T06:53:22Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "wiry flexible owl", "arxiv:2504.07091", "region:us" ]
null
2025-08-21T06:53:19Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - wiry flexible owl --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Bahrom1996/whisper-uz-v2
Bahrom1996
2025-08-21T06:53:06Z
12
0
transformers
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "uz", "dataset:common_voice_17_0", "base_model:openai/whisper-base", "base_model:finetune:openai/whisper-base", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2025-08-20T05:14:30Z
--- library_name: transformers language: - uz license: apache-2.0 base_model: openai/whisper-base tags: - generated_from_trainer datasets: - common_voice_17_0 metrics: - wer model-index: - name: Whisper base uz - Bahromcc results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper base uz - Bahromcc This model is a fine-tuned version of [openai/whisper-base](https://huggingface.co/openai/whisper-base) on the common_voice_17_0 dataset. It achieves the following results on the evaluation set: - Loss: 0.9907 - Wer: 78.0922 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 500 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:------:|:----:|:---------------:|:-------:| | 1.0779 | 0.1320 | 500 | 0.9907 | 78.0922 | ### Framework versions - Transformers 4.49.0 - Pytorch 2.8.0+cu126 - Datasets 3.6.0 - Tokenizers 0.21.4
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1755758835
IvanJAjebu
2025-08-21T06:48:26Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "thorny slender capybara", "arxiv:2504.07091", "region:us" ]
null
2025-08-21T06:48:17Z
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thorny slender capybara
---

# Gensyn BlockAssist

Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Team-Atom/act_door_01_64_20000
Team-Atom
2025-08-21T06:48:25Z
0
0
lerobot
[ "lerobot", "safetensors", "robotics", "act", "dataset:Team-Atom/door_01", "arxiv:2304.13705", "license:apache-2.0", "region:us" ]
robotics
2025-08-21T06:48:12Z
---
datasets: Team-Atom/door_01
library_name: lerobot
license: apache-2.0
model_name: act
pipeline_tag: robotics
tags:
- robotics
- lerobot
- act
---

# Model Card for act

<!-- Provide a quick summary of what the model is/does. -->

[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high success rates.

This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).

---

## How to Get Started with the Model

For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is the short version on how to train and run inference/eval:

### Train from scratch

```bash
python -m lerobot.scripts.train \
  --dataset.repo_id=${HF_USER}/<dataset> \
  --policy.type=act \
  --output_dir=outputs/train/<desired_policy_repo_id> \
  --job_name=lerobot_training \
  --policy.device=cuda \
  --policy.repo_id=${HF_USER}/<desired_policy_repo_id> --wandb.enable=true
```

_Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._

### Evaluate the policy/run inference

```bash
python -m lerobot.record \
  --robot.type=so100_follower \
  --dataset.repo_id=<hf_user>/eval_<dataset> \
  --policy.path=<hf_user>/<desired_policy_repo_id> \
  --episodes=10
```

Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.

---

## Model Details

- **License:** apache-2.0
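The ACT card above mentions chunked action prediction but not how overlapping chunks are combined at inference. As a rough illustration (a hand-rolled sketch, not LeRobot's actual implementation), the ACT paper's temporal ensembling averages all predictions made for the current timestep by previously queried chunks, weighting them `exp(-m * i)` with `i = 0` for the oldest prediction:

```python
import math

def temporal_ensemble(predictions, m=0.01):
    """Blend overlapping chunk predictions for one timestep.

    `predictions` holds scalar actions proposed for the same timestep,
    ordered oldest-first (each came from a chunk queried at a different
    past step). Weights follow exp(-m * i), i = 0 for the oldest, as
    described in the ACT paper; m controls how quickly newer predictions
    take over.
    """
    weights = [math.exp(-m * i) for i in range(len(predictions))]
    total = sum(weights)
    return sum(w * p for w, p in zip(weights, predictions)) / total

# Three chunks queried at t-2, t-1 and t each proposed an action for step t:
print(temporal_ensemble([0.0, 0.5, 1.0], m=0.01))
```

With the paper's small default `m`, the blend is close to a plain average, which smooths the executed trajectory across chunk boundaries.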
llencia/blockassist-bc-wiry_wise_hedgehog_1755758826
llencia
2025-08-21T06:47:33Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "wiry wise hedgehog", "arxiv:2504.07091", "region:us" ]
null
2025-08-21T06:47:29Z
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- wiry wise hedgehog
---

# Gensyn BlockAssist

Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
pidbu/blockassist-bc-whistling_alert_shrew_1755758683
pidbu
2025-08-21T06:45:56Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "whistling alert shrew", "arxiv:2504.07091", "region:us" ]
null
2025-08-21T06:45:22Z
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- whistling alert shrew
---

# Gensyn BlockAssist

Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
computerandgyein/gemma_270m-text-normalisation-for-number-stage1
computerandgyein
2025-08-21T06:44:12Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "unsloth", "sft", "trl", "base_model:unsloth/gemma-3-270m-unsloth-bnb-4bit", "base_model:finetune:unsloth/gemma-3-270m-unsloth-bnb-4bit", "endpoints_compatible", "region:us" ]
null
2025-08-21T05:45:56Z
---
base_model: unsloth/gemma-3-270m-unsloth-bnb-4bit
library_name: transformers
model_name: gemma_270m-text-normalisation-for-number-stage1
tags:
- generated_from_trainer
- unsloth
- sft
- trl
licence: license
---

# Model Card for gemma_270m-text-normalisation-for-number-stage1

This model is a fine-tuned version of [unsloth/gemma-3-270m-unsloth-bnb-4bit](https://huggingface.co/unsloth/gemma-3-270m-unsloth-bnb-4bit).
It has been trained using [TRL](https://github.com/huggingface/trl).

## Quick start

```python
from transformers import pipeline

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="computerandgyein/gemma_270m-text-normalisation-for-number-stage1", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```

## Training procedure

[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/computerandgyein-ufo/text-normalisation/runs/0p37xxsi)

This model was trained with SFT.

### Framework versions

- TRL: 0.21.0
- Transformers: 4.55.2
- Pytorch: 2.8.0+cu126
- Datasets: 3.6.0
- Tokenizers: 0.21.4

## Citations

Cite TRL as:

```bibtex
@misc{vonwerra2022trl,
    title        = {{TRL: Transformer Reinforcement Learning}},
    author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
    year         = 2020,
    journal      = {GitHub repository},
    publisher    = {GitHub},
    howpublished = {\url{https://github.com/huggingface/trl}}
}
```
TuanNM171284/TuanNM171284-HaLong-embedding-medical-v3
TuanNM171284
2025-08-21T06:43:41Z
0
0
sentence-transformers
[ "sentence-transformers", "safetensors", "xlm-roberta", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:5808", "loss:MatryoshkaLoss", "loss:MultipleNegativesRankingLoss", "arxiv:1908.10084", "arxiv:2205.13147", "arxiv:1705.00652", "base_model:hiieu/halong_embedding", "base_model:finetune:hiieu/halong_embedding", "model-index", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2025-08-21T06:43:15Z
--- tags: - sentence-transformers - sentence-similarity - feature-extraction - generated_from_trainer - dataset_size:5808 - loss:MatryoshkaLoss - loss:MultipleNegativesRankingLoss base_model: hiieu/halong_embedding widget: - source_sentence: Bệnh Brucellosis có thể gây ra những biến chứng nghiêm trọng nào? sentences: - Để chẩn đoán bệnh cơ bẩm sinh, bác sĩ có thể đề nghị xét nghiệm máu (tìm enzyme creatine kinase), đo điện cơ (EMG), xét nghiệm gen hoặc sinh thiết cơ. Ngoài ra, có thể thực hiện chẩn đoán tiền sản như sinh thiết gai nhau, chọc dịch màng ối, thử nghiệm di truyền trước sinh. - 'Brucellosis có thể gây ra các biến chứng nghiêm trọng ảnh hưởng đến hầu hết mọi bộ phận của cơ thể, bao gồm: viêm nội tâm mạc (gây tổn thương van tim, là nguyên nhân hàng đầu gây tử vong), viêm khớp (đau, cứng và sưng khớp), viêm mào tinh hoàn, viêm và nhiễm trùng lách và gan, và nhiễm trùng hệ thần kinh trung ương (viêm màng não, viêm não).' - Để chẩn đoán gãy xương sườn, bác sĩ có thể yêu cầu chụp X-quang ngực, giúp phát hiện 75% trường hợp và các tình trạng như xẹp phổi, tràn khí/dịch màng phổi. CT-scan cũng có thể được sử dụng để phát hiện những trường hợp X-quang bỏ sót và tổn thương mô mềm, các cơ quan kèm theo như phổi, gan, thận, lách. - source_sentence: Những yếu tố nào có thể dẫn đến dậy thì sớm ngoại biên ở cả bé gái và bé trai? sentences: - Các triệu chứng như mất khả năng nói, nghe hiểu, liệt nửa người, hoặc không thể tự chăm sóc bản thân có thể là di chứng của tai biến mạch máu não, một biến chứng thường gặp của bệnh lý động mạch cảnh. - Ở cả bé gái và bé trai, dậy thì sớm ngoại biên có thể do khối u ở tuyến thượng thận, tuyến yên, buồng trứng (ở bé gái) hoặc tinh hoàn (ở bé trai); hội chứng McCune – Albright; tiếp xúc với các nguồn estrogen và testosterone bên ngoài như thoa kem hoặc thuốc mỡ; hoặc một rối loạn di truyền hiếm gặp không liên quan đến hormone GnRH dẫn đến việc sản xuất testosterone sớm ở bé trai. 
- Đây có thể là dấu hiệu của gai cột sống, với vị trí đau hiện diện ở vùng cột sống hình thành gai. - source_sentence: Những nguyên nhân chính nào dẫn đến tình trạng béo phì? sentences: - Cơn thiếu máu não thoáng qua tương tự tai biến mạch máu não nhưng chỉ kéo dài ngắn và thường không gây tổn thương não. Những người có cơn này có nguy cơ rất cao mắc phải một đợt tai biến mạch máu não thật sự trong tương lai. - Nguyên nhân chính gây béo phì là do có sự mất cân bằng giữa năng lượng nạp vào và tiêu thụ. Các yếu tố cụ thể bao gồm chế độ ăn uống không hợp lý (như ăn nhiều calo, khẩu phần lớn, ăn ngoài, uống đồ ngọt/cồn), thiếu hoạt động thể chất, mắc các bệnh nền (như suy giáp, hội chứng Cushing) hoặc tác dụng phụ của thuốc (như corticoid, thuốc trị động kinh, tiểu đường), và những thay đổi về lối sống (như thiếu ngủ, ngưng hút thuốc lá). - Để phòng tránh cúm mùa, cần tăng cường vệ sinh cá nhân như rửa tay bằng xà phòng, che miệng và mũi khi ho, hắt hơi; vệ sinh và mở cửa thoáng mát nơi ở, lau chùi vật dụng; tự theo dõi sức khỏe và thông báo khi có biểu hiện bệnh; tránh tiếp xúc với người bệnh hoặc người nghi ngờ mắc bệnh. - source_sentence: Khi một người có các triệu chứng như nhãn cầu bị chìm và đỏ bừng mặt, bác sĩ có thể chẩn đoán bệnh gì? sentences: - Khi một người có các triệu chứng như nhãn cầu bị chìm và đỏ bừng mặt, cùng với co đồng tử, sụp mí mắt và giảm tiết mồ hôi ở một bên mặt, bác sĩ có thể chẩn đoán Hội chứng Horner. - Thông thường, bệnh cầu thận màng là hậu quả của các phản ứng miễn dịch tự miễn, khi hệ thống miễn dịch tấn công nhầm mô khỏe mạnh. Ngoài ra, bệnh cũng có thể do các nguyên nhân thứ phát như bệnh tự miễn (ví dụ lupus ban đỏ hệ thống), nhiễm siêu vi (viêm gan B, C, giang mai), một số loại thuốc (thuốc kháng viêm không steroid) hoặc các bệnh lý ung thư (ung thư máu). - Các dấu hiệu như co đồng tử, sụp mí mắt trên và giảm tiết mồ hôi ở một bên mặt có thể là triệu chứng của Hội chứng Horner. 
- source_sentence: Nếu một con bò có hành vi bất thường, khó khăn khi di chuyển và bị giảm thể trọng, đó có thể là dấu hiệu của bệnh gì? sentences: - Trong giai đoạn viêm tấy cấp tính khi bị giãn dây chằng, người bệnh tuyệt đối không được chườm nóng, xoa dầu nóng, rượu thuốc vì có thể làm tổn thương nặng hơn. Đồng thời, không vận động vùng tổn thương và không được tiêm kháng viêm trực tiếp vào vùng tổn thương. - Nếu một con bò có các biểu hiện như hành vi bất thường, khó khăn trong di chuyển và giảm thể trọng, đó có thể là dấu hiệu của bệnh bò điên (Bovine Spongiform Encephalopathy - BSE). - Bệnh liệt dương còn có tên gọi khác là chứng “bất lực” xảy ra ở nam giới. Bệnh có đặc điểm là không có khả năng duy trì sự cương cứng đủ để giao hợp hoặc không thể đạt được xuất tinh, hoặc cả hai. pipeline_tag: sentence-similarity library_name: sentence-transformers metrics: - cosine_accuracy@1 - cosine_accuracy@3 - cosine_accuracy@5 - cosine_accuracy@10 - cosine_precision@1 - cosine_precision@3 - cosine_precision@5 - cosine_precision@10 - cosine_recall@1 - cosine_recall@3 - cosine_recall@5 - cosine_recall@10 - cosine_ndcg@10 - cosine_mrr@10 - cosine_map@100 model-index: - name: SentenceTransformer based on hiieu/halong_embedding results: - task: type: information-retrieval name: Information Retrieval dataset: name: dim 768 type: dim_768 metrics: - type: cosine_accuracy@1 value: 0.7276170798898072 name: Cosine Accuracy@1 - type: cosine_accuracy@3 value: 0.956267217630854 name: Cosine Accuracy@3 - type: cosine_accuracy@5 value: 0.9784779614325069 name: Cosine Accuracy@5 - type: cosine_accuracy@10 value: 0.993629476584022 name: Cosine Accuracy@10 - type: cosine_precision@1 value: 0.7276170798898072 name: Cosine Precision@1 - type: cosine_precision@3 value: 0.3187557392102846 name: Cosine Precision@3 - type: cosine_precision@5 value: 0.19569559228650144 name: Cosine Precision@5 - type: cosine_precision@10 value: 0.0993629476584022 name: Cosine Precision@10 - type: cosine_recall@1 
value: 0.7276170798898072 name: Cosine Recall@1 - type: cosine_recall@3 value: 0.956267217630854 name: Cosine Recall@3 - type: cosine_recall@5 value: 0.9784779614325069 name: Cosine Recall@5 - type: cosine_recall@10 value: 0.993629476584022 name: Cosine Recall@10 - type: cosine_ndcg@10 value: 0.8833780982120104 name: Cosine Ndcg@10 - type: cosine_mrr@10 value: 0.8457527794175517 name: Cosine Mrr@10 - type: cosine_map@100 value: 0.8461373702297227 name: Cosine Map@100 - task: type: information-retrieval name: Information Retrieval dataset: name: dim 512 type: dim_512 metrics: - type: cosine_accuracy@1 value: 0.7283057851239669 name: Cosine Accuracy@1 - type: cosine_accuracy@3 value: 0.9578168044077136 name: Cosine Accuracy@3 - type: cosine_accuracy@5 value: 0.9796831955922864 name: Cosine Accuracy@5 - type: cosine_accuracy@10 value: 0.9925964187327824 name: Cosine Accuracy@10 - type: cosine_precision@1 value: 0.7283057851239669 name: Cosine Precision@1 - type: cosine_precision@3 value: 0.3192722681359044 name: Cosine Precision@3 - type: cosine_precision@5 value: 0.1959366391184573 name: Cosine Precision@5 - type: cosine_precision@10 value: 0.09925964187327825 name: Cosine Precision@10 - type: cosine_recall@1 value: 0.7283057851239669 name: Cosine Recall@1 - type: cosine_recall@3 value: 0.9578168044077136 name: Cosine Recall@3 - type: cosine_recall@5 value: 0.9796831955922864 name: Cosine Recall@5 - type: cosine_recall@10 value: 0.9925964187327824 name: Cosine Recall@10 - type: cosine_ndcg@10 value: 0.8839467645087845 name: Cosine Ndcg@10 - type: cosine_mrr@10 value: 0.846739991910446 name: Cosine Mrr@10 - type: cosine_map@100 value: 0.8472158521574085 name: Cosine Map@100 - task: type: information-retrieval name: Information Retrieval dataset: name: dim 256 type: dim_256 metrics: - type: cosine_accuracy@1 value: 0.7336432506887053 name: Cosine Accuracy@1 - type: cosine_accuracy@3 value: 0.9643595041322314 name: Cosine Accuracy@3 - type: cosine_accuracy@5 value: 
0.9831267217630854 name: Cosine Accuracy@5 - type: cosine_accuracy@10 value: 0.993801652892562 name: Cosine Accuracy@10 - type: cosine_precision@1 value: 0.7336432506887053 name: Cosine Precision@1 - type: cosine_precision@3 value: 0.3214531680440771 name: Cosine Precision@3 - type: cosine_precision@5 value: 0.1966253443526171 name: Cosine Precision@5 - type: cosine_precision@10 value: 0.09938016528925621 name: Cosine Precision@10 - type: cosine_recall@1 value: 0.7336432506887053 name: Cosine Recall@1 - type: cosine_recall@3 value: 0.9643595041322314 name: Cosine Recall@3 - type: cosine_recall@5 value: 0.9831267217630854 name: Cosine Recall@5 - type: cosine_recall@10 value: 0.993801652892562 name: Cosine Recall@10 - type: cosine_ndcg@10 value: 0.8881257373807646 name: Cosine Ndcg@10 - type: cosine_mrr@10 value: 0.8517980125934663 name: Cosine Mrr@10 - type: cosine_map@100 value: 0.8521818606854007 name: Cosine Map@100 - task: type: information-retrieval name: Information Retrieval dataset: name: dim 128 type: dim_128 metrics: - type: cosine_accuracy@1 value: 0.7329545454545454 name: Cosine Accuracy@1 - type: cosine_accuracy@3 value: 0.9616046831955923 name: Cosine Accuracy@3 - type: cosine_accuracy@5 value: 0.9817493112947658 name: Cosine Accuracy@5 - type: cosine_accuracy@10 value: 0.9941460055096418 name: Cosine Accuracy@10 - type: cosine_precision@1 value: 0.7329545454545454 name: Cosine Precision@1 - type: cosine_precision@3 value: 0.3205348943985307 name: Cosine Precision@3 - type: cosine_precision@5 value: 0.19634986225895318 name: Cosine Precision@5 - type: cosine_precision@10 value: 0.0994146005509642 name: Cosine Precision@10 - type: cosine_recall@1 value: 0.7329545454545454 name: Cosine Recall@1 - type: cosine_recall@3 value: 0.9616046831955923 name: Cosine Recall@3 - type: cosine_recall@5 value: 0.9817493112947658 name: Cosine Recall@5 - type: cosine_recall@10 value: 0.9941460055096418 name: Cosine Recall@10 - type: cosine_ndcg@10 value: 
0.8872894905588286 name: Cosine Ndcg@10 - type: cosine_mrr@10 value: 0.8506679347588435 name: Cosine Mrr@10 - type: cosine_map@100 value: 0.8510233840356517 name: Cosine Map@100 - task: type: information-retrieval name: Information Retrieval dataset: name: dim 64 type: dim_64 metrics: - type: cosine_accuracy@1 value: 0.7272727272727273 name: Cosine Accuracy@1 - type: cosine_accuracy@3 value: 0.9585055096418733 name: Cosine Accuracy@3 - type: cosine_accuracy@5 value: 0.9801997245179064 name: Cosine Accuracy@5 - type: cosine_accuracy@10 value: 0.9924242424242424 name: Cosine Accuracy@10 - type: cosine_precision@1 value: 0.7272727272727273 name: Cosine Precision@1 - type: cosine_precision@3 value: 0.3195018365472911 name: Cosine Precision@3 - type: cosine_precision@5 value: 0.1960399449035813 name: Cosine Precision@5 - type: cosine_precision@10 value: 0.09924242424242426 name: Cosine Precision@10 - type: cosine_recall@1 value: 0.7272727272727273 name: Cosine Recall@1 - type: cosine_recall@3 value: 0.9585055096418733 name: Cosine Recall@3 - type: cosine_recall@5 value: 0.9801997245179064 name: Cosine Recall@5 - type: cosine_recall@10 value: 0.9924242424242424 name: Cosine Recall@10 - type: cosine_ndcg@10 value: 0.8834760003003882 name: Cosine Ndcg@10 - type: cosine_mrr@10 value: 0.8461536359263628 name: Cosine Mrr@10 - type: cosine_map@100 value: 0.8465610862269766 name: Cosine Map@100 --- # SentenceTransformer based on hiieu/halong_embedding This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [hiieu/halong_embedding](https://huggingface.co/hiieu/halong_embedding). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. 
## Model Details ### Model Description - **Model Type:** Sentence Transformer - **Base model:** [hiieu/halong_embedding](https://huggingface.co/hiieu/halong_embedding) <!-- at revision b57776031035f70ed2030d2e35ecc533eb0f8f71 --> - **Maximum Sequence Length:** 512 tokens - **Output Dimensionality:** 768 dimensions - **Similarity Function:** Cosine Similarity <!-- - **Training Dataset:** Unknown --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: XLMRobertaModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) (2): Normalize() ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. 
```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("TuanNM171284/TuanNM171284-HaLong-embedding-medical-v3") # Run inference sentences = [ 'Nếu một con bò có hành vi bất thường, khó khăn khi di chuyển và bị giảm thể trọng, đó có thể là dấu hiệu của bệnh gì?', 'Nếu một con bò có các biểu hiện như hành vi bất thường, khó khăn trong di chuyển và giảm thể trọng, đó có thể là dấu hiệu của bệnh bò điên (Bovine Spongiform Encephalopathy - BSE).', 'Trong giai đoạn viêm tấy cấp tính khi bị giãn dây chằng, người bệnh tuyệt đối không được chườm nóng, xoa dầu nóng, rượu thuốc vì có thể làm tổn thương nặng hơn. Đồng thời, không vận động vùng tổn thương và không được tiêm kháng viêm trực tiếp vào vùng tổn thương.', ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 768] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings, embeddings) print(similarities.shape) # [3, 3] ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. 
<details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> ## Evaluation ### Metrics #### Information Retrieval * Dataset: `dim_768` * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) with these parameters: ```json { "truncate_dim": 768 } ``` | Metric | Value | |:--------------------|:-----------| | cosine_accuracy@1 | 0.7276 | | cosine_accuracy@3 | 0.9563 | | cosine_accuracy@5 | 0.9785 | | cosine_accuracy@10 | 0.9936 | | cosine_precision@1 | 0.7276 | | cosine_precision@3 | 0.3188 | | cosine_precision@5 | 0.1957 | | cosine_precision@10 | 0.0994 | | cosine_recall@1 | 0.7276 | | cosine_recall@3 | 0.9563 | | cosine_recall@5 | 0.9785 | | cosine_recall@10 | 0.9936 | | **cosine_ndcg@10** | **0.8834** | | cosine_mrr@10 | 0.8458 | | cosine_map@100 | 0.8461 | #### Information Retrieval * Dataset: `dim_512` * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) with these parameters: ```json { "truncate_dim": 512 } ``` | Metric | Value | |:--------------------|:-----------| | cosine_accuracy@1 | 0.7283 | | cosine_accuracy@3 | 0.9578 | | cosine_accuracy@5 | 0.9797 | | cosine_accuracy@10 | 0.9926 | | cosine_precision@1 | 0.7283 | | cosine_precision@3 | 0.3193 | | cosine_precision@5 | 0.1959 | | cosine_precision@10 | 0.0993 | | cosine_recall@1 | 0.7283 | | cosine_recall@3 | 0.9578 | | cosine_recall@5 | 0.9797 | | cosine_recall@10 | 0.9926 | | **cosine_ndcg@10** | **0.8839** | | cosine_mrr@10 | 0.8467 | | cosine_map@100 | 0.8472 | #### Information Retrieval * Dataset: `dim_256` * Evaluated with 
[<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) with these parameters: ```json { "truncate_dim": 256 } ``` | Metric | Value | |:--------------------|:-----------| | cosine_accuracy@1 | 0.7336 | | cosine_accuracy@3 | 0.9644 | | cosine_accuracy@5 | 0.9831 | | cosine_accuracy@10 | 0.9938 | | cosine_precision@1 | 0.7336 | | cosine_precision@3 | 0.3215 | | cosine_precision@5 | 0.1966 | | cosine_precision@10 | 0.0994 | | cosine_recall@1 | 0.7336 | | cosine_recall@3 | 0.9644 | | cosine_recall@5 | 0.9831 | | cosine_recall@10 | 0.9938 | | **cosine_ndcg@10** | **0.8881** | | cosine_mrr@10 | 0.8518 | | cosine_map@100 | 0.8522 | #### Information Retrieval * Dataset: `dim_128` * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) with these parameters: ```json { "truncate_dim": 128 } ``` | Metric | Value | |:--------------------|:-----------| | cosine_accuracy@1 | 0.733 | | cosine_accuracy@3 | 0.9616 | | cosine_accuracy@5 | 0.9817 | | cosine_accuracy@10 | 0.9941 | | cosine_precision@1 | 0.733 | | cosine_precision@3 | 0.3205 | | cosine_precision@5 | 0.1963 | | cosine_precision@10 | 0.0994 | | cosine_recall@1 | 0.733 | | cosine_recall@3 | 0.9616 | | cosine_recall@5 | 0.9817 | | cosine_recall@10 | 0.9941 | | **cosine_ndcg@10** | **0.8873** | | cosine_mrr@10 | 0.8507 | | cosine_map@100 | 0.851 | #### Information Retrieval * Dataset: `dim_64` * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) with these parameters: ```json { "truncate_dim": 64 } ``` | Metric | Value | |:--------------------|:-----------| | cosine_accuracy@1 | 0.7273 | | 
cosine_accuracy@3 | 0.9585 | | cosine_accuracy@5 | 0.9802 | | cosine_accuracy@10 | 0.9924 | | cosine_precision@1 | 0.7273 | | cosine_precision@3 | 0.3195 | | cosine_precision@5 | 0.196 | | cosine_precision@10 | 0.0992 | | cosine_recall@1 | 0.7273 | | cosine_recall@3 | 0.9585 | | cosine_recall@5 | 0.9802 | | cosine_recall@10 | 0.9924 | | **cosine_ndcg@10** | **0.8835** | | cosine_mrr@10 | 0.8462 | | cosine_map@100 | 0.8466 | <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### Unnamed Dataset * Size: 5,808 training samples * Columns: <code>anchor</code> and <code>positive</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | |:--------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | | details | <ul><li>min: 8 tokens</li><li>mean: 23.52 tokens</li><li>max: 83 tokens</li></ul> | <ul><li>min: 10 tokens</li><li>mean: 71.2 tokens</li><li>max: 170 tokens</li></ul> | * Samples: | anchor | positive | |:----------------------------------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | <code>Bệnh Addison là gì?</code> | <code>Bệnh Addison, còn được gọi là suy thượng thận nguyên phát, là tình trạng tuyến thượng thận không sản xuất đủ các hormone cortisol và aldosterone.</code> | | <code>Nguyên nhân chính 
gây ra bệnh Addison là gì?</code> | <code>Hầu hết các trường hợp bệnh Addison là do bệnh tự miễn, khi hệ thống miễn dịch tấn công nhầm tuyến thượng thận. Các nguyên nhân khác bao gồm nhiễm trùng kéo dài như bệnh lao, HIV, nhiễm nấm, và các tế bào ung thư lây lan đến tuyến thượng thận.</code> | | <code>Tôi thường xuyên mệt mỏi, yếu cơ, sụt cân và da bị sạm màu. Đây có thể là triệu chứng của bệnh gì?</code> | <code>Các triệu chứng như mệt mỏi mãn tính, yếu cơ, giảm cân, và da sẫm màu (nám, sạm đen, tàn nhang) có thể là dấu hiệu của bệnh Addison.</code> | * Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters: ```json { "loss": "MultipleNegativesRankingLoss", "matryoshka_dims": [ 768, 512, 256, 128, 64 ], "matryoshka_weights": [ 1, 1, 1, 1, 1 ], "n_dims_per_step": -1 } ``` ### Training Hyperparameters #### Non-Default Hyperparameters - `eval_strategy`: steps - `per_device_eval_batch_size`: 4 - `gradient_accumulation_steps`: 4 - `learning_rate`: 2e-05 - `lr_scheduler_type`: cosine - `warmup_ratio`: 0.1 - `bf16`: True - `load_best_model_at_end`: True - `optim`: adamw_torch_fused - `batch_sampler`: no_duplicates #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: steps - `prediction_loss_only`: True - `per_device_train_batch_size`: 8 - `per_device_eval_batch_size`: 4 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 4 - `eval_accumulation_steps`: None - `torch_empty_cache_steps`: None - `learning_rate`: 2e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1.0 - `num_train_epochs`: 3 - `max_steps`: -1 - `lr_scheduler_type`: cosine - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.1 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - 
`logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: True - `fp16`: False - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: True - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch_fused - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: None - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `include_for_metrics`: [] - `eval_do_concat_batches`: True - `fp16_backend`: auto - 
`push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `eval_on_start`: False - `use_liger_kernel`: False - `eval_use_gather_object`: False - `average_tokens_across_devices`: False - `prompts`: None - `batch_sampler`: no_duplicates - `multi_dataset_batch_sampler`: proportional </details> ### Training Logs | Epoch | Step | Training Loss | dim_768_cosine_ndcg@10 | dim_512_cosine_ndcg@10 | dim_256_cosine_ndcg@10 | dim_128_cosine_ndcg@10 | dim_64_cosine_ndcg@10 | |:------:|:----:|:-------------:|:----------------------:|:----------------------:|:----------------------:|:----------------------:|:---------------------:| | 0.0551 | 10 | 0.9863 | 0.8053 | 0.8005 | 0.7821 | 0.7483 | 0.6675 | | 0.1102 | 20 | 1.046 | 0.8175 | 0.8137 | 0.8027 | 0.7784 | 0.7122 | | 0.1653 | 30 | 1.045 | 0.8269 | 0.8241 | 0.8172 | 0.8010 | 0.7524 | | 0.2204 | 40 | 1.1269 | 0.8329 | 0.8309 | 0.8275 | 0.8147 | 0.7808 | | 0.2755 | 50 | 0.5781 | 0.8390 | 0.8370 | 0.8357 | 0.8271 | 0.7991 | | 0.3306 | 60 | 0.319 | 0.8391 | 0.8377 | 0.8379 | 0.8321 | 0.8101 | | 0.3857 | 70 | 0.1891 | 0.8460 | 0.8454 | 0.8447 | 0.8414 | 0.8279 | | 0.4408 | 80 | 0.2646 | 0.8514 | 0.8518 | 0.8506 | 0.8486 | 0.8335 | | 0.4959 | 90 | 0.2737 | 0.8542 | 0.8544 | 0.8541 | 0.8514 | 0.8372 | | 0.5510 | 100 | 0.1422 | 0.8565 | 0.8574 | 0.8564 | 0.8525 | 0.8366 | | 0.6061 | 110 | 0.3984 | 0.8556 | 0.8555 | 0.8551 | 0.8520 | 0.8382 | | 0.6612 | 120 | 0.443 | 0.8589 | 0.8593 | 0.8606 | 0.8565 | 0.8440 | | 0.7163 | 130 | 0.1714 | 0.8584 | 0.8598 | 0.8610 | 0.8578 | 0.8477 | | 0.7713 | 140 | 0.0658 | 0.8577 | 0.8583 | 0.8597 | 
0.8573 | 0.8496 | | 0.8264 | 150 | 0.2713 | 0.8597 | 0.8604 | 0.8618 | 0.8606 | 0.8519 | | 0.8815 | 160 | 0.6141 | 0.8627 | 0.8638 | 0.8653 | 0.8623 | 0.8537 | | 0.9366 | 170 | 0.741 | 0.8711 | 0.8709 | 0.8695 | 0.8651 | 0.8509 | | 0.9917 | 180 | 0.1569 | 0.8694 | 0.8692 | 0.8689 | 0.8669 | 0.8545 | | 1.0441 | 190 | 0.042 | 0.8638 | 0.8638 | 0.8657 | 0.8651 | 0.8567 | | 1.0992 | 200 | 0.0586 | 0.8581 | 0.8587 | 0.8629 | 0.8610 | 0.8533 | | 1.1543 | 210 | 0.2068 | 0.8616 | 0.8634 | 0.8666 | 0.8665 | 0.8588 | | 1.2094 | 220 | 0.1943 | 0.8733 | 0.8741 | 0.8773 | 0.8747 | 0.8664 | | 1.2645 | 230 | 0.0132 | 0.8664 | 0.8675 | 0.8704 | 0.8705 | 0.8629 | | 1.3196 | 240 | 0.1768 | 0.8681 | 0.8696 | 0.8725 | 0.8727 | 0.8654 | | 1.3747 | 250 | 0.1628 | 0.8772 | 0.8771 | 0.8797 | 0.8790 | 0.8716 | | 1.4298 | 260 | 0.0547 | 0.8749 | 0.8756 | 0.8784 | 0.8783 | 0.8721 | | 1.4848 | 270 | 0.0532 | 0.8794 | 0.8800 | 0.8826 | 0.8808 | 0.8743 | | 1.5399 | 280 | 0.0991 | 0.8779 | 0.8785 | 0.8817 | 0.8808 | 0.8747 | | 1.5950 | 290 | 0.0678 | 0.8763 | 0.8767 | 0.8801 | 0.8794 | 0.8735 | | 1.6501 | 300 | 0.0354 | 0.8760 | 0.8768 | 0.8800 | 0.8793 | 0.8732 | | 1.7052 | 310 | 0.0463 | 0.8787 | 0.8796 | 0.8825 | 0.8823 | 0.8767 | | 1.7603 | 320 | 0.0391 | 0.8791 | 0.8799 | 0.8834 | 0.8826 | 0.8780 | | 1.8154 | 330 | 0.0855 | 0.8777 | 0.8781 | 0.8813 | 0.8810 | 0.8763 | | 1.8705 | 340 | 0.0339 | 0.8782 | 0.8785 | 0.8820 | 0.8816 | 0.8774 | | 1.9256 | 350 | 0.0305 | 0.8760 | 0.8771 | 0.8814 | 0.8813 | 0.8772 | | 1.9807 | 360 | 0.082 | 0.8784 | 0.8791 | 0.8831 | 0.8826 | 0.8791 | | 2.0331 | 370 | 0.0791 | 0.8781 | 0.8793 | 0.8832 | 0.8832 | 0.8801 | | 2.0882 | 380 | 0.0336 | 0.8799 | 0.8805 | 0.8845 | 0.8845 | 0.8818 | | 2.1433 | 390 | 0.0374 | 0.8788 | 0.8799 | 0.8843 | 0.8842 | 0.8813 | | 2.1983 | 400 | 0.0232 | 0.8779 | 0.8791 | 0.8833 | 0.8824 | 0.8808 | | 2.2534 | 410 | 0.0276 | 0.8776 | 0.8791 | 0.8822 | 0.8822 | 0.8804 | | 2.3085 | 420 | 0.0442 | 0.8777 | 0.8791 | 0.8830 | 0.8818 | 
0.8798 | | 2.3636 | 430 | 0.0323 | 0.8786 | 0.8798 | 0.8838 | 0.8827 | 0.8803 | | 2.4187 | 440 | 0.0284 | 0.8823 | 0.8828 | 0.8862 | 0.8856 | 0.8826 | | 2.4738 | 450 | 0.0275 | 0.8840 | 0.8848 | 0.8885 | 0.8874 | 0.8839 | | 2.5289 | 460 | 0.0267 | 0.8842 | 0.8847 | 0.8892 | 0.8878 | 0.8837 | | 2.5840 | 470 | 0.023 | 0.8838 | 0.8846 | 0.8889 | 0.8879 | 0.8839 | | 2.6391 | 480 | 0.2072 | 0.8838 | 0.8843 | 0.8885 | 0.8879 | 0.8838 | | 2.6942 | 490 | 0.0506 | 0.8836 | 0.8841 | 0.8882 | 0.8877 | 0.8836 | | 2.7493 | 500 | 0.0368 | 0.8834 | 0.8838 | 0.8883 | 0.8872 | 0.8834 | | 2.8044 | 510 | 0.0205 | 0.8832 | 0.8838 | 0.8885 | 0.8876 | 0.8836 | | 2.8595 | 520 | 0.022 | 0.8831 | 0.8836 | 0.8880 | 0.8875 | 0.8833 | | 2.9146 | 530 | 0.0167 | 0.8836 | 0.8840 | 0.8882 | 0.8876 | 0.8839 | | 2.9697 | 540 | 0.0654 | 0.8834 | 0.8839 | 0.8881 | 0.8873 | 0.8835 | ### Framework Versions - Python: 3.11.13 - Sentence Transformers: 4.1.0 - Transformers: 4.52.4 - PyTorch: 2.6.0+cu124 - Accelerate: 1.8.1 - Datasets: 3.6.0 - Tokenizers: 0.21.2 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` #### MatryoshkaLoss ```bibtex @misc{kusupati2024matryoshka, title={Matryoshka Representation Learning}, author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi}, year={2024}, eprint={2205.13147}, archivePrefix={arXiv}, primaryClass={cs.LG} } ``` #### MultipleNegativesRankingLoss ```bibtex @misc{henderson2017efficient, title={Efficient Natural 
Language Response Suggestion for Smart Reply}, author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil}, year={2017}, eprint={1705.00652}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
suminseo/llama3.1_0821_1
suminseo
2025-08-21T06:42:29Z
0
0
transformers
[ "transformers", "gguf", "llama", "text-generation-inference", "unsloth", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-08-21T06:40:27Z
--- base_model: unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama - gguf license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** suminseo - **License:** apache-2.0 - **Finetuned from model:** unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
llencia/blockassist-bc-wiry_wise_hedgehog_1755758500
llencia
2025-08-21T06:42:04Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "wiry wise hedgehog", "arxiv:2504.07091", "region:us" ]
null
2025-08-21T06:42:00Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - wiry wise hedgehog --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Medved444/blockassist-bc-bellowing_finicky_manatee_1755757134
Medved444
2025-08-21T06:39:00Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "bellowing finicky manatee", "arxiv:2504.07091", "region:us" ]
null
2025-08-21T06:38:23Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - bellowing finicky manatee --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
unitova/blockassist-bc-zealous_sneaky_raven_1755756599
unitova
2025-08-21T06:38:04Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "zealous sneaky raven", "arxiv:2504.07091", "region:us" ]
null
2025-08-21T06:38:00Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - zealous sneaky raven --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
quantumxnode/blockassist-bc-dormant_peckish_seahorse_1755756754
quantumxnode
2025-08-21T06:37:53Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "dormant peckish seahorse", "arxiv:2504.07091", "region:us" ]
null
2025-08-21T06:37:49Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - dormant peckish seahorse --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1755758166
IvanJAjebu
2025-08-21T06:37:20Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "thorny slender capybara", "arxiv:2504.07091", "region:us" ]
null
2025-08-21T06:37:10Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - thorny slender capybara --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
DopeorNope/lean_base
DopeorNope
2025-08-21T06:35:17Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-21T06:29:03Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
eshanroy5678/blockassist-bc-untamed_dextrous_dingo_1755757614
eshanroy5678
2025-08-21T06:33:26Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "untamed dextrous dingo", "arxiv:2504.07091", "region:us" ]
null
2025-08-21T06:30:43Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - untamed dextrous dingo --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
indoempatnol/blockassist-bc-fishy_wary_swan_1755756384
indoempatnol
2025-08-21T06:32:42Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "fishy wary swan", "arxiv:2504.07091", "region:us" ]
null
2025-08-21T06:32:39Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - fishy wary swan --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
nightmedia/QiMing-Holos-Plus-Qwen3-14B-q6-mlx
nightmedia
2025-08-21T06:30:27Z
0
0
mlx
[ "mlx", "safetensors", "qwen3", "qwen", "unsloth", "qiming", "qiming-holos", "bagua", "decision-making", "strategic-analysis", "cognitive-architecture", "chat", "lora", "philosophy-driven-ai", "text-generation", "conversational", "zh", "en", "base_model:aifeifei798/QiMing-Holos-Plus-Qwen3-14B", "base_model:adapter:aifeifei798/QiMing-Holos-Plus-Qwen3-14B", "license:apache-2.0", "6-bit", "region:us" ]
text-generation
2025-08-21T05:41:09Z
--- license: apache-2.0 language: - zh - en tags: - qwen - qwen3 - unsloth - qiming - qiming-holos - bagua - decision-making - strategic-analysis - cognitive-architecture - chat - lora - philosophy-driven-ai - mlx pipeline_tag: text-generation library_name: mlx base_model: aifeifei798/QiMing-Holos-Plus-Qwen3-14B --- # QiMing-Holos-Plus-Qwen3-14B-q6-mlx This model [QiMing-Holos-Plus-Qwen3-14B-q6-mlx](https://huggingface.co/nightmedia/QiMing-Holos-Plus-Qwen3-14B-q6-mlx) was converted to MLX format from [aifeifei798/QiMing-Holos-Plus-Qwen3-14B](https://huggingface.co/aifeifei798/QiMing-Holos-Plus-Qwen3-14B) using mlx-lm version **0.26.3**. ## Use with mlx ```bash pip install mlx-lm ``` ```python from mlx_lm import load, generate model, tokenizer = load("nightmedia/QiMing-Holos-Plus-Qwen3-14B-q6-mlx") prompt = "hello" if tokenizer.chat_template is not None: messages = [{"role": "user", "content": prompt}] prompt = tokenizer.apply_chat_template( messages, add_generation_prompt=True ) response = generate(model, tokenizer, prompt=prompt, verbose=True) ```
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1755757731
IvanJAjebu
2025-08-21T06:30:08Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "thorny slender capybara", "arxiv:2504.07091", "region:us" ]
null
2025-08-21T06:30:00Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - thorny slender capybara --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
ritesh431/MunshiAI_llama_3.3_70b_finetunned
ritesh431
2025-08-21T06:29:39Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "conversational", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-08-20T13:07:15Z
--- base_model: unsloth/llama-3.3-70b-instruct-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama license: apache-2.0 language: - en --- # Uploaded finetuned model - **Developed by:** ritesh431 - **License:** apache-2.0 - **Finetuned from model:** unsloth/llama-3.3-70b-instruct-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
lemonhat/Qwen2.5-7B-Instruct-t2_25k_v2_tag5_processed
lemonhat
2025-08-21T06:29:33Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "llama-factory", "full", "generated_from_trainer", "conversational", "base_model:Qwen/Qwen2.5-7B-Instruct", "base_model:finetune:Qwen/Qwen2.5-7B-Instruct", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-21T06:28:11Z
--- library_name: transformers license: other base_model: Qwen/Qwen2.5-7B-Instruct tags: - llama-factory - full - generated_from_trainer model-index: - name: t2_25k_v2_tag5_processed results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t2_25k_v2_tag5_processed This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the t2_25k_v2_tag5_processed dataset. It achieves the following results on the evaluation set: - Loss: 0.1914 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - distributed_type: multi-GPU - num_devices: 4 - total_train_batch_size: 4 - total_eval_batch_size: 4 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 0.4392 | 0.0634 | 100 | 0.2912 | | 0.2845 | 0.1268 | 200 | 0.2614 | | 0.2933 | 0.1902 | 300 | 0.2460 | | 0.2773 | 0.2536 | 400 | 0.2335 | | 0.2758 | 0.3171 | 500 | 0.2255 | | 0.2858 | 0.3805 | 600 | 0.2217 | | 0.2113 | 0.4439 | 700 | 0.2136 | | 0.2754 | 0.5073 | 800 | 0.2105 | | 0.2672 | 0.5707 | 900 | 0.2041 | | 0.2721 | 0.6341 | 1000 | 0.2013 | | 0.2707 | 0.6975 | 1100 | 0.1984 | | 0.2538 | 0.7609 | 1200 | 0.1953 | | 0.2143 | 0.8244 | 1300 | 0.1928 | | 0.1926 | 0.8878 | 1400 | 0.1921 | | 0.2476 | 0.9512 | 1500 | 0.1915 | ### Framework versions - Transformers 4.46.1 - Pytorch 2.6.0+cu124 - Datasets 3.1.0 - 
Tokenizers 0.20.3
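The hyperparameter block in the card above reports a per-device batch size of 1 across 4 GPUs (total train batch size 4) with a cosine learning-rate schedule starting at 1e-05. A minimal sketch of how those values combine — function names are illustrative, and the 1500-step horizon is taken from the last logged evaluation step, not from the training code itself:

```python
import math

def effective_batch_size(per_device_batch: int, num_devices: int, grad_accum: int = 1) -> int:
    # Total examples contributing to one optimizer step.
    return per_device_batch * num_devices * grad_accum

def cosine_lr(step: int, total_steps: int, peak_lr: float = 1e-5) -> float:
    # Cosine decay from peak_lr toward 0 over total_steps (no warmup, matching the config).
    return peak_lr * 0.5 * (1.0 + math.cos(math.pi * step / total_steps))

print(effective_batch_size(1, 4))       # 4, as reported in the card
print(cosine_lr(0, 1500))               # 1e-05 at the start of training
print(cosine_lr(1500, 1500))            # decays to 0.0 at the horizon
```

The schedule is why the later eval losses in the table improve in ever-smaller increments: most of the learning-rate budget is spent in the first half of the run.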
trphuoctoan/vivos_2
trphuoctoan
2025-08-21T06:28:35Z
0
0
null
[ "region:us" ]
null
2025-08-21T06:27:48Z
# VIVOS Vietnamese ASR (char, Conformer) - Sample rate: 16 kHz - Token type: char - Checkpoint: valid.loss.ave.pth - Inference config: conf/decode.yaml
Shirish24/smolvla_run_trossen_run1.9
Shirish24
2025-08-21T06:27:10Z
0
0
lerobot
[ "lerobot", "safetensors", "robotics", "smolvla", "dataset:Shirish24/benchmark_single_cube_v2", "arxiv:2506.01844", "base_model:lerobot/smolvla_base", "base_model:finetune:lerobot/smolvla_base", "license:apache-2.0", "region:us" ]
robotics
2025-08-21T06:26:38Z
--- base_model: lerobot/smolvla_base datasets: Shirish24/benchmark_single_cube_v2 library_name: lerobot license: apache-2.0 model_name: smolvla pipeline_tag: robotics tags: - robotics - smolvla - lerobot --- # Model Card for smolvla <!-- Provide a quick summary of what the model is/does. --> [SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware. This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot). See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index). --- ## How to Get Started with the Model For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy). Below is the short version on how to train and run inference/eval: ### Train from scratch ```bash python -m lerobot.scripts.train \ --dataset.repo_id=${HF_USER}/<dataset> \ --policy.type=act \ --output_dir=outputs/train/<desired_policy_repo_id> \ --job_name=lerobot_training \ --policy.device=cuda \ --policy.repo_id=${HF_USER}/<desired_policy_repo_id> --wandb.enable=true ``` *Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`.* ### Evaluate the policy/run inference ```bash python -m lerobot.record \ --robot.type=so100_follower \ --dataset.repo_id=<hf_user>/eval_<dataset> \ --policy.path=<hf_user>/<desired_policy_repo_id> \ --episodes=10 ``` Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint. --- ## Model Details * **License:** apache-2.0
utkububa/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-dormant_grassy_coral
utkububa
2025-08-21T06:26:40Z
105
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "rl-swarm", "genrl-swarm", "grpo", "gensyn", "I am dormant_grassy_coral", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-17T23:11:56Z
--- library_name: transformers tags: - rl-swarm - genrl-swarm - grpo - gensyn - I am dormant_grassy_coral --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. 
--> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
lisaozill03/blockassist-bc-rugged_prickly_alpaca_1755755961
lisaozill03
2025-08-21T06:24:30Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "rugged prickly alpaca", "arxiv:2504.07091", "region:us" ]
null
2025-08-21T06:24:26Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - rugged prickly alpaca --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
prajwalg1997/phi4mini-medical-lora2
prajwalg1997
2025-08-21T06:24:19Z
0
0
null
[ "safetensors", "region:us" ]
null
2025-08-21T06:23:42Z
# Phi-4-mini Instruct — Medical LoRA (ctx=1024) **Base model:** `microsoft/Phi-4-mini-instruct` **Adapter type:** LoRA (QLoRA, 4-bit training) **Domain:** Medical instruction + reasoning **Created:** 2025-08-21 06:23:42 ## Training Summary - Context length: **1024** - Epochs: **2** - Learning rate: **0.00021726991983621354** - LoRA r / alpha / dropout: **32 / 84 / 0.05** - Grad accumulation: **16** - Eval loss (100 examples, ctx=1024): **1.4389** ## Dataset - `FreedomIntelligence/medical-o1-reasoning-SFT` (config: `en`) - ~500 examples (80/20 split) used for quick HPO & final demo run ## Usage ```python from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig from peft import PeftModel import torch base_id = "microsoft/Phi-4-mini-instruct" adapter_id = "prajwalg1997/phi4mini-medical-lora2" tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True) if tokenizer.pad_token is None: tokenizer.pad_token = tokenizer.eos_token bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_quant_type="nf4", bnb_4bit_use_double_quant=True, bnb_4bit_compute_dtype=torch.float16) base = AutoModelForCausalLM.from_pretrained( base_id, device_map="auto", trust_remote_code=True, quantization_config=bnb, attn_implementation="sdpa" ) model = PeftModel.from_pretrained(base, adapter_id) model.eval() ``` ## Notes - Trained with transformers==4.49.0, accelerate==1.3.0, bitsandbytes==0.47.0 - Generation tip: `temperature=0.2`, `do_sample=False`, `max_new_tokens≈256–512`
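The LoRA configuration in the card above (r=32, alpha=84) implies a scaling factor of alpha / r = 2.625 applied to the low-rank update. A small pure-Python sketch of that mechanism — this is an illustration of the standard LoRA formulation, not the actual PEFT internals:

```python
def lora_scaling(alpha: float, r: int) -> float:
    # LoRA scales the low-rank update x @ A @ B by alpha / r.
    return alpha / r

def lora_delta(x, A, B, alpha: float, r: int):
    # x: input vector; A: (d_in x r), B: (r x d_out), as nested lists.
    h = [sum(x[i] * A[i][j] for i in range(len(x))) for j in range(r)]
    out = [sum(h[j] * B[j][k] for j in range(r)) for k in range(len(B[0]))]
    s = lora_scaling(alpha, r)
    return [s * v for v in out]

r, d_in, d_out = 32, 4, 3
x = [1.0, 2.0, 3.0, 4.0]
A = [[0.01] * r for _ in range(d_in)]
B = [[0.0] * d_out for _ in range(r)]  # B starts at zero, so the adapter is a no-op at init

print(lora_scaling(84, 32))       # 2.625
print(lora_delta(x, A, B, 84, r)) # [0.0, 0.0, 0.0] before any training
```

The relatively high alpha-to-r ratio here (2.625 vs. the common 2.0) means the trained adapter's contribution is amplified slightly more than usual at inference time.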
manusiaperahu2012/blockassist-bc-roaring_long_tuna_1755755814
manusiaperahu2012
2025-08-21T06:23:31Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "roaring long tuna", "arxiv:2504.07091", "region:us" ]
null
2025-08-21T06:23:28Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - roaring long tuna --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
llencia/blockassist-bc-wiry_wise_hedgehog_1755757364
llencia
2025-08-21T06:23:14Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "wiry wise hedgehog", "arxiv:2504.07091", "region:us" ]
null
2025-08-21T06:23:04Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - wiry wise hedgehog --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
vwzyrraz7l/blockassist-bc-tall_hunting_vulture_1755755736
vwzyrraz7l
2025-08-21T06:23:11Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "tall hunting vulture", "arxiv:2504.07091", "region:us" ]
null
2025-08-21T06:23:07Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - tall hunting vulture --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Kartikeya/videomae-base-finetuned-yt_short_classification
Kartikeya
2025-08-21T06:22:24Z
0
0
transformers
[ "transformers", "safetensors", "videomae", "video-classification", "generated_from_trainer", "base_model:MCG-NJU/videomae-base", "base_model:finetune:MCG-NJU/videomae-base", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
video-classification
2025-08-20T22:39:31Z
--- library_name: transformers license: cc-by-nc-4.0 base_model: MCG-NJU/videomae-base tags: - generated_from_trainer metrics: - accuracy model-index: - name: videomae-base-finetuned-yt_short_classification results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # videomae-base-finetuned-yt_short_classification This model is a fine-tuned version of [MCG-NJU/videomae-base](https://huggingface.co/MCG-NJU/videomae-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.4704 - Accuracy: 0.7815 - 0 Precision: 0.7484 - 0 Recall: 0.8149 - 0 F1-score: 0.7803 - 0 Support: 6322.0 - 1 Precision: 0.8170 - 1 Recall: 0.7510 - 1 F1-score: 0.7827 - 1 Support: 6957.0 - Accuracy F1-score: 0.7815 - Macro avg Precision: 0.7827 - Macro avg Recall: 0.7830 - Macro avg F1-score: 0.7815 - Macro avg Support: 13279.0 - Weighted avg Precision: 0.7844 - Weighted avg Recall: 0.7815 - Weighted avg F1-score: 0.7815 - Weighted avg Support: 13279.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - training_steps: 2060 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | 0 Precision | 0 Recall | 0 F1-score | 0 Support | 1 Precision | 1 Recall | 1 F1-score | 1 Support | Accuracy F1-score | Macro avg Precision | Macro avg Recall | Macro avg F1-score | Macro avg Support | Weighted avg Precision | Weighted avg Recall | 
Weighted avg F1-score | Weighted avg Support | |:-------------:|:------:|:----:|:---------------:|:--------:|:-----------:|:--------:|:----------:|:---------:|:-----------:|:--------:|:----------:|:---------:|:-----------------:|:-------------------:|:----------------:|:------------------:|:-----------------:|:----------------------:|:-------------------:|:---------------------:|:--------------------:| | 0.6282 | 0.2005 | 413 | 0.6101 | 0.6848 | 0.7561 | 0.4991 | 0.6012 | 6322.0 | 0.6522 | 0.8537 | 0.7395 | 6957.0 | 0.6848 | 0.7041 | 0.6764 | 0.6704 | 13279.0 | 0.7016 | 0.6848 | 0.6737 | 13279.0 | | 0.6569 | 1.2005 | 826 | 0.5357 | 0.7290 | 0.7392 | 0.6655 | 0.7004 | 6322.0 | 0.7213 | 0.7867 | 0.7526 | 6957.0 | 0.7290 | 0.7303 | 0.7261 | 0.7265 | 13279.0 | 0.7298 | 0.7290 | 0.7277 | 13279.0 | | 0.5064 | 2.2005 | 1239 | 0.4839 | 0.7687 | 0.7517 | 0.7680 | 0.7597 | 6322.0 | 0.7849 | 0.7694 | 0.7771 | 6957.0 | 0.7687 | 0.7683 | 0.7687 | 0.7684 | 13279.0 | 0.7691 | 0.7687 | 0.7688 | 13279.0 | | 0.4293 | 3.2005 | 1652 | 0.5120 | 0.7518 | 0.6850 | 0.8861 | 0.7727 | 6322.0 | 0.8589 | 0.6297 | 0.7267 | 6957.0 | 0.7518 | 0.7719 | 0.7579 | 0.7497 | 13279.0 | 0.7761 | 0.7518 | 0.7486 | 13279.0 | | 0.421 | 4.1981 | 2060 | 0.4704 | 0.7815 | 0.7484 | 0.8149 | 0.7803 | 6322.0 | 0.8170 | 0.7510 | 0.7827 | 6957.0 | 0.7815 | 0.7827 | 0.7830 | 0.7815 | 13279.0 | 0.7844 | 0.7815 | 0.7815 | 13279.0 | ### Framework versions - Transformers 4.46.3 - Pytorch 2.0.0+cu117 - Datasets 3.1.0 - Tokenizers 0.20.3
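As a sanity check on the results table above, the macro and weighted averages can be recomputed from the final-epoch per-class scores (values below are copied from the card; the weighted figure differs from the reported 0.7815 only by rounding of the per-class inputs):

```python
# Final-epoch per-class F1 and support, as reported in the card above.
support = {0: 6322.0, 1: 6957.0}
f1 = {0: 0.7803, 1: 0.7827}

# Macro average: unweighted mean over classes.
macro_f1 = sum(f1.values()) / len(f1)

# Weighted average: mean weighted by class support.
total = sum(support.values())
weighted_f1 = sum(f1[c] * support[c] for c in f1) / total

print(round(macro_f1, 4))  # 0.7815, matching the reported macro avg F1-score
```

The same recomputation applies to the precision and recall columns; all reported aggregates are consistent with the per-class numbers up to rounding.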
Gemneye/Flux-dev.1-Dora
Gemneye
2025-08-21T06:20:18Z
8
0
diffusers
[ "diffusers", "text-to-image", "lora", "template:diffusion-lora", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:apache-2.0", "region:us" ]
text-to-image
2025-08-19T19:46:40Z
--- tags: - text-to-image - lora - diffusers - template:diffusion-lora widget: - output: url: images/Dora-004.jpg text: '-' - output: url: images/Dora-010.jpg text: '-' - output: url: images/Dora-002-1080x1350.jpg text: >- 🟨 Scene Setup Location: Chichen Itza, Mexico Time of Day / Lighting: Late afternoon with warm sunlight Mood / Atmosphere: Mystical and adventurous Camera Type: DSLR Lens Type: 35mm Aperture & Depth of Field: f/2.8, shallow depth of field Camera Angle & Framing: Mid-shot, slightly elevated 🟩 Subject Description Age / Ethnicity / Gender: Young adult, Caucasian, female Hair: Long, straight blonde hair Eyes: Dark eyes Clothing & Accessories: Colorful, patterned dress, sunglasses in hand Pose / Body Language: Standing with one hand on hip, the other adjusting hair Facial Expression: Confident and curious Skin Details / Texture: Smooth, sun-kissed skin 🟦 Background & Environmental Detail Foreground Elements: Ancient stone steps Midground Composition: El Castillo pyramid Background Elements: Lush green jungle Lighting Effects: Soft shadows, warm sunlight Color, Materials, Texture: Earthy tones, vibrant dress colors 🟪 Style & Realism Enhancers Style Reference / Genre: Travel photography Realism Tags: High detail, natural lighting, precise anatomy, lifelike texture base_model: black-forest-labs/FLUX.1-dev instance_prompt: D0ra license: apache-2.0 --- # Dora-Flux <Gallery /> ## Model description Dora @2000 steps trained on Flux-dev.1 ## Trigger words You should use `D0ra` to trigger the image generation. ## Download model [Download](/Gemneye/Flux-dev.1-Dora/tree/main) them in the Files & versions tab.
llencia/blockassist-bc-wiry_wise_hedgehog_1755757184
llencia
2025-08-21T06:20:07Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "wiry wise hedgehog", "arxiv:2504.07091", "region:us" ]
null
2025-08-21T06:20:03Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - wiry wise hedgehog --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
hwan99/llama3ko-8b-qualcomm-lora_merged
hwan99
2025-08-21T06:19:51Z
0
1
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-21T06:08:40Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
1122QQTT/Cyber_Xiao_Yan
1122QQTT
2025-08-21T06:18:21Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-08-21T06:18:21Z
--- license: apache-2.0 ---
fcfsbus/blockassist-bc-prehistoric_humming_lemur_1755756945
fcfsbus
2025-08-21T06:17:33Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "prehistoric humming lemur", "arxiv:2504.07091", "region:us" ]
null
2025-08-21T06:17:16Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - prehistoric humming lemur --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
joppertiu/blockassist-bc-grunting_squinting_clam_1755757002
joppertiu
2025-08-21T06:16:55Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "grunting squinting clam", "arxiv:2504.07091", "region:us" ]
null
2025-08-21T06:16:42Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - grunting squinting clam --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
llencia/blockassist-bc-wiry_wise_hedgehog_1755756867
llencia
2025-08-21T06:15:00Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "wiry wise hedgehog", "arxiv:2504.07091", "region:us" ]
null
2025-08-21T06:14:52Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - wiry wise hedgehog --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
joppertiu/blockassist-bc-tiny_fierce_bee_1755756835
joppertiu
2025-08-21T06:14:08Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "tiny fierce bee", "arxiv:2504.07091", "region:us" ]
null
2025-08-21T06:13:55Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - tiny fierce bee --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1755756659
IvanJAjebu
2025-08-21T06:12:15Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "thorny slender capybara", "arxiv:2504.07091", "region:us" ]
null
2025-08-21T06:12:06Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - thorny slender capybara --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
zelk12/MT2-Gen3_gemma-3-12B-Q6_K-GGUF
zelk12
2025-08-21T06:12:06Z
0
0
null
[ "gguf", "merge", "mergekit", "lazymergekit", "IlyaGusev/saiga_gemma3_12b", "zelk12/MT1-gemma-3-12B", "soob3123/amoral-gemma3-12B-v2", "zelk12/MT-Gen1-gemma-3-12B", "zelk12/MT-gemma-3-12B", "llama-cpp", "gguf-my-repo", "image-text-to-text", "base_model:zelk12/MT2-Gen3_gemma-3-12B", "base_model:quantized:zelk12/MT2-Gen3_gemma-3-12B", "license:gemma", "endpoints_compatible", "region:us", "conversational" ]
image-text-to-text
2025-08-21T06:11:26Z
--- base_model: zelk12/MT2-Gen3_gemma-3-12B tags: - merge - mergekit - lazymergekit - IlyaGusev/saiga_gemma3_12b - zelk12/MT1-gemma-3-12B - soob3123/amoral-gemma3-12B-v2 - zelk12/MT-Gen1-gemma-3-12B - zelk12/MT-gemma-3-12B - llama-cpp - gguf-my-repo license: gemma pipeline_tag: image-text-to-text --- # zelk12/MT2-Gen3_gemma-3-12B-Q6_K-GGUF This model was converted to GGUF format from [`zelk12/MT2-Gen3_gemma-3-12B`](https://huggingface.co/zelk12/MT2-Gen3_gemma-3-12B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/zelk12/MT2-Gen3_gemma-3-12B) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux). ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo zelk12/MT2-Gen3_gemma-3-12B-Q6_K-GGUF --hf-file mt2-gen3_gemma-3-12b-q6_k.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo zelk12/MT2-Gen3_gemma-3-12B-Q6_K-GGUF --hf-file mt2-gen3_gemma-3-12b-q6_k.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo zelk12/MT2-Gen3_gemma-3-12B-Q6_K-GGUF --hf-file mt2-gen3_gemma-3-12b-q6_k.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo zelk12/MT2-Gen3_gemma-3-12B-Q6_K-GGUF --hf-file mt2-gen3_gemma-3-12b-q6_k.gguf -c 2048 ```
llencia/blockassist-bc-wiry_wise_hedgehog_1755756640
llencia
2025-08-21T06:11:06Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "wiry wise hedgehog", "arxiv:2504.07091", "region:us" ]
null
2025-08-21T06:11:03Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - wiry wise hedgehog --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
thanobidex/blockassist-bc-colorful_shiny_hare_1755755170
thanobidex
2025-08-21T06:10:53Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "colorful shiny hare", "arxiv:2504.07091", "region:us" ]
null
2025-08-21T06:10:50Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - colorful shiny hare --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
EMABC/Huatuogpt2-lora-sft
EMABC
2025-08-21T06:10:43Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "region:us" ]
null
2025-08-21T06:07:58Z
--- base_model: HuatuoGPT2-7B library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. 
[More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). 
- **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.13.2
tudoubing/MeloTTS-Chinese
tudoubing
2025-08-21T06:08:37Z
0
0
null
[ "onnx", "license:apache-2.0", "region:us" ]
null
2025-08-21T05:58:59Z
--- license: apache-2.0 ---
trl-algo/summary_tags_qwen2_vl_v1
trl-algo
2025-08-21T06:07:48Z
0
0
null
[ "safetensors", "qwen2_vl", "llama-factory", "qwen2-vl-2b-instruct", "fine-tuned", "merged", "summary", "tags", "ddc", "text-generation", "conversational", "en", "zh", "base_model:Qwen/Qwen2-VL-2B-Instruct", "base_model:finetune:Qwen/Qwen2-VL-2B-Instruct", "license:apache-2.0", "region:us" ]
text-generation
2025-08-21T06:07:04Z
--- base_model: Qwen/Qwen2-VL-2B-Instruct tags: - llama-factory - qwen2-vl-2b-instruct - fine-tuned - merged - summary - tags - ddc license: apache-2.0 language: - en - zh pipeline_tag: text-generation model_type: qwen2 --- # summary_tags_qwen2_vl_v1 Fine-tuned Qwen2-VL-2B model for summarization, tag extraction, and DDC classification. ## Model Details - **Base Model**: Qwen/Qwen2-VL-2B-Instruct - **Training Method**: LoRA fine-tuning + model merging - **Tasks**: Text summarization, tag extraction, DDC classification
zelk12/MT2-Gen3_gemma-3-12B
zelk12
2025-08-21T06:06:22Z
0
0
null
[ "safetensors", "gemma3", "merge", "mergekit", "lazymergekit", "IlyaGusev/saiga_gemma3_12b", "zelk12/MT1-gemma-3-12B", "soob3123/amoral-gemma3-12B-v2", "zelk12/MT-Gen1-gemma-3-12B", "zelk12/MT-gemma-3-12B", "image-text-to-text", "conversational", "base_model:IlyaGusev/saiga_gemma3_12b", "base_model:merge:IlyaGusev/saiga_gemma3_12b", "base_model:soob3123/amoral-gemma3-12B-v2", "base_model:merge:soob3123/amoral-gemma3-12B-v2", "base_model:zelk12/MT-Gen1-gemma-3-12B", "base_model:merge:zelk12/MT-Gen1-gemma-3-12B", "base_model:zelk12/MT-gemma-3-12B", "base_model:merge:zelk12/MT-gemma-3-12B", "base_model:zelk12/MT1-gemma-3-12B", "base_model:merge:zelk12/MT1-gemma-3-12B", "license:gemma", "region:us" ]
image-text-to-text
2025-08-21T05:31:58Z
--- base_model: - IlyaGusev/saiga_gemma3_12b - zelk12/MT1-gemma-3-12B - soob3123/amoral-gemma3-12B-v2 - zelk12/MT-Gen1-gemma-3-12B - zelk12/MT-gemma-3-12B tags: - merge - mergekit - lazymergekit - IlyaGusev/saiga_gemma3_12b - zelk12/MT1-gemma-3-12B - soob3123/amoral-gemma3-12B-v2 - zelk12/MT-Gen1-gemma-3-12B - zelk12/MT-gemma-3-12B license: gemma pipeline_tag: image-text-to-text --- # MT2-Gen3_gemma-3-12B MT2-Gen3_gemma-3-12B is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [IlyaGusev/saiga_gemma3_12b](https://huggingface.co/IlyaGusev/saiga_gemma3_12b) * [zelk12/MT1-gemma-3-12B](https://huggingface.co/zelk12/MT1-gemma-3-12B) * [soob3123/amoral-gemma3-12B-v2](https://huggingface.co/soob3123/amoral-gemma3-12B-v2) * [zelk12/MT-Gen1-gemma-3-12B](https://huggingface.co/zelk12/MT-Gen1-gemma-3-12B) * [zelk12/MT-gemma-3-12B](https://huggingface.co/zelk12/MT-gemma-3-12B) ## 🧩 Configuration ```yaml models: - model: TheDrummer/Fallen-Gemma3-12B-v1 #no parameters necessary for base model - model: IlyaGusev/saiga_gemma3_12b parameters: density: 0.5 weight: 0.5 - model: zelk12/MT1-gemma-3-12B parameters: density: 0.5 weight: 0.507 - model: soob3123/amoral-gemma3-12B-v2 parameters: density: 0.5 weight: 0.615 - model: zelk12/MT-Gen1-gemma-3-12B parameters: density: 0.5 weight: 0.781 - model: zelk12/MT-gemma-3-12B parameters: density: 0.5 weight: 0.8 merge_method: dare_ties base_model: TheDrummer/Fallen-Gemma3-12B-v1 parameters: normalize: true dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "zelk12/MT2-Gen3_gemma-3-12B" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( 
"text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
coelacanthxyz/blockassist-bc-finicky_thriving_grouse_1755754643
coelacanthxyz
2025-08-21T06:05:15Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "finicky thriving grouse", "arxiv:2504.07091", "region:us" ]
null
2025-08-21T06:05:09Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - finicky thriving grouse --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
unitova/blockassist-bc-zealous_sneaky_raven_1755754601
unitova
2025-08-21T06:04:45Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "zealous sneaky raven", "arxiv:2504.07091", "region:us" ]
null
2025-08-21T06:04:42Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - zealous sneaky raven --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).