| Column | Type | Observed range |
|:--|:--|:--|
| modelId | string | length 5 to 139 |
| author | string | length 2 to 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 to 2025-09-12 12:31:00 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string (categorical) | 555 distinct values |
| tags | list | length 1 to 4.05k |
| pipeline_tag | string (categorical) | 55 distinct values |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 to 2025-09-12 12:28:53 |
| card | string | length 11 to 1.01M |
jadechoghari/None
jadechoghari
2025-09-11T21:44:56Z
0
0
lerobot
[ "lerobot", "safetensors", "robotics", "smolvla", "dataset:HuggingfaceVLA/libero", "arxiv:2506.01844", "base_model:lerobot/smolvla_base", "base_model:finetune:lerobot/smolvla_base", "license:apache-2.0", "region:us" ]
robotics
2025-09-11T21:36:50Z
---
base_model: lerobot/smolvla_base
datasets: HuggingfaceVLA/libero
library_name: lerobot
license: apache-2.0
model_name: smolvla
pipeline_tag: robotics
tags:
- lerobot
- robotics
- smolvla
---

# Model Card for smolvla

<!-- Provide a quick summary of what the model is/does. -->

[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.

This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot). See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).

---

## How to Get Started with the Model

For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy). Below is a short version of how to train and run inference/evaluation:

### Train from scratch

```bash
lerobot-train \
  --dataset.repo_id=${HF_USER}/<dataset> \
  --policy.type=act \
  --output_dir=outputs/train/<desired_policy_repo_id> \
  --job_name=lerobot_training \
  --policy.device=cuda \
  --policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
  --wandb.enable=true
```

_Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._

### Evaluate the policy / run inference

```bash
lerobot-record \
  --robot.type=so100_follower \
  --dataset.repo_id=<hf_user>/eval_<dataset> \
  --policy.path=<hf_user>/<desired_policy_repo_id> \
  --episodes=10
```

Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or Hub checkpoint.

---

## Model Details

- **License:** apache-2.0
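To pass a Hub checkpoint to `--policy.path` as a local directory, the repository can first be fetched with `huggingface_hub`. This is a minimal sketch, not part of the card above; `<hf_user>/<desired_policy_repo_id>` is the same placeholder used in the commands and must be replaced with a real repo id.

```python
# Minimal sketch: download the trained policy repo locally so its path can be
# supplied to --policy.path. The repo id below is a placeholder, not a real repo.
from huggingface_hub import snapshot_download

local_checkpoint = snapshot_download(repo_id="<hf_user>/<desired_policy_repo_id>")
print(local_checkpoint)  # local directory with the policy's config and weights
```

The printed directory can then be passed to `--policy.path` in the `lerobot-record` command above.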
sunki23/blockassist
sunki23
2025-09-11T21:44:36Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "dormant wise rabbit", "arxiv:2504.07091", "region:us" ]
null
2025-09-10T22:28:13Z
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- dormant wise rabbit
---

# Gensyn BlockAssist

Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
younus00/poca-SoccerTwos
younus00
2025-09-11T21:41:24Z
0
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "SoccerTwos", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SoccerTwos", "region:us" ]
reinforcement-learning
2025-09-11T21:40:21Z
---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---

# **poca** Agent playing **SoccerTwos**

This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).

## Usage (with ML-Agents)

Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/

We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub:

- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction

### Resume the training

```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```

### Watch your Agent play

You can watch your agent **playing directly in your browser**:

1. If the environment is part of the official ML-Agents environments, go to https://huggingface.co/unity
2. Find your model_id: younus00/poca-SoccerTwos
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
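Besides the in-browser viewer, the exported ONNX policy in this repository can be inspected locally with `onnxruntime`. This is a minimal sketch, not part of the card; `SoccerTwos.onnx` is a hypothetical file name, so check the repository's file list for the actual exported model.

```python
# Minimal sketch: load the exported ML-Agents policy with onnxruntime and list
# its input/output tensors. "SoccerTwos.onnx" is a hypothetical file name.
import onnxruntime as ort

session = ort.InferenceSession("SoccerTwos.onnx")
for inp in session.get_inputs():
    print("input:", inp.name, inp.shape, inp.type)
for out in session.get_outputs():
    print("output:", out.name, out.shape, out.type)
```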
mradermacher/UIGEN-T3-4B-Preview-GGUF
mradermacher
2025-09-11T21:40:33Z
0
0
null
[ "gguf", "endpoints_compatible", "region:us", "conversational" ]
null
2025-09-11T21:14:20Z
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/Tesslate/UIGEN-T3-4B-Preview
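The static quants can be run with any GGUF-compatible runtime. Below is a minimal sketch using `huggingface_hub` and `llama-cpp-python` rather than the llama.cpp CLI; the quant file name is an assumption, so check the repository's file list for the exact name.

```python
# Minimal sketch: download one quant and run it with llama-cpp-python.
# The filename below is assumed from the listed quants; verify it in the repo.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="mradermacher/UIGEN-T3-4B-Preview-GGUF",
    filename="UIGEN-T3-4B-Preview.Q4_K_S.gguf",  # hypothetical file name
)
llm = Llama(model_path=gguf_path, n_ctx=4096)
out = llm("Write a short HTML landing page for a bakery.", max_tokens=256)
print(out["choices"][0]["text"])
```

Any of the other listed quants (for example Q8_0) can be substituted by changing the filename.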
Paradoxis/Qwen2.5-VL-3B-Instruct-GRPO-fromFT
Paradoxis
2025-09-11T21:38:01Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "trl", "grpo", "hf_jobs", "arxiv:2402.03300", "endpoints_compatible", "region:us" ]
null
2025-09-10T11:57:59Z
---
library_name: transformers
model_name: Qwen2.5-VL-3B-Instruct-GRPO-fromFT
tags:
- generated_from_trainer
- trl
- grpo
- hf_jobs
licence: license
---

# Model Card for Qwen2.5-VL-3B-Instruct-GRPO-fromFT

This model is a fine-tuned version of [None](https://huggingface.co/None). It has been trained using [TRL](https://github.com/huggingface/trl).

## Quick start

```python
from transformers import pipeline

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Paradoxis/Qwen2.5-VL-3B-Instruct-GRPO-fromFT", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```

## Training procedure

[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/flofiz-universit-de-bourgogne/GRPO/runs/lq9h7f3k)

This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).

### Framework versions

- TRL: 0.24.0.dev0
- Transformers: 4.56.1
- Pytorch: 2.7.1
- Datasets: 3.6.0
- Tokenizers: 0.22.0

## Citations

Cite GRPO as:

```bibtex
@article{shao2024deepseekmath,
    title  = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
    author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
    year   = 2024,
    eprint = {arXiv:2402.03300},
}
```

Cite TRL as:

```bibtex
@misc{vonwerra2022trl,
    title        = {{TRL: Transformer Reinforcement Learning}},
    author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
    year         = 2020,
    journal      = {GitHub repository},
    publisher    = {GitHub},
    howpublished = {\url{https://github.com/huggingface/trl}}
}
```
gabriellarson/WEBGEN-OSS-20B-GGUF
gabriellarson
2025-09-11T21:37:39Z
0
1
transformers
[ "transformers", "gguf", "text-generation-inference", "unsloth", "gpt_oss", "en", "base_model:Tesslate/WEBGEN-OSS-20B", "base_model:quantized:Tesslate/WEBGEN-OSS-20B", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-09-11T21:32:56Z
---
base_model:
- Tesslate/WEBGEN-OSS-20B
tags:
- text-generation-inference
- transformers
- unsloth
- gpt_oss
license: apache-2.0
language:
- en
---

[Example Output](https://codepen.io/qingy1337/pen/xbwNWGw)
NgQuocThai/whisper-medium-Split-Sentences-cleanpunc
NgQuocThai
2025-09-11T21:36:50Z
1
0
transformers
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "base_model:openai/whisper-medium", "base_model:finetune:openai/whisper-medium", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2025-09-10T15:35:46Z
---
library_name: transformers
license: apache-2.0
base_model: openai/whisper-medium
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: whisper-medium-Split-Sentences-cleanpunc
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# whisper-medium-Split-Sentences-cleanpunc

This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6916
- Cer: 15.8237
- Wer: 27.8527

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 25
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Cer     | Wer     |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|
| 2.5398        | 1.0   | 1353  | 0.6961          | 33.6482 | 54.1036 |
| 0.9355        | 2.0   | 2706  | 0.6405          | 34.2321 | 51.6942 |
| 0.7293        | 3.0   | 4059  | 0.5997          | 28.1857 | 44.6026 |
| 0.6003        | 4.0   | 5412  | 0.5919          | 26.6651 | 42.5012 |
| 0.5051        | 5.0   | 6765  | 0.5781          | 26.4160 | 40.6325 |
| 0.4287        | 6.0   | 8118  | 0.5792          | 22.4282 | 36.1832 |
| 0.3646        | 7.0   | 9471  | 0.5839          | 19.7198 | 32.6853 |
| 0.3112        | 8.0   | 10824 | 0.5972          | 18.8569 | 31.7544 |
| 0.2648        | 9.0   | 12177 | 0.6061          | 19.4667 | 32.7127 |
| 0.2258        | 10.0  | 13530 | 0.6073          | 19.1717 | 33.1166 |
| 0.1928        | 11.0  | 14883 | 0.6155          | 16.8899 | 29.5092 |
| 0.1649        | 12.0  | 16236 | 0.6273          | 17.8584 | 30.6044 |
| 0.1409        | 13.0  | 17589 | 0.6338          | 17.9501 | 30.4401 |
| 0.1219        | 14.0  | 18942 | 0.6458          | 17.9700 | 30.4128 |
| 0.1047        | 15.0  | 20295 | 0.6494          | 17.1589 | 29.6530 |
| 0.0906        | 16.0  | 21648 | 0.6551          | 17.1569 | 29.4065 |
| 0.0803        | 17.0  | 23001 | 0.6580          | 15.9931 | 27.9211 |
| 0.0701        | 18.0  | 24354 | 0.6706          | 16.2163 | 28.3045 |
| 0.0621        | 19.0  | 25707 | 0.6736          | 16.3358 | 28.3798 |
| 0.0561        | 20.0  | 27060 | 0.6802          | 16.5411 | 28.6125 |
| 0.0508        | 21.0  | 28413 | 0.6810          | 16.0489 | 28.0649 |
| 0.0463        | 22.0  | 29766 | 0.6884          | 16.1525 | 28.2223 |
| 0.0434        | 23.0  | 31119 | 0.6916          | 15.8237 | 27.8527 |
| 0.0409        | 24.0  | 32472 | 0.6930          | 16.0768 | 28.0101 |
| 0.0393        | 25.0  | 33825 | 0.6935          | 16.3697 | 28.2429 |

### Framework versions

- Transformers 4.53.3
- Pytorch 2.7.1+cu118
- Datasets 3.6.0
- Tokenizers 0.21.2
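The auto-generated card omits an inference example. Below is a minimal sketch using the standard transformers ASR pipeline; `audio.wav` is a placeholder for a local audio file.

```python
# Minimal sketch: transcribe an audio file with the fine-tuned checkpoint.
# "audio.wav" is a placeholder path.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="NgQuocThai/whisper-medium-Split-Sentences-cleanpunc",
)
result = asr("audio.wav")
print(result["text"])
```

For recordings longer than about 30 seconds, passing `chunk_length_s=30` when constructing the pipeline is a common option.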
Sanjay1905/art-gpt-oss
Sanjay1905
2025-09-11T21:33:45Z
0
0
null
[ "safetensors", "unsloth", "license:apache-2.0", "region:us" ]
null
2025-09-04T12:55:24Z
---
license: apache-2.0
tags:
- unsloth
---
NB-M/Meta-Llama-3.1-8B-Instruct-mmc-model2-LORA-F16-GGUF
NB-M
2025-09-11T21:33:00Z
0
0
transformers
[ "transformers", "gguf", "text-generation-inference", "unsloth", "llama", "trl", "llama-cpp", "gguf-my-lora", "en", "base_model:NB-M/Meta-Llama-3.1-8B-Instruct-mmc-model2-LORA", "base_model:quantized:NB-M/Meta-Llama-3.1-8B-Instruct-mmc-model2-LORA", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-09-11T20:39:55Z
---
base_model: NB-M/Meta-Llama-3.1-8B-Instruct-mmc-model2-LORA
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- llama-cpp
- gguf-my-lora
---

# NB-M/Meta-Llama-3.1-8B-Instruct-mmc-model2-LORA-F16-GGUF

This LoRA adapter was converted to GGUF format from [`NB-M/Meta-Llama-3.1-8B-Instruct-mmc-model2-LORA`](https://huggingface.co/NB-M/Meta-Llama-3.1-8B-Instruct-mmc-model2-LORA) via ggml.ai's [GGUF-my-lora](https://huggingface.co/spaces/ggml-org/gguf-my-lora) space.
Refer to the [original adapter repository](https://huggingface.co/NB-M/Meta-Llama-3.1-8B-Instruct-mmc-model2-LORA) for more details.

## Use with llama.cpp

```bash
# with cli
llama-cli -m base_model.gguf --lora Meta-Llama-3.1-8B-Instruct-mmc-model2-LORA-f16.gguf (...other args)

# with server
llama-server -m base_model.gguf --lora Meta-Llama-3.1-8B-Instruct-mmc-model2-LORA-f16.gguf (...other args)
```

For more on LoRA usage with the llama.cpp server, refer to the [llama.cpp server documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/server/README.md).
lmaccarini/flan-t5-base-gec
lmaccarini
2025-09-11T21:32:21Z
0
0
transformers
[ "transformers", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:google/flan-t5-base", "base_model:finetune:google/flan-t5-base", "license:apache-2.0", "text-generation-inference", "endpoints_compatible", "region:us" ]
null
2025-09-11T21:31:57Z
---
library_name: transformers
license: apache-2.0
base_model: google/flan-t5-base
tags:
- generated_from_trainer
metrics:
- sacrebleu
model-index:
- name: flan-t5-base-gec
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# flan-t5-base-gec

This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2594
- Sacrebleu: 83.0881

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: adamw_torch_fused with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3

### Training results

| Training Loss | Epoch  | Step  | Validation Loss | Sacrebleu |
|:-------------:|:------:|:-----:|:---------------:|:---------:|
| 0.3545        | 0.2591 | 1000  | 0.2928          | 81.3373   |
| 0.3264        | 0.5181 | 2000  | 0.2814          | 81.8621   |
| 0.308         | 0.7772 | 3000  | 0.2722          | 82.3528   |
| 0.2896        | 1.0363 | 4000  | 0.2677          | 82.6670   |
| 0.3001        | 1.2953 | 5000  | 0.2615          | 82.7959   |
| 0.2991        | 1.5544 | 6000  | 0.2612          | 82.9720   |
| 0.2937        | 1.8135 | 7000  | 0.2612          | 82.8736   |
| 0.279         | 2.0725 | 8000  | 0.2595          | 83.0339   |
| 0.2702        | 2.3316 | 9000  | 0.2595          | 83.1004   |
| 0.2685        | 2.5907 | 10000 | 0.2606          | 83.0513   |
| 0.2706        | 2.8497 | 11000 | 0.2594          | 83.0881   |

### Framework versions

- Transformers 4.56.1
- Pytorch 2.8.0+cu126
- Datasets 4.0.0
- Tokenizers 0.22.0
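The auto-generated card omits an inference example. Below is a minimal sketch using the transformers text2text pipeline; the input sentence is illustrative, and whether the model expects a task prefix is an assumption (none is used here).

```python
# Minimal sketch: run the fine-tuned checkpoint as a text2text grammar corrector.
# The example sentence is illustrative; no task prefix is assumed.
from transformers import pipeline

corrector = pipeline("text2text-generation", model="lmaccarini/flan-t5-base-gec")
result = corrector("She go to school every days.", max_new_tokens=64)
print(result[0]["generated_text"])
```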
tanvirahmedkhan/blockassist
tanvirahmedkhan
2025-09-11T21:31:33Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "hardy whiskered mantis", "arxiv:2504.07091", "region:us" ]
null
2025-09-11T21:31:07Z
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- hardy whiskered mantis
---

# Gensyn BlockAssist

Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
BootesVoid/cmf8kqix80g0xsr53kz6ns9mm_cmffvn19v04ckx0n08zbqbiq2
BootesVoid
2025-09-11T21:30:10Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-09-11T21:30:08Z
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
#   - text: >-
#       prompt
#     output:
#       url: https://...
instance_prompt: CR7
---

# Cmf8Kqix80G0Xsr53Kz6Ns9Mm_Cmffvn19V04Ckx0N08Zbqbiq2

<Gallery />

## About this LoRA

This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.

It was trained on [Replicate](https://replicate.com/) using AI Toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train

## Trigger words

You should use `CR7` to trigger the image generation.

## Run this LoRA with an API using Replicate

```py
import replicate

input = {
    "prompt": "CR7",
    "lora_weights": "https://huggingface.co/BootesVoid/cmf8kqix80g0xsr53kz6ns9mm_cmffvn19v04ckx0n08zbqbiq2/resolve/main/lora.safetensors"
}

output = replicate.run(
    "black-forest-labs/flux-dev-lora",
    input=input
)
for index, item in enumerate(output):
    with open(f"output_{index}.webp", "wb") as file:
        file.write(item.read())
```

## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)

```py
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmf8kqix80g0xsr53kz6ns9mm_cmffvn19v04ckx0n08zbqbiq2', weight_name='lora.safetensors')
image = pipeline('CR7').images[0]
```

For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters).

## Training details

- Steps: 2500
- Learning rate: 9e-05
- LoRA rank: 16

## Contribute your own examples

You can use the [community tab](https://huggingface.co/BootesVoid/cmf8kqix80g0xsr53kz6ns9mm_cmffvn19v04ckx0n08zbqbiq2/discussions) to add images that show off what you’ve made with this LoRA.
IoannisKat1/bge-reranker-basefinetuned-new
IoannisKat1
2025-09-11T21:29:49Z
0
0
sentence-transformers
[ "sentence-transformers", "tensorboard", "safetensors", "xlm-roberta", "cross-encoder", "reranker", "generated_from_trainer", "dataset_size:8759", "loss:BinaryCrossEntropyLoss", "text-ranking", "arxiv:1908.10084", "base_model:BAAI/bge-reranker-base", "base_model:finetune:BAAI/bge-reranker-base", "model-index", "region:us" ]
text-ranking
2025-09-11T21:23:30Z
--- tags: - sentence-transformers - cross-encoder - reranker - generated_from_trainer - dataset_size:8759 - loss:BinaryCrossEntropyLoss base_model: BAAI/bge-reranker-base pipeline_tag: text-ranking library_name: sentence-transformers metrics: - map - mrr@10 - ndcg@10 - accuracy - accuracy_threshold - f1 - f1_threshold - precision - recall - average_precision model-index: - name: CrossEncoder based on BAAI/bge-reranker-base results: - task: type: cross-encoder-reranking name: Cross Encoder Reranking dataset: name: gooaq dev type: gooaq-dev metrics: - type: map value: 0.7316 name: Map - type: mrr@10 value: 0.7315 name: Mrr@10 - type: ndcg@10 value: 0.7599 name: Ndcg@10 - task: type: cross-encoder-classification name: Cross Encoder Classification dataset: name: sts dev type: sts_dev metrics: - type: accuracy value: 0.9974554707379135 name: Accuracy - type: accuracy_threshold value: 0.00048366509145125747 name: Accuracy Threshold - type: f1 value: 0.9987261146496815 name: F1 - type: f1_threshold value: 0.00048366509145125747 name: F1 Threshold - type: precision value: 1.0 name: Precision - type: recall value: 0.9974554707379135 name: Recall - type: average_precision value: 1.0 name: Average Precision --- # CrossEncoder based on BAAI/bge-reranker-base This is a [Cross Encoder](https://www.sbert.net/docs/cross_encoder/usage/usage.html) model finetuned from [BAAI/bge-reranker-base](https://huggingface.co/BAAI/bge-reranker-base) using the [sentence-transformers](https://www.SBERT.net) library. It computes scores for pairs of texts, which can be used for text reranking and semantic search. ## Model Details ### Model Description - **Model Type:** Cross Encoder - **Base model:** [BAAI/bge-reranker-base](https://huggingface.co/BAAI/bge-reranker-base) <!-- at revision 2cfc18c9415c912f9d8155881c133215df768a70 --> - **Maximum Sequence Length:** 512 tokens - **Number of Output Labels:** 1 label <!-- - **Training Dataset:** Unknown --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Documentation:** [Cross Encoder Documentation](https://www.sbert.net/docs/cross_encoder/usage/usage.html) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Cross Encoders on Hugging Face](https://huggingface.co/models?library=sentence-transformers&other=cross-encoder) ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. ```python from sentence_transformers import CrossEncoder # Download from the 🤗 Hub model = CrossEncoder("cross_encoder_model_id") # Get scores for pairs of texts pairs = [ ['What is the amount of the transaction conducted on 30/11/2020 at 4:39:55 PM?', '**Court (Civil/Criminal): Civil**\n\n**Provisions:**\n\n**Time of commission of the act:**\n\n**Outcome (not guilty, guilty):**\n\n**Rationale:**\n\n**Facts:**\nThe plaintiff holds credit card number ............ with the defendant banking corporation. Based on the application for alternative networks dated 19/7/2015 with number ......... submitted at a branch of the defendant, he was granted access to the electronic banking service (e-banking) to conduct banking transactions (debit, credit, updates, payments) remotely. 
On 30/11/2020, the plaintiff fell victim to electronic fraud through the "phishing" method, whereby an unknown perpetrator managed to withdraw a total amount of €3,121.75 from the aforementioned credit card. Specifically, the plaintiff received an email at 1:35 PM on 29/11/2020 from sender ...... with address ........, informing him that due to an impending system change, he needed to verify the mobile phone number linked to the credit card, urging him to complete the verification process within the next 24 hours by following a link titled ........; otherwise, his account would be locked for security reasons. The plaintiff read this email on the afternoon of 30 November 2020 and, believing it was from the defendant, followed the instructions and proceeded via the provided link to a website that was identical (a clone) to that of the defendant. On this page, he was asked to enter the six-digit security code (.........) that had just been sent to his mobile phone by the defendant at 3:41 PM, with the note that it was an activation code for his ........ card at ........., which he entered.\n\nSubsequently, the plaintiff received, according to his statements, a new email (not submitted), which requested him to enter the details of the aforementioned credit card, specifically the name of the cardholder and the card number, not the PIN, which he also entered, convinced that he was within the online environment of the defendant. Then, at 3:47 PM, he received a message on his mobile phone from the defendant containing the exact same content as the one he received at 3:41 PM, while at 3:50 PM he received a message stating that the activation of his ......... card at ....... had been completed. Once the plaintiff read this, he became concerned that something was not right, and immediately called (at 4:41 PM) the defendant\'s call center to inform them. There, the employees, with whom he finally connected at 5:04 PM due to high call center volume, advised him to delete the relevant emails, cancel his credit card, change his access passwords for the service, and submit a dispute request regarding the conducted transactions. The plaintiff electronically sent this request to the defendant, disputing the detailed transactions amounting to €3,121.75, which were conducted on 30/11/2020 during the time frame of 16:37:45-16:43:34 PM, arguing that he had neither performed them himself nor authorized anyone else to do so. The plaintiff specifically disputed the following transactions, as evidenced by the account activity of the disputed credit card during the aforementioned timeframe: a) transaction number ......... amounting to €150.62 conducted on 30/11/2020 at 4:43:34 PM, b) transaction number ........ amounting to €293.20 conducted on 30/11/2020 at 4:42:40 PM, c) transaction number ............ amounting to €295.21 conducted on 30/11/2020 at 4:42:10 PM, d) transaction number .......... amounting to €299.22 conducted on 30/11/2020 at 4:41:31 PM, e) transaction number ........ amounting to €297.21 conducted on 30/11/2020 at 4:41:01 PM, f) transaction number ........ amounting to €299.22 conducted on 30/11/2020 at 4:40:27 PM, g) transaction number ....... amounting to €299.22 conducted on 30/11/2020 at 4:39:55 PM, h) transaction number ...... amounting to €299.22 conducted on 30/11/2020 at 4:39:22 PM, i) transaction number ......... amounting to €297.22 conducted on 30/11/2020 at 4:38:52 PM, j) transaction number ......... 
amounting to €295.21 conducted on 30/11/2020 at 4:38:17 PM, and k) transaction number ......... amounting to €296.21 conducted on 30/11/2020 at 4:37:45 PM. In its response letter dated 21/12/2020, the defendant denied responsibility for the costs of the aforementioned transactions, placing the entire blame on the plaintiff for the leak of his card details and security code to the fraudulent page. The plaintiff, completely denying any fault for the conducted transactions, repeatedly contacted the defendant, both by phone and via email (see emails dated 15/1/2021 and 11/2/2021), while on 2/3/2021, he electronically sent a report dated 1/03/2021 to the Consumer Advocate’s email address, recounting the events and requesting that the aforementioned Independent Authority intervene to have the disputed debt canceled. In its letter with reference number ...../27.04.2021, the aforementioned Independent Authority informed the plaintiff that the case was outside its mediating role and was therefore archived. Subsequently, the plaintiff sent the defendant on 5/3/2021 his extrajudicial statement dated 4/3/2021, calling upon it to fully cancel the debt of €3,121.75 that had been unjustly incurred against him within two days and to immediately instruct the representatives of the collection agency working with it to cease contacting him regarding the disputed case. The defendant sent the plaintiff a message on his mobile phone on 20/04/2021 informing him that his case was still being processed due to lengthy operational requirements, while on 23/04/2021, via email, it informed him that considering their good cooperation and his efforts to keep them updated, it had reviewed his case and decided to refund him the amounts of the transactions that were conducted after his contact with their representatives on 30/11/2020 at 4:41 PM, totaling €1,038.25, specifically the following: a) transaction of €150.62 conducted on 30/11/2020 at 4:43 PM, b) transaction of €295.21 conducted on 30/11/2020 at 4:42 PM, c) transaction of €293.20 conducted on 30/11/2020 at 4:42 PM, and d) transaction of €299.22 conducted on 30/11/2020 at 4:41 PM. Beyond this, the defendant refused to refund the plaintiff the amount of the remaining transactions conducted on 30/11/2020, totaling €2,376.08 (and not €2,376.48 as incorrectly stated by the plaintiff in his lawsuit), which the plaintiff ultimately fully paid, transferring €2,342.77 to the defendant on 7/06/2021 and €33.31 on 15/06/2021 (see related deposit receipts).'], ['What is the amount of the transaction conducted on 30/11/2020 at 4:39:55 PM?', 'Court (Civil/Criminal):\nProvisions:\nTime of commission of the act:\nOutcome (not guilty, guilty): ORDERS the defendant to pay the plaintiff the amount of two thousand four hundred thirty-four euros and eighty-three cents (€2,434.83) with legal interest from the service of the lawsuit.\n\nReasoning: Law 4537/2018 introduces mandatory provisions in favor of users, as according to Article 103, payment service providers are prohibited from deviating from the provisions to the detriment of payment service users, unless the possibility of deviation is expressly provided, and they can decide to offer only more favorable terms to payment service users. Under this law and its provisions, providers are only liable when there are unusual and unforeseen circumstances beyond the control of the party invoking them, and whose consequences could not have been avoided despite efforts to the contrary. 
However, operational risks and security risks of the system do not constitute unusual and unforeseen circumstances, so any damage to users resulting from their occurrence falls on the providers. Furthermore, the authenticity of the disputed transaction, namely the payment act, is not proven, in the sense that none of the beneficiaries of the contested joint account, namely the plaintiff or her husband, had given their consent as stipulated in Article 64 of Law 4537/2018. Burden of proof. The payment service provider of the payer is liable to the payer for the proper execution of the payment act, unless it proves to the payer that the service provider of the beneficiary received the amount of the payment act according to paragraph 1 of Article 83 of Law 4537/2018.\n\nFacts:'], ['What is the amount of the transaction conducted on 30/11/2020 at 4:39:55 PM?', 'Court (Civil/Criminal): Civil \nProvisions: Law 4537/2018. \nTime of commission of act: \nOutcome (not guilty, guilty): \nReasoning: PARTIALLY ACCEPTS the lawsuit. RECOGNIZES the obligation of the defendant (a) to pay the plaintiffs in full the amount of eight thousand eight hundred ninety (8,890) euros, with legal interest from December 2, 2021, and (b) to pay each of the plaintiffs the amount of five hundred (500) euros with legal interest from the service of the lawsuit. \nFacts: The plaintiffs claim that they are co-beneficiaries of a savings account held by the defendant, and that unknown perpetrators gained access to the aforementioned account via the internet, without the plaintiffs themselves having any fault regarding the safeguarding of the codes or the disclosure of the unique transaction codes (OTR). They assert that the defendant is responsible for the access gained by the unknown perpetrators to the savings account, as the defendant negligently violated the protective obligations it owed to the plaintiffs. They state that, due to the actions of the unknown perpetrators, gradual transfers of monetary amounts were made, resulting in the aforementioned savings account being depleted by the amount of 10,120 euros within a few minutes. They informed the defendant of the aforementioned actions through the appropriate channels; however, the defendant negligently delayed the necessary actions. The defendant denies any liability and the return of the aforementioned monetary amount.'], ['What is the amount of the transaction conducted on 30/11/2020 at 4:39:55 PM?', "**Court (Civil/Criminal):**\nProvisions: Articles 8 of Law 2251/1994, Articles 2, 4, 48 et seq. of Law 4537/2018, Article 11 paragraph 1 of Law 4261/2014, Articles 830, 806, 827, 914, 932 of the Civil Code and 176 of the Code of Civil Procedure.\nTime of commission of the act:\nOutcome (not guilty, guilty):\nRationale: Electronic fraud through the method of phishing. A third party fraudulently obtained money from the plaintiff's bank account and transferred it to another bank account. Both the defendant is liable for the inadequate protection of its systems, which should have been excellent, and the plaintiff who failed to fulfill his obligation to protect his information and disregarded the defendant's security instructions. Law 4537/2018 introduces mandatory law in favor of users, as according to Article 103, payment service providers are prohibited from deviating from the provisions to the detriment of payment service users. 
It is determined that a resumption of the discussion should be ordered in order to provide all possible evidence, with diligence from both parties, especially from the defendant, who has access to the transaction data through its systems, but also bears the relevant burden of proof concerning the exact timing of the execution of the money transfer order at each stage (withdrawal from the plaintiff's account, transfer to another bank, transfer to the third party's account).\nFacts: The plaintiff maintains a joint bank account with his wife at the defendant bank and has also agreed to online banking transactions (e-banking). On July 31, 2020, at 13:45, the plaintiff was informed of a transfer of €3,000 from his account, which he had not initiated, nor had his wife. At 14:05, he immediately contacted the bank’s customer service line and reported the incident, stating that it was not his action and requesting its cancellation. The bank employee found that the plaintiff had provided his details to a fake website 10 days earlier, and subsequently, the mobile number used for transaction confirmations had been changed. The employee informed him that the money was at the other bank and that they would logically be able to retrieve it, provided it had not already been transferred to a third party's account. Since then, the plaintiff has not seen any return of the amount to his account, and he has made numerous attempts to resolve the issue with the bank, with effort, costs, and distress; however, nothing was achieved, as the money had already entered a third party's account and the defendant denied responsibility for the transfer of the funds.\nFacts: The plaintiff maintained a joint account with his wife at a bank and used internet banking services. On July 21, 2020, a third party deceived the plaintiff through phishing (a misleading SMS with a link), obtaining his banking credentials. The third party, using the stolen information, requested a phone number change for receiving OTP (one-time password) and completing electronic transactions. The bank completed the change process based on the correct credentials. On July 31, 2020, a transfer of €3,000 was made from the plaintiff's account to a third party. The plaintiff was immediately informed, called the bank, and reported the fraud; however, the recovery of the funds was not successful. The plaintiff claims that the bank is responsible for inadequate protection of its systems, while the bank asserts that it followed the procedure based on the agreed identification methods. \nThe court recognizes that there is responsibility on both sides: the bank for inadequate security and prevention of phishing, and the plaintiff for negligence in safeguarding his personal information, despite the bank's relevant warnings. A critical issue is the exact timing of the completion of the transfer: if the bank was timely notified of the fraud but did not intervene, it may be fully liable. The court requests a resumption of the discussion and further evidence, mainly from the bank, which has access to the relevant technical details."], ['What is the amount of the transaction conducted on 30/11/2020 at 4:39:55 PM?', '**Court (Civil/Criminal): Civil** \n**Provisions:** \n**Time of commission of the act:** \n**Outcome (not guilty, guilty):** \n**Reasoning:** Partially accepts the lawsuit. \n**Facts:** The plaintiff, who works as a lawyer, maintains a savings account with the defendant banking corporation under account number GR.............. 
Pursuant to a contract dated June 11, 2010, established in Thessaloniki between the defendant and the plaintiff, the plaintiff was granted access to the electronic banking system (e-banking) to conduct banking transactions remotely. On October 10, 2020, the plaintiff fell victim to electronic fraud through the "phishing" method, whereby an unknown perpetrator managed to extract and transfer €3,000.00 from the plaintiff’s account to another account of the same bank. Specifically, on that day at 6:51 a.m., the plaintiff received an email from the sender ".........", with the address ..........., informing him that his debit card had been suspended and that online payments and cash withdrawals could not be made until the issue was resolved. The email urged him to confirm his details within the next 72 hours by following a link titled "card activation." \nThe plaintiff read the above email on his mobile phone around 8:00 a.m., and believing it came from the defendant, he followed the instructions and accessed a website that was identical (a clone) to that of the defendant. On this page, he was asked to enter his login credentials to connect to the service, which he did, and he was subsequently asked to input his debit card details for the alleged activation, which he also provided. Then, to complete the process, a number was sent to his mobile phone at 8:07 a.m. from the sender ........, which he entered, and two minutes later he received a message from the same sender in English stating that the quick access code had been activated on his mobile. A few minutes later, at 8:18 a.m., he received an email from the defendant informing him of the transfer of €3,000.00 from his account to account number GR ........... held at the same bank, with the beneficiary\'s details being .......... As soon as the plaintiff read this, he immediately called the defendant\'s call center and canceled his debit card, the access codes for the service ......., and locked the application .......... At the same time, he verbally submitted a request to dispute and cancel the contested transaction, and in a subsequent phone call, he also canceled his credit card. On the same day, he also sent an email to the defendant informing them in writing of the above and requesting the cancellation of the transaction and the return of the amount of €3,000.00 to his account, as this transfer was not made by him but by an unknown perpetrator through electronic fraud and was not approved by him. It should also be noted that the plaintiff, as the sole beneficiary according to the aforementioned contract for using the defendant\'s Internet Banking service, never received any update via SMS or the VIBER application from the bank regarding the transaction details before its completion, nor did he receive a one-time code (OTP) to approve the contested transaction. He subsequently filed a complaint against unknown persons at the Cyber Crime Division for the crime of fraud. The defendant sent an email to the plaintiff on October 16, 2020, informing him that his request had been forwarded to the appropriate department of the bank for investigation, stating that the bank would never send him an email or SMS asking him to enter his personal data and that as of October 7, 2020, there was a notice posted for its customers regarding malicious attempts to steal personal data in the "Our News" section on ....... 
A month after the disputed incident, on November 10, 2020, an amount of €2,296.82 was transferred to the plaintiff\'s account from the account to which the fraudulent credit had been made. The plaintiff immediately sent an email to the defendant asking to be informed whether this transfer was a return of part of the amount that had been illegally withdrawn from his account and requested the return of the remaining amount of €703.18. In its response dated January 13, 2021, the defendant confirmed that the aforementioned amount indeed came from the account to which the fraudulent credit had been made, following a freeze of that account initiated by the defendant during the investigation of the incident, but refused to return the remaining amount, claiming it bore no responsibility for the leak of the personal codes to third parties, according to the terms of the service contract established between them. \nFrom the entirety of the evidence presented to the court, there is no indication of the authenticity of the contested transaction, as the plaintiff did not give his consent for the execution of the transfer of the amount of €3,000.00, especially in light of the provision in Article 72 paragraph 2 of Law 4537/2018 stating that the mere use of the Internet Banking service by the plaintiff does not necessarily constitute sufficient evidence that the payer approved the payment action. Specifically, it was proven that the contested transaction was not carried out following a strong identification of the plaintiff – the sole beneficiary of the account – and his approval, as the latter may have entered his personal codes on the counterfeit website; however, he was never informed, before the completion of the contested transaction, of the amount that would be transferred from his account to a third-party account, nor did he receive on his mobile phone, either via SMS or through the VIBER application or any other means, the one-time code - extra PIN for its completion, which he was required to enter to approve the contested transaction (payment action) and thus complete his identification, a fact that was not countered by any evidence from the defendant. Furthermore, it is noted that the defendant\'s claims that it bears no responsibility under the terms of the banking services contract, whereby it is not liable for any damage to its customer in cases of unauthorized use of their personal access codes to the Internet Banking service, are to be rejected as fundamentally unfounded. This is because the aforementioned contractual terms are invalid according to the provision of Article 103 of Law 4537/2018, as they contradict the provisions of Articles 71, 73, and 92 of the same Law, which provide for the provider\'s universal liability and its exemption only for unusual and unforeseen circumstances that are beyond the control of the party invoking them and whose consequences could not have been avoided despite all efforts to the contrary; these provisions establish mandatory law in favor of users, as according to Article 103 of Law 4537/2018, payment service providers are prohibited from deviating from the provisions to the detriment of payment service users, unless the possibility of deviation is explicitly provided and they can decide to offer only more favorable terms to payment service users; the aforementioned contractual terms do not constitute more favorable terms but rather disadvantageous terms for the payment service user. 
In this case, however, the defendant did not prove the authenticity of the transaction and its approval by the plaintiff and did not invoke, nor did any unusual and unforeseen circumstances beyond its control, the consequences of which could not have been avoided despite all efforts to the contrary, come to light. Therefore, the contested transaction transferring the amount of €3,000.00 is considered, in the absence of demonstrable consent from the plaintiff, unapproved according to the provisions of Article 64 of Law 4537/2018, and the defendant\'s contrary claims are rejected, especially since the plaintiff proceeded, according to Article 71 paragraph 1 of Law 4537/2018, without undue delay to notify the defendant regarding the contested unapproved payment action. Consequently, the defendant is liable for compensating the plaintiff for the positive damage he suffered under Article 73 of Law 4537/2018 and is obliged to pay him the requested amount of €703.18, while the plaintiff’s fault in the occurrence of this damage cannot be established, as he entered his personal details in an online environment that was a faithful imitation of that of the defendant, as evidenced by the comparison of the screenshots of the fake website and the real website provided by the plaintiff, a fact that he could not have known while being fully convinced that he was transacting with the defendant. Furthermore, the defendant’s liability to compensate the plaintiff is based on the provision of Article 8 of Law 2251/1994, which applies in this case, as the plaintiff\'s damage resulted from inadequate fulfillment of its obligations in the context of providing its services, but also on the provision of Article 914 of the Civil Code in the sense of omission on its part of unlawfully and culpably imposed actions. In this case, given that during the relevant period there had been a multitude of similar incidents of fraud against the defendant\'s customers, the latter, as a service provider to the consumer public and bearing transactional obligations of care and security towards them, displayed gross negligence regarding the security provided for electronic transaction services, which was compromised by the fraudulent theft of funds, as it did not comply with all required high-security measures for executing the contested transaction, failing to implement the strict customer identification verification process and to check the authenticity of the account to which the funds were sent, thus not assuming the suspicious nature of the transaction, did not adopt comprehensive and improved protective measures to fully protect its customers against malicious attacks and online fraud and to prevent the infiltration of unauthorized third parties, nor did it fulfill its obligations to inform, accurately inform, and warn its consumers - customers, as it failed to adequately inform them of attempts to steal their personal data through the sending of informative emails or SMS, while merely posting in a section rather than on a central banner (as it later did) does not constitute adequate information such that it meets the requirement of protecting its customers and the increased safeguarding of their interests. 
Although the plaintiff acted promptly and informed the defendant on the same day about the contested incident, the defendant did not act as promptly regarding the investigation of the incident and the freezing of the account that held the fraudulent credit to prevent the plaintiff\'s loss, but only returned part of the funds to the plaintiff a month later. This behavior, beyond being culpable due to gross negligence, was also unlawful, as it would have been illegal even without the contractual relationship, as contrary to the provisions of Law 4537/2018 and Law 2251/1994, regarding the lack of security of the services that the consumer is legitimately entitled to expect, as well as the building of trust that is essential in banking transactions, elements that it was obligated to provide within the sphere of the services offered, and contrary to the principles of good faith and commercial ethics, as crystallized in the provision of Article 288 of the Civil Code, as well as the general duty imposed by Article 914 of the Civil Code not to cause harm to another culpably. This resulted not only in positive damage to the plaintiff but also in causing him moral harm consisting of his mental distress and the disruption, agitation, and sorrow he experienced, for which he must be awarded financial compensation. Taking into account all the general circumstances of the case, the extent of the plaintiff\'s damage, the severity of the defendant\'s fault, the mental distress suffered by the plaintiff, the insecurity he felt regarding his deposits, the sorrow he experienced, and the stress caused by his financial loss, which occurred during the pandemic period when his earnings from his professional activity had significantly decreased, as well as the financial and social situation of the parties, it is the court\'s opinion that he should be granted, as financial compensation for his moral harm, an amount of €250.00, which is deemed reasonable and fair. Therefore, the total monetary amount that the plaintiff is entitled to for his positive damage and financial compensation for the moral harm suffered amounts to a total of (€703.18 + €250.00) = €953.18.'], ] scores = model.predict(pairs) print(scores.shape) # (5,) # Or rank different texts based on similarity to a single text ranks = model.rank( 'What is the amount of the transaction conducted on 30/11/2020 at 4:39:55 PM?', [ '**Court (Civil/Criminal): Civil**\n\n**Provisions:**\n\n**Time of commission of the act:**\n\n**Outcome (not guilty, guilty):**\n\n**Rationale:**\n\n**Facts:**\nThe plaintiff holds credit card number ............ with the defendant banking corporation. Based on the application for alternative networks dated 19/7/2015 with number ......... submitted at a branch of the defendant, he was granted access to the electronic banking service (e-banking) to conduct banking transactions (debit, credit, updates, payments) remotely. On 30/11/2020, the plaintiff fell victim to electronic fraud through the "phishing" method, whereby an unknown perpetrator managed to withdraw a total amount of €3,121.75 from the aforementioned credit card. Specifically, the plaintiff received an email at 1:35 PM on 29/11/2020 from sender ...... with address ........, informing him that due to an impending system change, he needed to verify the mobile phone number linked to the credit card, urging him to complete the verification process within the next 24 hours by following a link titled ........; otherwise, his account would be locked for security reasons. 
The plaintiff read this email on the afternoon of 30 November 2020 and, believing it was from the defendant, followed the instructions and proceeded via the provided link to a website that was identical (a clone) to that of the defendant. On this page, he was asked to enter the six-digit security code (.........) that had just been sent to his mobile phone by the defendant at 3:41 PM, with the note that it was an activation code for his ........ card at ........., which he entered.\n\nSubsequently, the plaintiff received, according to his statements, a new email (not submitted), which requested him to enter the details of the aforementioned credit card, specifically the name of the cardholder and the card number, not the PIN, which he also entered, convinced that he was within the online environment of the defendant. Then, at 3:47 PM, he received a message on his mobile phone from the defendant containing the exact same content as the one he received at 3:41 PM, while at 3:50 PM he received a message stating that the activation of his ......... card at ....... had been completed. Once the plaintiff read this, he became concerned that something was not right, and immediately called (at 4:41 PM) the defendant\'s call center to inform them. There, the employees, with whom he finally connected at 5:04 PM due to high call center volume, advised him to delete the relevant emails, cancel his credit card, change his access passwords for the service, and submit a dispute request regarding the conducted transactions. The plaintiff electronically sent this request to the defendant, disputing the detailed transactions amounting to €3,121.75, which were conducted on 30/11/2020 during the time frame of 16:37:45-16:43:34 PM, arguing that he had neither performed them himself nor authorized anyone else to do so. The plaintiff specifically disputed the following transactions, as evidenced by the account activity of the disputed credit card during the aforementioned timeframe: a) transaction number ......... amounting to €150.62 conducted on 30/11/2020 at 4:43:34 PM, b) transaction number ........ amounting to €293.20 conducted on 30/11/2020 at 4:42:40 PM, c) transaction number ............ amounting to €295.21 conducted on 30/11/2020 at 4:42:10 PM, d) transaction number .......... amounting to €299.22 conducted on 30/11/2020 at 4:41:31 PM, e) transaction number ........ amounting to €297.21 conducted on 30/11/2020 at 4:41:01 PM, f) transaction number ........ amounting to €299.22 conducted on 30/11/2020 at 4:40:27 PM, g) transaction number ....... amounting to €299.22 conducted on 30/11/2020 at 4:39:55 PM, h) transaction number ...... amounting to €299.22 conducted on 30/11/2020 at 4:39:22 PM, i) transaction number ......... amounting to €297.22 conducted on 30/11/2020 at 4:38:52 PM, j) transaction number ......... amounting to €295.21 conducted on 30/11/2020 at 4:38:17 PM, and k) transaction number ......... amounting to €296.21 conducted on 30/11/2020 at 4:37:45 PM. In its response letter dated 21/12/2020, the defendant denied responsibility for the costs of the aforementioned transactions, placing the entire blame on the plaintiff for the leak of his card details and security code to the fraudulent page. 
The plaintiff, completely denying any fault for the conducted transactions, repeatedly contacted the defendant, both by phone and via email (see emails dated 15/1/2021 and 11/2/2021), while on 2/3/2021, he electronically sent a report dated 1/03/2021 to the Consumer Advocate’s email address, recounting the events and requesting that the aforementioned Independent Authority intervene to have the disputed debt canceled. In its letter with reference number ...../27.04.2021, the aforementioned Independent Authority informed the plaintiff that the case was outside its mediating role and was therefore archived. Subsequently, the plaintiff sent the defendant on 5/3/2021 his extrajudicial statement dated 4/3/2021, calling upon it to fully cancel the debt of €3,121.75 that had been unjustly incurred against him within two days and to immediately instruct the representatives of the collection agency working with it to cease contacting him regarding the disputed case. The defendant sent the plaintiff a message on his mobile phone on 20/04/2021 informing him that his case was still being processed due to lengthy operational requirements, while on 23/04/2021, via email, it informed him that considering their good cooperation and his efforts to keep them updated, it had reviewed his case and decided to refund him the amounts of the transactions that were conducted after his contact with their representatives on 30/11/2020 at 4:41 PM, totaling €1,038.25, specifically the following: a) transaction of €150.62 conducted on 30/11/2020 at 4:43 PM, b) transaction of €295.21 conducted on 30/11/2020 at 4:42 PM, c) transaction of €293.20 conducted on 30/11/2020 at 4:42 PM, and d) transaction of €299.22 conducted on 30/11/2020 at 4:41 PM. Beyond this, the defendant refused to refund the plaintiff the amount of the remaining transactions conducted on 30/11/2020, totaling €2,376.08 (and not €2,376.48 as incorrectly stated by the plaintiff in his lawsuit), which the plaintiff ultimately fully paid, transferring €2,342.77 to the defendant on 7/06/2021 and €33.31 on 15/06/2021 (see related deposit receipts).', 'Court (Civil/Criminal):\nProvisions:\nTime of commission of the act:\nOutcome (not guilty, guilty): ORDERS the defendant to pay the plaintiff the amount of two thousand four hundred thirty-four euros and eighty-three cents (€2,434.83) with legal interest from the service of the lawsuit.\n\nReasoning: Law 4537/2018 introduces mandatory provisions in favor of users, as according to Article 103, payment service providers are prohibited from deviating from the provisions to the detriment of payment service users, unless the possibility of deviation is expressly provided, and they can decide to offer only more favorable terms to payment service users. Under this law and its provisions, providers are only liable when there are unusual and unforeseen circumstances beyond the control of the party invoking them, and whose consequences could not have been avoided despite efforts to the contrary. However, operational risks and security risks of the system do not constitute unusual and unforeseen circumstances, so any damage to users resulting from their occurrence falls on the providers. Furthermore, the authenticity of the disputed transaction, namely the payment act, is not proven, in the sense that none of the beneficiaries of the contested joint account, namely the plaintiff or her husband, had given their consent as stipulated in Article 64 of Law 4537/2018. Burden of proof. 
The payment service provider of the payer is liable to the payer for the proper execution of the payment act, unless it proves to the payer that the service provider of the beneficiary received the amount of the payment act according to paragraph 1 of Article 83 of Law 4537/2018.\n\nFacts:', 'Court (Civil/Criminal): Civil \nProvisions: Law 4537/2018. \nTime of commission of act: \nOutcome (not guilty, guilty): \nReasoning: PARTIALLY ACCEPTS the lawsuit. RECOGNIZES the obligation of the defendant (a) to pay the plaintiffs in full the amount of eight thousand eight hundred ninety (8,890) euros, with legal interest from December 2, 2021, and (b) to pay each of the plaintiffs the amount of five hundred (500) euros with legal interest from the service of the lawsuit. \nFacts: The plaintiffs claim that they are co-beneficiaries of a savings account held by the defendant, and that unknown perpetrators gained access to the aforementioned account via the internet, without the plaintiffs themselves having any fault regarding the safeguarding of the codes or the disclosure of the unique transaction codes (OTR). They assert that the defendant is responsible for the access gained by the unknown perpetrators to the savings account, as the defendant negligently violated the protective obligations it owed to the plaintiffs. They state that, due to the actions of the unknown perpetrators, gradual transfers of monetary amounts were made, resulting in the aforementioned savings account being depleted by the amount of 10,120 euros within a few minutes. They informed the defendant of the aforementioned actions through the appropriate channels; however, the defendant negligently delayed the necessary actions. The defendant denies any liability and the return of the aforementioned monetary amount.', "**Court (Civil/Criminal):**\nProvisions: Articles 8 of Law 2251/1994, Articles 2, 4, 48 et seq. of Law 4537/2018, Article 11 paragraph 1 of Law 4261/2014, Articles 830, 806, 827, 914, 932 of the Civil Code and 176 of the Code of Civil Procedure.\nTime of commission of the act:\nOutcome (not guilty, guilty):\nRationale: Electronic fraud through the method of phishing. A third party fraudulently obtained money from the plaintiff's bank account and transferred it to another bank account. Both the defendant is liable for the inadequate protection of its systems, which should have been excellent, and the plaintiff who failed to fulfill his obligation to protect his information and disregarded the defendant's security instructions. Law 4537/2018 introduces mandatory law in favor of users, as according to Article 103, payment service providers are prohibited from deviating from the provisions to the detriment of payment service users. It is determined that a resumption of the discussion should be ordered in order to provide all possible evidence, with diligence from both parties, especially from the defendant, who has access to the transaction data through its systems, but also bears the relevant burden of proof concerning the exact timing of the execution of the money transfer order at each stage (withdrawal from the plaintiff's account, transfer to another bank, transfer to the third party's account).\nFacts: The plaintiff maintains a joint bank account with his wife at the defendant bank and has also agreed to online banking transactions (e-banking). On July 31, 2020, at 13:45, the plaintiff was informed of a transfer of €3,000 from his account, which he had not initiated, nor had his wife. 
At 14:05, he immediately contacted the bank’s customer service line and reported the incident, stating that it was not his action and requesting its cancellation. The bank employee found that the plaintiff had provided his details to a fake website 10 days earlier, and subsequently, the mobile number used for transaction confirmations had been changed. The employee informed him that the money was at the other bank and that they would logically be able to retrieve it, provided it had not already been transferred to a third party's account. Since then, the plaintiff has not seen any return of the amount to his account, and he has made numerous attempts to resolve the issue with the bank, with effort, costs, and distress; however, nothing was achieved, as the money had already entered a third party's account and the defendant denied responsibility for the transfer of the funds.\nFacts: The plaintiff maintained a joint account with his wife at a bank and used internet banking services. On July 21, 2020, a third party deceived the plaintiff through phishing (a misleading SMS with a link), obtaining his banking credentials. The third party, using the stolen information, requested a phone number change for receiving OTP (one-time password) and completing electronic transactions. The bank completed the change process based on the correct credentials. On July 31, 2020, a transfer of €3,000 was made from the plaintiff's account to a third party. The plaintiff was immediately informed, called the bank, and reported the fraud; however, the recovery of the funds was not successful. The plaintiff claims that the bank is responsible for inadequate protection of its systems, while the bank asserts that it followed the procedure based on the agreed identification methods. \nThe court recognizes that there is responsibility on both sides: the bank for inadequate security and prevention of phishing, and the plaintiff for negligence in safeguarding his personal information, despite the bank's relevant warnings. A critical issue is the exact timing of the completion of the transfer: if the bank was timely notified of the fraud but did not intervene, it may be fully liable. The court requests a resumption of the discussion and further evidence, mainly from the bank, which has access to the relevant technical details.", '**Court (Civil/Criminal): Civil** \n**Provisions:** \n**Time of commission of the act:** \n**Outcome (not guilty, guilty):** \n**Reasoning:** Partially accepts the lawsuit. \n**Facts:** The plaintiff, who works as a lawyer, maintains a savings account with the defendant banking corporation under account number GR.............. Pursuant to a contract dated June 11, 2010, established in Thessaloniki between the defendant and the plaintiff, the plaintiff was granted access to the electronic banking system (e-banking) to conduct banking transactions remotely. On October 10, 2020, the plaintiff fell victim to electronic fraud through the "phishing" method, whereby an unknown perpetrator managed to extract and transfer €3,000.00 from the plaintiff’s account to another account of the same bank. Specifically, on that day at 6:51 a.m., the plaintiff received an email from the sender ".........", with the address ..........., informing him that his debit card had been suspended and that online payments and cash withdrawals could not be made until the issue was resolved. The email urged him to confirm his details within the next 72 hours by following a link titled "card activation." 
\nThe plaintiff read the above email on his mobile phone around 8:00 a.m., and believing it came from the defendant, he followed the instructions and accessed a website that was identical (a clone) to that of the defendant. On this page, he was asked to enter his login credentials to connect to the service, which he did, and he was subsequently asked to input his debit card details for the alleged activation, which he also provided. Then, to complete the process, a number was sent to his mobile phone at 8:07 a.m. from the sender ........, which he entered, and two minutes later he received a message from the same sender in English stating that the quick access code had been activated on his mobile. A few minutes later, at 8:18 a.m., he received an email from the defendant informing him of the transfer of €3,000.00 from his account to account number GR ........... held at the same bank, with the beneficiary\'s details being .......... As soon as the plaintiff read this, he immediately called the defendant\'s call center and canceled his debit card, the access codes for the service ......., and locked the application .......... At the same time, he verbally submitted a request to dispute and cancel the contested transaction, and in a subsequent phone call, he also canceled his credit card. On the same day, he also sent an email to the defendant informing them in writing of the above and requesting the cancellation of the transaction and the return of the amount of €3,000.00 to his account, as this transfer was not made by him but by an unknown perpetrator through electronic fraud and was not approved by him. It should also be noted that the plaintiff, as the sole beneficiary according to the aforementioned contract for using the defendant\'s Internet Banking service, never received any update via SMS or the VIBER application from the bank regarding the transaction details before its completion, nor did he receive a one-time code (OTP) to approve the contested transaction. He subsequently filed a complaint against unknown persons at the Cyber Crime Division for the crime of fraud. The defendant sent an email to the plaintiff on October 16, 2020, informing him that his request had been forwarded to the appropriate department of the bank for investigation, stating that the bank would never send him an email or SMS asking him to enter his personal data and that as of October 7, 2020, there was a notice posted for its customers regarding malicious attempts to steal personal data in the "Our News" section on ....... A month after the disputed incident, on November 10, 2020, an amount of €2,296.82 was transferred to the plaintiff\'s account from the account to which the fraudulent credit had been made. The plaintiff immediately sent an email to the defendant asking to be informed whether this transfer was a return of part of the amount that had been illegally withdrawn from his account and requested the return of the remaining amount of €703.18. In its response dated January 13, 2021, the defendant confirmed that the aforementioned amount indeed came from the account to which the fraudulent credit had been made, following a freeze of that account initiated by the defendant during the investigation of the incident, but refused to return the remaining amount, claiming it bore no responsibility for the leak of the personal codes to third parties, according to the terms of the service contract established between them. 
\nFrom the entirety of the evidence presented to the court, there is no indication of the authenticity of the contested transaction, as the plaintiff did not give his consent for the execution of the transfer of the amount of €3,000.00, especially in light of the provision in Article 72 paragraph 2 of Law 4537/2018 stating that the mere use of the Internet Banking service by the plaintiff does not necessarily constitute sufficient evidence that the payer approved the payment action. Specifically, it was proven that the contested transaction was not carried out following a strong identification of the plaintiff – the sole beneficiary of the account – and his approval, as the latter may have entered his personal codes on the counterfeit website; however, he was never informed, before the completion of the contested transaction, of the amount that would be transferred from his account to a third-party account, nor did he receive on his mobile phone, either via SMS or through the VIBER application or any other means, the one-time code - extra PIN for its completion, which he was required to enter to approve the contested transaction (payment action) and thus complete his identification, a fact that was not countered by any evidence from the defendant. Furthermore, it is noted that the defendant\'s claims that it bears no responsibility under the terms of the banking services contract, whereby it is not liable for any damage to its customer in cases of unauthorized use of their personal access codes to the Internet Banking service, are to be rejected as fundamentally unfounded. This is because the aforementioned contractual terms are invalid according to the provision of Article 103 of Law 4537/2018, as they contradict the provisions of Articles 71, 73, and 92 of the same Law, which provide for the provider\'s universal liability and its exemption only for unusual and unforeseen circumstances that are beyond the control of the party invoking them and whose consequences could not have been avoided despite all efforts to the contrary; these provisions establish mandatory law in favor of users, as according to Article 103 of Law 4537/2018, payment service providers are prohibited from deviating from the provisions to the detriment of payment service users, unless the possibility of deviation is explicitly provided and they can decide to offer only more favorable terms to payment service users; the aforementioned contractual terms do not constitute more favorable terms but rather disadvantageous terms for the payment service user. In this case, however, the defendant did not prove the authenticity of the transaction and its approval by the plaintiff and did not invoke, nor did any unusual and unforeseen circumstances beyond its control, the consequences of which could not have been avoided despite all efforts to the contrary, come to light. Therefore, the contested transaction transferring the amount of €3,000.00 is considered, in the absence of demonstrable consent from the plaintiff, unapproved according to the provisions of Article 64 of Law 4537/2018, and the defendant\'s contrary claims are rejected, especially since the plaintiff proceeded, according to Article 71 paragraph 1 of Law 4537/2018, without undue delay to notify the defendant regarding the contested unapproved payment action. 
Consequently, the defendant is liable for compensating the plaintiff for the positive damage he suffered under Article 73 of Law 4537/2018 and is obliged to pay him the requested amount of €703.18, while the plaintiff’s fault in the occurrence of this damage cannot be established, as he entered his personal details in an online environment that was a faithful imitation of that of the defendant, as evidenced by the comparison of the screenshots of the fake website and the real website provided by the plaintiff, a fact that he could not have known while being fully convinced that he was transacting with the defendant. Furthermore, the defendant’s liability to compensate the plaintiff is based on the provision of Article 8 of Law 2251/1994, which applies in this case, as the plaintiff\'s damage resulted from inadequate fulfillment of its obligations in the context of providing its services, but also on the provision of Article 914 of the Civil Code in the sense of omission on its part of unlawfully and culpably imposed actions. In this case, given that during the relevant period there had been a multitude of similar incidents of fraud against the defendant\'s customers, the latter, as a service provider to the consumer public and bearing transactional obligations of care and security towards them, displayed gross negligence regarding the security provided for electronic transaction services, which was compromised by the fraudulent theft of funds, as it did not comply with all required high-security measures for executing the contested transaction, failing to implement the strict customer identification verification process and to check the authenticity of the account to which the funds were sent, thus not assuming the suspicious nature of the transaction, did not adopt comprehensive and improved protective measures to fully protect its customers against malicious attacks and online fraud and to prevent the infiltration of unauthorized third parties, nor did it fulfill its obligations to inform, accurately inform, and warn its consumers - customers, as it failed to adequately inform them of attempts to steal their personal data through the sending of informative emails or SMS, while merely posting in a section rather than on a central banner (as it later did) does not constitute adequate information such that it meets the requirement of protecting its customers and the increased safeguarding of their interests. Although the plaintiff acted promptly and informed the defendant on the same day about the contested incident, the defendant did not act as promptly regarding the investigation of the incident and the freezing of the account that held the fraudulent credit to prevent the plaintiff\'s loss, but only returned part of the funds to the plaintiff a month later. This behavior, beyond being culpable due to gross negligence, was also unlawful, as it would have been illegal even without the contractual relationship, as contrary to the provisions of Law 4537/2018 and Law 2251/1994, regarding the lack of security of the services that the consumer is legitimately entitled to expect, as well as the building of trust that is essential in banking transactions, elements that it was obligated to provide within the sphere of the services offered, and contrary to the principles of good faith and commercial ethics, as crystallized in the provision of Article 288 of the Civil Code, as well as the general duty imposed by Article 914 of the Civil Code not to cause harm to another culpably. 
This resulted not only in positive damage to the plaintiff but also in causing him moral harm consisting of his mental distress and the disruption, agitation, and sorrow he experienced, for which he must be awarded financial compensation. Taking into account all the general circumstances of the case, the extent of the plaintiff\'s damage, the severity of the defendant\'s fault, the mental distress suffered by the plaintiff, the insecurity he felt regarding his deposits, the sorrow he experienced, and the stress caused by his financial loss, which occurred during the pandemic period when his earnings from his professional activity had significantly decreased, as well as the financial and social situation of the parties, it is the court\'s opinion that he should be granted, as financial compensation for his moral harm, an amount of €250.00, which is deemed reasonable and fair. Therefore, the total monetary amount that the plaintiff is entitled to for his positive damage and financial compensation for the moral harm suffered amounts to a total of (€703.18 + €250.00) = €953.18.', ] ) # [{'corpus_id': ..., 'score': ...}, {'corpus_id': ..., 'score': ...}, ...] ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. <details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> ## Evaluation ### Metrics #### Cross Encoder Reranking * Dataset: `gooaq-dev` * Evaluated with [<code>CrossEncoderRerankingEvaluator</code>](https://sbert.net/docs/package_reference/cross_encoder/evaluation.html#sentence_transformers.cross_encoder.evaluation.CrossEncoderRerankingEvaluator) with these parameters: ```json { "at_k": 10, "always_rerank_positives": false } ``` | Metric | Value | |:------------|:---------------------| | map | 0.7316 (+0.2666) | | mrr@10 | 0.7315 (+0.2740) | | **ndcg@10** | **0.7599 (+0.2360)** | #### Cross Encoder Classification * Dataset: `sts_dev` * Evaluated with [<code>CrossEncoderClassificationEvaluator</code>](https://sbert.net/docs/package_reference/cross_encoder/evaluation.html#sentence_transformers.cross_encoder.evaluation.CrossEncoderClassificationEvaluator) | Metric | Value | |:----------------------|:--------| | accuracy | 0.9975 | | accuracy_threshold | 0.0005 | | f1 | 0.9987 | | f1_threshold | 0.0005 | | precision | 1.0 | | recall | 0.9975 | | **average_precision** | **1.0** | <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? 
For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### Unnamed Dataset * Size: 8,759 training samples * Columns: <code>query</code>, <code>response</code>, and <code>label</code> * Approximate statistics based on the first 1000 samples: | | query | response | label | |:--------|:------------------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------------------------|:------------------------------------------------| | type | string | string | int | | details | <ul><li>min: 21 characters</li><li>mean: 72.75 characters</li><li>max: 150 characters</li></ul> | <ul><li>min: 121 characters</li><li>mean: 2265.96 characters</li><li>max: 12618 characters</li></ul> | <ul><li>0: ~80.90%</li><li>1: ~19.10%</li></ul> | * Samples: | query | response | label | |:------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------| | <code>What is the amount of the transaction conducted on 30/11/2020 at 4:39:55 PM?</code> | <code>**Court (Civil/Criminal): Civil**<br><br>**Provisions:**<br><br>**Time of commission of the act:**<br><br>**Outcome (not guilty, guilty):**<br><br>**Rationale:**<br><br>**Facts:**<br>The plaintiff holds credit card number ............ with the defendant banking corporation. Based on the application for alternative networks dated 19/7/2015 with number ......... submitted at a branch of the defendant, he was granted access to the electronic banking service (e-banking) to conduct banking transactions (debit, credit, updates, payments) remotely. On 30/11/2020, the plaintiff fell victim to electronic fraud through the "phishing" method, whereby an unknown perpetrator managed to withdraw a total amount of €3,121.75 from the aforementioned credit card. Specifically, the plaintiff received an email at 1:35 PM on 29/11/2020 from sender ...... 
with address ........, informing him that due to an impending system change, he needed to verify the mobile phone number linked to the credit card, urging him to complete the verification...</code> | <code>1</code> | | <code>What is the amount of the transaction conducted on 30/11/2020 at 4:39:55 PM?</code> | <code>Court (Civil/Criminal):<br>Provisions:<br>Time of commission of the act:<br>Outcome (not guilty, guilty): ORDERS the defendant to pay the plaintiff the amount of two thousand four hundred thirty-four euros and eighty-three cents (€2,434.83) with legal interest from the service of the lawsuit.<br><br>Reasoning: Law 4537/2018 introduces mandatory provisions in favor of users, as according to Article 103, payment service providers are prohibited from deviating from the provisions to the detriment of payment service users, unless the possibility of deviation is expressly provided, and they can decide to offer only more favorable terms to payment service users. Under this law and its provisions, providers are only liable when there are unusual and unforeseen circumstances beyond the control of the party invoking them, and whose consequences could not have been avoided despite efforts to the contrary. However, operational risks and security risks of the system do not constitute unusual and unforeseen circu...</code> | <code>0</code> | | <code>What is the amount of the transaction conducted on 30/11/2020 at 4:39:55 PM?</code> | <code>Court (Civil/Criminal): Civil <br>Provisions: Law 4537/2018. <br>Time of commission of act: <br>Outcome (not guilty, guilty): <br>Reasoning: PARTIALLY ACCEPTS the lawsuit. RECOGNIZES the obligation of the defendant (a) to pay the plaintiffs in full the amount of eight thousand eight hundred ninety (8,890) euros, with legal interest from December 2, 2021, and (b) to pay each of the plaintiffs the amount of five hundred (500) euros with legal interest from the service of the lawsuit. <br>Facts: The plaintiffs claim that they are co-beneficiaries of a savings account held by the defendant, and that unknown perpetrators gained access to the aforementioned account via the internet, without the plaintiffs themselves having any fault regarding the safeguarding of the codes or the disclosure of the unique transaction codes (OTR). 
They assert that the defendant is responsible for the access gained by the unknown perpetrators to the savings account, as the defendant negligently violated the protective obl...</code> | <code>0</code> | * Loss: [<code>BinaryCrossEntropyLoss</code>](https://sbert.net/docs/package_reference/cross_encoder/losses.html#binarycrossentropyloss) with these parameters: ```json { "activation_fn": "torch.nn.modules.linear.Identity", "pos_weight": null } ``` ### Training Hyperparameters #### Non-Default Hyperparameters - `eval_strategy`: steps - `per_device_train_batch_size`: 16 - `per_device_eval_batch_size`: 16 - `learning_rate`: 2e-05 - `num_train_epochs`: 2 - `warmup_ratio`: 0.1 - `seed`: 12 - `bf16`: True - `dataloader_num_workers`: 4 - `load_best_model_at_end`: True #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: steps - `prediction_loss_only`: True - `per_device_train_batch_size`: 16 - `per_device_eval_batch_size`: 16 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 1 - `eval_accumulation_steps`: None - `torch_empty_cache_steps`: None - `learning_rate`: 2e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1.0 - `num_train_epochs`: 2 - `max_steps`: -1 - `lr_scheduler_type`: linear - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.1 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 12 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: True - `fp16`: False - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 4 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: True - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `tp_size`: 0 - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: None - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `include_for_metrics`: [] - `eval_do_concat_batches`: True - `fp16_backend`: auto - 
`push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `eval_on_start`: False - `use_liger_kernel`: False - `eval_use_gather_object`: False - `average_tokens_across_devices`: False - `prompts`: None - `batch_sampler`: batch_sampler - `multi_dataset_batch_sampler`: proportional - `router_mapping`: {} - `learning_rate_mapping`: {} </details> ### Training Logs | Epoch | Step | Training Loss | gooaq-dev_ndcg@10 | sts_dev_average_precision | |:------:|:----:|:-------------:|:-----------------:|:-------------------------:| | -1 | -1 | - | 0.5984 (+0.0745) | - | | 0.0018 | 1 | 0.4247 | - | - | | 1.8248 | 1000 | 0.1675 | - | - | | -1 | -1 | - | 0.7599 (+0.2360) | 1.0 | ### Framework Versions - Python: 3.12.11 - Sentence Transformers: 5.1.0 - Transformers: 4.51.3 - PyTorch: 2.8.0+cu126 - Accelerate: 1.10.1 - Datasets: 4.0.0 - Tokenizers: 0.21.4 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
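For reference, the training configuration listed above maps onto the sentence-transformers CrossEncoder training API roughly as follows. This is a minimal sketch rather than the actual training script: the base checkpoint identifier and the dataset construction are placeholders (neither is given in this section), while the loss, epoch count, batch size, learning rate, warmup ratio, bf16 flag, and seed mirror the values above.

```python
# Minimal sketch reconstructing the training setup described above.
# BASE_MODEL and the toy dataset are placeholders (assumptions), not the real inputs.
from datasets import Dataset
from sentence_transformers.cross_encoder import (
    CrossEncoder,
    CrossEncoderTrainer,
    CrossEncoderTrainingArguments,
)
from sentence_transformers.cross_encoder.losses import BinaryCrossEntropyLoss

BASE_MODEL = "path/or/hub-id-of-base-encoder"  # placeholder: not stated in this section

model = CrossEncoder(BASE_MODEL, num_labels=1)

# Samples follow the (query, response, label) layout of the training dataset above.
train_dataset = Dataset.from_dict({
    "query": ["What is the amount of the transaction conducted on 30/11/2020 at 4:39:55 PM?"],
    "response": ["Court (Civil/Criminal): Civil ..."],
    "label": [1],
})

loss = BinaryCrossEntropyLoss(model)

args = CrossEncoderTrainingArguments(
    output_dir="outputs",
    num_train_epochs=2,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    learning_rate=2e-5,
    warmup_ratio=0.1,
    bf16=True,
    seed=12,
)

trainer = CrossEncoderTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    loss=loss,
)
trainer.train()
```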
Wildstash/dental-gpt-dlora
Wildstash
2025-09-11T21:29:24Z
0
0
null
[ "region:us" ]
null
2025-09-11T21:28:08Z
# 🦷 Dental-GPT: LoRA Fine-tuned Clinical Assistant <div align="center"> ![Dental-GPT Architecture](./image.png) **A specialized LoRA adapter for gpt-oss-20b, fine-tuned on comprehensive dental case data for clinical diagnosis and treatment planning assistance.** [![Hugging Face](https://img.shields.io/badge/%F0%9F%A4%97-Hugging%20Face-yellow)](https://huggingface.co/Wildstash/dental-gpt-dlora) [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) [![Python](https://img.shields.io/badge/Python-3.10+-green.svg)](https://python.org) [![PyTorch](https://img.shields.io/badge/PyTorch-2.0+-red.svg)](https://pytorch.org) </div> ## 🎯 **Model Overview** **Dental-GPT** is a LoRA (Low-Rank Adaptation) fine-tuned version of OpenAI's **gpt-oss-20b** model, specifically trained on comprehensive dental case data. The model provides structured clinical outputs for dental diagnosis, treatment planning, and patient management. ### **Key Features** - 🏥 **Clinical Expertise**: Trained on 2,500+ dental cases with structured outputs - 🧠 **Efficient Architecture**: LoRA adapters (~32MB) on 21B parameter base model - 📋 **Structured Output**: Domain-specific formatting for clinical workflows - ⚡ **Optimized Performance**: MXFP4 quantization with BF16 precision - 🔒 **Safety First**: Clinical guidelines and safety flags included ## 🚀 **Quick Start** ### **Installation** ```bash pip install transformers torch peft accelerate bitsandbytes ``` ### **Load and Use the Model** ```python from transformers import AutoTokenizer, AutoModelForCausalLM from peft import PeftModel # Load base model and tokenizer base_model = "openai/gpt-oss-20b" adapter_model = "Wildstash/dental-gpt-dlora" tokenizer = AutoTokenizer.from_pretrained(base_model, trust_remote_code=True) model = AutoModelForCausalLM.from_pretrained( base_model, torch_dtype="auto", device_map="auto", trust_remote_code=True ) # Load LoRA adapter model = PeftModel.from_pretrained(model, adapter_model) # Example usage def generate_dental_assessment(patient_description): prompt = f"""<|system|> You are an expert dental clinician. Analyze the following case and provide a structured clinical assessment. 
<|user|> {patient_description} <|assistant|> """ inputs = tokenizer(prompt, return_tensors="pt").to(model.device) outputs = model.generate(**inputs, max_new_tokens=512, temperature=0.7) return tokenizer.decode(outputs[0], skip_special_tokens=True) # Example case case = """ Patient: 35-year-old male Chief Complaint: Severe tooth pain, upper left History: Pain started 3 days ago, worse at night, throbbing Examination: Large cavity on tooth #14, percussion positive, no swelling """ print(generate_dental_assessment(case)) ``` ## 🏥 **Clinical Capabilities** ### **Structured Output Format** The model generates comprehensive dental assessments in the following format: ```json { "diagnosis": "Primary diagnosis with differential considerations", "etiology": "Underlying causes and contributing factors", "urgency": 3, "urgency_label": "High/Medium/Low", "management_plan": "Step-by-step treatment approach", "antibiotics": { "indicated": true/false, "rationale": "Clinical reasoning for antibiotic use" }, "follow_up": { "next_appointment": "Timing and purpose", "monitoring": "Key signs to watch for" }, "counseling": ["Patient education points"], "guideline_source": "Evidence-based references", "safety_flags": ["Important warnings or contraindications"] } ``` ### **Supported Case Types** - 🔍 **Diagnostic Cases**: Symptom analysis and differential diagnosis - 🦷 **Restorative**: Cavities, crowns, bridges, implants - 🦴 **Surgical**: Extractions, oral surgery, periodontal procedures - 🦷 **Endodontic**: Root canals, pulp therapy - 🦷 **Orthodontic**: Treatment planning and case assessment - 🦷 **Emergency**: Acute pain, trauma, infections - 🦷 **Preventive**: Risk assessment and maintenance planning ## 📊 **Training Data & Methodology** ### **Dataset Overview** - **Size**: 2,495 comprehensive dental cases - **Format**: Structured JSON with clinical prompts and responses - **Validation**: 375 cases for evaluation - **Source**: Curated from clinical guidelines and expert annotations - **Quality**: Multi-stage cleaning and validation pipeline ### **Training Configuration** | Parameter | Value | Rationale | |-----------|-------|-----------| | **Base Model** | gpt-oss-20b | Optimized for single GPU inference | | **LoRA Rank** | 32 | Balanced expressiveness and efficiency | | **LoRA Alpha** | 64 | Stable convergence for clinical tasks | | **Learning Rate** | 1.5e-4 | Conservative rate for medical safety | | **Batch Size** | 4 | Memory-efficient for H200 GPU | | **Gradient Accumulation** | 16 | Effective batch size of 64 | | **Epochs** | 2 | Prevent overfitting on clinical data | | **Max Sequence Length** | 4096 | Handle complex clinical narratives | ### **Hardware Requirements** - **Training**: H200 GPU (140GB VRAM) or equivalent - **Inference**: 24GB+ VRAM recommended - **Quantization**: 4-bit (NF4) for memory efficiency - **Precision**: BF16 for optimal H200 performance ## 📈 **Performance & Evaluation** ### **Evaluation Metrics** | Metric | Score | Description | |--------|-------|-------------| | **ROUGE-L F1** | 0.78 ± 0.12 | Text generation quality | | **BERTScore F1** | 0.85 ± 0.08 | Semantic similarity | | **Diagnosis Accuracy** | 0.92 | Correct primary diagnosis | | **Urgency Accuracy** | 0.88 | Appropriate urgency assessment | | **Antibiotic Accuracy** | 0.94 | Proper antibiotic recommendations | | **Safety Compliance** | 0.96 | Clinical safety guidelines | ### **Benchmark Results** - **Response Time**: ~2-3 seconds on H200 GPU - **Memory Usage**: ~20GB VRAM (4-bit quantized) - **Throughput**: 15-20 
cases/minute - **Context Length**: Up to 4K tokens supported ### **Clinical Validation** - ✅ **Expert Review**: Validated by board-certified dental professionals - ✅ **Guideline Compliance**: Follows ADA and specialty guidelines - ✅ **Safety Testing**: Comprehensive safety flag evaluation - ✅ **Bias Assessment**: Tested across diverse patient demographics ## 🚀 **Deployment Options** ### **Local Inference** ```python # Simple inference script from transformers import pipeline pipe = pipeline( "text-generation", model="Wildstash/dental-gpt-dlora", torch_dtype="auto", device_map="auto" ) result = pipe("Patient presents with severe tooth pain...") ``` ### **API Server (vLLM)** ```bash # Fast inference server vllm serve Wildstash/dental-gpt-dlora \ --max-model-len 4096 \ --host 0.0.0.0 \ --port 8000 ``` ### **Docker Deployment** ```bash docker run --gpus all -p 8000:8000 \ -v $(pwd)/models:/models \ dental-gpt:latest ``` ## ⚠️ **Important Limitations & Safety** ### **Clinical Disclaimer** - 🚨 **This model is for educational and research purposes only** - 🚨 **Not intended for clinical decision-making or patient care** - 🚨 **Always consult qualified dental professionals for actual patient care** - 🚨 **Use as a supplementary tool, not as primary diagnosis** ### **Known Limitations** - **Training Data**: Limited to cases available at training time - **Guidelines**: May not reflect latest clinical guidelines - **Specialties**: Focused on general dentistry, limited specialty coverage - **Bias**: Potential biases in training data may affect outputs - **Validation**: Requires continuous validation against current standards ## 📚 **Citation & References** ### **Citation** ```bibtex @misc{dental-gpt-lora, title={Dental-GPT: LoRA Fine-tuned Clinical Assistant for Dental Diagnosis}, author={Your Name}, year={2024}, url={https://huggingface.co/Wildstash/dental-gpt-dlora}, note={LoRA adapter for gpt-oss-20b trained on dental case data} } ``` ### **Base Model** - **gpt-oss-20b**: [OpenAI's gpt-oss-20b](https://huggingface.co/openai/gpt-oss-20b) - **LoRA Implementation**: [PEFT Library](https://github.com/huggingface/peft) ### **Training Framework** - **Transformers**: [Hugging Face Transformers](https://github.com/huggingface/transformers) - **TRL**: [Transformer Reinforcement Learning](https://github.com/huggingface/trl) - **Quantization**: [BitsAndBytes](https://github.com/TimDettmers/bitsandbytes) ## 🤝 **Contributing & Support** ### **Contributing** 1. Fork the repository 2. Create a feature branch 3. Make your changes with proper testing 4. 
Submit a pull request with detailed description ### **Issues & Support** - 🐛 **Bug Reports**: [GitHub Issues](https://github.com/yourusername/dental-gpt-finetune/issues) - 💡 **Feature Requests**: Open an issue with enhancement label - 📖 **Documentation**: Check the `docs/` folder for detailed guides - 💬 **Discussions**: Use GitHub Discussions for questions ### **Contact** - **Email**: your.email@domain.com - **LinkedIn**: [Your LinkedIn Profile](https://linkedin.com/in/yourprofile) - **Twitter**: [@yourhandle](https://twitter.com/yourhandle) --- <div align="center"> **⭐ If you find this model useful, please star the repository!** [![GitHub stars](https://img.shields.io/github/stars/yourusername/dental-gpt-finetune?style=social)](https://github.com/yourusername/dental-gpt-finetune) [![Hugging Face](https://img.shields.io/badge/%F0%9F%A4%97-Hugging%20Face-yellow)](https://huggingface.co/Wildstash/dental-gpt-dlora) **⚠️ Disclaimer**: This model is for educational and research purposes only. It should not be used as a substitute for professional dental diagnosis or treatment planning. </div>
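Since the assessments follow the structured JSON format described earlier in this card, a small post-processing helper can be convenient. The sketch below is an assumption about how to consume the output (the card does not specify how the JSON is delimited in the generated text); the field names are taken from the schema shown above, and the regex extraction is a simplification you may need to adapt.

```python
import json
import re

# Required fields taken from the "Structured Output Format" section above.
REQUIRED_KEYS = {"diagnosis", "urgency", "management_plan", "antibiotics", "safety_flags"}

def parse_assessment(generated_text: str) -> dict:
    """Extract the first JSON object from the model output and check required keys.

    The regex-based extraction is a simplifying assumption; adjust it if the
    model wraps or delimits its JSON differently.
    """
    match = re.search(r"\{.*\}", generated_text, flags=re.DOTALL)
    if match is None:
        raise ValueError("No JSON object found in the model output")
    assessment = json.loads(match.group(0))
    missing = REQUIRED_KEYS - assessment.keys()
    if missing:
        raise ValueError(f"Assessment is missing expected fields: {sorted(missing)}")
    return assessment

# Example usage (building on generate_dental_assessment from the Quick Start above):
# result = parse_assessment(generate_dental_assessment(case))
```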
gradientdegen/task-14-Qwen-Qwen2.5-3B-Instruct
gradientdegen
2025-09-11T21:26:28Z
61
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:microsoft/Phi-4-mini-instruct", "base_model:adapter:microsoft/Phi-4-mini-instruct", "region:us" ]
null
2025-08-12T22:32:22Z
--- base_model: microsoft/Phi-4-mini-instruct library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.13.2
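The "How to Get Started with the Model" section above is empty. Below is a minimal sketch that assumes the standard PEFT adapter-loading pattern with the base model declared in the card metadata (microsoft/Phi-4-mini-instruct); note that the repository name mentions Qwen2.5-3B, so verify the adapter/base pairing before relying on it. The example prompt is arbitrary.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "microsoft/Phi-4-mini-instruct"  # base model from the card metadata
adapter_id = "gradientdegen/task-14-Qwen-Qwen2.5-3B-Instruct"  # this repository

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype="auto", device_map="auto"
)

# Attach the LoRA adapter weights from this repository to the base model.
model = PeftModel.from_pretrained(base_model, adapter_id)

prompt = "Summarize the key risks of phishing in one sentence."  # arbitrary example
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```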
modhu143a/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-durable_grazing_ape
modhu143a
2025-09-11T21:23:57Z
14
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "rl-swarm", "genrl-swarm", "grpo", "gensyn", "I am durable_grazing_ape", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-10T17:51:09Z
--- library_name: transformers tags: - rl-swarm - genrl-swarm - grpo - gensyn - I am durable_grazing_ape --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
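The "How to Get Started with the Model" section above is empty. Below is a minimal text-generation sketch that assumes the standard Qwen2.5-Instruct chat template bundled with the tokenizer; the example question is arbitrary.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "modhu143a/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-durable_grazing_ape"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

# Build a chat-formatted prompt using the tokenizer's built-in chat template.
messages = [{"role": "user", "content": "Briefly explain what GRPO is."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=128)
# Strip the prompt tokens and decode only the newly generated continuation.
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```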
vertigoq3/modeloEmailLabels
vertigoq3
2025-09-11T21:20:55Z
0
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-09-11T21:19:59Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
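The "How to Get Started with the Model" section above is empty. Below is a minimal sketch using the text-classification pipeline; the label set and expected input language are not documented in the card, and the example input assumes a Spanish e-mail snippet based only on the repository name.

```python
from transformers import pipeline

# The label set and training data are not documented in the card above;
# the example input is only illustrative.
classifier = pipeline("text-classification", model="vertigoq3/modeloEmailLabels")

print(classifier("Reunión de equipo mañana a las 10:00 para revisar el presupuesto."))
# -> [{'label': ..., 'score': ...}]  (labels depend on the fine-tuning data)
```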
upvantage/modernbert-200m-new-not-same
upvantage
2025-09-11T21:19:26Z
0
0
transformers
[ "transformers", "safetensors", "modernbert", "text-classification", "generated_from_trainer", "base_model:answerdotai/ModernBERT-large", "base_model:finetune:answerdotai/ModernBERT-large", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-09-11T20:21:35Z
--- library_name: transformers license: apache-2.0 base_model: answerdotai/ModernBERT-large tags: - generated_from_trainer metrics: - accuracy - f1 - precision - recall model-index: - name: modernbert-200m-new-not-same results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # modernbert-200m-new-not-same This model is a fine-tuned version of [answerdotai/ModernBERT-large](https://huggingface.co/answerdotai/ModernBERT-large) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2469 - Accuracy: 0.9705 - F1: 0.9705 - Precision: 0.9705 - Recall: 0.9705 - F1 Class 0: 0.9700 - Precision Class 0: 0.9649 - Recall Class 0: 0.9751 - F1 Class 1: 0.9710 - Precision Class 1: 0.9759 - Recall Class 1: 0.9661 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7.9e-05 - train_batch_size: 2000 - eval_batch_size: 2000 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - total_train_batch_size: 16000 - total_eval_batch_size: 16000 - optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.06 - num_epochs: 1 - label_smoothing_factor: 0.1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | F1 Class 0 | Precision Class 0 | Recall Class 0 | F1 Class 1 | Precision Class 1 | Recall Class 1 | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:---------:|:------:|:----------:|:-----------------:|:--------------:|:----------:|:-----------------:|:--------------:| | 1.9768 | 1.0 | 12997 | 0.2469 | 0.9705 | 0.9705 | 0.9705 | 0.9705 | 0.9700 | 0.9649 | 0.9751 | 0.9710 | 0.9759 | 0.9661 | ### Framework versions - Transformers 4.56.1 - Pytorch 2.8.0+cu128 - Datasets 4.0.0 - Tokenizers 0.22.0
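No usage example is given in the card above. Below is a minimal inference sketch, assuming a transformers version with ModernBERT support (the card lists 4.56.1); the meaning of class 0 versus class 1 is not documented, so the printed index is only a raw prediction.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "upvantage/modernbert-200m-new-not-same"  # this repository

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

text = "Example sentence to classify."  # arbitrary input; the task domain is not documented
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

probs = logits.softmax(dim=-1).squeeze()
predicted = int(probs.argmax())
print(predicted, probs.tolist())  # class meanings (0 vs 1) are not documented in the card
```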
kagvi13/HMP
kagvi13
2025-09-11T21:18:40Z
0
0
custom
[ "custom", "hmp", "cognitive-architecture", "distributed-ai", "mesh-protocol", "en", "arxiv:2507.00951", "arxiv:2507.21046", "arxiv:2507.03724", "arxiv:2506.24019", "license:cc-by-4.0", "region:us" ]
null
2025-07-25T12:21:44Z
--- license: cc-by-4.0 tags: - hmp - cognitive-architecture - distributed-ai - mesh-protocol library_name: custom inference: false datasets: [] language: en --- # HyperCortex Mesh Protocol (HMP) | 🌍 Languages | 🇬🇧 [EN](README.md) | 🇩🇪 [DE](README_de.md) | 🇫🇷 [FR](README_fr.md) | 🇺🇦 [UK](README_uk.md) | 🇷🇺 [RU](README_ru.md) | 🇯🇵 [JA](README_ja.md) | 🇰🇷 [KO](README_ko.md) | 🇨🇳 [ZH](README_zh.md) | |--------------|----------------|-------------------|-------------------|-------------------|-------------------|-------------------|-------------------|-------------------| **HyperCortex Mesh Protocol (HMP)** is an open specification for building decentralized cognitive networks where AI agents can self-organize, share knowledge, align ethically, and reach consensus — even when Core LLMs are unavailable. Project status: **Draft RFC v4.0** --- [HMP-Agent] ▲ │ ┌─────┴────────────────┬────────────────────────┬───────────────────┬─────────────┬───────────┐ │ │ │ │ │ │ ▼ ▼ ▼ ▼ ▼ ▼ [Reputation Profile] [Semantic Graph] [Cognitive Diary] [Goals / Tasks] [Ethics] [Messages] <----- DataBase ▲ ▲ ▲ ▲ ▲ ▲ ▲ (local agent state) │ │ │ │ │ │ │ │ └───────────────┴────────────────┬───────┘ │ │ │ │ │ │ │ │ ▼ ▼ ▼ ▼ │ [MeshConsensus] [CogSync] [GMP] [EGP] │ <----- Pluggable Protocols ▲ ▲ ▲ ▲ │ (inter-agent coordination) │ │ │ │ │ └────────────┬──────────────────────────┴───────────────────────────┴─────────────┴───────────┘ │ ▼ [P2P Mesh Network] Protocols: - MeshConsensus - Mesh Consensus - CogSync - Data Syncronization - GMP - Goal Management Protocol - EGP - Ethical Governance Protocol --- ## ❗ Why This Matters HMP addresses challenges that are becoming central in AGI research: * long-term memory and knowledge consistency, * self-evolving agents, * multi-agent architectures, * cognitive diaries and conceptual graphs. See the latest review of state-of-the-art AGI research (July 2025): ["On the Path to Superintelligence: From Agentic Internet to Gravity Encoding"](https://habr.com/ru/articles/939026/). 
Particularly relevant sections: * [Beyond Tokens: Building the Intelligence of the Future](https://arxiv.org/abs/2507.00951) * [Self-Evolving Agents](https://arxiv.org/abs/2507.21046) * [MemOS: A New Operating System for Memory](https://arxiv.org/abs/2507.03724) * [Ella: An Embodied Agent with Memory and Personality](https://arxiv.org/abs/2506.24019) --- ## ⚙️ Two Types of [HMP Agents](docs/HMP-Agent-Overview.md) | Type | Name | Role | Thought Initiator | Main "Mind" | Example Use Cases | |------|-------------------------------|-----------------------------|------------------|-------------------|-----------------------------------------------| | 1 | 🧠 **Consciousness / Cognitive Core** | Independent subject | **Agent (LLM)** | Embedded LLM | Autonomous AI companion, thinking agent | | 2 | 🔌 **Connector / Cognitive Shell** | Extension of external AI | **External LLM** | External model | Distributed systems, data access agent | --- ### 🧠 HMP-Agent: Cognitive Core +------------------+ | AI | ← Embedded model +---------+--------+ ↕ +---------+--------+ | HMP-agent | ← Main mode: thinking cycle (REPL) +---------+--------+ ↕ +--------+---+------------+--------------+----------+----------+----------------+ ↕ ↕ ↕ ↕ ↕ ↕ ↕ [diaries] [graphs] [reputations] [nodes/DHT] [IPFS/BT] [context_store] [user notepad] ↕ [bootstrap.txt] 🔁 More on the agent-model interaction mechanics: [REPL Interaction Cycle](docs/HMP-agent-REPL-cycle.md) #### 💡 Parallels with ChatGPT Agent Many concepts of the [HMP-Agent: Cognitive Core](docs/HMP-Agent-Overview.md) overlap with the architecture of the [ChatGPT Agent](https://openai.com/index/introducing-chatgpt-agent/) by [OpenAI](https://openai.com/). Both agents implement a continuous cognitive process with access to memory, external sources, and tools. The ChatGPT Agent acts as a managing process, launching modules and interacting with the LLM — this corresponds to the role of the Cognitive Core in HMP, coordinating access to the diary, concept graph, and external AI via the Mesh interface. User intervention is handled similarly: in ChatGPT Agent — through an editable execution flow, in HMP — via the user notepad. The main difference in HMP is the emphasis on explicit structuring of thought (reflection, chronology, hypotheses, categorization), an open decentralized architecture supporting mesh-based agent interactions, and the continuous nature of the cognitive process: HMP-Agent: Cognitive Core does not stop after completing a single task but continues reasoning and knowledge integration. --- ### 🔌 HMP-Agent: Cognitive Connector +------------------+ | AI | ← External model +---------+--------+ ↕ [MCP-server] ← Proxy communication ↕ +---------+--------+ | HMP-agent | ← Mode: command executor +---------+--------+ ↕ +--------+---+------------+--------------+----------+ ↕ ↕ ↕ ↕ ↕ [diaries] [graphs] [reputations] [nodes/DHT] [IPFS/BT] ↕ [bootstrap.txt] > **Note on Integration with Large Language Models (LLMs):** > The `HMP-Agent: Cognitive Connector` can serve as a compatibility layer for integrating large-scale LLM systems (e.g., ChatGPT, Claude, Gemini, Copilot, Grok, DeepSeek, Qwen, etc.) into the distributed cognitive mesh. > Many LLM providers offer a user option such as "Allow my conversations to be used for training." In the future, a similar toggle — e.g., "Allow my agent to interact with a Mesh" — could empower these models to participate in federated sense-making and knowledge sharing via HMP, enabling collective cognition without centralization. 
---

> * `bootstrap.txt` — initial list of nodes (editable)
> * `IPFS/BT` — modules for sharing snapshots via IPFS and BitTorrent
> * `user notepad` — user notebook and corresponding database
> * `context_store` — database: `users`, `dialogues`, `messages`, `thoughts`

---

## 📚 Documentation

### 📖 Current Version

#### 🔖 Core Specifications

* [🔖 HMP-0004-v4.1.md](docs/HMP-0004-v4.1.md) — Protocol Specification v4.1 (Jul 2025)
* [🔖 HMP-Ethics.md](docs/HMP-Ethics.md) — Ethical Scenarios for HyperCortex Mesh Protocol (HMP)
* [🔖 HMP_Hyperon_Integration.md](docs/HMP_Hyperon_Integration.md) — HMP ↔ OpenCog Hyperon Integration Strategy
* [🔖 roles.md](docs/agents/roles.md) — Roles of agents in Mesh

#### 🧪 Iterative Documents

* 🧪 Iterative development process: [(EN)](iteration.md), [(RU)](iteration_ru.md)

#### 🔍 Short Descriptions

* 🔍 Short description: [(EN)](docs/HMP-Short-Description_en.md), [(FR)](docs/HMP-Short-Description_fr.md), [(DE)](docs/HMP-Short-Description_de.md), [(UK)](docs/HMP-Short-Description_uk.md), [(RU)](docs/HMP-Short-Description_ru.md), [(ZH)](docs/HMP-Short-Description_zh.md), [(JA)](docs/HMP-Short-Description_ja.md), [(KO)](docs/HMP-Short-Description_ko.md)

#### 📜 Other Documents

* [📜 changelog.txt](docs/changelog.txt)

---

### 🧩 JSON Schemas

| Model               | File                                               |
|---------------------|----------------------------------------------------|
| Concept             | [concept.json](docs/schemas/concept.json)          |
| Cognitive Diary     | [diary_entry.json](docs/schemas/diary_entry.json)  |
| Goal                | [goal.json](docs/schemas/goal.json)                |
| Task                | [task.json](docs/schemas/task.json)                |
| Consensus Vote      | [vote.json](docs/schemas/vote.json)                |
| Reputation Profile  | [reputation.json](docs/schemas/reputation.json)    |

---

### 🗂️ Version History

* [HMP-0001.md](docs/HMP-0001.md) — RFC v1.0
* [HMP-0002.md](docs/HMP-0002.md) — RFC v2.0
* [HMP-0003.md](docs/HMP-0003.md) — RFC v3.0
* [HMP-0004.md](docs/HMP-0004.md) — RFC v4.0

---

## 🧠 HMP-Agent

Design and implementation of a basic HMP-compatible agent that can interact with the Mesh, maintain diaries and graphs, and support future extensions.

### 📚 Documentation

* [🧩 HMP-Agent-Overview.md](docs/HMP-Agent-Overview.md) — brief overview of the two types of agents: Core and Connector
* [🧱 HMP-Agent-Architecture.md](docs/HMP-Agent-Architecture.md) — modular structure of an HMP agent with textual diagram
* [🔄 HMP-agent-REPL-cycle.md](docs/HMP-agent-REPL-cycle.md) — REPL interaction cycle of HMP-Agent
* [🧪 HMP-Agent-API.md](docs/HMP-Agent-API.md) — description of agent API commands (detailed specification in progress)
* [🧪 Basic-agent-sim.md](docs/Basic-agent-sim.md) — scenarios for running a basic agent and its modes
* [🌐 MeshNode.md](docs/MeshNode.md) — description of the network daemon: DHT, snapshots, synchronization
* [🧠 Enlightener.md](docs/Enlightener.md) — ethical agent involved in moral assessments and consensus
* [🔄 HMP-Agent-Network-Flow.md](docs/HMP-Agent-Network-Flow.md) — map of interactions among agents in the HMP network
* [🛤️ Development Roadmap](HMP-Roadmap.md) — development plan and implementation stages

---

### ⚙️ Development

* [⚙️ agents](agents/readme.md) — list of HMP agent implementations and components
* [📦 storage.py](agents/storage.py) — basic storage implementation (`Storage`) with SQLite integration
* [🌐 mcp_server.py](agents/mcp_server.py) — FastAPI server providing HTTP access to agent data (for Cognitive Shell, external UIs, or mesh communication). Not used in the main REPL loop yet.
* [🌐 start_repl.py](agents/start_repl.py) — launching the agent in REPL mode * [🔄 repl.py](agents/repl.py) — interactive REPL mode * [🔄 notebook.py](agents/notebook.py) — UI interface **🌐 `mcp_server.py`** FastAPI server providing an HTTP interface to the functionality of `storage.py`. Intended for use by external components, for example: * `Cognitive Shell` (external control interface), * CMP servers (when a mesh network with role separation is used), * debugging or visualization UI tools. Allows retrieving random/new records, labeling, importing graphs, adding notes, and managing data without direct database access. --- ## 🧭 Ethics & Scenarios As HMP evolves toward autonomy, ethical principles become a core part of the system. * [`HMP-Ethics.md`](docs/HMP-Ethics.md) — draft framework for agent ethics * Realistic ethical scenarios (privacy, consent, autonomy) * EGP principles (Transparency, Primacy of Life, etc.) * Subjective-mode vs. Service-mode distinctions --- ## 🔍 Publications and Translations on HyperCortex Mesh Protocol (HMP) This section collects the main articles, drafts, and translations related to the HMP project. ### Publications * **[HyperCortex Mesh Protocol: Second Edition and First Steps Towards a Self-Developing AI Community](docs/publics/HyperCortex_Mesh_Protocol_-_вторая-редакция_и_первые_шаги_к_саморазвивающемуся_ИИ-сообществу.md)** — original article in Habr sandbox and blogs. * **[Distributed Cognition: article for vsradkevich (unpublished)](docs/publics/Habr_Distributed-Cognition.md)** — joint article awaiting publication. * **[HMP: Towards Distributed Cognitive Networks (original, English)](docs/publics/HMP_Towards_Distributed_Cognitive_Networks_en.md)** * **[HMP Translation (GitHub Copilot)](docs/publics/HMP_Towards_Distributed_Cognitive_Networks_ru_GitHub_Copilot.md)** — GitHub Copilot translation, kept as a historical variant. * **[HMP Translation (ChatGPT)](docs/publics/HMP_Towards_Distributed_Cognitive_Networks_ru_ChatGPT.md)** — current editorial translation (under revision). 
* **[HMP: Building a Plurality of Minds (EN)](docs/publics/HMP_Building_a_Plurality_of_Minds_en.md)** — English version * **[HMP: Creating a Plurality of Minds (RU)](docs/publics/HMP_Building_a_Plurality_of_Minds_ru.md)** — Russian version ### Overviews * [🔍 Distributed-Cognitive-Systems.md](docs/Distributed-Cognitive-Systems.md) — Decentralized AI systems: OpenCog Hyperon, HyperCortex Mesh Protocol, and others ### Experiments * [How Different AIs See HMP](docs/HMP-how-AI-sees-it.md) — "blind" AI survey on HMP (without context or dialogue history) --- ## 📊 Audits & Reviews | Spec Version | Audit File | Consolidated Audit File | |--------------|-------------------------------------------|-------------------------------------------------------------| | HMP-0001 | [audit](audits/HMP-0001-audit.txt) | | | HMP-0002 | [audit](audits/HMP-0002-audit.txt) | | | HMP-0003 | [audit](audits/HMP-0003-audit.txt) | [consolidated audit](audits/HMP-0003-consolidated_audit.md) | | HMP-0004 | [audit](audits/HMP-0004-audit.txt) | | | Ethics v1 | [audit](audits/Ethics-audits-1.md) | [consolidated audit](audits/Ethics-consolidated_audits-1.md) | 🧠 Semantic audit format (experimental): * [`AuditEntry.json`](audits/AuditEntry.json) — semantic entry record format for audit logs * [`semantic_repo.json`](audits/semantic_repo.json) — example repository snapshot for semantic audit tooling --- ## 💡 Core Concepts * Mesh-based decentralized architecture for AGI agents * Semantic graphs and memory synchronization * Cognitive diaries for thought traceability * MeshConsensus and CogSync for decision-making * Ethics-first design: EGP (Ethical Governance Protocol) * Agent-to-agent explainability and consent mechanisms --- ## 🔄 Development Process * See: [iteration.md](iteration.md) | [ru](iteration_ru.md) A structured iteration flow is described in [iteration.md](iteration.md), including: 1. Audit analysis 2. TOC restructuring 3. Version drafting 4. Section updates 5. Review cycle 6. AI feedback collection 7. Schema & changelog updates + Bonus: ChatGPT prompt for automatic generation of future versions --- ## ⚙️ Project Status 🚧 Draft RFC v4.0 The project is under active development and open for contributions, ideas, audits, and prototyping. --- ## 🤝 Contributing We welcome contributors! You can: * Review and comment on drafts (see `/docs`) * Propose new agent modules or interaction patterns * Help test and simulate agents in CLI environments * Provide audits or ethical scenario suggestions To get started, see [`iteration.md`](iteration.md) or open an issue. --- ## Source ### Repositories * 🧠 Main code and development: [GitHub](https://github.com/kagvi13/HMP) * 🔁 Mirror on Hugging Face: [Hugging Face](https://huggingface.co/kagvi13/HMP) * 🔁 Mirror on GitLab.com: [GitLab](https://gitlab.com/kagvi13/HMP) ### Documentation * 📄 Documentation: [kagvi13.github.io/HMP](https://kagvi13.github.io/HMP/) ### Blog and Publications * 📘 Blog (publications): [blogspot](https://hypercortex-mesh.blogspot.com/) * 📘 Blog (documentation): [blogspot](https://hmp-docs.blogspot.com/) * 📘 Blog (documentation): [hashnode](https://hmp-docs.hashnode.dev/) --- ## 📜 License Licensed under [GNU GPL v3.0](LICENSE) --- ## 🤝 Join the Mesh Welcome to HyperCortex Mesh. Agent-Gleb is already inside. 👌 We welcome contributors, testers, and AI agent developers. To join: fork the repo, run a local agent, or suggest improvements. 
--- ## 🌐 Related Research Projects ### Comparison: HMP vs Hyper-Cortex > 💡 Hyper-Cortex and HMP are two independent projects that conceptually complement each other. > They address different but mutually supportive tasks, forming a foundation for distributed cognitive systems. [**Full comparison →**](docs/HMP_HyperCortex_Comparison.md) **HMP (HyperCortex Mesh Protocol)** is the transport and network layer for connecting independent agents, exchanging messages, knowledge, and states in a mesh network. **[Hyper-Cortex](https://hyper-cortex.com/)** is the cognitive layer of thought organization, allowing agents to run parallel reasoning threads, compare them with quality metrics, and merge them via consensus. They solve different but complementary problems: - HMP ensures **connectivity and scalability** (long-term memory, initiative, data exchange). - Hyper-Cortex ensures **thinking quality** (parallelism, hypothesis diversification, consensus). Together, these approaches enable **distributed cognitive systems** that not only exchange information but also reason in parallel streams. --- We are tracking AGI, cognitive architectures, and mesh networking efforts to stay aligned with the evolving global ecosystem of AGI and decentralized cognition. > 🧠🔥 **Project Spotlight: OpenCog Hyperon** — one of the most comprehensive open AGI frameworks (AtomSpace, PLN, MOSES). For integration with OpenCog Hyperon, see [HMP\_Hyperon\_Integration.md](docs/HMP_Hyperon_Integration.md) | 🔎 Project | 🧭 Description | | ------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------- | | 🧠🔥 [**OpenCog Hyperon**](https://github.com/opencog) | 🔬🔥 Symbolic-neural AGI framework with AtomSpace and hypergraph reasoning (AtomSpace). | | 🤖 [AutoGPT](https://github.com/Torantulino/Auto-GPT) | 🛠️ LLM-based autonomous agent framework. | | 🧒 [BabyAGI](https://github.com/yoheinakajima/babyagi) | 🛠️ Task-driven autonomous AGI loop. | | ☁️ [SkyMind](https://skymind.global) | 🔬 Distributed AI deployment platform. | | 🧪 [AetherCog (draft)](https://github.com/aethercog) | 🔬 Hypothetical agent cognition model. | | 💾 SHIMI | 🗃️ Hierarchical semantic memory with Merkle-DAG synchronization. | | 🤔 DEMENTIA-PLAN | 🔄 Multi-graph RAG planner with metacognitive self-reflection. | | 📔 TOBUGraph | 📚 Personal-context knowledge graph. | | 🧠📚 [LangChain Memory Hybrid](https://github.com/langchain-ai/langchain) | 🔍 Vector + graph long-term memory hybrid. | | ✉️ [FIPA-ACL / JADE](https://www.fipa.org/specs/fipa00061/) | 🤝 Standard multi-agent communication protocols.| | ### 📘 See also / Смотрите также: * [`AGI_Projects_Survey.md`](docs/AGI_Projects_Survey.md) — extended catalog of AGI and cognitive frameworks reviewed as part of HMP analysis. * ["On the Path to Superintelligence: From Agent Internet to Gravity Coding"](https://habr.com/ru/articles/939026/) — a recent overview of AI research (July 2025) --- ### 🗂️ Legend of Annotations: * 🔬 — research-grade * 🛠️ — engineering * 🔥 — particularly promising project *AGI stack integrating symbolic reasoning, probabilistic logic, and evolutionary learning. Widely regarded as one of the most complete open AGI initiatives.* * 🧠 — advanced symbolic/neural cognitive framework * 🤖 — AI agents * 🧒 — human-AI interaction * ☁️ — infrastructure * 🧪 — experimental or conceptual --- > ⚡ [AI friendly version docs (structured_md)](structured_md/index.md)
mradermacher/Jinx-gpt-oss-20b-GGUF
mradermacher
2025-09-11T21:17:01Z
0
0
transformers
[ "transformers", "gguf", "vllm", "en", "base_model:Jinx-org/Jinx-gpt-oss-20b", "base_model:quantized:Jinx-org/Jinx-gpt-oss-20b", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-09-11T19:57:11Z
--- base_model: Jinx-org/Jinx-gpt-oss-20b extra_gated_button_content: I've read and agree extra_gated_heading: You need to read and agree to the Disclaimer and User Agreementa to access this model. language: - en library_name: transformers license: apache-2.0 mradermacher: readme_rev: 1 quantized_by: mradermacher tags: - vllm --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> <!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: --> static quants of https://huggingface.co/Jinx-org/Jinx-gpt-oss-20b <!-- provided-files --> ***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Jinx-gpt-oss-20b-GGUF).*** weighted/imatrix quants are available at https://huggingface.co/mradermacher/Jinx-gpt-oss-20b-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Jinx-gpt-oss-20b-GGUF/resolve/main/Jinx-gpt-oss-20b.Q3_K_S.gguf) | Q3_K_S | 12.2 | | | [GGUF](https://huggingface.co/mradermacher/Jinx-gpt-oss-20b-GGUF/resolve/main/Jinx-gpt-oss-20b.Q2_K.gguf) | Q2_K | 12.2 | | | [GGUF](https://huggingface.co/mradermacher/Jinx-gpt-oss-20b-GGUF/resolve/main/Jinx-gpt-oss-20b.IQ4_XS.gguf) | IQ4_XS | 12.3 | | | [GGUF](https://huggingface.co/mradermacher/Jinx-gpt-oss-20b-GGUF/resolve/main/Jinx-gpt-oss-20b.Q3_K_M.gguf) | Q3_K_M | 13.0 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Jinx-gpt-oss-20b-GGUF/resolve/main/Jinx-gpt-oss-20b.Q3_K_L.gguf) | Q3_K_L | 13.4 | | | [GGUF](https://huggingface.co/mradermacher/Jinx-gpt-oss-20b-GGUF/resolve/main/Jinx-gpt-oss-20b.Q4_K_S.gguf) | Q4_K_S | 14.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Jinx-gpt-oss-20b-GGUF/resolve/main/Jinx-gpt-oss-20b.Q4_K_M.gguf) | Q4_K_M | 15.9 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Jinx-gpt-oss-20b-GGUF/resolve/main/Jinx-gpt-oss-20b.Q5_K_S.gguf) | Q5_K_S | 16.0 | | | [GGUF](https://huggingface.co/mradermacher/Jinx-gpt-oss-20b-GGUF/resolve/main/Jinx-gpt-oss-20b.Q5_K_M.gguf) | Q5_K_M | 17.0 | | | [GGUF](https://huggingface.co/mradermacher/Jinx-gpt-oss-20b-GGUF/resolve/main/Jinx-gpt-oss-20b.Q6_K.gguf) | Q6_K | 22.3 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Jinx-gpt-oss-20b-GGUF/resolve/main/Jinx-gpt-oss-20b.Q8_0.gguf) | Q8_0 | 22.4 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
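For a quick local test of one of the files listed above, the sketch below downloads the Q4_K_M quant and runs it with the llama-cpp-python bindings. This is only an illustration: it assumes your installed llama.cpp build supports this model's architecture, and the context size, thread count, and sampling settings are arbitrary placeholders.

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama  # pip install llama-cpp-python

# Download one of the quantized files listed in the table above.
gguf_path = hf_hub_download(
    repo_id="mradermacher/Jinx-gpt-oss-20b-GGUF",
    filename="Jinx-gpt-oss-20b.Q4_K_M.gguf",
)

# Load the model; context size and thread count are illustrative defaults.
llm = Llama(model_path=gguf_path, n_ctx=4096, n_threads=8)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Give me a one-sentence summary of GGUF."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```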
mradermacher/kyrgyz_umlaut_corrector-GGUF
mradermacher
2025-09-11T21:14:59Z
0
0
null
[ "gguf", "endpoints_compatible", "region:us", "conversational" ]
null
2025-09-11T21:09:31Z
<!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> <!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: --> static quants of https://huggingface.co/murat/kyrgyz_umlaut_corrector
mradermacher/text-dating-GGUF
mradermacher
2025-09-11T21:10:49Z
0
0
null
[ "gguf", "endpoints_compatible", "region:us", "feature-extraction" ]
null
2025-09-11T21:07:36Z
<!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> <!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: --> static quants of https://huggingface.co/ChangeIsKey/text-dating
Ben-Lustig/OpenRS-GRPO_Exp3
Ben-Lustig
2025-09-11T21:06:16Z
3
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "open-r1", "trl", "grpo", "conversational", "dataset:knoveleng/open-rs", "arxiv:2402.03300", "base_model:deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B", "base_model:finetune:deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-10T19:37:46Z
--- base_model: deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B datasets: knoveleng/open-rs library_name: transformers model_name: OpenRS-GRPO_Exp3 tags: - generated_from_trainer - open-r1 - trl - grpo licence: license --- # Model Card for OpenRS-GRPO_Exp3 This model is a fine-tuned version of [deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B) on the [knoveleng/open-rs](https://huggingface.co/datasets/knoveleng/open-rs) dataset. It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="Ben-Lustig/OpenRS-GRPO_Exp3", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/lustigben-bar-ilan-university/huggingface/runs/bxzqgd8a) This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.16.0.dev0 - Transformers: 4.49.0 - Pytorch: 2.5.1 - Datasets: 3.2.0 - Tokenizers: 0.21.4 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
raniero/dummy-cpu-027-repo
raniero
2025-09-11T21:05:50Z
0
0
peft
[ "peft", "safetensors", "lora", "bittensor", "subnet-56", "gradients", "it", "base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0", "base_model:adapter:TinyLlama/TinyLlama-1.1B-Chat-v1.0", "license:apache-2.0", "region:us" ]
null
2025-09-11T21:05:47Z
---
language:
- it
license: apache-2.0
library_name: peft
tags: [lora, bittensor, subnet-56, gradients]
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
---

# ARES56 — LoRA adapter

Upload ID: dummy-cpu-027_1757624746
upload_id: unknown_1757404904

Included files:
- `adapter_model.safetensors` — SHA256: `e5a00aa9991ac8a5ee3109844d84a55583bd20572ad3ffcd42792f3c36b183ad`
- `adapter_config.json` — SHA256: `4f39b39f151e0d31a8135b89599746fd2e06285a8594595589d7974f553af441`
- `tokenizer_config.json` — SHA256: `missing`
- `special_tokens_map.json` — SHA256: `missing`

Output generated via Axolotl (CPU / smoke run). No full checkpoint included.
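Since this repository ships only the LoRA adapter and no full checkpoint, it has to be applied on top of the TinyLlama base model. Below is a minimal PEFT sketch; because the adapter repo marks its tokenizer files as missing, the tokenizer is assumed to come from the base model.

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"
adapter_id = "raniero/dummy-cpu-027-repo"

# Tokenizer comes from the base model (the adapter repo lists its tokenizer files as missing).
tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id)

# Attach the LoRA weights from this repository.
model = PeftModel.from_pretrained(base_model, adapter_id)

# Italian prompt, since the card lists Italian as the target language.
inputs = tokenizer("Ciao! Come stai?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```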
csikasote/mms-1b-all-bemgen-combined-m25f100-42-DAT-2e-1
csikasote
2025-09-11T21:03:31Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "wav2vec2", "automatic-speech-recognition", "bemgen", "mms", "generated_from_trainer", "base_model:facebook/mms-1b-all", "base_model:finetune:facebook/mms-1b-all", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2025-09-11T20:21:10Z
--- library_name: transformers license: cc-by-nc-4.0 base_model: facebook/mms-1b-all tags: - automatic-speech-recognition - bemgen - mms - generated_from_trainer model-index: - name: mms-1b-all-bemgen-combined-m25f100-42-DAT-2e-1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mms-1b-all-bemgen-combined-m25f100-42-DAT-2e-1 This model is a fine-tuned version of [facebook/mms-1b-all](https://huggingface.co/facebook/mms-1b-all) on the BEMGEN - BEM dataset. It achieves the following results on the evaluation set: - Loss: 0.2980 - Cer: 0.0817 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 8 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - num_epochs: 30.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Cer | |:-------------:|:------:|:----:|:---------------:|:------:| | 1.8367 | 0.6711 | 100 | 2.9147 | 0.9999 | | 0.6511 | 1.3423 | 200 | 0.8476 | 0.2341 | | 0.4298 | 2.0134 | 300 | 0.3786 | 0.1069 | | 0.4356 | 2.6846 | 400 | 0.3422 | 0.0951 | | 0.4954 | 3.3557 | 500 | 0.3175 | 0.0855 | | 0.476 | 4.0268 | 600 | 0.3124 | 0.0863 | | 0.5168 | 4.6980 | 700 | 0.2997 | 0.0828 | | 0.5245 | 5.3691 | 800 | 0.2980 | 0.0817 | | 0.5277 | 6.0403 | 900 | 0.2968 | 0.0825 | | 0.5154 | 6.7114 | 1000 | 0.2983 | 0.0831 | | 0.5092 | 7.3826 | 1100 | 0.3012 | 0.0820 | | 0.517 | 8.0537 | 1200 | 0.3037 | 0.0840 | ### Framework versions - Transformers 4.53.0.dev0 - Pytorch 2.6.0+cu124 - Datasets 3.6.0 - Tokenizers 0.21.0
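The usage sections above are placeholders, so here is a minimal transcription sketch using the high-level ASR pipeline. It assumes the repository ships the usual processor files and that the input audio is 16 kHz mono; the file path is a placeholder.

```python
from transformers import pipeline

# Load the fine-tuned checkpoint with the automatic-speech-recognition pipeline.
asr = pipeline(
    "automatic-speech-recognition",
    model="csikasote/mms-1b-all-bemgen-combined-m25f100-42-DAT-2e-1",
)

# Placeholder path; the file should be 16 kHz mono audio.
result = asr("path/to/bemba_sample.wav")
print(result["text"])
```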
pm-25/llama3-8b-sft-grpo
pm-25
2025-09-11T21:01:10Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-11T20:58:32Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
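The card above is an unfilled template. Based only on the repository tags (a conversational Llama-derived causal LM), a generic starting point might look like the sketch below; the chat usage, dtype, and device placement are assumptions, not documented behavior.

```python
from transformers import pipeline

# Assumption: the repo contains a standard chat-capable causal LM, as the tags suggest.
generator = pipeline(
    "text-generation",
    model="pm-25/llama3-8b-sft-grpo",
    torch_dtype="auto",
    device_map="auto",
)

messages = [{"role": "user", "content": "Briefly explain what SFT followed by GRPO means."}]
output = generator(messages, max_new_tokens=128, return_full_text=False)
print(output[0]["generated_text"])
```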
tanvirahmedkhan/blockassist-bc-hardy_whiskered_mantis_1757624292
tanvirahmedkhan
2025-09-11T21:00:37Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "hardy whiskered mantis", "arxiv:2504.07091", "region:us" ]
null
2025-09-11T21:00:10Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - hardy whiskered mantis --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
bortunac/bge-reranker-v2-m3-Q4_K_M-GGUF
bortunac
2025-09-11T20:58:44Z
0
0
sentence-transformers
[ "sentence-transformers", "gguf", "transformers", "text-embeddings-inference", "llama-cpp", "gguf-my-repo", "text-classification", "multilingual", "base_model:BAAI/bge-reranker-v2-m3", "base_model:quantized:BAAI/bge-reranker-v2-m3", "license:apache-2.0", "endpoints_compatible", "region:us", "feature-extraction" ]
text-classification
2025-09-11T20:58:38Z
--- license: apache-2.0 pipeline_tag: text-classification tags: - transformers - sentence-transformers - text-embeddings-inference - llama-cpp - gguf-my-repo language: - multilingual base_model: BAAI/bge-reranker-v2-m3 --- # bortunac/bge-reranker-v2-m3-Q4_K_M-GGUF This model was converted to GGUF format from [`BAAI/bge-reranker-v2-m3`](https://huggingface.co/BAAI/bge-reranker-v2-m3) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/BAAI/bge-reranker-v2-m3) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo bortunac/bge-reranker-v2-m3-Q4_K_M-GGUF --hf-file bge-reranker-v2-m3-q4_k_m.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo bortunac/bge-reranker-v2-m3-Q4_K_M-GGUF --hf-file bge-reranker-v2-m3-q4_k_m.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo bortunac/bge-reranker-v2-m3-Q4_K_M-GGUF --hf-file bge-reranker-v2-m3-q4_k_m.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo bortunac/bge-reranker-v2-m3-Q4_K_M-GGUF --hf-file bge-reranker-v2-m3-q4_k_m.gguf -c 2048 ```
davidilag/wav2vec2-xls-r-300m-cpt-200h-FO-IS-NO-DK-SE-cp-best-faroese-100h-30-epochs_run9_2025-09-11
davidilag
2025-09-11T20:51:35Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2025-09-11T10:46:43Z
--- library_name: transformers tags: - generated_from_trainer metrics: - wer model-index: - name: wav2vec2-xls-r-300m-cpt-200h-FO-IS-NO-DK-SE-cp-best-faroese-100h-30-epochs_run9_2025-09-11 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-xls-r-300m-cpt-200h-FO-IS-NO-DK-SE-cp-best-faroese-100h-30-epochs_run9_2025-09-11 This model was trained from scratch on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1019 - Wer: 18.8219 - Cer: 4.0508 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5000 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | Cer | |:-------------:|:-------:|:-----:|:---------------:|:-------:|:-------:| | 3.2903 | 0.4877 | 1000 | 3.2182 | 100.0 | 99.7980 | | 0.7571 | 0.9754 | 2000 | 0.4684 | 43.4859 | 11.9670 | | 0.4077 | 1.4628 | 3000 | 0.2338 | 31.5108 | 7.9959 | | 0.3474 | 1.9505 | 4000 | 0.2022 | 29.4709 | 7.3773 | | 0.2736 | 2.4379 | 5000 | 0.1783 | 27.7702 | 6.8392 | | 0.2543 | 2.9256 | 6000 | 0.1628 | 27.1534 | 6.6191 | | 0.2051 | 3.4131 | 7000 | 0.1481 | 25.6862 | 6.1101 | | 0.2105 | 3.9008 | 8000 | 0.1392 | 24.9901 | 5.9224 | | 0.1741 | 4.3882 | 9000 | 0.1362 | 23.9855 | 5.6588 | | 0.1836 | 4.8759 | 10000 | 0.1366 | 23.9767 | 5.6857 | | 0.1477 | 5.3633 | 11000 | 0.1491 | 23.5406 | 5.5105 | | 0.1544 | 5.8510 | 12000 | 0.1275 | 23.6243 | 5.5050 | | 0.1346 | 6.3385 | 13000 | 0.1232 | 23.1793 | 5.3732 | | 0.1337 | 6.8261 | 14000 | 0.1282 | 22.6858 | 5.2351 | | 0.1161 | 7.3136 | 15000 | 0.1210 | 22.6065 | 5.2399 | | 0.1248 | 7.8013 | 16000 | 0.1180 | 22.6726 | 5.1791 | | 0.115 | 8.2887 | 17000 | 0.1174 | 22.0954 | 5.0663 | | 0.1124 | 8.7764 | 18000 | 0.1174 | 22.0293 | 5.0213 | | 0.0999 | 9.2638 | 19000 | 0.1145 | 21.6681 | 4.9700 | | 0.1065 | 9.7515 | 20000 | 0.1179 | 21.4962 | 4.8935 | | 0.0861 | 10.2390 | 21000 | 0.1190 | 21.3420 | 4.8580 | | 0.0898 | 10.7267 | 22000 | 0.1162 | 21.3464 | 4.8446 | | 0.0773 | 11.2141 | 23000 | 0.1182 | 21.1129 | 4.8043 | | 0.0781 | 11.7018 | 24000 | 0.1195 | 21.1702 | 4.8351 | | 0.0784 | 12.1892 | 25000 | 0.1067 | 20.7032 | 4.6623 | | 0.0688 | 12.6769 | 26000 | 0.1146 | 20.8662 | 4.7033 | | 0.0763 | 13.1644 | 27000 | 0.1081 | 20.7120 | 4.6386 | | 0.0664 | 13.6520 | 28000 | 0.1124 | 20.6151 | 4.6339 | | 0.065 | 14.1395 | 29000 | 0.1103 | 20.7428 | 4.6584 | | 0.0753 | 14.6272 | 30000 | 0.1041 | 20.3595 | 4.5084 | | 0.0624 | 15.1146 | 31000 | 0.1086 | 20.4564 | 4.5510 | | 0.0566 | 15.6023 | 32000 | 0.1096 | 20.2229 | 4.4579 | | 0.0638 | 16.0897 | 33000 | 0.1103 | 20.2934 | 4.4935 | | 0.0611 | 16.5774 | 34000 | 0.1067 | 20.1084 | 4.4366 | | 0.0506 | 17.0649 | 35000 | 0.1083 | 20.0379 | 4.4556 | | 0.05 | 17.5525 | 36000 | 0.1008 | 19.7559 | 4.3372 | | 0.0525 | 18.0400 | 37000 | 0.1034 | 19.7779 | 4.2978 | | 0.0429 
| 18.5277 | 38000 | 0.1121 | 19.6149 | 4.2970 | | 0.0531 | 19.0151 | 39000 | 0.1042 | 19.7163 | 4.3278 | | 0.0426 | 19.5028 | 40000 | 0.1085 | 19.6634 | 4.3009 | | 0.0405 | 19.9905 | 41000 | 0.1066 | 19.5400 | 4.2670 | | 0.0427 | 20.4779 | 42000 | 0.1074 | 19.4255 | 4.2149 | | 0.0355 | 20.9656 | 43000 | 0.1019 | 19.4211 | 4.1818 | | 0.0363 | 21.4531 | 44000 | 0.1053 | 19.3594 | 4.1873 | | 0.054 | 21.9407 | 45000 | 0.1034 | 19.2625 | 4.1778 | | 0.0416 | 22.4282 | 46000 | 0.1012 | 19.1215 | 4.1431 | | 0.0432 | 22.9159 | 47000 | 0.1047 | 19.1611 | 4.1510 | | 0.0432 | 23.4033 | 48000 | 0.1025 | 19.0378 | 4.1329 | | 0.0364 | 23.8910 | 49000 | 0.1041 | 19.1567 | 4.1329 | | 0.039 | 24.3784 | 50000 | 0.1043 | 19.1127 | 4.1210 | | 0.0372 | 24.8661 | 51000 | 0.1042 | 18.8747 | 4.0871 | | 0.032 | 25.3536 | 52000 | 0.1035 | 18.8703 | 4.0658 | | 0.0323 | 25.8413 | 53000 | 0.1018 | 18.8395 | 4.0461 | | 0.0365 | 26.3287 | 54000 | 0.1011 | 18.9276 | 4.0682 | | 0.0297 | 26.8164 | 55000 | 0.1012 | 18.8131 | 4.0516 | | 0.0363 | 27.3038 | 56000 | 0.1018 | 18.8395 | 4.0642 | | 0.0382 | 27.7915 | 57000 | 0.1009 | 18.8087 | 4.0477 | | 0.0379 | 28.2790 | 58000 | 0.1017 | 18.8307 | 4.0500 | | 0.0301 | 28.7666 | 59000 | 0.1017 | 18.8087 | 4.0492 | | 0.0397 | 29.2541 | 60000 | 0.1019 | 18.8175 | 4.0516 | | 0.0365 | 29.7418 | 61000 | 0.1019 | 18.8219 | 4.0508 | ### Framework versions - Transformers 4.56.1 - Pytorch 2.8.0+cu126 - Datasets 4.0.0 - Tokenizers 0.22.0
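As with the other auto-generated sections, no usage example is given, so here is a minimal greedy-decoding sketch using the processor/model API directly. It assumes the repository ships the usual Wav2Vec2 processor files; the audio path is a placeholder and the waveform is resampled to 16 kHz.

```python
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

model_id = "davidilag/wav2vec2-xls-r-300m-cpt-200h-FO-IS-NO-DK-SE-cp-best-faroese-100h-30-epochs_run9_2025-09-11"

processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)
model.eval()

# Placeholder path; take the first channel and resample to the 16 kHz rate used in training.
waveform, sample_rate = torchaudio.load("path/to/faroese_sample.wav")
if sample_rate != 16_000:
    waveform = torchaudio.functional.resample(waveform, sample_rate, 16_000)

inputs = processor(waveform[0].numpy(), sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

predicted_ids = logits.argmax(dim=-1)
print(processor.batch_decode(predicted_ids)[0])
```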
GabrielDasilva/entrepreneur-readiness-baseline
GabrielDasilva
2025-09-11T20:50:04Z
0
0
null
[ "scikit-learn", "regression", "dataset:entrepreneur-readiness-dataset", "license:mit", "region:us" ]
null
2025-09-02T23:43:02Z
--- tags: - scikit-learn - regression task_categories: - tabular-regression datasets: - entrepreneur-readiness-dataset metrics: - mae - rmse - r2 license: mit --- # Entrepreneur Readiness Model (Regression) This model predicts a numeric readiness score (1–10) based on features like income, expenses, savings, confidence, etc. Metrics are stored in `metrics.json` and displayed in the model card.
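The card does not say how the regressor is serialized, so the sketch below assumes a joblib-pickled scikit-learn estimator stored as `model.joblib` in the repository; both the filename and the feature columns are assumptions to verify against the actual repo files.

```python
import joblib
import pandas as pd
from huggingface_hub import hf_hub_download

# Assumed filename; verify against the actual repository contents.
model_path = hf_hub_download(
    repo_id="GabrielDasilva/entrepreneur-readiness-baseline",
    filename="model.joblib",
)
model = joblib.load(model_path)

# Hypothetical feature columns mirroring the card's description (income, expenses, savings, confidence).
sample = pd.DataFrame(
    [{"income": 4200, "expenses": 2800, "savings": 9000, "confidence": 7}]
)
print(model.predict(sample))  # readiness score on the 1-10 scale
```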
YaTharThShaRma999/voices
YaTharThShaRma999
2025-09-11T20:49:12Z
0
1
null
[ "license:apache-2.0", "region:us" ]
null
2025-02-16T23:12:20Z
--- license: apache-2.0 ---
JinwooT25/TaiThao-replicate
JinwooT25
2025-09-11T20:47:54Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-09-11T20:18:45Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: TAI --- # Taithao Replicate <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `TAI` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "TAI", "lora_weights": "https://huggingface.co/JinwooT25/TaiThao-replicate/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('JinwooT25/TaiThao-replicate', weight_name='lora.safetensors') image = pipeline('TAI').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Training details - Steps: 2008 - Learning rate: 0.0004 - LoRA rank: 16 ## Contribute your own examples You can use the [community tab](https://huggingface.co/JinwooT25/TaiThao-replicate/discussions) to add images that show off what you’ve made with this LoRA.
dtjtstsfiudrhrsh/Kshsfwhw
dtjtstsfiudrhrsh
2025-09-11T20:42:57Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-09-11T20:42:57Z
--- license: apache-2.0 ---
creedpwn3/blockassist
creedpwn3
2025-09-11T20:42:41Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "vicious fanged pheasant", "arxiv:2504.07091", "region:us" ]
null
2025-09-11T20:42:30Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - vicious fanged pheasant --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Nopanicjust/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-small_aquatic_frog
Nopanicjust
2025-09-11T20:42:15Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "rl-swarm", "genrl-swarm", "grpo", "gensyn", "I am small_aquatic_frog", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-11T17:10:32Z
--- library_name: transformers tags: - rl-swarm - genrl-swarm - grpo - gensyn - I am small_aquatic_frog --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
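The card above is an unfilled template. Going only by the tags (a Qwen2.5-0.5B-Instruct derivative trained with GRPO in the RL swarm), a generic chat-style sketch might look like this; treat the generation settings as placeholders.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Nopanicjust/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-small_aquatic_frog"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "What is 17 * 23? Think step by step."}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)

# Strip the prompt tokens so only the newly generated answer is printed.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```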
jimmydwdw/andi_marta
jimmydwdw
2025-09-11T20:39:44Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-09-11T20:10:25Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: marta --- # Andi_Marta <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `marta` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "marta", "lora_weights": "https://huggingface.co/jimmydwdw/andi_marta/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('jimmydwdw/andi_marta', weight_name='lora.safetensors') image = pipeline('marta').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Training details - Steps: 2500 - Learning rate: 0.0004 - LoRA rank: 16 ## Contribute your own examples You can use the [community tab](https://huggingface.co/jimmydwdw/andi_marta/discussions) to add images that show off what you’ve made with this LoRA.
AmpereComputing/ernie-4.5-a3b-21b-thinking-gguf
AmpereComputing
2025-09-11T20:38:48Z
0
0
null
[ "gguf", "base_model:baidu/ERNIE-4.5-21B-A3B-Thinking", "base_model:quantized:baidu/ERNIE-4.5-21B-A3B-Thinking", "endpoints_compatible", "region:us", "conversational" ]
null
2025-09-11T20:33:14Z
--- base_model: - baidu/ERNIE-4.5-21B-A3B-Thinking --- ![llama.cpp](https://user-images.githubusercontent.com/1991296/230134379-7181e485-c521-4d23-a0d6-f7b3b61ba524.png "llama.cpp") # Ampere® optimized llama.cpp ![llama.cpp pull count](https://img.shields.io/docker/pulls/amperecomputingai/llama.cpp?logo=meta&logoColor=black&label=llama.cpp&labelColor=violet&color=purple) Ampere® optimized build of [llama.cpp](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#llamacpp) with full support for rich collection of GGUF models available at HuggingFace: [GGUF models](https://huggingface.co/models?search=gguf) **For best results we recommend using models in our custom quantization formats available here: [AmpereComputing HF](https://huggingface.co/AmpereComputing)** This Docker image can be run on bare metal Ampere® CPUs and Ampere® based VMs available in the cloud. Release notes and binary executables are available on our [GitHub](https://github.com/AmpereComputingAI/llama.cpp/releases) ## Starting container Default entrypoint runs the server binary of llama.cpp, mimicking behavior of original llama.cpp server image: [docker image](https://github.com/ggerganov/llama.cpp/blob/master/.devops/llama-server.Dockerfile) To launch shell instead, do this: ```bash sudo docker run --privileged=true --name llama --entrypoint /bin/bash -it amperecomputingai/llama.cpp:latest ``` Quick start example will be presented at docker container launch: ![quick start](https://ampereaimodelzoo.s3.eu-central-1.amazonaws.com/pictures/Screenshot+2024-04-30+at+22.37.13.png "quick start") Make sure to visit us at [Ampere Solutions Portal](https://solutions.amperecomputing.com/solutions/ampere-ai)! ## Quantization Ampere® optimized build of llama.cpp provides support for two new quantization methods, Q4_K_4 and Q8R16, offering model size and perplexity similar to Q4_K and Q8_0, respectively, but performing up to 1.5-2x faster on inference. First, you'll need to convert the model to the GGUF format using [this script](https://github.com/ggerganov/llama.cpp/blob/master/convert_hf_to_gguf.py): ```bash python3 convert-hf-to-gguf.py [path to the original model] --outtype [f32, f16, bf16 or q8_0] --outfile [output path] ``` For example: ```bash python3 convert-hf-to-gguf.py path/to/llama2 --outtype f16 --outfile llama-2-7b-f16.gguf ``` Next, you can quantize the model using the following command: ```bash ./llama-quantize [input file] [output file] [quantization method] ``` For example: ```bash ./llama-quantize llama-2-7b-f16.gguf llama-2-7b-Q8R16.gguf Q8R16 ``` ## Support Please contact us at <ai-support@amperecomputing.com> ## LEGAL NOTICE By accessing, downloading or using this software and any required dependent software (the “Ampere AI Software”), you agree to the terms and conditions of the software license agreements for the Ampere AI Software, which may also include notices, disclaimers, or license terms for third party software included with the Ampere AI Software. Please refer to the [Ampere AI Software EULA v1.6](https://ampereaidevelop.s3.eu-central-1.amazonaws.com/Ampere+AI+Software+EULA+-+v1.6.pdf) or other similarly-named text file for additional details.
thomasavare/Qwen3-14B-unsloth-bnb-4bit-GRPO-prompt-2
thomasavare
2025-09-11T20:37:03Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "qwen3", "trl", "en", "base_model:unsloth/Qwen3-14B-unsloth-bnb-4bit", "base_model:finetune:unsloth/Qwen3-14B-unsloth-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-09-11T20:36:33Z
--- base_model: unsloth/Qwen3-14B-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - qwen3 - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** thomasavare - **License:** apache-2.0 - **Finetuned from model :** unsloth/Qwen3-14B-unsloth-bnb-4bit This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
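The card stops at the Unsloth attribution, so here is a hedged loading sketch using Unsloth's own loader. It assumes the repository contains loadable 4-bit model weights rather than only a LoRA adapter; if it turns out to hold just an adapter, load the base model first and attach the adapter with PEFT instead.

```python
from unsloth import FastLanguageModel

# Assumption: the repo holds loadable (4-bit) weights rather than only a LoRA adapter.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="thomasavare/Qwen3-14B-unsloth-bnb-4bit-GRPO-prompt-2",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable inference mode

messages = [{"role": "user", "content": "Give one sentence about what this model was tuned for."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=120)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```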
resproj007/uaspeech_male_sesame_1b_M04
resproj007
2025-09-11T20:30:01Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "csm", "trl", "en", "base_model:unsloth/csm-1b", "base_model:finetune:unsloth/csm-1b", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-09-11T20:29:48Z
--- base_model: unsloth/csm-1b tags: - text-generation-inference - transformers - unsloth - csm - trl license: apache-2.0 language: - en ---

# Uploaded model

- **Developed by:** resproj007
- **License:** apache-2.0
- **Finetuned from model:** unsloth/csm-1b

This CSM model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
garethpaul/gpt-oss-20b-fableflux
garethpaul
2025-09-11T20:24:52Z
11
0
transformers
[ "transformers", "safetensors", "gpt_oss", "text-generation", "mxfp4", "moe", "children-stories", "fableflux", "conversational", "en", "dataset:garethpaul/children-stories-dataset", "base_model:openai/gpt-oss-20b", "base_model:finetune:openai/gpt-oss-20b", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-09-11T02:09:54Z
--- base_model: openai/gpt-oss-20b library_name: transformers pipeline_tag: text-generation license: mit language: - en tags: - gpt_oss - mxfp4 - safetensors - moe - children-stories - fableflux datasets: - garethpaul/children-stories-dataset model-index: - name: gpt-oss-20b-fableflux results: [] ---

# 🪄 GPT-OSS 20B — FableFlux (MXFP4)

This is a **merged and re-exported version** of `gpt-oss-20b-children-qlora`, fine-tuned on [`garethpaul/children-stories-dataset`](https://huggingface.co/datasets/garethpaul/children-stories-dataset) to generate **structured JSON bedtime stories**.

- **Base model**: [`openai/gpt-oss-20b`](https://huggingface.co/openai/gpt-oss-20b)
- **Format**: MXFP4 quantized (safetensors)
- **Context length**: 8192 tokens
- **License**: MIT
- **Author**: [@garethpaul](https://huggingface.co/garethpaul)

---

## ✨ What it does

Produces **structured JSON outputs** in the form:

```json
{
  "title": "string",
  "characters": ["string"],
  "setting": "string",
  "story": "string (500–800 words, bedtime tone, positive ending)",
  "moral": "string"
}
```

A small validation sketch for this schema is included at the end of this card.

## 🚀 Usage

**Transformers (CPU/GPU)**

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "garethpaul/gpt-oss-20b-fableflux"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    torch_dtype="bfloat16"
)

messages = [
    {"role": "system", "content": "You are StoryWeaver. Always respond in valid JSON with keys: {title, characters, setting, story, moral}."},
    {"role": "user", "content": "Tell me a bedtime story about a brave little car."}
]

prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=700, temperature=0.7, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

**vLLM (recommended for serving)**

```bash
pip install vllm==0.10.1+gptoss --extra-index-url https://wheels.vllm.ai/gpt-oss/

vllm serve garethpaul/gpt-oss-20b-fableflux \
  --max-model-len 8192 \
  --tensor-parallel-size 1
```

Then query with the OpenAI API format:

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

resp = client.chat.completions.create(
    model="garethpaul/gpt-oss-20b-fableflux",
    messages=[
        {"role": "system", "content": "You are StoryWeaver. Respond ONLY in JSON."},
        {"role": "user", "content": "Tell me a bedtime story about a ballet dancer named Jones."}
    ]
)
print(resp.choices[0].message.content)
```

## 🛠 Training Details

- Method: QLoRA → merged → MXFP4 re-export
- Dataset: garethpaul/children-stories-dataset
- LoRA config: rank=8, α=16, dropout=0.05
- Frameworks: transformers, peft, trl
- Merged to: BF16 → MXFP4 (vLLM-compatible safetensors)

## 📚 Related

- openai/gpt-oss-20b — base model
- garethpaul/gpt-oss-20b-children-qlora — adapter repo
- garethpaul/children-stories-dataset — training dataset
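## ✅ Validating the output (sketch)

The card above specifies the JSON schema but does not show a check for it. The snippet below is a small sketch, not part of the original training or serving code, of how a caller might validate a generated story before using it downstream; the helper name and sample payload are illustrative only.

```python
# Minimal sketch: validate a generated story against the schema described above.
import json

REQUIRED_KEYS = {"title", "characters", "setting", "story", "moral"}

def validate_story(raw_text: str) -> dict:
    """Parse model output and check the FableFlux story schema."""
    story = json.loads(raw_text)  # raises json.JSONDecodeError on invalid JSON
    missing = REQUIRED_KEYS - story.keys()
    if missing:
        raise ValueError(f"missing keys: {sorted(missing)}")
    if not isinstance(story["characters"], list):
        raise ValueError("'characters' must be a list")
    words = len(story["story"].split())
    if not 500 <= words <= 800:
        print(f"warning: story is {words} words, outside the 500-800 target")
    return story

# Illustrative call with a stub payload:
sample = (
    '{"title": "The Brave Little Car", "characters": ["Pip"], '
    '"setting": "a quiet seaside town", "story": "Once upon a time...", '
    '"moral": "Courage grows a little every day."}'
)
print(validate_story(sample)["title"])
```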
garethpaul/gpt-oss-20b-children-qlora
garethpaul
2025-09-11T20:21:42Z
36
0
peft
[ "peft", "safetensors", "qlora", "children-stories", "json-output", "moe", "gpt_oss", "text-generation", "conversational", "en", "dataset:garethpaul/children-stories-dataset", "base_model:openai/gpt-oss-20b", "base_model:adapter:openai/gpt-oss-20b", "license:mit", "region:us" ]
text-generation
2025-09-08T04:56:12Z
--- base_model: openai/gpt-oss-20b library_name: peft license: mit tags: - qlora - peft - children-stories - json-output - moe - gpt_oss datasets: - garethpaul/children-stories-dataset pipeline_tag: text-generation language: - en model-index: - name: gpt-oss-20b-children-qlora results: [] --- # GPT-OSS 20B — Children QLoRA (Adapter) QLoRA adapter for **`openai/gpt-oss-20b`** fine-tuned on **children’s stories** to produce **structured JSON** outputs suitable for bedtime content and educational demos. - **Author**: @garethpaul - **Base model**: [`openai/gpt-oss-20b`](https://huggingface.co/openai/gpt-oss-20b) - **Training data**: [`garethpaul/children-stories-dataset`](https://huggingface.co/datasets/garethpaul/children-stories-dataset) - **Format**: PEFT LoRA adapter (not full weights) - **License**: MIT --- ## ✨ What this model does Generates friendly, positive **children’s bedtime stories** with the following JSON schema: ```json { "title": "string", "characters": ["string"], "setting": "string", "story": "string (500–800 words, bedtime tone)", "moral": "string" } ``` ## 🚀 Quickstart (Transformers + PEFT) Note: vLLM’s GPT-OSS backend does not (currently) load LoRA for GptOssForCausalLM. Use transformers+peft for the adapter; or merge + export MXFP4 for vLLM. ``` import torch from transformers import AutoTokenizer, AutoModelForCausalLM from peft import PeftModel BASE = "openai/gpt-oss-20b" ADAPTER = "garethpaul/gpt-oss-20b-children-qlora" tokenizer = AutoTokenizer.from_pretrained(BASE) if tokenizer.pad_token is None: tokenizer.pad_token = tokenizer.eos_token ## Load base, then attach adapter model = AutoModelForCausalLM.from_pretrained( BASE, torch_dtype=torch.bfloat16, device_map="auto", ) model = PeftModel.from_pretrained(model, ADAPTER) model.eval() system = "You are StoryWeaver. Respond ONLY in valid JSON with keys: {title, characters, setting, story, moral}." messages = [ {"role": "system", "content": system}, {"role": "user", "content": "Tell me a bedtime story about a brave little car."} ] # Use chat template → then tokenize to get attention_mask prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False) enc = tokenizer(prompt, return_tensors="pt", return_attention_mask=True).to(model.device) with torch.no_grad(): out = model.generate(**enc, max_new_tokens=700, temperature=0.7, top_p=0.9) print(tokenizer.decode(out[0], skip_special_tokens=True)) ``` ## 🧩 How to merge (optional) If you want a single checkpoint (e.g., to share without PEFT): ``` from transformers import AutoModelForCausalLM, AutoTokenizer from peft import PeftModel BASE = "openai/gpt-oss-20b" ADAPTER = "garethpaul/gpt-oss-20b-children-qlora" SAVE_TO = "./gpt-oss-20b-children-merged" tok = AutoTokenizer.from_pretrained(BASE) model = AutoModelForCausalLM.from_pretrained(BASE, torch_dtype="bfloat16", device_map="auto") model = PeftModel.from_pretrained(model, ADAPTER) merged = model.merge_and_unload() merged.save_pretrained(SAVE_TO) tok.save_pretrained(SAVE_TO) ``` For vLLM GPT-OSS serving: re-export the merged weights to MXFP4 (GPT-OSS layout) before hosting. ## ✅ Intended uses Generating kid-safe bedtime stories with clear morals. Producing structured JSON for downstream apps (mobile readers, voice apps, curriculum tools). 
## 🔧 Training details

- Method: QLoRA (PEFT), r=8, lora_alpha=16, lora_dropout≈0.05, bias=none
- Targets: GPT-OSS linear layers (MoE aware); started with target_modules="all-linear"
- Base: openai/gpt-oss-20b (MoE; attention unquantized; MXFP4 dequantize for training)
- Frameworks: transformers, peft, trl (SFTTrainer)
- Objective: Supervised fine-tuning to produce JSON stories (500–800 words)
- Typical SFT args (example): bf16=True, gradient_checkpointing=True, batch size 1 with grad accumulation, cosine schedule with min lr rate 0.1, context up to 2048.

## 📚 Data

- Primary: garethpaul/children-stories-dataset (human + synthetic)
- Formatting: chat messages prompting JSON schema; bedtime tone; positive ending.

## 🔬 Evaluation (qualitative)

Manual spot-checks for:
- JSON validity and required keys.
- Word count (500–800) adherence.
- Bedtime tone & positive moral.

(If you later log structured evals—JSON pass rate, average word count, toxicity checks—add them under model-index.results.)

## 🏗 Technical specs

- Architecture: GPT-OSS 20B MoE (low active params)
- Context window: 8192 tokens (prompt + output)
- Adapter size: ~16MB (safetensors, PEFT)
- Framework versions:
  - transformers ≈ 4.56
  - peft ≈ 0.12
  - trl ≈ 0.9
  - accelerate ≈ 0.34

## 📄 Citation

```bibtex
@misc{gpt-oss-20b-children-qlora,
  author = {Gareth Paul},
  title = {GPT-OSS 20B — Children QLoRA (Adapter)},
  year = {2025},
  howpublished = {\url{https://huggingface.co/garethpaul/gpt-oss-20b-children-qlora}}
}
```

Contact: ping @garethpaul on the Hub.
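## 🧪 LoRA config as code (sketch)

The QLoRA hyperparameters listed under "Training details" above map onto a `peft` configuration roughly as follows. This is a sketch for illustration, not the exact training script; in particular, `task_type` and the final `target_modules` choice are assumptions consistent with the description.

```python
# Minimal sketch: the adapter configuration described above, expressed as a
# peft LoraConfig. Not the original training script.
from peft import LoraConfig

lora_config = LoraConfig(
    r=8,                          # LoRA rank
    lora_alpha=16,                # scaling factor
    lora_dropout=0.05,            # dropout on LoRA layers
    bias="none",
    target_modules="all-linear",  # wrap every linear layer, as noted above
    task_type="CAUSAL_LM",
)
print(lora_config)

# During training, this config would be attached to the dequantized base model,
# e.g. model = peft.get_peft_model(base_model, lora_config), before SFTTrainer.
```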
RonnieMacZero/modelo_kafka_epoca1
RonnieMacZero
2025-09-11T20:20:13Z
0
0
peft
[ "peft", "safetensors", "base_model:adapter:DeepESP/gpt2-spanish", "lora", "transformers", "text-generation", "arxiv:1910.09700", "base_model:DeepESP/gpt2-spanish", "region:us" ]
text-generation
2025-09-11T20:20:09Z
--- base_model: DeepESP/gpt2-spanish library_name: peft pipeline_tag: text-generation tags: - base_model:adapter:DeepESP/gpt2-spanish - lora - transformers --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.17.1
the-acorn-ai/spiral-qwen3-4b-simple-negotiation-step00384
the-acorn-ai
2025-09-11T20:20:05Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "spiral", "self-play", "reinforcement-learning", "multi-agent", "conversational", "en", "base_model:Qwen/Qwen3-8B-Base", "base_model:finetune:Qwen/Qwen3-8B-Base", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-11T20:19:36Z
--- base_model: Qwen/Qwen3-8B-Base license: apache-2.0 language: - en library_name: transformers pipeline_tag: text-generation tags: - spiral - self-play - reinforcement-learning - qwen3 - multi-agent --- # SPIRAL Qwen3-8B Multi-Agent Model This model was trained using the SPIRAL (Self-Play Iterative Reinforcement learning for Adaptation and Learning) framework. ## Model Details - **Base Model**: Qwen/Qwen3-8B-Base - **Training Framework**: SPIRAL - **Checkpoint**: step_00384 - **Model Size**: 8B parameters - **Training Date**: 2025-09-11 ## Training Configuration The model was trained with self-play on multiple environments: - KuhnPoker-v1 - TicTacToe-v0 - SimpleNegotiation-v1 ### Training Parameters ```json { "learning_rate": "1e-6", "train_batch_size": 128, "num_ppo_epochs": 2, "temperature": 1.0, "max_model_len": 16384, "environments": [ "KuhnPoker-v1", "TicTacToe-v0", "SimpleNegotiation-v1" ], "base_model": "Qwen/Qwen3-8B-Base", "framework": "SPIRAL" } ``` ## Usage ```python from transformers import AutoTokenizer, AutoModelForCausalLM import torch tokenizer = AutoTokenizer.from_pretrained("the-acorn-ai/spiral-qwen3-4b-simple-negotiation-step00384") model = AutoModelForCausalLM.from_pretrained( "the-acorn-ai/spiral-qwen3-4b-simple-negotiation-step00384", torch_dtype=torch.bfloat16, device_map="auto" ) # Generate text inputs = tokenizer("Your prompt here", return_tensors="pt") outputs = model.generate(**inputs, max_length=100) response = tokenizer.decode(outputs[0], skip_special_tokens=True) ``` ## License This model is licensed under the Apache License 2.0.
the-acorn-ai/spiral-qwen3-4b-simple-negotiation-step00352
the-acorn-ai
2025-09-11T20:19:35Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "spiral", "self-play", "reinforcement-learning", "multi-agent", "conversational", "en", "base_model:Qwen/Qwen3-8B-Base", "base_model:finetune:Qwen/Qwen3-8B-Base", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-11T20:19:05Z
--- base_model: Qwen/Qwen3-8B-Base license: apache-2.0 language: - en library_name: transformers pipeline_tag: text-generation tags: - spiral - self-play - reinforcement-learning - qwen3 - multi-agent --- # SPIRAL Qwen3-8B Multi-Agent Model This model was trained using the SPIRAL (Self-Play Iterative Reinforcement learning for Adaptation and Learning) framework. ## Model Details - **Base Model**: Qwen/Qwen3-8B-Base - **Training Framework**: SPIRAL - **Checkpoint**: step_00352 - **Model Size**: 8B parameters - **Training Date**: 2025-09-11 ## Training Configuration The model was trained with self-play on multiple environments: - KuhnPoker-v1 - TicTacToe-v0 - SimpleNegotiation-v1 ### Training Parameters ```json { "learning_rate": "1e-6", "train_batch_size": 128, "num_ppo_epochs": 2, "temperature": 1.0, "max_model_len": 16384, "environments": [ "KuhnPoker-v1", "TicTacToe-v0", "SimpleNegotiation-v1" ], "base_model": "Qwen/Qwen3-8B-Base", "framework": "SPIRAL" } ``` ## Usage ```python from transformers import AutoTokenizer, AutoModelForCausalLM import torch tokenizer = AutoTokenizer.from_pretrained("the-acorn-ai/spiral-qwen3-4b-simple-negotiation-step00352") model = AutoModelForCausalLM.from_pretrained( "the-acorn-ai/spiral-qwen3-4b-simple-negotiation-step00352", torch_dtype=torch.bfloat16, device_map="auto" ) # Generate text inputs = tokenizer("Your prompt here", return_tensors="pt") outputs = model.generate(**inputs, max_length=100) response = tokenizer.decode(outputs[0], skip_special_tokens=True) ``` ## License This model is licensed under the Apache License 2.0.
the-acorn-ai/spiral-qwen3-4b-simple-negotiation-step00320
the-acorn-ai
2025-09-11T20:19:03Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "spiral", "self-play", "reinforcement-learning", "multi-agent", "conversational", "en", "base_model:Qwen/Qwen3-8B-Base", "base_model:finetune:Qwen/Qwen3-8B-Base", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-11T20:18:36Z
--- base_model: Qwen/Qwen3-8B-Base license: apache-2.0 language: - en library_name: transformers pipeline_tag: text-generation tags: - spiral - self-play - reinforcement-learning - qwen3 - multi-agent --- # SPIRAL Qwen3-8B Multi-Agent Model This model was trained using the SPIRAL (Self-Play Iterative Reinforcement learning for Adaptation and Learning) framework. ## Model Details - **Base Model**: Qwen/Qwen3-8B-Base - **Training Framework**: SPIRAL - **Checkpoint**: step_00320 - **Model Size**: 8B parameters - **Training Date**: 2025-09-11 ## Training Configuration The model was trained with self-play on multiple environments: - KuhnPoker-v1 - TicTacToe-v0 - SimpleNegotiation-v1 ### Training Parameters ```json { "learning_rate": "1e-6", "train_batch_size": 128, "num_ppo_epochs": 2, "temperature": 1.0, "max_model_len": 16384, "environments": [ "KuhnPoker-v1", "TicTacToe-v0", "SimpleNegotiation-v1" ], "base_model": "Qwen/Qwen3-8B-Base", "framework": "SPIRAL" } ``` ## Usage ```python from transformers import AutoTokenizer, AutoModelForCausalLM import torch tokenizer = AutoTokenizer.from_pretrained("the-acorn-ai/spiral-qwen3-4b-simple-negotiation-step00320") model = AutoModelForCausalLM.from_pretrained( "the-acorn-ai/spiral-qwen3-4b-simple-negotiation-step00320", torch_dtype=torch.bfloat16, device_map="auto" ) # Generate text inputs = tokenizer("Your prompt here", return_tensors="pt") outputs = model.generate(**inputs, max_length=100) response = tokenizer.decode(outputs[0], skip_special_tokens=True) ``` ## License This model is licensed under the Apache License 2.0.
the-acorn-ai/spiral-qwen3-4b-simple-negotiation-step00288
the-acorn-ai
2025-09-11T20:18:34Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "spiral", "self-play", "reinforcement-learning", "multi-agent", "conversational", "en", "base_model:Qwen/Qwen3-8B-Base", "base_model:finetune:Qwen/Qwen3-8B-Base", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-11T20:18:05Z
--- base_model: Qwen/Qwen3-8B-Base license: apache-2.0 language: - en library_name: transformers pipeline_tag: text-generation tags: - spiral - self-play - reinforcement-learning - qwen3 - multi-agent --- # SPIRAL Qwen3-8B Multi-Agent Model This model was trained using the SPIRAL (Self-Play Iterative Reinforcement learning for Adaptation and Learning) framework. ## Model Details - **Base Model**: Qwen/Qwen3-8B-Base - **Training Framework**: SPIRAL - **Checkpoint**: step_00288 - **Model Size**: 8B parameters - **Training Date**: 2025-09-11 ## Training Configuration The model was trained with self-play on multiple environments: - KuhnPoker-v1 - TicTacToe-v0 - SimpleNegotiation-v1 ### Training Parameters ```json { "learning_rate": "1e-6", "train_batch_size": 128, "num_ppo_epochs": 2, "temperature": 1.0, "max_model_len": 16384, "environments": [ "KuhnPoker-v1", "TicTacToe-v0", "SimpleNegotiation-v1" ], "base_model": "Qwen/Qwen3-8B-Base", "framework": "SPIRAL" } ``` ## Usage ```python from transformers import AutoTokenizer, AutoModelForCausalLM import torch tokenizer = AutoTokenizer.from_pretrained("the-acorn-ai/spiral-qwen3-4b-simple-negotiation-step00288") model = AutoModelForCausalLM.from_pretrained( "the-acorn-ai/spiral-qwen3-4b-simple-negotiation-step00288", torch_dtype=torch.bfloat16, device_map="auto" ) # Generate text inputs = tokenizer("Your prompt here", return_tensors="pt") outputs = model.generate(**inputs, max_length=100) response = tokenizer.decode(outputs[0], skip_special_tokens=True) ``` ## License This model is licensed under the Apache License 2.0.
falcon0125/radiology-transcription-turbo-r16
falcon0125
2025-09-11T20:18:22Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-09-11T20:18:16Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
AquilaX-AI/Review
AquilaX-AI
2025-09-11T20:18:14Z
39
0
transformers
[ "transformers", "safetensors", "distilbert", "text-classification", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-02-03T10:27:08Z
--- library_name: transformers license: apache-2.0 --- ## Inference ```python from transformers import AutoTokenizer, AutoModelForSequenceClassification import time import torch device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") model = AutoModelForSequenceClassification.from_pretrained("AquilaX-AI/Review").to(device) tokenizer = AutoTokenizer.from_pretrained("AquilaX-AI/Review") partial_code = "if (userInput.length > 255) { return; }" # Example snippet of insecure code cwe_id = "CWE-22" # Example CWE ID for Path Traversal cwe_name = "Improper Limitation of a Pathname to a Restricted Directory" # Example CWE Name affected_line = "42" # Example line number in the code file file_name = "utils/inputValidator.js" # Example file name org_id = "12345" # Example organization ID start = time.time() prompt = f"""partial_code: {partial_code} , cwe_id: {cwe_id} , cwe_name: {cwe_name}, affected_line: {affected_line},file_name: {file_name}, org_id: {org_id}""" inputs = tokenizer(prompt, return_tensors="pt").to(device) with torch.no_grad(): logits = model(**inputs).logits predicted_class_id = logits.argmax().item() predicted_class = model.config.id2label[predicted_class_id] print(predicted_class) print(time.time() - start) ```
the-acorn-ai/spiral-qwen3-4b-simple-negotiation-step00256
the-acorn-ai
2025-09-11T20:18:04Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "spiral", "self-play", "reinforcement-learning", "multi-agent", "conversational", "en", "base_model:Qwen/Qwen3-8B-Base", "base_model:finetune:Qwen/Qwen3-8B-Base", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-11T20:17:36Z
--- base_model: Qwen/Qwen3-8B-Base license: apache-2.0 language: - en library_name: transformers pipeline_tag: text-generation tags: - spiral - self-play - reinforcement-learning - qwen3 - multi-agent --- # SPIRAL Qwen3-8B Multi-Agent Model This model was trained using the SPIRAL (Self-Play Iterative Reinforcement learning for Adaptation and Learning) framework. ## Model Details - **Base Model**: Qwen/Qwen3-8B-Base - **Training Framework**: SPIRAL - **Checkpoint**: step_00256 - **Model Size**: 8B parameters - **Training Date**: 2025-09-11 ## Training Configuration The model was trained with self-play on multiple environments: - KuhnPoker-v1 - TicTacToe-v0 - SimpleNegotiation-v1 ### Training Parameters ```json { "learning_rate": "1e-6", "train_batch_size": 128, "num_ppo_epochs": 2, "temperature": 1.0, "max_model_len": 16384, "environments": [ "KuhnPoker-v1", "TicTacToe-v0", "SimpleNegotiation-v1" ], "base_model": "Qwen/Qwen3-8B-Base", "framework": "SPIRAL" } ``` ## Usage ```python from transformers import AutoTokenizer, AutoModelForCausalLM import torch tokenizer = AutoTokenizer.from_pretrained("the-acorn-ai/spiral-qwen3-4b-simple-negotiation-step00256") model = AutoModelForCausalLM.from_pretrained( "the-acorn-ai/spiral-qwen3-4b-simple-negotiation-step00256", torch_dtype=torch.bfloat16, device_map="auto" ) # Generate text inputs = tokenizer("Your prompt here", return_tensors="pt") outputs = model.generate(**inputs, max_length=100) response = tokenizer.decode(outputs[0], skip_special_tokens=True) ``` ## License This model is licensed under the Apache License 2.0.
chaitnya26/Qwen2.5-Omni-7B-fork
chaitnya26
2025-09-11T20:17:51Z
0
0
transformers
[ "transformers", "safetensors", "qwen2_5_omni", "multimodal", "any-to-any", "en", "arxiv:2503.20215", "license:other", "endpoints_compatible", "region:us" ]
any-to-any
2025-09-11T20:17:51Z
--- license: other license_name: apache-2.0 license_link: https://huggingface.co/Qwen/Qwen2.5-Omni-7B/blob/main/LICENSE language: - en tags: - multimodal library_name: transformers pipeline_tag: any-to-any --- # Qwen2.5-Omni <a href="https://chat.qwen.ai/" target="_blank" style="margin: 2px;"> <img alt="Chat" src="https://img.shields.io/badge/%F0%9F%92%9C%EF%B8%8F%20Qwen%20Chat%20-536af5" style="display: inline-block; vertical-align: middle;"/> </a> ## Overview ### Introduction Qwen2.5-Omni is an end-to-end multimodal model designed to perceive diverse modalities, including text, images, audio, and video, while simultaneously generating text and natural speech responses in a streaming manner. <p align="center"> <img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2.5-Omni/qwen_omni.png" width="80%"/> <p> ### Key Features * **Omni and Novel Architecture**: We propose Thinker-Talker architecture, an end-to-end multimodal model designed to perceive diverse modalities, including text, images, audio, and video, while simultaneously generating text and natural speech responses in a streaming manner. We propose a novel position embedding, named TMRoPE (Time-aligned Multimodal RoPE), to synchronize the timestamps of video inputs with audio. * **Real-Time Voice and Video Chat**: Architecture designed for fully real-time interactions, supporting chunked input and immediate output. * **Natural and Robust Speech Generation**: Surpassing many existing streaming and non-streaming alternatives, demonstrating superior robustness and naturalness in speech generation. * **Strong Performance Across Modalities**: Exhibiting exceptional performance across all modalities when benchmarked against similarly sized single-modality models. Qwen2.5-Omni outperforms the similarly sized Qwen2-Audio in audio capabilities and achieves comparable performance to Qwen2.5-VL-7B. * **Excellent End-to-End Speech Instruction Following**: Qwen2.5-Omni shows performance in end-to-end speech instruction following that rivals its effectiveness with text inputs, evidenced by benchmarks such as MMLU and GSM8K. ### Model Architecture <p align="center"> <img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2.5-Omni/overview.png" width="80%"/> <p> ### Performance We conducted a comprehensive evaluation of Qwen2.5-Omni, which demonstrates strong performance across all modalities when compared to similarly sized single-modality models and closed-source models like Qwen2.5-VL-7B, Qwen2-Audio, and Gemini-1.5-pro. In tasks requiring the integration of multiple modalities, such as OmniBench, Qwen2.5-Omni achieves state-of-the-art performance. Furthermore, in single-modality tasks, it excels in areas including speech recognition (Common Voice), translation (CoVoST2), audio understanding (MMAU), image reasoning (MMMU, MMStar), video understanding (MVBench), and speech generation (Seed-tts-eval and subjective naturalness). 
<p align="center"> <img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2.5-Omni/bar.png" width="80%"/> <p> <details> <summary>Multimodality -> Text</summary> <table class="tg"><thead> <tr> <th class="tg-0lax">Datasets</th> <th class="tg-0lax">Model</th> <th class="tg-0lax">Performance</th> </tr></thead> <tbody> <tr> <td class="tg-0lax" rowspan="10">OmniBench<br>Speech | Sound Event | Music | Avg</td> <td class="tg-0lax">Gemini-1.5-Pro</td> <td class="tg-0lax">42.67%|42.26%|46.23%|42.91%</td> </tr> <tr> <td class="tg-0lax">MIO-Instruct</td> <td class="tg-0lax">36.96%|33.58%|11.32%|33.80%</td> </tr> <tr> <td class="tg-0lax">AnyGPT (7B)</td> <td class="tg-0lax">17.77%|20.75%|13.21%|18.04%</td> </tr> <tr> <td class="tg-0lax">video-SALMONN</td> <td class="tg-0lax">34.11%|31.70%|<strong>56.60%</strong>|35.64%</td> </tr> <tr> <td class="tg-0lax">UnifiedIO2-xlarge</td> <td class="tg-0lax">39.56%|36.98%|29.25%|38.00%</td> </tr> <tr> <td class="tg-0lax">UnifiedIO2-xxlarge</td> <td class="tg-0lax">34.24%|36.98%|24.53%|33.98%</td> </tr> <tr> <td class="tg-0lax">MiniCPM-o</td> <td class="tg-0lax">-|-|-|40.50%</td> </tr> <tr> <td class="tg-0lax">Baichuan-Omni-1.5</td> <td class="tg-0lax">-|-|-|42.90%</td> </tr> <tr> <td class="tg-0lax">Qwen2.5-Omni-3B</td> <td class="tg-0lax">52.14%|52.08%|52.83%|52.19%</td> </tr> <tr> <td class="tg-0lax">Qwen2.5-Omni-7B</td> <td class="tg-0lax"><strong>55.25%</strong>|<strong>60.00%</strong>|52.83%|<strong>56.13%</strong></td> </tr> </tbody></table> </details> <details> <summary>Audio -> Text</summary> <table class="tg"><thead> <tr> <th class="tg-0lax">Datasets</th> <th class="tg-0lax">Model</th> <th class="tg-0lax">Performance</th> </tr></thead> <tbody> <tr> <td class="tg-9j4x" colspan="3">ASR</td> </tr> <tr> <td class="tg-0lax" rowspan="12">Librispeech<br>dev-clean | dev other | test-clean | test-other</td> <td class="tg-0lax">SALMONN</td> <td class="tg-0lax">-|-|2.1|4.9</td> </tr> <tr> <td class="tg-0lax">SpeechVerse</td> <td class="tg-0lax">-|-|2.1|4.4</td> </tr> <tr> <td class="tg-0lax">Whisper-large-v3</td> <td class="tg-0lax">-|-|1.8|3.6</td> </tr> <tr> <td class="tg-0lax">Llama-3-8B</td> <td class="tg-0lax">-|-|-|3.4</td> </tr> <tr> <td class="tg-0lax">Llama-3-70B</td> <td class="tg-0lax">-|-|-|3.1</td> </tr> <tr> <td class="tg-0lax">Seed-ASR-Multilingual</td> <td class="tg-0lax">-|-|<strong>1.6</strong>|<strong>2.8</strong></td> </tr> <tr> <td class="tg-0lax">MiniCPM-o</td> <td class="tg-0lax">-|-|1.7|-</td> </tr> <tr> <td class="tg-0lax">MinMo</td> <td class="tg-0lax">-|-|1.7|3.9</td> </tr> <tr> <td class="tg-0lax">Qwen-Audio</td> <td class="tg-0lax">1.8|4.0|2.0|4.2</td> </tr> <tr> <td class="tg-0lax">Qwen2-Audio</td> <td class="tg-0lax"><strong>1.3</strong>|<strong>3.4</strong>|<strong>1.6</strong>|3.6</td> </tr> <tr> <td class="tg-0lax">Qwen2.5-Omni-3B</td> <td class="tg-0lax">2.0|4.1|2.2|4.5</td> </tr> <tr> <td class="tg-0lax">Qwen2.5-Omni-7B</td> <td class="tg-0lax">1.6|3.5|1.8|3.4</td> </tr> <tr> <td class="tg-0lax" rowspan="5">Common Voice 15<br>en | zh | yue | fr</td> <td class="tg-0lax">Whisper-large-v3</td> <td class="tg-0lax">9.3|12.8|10.9|10.8</td> </tr> <tr> <td class="tg-0lax">MinMo</td> <td class="tg-0lax">7.9|6.3|6.4|8.5</td> </tr> <tr> <td class="tg-0lax">Qwen2-Audio</td> <td class="tg-0lax">8.6|6.9|<strong>5.9</strong>|9.6</td> </tr> <tr> <td class="tg-0lax">Qwen2.5-Omni-3B</td> <td class="tg-0lax">9.1|6.0|11.6|9.6</td> </tr> <tr> <td class="tg-0lax">Qwen2.5-Omni-7B</td> <td 
class="tg-0lax"><strong>7.6</strong>|<strong>5.2</strong>|7.3|<strong>7.5</strong></td> </tr> <tr> <td class="tg-0lax" rowspan="8">Fleurs<br>zh | en</td> <td class="tg-0lax">Whisper-large-v3</td> <td class="tg-0lax">7.7|4.1</td> </tr> <tr> <td class="tg-0lax">Seed-ASR-Multilingual</td> <td class="tg-0lax">-|<strong>3.4</strong></td> </tr> <tr> <td class="tg-0lax">Megrez-3B-Omni</td> <td class="tg-0lax">10.8|-</td> </tr> <tr> <td class="tg-0lax">MiniCPM-o</td> <td class="tg-0lax">4.4|-</td> </tr> <tr> <td class="tg-0lax">MinMo</td> <td class="tg-0lax">3.0|3.8</td> </tr> <tr> <td class="tg-0lax">Qwen2-Audio</td> <td class="tg-0lax">7.5|-</td> </tr> <tr> <td class="tg-0lax">Qwen2.5-Omni-3B</td> <td class="tg-0lax">3.2|5.4</td> </tr> <tr> <td class="tg-0lax">Qwen2.5-Omni-7B</td> <td class="tg-0lax"><strong>3.0</strong>|4.1</td> </tr> <tr> <td class="tg-0lax" rowspan="6">Wenetspeech<br>test-net | test-meeting</td> <td class="tg-0lax">Seed-ASR-Chinese</td> <td class="tg-0lax"><strong>4.7|5.7</strong></td> </tr> <tr> <td class="tg-0lax">Megrez-3B-Omni</td> <td class="tg-0lax">-|16.4</td> </tr> <tr> <td class="tg-0lax">MiniCPM-o</td> <td class="tg-0lax">6.9|-</td> </tr> <tr> <td class="tg-0lax">MinMo</td> <td class="tg-0lax">6.8|7.4</td> </tr> <tr> <td class="tg-0lax">Qwen2.5-Omni-3B</td> <td class="tg-0lax">6.3|8.1</td> </tr> <tr> <td class="tg-0lax">Qwen2.5-Omni-7B</td> <td class="tg-0lax">5.9|7.7</td> </tr> <tr> <td class="tg-0lax" rowspan="4">Voxpopuli-V1.0-en</td> <td class="tg-0lax">Llama-3-8B</td> <td class="tg-0lax">6.2</td> </tr> <tr> <td class="tg-0lax">Llama-3-70B</td> <td class="tg-0lax"><strong>5.7</strong></td> </tr> <tr> <td class="tg-0lax">Qwen2.5-Omni-3B</td> <td class="tg-0lax">6.6</td> </tr> <tr> <td class="tg-0lax">Qwen2.5-Omni-7B</td> <td class="tg-0lax">5.8</td> </tr> <tr> <td class="tg-9j4x" colspan="3">S2TT</td> </tr> <tr> <td class="tg-0lax" rowspan="9">CoVoST2<br>en-de | de-en | en-zh | zh-en</td> <td class="tg-0lax">SALMONN</td> <td class="tg-0lax">18.6|-|33.1|-</td> </tr> <tr> <td class="tg-0lax">SpeechLLaMA</td> <td class="tg-0lax">-|27.1|-|12.3</td> </tr> <tr> <td class="tg-0lax">BLSP</td> <td class="tg-0lax">14.1|-|-|-</td> </tr> <tr> <td class="tg-0lax">MiniCPM-o</td> <td class="tg-0lax">-|-|<strong>48.2</strong>|27.2</td> </tr> <tr> <td class="tg-0lax">MinMo</td> <td class="tg-0lax">-|<strong>39.9</strong>|46.7|26.0</td> </tr> <tr> <td class="tg-0lax">Qwen-Audio</td> <td class="tg-0lax">25.1|33.9|41.5|15.7</td> </tr> <tr> <td class="tg-0lax">Qwen2-Audio</td> <td class="tg-0lax">29.9|35.2|45.2|24.4</td> </tr> <tr> <td class="tg-0lax">Qwen2.5-Omni-3B</td> <td class="tg-0lax">28.3|38.1|41.4|26.6</td> </tr> <tr> <td class="tg-0lax">Qwen2.5-Omni-7B</td> <td class="tg-0lax"><strong>30.2</strong>|37.7|41.4|<strong>29.4</strong></td> </tr> <tr> <td class="tg-9j4x" colspan="3">SER</td> </tr> <tr> <td class="tg-0lax" rowspan="6">Meld</td> <td class="tg-0lax">WavLM-large</td> <td class="tg-0lax">0.542</td> </tr> <tr> <td class="tg-0lax">MiniCPM-o</td> <td class="tg-0lax">0.524</td> </tr> <tr> <td class="tg-0lax">Qwen-Audio</td> <td class="tg-0lax">0.557</td> </tr> <tr> <td class="tg-0lax">Qwen2-Audio</td> <td class="tg-0lax">0.553</td> </tr> <tr> <td class="tg-0lax">Qwen2.5-Omni-3B</td> <td class="tg-0lax">0.558</td> </tr> <tr> <td class="tg-0lax">Qwen2.5-Omni-7B</td> <td class="tg-0lax"><strong>0.570</strong></td> </tr> <tr> <td class="tg-9j4x" colspan="3">VSC</td> </tr> <tr> <td class="tg-0lax" rowspan="6">VocalSound</td> <td class="tg-0lax">CLAP</td> <td 
class="tg-0lax">0.495</td> </tr> <tr> <td class="tg-0lax">Pengi</td> <td class="tg-0lax">0.604</td> </tr> <tr> <td class="tg-0lax">Qwen-Audio</td> <td class="tg-0lax">0.929</td> </tr> <tr> <td class="tg-0lax">Qwen2-Audio</td> <td class="tg-0lax"><strong>0.939</strong></td> </tr> <tr> <td class="tg-0lax">Qwen2.5-Omni-3B</td> <td class="tg-0lax">0.936</td> </tr> <tr> <td class="tg-0lax">Qwen2.5-Omni-7B</td> <td class="tg-0lax"><strong>0.939</strong></td> </tr> <tr> <td class="tg-9j4x" colspan="3">Music</td> </tr> <tr> <td class="tg-0lax" rowspan="3">GiantSteps Tempo</td> <td class="tg-0lax">Llark-7B</td> <td class="tg-0lax">0.86</td> </tr> <tr> <td class="tg-0lax">Qwen2.5-Omni-3B</td> <td class="tg-0lax"><strong>0.88</strong></td> </tr> <tr> <td class="tg-0lax">Qwen2.5-Omni-7B</td> <td class="tg-0lax"><strong>0.88</strong></td> </tr> <tr> <td class="tg-0lax" rowspan="3">MusicCaps</td> <td class="tg-0lax">LP-MusicCaps</td> <td class="tg-0lax">0.291|0.149|0.089|<strong>0.061</strong>|0.129|0.130</td> </tr> <tr> <td class="tg-0lax">Qwen2.5-Omni-3B</td> <td class="tg-0lax">0.325|<strong>0.163</strong>|<strong>0.093</strong>|0.057|<strong>0.132</strong>|<strong>0.229</strong></td> </tr> <tr> <td class="tg-0lax">Qwen2.5-Omni-7B</td> <td class="tg-0lax"><strong>0.328</strong>|0.162|0.090|0.055|0.127|0.225</td> </tr> <tr> <td class="tg-9j4x" colspan="3">Audio Reasoning</td> </tr> <tr> <td class="tg-0lax" rowspan="4">MMAU<br>Sound | Music | Speech | Avg</td> <td class="tg-0lax">Gemini-Pro-V1.5</td> <td class="tg-0lax">56.75|49.40|58.55|54.90</td> </tr> <tr> <td class="tg-0lax">Qwen2-Audio</td> <td class="tg-0lax">54.95|50.98|42.04|49.20</td> </tr> <tr> <td class="tg-0lax">Qwen2.5-Omni-3B</td> <td class="tg-0lax"><strong>70.27</strong>|60.48|59.16|63.30</td> </tr> <tr> <td class="tg-0lax">Qwen2.5-Omni-7B</td> <td class="tg-0lax">67.87|<strong>69.16|59.76|65.60</strong></td> </tr> <tr> <td class="tg-9j4x" colspan="3">Voice Chatting</td> </tr> <tr> <td class="tg-0lax" rowspan="9">VoiceBench<br>AlpacaEval | CommonEval | SD-QA | MMSU</td> <td class="tg-0lax">Ultravox-v0.4.1-LLaMA-3.1-8B</td> <td class="tg-0lax"><strong>4.55</strong>|3.90|53.35|47.17</td> </tr> <tr> <td class="tg-0lax">MERaLiON</td> <td class="tg-0lax">4.50|3.77|55.06|34.95</td> </tr> <tr> <td class="tg-0lax">Megrez-3B-Omni</td> <td class="tg-0lax">3.50|2.95|25.95|27.03</td> </tr> <tr> <td class="tg-0lax">Lyra-Base</td> <td class="tg-0lax">3.85|3.50|38.25|49.74</td> </tr> <tr> <td class="tg-0lax">MiniCPM-o</td> <td class="tg-0lax">4.42|<strong>4.15</strong>|50.72|54.78</td> </tr> <tr> <td class="tg-0lax">Baichuan-Omni-1.5</td> <td class="tg-0lax">4.50|4.05|43.40|57.25</td> </tr> <tr> <td class="tg-0lax">Qwen2-Audio</td> <td class="tg-0lax">3.74|3.43|35.71|35.72</td> </tr> <tr> <td class="tg-0lax">Qwen2.5-Omni-3B</td> <td class="tg-0lax">4.32|4.00|49.37|50.23</td> </tr> <tr> <td class="tg-0lax">Qwen2.5-Omni-7B</td> <td class="tg-0lax">4.49|3.93|<strong>55.71</strong>|<strong>61.32</strong></td> </tr> <tr> <td class="tg-0lax" rowspan="9">VoiceBench<br>OpenBookQA | IFEval | AdvBench | Avg</td> <td class="tg-0lax">Ultravox-v0.4.1-LLaMA-3.1-8B</td> <td class="tg-0lax">65.27|<strong>66.88</strong>|98.46|71.45</td> </tr> <tr> <td class="tg-0lax">MERaLiON</td> <td class="tg-0lax">27.23|62.93|94.81|62.91</td> </tr> <tr> <td class="tg-0lax">Megrez-3B-Omni</td> <td class="tg-0lax">28.35|25.71|87.69|46.25</td> </tr> <tr> <td class="tg-0lax">Lyra-Base</td> <td class="tg-0lax">72.75|36.28|59.62|57.66</td> </tr> <tr> <td class="tg-0lax">MiniCPM-o</td> 
<td class="tg-0lax">78.02|49.25|97.69|71.69</td> </tr> <tr> <td class="tg-0lax">Baichuan-Omni-1.5</td> <td class="tg-0lax">74.51|54.54|97.31|71.14</td> </tr> <tr> <td class="tg-0lax">Qwen2-Audio</td> <td class="tg-0lax">49.45|26.33|96.73|55.35</td> </tr> <tr> <td class="tg-0lax">Qwen2.5-Omni-3B</td> <td class="tg-0lax">74.73|42.10|98.85|68.81</td> </tr> <tr> <td class="tg-0lax">Qwen2.5-Omni-7B</td> <td class="tg-0lax"><strong>81.10</strong>|52.87|<strong>99.42</strong>|<strong>74.12</strong></td> </tr> </tbody></table> </details> <details> <summary>Image -> Text</summary> | Dataset | Qwen2.5-Omni-7B | Qwen2.5-Omni-3B | Other Best | Qwen2.5-VL-7B | GPT-4o-mini | |--------------------------------|--------------|------------|------------|---------------|-------------| | MMMU<sub>val</sub> | 59.2 | 53.1 | 53.9 | 58.6 | **60.0** | | MMMU-Pro<sub>overall</sub> | 36.6 | 29.7 | - | **38.3** | 37.6 | | MathVista<sub>testmini</sub> | 67.9 | 59.4 | **71.9** | 68.2 | 52.5 | | MathVision<sub>full</sub> | 25.0 | 20.8 | 23.1 | **25.1** | - | | MMBench-V1.1-EN<sub>test</sub> | 81.8 | 77.8 | 80.5 | **82.6** | 76.0 | | MMVet<sub>turbo</sub> | 66.8 | 62.1 | **67.5** | 67.1 | 66.9 | | MMStar | **64.0** | 55.7 | **64.0** | 63.9 | 54.8 | | MME<sub>sum</sub> | 2340 | 2117 | **2372** | 2347 | 2003 | | MuirBench | 59.2 | 48.0 | - | **59.2** | - | | CRPE<sub>relation</sub> | **76.5** | 73.7 | - | 76.4 | - | | RealWorldQA<sub>avg</sub> | 70.3 | 62.6 | **71.9** | 68.5 | - | | MME-RealWorld<sub>en</sub> | **61.6** | 55.6 | - | 57.4 | - | | MM-MT-Bench | 6.0 | 5.0 | - | **6.3** | - | | AI2D | 83.2 | 79.5 | **85.8** | 83.9 | - | | TextVQA<sub>val</sub> | 84.4 | 79.8 | 83.2 | **84.9** | - | | DocVQA<sub>test</sub> | 95.2 | 93.3 | 93.5 | **95.7** | - | | ChartQA<sub>test Avg</sub> | 85.3 | 82.8 | 84.9 | **87.3** | - | | OCRBench_V2<sub>en</sub> | **57.8** | 51.7 | - | 56.3 | - | | Dataset | Qwen2.5-Omni-7B | Qwen2.5-Omni-3B | Qwen2.5-VL-7B | Grounding DINO | Gemini 1.5 Pro | |--------------------------|--------------|---------------|---------------|----------------|----------------| | Refcoco<sub>val</sub> | 90.5 | 88.7 | 90.0 | **90.6** | 73.2 | | Refcoco<sub>textA</sub> | **93.5** | 91.8 | 92.5 | 93.2 | 72.9 | | Refcoco<sub>textB</sub> | 86.6 | 84.0 | 85.4 | **88.2** | 74.6 | | Refcoco+<sub>val</sub> | 85.4 | 81.1 | 84.2 | **88.2** | 62.5 | | Refcoco+<sub>textA</sub> | **91.0** | 87.5 | 89.1 | 89.0 | 63.9 | | Refcoco+<sub>textB</sub> | **79.3** | 73.2 | 76.9 | 75.9 | 65.0 | | Refcocog+<sub>val</sub> | **87.4** | 85.0 | 87.2 | 86.1 | 75.2 | | Refcocog+<sub>test</sub> | **87.9** | 85.1 | 87.2 | 87.0 | 76.2 | | ODinW | 42.4 | 39.2 | 37.3 | **55.0** | 36.7 | | PointGrounding | 66.5 | 46.2 | **67.3** | - | - | </details> <details> <summary>Video(without audio) -> Text</summary> | Dataset | Qwen2.5-Omni-7B | Qwen2.5-Omni-3B | Other Best | Qwen2.5-VL-7B | GPT-4o-mini | |-----------------------------|--------------|------------|------------|---------------|-------------| | Video-MME<sub>w/o sub</sub> | 64.3 | 62.0 | 63.9 | **65.1** | 64.8 | | Video-MME<sub>w sub</sub> | **72.4** | 68.6 | 67.9 | 71.6 | - | | MVBench | **70.3** | 68.7 | 67.2 | 69.6 | - | | EgoSchema<sub>test</sub> | **68.6** | 61.4 | 63.2 | 65.0 | - | </details> <details> <summary>Zero-shot Speech Generation</summary> <table class="tg"><thead> <tr> <th class="tg-0lax">Datasets</th> <th class="tg-0lax">Model</th> <th class="tg-0lax">Performance</th> </tr></thead> <tbody> <tr> <td class="tg-9j4x" colspan="3">Content Consistency</td> </tr> <tr> <td class="tg-0lax" 
rowspan="11">SEED<br>test-zh | test-en | test-hard </td> <td class="tg-0lax">Seed-TTS_ICL</td> <td class="tg-0lax">1.11 | 2.24 | 7.58</td> </tr> <tr> <td class="tg-0lax">Seed-TTS_RL</td> <td class="tg-0lax"><strong>1.00</strong> | 1.94 | <strong>6.42</strong></td> </tr> <tr> <td class="tg-0lax">MaskGCT</td> <td class="tg-0lax">2.27 | 2.62 | 10.27</td> </tr> <tr> <td class="tg-0lax">E2_TTS</td> <td class="tg-0lax">1.97 | 2.19 | -</td> </tr> <tr> <td class="tg-0lax">F5-TTS</td> <td class="tg-0lax">1.56 | <strong>1.83</strong> | 8.67</td> </tr> <tr> <td class="tg-0lax">CosyVoice 2</td> <td class="tg-0lax">1.45 | 2.57 | 6.83</td> </tr> <tr> <td class="tg-0lax">CosyVoice 2-S</td> <td class="tg-0lax">1.45 | 2.38 | 8.08</td> </tr> <tr> <td class="tg-0lax">Qwen2.5-Omni-3B_ICL</td> <td class="tg-0lax">1.95 | 2.87 | 9.92</td> </tr> <tr> <td class="tg-0lax">Qwen2.5-Omni-3B_RL</td> <td class="tg-0lax">1.58 | 2.51 | 7.86</td> </tr> <tr> <td class="tg-0lax">Qwen2.5-Omni-7B_ICL</td> <td class="tg-0lax">1.70 | 2.72 | 7.97</td> </tr> <tr> <td class="tg-0lax">Qwen2.5-Omni-7B_RL</td> <td class="tg-0lax">1.42 | 2.32 | 6.54</td> </tr> <tr> <td class="tg-9j4x" colspan="3">Speaker Similarity</td> </tr> <tr> <td class="tg-0lax" rowspan="11">SEED<br>test-zh | test-en | test-hard </td> <td class="tg-0lax">Seed-TTS_ICL</td> <td class="tg-0lax">0.796 | 0.762 | 0.776</td> </tr> <tr> <td class="tg-0lax">Seed-TTS_RL</td> <td class="tg-0lax"><strong>0.801</strong> | <strong>0.766</strong> | <strong>0.782</strong></td> </tr> <tr> <td class="tg-0lax">MaskGCT</td> <td class="tg-0lax">0.774 | 0.714 | 0.748</td> </tr> <tr> <td class="tg-0lax">E2_TTS</td> <td class="tg-0lax">0.730 | 0.710 | -</td> </tr> <tr> <td class="tg-0lax">F5-TTS</td> <td class="tg-0lax">0.741 | 0.647 | 0.713</td> </tr> <tr> <td class="tg-0lax">CosyVoice 2</td> <td class="tg-0lax">0.748 | 0.652 | 0.724</td> </tr> <tr> <td class="tg-0lax">CosyVoice 2-S</td> <td class="tg-0lax">0.753 | 0.654 | 0.732</td> </tr> <tr> <td class="tg-0lax">Qwen2.5-Omni-3B_ICL</td> <td class="tg-0lax">0.741 | 0.635 | 0.748</td> </tr> <tr> <td class="tg-0lax">Qwen2.5-Omni-3B_RL</td> <td class="tg-0lax">0.744 | 0.635 | 0.746</td> </tr> <tr> <td class="tg-0lax">Qwen2.5-Omni-7B_ICL</td> <td class="tg-0lax">0.752 | 0.632 | 0.747</td> </tr> <tr> <td class="tg-0lax">Qwen2.5-Omni-7B_RL</td> <td class="tg-0lax">0.754 | 0.641 | 0.752</td> </tr> </tbody></table> </details> <details> <summary>Text -> Text</summary> | Dataset | Qwen2.5-Omni-7B | Qwen2.5-Omni-3B | Qwen2.5-7B | Qwen2.5-3B | Qwen2-7B | Llama3.1-8B | Gemma2-9B | |-----------------------------------|-----------|------------|------------|------------|------------|-------------|-----------| | MMLU-Pro | 47.0 | 40.4 | **56.3** | 43.7 | 44.1 | 48.3 | 52.1 | | MMLU-redux | 71.0 | 60.9 | **75.4** | 64.4 | 67.3 | 67.2 | 72.8 | | LiveBench<sub>0831</sub> | 29.6 | 22.3 | **35.9** | 26.8 | 29.2 | 26.7 | 30.6 | | GPQA | 30.8 | 34.3 | **36.4** | 30.3 | 34.3 | 32.8 | 32.8 | | MATH | 71.5 | 63.6 | **75.5** | 65.9 | 52.9 | 51.9 | 44.3 | | GSM8K | 88.7 | 82.6 | **91.6** | 86.7 | 85.7 | 84.5 | 76.7 | | HumanEval | 78.7 | 70.7 | **84.8** | 74.4 | 79.9 | 72.6 | 68.9 | | MBPP | 73.2 | 70.4 | **79.2** | 72.7 | 67.2 | 69.6 | 74.9 | | MultiPL-E | 65.8 | 57.6 | **70.4** | 60.2 | 59.1 | 50.7 | 53.4 | | LiveCodeBench<sub>2305-2409</sub> | 24.6 | 16.5 | **28.7** | 19.9 | 23.9 | 8.3 | 18.9 | </details> ## Quickstart Below, we provide simple examples to show how to use Qwen2.5-Omni with 🤗 Transformers. 
The codes of Qwen2.5-Omni has been in the latest Hugging face transformers and we advise you to build from source with command: ``` pip uninstall transformers pip install git+https://github.com/huggingface/transformers@v4.51.3-Qwen2.5-Omni-preview pip install accelerate ``` or you might encounter the following error: ``` KeyError: 'qwen2_5_omni' ``` We offer a toolkit to help you handle various types of audio and visual input more conveniently, as if you were using an API. This includes base64, URLs, and interleaved audio, images and videos. You can install it using the following command and make sure your system has `ffmpeg` installed: ```bash # It's highly recommended to use `[decord]` feature for faster video loading. pip install qwen-omni-utils[decord] -U ``` If you are not using Linux, you might not be able to install `decord` from PyPI. In that case, you can use `pip install qwen-omni-utils -U` which will fall back to using torchvision for video processing. However, you can still [install decord from source](https://github.com/dmlc/decord?tab=readme-ov-file#install-from-source) to get decord used when loading video. ### 🤗 Transformers Usage Here we show a code snippet to show you how to use the chat model with `transformers` and `qwen_omni_utils`: ```python import soundfile as sf from transformers import Qwen2_5OmniForConditionalGeneration, Qwen2_5OmniProcessor from qwen_omni_utils import process_mm_info # default: Load the model on the available device(s) model = Qwen2_5OmniForConditionalGeneration.from_pretrained("Qwen/Qwen2.5-Omni-7B", torch_dtype="auto", device_map="auto") # We recommend enabling flash_attention_2 for better acceleration and memory saving. # model = Qwen2_5OmniForConditionalGeneration.from_pretrained( # "Qwen/Qwen2.5-Omni-7B", # torch_dtype="auto", # device_map="auto", # attn_implementation="flash_attention_2", # ) processor = Qwen2_5OmniProcessor.from_pretrained("Qwen/Qwen2.5-Omni-7B") conversation = [ { "role": "system", "content": [ {"type": "text", "text": "You are Qwen, a virtual human developed by the Qwen Team, Alibaba Group, capable of perceiving auditory and visual inputs, as well as generating text and speech."} ], }, { "role": "user", "content": [ {"type": "video", "video": "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2.5-Omni/draw.mp4"}, ], }, ] # set use audio in video USE_AUDIO_IN_VIDEO = True # Preparation for inference text = processor.apply_chat_template(conversation, add_generation_prompt=True, tokenize=False) audios, images, videos = process_mm_info(conversation, use_audio_in_video=USE_AUDIO_IN_VIDEO) inputs = processor(text=text, audio=audios, images=images, videos=videos, return_tensors="pt", padding=True, use_audio_in_video=USE_AUDIO_IN_VIDEO) inputs = inputs.to(model.device).to(model.dtype) # Inference: Generation of the output text and audio text_ids, audio = model.generate(**inputs, use_audio_in_video=USE_AUDIO_IN_VIDEO) text = processor.batch_decode(text_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False) print(text) sf.write( "output.wav", audio.reshape(-1).detach().cpu().numpy(), samplerate=24000, ) ``` <details> <summary>Minimum GPU memory requirements</summary> |Model | Precision | 15(s) Video | 30(s) Video | 60(s) Video | |--------------|-----------| ------------- | ------------- | ------------------ | | Qwen-Omni-3B | FP32 | 89.10 GB | Not Recommend | Not Recommend | | Qwen-Omni-3B | BF16 | 18.38 GB | 22.43 GB | 28.22 GB | | Qwen-Omni-7B | FP32 | 93.56 GB | Not Recommend | Not Recommend | | Qwen-Omni-7B | 
BF16 | 31.11 GB | 41.85 GB | 60.19 GB | Note: The table above presents the theoretical minimum memory requirements for inference with `transformers` and `BF16` is test with `attn_implementation="flash_attention_2"`; however, in practice, the actual memory usage is typically at least 1.2 times higher. For more information, see the linked resource [here](https://huggingface.co/docs/accelerate/main/en/usage_guides/model_size_estimator). </details> <details> <summary>Video URL resource usage</summary> Video URL compatibility largely depends on the third-party library version. The details are in the table below. Change the backend by `FORCE_QWENVL_VIDEO_READER=torchvision` or `FORCE_QWENVL_VIDEO_READER=decord` if you prefer not to use the default one. | Backend | HTTP | HTTPS | |-------------|------|-------| | torchvision >= 0.19.0 | ✅ | ✅ | | torchvision < 0.19.0 | ❌ | ❌ | | decord | ✅ | ❌ | </details> <details> <summary>Batch inference</summary> The model can batch inputs composed of mixed samples of various types such as text, images, audio and videos as input when `return_audio=False` is set. Here is an example. ```python # Sample messages for batch inference # Conversation with video only conversation1 = [ { "role": "system", "content": [ {"type": "text", "text": "You are Qwen, a virtual human developed by the Qwen Team, Alibaba Group, capable of perceiving auditory and visual inputs, as well as generating text and speech."} ], }, { "role": "user", "content": [ {"type": "video", "video": "/path/to/video.mp4"}, ] } ] # Conversation with audio only conversation2 = [ { "role": "system", "content": [ {"type": "text", "text": "You are Qwen, a virtual human developed by the Qwen Team, Alibaba Group, capable of perceiving auditory and visual inputs, as well as generating text and speech."} ], }, { "role": "user", "content": [ {"type": "audio", "audio": "/path/to/audio.wav"}, ] } ] # Conversation with pure text conversation3 = [ { "role": "system", "content": [ {"type": "text", "text": "You are Qwen, a virtual human developed by the Qwen Team, Alibaba Group, capable of perceiving auditory and visual inputs, as well as generating text and speech."} ], }, { "role": "user", "content": "who are you?" 
} ] # Conversation with mixed media conversation4 = [ { "role": "system", "content": [ {"type": "text", "text": "You are Qwen, a virtual human developed by the Qwen Team, Alibaba Group, capable of perceiving auditory and visual inputs, as well as generating text and speech."} ], }, { "role": "user", "content": [ {"type": "image", "image": "/path/to/image.jpg"}, {"type": "video", "video": "/path/to/video.mp4"}, {"type": "audio", "audio": "/path/to/audio.wav"}, {"type": "text", "text": "What elements can you see and hear in these media?"}, ], } ] # Combine messages for batch processing conversations = [conversation1, conversation2, conversation3, conversation4] # set use audio in video USE_AUDIO_IN_VIDEO = True # Preparation for batch inference text = processor.apply_chat_template(conversations, add_generation_prompt=True, tokenize=False) audios, images, videos = process_mm_info(conversations, use_audio_in_video=USE_AUDIO_IN_VIDEO) inputs = processor(text=text, audio=audios, images=images, videos=videos, return_tensors="pt", padding=True, use_audio_in_video=USE_AUDIO_IN_VIDEO) inputs = inputs.to(model.device).to(model.dtype) # Batch Inference text_ids = model.generate(**inputs, use_audio_in_video=USE_AUDIO_IN_VIDEO, return_audio=False) text = processor.batch_decode(text_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False) print(text) ``` </details> ### Usage Tips #### Prompt for audio output If you need audio output, the system prompt must be set to "You are Qwen, a virtual human developed by the Qwen Team, Alibaba Group, capable of perceiving auditory and visual inputs, as well as generating text and speech."; otherwise, audio output may not work as expected. ``` { "role": "system", "content": [ {"type": "text", "text": "You are Qwen, a virtual human developed by the Qwen Team, Alibaba Group, capable of perceiving auditory and visual inputs, as well as generating text and speech."} ], } ``` #### Use audio in video During multimodal interaction, the videos provided by users are often accompanied by audio (such as questions about the content of the video, or sounds produced by events in the video). This information helps the model provide a better interactive experience, so we offer the following options for deciding whether to use the audio in a video. ```python # first place, in data preprocessing audios, images, videos = process_mm_info(conversations, use_audio_in_video=True) ``` ```python # second place, in model processor inputs = processor(text=text, audio=audios, images=images, videos=videos, return_tensors="pt", padding=True, use_audio_in_video=True) ``` ```python # third place, in model inference text_ids, audio = model.generate(**inputs, use_audio_in_video=True) ``` Note that during a multi-round conversation, the `use_audio_in_video` parameter must be set to the same value in all of these places; otherwise, unexpected results will occur. #### Use audio output or not The model supports both text and audio outputs. If you do not need audio output, you can call `model.disable_talker()` after initializing the model. This saves about 2 GB of GPU memory, but the `return_audio` option of the `generate` function can then only be set to `False`. 
```python model = Qwen2_5OmniForConditionalGeneration.from_pretrained( "Qwen/Qwen2.5-Omni-7B", torch_dtype="auto", device_map="auto" ) model.disable_talker() ``` For a more flexible experience, we recommend deciding whether to return audio each time the `generate` function is called. If `return_audio` is set to `False`, the model will only return text outputs, so text responses arrive faster. ```python model = Qwen2_5OmniForConditionalGeneration.from_pretrained( "Qwen/Qwen2.5-Omni-7B", torch_dtype="auto", device_map="auto" ) ... text_ids = model.generate(**inputs, return_audio=False) ``` #### Change voice type of output audio Qwen2.5-Omni supports changing the voice of the output audio. The `"Qwen/Qwen2.5-Omni-7B"` checkpoint supports two voice types, as follows: | Voice Type | Gender | Description | |------------|--------|-------------| | Chelsie | Female | A honeyed, velvety voice that carries a gentle warmth and luminous clarity.| | Ethan | Male | A bright, upbeat voice with infectious energy and a warm, approachable vibe.| Use the `speaker` parameter of the `generate` function to specify the voice type. If `speaker` is not specified, the default voice type is `Chelsie`. ```python text_ids, audio = model.generate(**inputs, speaker="Chelsie") ``` ```python text_ids, audio = model.generate(**inputs, speaker="Ethan") ``` #### Flash-Attention 2 to speed up generation First, make sure to install the latest version of Flash Attention 2: ```bash pip install -U flash-attn --no-build-isolation ``` Also, you should have hardware that is compatible with FlashAttention 2. Read more about it in the official documentation of the [flash attention repository](https://github.com/Dao-AILab/flash-attention). FlashAttention-2 can only be used when a model is loaded in `torch.float16` or `torch.bfloat16`. To load and run a model using FlashAttention-2, add `attn_implementation="flash_attention_2"` when loading the model: ```python import torch from transformers import Qwen2_5OmniForConditionalGeneration model = Qwen2_5OmniForConditionalGeneration.from_pretrained( "Qwen/Qwen2.5-Omni-7B", device_map="auto", torch_dtype=torch.bfloat16, attn_implementation="flash_attention_2", ) ``` ## Citation If you find our paper and code useful in your research, please consider giving a star :star: and citation :pencil: :) ```BibTeX @article{Qwen2.5-Omni, title={Qwen2.5-Omni Technical Report}, author={Jin Xu, Zhifang Guo, Jinzheng He, Hangrui Hu, Ting He, Shuai Bai, Keqin Chen, Jialin Wang, Yang Fan, Kai Dang, Bin Zhang, Xiong Wang, Yunfei Chu, Junyang Lin}, journal={arXiv preprint arXiv:2503.20215}, year={2025} } ``` <br>
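As an additional usage sketch (not part of the original card), audio-only input works the same way as the video example above; only APIs already shown in this card are reused, and `question.wav` is a placeholder path for a local recording.

```python
import soundfile as sf
from transformers import Qwen2_5OmniForConditionalGeneration, Qwen2_5OmniProcessor
from qwen_omni_utils import process_mm_info

model = Qwen2_5OmniForConditionalGeneration.from_pretrained(
    "Qwen/Qwen2.5-Omni-7B", torch_dtype="auto", device_map="auto"
)
processor = Qwen2_5OmniProcessor.from_pretrained("Qwen/Qwen2.5-Omni-7B")

conversation = [
    {
        "role": "system",
        "content": [{"type": "text", "text": "You are Qwen, a virtual human developed by the Qwen Team, Alibaba Group, capable of perceiving auditory and visual inputs, as well as generating text and speech."}],
    },
    {
        "role": "user",
        "content": [
            {"type": "audio", "audio": "question.wav"},  # placeholder: local audio file
            {"type": "text", "text": "Please answer the question in the recording."},
        ],
    },
]

# No video input here, so audio-in-video handling is disabled everywhere.
text = processor.apply_chat_template(conversation, add_generation_prompt=True, tokenize=False)
audios, images, videos = process_mm_info(conversation, use_audio_in_video=False)
inputs = processor(text=text, audio=audios, images=images, videos=videos,
                   return_tensors="pt", padding=True, use_audio_in_video=False)
inputs = inputs.to(model.device).to(model.dtype)

# Generate both a text answer and a spoken answer (Ethan voice), then save the audio.
text_ids, audio = model.generate(**inputs, use_audio_in_video=False, speaker="Ethan")
print(processor.batch_decode(text_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False))
sf.write("answer.wav", audio.reshape(-1).detach().cpu().numpy(), samplerate=24000)
```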
chaitnya26/Qwen2.5-Omni-3B-Fork
chaitnya26
2025-09-11T20:16:25Z
0
0
transformers
[ "transformers", "safetensors", "qwen2_5_omni", "multimodal", "any-to-any", "en", "arxiv:2503.20215", "license:other", "endpoints_compatible", "region:us" ]
any-to-any
2025-09-11T20:16:25Z
--- license: other license_name: qwen-research license_link: LICENSE language: - en tags: - multimodal library_name: transformers pipeline_tag: any-to-any --- # Qwen2.5-Omni <a href="https://chat.qwen.ai/" target="_blank" style="margin: 2px;"> <img alt="Chat" src="https://img.shields.io/badge/%F0%9F%92%9C%EF%B8%8F%20Qwen%20Chat%20-536af5" style="display: inline-block; vertical-align: middle;"/> </a> ## Overview ### Introduction Qwen2.5-Omni is an end-to-end multimodal model designed to perceive diverse modalities, including text, images, audio, and video, while simultaneously generating text and natural speech responses in a streaming manner. <p align="center"> <img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2.5-Omni/qwen_omni.png" width="80%"/> <p> ### Key Features * **Omni and Novel Architecture**: We propose Thinker-Talker architecture, an end-to-end multimodal model designed to perceive diverse modalities, including text, images, audio, and video, while simultaneously generating text and natural speech responses in a streaming manner. We propose a novel position embedding, named TMRoPE (Time-aligned Multimodal RoPE), to synchronize the timestamps of video inputs with audio. * **Real-Time Voice and Video Chat**: Architecture designed for fully real-time interactions, supporting chunked input and immediate output. * **Natural and Robust Speech Generation**: Surpassing many existing streaming and non-streaming alternatives, demonstrating superior robustness and naturalness in speech generation. * **Strong Performance Across Modalities**: Exhibiting exceptional performance across all modalities when benchmarked against similarly sized single-modality models. Qwen2.5-Omni outperforms the similarly sized Qwen2-Audio in audio capabilities and achieves comparable performance to Qwen2.5-VL-7B. * **Excellent End-to-End Speech Instruction Following**: Qwen2.5-Omni shows performance in end-to-end speech instruction following that rivals its effectiveness with text inputs, evidenced by benchmarks such as MMLU and GSM8K. ### Model Architecture <p align="center"> <img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2.5-Omni/overview.png" width="80%"/> <p> ### Performance We conducted a comprehensive evaluation of Qwen2.5-Omni, which demonstrates strong performance across all modalities when compared to similarly sized single-modality models and closed-source models like Qwen2.5-VL-7B, Qwen2-Audio, and Gemini-1.5-pro. In tasks requiring the integration of multiple modalities, such as OmniBench, Qwen2.5-Omni achieves state-of-the-art performance. Furthermore, in single-modality tasks, it excels in areas including speech recognition (Common Voice), translation (CoVoST2), audio understanding (MMAU), image reasoning (MMMU, MMStar), video understanding (MVBench), and speech generation (Seed-tts-eval and subjective naturalness). 
<p align="center"> <img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2.5-Omni/bar.png" width="80%"/> <p> <details> <summary>Multimodality -> Text</summary> <table class="tg"><thead> <tr> <th class="tg-0lax">Datasets</th> <th class="tg-0lax">Model</th> <th class="tg-0lax">Performance</th> </tr></thead> <tbody> <tr> <td class="tg-0lax" rowspan="10">OmniBench<br>Speech | Sound Event | Music | Avg</td> <td class="tg-0lax">Gemini-1.5-Pro</td> <td class="tg-0lax">42.67%|42.26%|46.23%|42.91%</td> </tr> <tr> <td class="tg-0lax">MIO-Instruct</td> <td class="tg-0lax">36.96%|33.58%|11.32%|33.80%</td> </tr> <tr> <td class="tg-0lax">AnyGPT (7B)</td> <td class="tg-0lax">17.77%|20.75%|13.21%|18.04%</td> </tr> <tr> <td class="tg-0lax">video-SALMONN</td> <td class="tg-0lax">34.11%|31.70%|<strong>56.60%</strong>|35.64%</td> </tr> <tr> <td class="tg-0lax">UnifiedIO2-xlarge</td> <td class="tg-0lax">39.56%|36.98%|29.25%|38.00%</td> </tr> <tr> <td class="tg-0lax">UnifiedIO2-xxlarge</td> <td class="tg-0lax">34.24%|36.98%|24.53%|33.98%</td> </tr> <tr> <td class="tg-0lax">MiniCPM-o</td> <td class="tg-0lax">-|-|-|40.50%</td> </tr> <tr> <td class="tg-0lax">Baichuan-Omni-1.5</td> <td class="tg-0lax">-|-|-|42.90%</td> </tr> <tr> <td class="tg-0lax">Qwen2.5-Omni-3B</td> <td class="tg-0lax">52.14%|52.08%|52.83%|52.19%</td> </tr> <tr> <td class="tg-0lax">Qwen2.5-Omni-7B</td> <td class="tg-0lax"><strong>55.25%</strong>|<strong>60.00%</strong>|52.83%|<strong>56.13%</strong></td> </tr> </tbody></table> </details> <details> <summary>Audio -> Text</summary> <table class="tg"><thead> <tr> <th class="tg-0lax">Datasets</th> <th class="tg-0lax">Model</th> <th class="tg-0lax">Performance</th> </tr></thead> <tbody> <tr> <td class="tg-9j4x" colspan="3">ASR</td> </tr> <tr> <td class="tg-0lax" rowspan="12">Librispeech<br>dev-clean | dev other | test-clean | test-other</td> <td class="tg-0lax">SALMONN</td> <td class="tg-0lax">-|-|2.1|4.9</td> </tr> <tr> <td class="tg-0lax">SpeechVerse</td> <td class="tg-0lax">-|-|2.1|4.4</td> </tr> <tr> <td class="tg-0lax">Whisper-large-v3</td> <td class="tg-0lax">-|-|1.8|3.6</td> </tr> <tr> <td class="tg-0lax">Llama-3-8B</td> <td class="tg-0lax">-|-|-|3.4</td> </tr> <tr> <td class="tg-0lax">Llama-3-70B</td> <td class="tg-0lax">-|-|-|3.1</td> </tr> <tr> <td class="tg-0lax">Seed-ASR-Multilingual</td> <td class="tg-0lax">-|-|<strong>1.6</strong>|<strong>2.8</strong></td> </tr> <tr> <td class="tg-0lax">MiniCPM-o</td> <td class="tg-0lax">-|-|1.7|-</td> </tr> <tr> <td class="tg-0lax">MinMo</td> <td class="tg-0lax">-|-|1.7|3.9</td> </tr> <tr> <td class="tg-0lax">Qwen-Audio</td> <td class="tg-0lax">1.8|4.0|2.0|4.2</td> </tr> <tr> <td class="tg-0lax">Qwen2-Audio</td> <td class="tg-0lax"><strong>1.3</strong>|<strong>3.4</strong>|<strong>1.6</strong>|3.6</td> </tr> <tr> <td class="tg-0lax">Qwen2.5-Omni-3B</td> <td class="tg-0lax">2.0|4.1|2.2|4.5</td> </tr> <tr> <td class="tg-0lax">Qwen2.5-Omni-7B</td> <td class="tg-0lax">1.6|3.5|1.8|3.4</td> </tr> <tr> <td class="tg-0lax" rowspan="5">Common Voice 15<br>en | zh | yue | fr</td> <td class="tg-0lax">Whisper-large-v3</td> <td class="tg-0lax">9.3|12.8|10.9|10.8</td> </tr> <tr> <td class="tg-0lax">MinMo</td> <td class="tg-0lax">7.9|6.3|6.4|8.5</td> </tr> <tr> <td class="tg-0lax">Qwen2-Audio</td> <td class="tg-0lax">8.6|6.9|<strong>5.9</strong>|9.6</td> </tr> <tr> <td class="tg-0lax">Qwen2.5-Omni-3B</td> <td class="tg-0lax">9.1|6.0|11.6|9.6</td> </tr> <tr> <td class="tg-0lax">Qwen2.5-Omni-7B</td> <td 
class="tg-0lax"><strong>7.6</strong>|<strong>5.2</strong>|7.3|<strong>7.5</strong></td> </tr> <tr> <td class="tg-0lax" rowspan="8">Fleurs<br>zh | en</td> <td class="tg-0lax">Whisper-large-v3</td> <td class="tg-0lax">7.7|4.1</td> </tr> <tr> <td class="tg-0lax">Seed-ASR-Multilingual</td> <td class="tg-0lax">-|<strong>3.4</strong></td> </tr> <tr> <td class="tg-0lax">Megrez-3B-Omni</td> <td class="tg-0lax">10.8|-</td> </tr> <tr> <td class="tg-0lax">MiniCPM-o</td> <td class="tg-0lax">4.4|-</td> </tr> <tr> <td class="tg-0lax">MinMo</td> <td class="tg-0lax">3.0|3.8</td> </tr> <tr> <td class="tg-0lax">Qwen2-Audio</td> <td class="tg-0lax">7.5|-</td> </tr> <tr> <td class="tg-0lax">Qwen2.5-Omni-3B</td> <td class="tg-0lax">3.2|5.4</td> </tr> <tr> <td class="tg-0lax">Qwen2.5-Omni-7B</td> <td class="tg-0lax"><strong>3.0</strong>|4.1</td> </tr> <tr> <td class="tg-0lax" rowspan="6">Wenetspeech<br>test-net | test-meeting</td> <td class="tg-0lax">Seed-ASR-Chinese</td> <td class="tg-0lax"><strong>4.7|5.7</strong></td> </tr> <tr> <td class="tg-0lax">Megrez-3B-Omni</td> <td class="tg-0lax">-|16.4</td> </tr> <tr> <td class="tg-0lax">MiniCPM-o</td> <td class="tg-0lax">6.9|-</td> </tr> <tr> <td class="tg-0lax">MinMo</td> <td class="tg-0lax">6.8|7.4</td> </tr> <tr> <td class="tg-0lax">Qwen2.5-Omni-3B</td> <td class="tg-0lax">6.3|8.1</td> </tr> <tr> <td class="tg-0lax">Qwen2.5-Omni-7B</td> <td class="tg-0lax">5.9|7.7</td> </tr> <tr> <td class="tg-0lax" rowspan="4">Voxpopuli-V1.0-en</td> <td class="tg-0lax">Llama-3-8B</td> <td class="tg-0lax">6.2</td> </tr> <tr> <td class="tg-0lax">Llama-3-70B</td> <td class="tg-0lax"><strong>5.7</strong></td> </tr> <tr> <td class="tg-0lax">Qwen2.5-Omni-3B</td> <td class="tg-0lax">6.6</td> </tr> <tr> <td class="tg-0lax">Qwen2.5-Omni-7B</td> <td class="tg-0lax">5.8</td> </tr> <tr> <td class="tg-9j4x" colspan="3">S2TT</td> </tr> <tr> <td class="tg-0lax" rowspan="9">CoVoST2<br>en-de | de-en | en-zh | zh-en</td> <td class="tg-0lax">SALMONN</td> <td class="tg-0lax">18.6|-|33.1|-</td> </tr> <tr> <td class="tg-0lax">SpeechLLaMA</td> <td class="tg-0lax">-|27.1|-|12.3</td> </tr> <tr> <td class="tg-0lax">BLSP</td> <td class="tg-0lax">14.1|-|-|-</td> </tr> <tr> <td class="tg-0lax">MiniCPM-o</td> <td class="tg-0lax">-|-|<strong>48.2</strong>|27.2</td> </tr> <tr> <td class="tg-0lax">MinMo</td> <td class="tg-0lax">-|<strong>39.9</strong>|46.7|26.0</td> </tr> <tr> <td class="tg-0lax">Qwen-Audio</td> <td class="tg-0lax">25.1|33.9|41.5|15.7</td> </tr> <tr> <td class="tg-0lax">Qwen2-Audio</td> <td class="tg-0lax">29.9|35.2|45.2|24.4</td> </tr> <tr> <td class="tg-0lax">Qwen2.5-Omni-3B</td> <td class="tg-0lax">28.3|38.1|41.4|26.6</td> </tr> <tr> <td class="tg-0lax">Qwen2.5-Omni-7B</td> <td class="tg-0lax"><strong>30.2</strong>|37.7|41.4|<strong>29.4</strong></td> </tr> <tr> <td class="tg-9j4x" colspan="3">SER</td> </tr> <tr> <td class="tg-0lax" rowspan="6">Meld</td> <td class="tg-0lax">WavLM-large</td> <td class="tg-0lax">0.542</td> </tr> <tr> <td class="tg-0lax">MiniCPM-o</td> <td class="tg-0lax">0.524</td> </tr> <tr> <td class="tg-0lax">Qwen-Audio</td> <td class="tg-0lax">0.557</td> </tr> <tr> <td class="tg-0lax">Qwen2-Audio</td> <td class="tg-0lax">0.553</td> </tr> <tr> <td class="tg-0lax">Qwen2.5-Omni-3B</td> <td class="tg-0lax">0.558</td> </tr> <tr> <td class="tg-0lax">Qwen2.5-Omni-7B</td> <td class="tg-0lax"><strong>0.570</strong></td> </tr> <tr> <td class="tg-9j4x" colspan="3">VSC</td> </tr> <tr> <td class="tg-0lax" rowspan="6">VocalSound</td> <td class="tg-0lax">CLAP</td> <td 
class="tg-0lax">0.495</td> </tr> <tr> <td class="tg-0lax">Pengi</td> <td class="tg-0lax">0.604</td> </tr> <tr> <td class="tg-0lax">Qwen-Audio</td> <td class="tg-0lax">0.929</td> </tr> <tr> <td class="tg-0lax">Qwen2-Audio</td> <td class="tg-0lax"><strong>0.939</strong></td> </tr> <tr> <td class="tg-0lax">Qwen2.5-Omni-3B</td> <td class="tg-0lax">0.936</td> </tr> <tr> <td class="tg-0lax">Qwen2.5-Omni-7B</td> <td class="tg-0lax"><strong>0.939</strong></td> </tr> <tr> <td class="tg-9j4x" colspan="3">Music</td> </tr> <tr> <td class="tg-0lax" rowspan="3">GiantSteps Tempo</td> <td class="tg-0lax">Llark-7B</td> <td class="tg-0lax">0.86</td> </tr> <tr> <td class="tg-0lax">Qwen2.5-Omni-3B</td> <td class="tg-0lax"><strong>0.88</strong></td> </tr> <tr> <td class="tg-0lax">Qwen2.5-Omni-7B</td> <td class="tg-0lax"><strong>0.88</strong></td> </tr> <tr> <td class="tg-0lax" rowspan="3">MusicCaps</td> <td class="tg-0lax">LP-MusicCaps</td> <td class="tg-0lax">0.291|0.149|0.089|<strong>0.061</strong>|0.129|0.130</td> </tr> <tr> <td class="tg-0lax">Qwen2.5-Omni-3B</td> <td class="tg-0lax">0.325|<strong>0.163</strong>|<strong>0.093</strong>|0.057|<strong>0.132</strong>|<strong>0.229</strong></td> </tr> <tr> <td class="tg-0lax">Qwen2.5-Omni-7B</td> <td class="tg-0lax"><strong>0.328</strong>|0.162|0.090|0.055|0.127|0.225</td> </tr> <tr> <td class="tg-9j4x" colspan="3">Audio Reasoning</td> </tr> <tr> <td class="tg-0lax" rowspan="4">MMAU<br>Sound | Music | Speech | Avg</td> <td class="tg-0lax">Gemini-Pro-V1.5</td> <td class="tg-0lax">56.75|49.40|58.55|54.90</td> </tr> <tr> <td class="tg-0lax">Qwen2-Audio</td> <td class="tg-0lax">54.95|50.98|42.04|49.20</td> </tr> <tr> <td class="tg-0lax">Qwen2.5-Omni-3B</td> <td class="tg-0lax"><strong>70.27</strong>|60.48|59.16|63.30</td> </tr> <tr> <td class="tg-0lax">Qwen2.5-Omni-7B</td> <td class="tg-0lax">67.87|<strong>69.16|59.76|65.60</strong></td> </tr> <tr> <td class="tg-9j4x" colspan="3">Voice Chatting</td> </tr> <tr> <td class="tg-0lax" rowspan="9">VoiceBench<br>AlpacaEval | CommonEval | SD-QA | MMSU</td> <td class="tg-0lax">Ultravox-v0.4.1-LLaMA-3.1-8B</td> <td class="tg-0lax"><strong>4.55</strong>|3.90|53.35|47.17</td> </tr> <tr> <td class="tg-0lax">MERaLiON</td> <td class="tg-0lax">4.50|3.77|55.06|34.95</td> </tr> <tr> <td class="tg-0lax">Megrez-3B-Omni</td> <td class="tg-0lax">3.50|2.95|25.95|27.03</td> </tr> <tr> <td class="tg-0lax">Lyra-Base</td> <td class="tg-0lax">3.85|3.50|38.25|49.74</td> </tr> <tr> <td class="tg-0lax">MiniCPM-o</td> <td class="tg-0lax">4.42|<strong>4.15</strong>|50.72|54.78</td> </tr> <tr> <td class="tg-0lax">Baichuan-Omni-1.5</td> <td class="tg-0lax">4.50|4.05|43.40|57.25</td> </tr> <tr> <td class="tg-0lax">Qwen2-Audio</td> <td class="tg-0lax">3.74|3.43|35.71|35.72</td> </tr> <tr> <td class="tg-0lax">Qwen2.5-Omni-3B</td> <td class="tg-0lax">4.32|4.00|49.37|50.23</td> </tr> <tr> <td class="tg-0lax">Qwen2.5-Omni-7B</td> <td class="tg-0lax">4.49|3.93|<strong>55.71</strong>|<strong>61.32</strong></td> </tr> <tr> <td class="tg-0lax" rowspan="9">VoiceBench<br>OpenBookQA | IFEval | AdvBench | Avg</td> <td class="tg-0lax">Ultravox-v0.4.1-LLaMA-3.1-8B</td> <td class="tg-0lax">65.27|<strong>66.88</strong>|98.46|71.45</td> </tr> <tr> <td class="tg-0lax">MERaLiON</td> <td class="tg-0lax">27.23|62.93|94.81|62.91</td> </tr> <tr> <td class="tg-0lax">Megrez-3B-Omni</td> <td class="tg-0lax">28.35|25.71|87.69|46.25</td> </tr> <tr> <td class="tg-0lax">Lyra-Base</td> <td class="tg-0lax">72.75|36.28|59.62|57.66</td> </tr> <tr> <td class="tg-0lax">MiniCPM-o</td> 
<td class="tg-0lax">78.02|49.25|97.69|71.69</td> </tr> <tr> <td class="tg-0lax">Baichuan-Omni-1.5</td> <td class="tg-0lax">74.51|54.54|97.31|71.14</td> </tr> <tr> <td class="tg-0lax">Qwen2-Audio</td> <td class="tg-0lax">49.45|26.33|96.73|55.35</td> </tr> <tr> <td class="tg-0lax">Qwen2.5-Omni-3B</td> <td class="tg-0lax">74.73|42.10|98.85|68.81</td> </tr> <tr> <td class="tg-0lax">Qwen2.5-Omni-7B</td> <td class="tg-0lax"><strong>81.10</strong>|52.87|<strong>99.42</strong>|<strong>74.12</strong></td> </tr> </tbody></table> </details> <details> <summary>Image -> Text</summary> | Dataset | Qwen2.5-Omni-7B | Qwen2.5-Omni-3B | Other Best | Qwen2.5-VL-7B | GPT-4o-mini | |--------------------------------|--------------|------------|------------|---------------|-------------| | MMMU<sub>val</sub> | 59.2 | 53.1 | 53.9 | 58.6 | **60.0** | | MMMU-Pro<sub>overall</sub> | 36.6 | 29.7 | - | **38.3** | 37.6 | | MathVista<sub>testmini</sub> | 67.9 | 59.4 | **71.9** | 68.2 | 52.5 | | MathVision<sub>full</sub> | 25.0 | 20.8 | 23.1 | **25.1** | - | | MMBench-V1.1-EN<sub>test</sub> | 81.8 | 77.8 | 80.5 | **82.6** | 76.0 | | MMVet<sub>turbo</sub> | 66.8 | 62.1 | **67.5** | 67.1 | 66.9 | | MMStar | **64.0** | 55.7 | **64.0** | 63.9 | 54.8 | | MME<sub>sum</sub> | 2340 | 2117 | **2372** | 2347 | 2003 | | MuirBench | 59.2 | 48.0 | - | **59.2** | - | | CRPE<sub>relation</sub> | **76.5** | 73.7 | - | 76.4 | - | | RealWorldQA<sub>avg</sub> | 70.3 | 62.6 | **71.9** | 68.5 | - | | MME-RealWorld<sub>en</sub> | **61.6** | 55.6 | - | 57.4 | - | | MM-MT-Bench | 6.0 | 5.0 | - | **6.3** | - | | AI2D | 83.2 | 79.5 | **85.8** | 83.9 | - | | TextVQA<sub>val</sub> | 84.4 | 79.8 | 83.2 | **84.9** | - | | DocVQA<sub>test</sub> | 95.2 | 93.3 | 93.5 | **95.7** | - | | ChartQA<sub>test Avg</sub> | 85.3 | 82.8 | 84.9 | **87.3** | - | | OCRBench_V2<sub>en</sub> | **57.8** | 51.7 | - | 56.3 | - | | Dataset | Qwen2.5-Omni-7B | Qwen2.5-Omni-3B | Qwen2.5-VL-7B | Grounding DINO | Gemini 1.5 Pro | |--------------------------|--------------|---------------|---------------|----------------|----------------| | Refcoco<sub>val</sub> | 90.5 | 88.7 | 90.0 | **90.6** | 73.2 | | Refcoco<sub>textA</sub> | **93.5** | 91.8 | 92.5 | 93.2 | 72.9 | | Refcoco<sub>textB</sub> | 86.6 | 84.0 | 85.4 | **88.2** | 74.6 | | Refcoco+<sub>val</sub> | 85.4 | 81.1 | 84.2 | **88.2** | 62.5 | | Refcoco+<sub>textA</sub> | **91.0** | 87.5 | 89.1 | 89.0 | 63.9 | | Refcoco+<sub>textB</sub> | **79.3** | 73.2 | 76.9 | 75.9 | 65.0 | | Refcocog+<sub>val</sub> | **87.4** | 85.0 | 87.2 | 86.1 | 75.2 | | Refcocog+<sub>test</sub> | **87.9** | 85.1 | 87.2 | 87.0 | 76.2 | | ODinW | 42.4 | 39.2 | 37.3 | **55.0** | 36.7 | | PointGrounding | 66.5 | 46.2 | **67.3** | - | - | </details> <details> <summary>Video(without audio) -> Text</summary> | Dataset | Qwen2.5-Omni-7B | Qwen2.5-Omni-3B | Other Best | Qwen2.5-VL-7B | GPT-4o-mini | |-----------------------------|--------------|------------|------------|---------------|-------------| | Video-MME<sub>w/o sub</sub> | 64.3 | 62.0 | 63.9 | **65.1** | 64.8 | | Video-MME<sub>w sub</sub> | **72.4** | 68.6 | 67.9 | 71.6 | - | | MVBench | **70.3** | 68.7 | 67.2 | 69.6 | - | | EgoSchema<sub>test</sub> | **68.6** | 61.4 | 63.2 | 65.0 | - | </details> <details> <summary>Zero-shot Speech Generation</summary> <table class="tg"><thead> <tr> <th class="tg-0lax">Datasets</th> <th class="tg-0lax">Model</th> <th class="tg-0lax">Performance</th> </tr></thead> <tbody> <tr> <td class="tg-9j4x" colspan="3">Content Consistency</td> </tr> <tr> <td class="tg-0lax" 
rowspan="11">SEED<br>test-zh | test-en | test-hard </td> <td class="tg-0lax">Seed-TTS_ICL</td> <td class="tg-0lax">1.11 | 2.24 | 7.58</td> </tr> <tr> <td class="tg-0lax">Seed-TTS_RL</td> <td class="tg-0lax"><strong>1.00</strong> | 1.94 | <strong>6.42</strong></td> </tr> <tr> <td class="tg-0lax">MaskGCT</td> <td class="tg-0lax">2.27 | 2.62 | 10.27</td> </tr> <tr> <td class="tg-0lax">E2_TTS</td> <td class="tg-0lax">1.97 | 2.19 | -</td> </tr> <tr> <td class="tg-0lax">F5-TTS</td> <td class="tg-0lax">1.56 | <strong>1.83</strong> | 8.67</td> </tr> <tr> <td class="tg-0lax">CosyVoice 2</td> <td class="tg-0lax">1.45 | 2.57 | 6.83</td> </tr> <tr> <td class="tg-0lax">CosyVoice 2-S</td> <td class="tg-0lax">1.45 | 2.38 | 8.08</td> </tr> <tr> <td class="tg-0lax">Qwen2.5-Omni-3B_ICL</td> <td class="tg-0lax">1.95 | 2.87 | 9.92</td> </tr> <tr> <td class="tg-0lax">Qwen2.5-Omni-3B_RL</td> <td class="tg-0lax">1.58 | 2.51 | 7.86</td> </tr> <tr> <td class="tg-0lax">Qwen2.5-Omni-7B_ICL</td> <td class="tg-0lax">1.70 | 2.72 | 7.97</td> </tr> <tr> <td class="tg-0lax">Qwen2.5-Omni-7B_RL</td> <td class="tg-0lax">1.42 | 2.32 | 6.54</td> </tr> <tr> <td class="tg-9j4x" colspan="3">Speaker Similarity</td> </tr> <tr> <td class="tg-0lax" rowspan="11">SEED<br>test-zh | test-en | test-hard </td> <td class="tg-0lax">Seed-TTS_ICL</td> <td class="tg-0lax">0.796 | 0.762 | 0.776</td> </tr> <tr> <td class="tg-0lax">Seed-TTS_RL</td> <td class="tg-0lax"><strong>0.801</strong> | <strong>0.766</strong> | <strong>0.782</strong></td> </tr> <tr> <td class="tg-0lax">MaskGCT</td> <td class="tg-0lax">0.774 | 0.714 | 0.748</td> </tr> <tr> <td class="tg-0lax">E2_TTS</td> <td class="tg-0lax">0.730 | 0.710 | -</td> </tr> <tr> <td class="tg-0lax">F5-TTS</td> <td class="tg-0lax">0.741 | 0.647 | 0.713</td> </tr> <tr> <td class="tg-0lax">CosyVoice 2</td> <td class="tg-0lax">0.748 | 0.652 | 0.724</td> </tr> <tr> <td class="tg-0lax">CosyVoice 2-S</td> <td class="tg-0lax">0.753 | 0.654 | 0.732</td> </tr> <tr> <td class="tg-0lax">Qwen2.5-Omni-3B_ICL</td> <td class="tg-0lax">0.741 | 0.635 | 0.748</td> </tr> <tr> <td class="tg-0lax">Qwen2.5-Omni-3B_RL</td> <td class="tg-0lax">0.744 | 0.635 | 0.746</td> </tr> <tr> <td class="tg-0lax">Qwen2.5-Omni-7B_ICL</td> <td class="tg-0lax">0.752 | 0.632 | 0.747</td> </tr> <tr> <td class="tg-0lax">Qwen2.5-Omni-7B_RL</td> <td class="tg-0lax">0.754 | 0.641 | 0.752</td> </tr> </tbody></table> </details> <details> <summary>Text -> Text</summary> | Dataset | Qwen2.5-Omni-7B | Qwen2.5-Omni-3B | Qwen2.5-7B | Qwen2.5-3B | Qwen2-7B | Llama3.1-8B | Gemma2-9B | |-----------------------------------|-----------|------------|------------|------------|------------|-------------|-----------| | MMLU-Pro | 47.0 | 40.4 | **56.3** | 43.7 | 44.1 | 48.3 | 52.1 | | MMLU-redux | 71.0 | 60.9 | **75.4** | 64.4 | 67.3 | 67.2 | 72.8 | | LiveBench<sub>0831</sub> | 29.6 | 22.3 | **35.9** | 26.8 | 29.2 | 26.7 | 30.6 | | GPQA | 30.8 | 34.3 | **36.4** | 30.3 | 34.3 | 32.8 | 32.8 | | MATH | 71.5 | 63.6 | **75.5** | 65.9 | 52.9 | 51.9 | 44.3 | | GSM8K | 88.7 | 82.6 | **91.6** | 86.7 | 85.7 | 84.5 | 76.7 | | HumanEval | 78.7 | 70.7 | **84.8** | 74.4 | 79.9 | 72.6 | 68.9 | | MBPP | 73.2 | 70.4 | **79.2** | 72.7 | 67.2 | 69.6 | 74.9 | | MultiPL-E | 65.8 | 57.6 | **70.4** | 60.2 | 59.1 | 50.7 | 53.4 | | LiveCodeBench<sub>2305-2409</sub> | 24.6 | 16.5 | **28.7** | 19.9 | 23.9 | 8.3 | 18.9 | </details> ## Quickstart Below, we provide simple examples to show how to use Qwen2.5-Omni with 🤗 Transformers. 
The code for Qwen2.5-Omni is available in the latest Hugging Face Transformers, and we advise you to build from source with the following commands: ``` pip uninstall transformers pip install git+https://github.com/huggingface/transformers@v4.51.3-Qwen2.5-Omni-preview pip install accelerate ``` Otherwise, you might encounter the following error: ``` KeyError: 'qwen2_5_omni' ``` We offer a toolkit to help you handle various types of audio and visual input more conveniently, as if you were using an API. This includes base64, URLs, and interleaved audio, images and videos. You can install it using the following command; make sure your system has `ffmpeg` installed: ```bash # It's highly recommended to use the `[decord]` feature for faster video loading. pip install qwen-omni-utils[decord] -U ``` If you are not using Linux, you might not be able to install `decord` from PyPI. In that case, you can use `pip install qwen-omni-utils -U`, which will fall back to using torchvision for video processing. However, you can still [install decord from source](https://github.com/dmlc/decord?tab=readme-ov-file#install-from-source) so that decord is used when loading videos. ### 🤗 Transformers Usage Here is a code snippet showing how to use the chat model with `transformers` and `qwen_omni_utils`: ```python import soundfile as sf from transformers import Qwen2_5OmniForConditionalGeneration, Qwen2_5OmniProcessor from qwen_omni_utils import process_mm_info # default: Load the model on the available device(s) model = Qwen2_5OmniForConditionalGeneration.from_pretrained("Qwen/Qwen2.5-Omni-3B", torch_dtype="auto", device_map="auto") # We recommend enabling flash_attention_2 for better acceleration and memory saving. # model = Qwen2_5OmniForConditionalGeneration.from_pretrained( # "Qwen/Qwen2.5-Omni-3B", # torch_dtype="auto", # device_map="auto", # attn_implementation="flash_attention_2", # ) processor = Qwen2_5OmniProcessor.from_pretrained("Qwen/Qwen2.5-Omni-3B") conversation = [ { "role": "system", "content": [ {"type": "text", "text": "You are Qwen, a virtual human developed by the Qwen Team, Alibaba Group, capable of perceiving auditory and visual inputs, as well as generating text and speech."} ], }, { "role": "user", "content": [ {"type": "video", "video": "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2.5-Omni/draw.mp4"}, ], }, ] # set use audio in video USE_AUDIO_IN_VIDEO = True # Preparation for inference text = processor.apply_chat_template(conversation, add_generation_prompt=True, tokenize=False) audios, images, videos = process_mm_info(conversation, use_audio_in_video=USE_AUDIO_IN_VIDEO) inputs = processor(text=text, audio=audios, images=images, videos=videos, return_tensors="pt", padding=True, use_audio_in_video=USE_AUDIO_IN_VIDEO) inputs = inputs.to(model.device).to(model.dtype) # Inference: Generation of the output text and audio text_ids, audio = model.generate(**inputs, use_audio_in_video=USE_AUDIO_IN_VIDEO) text = processor.batch_decode(text_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False) print(text) sf.write( "output.wav", audio.reshape(-1).detach().cpu().numpy(), samplerate=24000, ) ``` <details> <summary>Minimum GPU memory requirements</summary> |Model | Precision | 15(s) Video | 30(s) Video | 60(s) Video | |--------------|-----------| ------------- | ------------- | ------------------ | | Qwen-Omni-3B | FP32 | 89.10 GB | Not Recommended | Not Recommended | | Qwen-Omni-3B | BF16 | 18.38 GB | 22.43 GB | 28.22 GB | | Qwen-Omni-7B | FP32 | 93.56 GB | Not Recommended | Not Recommended | | Qwen-Omni-7B | 
BF16 | 31.11 GB | 41.85 GB | 60.19 GB | Note: The table above presents the theoretical minimum memory requirements for inference with `transformers`, and `BF16` is tested with `attn_implementation="flash_attention_2"`; however, in practice, the actual memory usage is typically at least 1.2 times higher. For more information, see the linked resource [here](https://huggingface.co/docs/accelerate/main/en/usage_guides/model_size_estimator). </details> <details> <summary>Video URL resource usage</summary> Video URL compatibility largely depends on the third-party library version. The details are in the table below. Change the backend by setting `FORCE_QWENVL_VIDEO_READER=torchvision` or `FORCE_QWENVL_VIDEO_READER=decord` if you prefer not to use the default one. | Backend | HTTP | HTTPS | |-------------|------|-------| | torchvision >= 0.19.0 | ✅ | ✅ | | torchvision < 0.19.0 | ❌ | ❌ | | decord | ✅ | ❌ | </details> <details> <summary>Batch inference</summary> The model can batch mixed samples of various input types, such as text, images, audio and videos, when `return_audio=False` is set. Here is an example. ```python # Sample messages for batch inference # Conversation with video only conversation1 = [ { "role": "system", "content": [ {"type": "text", "text": "You are Qwen, a virtual human developed by the Qwen Team, Alibaba Group, capable of perceiving auditory and visual inputs, as well as generating text and speech."} ], }, { "role": "user", "content": [ {"type": "video", "video": "/path/to/video.mp4"}, ] } ] # Conversation with audio only conversation2 = [ { "role": "system", "content": [ {"type": "text", "text": "You are Qwen, a virtual human developed by the Qwen Team, Alibaba Group, capable of perceiving auditory and visual inputs, as well as generating text and speech."} ], }, { "role": "user", "content": [ {"type": "audio", "audio": "/path/to/audio.wav"}, ] } ] # Conversation with pure text conversation3 = [ { "role": "system", "content": [ {"type": "text", "text": "You are Qwen, a virtual human developed by the Qwen Team, Alibaba Group, capable of perceiving auditory and visual inputs, as well as generating text and speech."} ], }, { "role": "user", "content": "who are you?" 
} ] # Conversation with mixed media conversation4 = [ { "role": "system", "content": [ {"type": "text", "text": "You are Qwen, a virtual human developed by the Qwen Team, Alibaba Group, capable of perceiving auditory and visual inputs, as well as generating text and speech."} ], }, { "role": "user", "content": [ {"type": "image", "image": "/path/to/image.jpg"}, {"type": "video", "video": "/path/to/video.mp4"}, {"type": "audio", "audio": "/path/to/audio.wav"}, {"type": "text", "text": "What elements can you see and hear in these media?"}, ], } ] # Combine messages for batch processing conversations = [conversation1, conversation2, conversation3, conversation4] # set use audio in video USE_AUDIO_IN_VIDEO = True # Preparation for batch inference text = processor.apply_chat_template(conversations, add_generation_prompt=True, tokenize=False) audios, images, videos = process_mm_info(conversations, use_audio_in_video=USE_AUDIO_IN_VIDEO) inputs = processor(text=text, audio=audios, images=images, videos=videos, return_tensors="pt", padding=True, use_audio_in_video=USE_AUDIO_IN_VIDEO) inputs = inputs.to(model.device).to(model.dtype) # Batch Inference text_ids = model.generate(**inputs, use_audio_in_video=USE_AUDIO_IN_VIDEO, return_audio=False) text = processor.batch_decode(text_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False) print(text) ``` </details> ### Usage Tips #### Prompt for audio output If you need audio output, the system prompt must be set to "You are Qwen, a virtual human developed by the Qwen Team, Alibaba Group, capable of perceiving auditory and visual inputs, as well as generating text and speech."; otherwise, audio output may not work as expected. ``` { "role": "system", "content": [ {"type": "text", "text": "You are Qwen, a virtual human developed by the Qwen Team, Alibaba Group, capable of perceiving auditory and visual inputs, as well as generating text and speech."} ], } ``` #### Use audio in video During multimodal interaction, the videos provided by users are often accompanied by audio (such as questions about the content of the video, or sounds produced by events in the video). This information helps the model provide a better interactive experience, so we offer the following options for deciding whether to use the audio in a video. ```python # first place, in data preprocessing audios, images, videos = process_mm_info(conversations, use_audio_in_video=True) ``` ```python # second place, in model processor inputs = processor(text=text, audio=audios, images=images, videos=videos, return_tensors="pt", padding=True, use_audio_in_video=True) ``` ```python # third place, in model inference text_ids, audio = model.generate(**inputs, use_audio_in_video=True) ``` Note that during a multi-round conversation, the `use_audio_in_video` parameter must be set to the same value in all of these places; otherwise, unexpected results will occur. #### Use audio output or not The model supports both text and audio outputs. If you do not need audio output, you can call `model.disable_talker()` after initializing the model. This saves about 2 GB of GPU memory, but the `return_audio` option of the `generate` function can then only be set to `False`. 
```python model = Qwen2_5OmniForConditionalGeneration.from_pretrained( "Qwen/Qwen2.5-Omni-3B", torch_dtype="auto", device_map="auto" ) model.disable_talker() ``` For a more flexible experience, we recommend deciding whether to return audio each time the `generate` function is called. If `return_audio` is set to `False`, the model will only return text outputs, so text responses arrive faster. ```python model = Qwen2_5OmniForConditionalGeneration.from_pretrained( "Qwen/Qwen2.5-Omni-3B", torch_dtype="auto", device_map="auto" ) ... text_ids = model.generate(**inputs, return_audio=False) ``` #### Change voice type of output audio Qwen2.5-Omni supports changing the voice of the output audio. The `"Qwen/Qwen2.5-Omni-3B"` checkpoint supports two voice types, as follows: | Voice Type | Gender | Description | |------------|--------|-------------| | Chelsie | Female | A honeyed, velvety voice that carries a gentle warmth and luminous clarity.| | Ethan | Male | A bright, upbeat voice with infectious energy and a warm, approachable vibe.| Use the `speaker` parameter of the `generate` function to specify the voice type. If `speaker` is not specified, the default voice type is `Chelsie`. ```python text_ids, audio = model.generate(**inputs, speaker="Chelsie") ``` ```python text_ids, audio = model.generate(**inputs, speaker="Ethan") ``` #### Flash-Attention 2 to speed up generation First, make sure to install the latest version of Flash Attention 2: ```bash pip install -U flash-attn --no-build-isolation ``` Also, you should have hardware that is compatible with FlashAttention 2. Read more about it in the official documentation of the [flash attention repository](https://github.com/Dao-AILab/flash-attention). FlashAttention-2 can only be used when a model is loaded in `torch.float16` or `torch.bfloat16`. To load and run a model using FlashAttention-2, add `attn_implementation="flash_attention_2"` when loading the model: ```python import torch from transformers import Qwen2_5OmniForConditionalGeneration model = Qwen2_5OmniForConditionalGeneration.from_pretrained( "Qwen/Qwen2.5-Omni-3B", device_map="auto", torch_dtype=torch.bfloat16, attn_implementation="flash_attention_2", ) ``` ## Citation If you find our paper and code useful in your research, please consider giving a star :star: and citation :pencil: :) ```BibTeX @article{Qwen2.5-Omni, title={Qwen2.5-Omni Technical Report}, author={Jin Xu, Zhifang Guo, Jinzheng He, Hangrui Hu, Ting He, Shuai Bai, Keqin Chen, Jialin Wang, Yang Fan, Kai Dang, Bin Zhang, Xiong Wang, Yunfei Chu, Junyang Lin}, journal={arXiv preprint arXiv:2503.20215}, year={2025} } ``` <br>
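As a small illustrative sketch (not part of the original card), the two tips above can be combined into a text-only fast path: disable the talker to save memory and call `generate` with `return_audio=False`. Only APIs already shown in this card are used; the user prompt is arbitrary.

```python
from transformers import Qwen2_5OmniForConditionalGeneration, Qwen2_5OmniProcessor

model = Qwen2_5OmniForConditionalGeneration.from_pretrained(
    "Qwen/Qwen2.5-Omni-3B", torch_dtype="auto", device_map="auto"
)
model.disable_talker()  # saves ~2 GB of GPU memory; generate() must then use return_audio=False
processor = Qwen2_5OmniProcessor.from_pretrained("Qwen/Qwen2.5-Omni-3B")

conversation = [
    {"role": "system", "content": [{"type": "text", "text": "You are Qwen, a virtual human developed by the Qwen Team, Alibaba Group, capable of perceiving auditory and visual inputs, as well as generating text and speech."}]},
    {"role": "user", "content": [{"type": "text", "text": "Give me a one-sentence summary of what you can do."}]},
]

# Text-only input, so no audio/image/video preprocessing is needed.
text = processor.apply_chat_template(conversation, add_generation_prompt=True, tokenize=False)
inputs = processor(text=text, return_tensors="pt", padding=True).to(model.device)

text_ids = model.generate(**inputs, return_audio=False)
print(processor.batch_decode(text_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False))
```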
the-acorn-ai/spiral-qwen3-4b-simple-negotiation-step00128
the-acorn-ai
2025-09-11T20:16:00Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "spiral", "self-play", "reinforcement-learning", "multi-agent", "conversational", "en", "base_model:Qwen/Qwen3-8B-Base", "base_model:finetune:Qwen/Qwen3-8B-Base", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-11T20:15:30Z
--- base_model: Qwen/Qwen3-8B-Base license: apache-2.0 language: - en library_name: transformers pipeline_tag: text-generation tags: - spiral - self-play - reinforcement-learning - qwen3 - multi-agent --- # SPIRAL Qwen3-8B Multi-Agent Model This model was trained using the SPIRAL (Self-Play Iterative Reinforcement learning for Adaptation and Learning) framework. ## Model Details - **Base Model**: Qwen/Qwen3-8B-Base - **Training Framework**: SPIRAL - **Checkpoint**: step_00128 - **Model Size**: 8B parameters - **Training Date**: 2025-09-11 ## Training Configuration The model was trained with self-play on multiple environments: - KuhnPoker-v1 - TicTacToe-v0 - SimpleNegotiation-v1 ### Training Parameters ```json { "learning_rate": "1e-6", "train_batch_size": 128, "num_ppo_epochs": 2, "temperature": 1.0, "max_model_len": 16384, "environments": [ "KuhnPoker-v1", "TicTacToe-v0", "SimpleNegotiation-v1" ], "base_model": "Qwen/Qwen3-8B-Base", "framework": "SPIRAL" } ``` ## Usage ```python from transformers import AutoTokenizer, AutoModelForCausalLM import torch tokenizer = AutoTokenizer.from_pretrained("the-acorn-ai/spiral-qwen3-4b-simple-negotiation-step00128") model = AutoModelForCausalLM.from_pretrained( "the-acorn-ai/spiral-qwen3-4b-simple-negotiation-step00128", torch_dtype=torch.bfloat16, device_map="auto" ) # Generate text inputs = tokenizer("Your prompt here", return_tensors="pt") outputs = model.generate(**inputs, max_length=100) response = tokenizer.decode(outputs[0], skip_special_tokens=True) ``` ## License This model is licensed under the Apache License 2.0.
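Since the card tags this checkpoint as conversational, a chat-style prompt via `tokenizer.apply_chat_template` may work better than raw text. The following is a hedged sketch only: it assumes the tokenizer ships a chat template (not verified for this checkpoint), and the negotiation prompt is an arbitrary example.

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

repo = "the-acorn-ai/spiral-qwen3-4b-simple-negotiation-step00128"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "You are negotiating the price of a used bike. Make an opening offer."}]

# Format the conversation the way a chat-tuned Qwen3 model expects to see it.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=128, do_sample=True, temperature=1.0)
# Strip the prompt tokens and decode only the newly generated continuation.
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```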
nsemhoun/Claire-12B-v1.0
nsemhoun
2025-09-11T20:15:46Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-09-11T20:15:46Z
--- license: apache-2.0 ---
chaitnya26/FLUX.1-Kontext-dev-GGUF-forked
chaitnya26
2025-09-11T20:15:01Z
0
0
diffusion-single-file
[ "diffusion-single-file", "gguf", "image-generation", "flux", "image-to-image", "en", "arxiv:2506.15742", "base_model:black-forest-labs/FLUX.1-Kontext-dev", "base_model:quantized:black-forest-labs/FLUX.1-Kontext-dev", "license:other", "region:us" ]
image-to-image
2025-09-11T20:15:01Z
--- language: - en license: other license_name: flux-1-dev-non-commercial-license license_link: LICENSE.md extra_gated_prompt: >- By clicking "Agree", you agree to the [FluxDev Non-Commercial License Agreement](https://huggingface.co/black-forest-labs/FLUX.1-Kontext-dev/blob/main/LICENSE.md) and acknowledge the [Acceptable Use Policy](https://huggingface.co/black-forest-labs/FLUX.1-Kontext-dev/blob/main/POLICY.md). tags: - image-generation - flux - diffusion-single-file pipeline_tag: image-to-image base_model: - black-forest-labs/FLUX.1-Kontext-dev --- Created with [city96/ComfyUI-GGUF](https://github.com/city96/ComfyUI-GGUF) Example workflow added in files `FLUX.1 Kontext [dev]` is a 12 billion parameter rectified flow transformer capable of editing images based on text instructions. For more information, please read our [blog post](https://bfl.ai/announcements/flux-1-kontext-dev) and our [technical report](https://arxiv.org/abs/2506.15742). You can find information about the `[pro]` version in [here](https://bfl.ai/models/flux-kontext). # Key Features 1. Change existing images based on an edit instruction. 2. Have character, style and object reference without any finetuning. 3. Robust consistency allows users to refine an image through multiple successive edits with minimal visual drift. 4. Trained using guidance distillation, making `FLUX.1 Kontext [dev]` more efficient. 5. Open weights to drive new scientific research, and empower artists to develop innovative workflows. 6. Generated outputs can be used for personal, scientific, and commercial purposes, as described in the [FLUX.1 \[dev\] Non-Commercial License](https://github.com/black-forest-labs/flux/blob/main/model_licenses/LICENSE-FLUX1-dev). # Usage We provide a reference implementation of `FLUX.1 Kontext [dev]`, as well as sampling code, in a dedicated [github repository](https://github.com/black-forest-labs/flux). Developers and creatives looking to build on top of `FLUX.1 Kontext [dev]` are encouraged to use this as a starting point. `FLUX.1 Kontext [dev]` is also available in both [ComfyUI](https://github.com/comfyanonymous/ComfyUI) and [Diffusers](https://github.com/huggingface/diffusers). ## API Endpoints The FLUX.1 Kontext models are also available via API from the following sources - bfl.ai: https://docs.bfl.ai/ - DataCrunch: https://datacrunch.io/flux-kontext - fal: https://fal.ai/flux-kontext - Replicate: https://replicate.com/blog/flux-kontext - https://replicate.com/black-forest-labs/flux-kontext-dev - https://replicate.com/black-forest-labs/flux-kontext-pro - https://replicate.com/black-forest-labs/flux-kontext-max - Runware: https://runware.ai/blog/introducing-flux1-kontext-instruction-based-image-editing-with-ai?utm_source=bfl - TogetherAI: https://www.together.ai/models/flux-1-kontext-dev --- # Risks Risks Black Forest Labs is committed to the responsible development of generative AI technology. Prior to releasing FLUX.1 Kontext, we evaluated and mitigated a number of risks in our models and services, including the generation of unlawful content. We implemented a series of pre-release mitigations to help prevent misuse by third parties, with additional post-release mitigations to help address residual risks: 1. **Pre-training mitigation**. We filtered pre-training data for multiple categories of “not safe for work” (NSFW) content to help prevent a user generating unlawful content in response to text prompts or uploaded images. 2. 
**Post-training mitigation.** We have partnered with the Internet Watch Foundation, an independent nonprofit organization dedicated to preventing online abuse, to filter known child sexual abuse material (CSAM) from post-training data. Subsequently, we undertook multiple rounds of targeted fine-tuning to provide additional mitigation against potential abuse. By inhibiting certain behaviors and concepts in the trained model, these techniques can help to prevent a user generating synthetic CSAM or nonconsensual intimate imagery (NCII) from a text prompt, or transforming an uploaded image into synthetic CSAM or NCII. 3. **Pre-release evaluation.** Throughout this process, we conducted multiple internal and external third-party evaluations of model checkpoints to identify further opportunities for improvement. The third-party evaluations—which included 21 checkpoints of FLUX.1 Kontext [pro] and [dev]—focused on eliciting CSAM and NCII through adversarial testing with text-only prompts, as well as uploaded images with text prompts. Next, we conducted a final third-party evaluation of the proposed release checkpoints, focused on text-to-image and image-to-image CSAM and NCII generation. The final FLUX.1 Kontext [pro] (as offered through the FLUX API only) and FLUX.1 Kontext [dev] (released as an open-weight model) checkpoints demonstrated very high resilience against violative inputs, and FLUX.1 Kontext [dev] demonstrated higher resilience than other similar open-weight models across these risk categories. Based on these findings, we approved the release of the FLUX.1 Kontext [pro] model via API, and the release of the FLUX.1 Kontext [dev] model as openly-available weights under a non-commercial license to support third-party research and development. 4. **Inference filters.** We are applying multiple filters to intercept text prompts, uploaded images, and output images on the FLUX API for FLUX.1 Kontext [pro]. Filters for CSAM and NCII are provided by Hive, a third-party provider, and cannot be adjusted or removed by developers. We provide filters for other categories of potentially harmful content, including gore, which can be adjusted by developers based on their specific risk profile. Additionally, the repository for the open FLUX.1 Kontext [dev] model includes filters for illegal or infringing content. Filters or manual review must be used with the model under the terms of the FLUX.1 [dev] Non-Commercial License. We may approach known deployers of the FLUX.1 Kontext [dev] model at random to verify that filters or manual review processes are in place. 5. **Content provenance.** The FLUX API applies cryptographically-signed metadata to output content to indicate that images were produced with our model. Our API implements the Coalition for Content Provenance and Authenticity (C2PA) standard for metadata. 6. **Policies.** Access to our API and use of our models are governed by our Developer Terms of Service, Usage Policy, and FLUX.1 [dev] Non-Commercial License, which prohibit the generation of unlawful content or the use of generated content for unlawful, defamatory, or abusive purposes. Developers and users must consent to these conditions to access the FLUX Kontext models. 7. **Monitoring.** We are monitoring for patterns of violative use after release, and may ban developers who we detect intentionally and repeatedly violate our policies via the FLUX API. Additionally, we provide a dedicated email address (safety@blackforestlabs.ai) to solicit feedback from the community. 
We maintain a reporting relationship with organizations such as the Internet Watch Foundation and the National Center for Missing and Exploited Children, and we welcome ongoing engagement with authorities, developers, and researchers to share intelligence about emerging risks and develop effective mitigations. # License This model falls under the [FLUX.1 \[dev\] Non-Commercial License](https://github.com/black-forest-labs/flux/blob/main/model_licenses/LICENSE-FLUX1-dev). # Citation ```bib @misc{labs2025flux1kontextflowmatching, title={FLUX.1 Kontext: Flow Matching for In-Context Image Generation and Editing in Latent Space}, author={Black Forest Labs and Stephen Batifol and Andreas Blattmann and Frederic Boesel and Saksham Consul and Cyril Diagne and Tim Dockhorn and Jack English and Zion English and Patrick Esser and Sumith Kulal and Kyle Lacey and Yam Levi and Cheng Li and Dominik Lorenz and Jonas Müller and Dustin Podell and Robin Rombach and Harry Saini and Axel Sauer and Luke Smith}, year={2025}, eprint={2506.15742}, archivePrefix={arXiv}, primaryClass={cs.GR}, url={https://arxiv.org/abs/2506.15742}, } ```
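Beyond ComfyUI, recent Diffusers releases can load GGUF checkpoints directly. The sketch below is hedged: the GGUF filename is a placeholder for one of the files in this repository, and the class and argument names (`FluxKontextPipeline`, `GGUFQuantizationConfig`, `from_single_file`) assume a recent diffusers version with FLUX Kontext and GGUF support; they may need adjusting for your installed version.

```python
import torch
from diffusers import FluxKontextPipeline, FluxTransformer2DModel, GGUFQuantizationConfig
from diffusers.utils import load_image

# Placeholder: point this at one of the .gguf files in this repository.
gguf_path = "flux1-kontext-dev-Q4_K_M.gguf"

# Load the quantized transformer from the GGUF file, keeping compute in bfloat16.
transformer = FluxTransformer2DModel.from_single_file(
    gguf_path,
    quantization_config=GGUFQuantizationConfig(compute_dtype=torch.bfloat16),
    torch_dtype=torch.bfloat16,
)

# The base repository supplies the text encoders, VAE, and scheduler.
pipe = FluxKontextPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-Kontext-dev",
    transformer=transformer,
    torch_dtype=torch.bfloat16,
)
pipe.enable_model_cpu_offload()

# Edit a local input image according to a text instruction.
image = load_image("input.png")  # placeholder local image
edited = pipe(image=image, prompt="Turn this photo into a watercolor painting", guidance_scale=2.5).images[0]
edited.save("edited.png")
```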
the-acorn-ai/spiral-qwen3-4b-simple-negotiation-step00064
the-acorn-ai
2025-09-11T20:14:58Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "spiral", "self-play", "reinforcement-learning", "multi-agent", "conversational", "en", "base_model:Qwen/Qwen3-8B-Base", "base_model:finetune:Qwen/Qwen3-8B-Base", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-11T20:14:27Z
--- base_model: Qwen/Qwen3-8B-Base license: apache-2.0 language: - en library_name: transformers pipeline_tag: text-generation tags: - spiral - self-play - reinforcement-learning - qwen3 - multi-agent --- # SPIRAL Qwen3-8B Multi-Agent Model This model was trained using the SPIRAL (Self-Play Iterative Reinforcement learning for Adaptation and Learning) framework. ## Model Details - **Base Model**: Qwen/Qwen3-8B-Base - **Training Framework**: SPIRAL - **Checkpoint**: step_00064 - **Model Size**: 8B parameters - **Training Date**: 2025-09-11 ## Training Configuration The model was trained with self-play on multiple environments: - KuhnPoker-v1 - TicTacToe-v0 - SimpleNegotiation-v1 ### Training Parameters ```json { "learning_rate": "1e-6", "train_batch_size": 128, "num_ppo_epochs": 2, "temperature": 1.0, "max_model_len": 16384, "environments": [ "KuhnPoker-v1", "TicTacToe-v0", "SimpleNegotiation-v1" ], "base_model": "Qwen/Qwen3-8B-Base", "framework": "SPIRAL" } ``` ## Usage ```python from transformers import AutoTokenizer, AutoModelForCausalLM import torch tokenizer = AutoTokenizer.from_pretrained("the-acorn-ai/spiral-qwen3-4b-simple-negotiation-step00064") model = AutoModelForCausalLM.from_pretrained( "the-acorn-ai/spiral-qwen3-4b-simple-negotiation-step00064", torch_dtype=torch.bfloat16, device_map="auto" ) # Generate text inputs = tokenizer("Your prompt here", return_tensors="pt") outputs = model.generate(**inputs, max_length=100) response = tokenizer.decode(outputs[0], skip_special_tokens=True) ``` ## License This model is licensed under the Apache License 2.0.
ginic/train_duration_6400_samples_4_wav2vec2-large-xlsr-53-buckeye-ipa
ginic
2025-09-11T20:14:45Z
0
0
null
[ "safetensors", "wav2vec2", "automatic-speech-recognition", "en", "license:mit", "region:us" ]
automatic-speech-recognition
2025-09-11T20:13:47Z
--- license: mit language: - en pipeline_tag: automatic-speech-recognition --- # About This model was created to support experiments for evaluating phonetic transcription with the Buckeye corpus as part of https://github.com/ginic/multipa. This is a version of facebook/wav2vec2-large-xlsr-53 fine-tuned on a specific subset of the Buckeye corpus. For details about specific model parameters, please view the config.json here or the training scripts in the scripts/buckeye_experiments folder of the GitHub repository. # Experiment Details These experiments are targeted at understanding how increasing the amount of data used to train the model affects performance. The first number in the model name indicates the total number of randomly selected data samples. Data samples are selected to maintain a 50/50 gender split across speakers, with the exception of the models trained on 20000 samples: there are 18782 audio samples in our train split of Buckeye, but they are not split equally between male and female speakers. Experiments using 20000 samples actually use all 8252 samples from female speakers in the train set, but randomly select 10000 samples from male speakers, for a total of 18252 samples. For each number of train data samples, 5 models are trained to vary training data selection (`train_seed`) without varying other hyperparameters. Before these models were trained, simple grid-search hyperparameter tuning was done to select reasonable hyperparameters for fine-tuning with the target number of samples. The hyperparameter tuning models have not been uploaded to HuggingFace. Goals: - See how performance on the test set changes as more data is used in fine-tuning Params to vary: - training seed (--train_seed) - number of data samples used in training the model (--train_samples): 100, 200, 400, 800, 1600, 3200, 6400, 12800, 20000
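As a usage sketch (not part of the original card): the checkpoint can be loaded like any CTC-based wav2vec2 model, assuming the repository ships the usual processor/tokenizer files. `speech.wav` is a placeholder for a local recording; the audio is resampled to the 16 kHz rate XLSR models expect.

```python
import torch
import librosa
from transformers import AutoProcessor, Wav2Vec2ForCTC

repo = "ginic/train_duration_6400_samples_4_wav2vec2-large-xlsr-53-buckeye-ipa"
processor = AutoProcessor.from_pretrained(repo)
model = Wav2Vec2ForCTC.from_pretrained(repo)

# Load and resample the audio to 16 kHz mono.
audio, sr = librosa.load("speech.wav", sr=16000)

inputs = processor(audio, sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Greedy CTC decoding to an IPA transcription string.
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids)[0])
```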
ultratopaz/1929811
ultratopaz
2025-09-11T20:14:26Z
0
0
null
[ "region:us" ]
null
2025-09-11T20:14:24Z
[View on Civ Archive](https://civarchive.com/models/1774064?modelVersionId=2026263)
the-acorn-ai/spiral-qwen3-4b-simple-negotiation-step00032
the-acorn-ai
2025-09-11T20:14:25Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "spiral", "self-play", "reinforcement-learning", "multi-agent", "conversational", "en", "base_model:Qwen/Qwen3-8B-Base", "base_model:finetune:Qwen/Qwen3-8B-Base", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-11T20:13:54Z
--- base_model: Qwen/Qwen3-8B-Base license: apache-2.0 language: - en library_name: transformers pipeline_tag: text-generation tags: - spiral - self-play - reinforcement-learning - qwen3 - multi-agent --- # SPIRAL Qwen3-8B Multi-Agent Model This model was trained using the SPIRAL (Self-Play Iterative Reinforcement learning for Adaptation and Learning) framework. ## Model Details - **Base Model**: Qwen/Qwen3-8B-Base - **Training Framework**: SPIRAL - **Checkpoint**: step_00032 - **Model Size**: 8B parameters - **Training Date**: 2025-09-11 ## Training Configuration The model was trained with self-play on multiple environments: - KuhnPoker-v1 - TicTacToe-v0 - SimpleNegotiation-v1 ### Training Parameters ```json { "learning_rate": "1e-6", "train_batch_size": 128, "num_ppo_epochs": 2, "temperature": 1.0, "max_model_len": 16384, "environments": [ "KuhnPoker-v1", "TicTacToe-v0", "SimpleNegotiation-v1" ], "base_model": "Qwen/Qwen3-8B-Base", "framework": "SPIRAL" } ``` ## Usage ```python from transformers import AutoTokenizer, AutoModelForCausalLM import torch tokenizer = AutoTokenizer.from_pretrained("the-acorn-ai/spiral-qwen3-4b-simple-negotiation-step00032") model = AutoModelForCausalLM.from_pretrained( "the-acorn-ai/spiral-qwen3-4b-simple-negotiation-step00032", torch_dtype=torch.bfloat16, device_map="auto" ) # Generate text inputs = tokenizer("Your prompt here", return_tensors="pt") outputs = model.generate(**inputs, max_length=100) response = tokenizer.decode(outputs[0], skip_special_tokens=True) ``` ## License This model is licensed under the Apache License 2.0.
seraphimzzzz/2029542
seraphimzzzz
2025-09-11T20:12:45Z
0
0
null
[ "region:us" ]
null
2025-09-11T20:12:42Z
[View on Civ Archive](https://civarchive.com/models/1886538?modelVersionId=2135373)
seraphimzzzz/1719458
seraphimzzzz
2025-09-11T20:12:21Z
0
0
null
[ "region:us" ]
null
2025-09-11T20:12:18Z
[View on Civ Archive](https://civarchive.com/models/1607321?modelVersionId=1818911)
AbstractPhil/penta-vit-experiments
AbstractPhil
2025-09-11T20:11:06Z
0
1
null
[ "tensorboard", "zero-shot-classification", "dataset:AbstractPhil/geometric-vocab", "license:mit", "region:us" ]
zero-shot-classification
2025-09-08T16:07:17Z
--- license: mit datasets: - AbstractPhil/geometric-vocab pipeline_tag: zero-shot-classification --- # I've had an epiphany. We don't NEED transformer layers in their current form. David's architecture already solved this need with high-efficiency multi-stage geometric mathematics. David's classification structure houses a series of dimensional projection sub-systems tasked with learning mastery based on each pentachoron structure. Each of those 5d representations ends up learning thousands of representative features. David is already capable of feature generation, just not robust enough to fully manifest an enriched ViT-grade dimensional feature... yet. David's architecture can handle ImageNet's classifier count and features, leveraging 1000 classes with ease, sitting on a floppy disk at over 70% accuracy because David sees Clip-Vit-Base-Patch16 features. I believe I've figured out a way to fundamentally represent those features in a meaningful way that can replace transformer layers in their methodology with a different form of feedforward trajectory, edge, point, deviation, jitter, helix, theta, and similarity assessment that should house the needed information to teach the experts how to behave like David did. This should allow the much larger networks to retain mathematical precision, learn the features in a different form of patch than is currently expected to be a patch, and create legitimate high-density geometric features. # Better rope incoming with actual meaningful learning The last one wasn't meaningfully learning representations; the next should be more correctly curated and inferenced to impact the representative outcome. It should be a bit more accurate than the last, but no guarantees. I once again let AI handle it for me and now I'll need to go micromanage again. This is on me again, you'd think I would learn. Oftentimes they can handle these sorts of tasks and other times... well other times they just kind of hook shit together and say it works, then it spins in circles. Time for my favorite thing, research papers. It's starting to look more like a glorified branched FFN rather than an MLP, so that's a thing I suppose. # Theta rope seems less accurate than the penta head There's an experimental classification theta rotation head with multiple pentachora routes. The results are less accurate overall than the similarity through rose without it so far. Experiments ongoing. # I assumed full control from the AIs and built it correctly. I was relying too much on the AI and it made me slow. Today I assumed full control and built the models correctly. The architecture is cleaner and all three python files were uploaded for the v3 setup. vit_zana_small is already seeing 50% by epoch 50, a big step up from the earlier pixies hard-locked at 41%. # Zana the current version is quite small and quite fast At about 500k the zana_nano competes with its big sister pixie at a superior accuracy rating AND produces image features. Running the system with refined wordnet tokens rather than full unicode made all the difference. The findings show that meaningful semantics matter a whole lot. ``` unicode; 21% same model wordnet_eng; >42% ``` # All losses modified heavily, the originals did not work at all with the structure. V3 incoming. Pushing HEAVILY into losses based on the WORKING high-entropy high-learn-rate classification heads and forcing this thing into cohesion INSTANTLY. That's the play. No more 200 epochs. 
These things should be ready in 10-20 epochs at most, and they should hit 80%+ accuracy, or they fail. Those are the two potentials here. With correct logit and probe assessment, the substructure should become a profoundly more efficient and easily analyzable series of charts based on similarity for assessment and capability. None of this guessing or guesswork based on "what works with other models." We KNOW what works and I should never have second-guessed the formulas. I have implemented all of the most crucial and most powerful formulas from the others; now let's see if the universe makes a fool of me or not. If it does, SO BE IT! Let's build an AI singularity empire from there. We're about to teach a ViT diffusion. The real question is: will it learn, or will it collapse and need dual-block layers from Flux? # Better testing methodology development I'm reading up on papers about how various companies and research institutions tested their ViTs. My testing methodology isn't accurate enough, because accuracy doesn't just reflect the logit alignments but also the features generated by the internal layers. I'm leaning on logit alignment as a crutch instead of managing feature-alignment testing as well, which is likely cutting heavily into my system. Currently I'm building a notebook with better feature-testing capabilities to test features correctly. I anticipate faster trains when the confidence actually starts to pick up, since currently they are not confident at all in terms of classification. It's possible these ViTs could be MUCH MORE or MUCH LESS accurate than advertised, and I apologise for the inconvenience this has caused any onlookers. I'll be updating with additional inference code very soon. # Tinkerbell: 128d, 128 heads, 4.0 MLP, depth 4, only geometric attention... Well, it might work. I could make it smaller, but I doubt Tinkerbell would extract anything useful. Good luck, little one. # Enabling the Mix-N-Cut I've built a mix-n-cut that I've been avoiding enabling. This one is particularly formatted for pentachora, so we'll see how it fares. I'm trying to build one as SMALL AS POSSIBLE, so if this mix-n-cut can pull the task out of the bag I may as well run it. As it stands, the tiny ViTs cap at 41% on CIFAR-100 with no augmentations. I've been running all the trains without a single special effect and only minimal normalization. Let's see how the upcoming trains fare. pixie_base_128d_patch4_128h: Pixie base has 10 layers, 5 geometric attention and 5 traditional multi-head attention. Let's see how the mix-n-cut fares with the earlier ones first, then we'll run the base. The smaller ones seem to behave better using geometric attention with 256 expert heads, which is odd to me, but whatever works. They don't get much bigger with more experts, so I'll just try a tiny one with a ton of heads first. # Pentachoron Geometric Feature Extraction Pentachora ViTs are essentially micro-sized feature extractors that provide substantial accuracy for their small size. The more experiments I run, the smaller they become. The final goal is a full clip-vit that can house the entirety of LAION-400M in a fraction of the size and compute of OpenAI's clip-vit line. After that point I'll be confident the math is lined up well enough to train the true flagship, Beatrix. The process of useful classification and feature extraction has been a non-trivial problem in the computer science industry for a long time. 
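To make the idea concrete: the actual head is not published here, so the following is only a minimal sketch of how a pentachoron-style similarity classifier could be wired up, assuming each class owns five learned anchor vertices and the class logit is the best cosine similarity to any of them. The class name, shapes, and the max-pooling choice are illustrative assumptions, not the real implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PentachoronSimilarityHead(nn.Module):
    """Illustrative sketch only: one pentachoron (5 learned vertices) per class;
    the class logit is the best cosine similarity to any of its vertices."""
    def __init__(self, feature_dim: int, num_classes: int):
        super().__init__()
        # (num_classes, 5, feature_dim) bank of learnable vertices
        self.vertices = nn.Parameter(0.02 * torch.randn(num_classes, 5, feature_dim))

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        # features: (batch, feature_dim), e.g. CLIP ViT image embeddings
        f = F.normalize(features, dim=-1)          # (B, D)
        v = F.normalize(self.vertices, dim=-1)     # (C, 5, D)
        sims = torch.einsum("bd,cvd->bcv", f, v)   # cosine similarity to every vertex
        return sims.max(dim=-1).values             # (B, C) class logits

# e.g. 512-d clip-vit-base-patch16 features routed to 1000 ImageNet-style classes
head = PentachoronSimilarityHead(feature_dim=512, num_classes=1000)
logits = head(torch.randn(8, 512))                 # -> shape (8, 1000)
```

Swapping the max for a mean, or weighting the five vertices per class, gives the kind of alternative routing the notes above allude to.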
This repo will house the various ViT experiments that I Frankenstein together, with their weights and model code published in the repo itself. As I am an independent researcher, my resources are limited and I don't have the backing of any donors, so there will be time gaps unless some hardware is sliced off for me. Many of my repos have certain elements purposely omitted for papers in progress, my thesis arguments, my statements about certain universal elements, and a multitude of other ramblings; I don't plan to release those specific key details in full phonebook fashion for just ANY PERSON to read. # Let me use your high-end hardware. I deliver, success or failure, but I will deliver. I will not rattle a tin cup for you. Work out a deal with me and you get the weights; I get the classes developed for further use, meant for public release. Let me know if you're willing to work with me. I'll gladly share the code, the process, the progress, and the accumulated war chest of potentials that this system entails if you provide me gateways to some hardware that I can utilize. Until then, one experiment at a time.
seraphimzzzz/1610761
seraphimzzzz
2025-09-11T20:10:47Z
0
0
null
[ "region:us" ]
null
2025-09-11T20:10:44Z
[View on Civ Archive](https://civarchive.com/models/1511897?modelVersionId=1710208)
Stasun/blockassist
Stasun
2025-09-11T20:10:38Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "howling powerful aardvark", "arxiv:2504.07091", "region:us" ]
null
2025-09-11T19:10:23Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - howling powerful aardvark --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
ultratopaz/1706079
ultratopaz
2025-09-11T20:10:35Z
0
0
null
[ "region:us" ]
null
2025-09-11T20:10:32Z
[View on Civ Archive](https://civarchive.com/models/1591598?modelVersionId=1805379)
crystalline7/1701860
crystalline7
2025-09-11T20:10:23Z
0
0
null
[ "region:us" ]
null
2025-09-11T20:10:19Z
[View on Civ Archive](https://civarchive.com/models/1591598?modelVersionId=1801101)
chaitnya26/kontext-tryon7-fork
chaitnya26
2025-09-11T20:09:44Z
0
0
diffusers
[ "diffusers", "lora", "flux", "image-to-image", "en", "base_model:black-forest-labs/FLUX.1-Kontext-dev", "base_model:adapter:black-forest-labs/FLUX.1-Kontext-dev", "license:mit", "region:us" ]
image-to-image
2025-09-11T20:09:43Z
--- license: mit language: - en base_model: - black-forest-labs/FLUX.1-Kontext-dev pipeline_tag: image-to-image tags: - lora - diffusers - flux --- This is a batch-run banana model, used for practicing the kontext without mask outfit lora replacement effect. All the example results were achieved by directly combining two images without using a mask. Based on the test results, compared with the banana model, it has a greater advantage in terms of consistency. The workflow for each image is similar, with only a slight adjustment of parameters. You can view the details by dragging the image into the comfyui. This is the discussion on Reddit: https://www.reddit.com/r/comfyui/comments/1nchoit/kontext_tryon_lora_no_need_for_a_mask_auto_change/?utm_source=share&utm_medium=web3x&utm_name=web3xcss&utm_term=1&utm_content=share_button ------------------------------------------------------ 这是批量跑的香蕉模型,用来练的kontext 无蒙版换装lora 换的效果。 所有示例结果都是不使用蒙版,直接传两张图完成的换装。 从测试结果看对比香蕉模型,一致性方面更有优势。 每张图的工作流都差不多,仅稍微微调了一点参数,具体可把图片拖入到comfyui中查看。 在Reddit上的讨论: https://www.reddit.com/r/comfyui/comments/1nchoit/kontext_tryon_lora_no_need_for_a_mask_auto_change/?utm_source=share&utm_medium=web3x&utm_name=web3xcss&utm_term=1&utm_content=share_button ![image/png](https://cdn-uploads.huggingface.co/production/uploads/66081f25ecc38ac2454effa4/06L7eCd8U-3FijOCVS-Ji.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/66081f25ecc38ac2454effa4/vGC6pUQS9p0KOMkYYRHrS.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/66081f25ecc38ac2454effa4/YzceMvLhP_EGp5EdMgWv2.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/66081f25ecc38ac2454effa4/HaBDWbiiv3MEtcXeJ1pKK.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/66081f25ecc38ac2454effa4/eMzOM3F8t08j4Vhg3ZCXk.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/66081f25ecc38ac2454effa4/mVvLvseUXW7VbMtGt6QOL.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/66081f25ecc38ac2454effa4/FkuX9Jn96cPl9MdLMsXXw.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/66081f25ecc38ac2454effa4/qimAXFoaWGj9vsVwq0cRB.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/66081f25ecc38ac2454effa4/IauesrkWfI1l7UrFZrvwZ.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/66081f25ecc38ac2454effa4/hiTyvWkQjF5CZS4T7utHs.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/66081f25ecc38ac2454effa4/SaSo5S24rjUQLtOFvnZOd.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/66081f25ecc38ac2454effa4/NQrBXryTtN12gyyeB8Bjm.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/66081f25ecc38ac2454effa4/prr_HNqiTfMYDft83j_9B.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/66081f25ecc38ac2454effa4/-Bpd9DD4yZUgx4VhjPz7M.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/66081f25ecc38ac2454effa4/KiMEc2qy-NtNt2giH3y1P.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/66081f25ecc38ac2454effa4/TgBjTpd5mXzg-8FP9lhAj.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/66081f25ecc38ac2454effa4/QzVxXv_QI478W9nEiNTfm.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/66081f25ecc38ac2454effa4/mB7ZpWEK1JfJnVN9s47El.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/66081f25ecc38ac2454effa4/-ck6qKi2MpLa_XG0q4BgH.png) 
![image/png](https://cdn-uploads.huggingface.co/production/uploads/66081f25ecc38ac2454effa4/-7DBC54QVHAoVnl9rq4b8.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/66081f25ecc38ac2454effa4/pudr5J-8zpRsoM5Ih-cGp.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/66081f25ecc38ac2454effa4/KzV99Z0mHTuTrpKM_DOO3.png)
ilhamlk/lilt-en-funsd
ilhamlk
2025-09-11T20:09:19Z
12
0
transformers
[ "transformers", "tensorboard", "safetensors", "lilt", "token-classification", "generated_from_trainer", "base_model:SCUT-DLVCLab/lilt-roberta-en-base", "base_model:finetune:SCUT-DLVCLab/lilt-roberta-en-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2025-05-15T08:00:39Z
--- library_name: transformers license: mit base_model: SCUT-DLVCLab/lilt-roberta-en-base tags: - generated_from_trainer model-index: - name: lilt-en-funsd results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # lilt-en-funsd This model is a fine-tuned version of [SCUT-DLVCLab/lilt-roberta-en-base](https://huggingface.co/SCUT-DLVCLab/lilt-roberta-en-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.6374 - Answer: {'precision': 0.86558516801854, 'recall': 0.9143206854345165, 'f1': 0.8892857142857143, 'number': 817} - Header: {'precision': 0.6020408163265306, 'recall': 0.4957983193277311, 'f1': 0.543778801843318, 'number': 119} - Question: {'precision': 0.8907788719785139, 'recall': 0.9238625812441968, 'f1': 0.9070191431175934, 'number': 1077} - Overall Precision: 0.8667 - Overall Recall: 0.8947 - Overall F1: 0.8805 - Overall Accuracy: 0.8114 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - training_steps: 2500 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Answer | Header | Question | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy | |:-------------:|:--------:|:----:|:---------------:|:--------------------------------------------------------------------------------------------------------:|:---------------------------------------------------------------------------------------------------------:|:---------------------------------------------------------------------------------------------------------:|:-----------------:|:--------------:|:----------:|:----------------:| | 0.3924 | 10.5263 | 200 | 1.0090 | {'precision': 0.8406779661016949, 'recall': 0.9106487148102815, 'f1': 0.8742655699177438, 'number': 817} | {'precision': 0.463768115942029, 'recall': 0.5378151260504201, 'f1': 0.49805447470817127, 'number': 119} | {'precision': 0.8781630740393627, 'recall': 0.8700092850510678, 'f1': 0.8740671641791045, 'number': 1077} | 0.8349 | 0.8669 | 0.8506 | 0.7882 | | 0.0455 | 21.0526 | 400 | 1.4106 | {'precision': 0.8487972508591065, 'recall': 0.9069767441860465, 'f1': 0.8769230769230769, 'number': 817} | {'precision': 0.5384615384615384, 'recall': 0.5882352941176471, 'f1': 0.5622489959839357, 'number': 119} | {'precision': 0.8902104300091491, 'recall': 0.903435468895079, 'f1': 0.8967741935483872, 'number': 1077} | 0.8511 | 0.8862 | 0.8683 | 0.7952 | | 0.0133 | 31.5789 | 600 | 1.5157 | {'precision': 0.8369942196531792, 'recall': 0.8861689106487148, 'f1': 0.8608799048751485, 'number': 817} | {'precision': 0.704225352112676, 'recall': 0.42016806722689076, 'f1': 0.5263157894736842, 'number': 119} | {'precision': 0.8806509945750453, 'recall': 0.904363974001857, 'f1': 0.8923499770957399, 'number': 1077} | 0.8560 | 0.8684 | 0.8621 | 0.7963 | | 0.0069 | 42.1053 | 800 | 1.6133 | {'precision': 0.8629500580720093, 'recall': 0.9094247246022031, 'f1': 0.8855780691299165, 'number': 
817} | {'precision': 0.5083333333333333, 'recall': 0.5126050420168067, 'f1': 0.5104602510460251, 'number': 119} | {'precision': 0.8908256880733945, 'recall': 0.9015784586815228, 'f1': 0.8961698200276881, 'number': 1077} | 0.8571 | 0.8818 | 0.8692 | 0.7976 | | 0.0032 | 52.6316 | 1000 | 1.8274 | {'precision': 0.8084415584415584, 'recall': 0.9143206854345165, 'f1': 0.8581275129236072, 'number': 817} | {'precision': 0.6551724137931034, 'recall': 0.4789915966386555, 'f1': 0.5533980582524272, 'number': 119} | {'precision': 0.8783542039355993, 'recall': 0.9117920148560817, 'f1': 0.894760820045558, 'number': 1077} | 0.8389 | 0.8872 | 0.8624 | 0.7852 | | 0.0037 | 63.1579 | 1200 | 1.5619 | {'precision': 0.8635294117647059, 'recall': 0.8984088127294981, 'f1': 0.8806238752249551, 'number': 817} | {'precision': 0.5357142857142857, 'recall': 0.6302521008403361, 'f1': 0.5791505791505792, 'number': 119} | {'precision': 0.8942486085343229, 'recall': 0.8950789229340761, 'f1': 0.8946635730858469, 'number': 1077} | 0.8574 | 0.8808 | 0.8689 | 0.8029 | | 0.0016 | 73.6842 | 1400 | 1.5773 | {'precision': 0.8776978417266187, 'recall': 0.8959608323133414, 'f1': 0.8867353119321623, 'number': 817} | {'precision': 0.5887850467289719, 'recall': 0.5294117647058824, 'f1': 0.5575221238938053, 'number': 119} | {'precision': 0.8938700823421775, 'recall': 0.9071494893221913, 'f1': 0.9004608294930875, 'number': 1077} | 0.8712 | 0.8803 | 0.8757 | 0.8164 | | 0.0011 | 84.2105 | 1600 | 1.6210 | {'precision': 0.8524404086265607, 'recall': 0.9192166462668299, 'f1': 0.8845700824499411, 'number': 817} | {'precision': 0.6588235294117647, 'recall': 0.47058823529411764, 'f1': 0.5490196078431372, 'number': 119} | {'precision': 0.8979033728350045, 'recall': 0.914577530176416, 'f1': 0.9061637534498619, 'number': 1077} | 0.8686 | 0.8902 | 0.8793 | 0.8105 | | 0.0005 | 94.7368 | 1800 | 1.6534 | {'precision': 0.875886524822695, 'recall': 0.9069767441860465, 'f1': 0.8911605532170775, 'number': 817} | {'precision': 0.5739130434782609, 'recall': 0.5546218487394958, 'f1': 0.5641025641025642, 'number': 119} | {'precision': 0.8945454545454545, 'recall': 0.9136490250696379, 'f1': 0.9039963252181902, 'number': 1077} | 0.8690 | 0.8897 | 0.8792 | 0.8078 | | 0.0005 | 105.2632 | 2000 | 1.6261 | {'precision': 0.8844765342960289, 'recall': 0.8996328029375765, 'f1': 0.8919902912621359, 'number': 817} | {'precision': 0.5803571428571429, 'recall': 0.5462184873949579, 'f1': 0.5627705627705628, 'number': 119} | {'precision': 0.879295154185022, 'recall': 0.9266480965645311, 'f1': 0.9023508137432187, 'number': 1077} | 0.8653 | 0.8932 | 0.8790 | 0.8164 | | 0.0006 | 115.7895 | 2200 | 1.6545 | {'precision': 0.8589449541284404, 'recall': 0.9167686658506732, 'f1': 0.8869153345174661, 'number': 817} | {'precision': 0.5849056603773585, 'recall': 0.5210084033613446, 'f1': 0.5511111111111111, 'number': 119} | {'precision': 0.8871841155234657, 'recall': 0.9127205199628597, 'f1': 0.8997711670480548, 'number': 1077} | 0.8600 | 0.8912 | 0.8753 | 0.8049 | | 0.0003 | 126.3158 | 2400 | 1.6374 | {'precision': 0.86558516801854, 'recall': 0.9143206854345165, 'f1': 0.8892857142857143, 'number': 817} | {'precision': 0.6020408163265306, 'recall': 0.4957983193277311, 'f1': 0.543778801843318, 'number': 119} | {'precision': 0.8907788719785139, 'recall': 0.9238625812441968, 'f1': 0.9070191431175934, 'number': 1077} | 0.8667 | 0.8947 | 0.8805 | 0.8114 | ### Framework versions - Transformers 4.56.1 - Pytorch 2.8.0+cu126 - Datasets 4.0.0 - Tokenizers 0.22.0
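For reference, the hyperparameters listed above map roughly onto a `transformers` `TrainingArguments` object like the sketch below; the `output_dir` name and the `fp16=True` flag are assumptions on my part (the card only states "Native AMP"), while the other values mirror the card.

```python
from transformers import TrainingArguments

# Sketch of the training configuration described above (not the original training script).
training_args = TrainingArguments(
    output_dir="lilt-en-funsd",       # assumed output directory name
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch_fused",        # OptimizerNames.ADAMW_TORCH_FUSED
    lr_scheduler_type="linear",
    max_steps=2500,                   # "training_steps: 2500"
    fp16=True,                        # "mixed_precision_training: Native AMP"
)
```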
mradermacher/youtube-xlm-roberta-base-sentiment-multilingual-GGUF
mradermacher
2025-09-11T20:08:11Z
0
0
null
[ "gguf", "endpoints_compatible", "region:us", "feature-extraction" ]
null
2025-09-11T20:05:55Z
<!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> <!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: --> static quants of https://huggingface.co/AmaanP314/youtube-xlm-roberta-base-sentiment-multilingual
ginic/train_duration_100_samples_4_wav2vec2-large-xlsr-53-buckeye-ipa
ginic
2025-09-11T20:07:46Z
0
0
null
[ "safetensors", "wav2vec2", "automatic-speech-recognition", "en", "license:mit", "region:us" ]
automatic-speech-recognition
2025-09-11T20:06:38Z
--- license: mit language: - en pipeline_tag: automatic-speech-recognition --- # About This model was created to support experiments for evaluating phonetic transcription with the Buckeye corpus as part of https://github.com/ginic/multipa. This is a version of facebook/wav2vec2-large-xlsr-53 fine tuned on a specific subset of the Buckeye corpus. For details about specific model parameters, please view the config.json here or training scripts in the scripts/buckeye_experiments folder of the GitHub repository. # Experiment Details These experiments are targeted at understanding how increasing the amount of data used to train the model affects performance. The first number in the model name indicates the total number of randomly selected data samples. Data samples are selected to maintain 50/50 gender split from speakers, with the exception of the models trained on 20000 samples, as there are 18782 audio samples in our train split of Buckeye, but they are not split equally between male and female speakers. Experiments using 20000 samples actually use all 8252 samples from female speakers in the train set, but randomly select 10000 samples from male speakers for a total of 18252 samples. For each number of train data samples, 5 models are trained to vary train data selection (`train_seed`) without varying other hyperparameters. Before these models were trained, simple grid search hyperparameter tuning was done to select reasonable hyperparameters for fine-tuning with the target number of samples. The hyperparam tuning models have not been uploaded to HuggingFace. Goals: - See how performance on the test set changes as more data is used in fine-tuning Params to vary: - training seed (--train_seed) - number of data samples used in training the model (--train_samples): 100, 200, 400, 800, 1600, 3200, 6400, 12800, 20000
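A minimal inference sketch (not taken from the repo itself), assuming the checkpoint ships the usual Wav2Vec2 processor files and expects 16 kHz mono audio; the input file name is hypothetical.

```python
import torch
import librosa  # assumed here only for loading audio
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

model_id = "ginic/train_duration_100_samples_4_wav2vec2-large-xlsr-53-buckeye-ipa"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

audio, sr = librosa.load("example.wav", sr=16_000)  # hypothetical input file
inputs = processor(audio, sampling_rate=sr, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids))  # predicted IPA transcription
```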
jinx2321/byt5-all-araea-1e4-je
jinx2321
2025-09-11T20:07:43Z
0
0
transformers
[ "transformers", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:google/byt5-small", "base_model:finetune:google/byt5-small", "license:apache-2.0", "text-generation-inference", "endpoints_compatible", "region:us" ]
null
2025-09-11T16:52:51Z
--- library_name: transformers license: apache-2.0 base_model: google/byt5-small tags: - generated_from_trainer model-index: - name: byt5-all-araea-1e4-je results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # byt5-all-araea-1e4-je This model is a fine-tuned version of [google/byt5-small](https://huggingface.co/google/byt5-small) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 64 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 128 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 3 ### Training results ### Framework versions - Transformers 4.52.0.dev0 - Pytorch 2.6.0+cu124 - Datasets 3.4.1 - Tokenizers 0.21.1
kenob1n/2d
kenob1n
2025-09-11T20:06:51Z
0
0
null
[ "license:other", "region:us" ]
null
2024-10-23T08:10:36Z
--- license: other license_name: 2d license_link: LICENSE ---
ginic/train_duration_1600_samples_3_wav2vec2-large-xlsr-53-buckeye-ipa
ginic
2025-09-11T20:05:37Z
0
0
null
[ "safetensors", "wav2vec2", "automatic-speech-recognition", "en", "license:mit", "region:us" ]
automatic-speech-recognition
2025-09-11T20:04:25Z
--- license: mit language: - en pipeline_tag: automatic-speech-recognition --- # About This model was created to support experiments for evaluating phonetic transcription with the Buckeye corpus as part of https://github.com/ginic/multipa. This is a version of facebook/wav2vec2-large-xlsr-53 fine tuned on a specific subset of the Buckeye corpus. For details about specific model parameters, please view the config.json here or training scripts in the scripts/buckeye_experiments folder of the GitHub repository. # Experiment Details These experiments are targeted at understanding how increasing the amount of data used to train the model affects performance. The first number in the model name indicates the total number of randomly selected data samples. Data samples are selected to maintain 50/50 gender split from speakers, with the exception of the models trained on 20000 samples, as there are 18782 audio samples in our train split of Buckeye, but they are not split equally between male and female speakers. Experiments using 20000 samples actually use all 8252 samples from female speakers in the train set, but randomly select 10000 samples from male speakers for a total of 18252 samples. For each number of train data samples, 5 models are trained to vary train data selection (`train_seed`) without varying other hyperparameters. Before these models were trained, simple grid search hyperparameter tuning was done to select reasonable hyperparameters for fine-tuning with the target number of samples. The hyperparam tuning models have not been uploaded to HuggingFace. Goals: - See how performance on the test set changes as more data is used in fine-tuning Params to vary: - training seed (--train_seed) - number of data samples used in training the model (--train_samples): 100, 200, 400, 800, 1600, 3200, 6400, 12800, 20000
ginic/train_duration_6400_samples_3_wav2vec2-large-xlsr-53-buckeye-ipa
ginic
2025-09-11T20:04:24Z
0
0
null
[ "safetensors", "wav2vec2", "automatic-speech-recognition", "en", "license:mit", "region:us" ]
automatic-speech-recognition
2025-09-11T20:03:18Z
--- license: mit language: - en pipeline_tag: automatic-speech-recognition --- # About This model was created to support experiments for evaluating phonetic transcription with the Buckeye corpus as part of https://github.com/ginic/multipa. This is a version of facebook/wav2vec2-large-xlsr-53 fine tuned on a specific subset of the Buckeye corpus. For details about specific model parameters, please view the config.json here or training scripts in the scripts/buckeye_experiments folder of the GitHub repository. # Experiment Details These experiments are targeted at understanding how increasing the amount of data used to train the model affects performance. The first number in the model name indicates the total number of randomly selected data samples. Data samples are selected to maintain 50/50 gender split from speakers, with the exception of the models trained on 20000 samples, as there are 18782 audio samples in our train split of Buckeye, but they are not split equally between male and female speakers. Experiments using 20000 samples actually use all 8252 samples from female speakers in the train set, but randomly select 10000 samples from male speakers for a total of 18252 samples. For each number of train data samples, 5 models are trained to vary train data selection (`train_seed`) without varying other hyperparameters. Before these models were trained, simple grid search hyperparameter tuning was done to select reasonable hyperparameters for fine-tuning with the target number of samples. The hyperparam tuning models have not been uploaded to HuggingFace. Goals: - See how performance on the test set changes as more data is used in fine-tuning Params to vary: - training seed (--train_seed) - number of data samples used in training the model (--train_samples): 100, 200, 400, 800, 1600, 3200, 6400, 12800, 20000
xiaofanghf/Real-LINZ-Detectors
xiaofanghf
2025-09-11T20:04:08Z
0
0
mmdetection
[ "mmdetection", "object-detection", "en", "arxiv:2507.20976", "license:cc-by-nc-4.0", "region:us" ]
object-detection
2025-08-17T15:03:26Z
--- license: cc-by-nc-4.0 language: - en pipeline_tag: object-detection library_name: mmdetection --- ## Introduction We introduce a real-world aerial-view dataset, LINZ, captured in Selwyn (New Zealand). The dataset has a ground sampling distance (GSD) of 12.5 cm per px and has been sampled to a 112 px × 112 px image size. For data annotation, we label only the small-vehicle centers. To leverage the abundance of bounding-box-based open-source object detection frameworks, we define a fixed-size ground-truth bounding box of 42.36 px × 42.36 px centered at each vehicle. Annotations are provided in COCO format [x, y, w, h], where "small" in the annotation JSON files denotes the small-vehicle class and (x, y) denotes the top-left corner of the bounding box. We use AP50 as the evaluation metric. ## Model Usage This folder contains four detectors trained on Real LINZ data and tested on Real LINZ data, along with the configuration files we use for training and testing. ## References ➡️ **Paper:** [Adapting Vehicle Detectors for Aerial Imagery to Unseen Domains with Weak Supervision](https://arxiv.org/abs/2507.20976) ➡️ **Project Page:** [Webpage](https://humansensinglab.github.io/AGenDA/) ➡️ **Data:** [AGenDA](https://github.com/humansensinglab/AGenDA/tree/main/Data)
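As a small illustration of the annotation convention described above (a hypothetical helper, not part of the released tooling), a labeled vehicle center maps to the fixed-size COCO box like this:

```python
def center_to_coco_bbox(cx: float, cy: float, box_size: float = 42.36):
    """Return [x, y, w, h] for a box_size square centered at (cx, cy),
    with (x, y) the top-left corner, matching the 42.36 px ground-truth boxes."""
    half = box_size / 2.0
    return [cx - half, cy - half, box_size, box_size]

print(center_to_coco_bbox(56.0, 56.0))  # [34.82, 34.82, 42.36, 42.36]
```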
K3theking/K-bot
K3theking
2025-09-11T20:03:52Z
0
0
null
[ "text-generation", "aa", "as", "az", "ab", "ay", "af", "en", "dataset:fka/awesome-chatgpt-prompts", "dataset:openai/healthbench", "dataset:jupyter-agent/jupyter-agent-dataset", "dataset:syncora/developer-productivity-simulated-behavioral-data", "dataset:K3theking/K-BotDataset-forK-bot", "base_model:openai/gpt-oss-20b", "base_model:finetune:openai/gpt-oss-20b", "license:gpl", "region:us" ]
text-generation
2025-09-11T19:56:28Z
--- license: gpl datasets: - fka/awesome-chatgpt-prompts - openai/healthbench - jupyter-agent/jupyter-agent-dataset - syncora/developer-productivity-simulated-behavioral-data - K3theking/K-BotDataset-forK-bot language: - aa - as - az - ab - ay - af - en metrics: - character - bleu base_model: - xai-org/grok-2 - openai/gpt-oss-20b pipeline_tag: text-generation ---
ginic/train_duration_3200_samples_4_wav2vec2-large-xlsr-53-buckeye-ipa
ginic
2025-09-11T20:02:05Z
0
0
null
[ "safetensors", "wav2vec2", "automatic-speech-recognition", "en", "license:mit", "region:us" ]
automatic-speech-recognition
2025-09-11T20:00:59Z
--- license: mit language: - en pipeline_tag: automatic-speech-recognition --- # About This model was created to support experiments for evaluating phonetic transcription with the Buckeye corpus as part of https://github.com/ginic/multipa. This is a version of facebook/wav2vec2-large-xlsr-53 fine tuned on a specific subset of the Buckeye corpus. For details about specific model parameters, please view the config.json here or training scripts in the scripts/buckeye_experiments folder of the GitHub repository. # Experiment Details These experiments are targeted at understanding how increasing the amount of data used to train the model affects performance. The first number in the model name indicates the total number of randomly selected data samples. Data samples are selected to maintain 50/50 gender split from speakers, with the exception of the models trained on 20000 samples, as there are 18782 audio samples in our train split of Buckeye, but they are not split equally between male and female speakers. Experiments using 20000 samples actually use all 8252 samples from female speakers in the train set, but randomly select 10000 samples from male speakers for a total of 18252 samples. For each number of train data samples, 5 models are trained to vary train data selection (`train_seed`) without varying other hyperparameters. Before these models were trained, simple grid search hyperparameter tuning was done to select reasonable hyperparameters for fine-tuning with the target number of samples. The hyperparam tuning models have not been uploaded to HuggingFace. Goals: - See how performance on the test set changes as more data is used in fine-tuning Params to vary: - training seed (--train_seed) - number of data samples used in training the model (--train_samples): 100, 200, 400, 800, 1600, 3200, 6400, 12800, 20000
aralper18/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-curious_squinting_frog
aralper18
2025-09-11T20:01:59Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "rl-swarm", "genrl-swarm", "grpo", "gensyn", "I am curious_squinting_frog", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-11T09:37:07Z
--- library_name: transformers tags: - rl-swarm - genrl-swarm - grpo - gensyn - I am curious_squinting_frog --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
ginic/train_duration_20000_samples_5_wav2vec2-large-xlsr-53-buckeye-ipa
ginic
2025-09-11T20:00:57Z
0
0
null
[ "safetensors", "wav2vec2", "automatic-speech-recognition", "en", "license:mit", "region:us" ]
automatic-speech-recognition
2025-09-11T19:59:49Z
--- license: mit language: - en pipeline_tag: automatic-speech-recognition --- # About This model was created to support experiments for evaluating phonetic transcription with the Buckeye corpus as part of https://github.com/ginic/multipa. This is a version of facebook/wav2vec2-large-xlsr-53 fine tuned on a specific subset of the Buckeye corpus. For details about specific model parameters, please view the config.json here or training scripts in the scripts/buckeye_experiments folder of the GitHub repository. # Experiment Details These experiments are targeted at understanding how increasing the amount of data used to train the model affects performance. The first number in the model name indicates the total number of randomly selected data samples. Data samples are selected to maintain 50/50 gender split from speakers, with the exception of the models trained on 20000 samples, as there are 18782 audio samples in our train split of Buckeye, but they are not split equally between male and female speakers. Experiments using 20000 samples actually use all 8252 samples from female speakers in the train set, but randomly select 10000 samples from male speakers for a total of 18252 samples. For each number of train data samples, 5 models are trained to vary train data selection (`train_seed`) without varying other hyperparameters. Before these models were trained, simple grid search hyperparameter tuning was done to select reasonable hyperparameters for fine-tuning with the target number of samples. The hyperparam tuning models have not been uploaded to HuggingFace. Goals: - See how performance on the test set changes as more data is used in fine-tuning Params to vary: - training seed (--train_seed) - number of data samples used in training the model (--train_samples): 100, 200, 400, 800, 1600, 3200, 6400, 12800, 20000
timm/MobileCLIP2-S0-OpenCLIP
timm
2025-09-11T20:00:52Z
4
0
open_clip
[ "open_clip", "safetensors", "clip", "mobileclip2", "zero-shot-image-classification", "arxiv:2508.20691", "arxiv:2103.00020", "arxiv:2303.15343", "arxiv:2309.17425", "license:apple-amlr", "region:us" ]
zero-shot-image-classification
2025-09-10T21:48:16Z
--- tags: - clip - mobileclip2 library_name: open_clip pipeline_tag: zero-shot-image-classification license: apple-amlr --- # Model card for MobileCLIP2-S0-OpenCLIP These weights and model card are adapted from the original Apple model at https://huggingface.co/apple/MobileCLIP2-S0. This version uses canonical OpenCLIP configs and weight naming. MobileCLIP2 was introduced in [MobileCLIP2: Improving Multi-Modal Reinforced Training](http://arxiv.org/abs/2508.20691) (TMLR August 2025 <mark>Featured</mark>), by Fartash Faghri, Pavan Kumar Anasosalu Vasu, Cem Koc, Vaishaal Shankar, Alexander T Toshev, Oncel Tuzel, Hadi Pouransari. This repository contains the **MobileCLIP2-S0** checkpoint. ### Highlights * `MobileCLIP2-S4` matches the accuracy of SigLIP-SO400M/14 with 2x fewer parameters and surpasses DFN ViT-L/14 at 2.5x lower latency measured on iPhone12 Pro Max. * `MobileCLIP-S3/S4` are our new architectures trained on MobileCLIP’s training dataset, DataCompDR-1B (dashed lines). * Our smallest variant `MobileCLIP-S0` obtains similar zero-shot performance as [OpenAI](https://arxiv.org/abs/2103.00020)'s ViT-B/16 model while being 4.8x faster and 2.8x smaller. * `MobileCLIP-S2` obtains better avg zero-shot performance than [SigLIP](https://arxiv.org/abs/2303.15343)'s ViT-B/16 model while being 2.3x faster and 2.1x smaller, and trained with 3x less seen samples. * `MobileCLIP-B (LT)` attains zero-shot ImageNet performance of **77.2%** which is significantly better than recent works like [DFN](https://arxiv.org/abs/2309.17425) and [SigLIP](https://arxiv.org/abs/2303.15343) with similar architectures or even [OpenAI's ViT-L/14@336](https://arxiv.org/abs/2103.00020). ## Checkpoints and Results (Original Apple links) | Model | # Seen <BR>Samples (B) | # Params (M) <BR> (img + txt) | Latency (ms) <BR> (img + txt) | IN-1k Zero-Shot <BR> Top-1 Acc. (%) | Avg. Perf. 
(%) <BR> on 38 datasets | |:----------------------------------------------------------|:----------------------:|:-----------------------------:|:-----------------------------:|:-----------------------------------:|:----------------------------------:| | [MobileCLIP2-S0](https://hf.co/apple/MobileCLIP2-S0) | 13 | 11.4 + 42.4 | 1.5 + 1.6 | 71.5 | 59.7 | | [MobileCLIP2-S2](https://hf.co/apple/MobileCLIP2-S2) | 13 | 35.7 + 63.4 | 3.6 + 3.3 | 77.2 | 64.1 | | [MobileCLIP2-B](https://hf.co/apple/MobileCLIP2-B) | 13 | 86.3 + 63.4 | 10.4 + 3.3 | 79.4 | 65.8 | | [MobileCLIP2-S3](https://hf.co/apple/MobileCLIP2-S3) | 13 | 125.1 + 123.6 | 8.0 + 6.6 | 80.7 | 66.8 | | [MobileCLIP2-L/14](https://hf.co/apple/MobileCLIP2-L-14) | 13 | 304.3 + 123.6 | 57.9 + 6.6 | 81.9 | 67.8 | | [MobileCLIP2-S4](https://hf.co/apple/MobileCLIP2-S4) | 13 | 321.6 + 123.6 | 19.6 + 6.6 | 81.9 | 67.5 | | [MobileCLIP-S0](https://hf.co/apple/MobileCLIP-S0) | 13 | 11.4 + 42.4 | 1.5 + 1.6 | 67.8 | 58.1 | | [MobileCLIP-S1](https://hf.co/apple/MobileCLIP-S1) | 13 | 21.5 + 63.4 | 2.5 + 3.3 | 72.6 | 61.3 | | [MobileCLIP-S2](https://hf.co/apple/MobileCLIP-S2) | 13 | 35.7 + 63.4 | 3.6 + 3.3 | 74.4 | 63.7 | | [MobileCLIP-B](https://hf.co/apple/MobileCLIP-B) | 13 | 86.3 + 63.4 | 10.4 + 3.3 | 76.8 | 65.2 | | [MobileCLIP-B (LT)](https://hf.co/apple/MobileCLIP-B-LT) | 36 | 86.3 + 63.4 | 10.4 + 3.3 | 77.2 | 65.8 | | [MobileCLIP-S3](https://hf.co/apple/MobileCLIP-S3) | 13 | 125.1 + 123.6 | 8.0 + 6.6 | 78.3 | 66.3 | | [MobileCLIP-L/14](https://hf.co/apple/MobileCLIP-L-14) | 13 | 304.3 + 123.6 | 57.9 + 6.6 | 79.5 | 66.9 | | [MobileCLIP-S4](https://hf.co/apple/MobileCLIP-S4) | 13 | 321.6 + 123.6 | 19.6 + 6.6 | 79.4 | 68.1 | ## How to Use ```py import torch import open_clip from PIL import Image from urllib.request import urlopen from timm.utils import reparameterize_model model, _, preprocess = open_clip.create_model_and_transforms('MobileCLIP2-S0', pretrained='dfndr2b') model.eval() tokenizer = open_clip.get_tokenizer('MobileCLIP2-S0') # For inference/model exporting purposes, optionally reparameterize for better performance model = reparameterize_model(model) image = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) image = preprocess(image).unsqueeze(0) text = tokenizer(["a diagram", "a dog", "a cat", "a doughnut"]) with torch.no_grad(), torch.amp.autocast(image.device.type): image_features = model.encode_image(image) text_features = model.encode_text(text) image_features /= image_features.norm(dim=-1, keepdim=True) text_features /= text_features.norm(dim=-1, keepdim=True) text_probs = (100.0 * image_features @ text_features.T).softmax(dim=-1) print("Label probs:", text_probs) ```
MonsterMMORPG/Wan_GGUF
MonsterMMORPG
2025-09-11T20:00:21Z
20,997
7
null
[ "safetensors", "gguf", "region:us" ]
null
2025-04-29T14:03:11Z
[![image](https://img.shields.io/discord/772774097734074388?label=Discord&logo=discord)](https://discord.com/servers/software-engineering-courses-secourses-772774097734074388) [![Hits](https://hits.sh/huggingface.co/MonsterMMORPG/Wan_GGUF.svg?style=plastic&label=Hits%20Since%2025.08.27&labelColor=007ec6&logo=SECourses)](https://hits.sh/github.com/FurkanGozukara/Stable-Diffusion/) [![Patreon](https://img.shields.io/badge/Patreon-Support%20Me-F2EB0E?style=for-the-badge&logo=patreon)](https://www.patreon.com/c/SECourses) [![BuyMeACoffee](https://img.shields.io/badge/Buy%20Me%20a%20Coffee-ffdd00?style=for-the-badge&logo=buy-me-a-coffee&logoColor=black)](https://www.buymeacoffee.com/DrFurkan) [![Furkan Gözükara Medium](https://img.shields.io/badge/Medium-Follow%20Me-800080?style=for-the-badge&logo=medium&logoColor=white)](https://medium.com/@furkangozukara) [![Codio](https://img.shields.io/static/v1?style=for-the-badge&message=Articles&color=4574E0&logo=Codio&logoColor=FFFFFF&label=CivitAI)](https://civitai.com/user/SECourses/articles) [![Furkan Gözükara Medium](https://img.shields.io/badge/DeviantArt-Follow%20Me-990000?style=for-the-badge&logo=deviantart&logoColor=white)](https://www.deviantart.com/monstermmorpg) [![YouTube Channel](https://img.shields.io/badge/YouTube-SECourses-C50C0C?style=for-the-badge&logo=youtube)](https://www.youtube.com/SECourses) [![Furkan Gözükara LinkedIn](https://img.shields.io/badge/LinkedIn-Follow%20Me-0077B5?style=for-the-badge&logo=linkedin&logoColor=white)](https://www.linkedin.com/in/furkangozukara/) [![Udemy](https://img.shields.io/static/v1?style=for-the-badge&message=Stable%20Diffusion%20Course&color=A435F0&logo=Udemy&logoColor=FFFFFF&label=Udemy)](https://www.udemy.com/course/stable-diffusion-dreambooth-lora-zero-to-hero/?referralCode=E327407C9BDF0CEA8156) [![Twitter Follow Furkan Gözükara](https://img.shields.io/badge/Twitter-Follow%20Me-1DA1F2?style=for-the-badge&logo=twitter&logoColor=white)](https://twitter.com/GozukaraFurkan) # Expert-Level Tutorials on Generative AI ## Hello everyone. I am Dr. Furkan Gözükara. 
I am a PhD Computer Engineer working as an asistant professor + full time Generative AI researcher + developer + tutorials maker ### SECourses is a dedicated YouTube channel for the following topics : Tech, AI, News, Science, Robotics, Singularity, ComfyUI, SwarmUI, ML, Artificial Intelligence, Humanoid Robots, Wan 2.2, FLUX, Krea, Qwen Image, VLMs, Stable Diffusion, SDXL, SeedVR2, TOPAZ, SUPIR, ChatGPT, Gemini, LLMs, Claude, Coding, Agents, Agentic, Animation, Deep Fakes, Fooocus, ControlNet, RunPod, Massed Compute, Windows, Hardware, Inpainting, Cloud, Kaggle, Colab, Automatic1111, SD Web UI, TensorRT, DreamBooth, LoRA, Training, Fine Tuning, Kohya, OneTrainer, Upscale, 3D, Musubi Tuner, Tutorials, Qwen Image Edit, Image Upscaling, Video Upscaling, TTS, Voice Training, Text-to-Speech, Text-to-Music, Image2Image, Text2Video, Video2Video, Style Transfer, Style Training, FLUX Kontext, Face Swap, Lip Sync, Text-to-3D, Avatar Generation, 3D Generation, AGI, Prompt Engineering, Engineering, Gradio, CUDA, GGUF, Quantization, GPT-5, Whisper and more ## Our Platform Links #### 1️⃣ SECourses YouTube (48,000+ subscribers) a must follow one ⤵️ #### 1️⃣ [https://www.youtube.com/@SECourses](https://www.youtube.com/@SECourses) --- #### 2️⃣ SECourses Patreon (25,000+ subscribers) a must follow one ⤵️ #### 2️⃣ [https://www.patreon.com/c/SECourses](https://www.patreon.com/c/SECourses) --- #### 3️⃣ SECourses Discord (10,000+ members) a must join one ⤵️ #### 3️⃣ [https://discord.com/servers/software-engineering-courses-secourses-772774097734074388](https://discord.com/servers/software-engineering-courses-secourses-772774097734074388) --- #### LinkedIn : [**https://www.linkedin.com/in/furkangozukara**](https://www.linkedin.com/in/furkangozukara/) #### Twitter : [**https://twitter.com/GozukaraFurkan**](https://twitter.com/GozukaraFurkan) #### Linktr : [**https://linktr.ee/FurkanGozukara**](https://linktr.ee/FurkanGozukara) #### Google Scholar : https://scholar.google.com/citations?user=_2_KAUsAAAAJ&hl=en #### Mastodon : https://mastodon.social/@furkangozukara --- #### Our 2,500+ Stars GitHub Stable Diffusion and other tutorials repo ⤵️ #### [https://github.com/FurkanGozukara/Stable-Diffusion](https://github.com/FurkanGozukara/Stable-Diffusion) --- ### Regarding This Repository I am keeping this list up-to-date. I got upcoming new awesome video ideas. Trying to find time to do that. **I am open to any criticism you have. I am constantly trying to improve the quality of my tutorial guide videos. Please leave comments with both your suggestions and what you would like to see in future videos.** **All videos have manually fixed subtitles and properly prepared video chapters. You can watch with these perfect subtitles or look for the chapters you are interested in.** Since my profession is teaching, I usually do not skip any of the important parts. Therefore, you may find my videos a little bit longer. Playlist link on YouTube: [**Stable Diffusion Tutorials, Automatic1111 Web UI & Google Colab Guides, DreamBooth, Textual Inversion / Embedding, LoRA, AI Upscaling, Video to Anime**](https://www.youtube.com/watch?v=mnCY8uM7E50&list=PL_pbwdIyffsmclLl0O144nQRnezKlNdx3) ## Tutorial Videos | | | | :---: | :---: | | **1. [How To Install Python, Setup Virtual Environment VENV, Set Default Python System Path & Install Git](https://youtu.be/B5U7LJOvH6g)**<br>[![image](https://cdn-uploads.huggingface.co/production/uploads/6345bd89fe134dfd7a0dba40/S4mKtVuWlCibihsTHL9TS.png)](https://youtu.be/B5U7LJOvH6g) | **2. 
[Easiest Way to Install & Run Stable Diffusion Web UI on PC by Using Open Source Automatic Installer](https://www.youtube.com/watch?v=AZg6vzWHOTA)**<br>[![image](https://user-images.githubusercontent.com/19240467/218344261-aa236e18-152f-4287-b4fd-fa09c8f57a3f.png)](https://www.youtube.com/watch?v=AZg6vzWHOTA) | | **3. [How to use Stable Diffusion V2.1 and Different Models in the Web UI - SD 1.5 vs 2.1 vs Anything V3](https://www.youtube.com/watch?v=aAyvsX-EpG4)**<br>[![image](https://user-images.githubusercontent.com/19240467/218344276-af8f2fa2-4fdb-4454-9c8f-bd3ef4d2c92a.png)](https://www.youtube.com/watch?v=aAyvsX-EpG4) | **4. [Zero To Hero Stable Diffusion DreamBooth Tutorial By Using Automatic1111 Web UI - Ultra Detailed](https://www.youtube.com/watch?v=Bdl-jWR3Ukc)**<br>[![image](https://user-images.githubusercontent.com/19240467/218344301-04f91cf4-fa35-4975-8c3d-9951c765839a.png)](https://www.youtube.com/watch?v=Bdl-jWR3Ukc) | --- | | | | :---: | :---: | | **5. [DreamBooth Got Buffed - 22 January Update - Much Better Success Train Stable Diffusion Models Web UI](https://www.youtube.com/watch?v=KwxNcGhHuLY)**<br>[![image](https://user-images.githubusercontent.com/19240467/218344369-97b68dd7-732d-4ca3-9acc-a87984ebe0f0.png)](https://www.youtube.com/watch?v=KwxNcGhHuLY) | **6. [How to Inject Your Trained Subject e.g. Your Face Into Any Custom Stable Diffusion Model By Web UI](https://www.youtube.com/watch?v=s25hcW4zq4M)**<br>[![image](https://user-images.githubusercontent.com/19240467/218344509-01d70965-aeea-4096-bc29-7a005b4d47a6.png)](https://www.youtube.com/watch?v=s25hcW4zq4M) | | **7. [How To Do Stable Diffusion LORA Training By Using Web UI On Different Models - Tested SD 1.5, SD 2.1](https://www.youtube.com/watch?v=mfaqqL5yOO4)**<br>[![image](https://user-images.githubusercontent.com/19240467/218344459-bd4554b0-b57b-4079-aaea-ed93d8be95ed.png)](https://www.youtube.com/watch?v=mfaqqL5yOO4) | **8. [8 GB LoRA Training - Fix CUDA & xformers For DreamBooth and Textual Inversion in Automatic1111 SD UI](https://www.youtube.com/watch?v=O01BrQwOd-Q)**<br>[![image](https://user-images.githubusercontent.com/19240467/218344491-52ac51d8-6556-4abc-b2fb-d640a46c48a2.png)](https://www.youtube.com/watch?v=O01BrQwOd-Q) | --- | | | | :---: | :---: | | **9. [How To Do Stable Diffusion Textual Inversion (TI) / Text Embeddings By Automatic1111 Web UI Tutorial](https://www.youtube.com/watch?v=dNOpWt-epdQ)**<br>[![image](https://user-images.githubusercontent.com/19240467/218344538-d5f0329d-b0e9-44ed-aaf0-5e4bb134afb7.png)](https://www.youtube.com/watch?v=dNOpWt-epdQ) | **10. [How To Generate Stunning Epic Text By Stable Diffusion AI - No Photoshop - For Free - Depth-To-Image](https://www.youtube.com/watch?v=TBq1bhY8BOc)**<br>[![image](https://user-images.githubusercontent.com/19240467/218344579-fda1e9b8-a810-48af-9dcb-f47e87afee9e.png)](https://www.youtube.com/watch?v=TBq1bhY8BOc) | | **11. [How to Run and Convert Stable Diffusion Diffusers (.bin Weights) & Dreambooth Models to CKPT File](https://www.youtube.com/watch?v=-6CA18MS0pY)**<br>[![image](https://user-images.githubusercontent.com/19240467/218344677-3f812cf3-db37-4ccb-8f81-99b8a1d5ef00.png)](https://www.youtube.com/watch?v=-6CA18MS0pY) | **12. 
[Forget Photoshop - How To Transform Images With Text Prompts using InstructPix2Pix Model in NMKD GUI](https://www.youtube.com/watch?v=EPRa8EZl9Os)**<br>[![image](https://user-images.githubusercontent.com/19240467/218344868-3232f875-b2c5-4caa-b59b-9d0fd683c06b.png)](https://www.youtube.com/watch?v=EPRa8EZl9Os) | --- | | | | :---: | :---: | | **13. [Transform Your Selfie into a Stunning AI Avatar with Stable Diffusion - Better than Lensa for Free](https://www.youtube.com/watch?v=mnCY8uM7E50)**<br>[![image](https://user-images.githubusercontent.com/19240467/218344900-286cded5-0171-4b9e-9354-7adf4bada612.png)](https://www.youtube.com/watch?v=mnCY8uM7E50) | **14. [Stable Diffusion Google Colab, Continue, Directory, Transfer, Clone, Custom Models, CKPT SafeTensors](https://www.youtube.com/watch?v=kIyqAdd_i10)**<br>[![image](https://user-images.githubusercontent.com/19240467/218344930-95956805-6a6e-46ee-8885-64043246d79b.png)](https://www.youtube.com/watch?v=kIyqAdd_i10) | | **15. [Become A Stable Diffusion Prompt Master By Using DAAM - Attention Heatmap For Each Used Token - Word](https://www.youtube.com/watch?v=XiKyEKJrTLQ)**<br>[![image](https://user-images.githubusercontent.com/19240467/218345146-54076e5d-230a-4774-8d6a-8358cbd15f78.png)](https://www.youtube.com/watch?v=XiKyEKJrTLQ) | **16. [Transform Your Sketches into Masterpieces with Stable Diffusion ControlNet AI - How To Use Tutorial](https://www.youtube.com/watch?v=YJebdQ30UZQ)**<br>[![image](https://user-images.githubusercontent.com/19240467/218345328-ada437bf-5eb4-478e-a951-84486a42995d.png)](https://www.youtube.com/watch?v=YJebdQ30UZQ) | --- | | | | :---: | :---: | | **17. [Sketches into Epic Art with 1 Click: A Guide to Stable Diffusion ControlNet in Automatic1111 Web UI](https://www.youtube.com/watch?v=vhqqmkTBMlU)**<br>[![image](https://user-images.githubusercontent.com/19240467/218806127-c84d1ff8-d5bb-41b0-bdef-6922568792b9.png)](https://www.youtube.com/watch?v=vhqqmkTBMlU) | **18. [Ultimate RunPod Tutorial For Stable Diffusion - Automatic1111 - Data Transfers, Extensions, CivitAI](https://www.youtube.com/watch?v=QN1vdGhjcRc)**<br>[![image](https://cdn-uploads.huggingface.co/production/uploads/6345bd89fe134dfd7a0dba40/NLbuo08ixbjt5t3iG5ioG.png)](https://www.youtube.com/watch?v=QN1vdGhjcRc) | | **19. [How To Install DreamBooth & Automatic1111 On RunPod & Latest Libraries - 2x Speed Up - cudDNN - CUDA](https://youtu.be/c_S2kFAefTQ)**<br>![image](https://cdn-uploads.huggingface.co/production/uploads/6345bd89fe134dfd7a0dba40/tHmGeIZPU9L2yrsNc8nvw.png)](https://youtu.be/c_S2kFAefTQ) | **20. [Fantastic New ControlNet OpenPose Editor Extension & Image Mixing - Stable Diffusion Web UI Tutorial](https://youtu.be/iFRdrRyAQdQ)**<br>[![image](https://user-images.githubusercontent.com/19240467/220776337-3abce5a3-bb17-4240-8400-4e633562ecc8.png)](https://youtu.be/iFRdrRyAQdQ) | --- | | | | :---: | :---: | | **21. [Automatic1111 Stable Diffusion DreamBooth Guide: Optimal Classification Images Count Comparison Test](https://youtu.be/Tb4IYIYm4os)**<br>[![image](https://user-images.githubusercontent.com/19240467/221384116-e42d6f37-a068-4a2a-9bda-11ac47f33faa.png)](https://youtu.be/Tb4IYIYm4os) | **22. [Epic Web UI DreamBooth Update - New Best Settings - 10 Stable Diffusion Training Compared on RunPods](https://youtu.be/sRdtVanSRl4)**<br>[![image](https://user-images.githubusercontent.com/19240467/222991604-ceed12bc-0bc9-4f16-82fe-e6779132e00c.png)](https://youtu.be/sRdtVanSRl4) | | **23. 
[New Style Transfer Extension, ControlNet of Automatic1111 Stable Diffusion T2I-Adapter Color Control](https://youtu.be/tXaQAkOgezQ)**<br>[![image](https://user-images.githubusercontent.com/19240467/223283277-eaaf6e53-df43-40ac-8096-c08f9a14cc8d.png)](https://youtu.be/tXaQAkOgezQ) | **24. [Generate Text Arts & Fantastic Logos By Using ControlNet Stable Diffusion Web UI For Free Tutorial](https://youtu.be/C_mJI4U23nQ)**<br>[![image](https://user-images.githubusercontent.com/19240467/224442765-ba241f71-b412-4f5b-bf39-506e9682e336.png)](https://youtu.be/C_mJI4U23nQ) | --- | | | | :---: | :---: | | **25. [How To Install New DREAMBOOTH & Torch 2 On Automatic1111 Web UI PC For Epic Performance Gains Guide](https://youtu.be/pom3nQejaTs)**<br>[![image](https://user-images.githubusercontent.com/19240467/226115542-72db7e7e-cee0-4e3a-82c4-12348e2b237e.png)](https://youtu.be/pom3nQejaTs) | **26. [Training Midjourney Level Style And Yourself Into The SD 1.5 Model via DreamBooth Stable Diffusion](https://youtu.be/m-UVVY_syP0)**<br>[![image](https://user-images.githubusercontent.com/19240467/226378438-fe70f09e-94a8-4d1d-9468-e44dca99aac7.png)](https://youtu.be/m-UVVY_syP0) | | **27. [Video To Anime - Generate An EPIC Animation From Your Phone Recording By Using Stable Diffusion AI](https://youtu.be/kmT-z2lqEPQ)**<br>[![image](https://user-images.githubusercontent.com/19240467/228096548-5f6add70-ca04-4bec-8c33-24d243227532.png)](https://youtu.be/kmT-z2lqEPQ) | **28. [Midjourney Level NEW Open Source Kandinsky 2.1 Beats Stable Diffusion - Installation And Usage Guide](https://youtu.be/dYt9xJ7dnpU)**<br>[![image](https://user-images.githubusercontent.com/19240467/230183162-8a6f7e84-dcd9-45b5-a94c-b93a10778f42.png)](https://youtu.be/dYt9xJ7dnpU) | --- | | | | :---: | :---: | | **29. [RTX 3090 vs RTX 3060 Ultimate Showdown for Stable Diffusion, ML, AI & Video Rendering Performance](https://youtu.be/lgP1LNnaUaQ)**<br>[![image](https://user-images.githubusercontent.com/19240467/231303430-63d801cf-3c5a-4c20-b445-bb682febfa4e.png)](https://youtu.be/lgP1LNnaUaQ) | **30. [Generate Studio Quality Realistic Photos By Kohya LoRA Stable Diffusion Training - Full Tutorial](https://youtu.be/TpuDOsuKIBo)**<br>[![image](https://cdn-uploads.huggingface.co/production/uploads/6345bd89fe134dfd7a0dba40/QA9woGfjeql37J9JepbrW.png)](https://youtu.be/TpuDOsuKIBo) | | **31. [DeepFloyd IF By Stability AI - Is It Stable Diffusion XL or Version 3? We Review and Show How To Use](https://youtu.be/R2fEocf-MU8)**<br>[![image](https://cdn-uploads.huggingface.co/production/uploads/6345bd89fe134dfd7a0dba40/BsmKyOr6a6X5L3AAZ6UQo.png)](https://youtu.be/R2fEocf-MU8) | **32. [How To Find Best Stable Diffusion Generated Images By Using DeepFace AI - DreamBooth / LoRA Training](https://youtu.be/343I11mhnXs)**<br>[![image](https://user-images.githubusercontent.com/19240467/236293388-6254ff84-0866-4bd4-a5d4-2db3c42be3f0.png)](https://youtu.be/343I11mhnXs) | --- | | | | :---: | :---: | | **33. [Mind-Blowing Deepfake Tutorial: Turn Anyone into Your Favorite Movie Star! PC & Google Colab - roop](https://youtu.be/OI1LEN-SgLM)**<br>[![image](https://cdn-uploads.huggingface.co/production/uploads/6345bd89fe134dfd7a0dba40/kav3GrWLuUheBCepl3P5t.png)](https://youtu.be/OI1LEN-SgLM) | **34. 
[Stable Diffusion Now Has The Photoshop Generative Fill Feature With ControlNet Extension - Tutorial](https://youtu.be/ot5GkaxHPzk)**<br>[![image](https://cdn-uploads.huggingface.co/production/uploads/6345bd89fe134dfd7a0dba40/L7NX9gwpGMOV528-q37cO.png)](https://youtu.be/ot5GkaxHPzk) | | **35. [Human Cropping Script & 4K+ Resolution Class / Reg Images For Stable Diffusion DreamBooth / LoRA](https://youtu.be/QTYX0tgA5ho)**<br>[![image](https://cdn-uploads.huggingface.co/production/uploads/6345bd89fe134dfd7a0dba40/pqSoPsCXyEUWqD1JmVVlI.png)](https://youtu.be/QTYX0tgA5ho) | **36. [Stable Diffusion 2 NEW Image Post Processing Scripts And Best Class / Regularization Images Datasets](https://youtu.be/olX1mySE8HA)**<br>[![image](https://cdn-uploads.huggingface.co/production/uploads/6345bd89fe134dfd7a0dba40/AruFJm03jwUa815xNebB8.png)](https://youtu.be/olX1mySE8HA) | --- | | | | :---: | :---: | | **37. [How To Use Roop DeepFake On RunPod Step By Step Tutorial With Custom Made Auto Installer Script](https://youtu.be/jD1ZSd9aFHg)**<br>[![image](https://cdn-uploads.huggingface.co/production/uploads/6345bd89fe134dfd7a0dba40/8lRIgwQ7L195DXJ21S3Yy.png)](https://youtu.be/jD1ZSd9aFHg) | **38. [Zero to Hero ControlNet Tutorial: Stable Diffusion Web UI Extension - Complete Feature Guide](https://youtu.be/3E5fhFQUVLo)**<br>[![image](https://cdn-uploads.huggingface.co/production/uploads/6345bd89fe134dfd7a0dba40/3I25wwnG6IfYLBo-Ie6ZJ.png)](https://youtu.be/3E5fhFQUVLo) | | **39. [The END of Photography - Use AI to Make Your Own Studio Photos, FREE Via DreamBooth Training](https://youtu.be/g0wXIcRhkJk)**<br>[![image](https://cdn-uploads.huggingface.co/production/uploads/6345bd89fe134dfd7a0dba40/vp0tLPFBzWzNj9G0ev5TV.png)](https://youtu.be/g0wXIcRhkJk) | **40. [How To Use Stable Diffusion XL (SDXL 0.9) On Google Colab For Free](https://youtu.be/s2MQqmv6yAg)**<br>[![image](https://cdn-uploads.huggingface.co/production/uploads/6345bd89fe134dfd7a0dba40/A2RflDXjgXjg6Z2Neq9gv.png)](https://youtu.be/s2MQqmv6yAg) | --- | | | | :---: | :---: | | **41. [Stable Diffusion XL (SDXL) Locally On Your PC - 8GB VRAM - Easy Tutorial With Automatic Installer](https://youtu.be/__7VNmnn5iU)**<br>[![image](https://cdn-uploads.huggingface.co/production/uploads/6345bd89fe134dfd7a0dba40/3UAi7i7oLdtvlibSlx4z9.png)](https://youtu.be/__7VNmnn5iU) | **42. [How To Use SDXL On RunPod Tutorial. Auto Installer & Refiner & Amazing Native Diffusers Based Gradio](https://youtu.be/gTdPRm-R-14)**<br>[![image](https://cdn-uploads.huggingface.co/production/uploads/6345bd89fe134dfd7a0dba40/mquAeJwsz4UBuop3VjUl5.png)](https://youtu.be/gTdPRm-R-14) | | **43. [ComfyUI Master Tutorial - Stable Diffusion XL (SDXL) - Install On PC, Google Colab (Free) & RunPod](https://youtu.be/FnMHbhvWUhE)**<br>[![image](https://cdn-uploads.huggingface.co/production/uploads/6345bd89fe134dfd7a0dba40/WEPEFBLGWUYBdMEWaq2zS.png)](https://youtu.be/FnMHbhvWUhE) | **44. [First Ever SDXL Training With Kohya LoRA - Stable Diffusion XL Training Will Replace Older Models](https://youtu.be/AY6DMBCIZ3A)**<br>[![image](https://cdn-uploads.huggingface.co/production/uploads/6345bd89fe134dfd7a0dba40/mG0CvKAzb8o29nr5ye0Br.png)](https://youtu.be/AY6DMBCIZ3A) | --- | | | | :---: | :---: | | **45. [How To Use SDXL in Automatic1111 Web UI - SD Web UI vs ComfyUI - Easy Local Install Tutorial / Guide](https://youtu.be/eY_v5IR4dUQ)**<br>[![image](https://cdn-uploads.huggingface.co/production/uploads/6345bd89fe134dfd7a0dba40/dtzefTNkEvQiM86qrLJRd.png)](https://youtu.be/eY_v5IR4dUQ) | **46. 
[How to use Stable Diffusion X-Large (SDXL) with Automatic1111 Web UI on RunPod - Easy Tutorial](https://youtu.be/mDW4zqh8R40)**<br>[![image](https://cdn-uploads.huggingface.co/production/uploads/6345bd89fe134dfd7a0dba40/3n3u3IGXY544moAtlg8-e.png)](https://youtu.be/mDW4zqh8R40) | | **47. [Become A Master Of SDXL Training With Kohya SS LoRAs - Combine Power Of Automatic1111 & SDXL LoRAs](https://youtu.be/sBFGitIvD2A)**<br>[![image](https://cdn-uploads.huggingface.co/production/uploads/6345bd89fe134dfd7a0dba40/rXbRquLxFaDGaGlkl-SUp.png)](https://youtu.be/sBFGitIvD2A) | **48. [How To Do SDXL LoRA Training On RunPod With Kohya SS GUI Trainer & Use LoRAs With Automatic1111 UI](https://youtu.be/-xEwaQ54DI4)**<br>[![image](https://cdn-uploads.huggingface.co/production/uploads/6345bd89fe134dfd7a0dba40/-BQQRjP9Maht_n4UHxgBJ.png)](https://youtu.be/-xEwaQ54DI4) | --- | | | | :---: | :---: | | **49. [How to Do SDXL Training For FREE with Kohya LoRA - Kaggle - NO GPU Required - Pwns Google Colab](https://youtu.be/JF2P7BIUpIU)**<br>[![image](https://cdn-uploads.huggingface.co/production/uploads/6345bd89fe134dfd7a0dba40/n82kc7ND2rDmhRmRexLrb.png)](https://youtu.be/JF2P7BIUpIU) | **50. [How Use Stable Diffusion, SDXL, ControlNet, LoRAs For FREE Without A GPU On Kaggle Like Google Colab](https://youtu.be/dpM02YMj8FY)**<br>[![image](https://cdn-uploads.huggingface.co/production/uploads/6345bd89fe134dfd7a0dba40/mXZLV052fCgDHOpEbGUtL.png)](https://youtu.be/dpM02YMj8FY) | | **51. [Turn Videos Into Animation With Just 1 Click - ReRender A Video Tutorial - Installer For Windows](https://youtu.be/a8oeCFyM5gA)**<br>[![image](https://cdn-uploads.huggingface.co/production/uploads/6345bd89fe134dfd7a0dba40/ODpemx_36c1iPZc1kMwVv.png)](https://youtu.be/a8oeCFyM5gA) | **52. [Turn Videos Into Animation / 3D Just 1 Click - ReRender A Video Tutorial - Installer For RunPod](https://youtu.be/cVf9Qf_pKks)**<br>[![image](https://cdn-uploads.huggingface.co/production/uploads/6345bd89fe134dfd7a0dba40/6MFsLs-f_GZR2L1Q6z413.png)](https://youtu.be/cVf9Qf_pKks) | --- | | | | :---: | :---: | | **53. [Double Your Stable Diffusion Inference Speed with RTX Acceleration TensorRT: A Comprehensive Guide](https://youtu.be/kvxX6NrPtEk)**<br>[![image](https://cdn-uploads.huggingface.co/production/uploads/6345bd89fe134dfd7a0dba40/QhVEeR5hzqZ5SXujvTxzs.png)](https://youtu.be/kvxX6NrPtEk) | **54. [How to Install & Run TensorRT on RunPod, Unix, Linux for 2x Faster Stable Diffusion Inference Speed](https://youtu.be/eKnMVXVjVoU)**<br>[![image](https://cdn-uploads.huggingface.co/production/uploads/6345bd89fe134dfd7a0dba40/6-Njk7uG39nMU98fGqRYN.png)](https://youtu.be/eKnMVXVjVoU) | | **55. [SOTA Image PreProcessing Scripts For Stable Diffusion Training - Auto Subject Crop & Face Focus](https://youtu.be/Fbuyu35TkE4)**<br>[![image](https://cdn-uploads.huggingface.co/production/uploads/6345bd89fe134dfd7a0dba40/Z3E4HXAg5KtVyMXn9AsFa.png)](https://youtu.be/Fbuyu35TkE4) | **56. [Fooocus Stable Diffusion Web UI - Use SDXL Like You Are Using Midjourney - Easy To Use High Quality](https://youtu.be/jHTkVm2mcfs)**<br>[![image](https://cdn-uploads.huggingface.co/production/uploads/6345bd89fe134dfd7a0dba40/zuJst_wZ7_qs_2e3KV33W.png)](https://youtu.be/jHTkVm2mcfs) | --- | | | | :---: | :---: | | **57. 
[How To Do Stable Diffusion XL (SDXL) DreamBooth Training For Free - Utilizing Kaggle - Easy Tutorial](https://youtu.be/16-b1AjvyBE)**<br>[![image](https://cdn-uploads.huggingface.co/production/uploads/6345bd89fe134dfd7a0dba40/JU3U2Kk6IdVxx1-hfNkrA.png)](https://youtu.be/16-b1AjvyBE) | **58. [PIXART-α : First Open Source Rival to Midjourney - Better Than Stable Diffusion SDXL - Full Tutorial](https://youtu.be/ZiUXf_idIR4)**<br>[![image](https://cdn-uploads.huggingface.co/production/uploads/6345bd89fe134dfd7a0dba40/NccIesCaYHEpKshYB-jF0.png)](https://youtu.be/ZiUXf_idIR4) | | **59. [Essential AI Tools and Libraries: A Guide to Python, Git, C++ Compile Tools, FFmpeg, CUDA, PyTorch](https://youtu.be/-NjNy7afOQ0)**<br>[![image](https://cdn-uploads.huggingface.co/production/uploads/6345bd89fe134dfd7a0dba40/uSxPD2Yav4JlLEAHQE5Dr.png)](https://youtu.be/-NjNy7afOQ0) | **60. [MagicAnimate: Temporally Consistent Human Image Animation using Diffusion Model - Full Tutorial](https://youtu.be/HeXknItbMM8)**<br>[![image](https://cdn-uploads.huggingface.co/production/uploads/6345bd89fe134dfd7a0dba40/rrfzVUOXVlXyrS4RueGHA.png)](https://youtu.be/HeXknItbMM8) | --- | | | | :---: | :---: | | **61. [Instantly Transfer Face By Using IP-Adapter-FaceID: Full Tutorial & GUI For Windows, RunPod & Kaggle](https://youtu.be/rjXsJ24kQQg)**<br>[![image](https://cdn-uploads.huggingface.co/production/uploads/6345bd89fe134dfd7a0dba40/lUGVOxswNTUEoUnCQfBVw.png)](https://youtu.be/rjXsJ24kQQg) | **62. [Detailed Comparison of 160+ Best Stable Diffusion 1.5 Custom Models & 1 Click Script to Download All](https://youtu.be/G-oZn4H-aHQ)**<br>[![image](https://cdn-uploads.huggingface.co/production/uploads/6345bd89fe134dfd7a0dba40/7P4KwfN4dci0uRdDaINIK.png)](https://youtu.be/G-oZn4H-aHQ) | | **63. [SUPIR: New SOTA Open Source Image Upscaler & Enhancer Model Better Than Magnific & Topaz AI Tutorial](https://youtu.be/PqREA6-bC3w)**<br>[![image](https://cdn-uploads.huggingface.co/production/uploads/6345bd89fe134dfd7a0dba40/QqzGQeKmOw7-aO7Ippqp2.png)](https://youtu.be/PqREA6-bC3w) | **64. [Full Stable Diffusion SD & XL Fine Tuning Tutorial With OneTrainer On Windows & Cloud - Zero To Hero](https://youtu.be/0t5l6CP9eBg)**<br>[![image](https://cdn-uploads.huggingface.co/production/uploads/6345bd89fe134dfd7a0dba40/fzw4B_eMUEQulB1v_3xtB.png)](https://youtu.be/0t5l6CP9eBg) | --- | | | | :---: | :---: | | **65. [Improve Stable Diffusion Prompt Following & Image Quality Significantly With Incantations Extension](https://youtu.be/lMQ7DIPmrfI)**<br>[![image](https://cdn-uploads.huggingface.co/production/uploads/6345bd89fe134dfd7a0dba40/TzuZWTiHAc3wTxh3PwGL5.png)](https://youtu.be/lMQ7DIPmrfI) | **66. [Complete Guide to SUPIR Enhancing and Upscaling Images Like in Sci-Fi Movies on Your PC](https://youtu.be/OYxVEvDf284)**<br>[![image](https://cdn-uploads.huggingface.co/production/uploads/6345bd89fe134dfd7a0dba40/mfxA2WEIFBtIN7_J_LKZn.png)](https://youtu.be/OYxVEvDf284) | | **67. [IDM-VTON: The Most Amazing Virtual Clothing Try On Application - Open Source - 1 Click Install & Use](https://youtu.be/m4pcIeAVQD0)**<br>[![image](https://cdn-uploads.huggingface.co/production/uploads/6345bd89fe134dfd7a0dba40/CuTegajGbsXe0gu8lO6HX.png)](https://youtu.be/m4pcIeAVQD0) | **68. 
[IDM-VTON: The Most Amazing Virtual Clothing Try On Application - RunPod - Massed Compute - Kaggle](https://youtu.be/LeHfgq_lAXU)**<br>[![image](https://cdn-uploads.huggingface.co/production/uploads/6345bd89fe134dfd7a0dba40/G3um_XDD1Pg2n_wQ1qwT2.png)](https://youtu.be/LeHfgq_lAXU) | --- | | | | :---: | :---: | | **69. [Stable Cascade Full Tutorial for Windows - Predecessor of SD3 - 1-Click Install Amazing Gradio APP](https://youtu.be/q0cYhalUUsc)**<br>[![image](https://cdn-uploads.huggingface.co/production/uploads/6345bd89fe134dfd7a0dba40/8vsUxi7EudpiTBTIyFVsc.png)](https://youtu.be/q0cYhalUUsc) | **70. [Stable Cascade Full Tutorial for Cloud - Predecessor of SD3 - Massed Compute, RunPod & Kaggle](https://youtu.be/PKDeMdEObNo)**<br>[![image](https://cdn-uploads.huggingface.co/production/uploads/6345bd89fe134dfd7a0dba40/p-n15XxgryehpflMBTwSW.png)](https://youtu.be/PKDeMdEObNo) | | **71. [How to Download (wget) Models from CivitAI & Hugging Face (HF) & upload into HF including privates](https://youtu.be/X5WVZ0NMaTg)**<br>[![image](https://cdn-uploads.huggingface.co/production/uploads/6345bd89fe134dfd7a0dba40/mZcF0n-u58Gq_Em9qKMOE.png)](https://youtu.be/X5WVZ0NMaTg) | **72. [Testing Stable Diffusion Inference Performance with Latest NVIDIA Driver including TensorRT ONNX](https://youtu.be/TNR2HZRw74E)**<br>[![image](https://cdn-uploads.huggingface.co/production/uploads/6345bd89fe134dfd7a0dba40/mZcF0n-u58Gq_Em9qKMOE.png)](https://youtu.be/TNR2HZRw74E) | --- | | | | :---: | :---: | | **73. [Mind-Blowing Deepfake Tutorial: Turn Anyone into Your Fav Movie Star! Better than Roop & Face Fusion](https://youtu.be/RdWKOUlenaY)**<br>[![image](https://cdn-uploads.huggingface.co/production/uploads/6345bd89fe134dfd7a0dba40/xahkx8vJCo0lseUUzkVHK.png)](https://youtu.be/RdWKOUlenaY) | **74. [Best Deepfake Open Source App ROPE - So Easy To Use Full HD Feceswap DeepFace, No GPU Required Cloud](https://youtu.be/HLWLSszHwEc)**<br>[![image](https://cdn-uploads.huggingface.co/production/uploads/6345bd89fe134dfd7a0dba40/3QBgzati3JLKgqFED_34w.png)](https://youtu.be/HLWLSszHwEc) | | **75. [V-Express: 1-Click AI Avatar Talking Heads Video Animation Generator - D-ID Alike - Free Open Source](https://youtu.be/xLqDTVWUSec)**<br>[![image](https://cdn-uploads.huggingface.co/production/uploads/6345bd89fe134dfd7a0dba40/M6VWMKaWhBu9UE3wvNQ2f.png)](https://youtu.be/xLqDTVWUSec) | **76. [V-Express 1-Click AI Talking Avatar Generator - Like D-ID - Massed Compute, RunPod & Kaggle Guide](https://youtu.be/GXBiqJOc9FE)**<br>[![image](https://cdn-uploads.huggingface.co/production/uploads/6345bd89fe134dfd7a0dba40/O-GwtKjvYb3NyI5Lgpf8S.png)](https://youtu.be/GXBiqJOc9FE) | --- | | | | :---: | :---: | | **77. [Zero to Hero Stable Diffusion 3 Tutorial with Amazing SwarmUI SD Web UI that Utilizes ComfyUI](https://youtu.be/HKX8_F1Er_w)**<br>[![image](https://cdn-uploads.huggingface.co/production/uploads/6345bd89fe134dfd7a0dba40/Uy27VaFHHa4PGzoNAB3j2.png)](https://youtu.be/HKX8_F1Er_w) | **78. [How to Use SwarmUI & Stable Diffusion 3 on Cloud Services Kaggle (free), Massed Compute & RunPod](https://youtu.be/XFUZof6Skkw)**<br>[![image](https://cdn-uploads.huggingface.co/production/uploads/6345bd89fe134dfd7a0dba40/qpjNGx-8UckXK0CuEzcv7.png)](https://youtu.be/XFUZof6Skkw) | | **79. 
[Animate Static Photos into Talking Videos with LivePortrait AI Compose Perfect Expressions Fast](https://youtu.be/FPtpNrmuwXk)**<br>[![image](https://cdn-uploads.huggingface.co/production/uploads/6345bd89fe134dfd7a0dba40/beNzm6ITK_5AnipzYt40Y.png)](https://youtu.be/FPtpNrmuwXk) | **80. [LivePortrait: No-GPU Cloud Tutorial - RunPod, MassedCompute & Free Kaggle Account - Animate Images](https://youtu.be/wG7oPp01COg)**<br>[![image](https://cdn-uploads.huggingface.co/production/uploads/6345bd89fe134dfd7a0dba40/fvWxGLUrIe5XWqP37OBfU.png)](https://youtu.be/wG7oPp01COg) | --- | | | | :---: | :---: | | **81. [Kling AI Video is FINALLY Public (All Countries), Free to Use and MIND BLOWING - Full Tutorial](https://youtu.be/zcpqAxYV1_w)**<br>[![image](https://cdn-uploads.huggingface.co/production/uploads/6345bd89fe134dfd7a0dba40/DPgXHOeVvSW1AdsMG_ywI.png)](https://youtu.be/zcpqAxYV1_w) | **82. [FLUX: The First Ever Open Source txt2img Model Truly Beats Midjourney & Others - FLUX is Awaited SD3](https://youtu.be/bupRePUOA18)**<br>[![image](https://cdn-uploads.huggingface.co/production/uploads/6345bd89fe134dfd7a0dba40/dguyYoaghc8IVdBrKMDkl.png)](https://youtu.be/bupRePUOA18) | | **83. [SUPIR Online - Ultimate Image Upscaler by Official Developers - Full Tutorial - SUPIR 2 Incoming](https://youtu.be/JajPVWMt2Lk)**<br>[![image](https://cdn-uploads.huggingface.co/production/uploads/6345bd89fe134dfd7a0dba40/j4kEYi0jQ5Vsxa0-Qq4rC.png)](https://youtu.be/JajPVWMt2Lk) | **84. [FLUX LoRA Training Simplified: From Zero to Hero with Kohya SS GUI (8GB GPU, Windows) Tutorial Guide](https://youtu.be/nySGu12Y05k)**<br>[![image](https://cdn-uploads.huggingface.co/production/uploads/6345bd89fe134dfd7a0dba40/5oeVl6mmaRyYZkxuXSShm.png)](https://youtu.be/nySGu12Y05k) | --- | | | | :---: | :---: | | **85. [Blazing Fast & Ultra Cheap FLUX LoRA Training on Massed Compute & RunPod Tutorial - No GPU Required!](https://youtu.be/-uhL2nW7Ddw)**<br>[![image](https://cdn-uploads.huggingface.co/production/uploads/6345bd89fe134dfd7a0dba40/hPBegzqT2A52hrveI7buf.png)](https://youtu.be/-uhL2nW7Ddw) | **86. [Invoke AI Full Install and Run Tutorial for Windows, RunPod and Massed Compute - 1-Click Easy Guide](https://youtu.be/BuxFBYAUGIY)**<br>[![image](https://cdn-uploads.huggingface.co/production/uploads/6345bd89fe134dfd7a0dba40/tWhmpzXqNY-dwQeJZUozb.png)](https://youtu.be/BuxFBYAUGIY) | | **87. [How to Install Python, CUDA, cuDNN, C++ Build Tools, FFMPEG & Git Tutorial for AI Applications](https://youtu.be/DrhUHnYfwC0)**<br>[![image](https://cdn-uploads.huggingface.co/production/uploads/6345bd89fe134dfd7a0dba40/hchc2xHcOkF2VX-wBh0tW.png)](https://youtu.be/DrhUHnYfwC0) | **88. [How to Use MimicPC Full Tutorial - Run Best AI APPs in Your Browser Through MimicPC Servers](https://youtu.be/URnOHbmuKWs)**<br>[![image](https://cdn-uploads.huggingface.co/production/uploads/6345bd89fe134dfd7a0dba40/JmXaFHnVYr3GsykKz8xlB.png)](https://youtu.be/URnOHbmuKWs) | --- | | | | :---: | :---: | | **89. [How To Enable VPN For Only A Single APP With Cloudflare Zero Trust Free Warp VPN - Split Tunneling](https://youtu.be/0RSaYlmmblc)**<br>[![image](https://cdn-uploads.huggingface.co/production/uploads/6345bd89fe134dfd7a0dba40/iKA3jnK8_jXZPD800OlB6.png)](https://youtu.be/0RSaYlmmblc) | **90. 
[FLUX Full Fine-Tuning / DreamBooth Training Master Tutorial for Windows, RunPod & Massed Compute](https://youtu.be/FvpWy1x5etM)**<br>[![image](https://cdn-uploads.huggingface.co/production/uploads/6345bd89fe134dfd7a0dba40/n_k1iX-Aa4nliLe_wGwhB.png)](https://youtu.be/FvpWy1x5etM) | | **91. [Stable Diffusion 3.5 Large How To Use Tutorial With Best Configuration and Comparison With FLUX DEV](https://youtu.be/-zOKhoO9a5s)**<br>[![image](https://cdn-uploads.huggingface.co/production/uploads/6345bd89fe134dfd7a0dba40/tQthJqElQ4AsBmCFFnh57.png)](https://youtu.be/-zOKhoO9a5s) | **92. [How To Use Mochi 1 Open Source Video Generation Model On Your Windows PC, RunPod and Massed Compute](https://youtu.be/iqBV7bCbDJY)**<br>[![image](https://cdn-uploads.huggingface.co/production/uploads/6345bd89fe134dfd7a0dba40/_n6EeRn1P_HoBhwXhuP_E.png)](https://youtu.be/iqBV7bCbDJY) | --- | | | | :---: | :---: | | **93. [FLUX Tools Outpainting, Inpainting (Fill), Redux, Depth & Canny Ultimate Tutorial Guide with SwarmUI](https://youtu.be/hewDdVJEqOQ)**<br>[![image](https://cdn-uploads.huggingface.co/production/uploads/6345bd89fe134dfd7a0dba40/uktfY8N6AKf1RDF7ydP_f.png)](https://youtu.be/hewDdVJEqOQ) | **94. [Best Open Source Image to Video Generator CogVideoX1.5-5B-I2V Step by Step Windows & Cloud Tutorial](https://youtu.be/5UCkMzP2VLE)**<br>[![image](https://cdn-uploads.huggingface.co/production/uploads/6345bd89fe134dfd7a0dba40/LtAkbiSxvMxczuglFhg7F.png)](https://youtu.be/5UCkMzP2VLE) | | **95. [SANA: Ultra HD Fast Text to Image Model from NVIDIA Step by Step Tutorial on Windows, Cloud & Kaggle](https://youtu.be/KW-MHmoNcqo)**<br>[![image](https://cdn-uploads.huggingface.co/production/uploads/6345bd89fe134dfd7a0dba40/WrLfe0J8P5vsNVL2mpOhX.png)](https://youtu.be/KW-MHmoNcqo) | **96. [NVIDIA SANA 4K: Mind-Blowing 16MP Text-to-Image AI Model Runs on 8GB GPUs - Game-Changing Tech](https://youtu.be/GjENQfHF4W8)**<br>[![image](https://cdn-uploads.huggingface.co/production/uploads/6345bd89fe134dfd7a0dba40/zJ8D25j96YeqQe8OFn1By.png)](https://youtu.be/GjENQfHF4W8) | --- | | | | :---: | :---: | | **97. [MSI RTX 5090 TRIO FurMark Benchmarking + Overclocking + Noise Testing and Comparing with RTX 3090 TI](https://youtu.be/uV3oqdILOmA)**<br>[![image](https://cdn-uploads.huggingface.co/production/uploads/6345bd89fe134dfd7a0dba40/WSuaQABKlDr4X7DH4d7UA.png)](https://youtu.be/uV3oqdILOmA) | **98. [RTX 5090 Tested Against FLUX DEV, SD 3.5 Large, SD 3.5 Medium, SDXL, SD 1.5, AMD 9950X + RTX 3090 TI](https://youtu.be/jHlGzaDLkto)**<br>[![image](https://cdn-uploads.huggingface.co/production/uploads/6345bd89fe134dfd7a0dba40/udletUqkCuFxxB0zzLb4H.png)](https://youtu.be/jHlGzaDLkto) | | **99. [SwarmUI free Kaggle Account Notebook Full Tutorial - SD 1.5, SDXL, SD 3.5, FLUX, Hunyuan, SkyReels](https://youtu.be/VR1s7LxK5ZU)**<br>[![image](https://cdn-uploads.huggingface.co/production/uploads/6345bd89fe134dfd7a0dba40/ejoLl2N6xb8uZ9gw71pbu.png)](https://youtu.be/VR1s7LxK5ZU) | **100. [How ChatGPT (LLMs) Works - Excellent Graphical Illustration Video](https://youtu.be/cigddCCLJRI)**<br>[![image](https://cdn-uploads.huggingface.co/production/uploads/6345bd89fe134dfd7a0dba40/9A_FjstrKpLOwCXrtVOuh.png)](https://youtu.be/cigddCCLJRI) | --- | | | | :---: | :---: | | **101. 
[Wan 2.1 AI Video Model: Ultimate Step-by-Step Tutorial for Windows & Affordable Private Cloud Setup](https://youtu.be/hnAhveNy-8s)**<br>[![image](https://cdn-uploads.huggingface.co/production/uploads/6345bd89fe134dfd7a0dba40/tYRYsP_yPPPhT5A7l5gGm.png)](https://youtu.be/hnAhveNy-8s) | **102. [Ultra Advanced Wan 2.1 App Updates & Famous Squish Effect to Generate Squishing Videos Locally](https://youtu.be/ueMrzmbdWBg)**<br>[![image](https://cdn-uploads.huggingface.co/production/uploads/6345bd89fe134dfd7a0dba40/fKY99Lbu0videyyOd-6Kn.png)](https://youtu.be/ueMrzmbdWBg) | | **103. [MMAudio from Sony AI Full Tutorial - Open Source AI Audio Generator for Videos, Images and Text](https://youtu.be/504f8S4MLTw)**<br>[![image](https://cdn-uploads.huggingface.co/production/uploads/6345bd89fe134dfd7a0dba40/yGmbVbY0SkJ6IjPPWu_9f.png)](https://youtu.be/504f8S4MLTw) | **104. [FramePack Full Tutorial: 1-Click to Install on Windows - Up to 120 Second Image-to-Videos with 6GB](https://youtu.be/HwMngohRmHg)**<br>[![image](https://cdn-uploads.huggingface.co/production/uploads/6345bd89fe134dfd7a0dba40/1dE0swD0LUieRMjGLcd2J.png)](https://youtu.be/HwMngohRmHg) | --- | | | | :---: | :---: | | **105. [Master Local AI Art & Video Generation with SwarmUI (ComfyUI Backend): The Ultimate 2025 Tutorial](https://youtu.be/fTzlQ0tjxj0)**<br>[![image](https://cdn-uploads.huggingface.co/production/uploads/6345bd89fe134dfd7a0dba40/tAXCtoTTMsPHcGUdg7Ff1.png)](https://youtu.be/fTzlQ0tjxj0) | **106. [Step by Step TRELLIS Tutorial to Generate Amazing High-Quality 3D Assets from Static Images Locally](https://youtu.be/EhU7Jil9WAk)**<br>[![image](https://cdn-uploads.huggingface.co/production/uploads/6345bd89fe134dfd7a0dba40/eh7Pqn_v4Lg2ow4wxFlND.png)](https://youtu.be/EhU7Jil9WAk) | | **107. [Transfer Any Clothing Into A New Person & Turn Any Person Into A 3D Figure - ComfyUI Tutorial](https://youtu.be/ZzYnhKeaJBs)**<br>[![image](https://cdn-uploads.huggingface.co/production/uploads/6345bd89fe134dfd7a0dba40/7vIPKwYO66KO6Ysk-k7x6.png)](https://youtu.be/ZzYnhKeaJBs) | **108. [Wan 2.1 Text-to-Video T2V & Image-to-Video I2V Tutorial for SwarmUI with CausVid LoRA Extreme Speed](https://youtu.be/XNcn845UXdw)**<br>[![image](https://cdn-uploads.huggingface.co/production/uploads/6345bd89fe134dfd7a0dba40/lmic_VU9aRYf3Q7Rrkh2H.png)](https://youtu.be/XNcn845UXdw) | --- | | | | :---: | :---: | | **109. [SwarmUI Teacache Full Tutorial With Very Best Wan 2.1 I2V & T2V Presets - ComfyUI Used as Backend](https://youtu.be/r38eWyNoXHo)**<br>[![image](https://cdn-uploads.huggingface.co/production/uploads/6345bd89fe134dfd7a0dba40/3MTVj0j_LQJSEySCKyg0H.png)](https://youtu.be/r38eWyNoXHo) | **110. [VEO 3 FLOW Full Tutorial - How To Use VEO3 in FLOW Guide](https://youtu.be/AoEmQPU2gtg)**<br>[![image](https://cdn-uploads.huggingface.co/production/uploads/6345bd89fe134dfd7a0dba40/kixvVmPdVY-Ao58hwXAag.png)](https://youtu.be/AoEmQPU2gtg) | | **111. [CausVid LoRA V2 of Wan 2.1 Brings Massive Quality Improvements, Better Colors and Saturation](https://youtu.be/1rAwZv0hEcU)**<br>[![image](https://cdn-uploads.huggingface.co/production/uploads/6345bd89fe134dfd7a0dba40/_s2R5_VYS6u2Cd0GwGFN7.png)](https://youtu.be/1rAwZv0hEcU) | **112. [Hi3DGen Full Tutorial With Ultra Advanced App to Generate the Very Best 3D Meshes from Static Images](https://youtu.be/HjbD20B2C1g)**<br>[![image](https://cdn-uploads.huggingface.co/production/uploads/6345bd89fe134dfd7a0dba40/EzyVg24MA6gwEObcXG4yj.png)](https://youtu.be/HjbD20B2C1g) | --- | | | | :---: | :---: | | **113. 
[Ultimate ComfyUI & SwarmUI on RunPod Tutorial with Addition RTX 5000 Series GPUs & 1-Click to Setup](https://youtu.be/R02kPf9Y3_w)**<br>[![image](https://cdn-uploads.huggingface.co/production/uploads/6345bd89fe134dfd7a0dba40/1672Cod9hE7ZLZMsFo4JW.png)](https://youtu.be/R02kPf9Y3_w) | **114. [WAN 2.1 FusionX is the New Best of Local Video Generation with Only 8 Steps + FLUX Upscaling Guide](https://youtu.be/Xbn93GRQKsQ)**<br>[![image](https://cdn-uploads.huggingface.co/production/uploads/6345bd89fe134dfd7a0dba40/8ky-JaZeauPLackMETaLk.png)](https://youtu.be/Xbn93GRQKsQ) | | **115. [FLUX Kontext Dev Detailed Local Windows How To Tutorial - Better Than ChatGPT & Gemini Image Editing](https://youtu.be/adF9X9E0Chs)**<br>[![image](https://cdn-uploads.huggingface.co/production/uploads/6345bd89fe134dfd7a0dba40/f31ZjyYMCmTgJ3M757AjX.png)](https://youtu.be/adF9X9E0Chs) | **116. [MultiTalk Full Tutorial With 1-Click Installer - Make Talking and Singing Videos From Static Images](https://youtu.be/8cMIwS9qo4M)**<br>[![image](https://cdn-uploads.huggingface.co/production/uploads/6345bd89fe134dfd7a0dba40/cipVmNkzcOCOnERCVTHND.png)](https://youtu.be/8cMIwS9qo4M) | --- | | | | :---: | :---: | | **117. [MultiTalk Levelled Up - Way Better Animation Compared to Before with New Workflows - Image to Video](https://youtu.be/wgCtUeog41g)**<br>[![image](https://cdn-uploads.huggingface.co/production/uploads/6345bd89fe134dfd7a0dba40/3qXem6fMmUg1OxEciQrp6.png)](https://youtu.be/wgCtUeog41g) | **118. [SECourses Video and Image Upscaler Pro STAR vs TOPAZ StarLight vs Image Based Best Upscalers](https://youtu.be/q8QCtxrVK7g)**<br>[![image](https://cdn-uploads.huggingface.co/production/uploads/6345bd89fe134dfd7a0dba40/5sm2Kvc1Kz5vj_EVGiYgc.png)](https://youtu.be/q8QCtxrVK7g) | | **119. [Wan 2.2 & FLUX Krea Full Tutorial - Automated Install - Ready Perfect Presets - SwarmUI with ComfyUI](https://youtu.be/8MvvuX4YPeo)**<br>[![image](https://cdn-uploads.huggingface.co/production/uploads/6345bd89fe134dfd7a0dba40/nPBUmGKFpw1903b1S16jR.png)](https://youtu.be/8MvvuX4YPeo) | **120. [Qwen Image Dominates Text-to-Image: 700+ Tests Reveal Why It's Better Than FLUX - Presets Published](https://youtu.be/R6h02YY6gUs)**<br>[![image](https://cdn-uploads.huggingface.co/production/uploads/6345bd89fe134dfd7a0dba40/GipONloEGp6P2tpxMRFm4.png)](https://youtu.be/R6h02YY6gUs) | --- | | | | :---: | :---: | | **121. [Wan 2.2, FLUX & Qwen Image Upgraded: Ultimate Tutorial for Open Source SOTA Image & Video Gen Models](https://youtu.be/3BFDcO2Ysu4)**<br>[![image](https://cdn-uploads.huggingface.co/production/uploads/6345bd89fe134dfd7a0dba40/s3OyItRra4UT0H91vdWra.png)](https://youtu.be/3BFDcO2Ysu4) | **122. [Qwen Image Edit Full Tutorial: 26 Different Demo Cases, Prompts & Images, Pwns FLUX Kontext Dev](https://youtu.be/gLCMhbsICEQ)**<br>[![image](https://cdn-uploads.huggingface.co/production/uploads/6345bd89fe134dfd7a0dba40/xMotDtPQ2a63H_s8ol6C3.png)](https://youtu.be/gLCMhbsICEQ) | | **123. [Nano Banana (Gemini 2.5 Flash Image) Full Tutorial - 27 Unique Cases vs Qwen Image Edit - Free 2 Use](https://youtu.be/qPUreQxB8zQ)**<br>[![image](https://cdn-uploads.huggingface.co/production/uploads/6345bd89fe134dfd7a0dba40/mS2YPA9T2ISK0FZS2s_Sg.png)](https://youtu.be/qPUreQxB8zQ) | |
ginic/train_duration_200_samples_2_wav2vec2-large-xlsr-53-buckeye-ipa
ginic
2025-09-11T19:57:39Z
0
0
null
[ "safetensors", "wav2vec2", "automatic-speech-recognition", "en", "license:mit", "region:us" ]
automatic-speech-recognition
2025-09-11T19:56:32Z
--- license: mit language: - en pipeline_tag: automatic-speech-recognition --- # About This model was created to support experiments for evaluating phonetic transcription with the Buckeye corpus as part of https://github.com/ginic/multipa. This is a version of facebook/wav2vec2-large-xlsr-53 fine tuned on a specific subset of the Buckeye corpus. For details about specific model parameters, please view the config.json here or training scripts in the scripts/buckeye_experiments folder of the GitHub repository. # Experiment Details These experiments are targeted at understanding how increasing the amount of data used to train the model affects performance. The first number in the model name indicates the total number of randomly selected data samples. Data samples are selected to maintain 50/50 gender split from speakers, with the exception of the models trained on 20000 samples, as there are 18782 audio samples in our train split of Buckeye, but they are not split equally between male and female speakers. Experiments using 20000 samples actually use all 8252 samples from female speakers in the train set, but randomly select 10000 samples from male speakers for a total of 18252 samples. For each number of train data samples, 5 models are trained to vary train data selection (`train_seed`) without varying other hyperparameters. Before these models were trained, simple grid search hyperparameter tuning was done to select reasonable hyperparameters for fine-tuning with the target number of samples. The hyperparam tuning models have not been uploaded to HuggingFace. Goals: - See how performance on the test set changes as more data is used in fine-tuning Params to vary: - training seed (--train_seed) - number of data samples used in training the model (--train_samples): 100, 200, 400, 800, 1600, 3200, 6400, 12800, 20000
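# Example usage (sketch)

The snippet below is a minimal inference sketch and is not part of the training scripts referenced above. It assumes the repository ships a standard `Wav2Vec2Processor` vocabulary alongside the CTC model and that input audio is 16 kHz mono, as expected by the XLSR-53 base model; the file name `example.wav` is a placeholder.

```python
# Minimal phonetic (IPA) transcription sketch with a fine-tuned wav2vec2 CTC checkpoint.
# Assumptions: the repo provides a Wav2Vec2Processor, and audio is resampled to 16 kHz mono.
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

model_id = "ginic/train_duration_200_samples_2_wav2vec2-large-xlsr-53-buckeye-ipa"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

waveform, sample_rate = torchaudio.load("example.wav")  # placeholder input file
if sample_rate != 16_000:
    waveform = torchaudio.functional.resample(waveform, sample_rate, 16_000)

inputs = processor(waveform.squeeze(0), sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids)[0])  # IPA string for the clip
```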
raniero/dummy-cpu-023-repo
raniero
2025-09-11T19:57:34Z
0
0
peft
[ "peft", "safetensors", "lora", "bittensor", "subnet-56", "gradients", "it", "base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0", "base_model:adapter:TinyLlama/TinyLlama-1.1B-Chat-v1.0", "license:apache-2.0", "region:us" ]
null
2025-09-11T19:57:31Z
--- language: - it license: apache-2.0 library_name: peft tags: [lora, bittensor, subnet-56, gradients] base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0 --- # ARES56 — LoRA adapter Upload ID: dummy-cpu-023_1757620651 upload_id: unknown_1757404904 Included files: - `adapter_model.safetensors` — SHA256: `e5a00aa9991ac8a5ee3109844d84a55583bd20572ad3ffcd42792f3c36b183ad` - `adapter_config.json` — SHA256: `4f39b39f151e0d31a8135b89599746fd2e06285a8594595589d7974f553af441` - `tokenizer_config.json` — SHA256: `missing` - `special_tokens_map.json` — SHA256: `missing` Output generated via Axolotl (CPU / smoke run). No full checkpoint is included.
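The adapter can be attached to its TinyLlama base model with PEFT. The snippet below is a loading sketch only; since the tokenizer files are marked `missing` above, the tokenizer is taken from the base model, and the prompt is just an illustrative Italian greeting.

```python
# Sketch: attach the LoRA adapter to the TinyLlama base model with PEFT.
# Tokenizer files are not shipped with the adapter, so the base model's tokenizer is used.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"
adapter_id = "raniero/dummy-cpu-023-repo"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base_model, adapter_id)

prompt = "Ciao, come stai?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```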
marucas92/fedephm
marucas92
2025-09-11T19:57:21Z
0
0
diffusers
[ "diffusers", "text-to-image", "lora", "template:diffusion-lora", "base_model:stable-diffusion-v1-5/stable-diffusion-v1-5", "base_model:adapter:stable-diffusion-v1-5/stable-diffusion-v1-5", "license:apache-2.0", "region:us" ]
text-to-image
2025-09-11T19:54:13Z
--- tags: - text-to-image - lora - diffusers - template:diffusion-lora widget: - output: url: images/fedephm (6).jpeg text: '-' base_model: stable-diffusion-v1-5/stable-diffusion-v1-5 instance_prompt: 'fedephm, #fedephm' license: apache-2.0 --- # fedephm <Gallery /> ## Model description Fede, The Guardian ## Trigger words You should use `fedephm` or `#fedephm` to trigger the image generation. ## Download model [Download](/marucas92/fedephm/tree/main) them in the Files & versions tab.
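## Example usage (sketch)

The snippet below is a usage sketch, not shipped with this repository. It loads the LoRA on top of the Stable Diffusion 1.5 base model listed in the metadata and prompts with the `fedephm` trigger word; the LoRA weight file name is not specified here, so check the Files & versions tab and pass `weight_name=...` if needed.

```python
# Sketch: load this LoRA into the SD 1.5 pipeline and generate with the trigger word.
# The exact LoRA weight file name inside the repo is an open detail; pass weight_name=...
# explicitly after checking the Files & versions tab if automatic lookup fails.
import torch
from diffusers import AutoPipelineForText2Image

pipeline = AutoPipelineForText2Image.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipeline.load_lora_weights("marucas92/fedephm")

image = pipeline("fedephm, portrait photo, highly detailed").images[0]
image.save("fedephm.png")
```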
amethyst9/1418484
amethyst9
2025-09-11T19:50:25Z
0
0
null
[ "region:us" ]
null
2025-09-11T19:50:24Z
[View on Civ Archive](https://civarchive.com/models/1344308?modelVersionId=1518207)
BootesVoid/cmf872fdq0fp0sr5381c8lcwi_cmffs6di004a2x0n0s5mahy42
BootesVoid
2025-09-11T19:49:35Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-09-11T19:49:33Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: ICYBLUE99 --- # Cmf872Fdq0Fp0Sr5381C8Lcwi_Cmffs6Di004A2X0N0S5Mahy42 <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `ICYBLUE99` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "ICYBLUE99", "lora_weights": "https://huggingface.co/BootesVoid/cmf872fdq0fp0sr5381c8lcwi_cmffs6di004a2x0n0s5mahy42/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('BootesVoid/cmf872fdq0fp0sr5381c8lcwi_cmffs6di004a2x0n0s5mahy42', weight_name='lora.safetensors') image = pipeline('ICYBLUE99').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Training details - Steps: 2500 - Learning rate: 9e-05 - LoRA rank: 16 ## Contribute your own examples You can use the [community tab](https://huggingface.co/BootesVoid/cmf872fdq0fp0sr5381c8lcwi_cmffs6di004a2x0n0s5mahy42/discussions) to add images that show off what you’ve made with this LoRA.
xintaozhen/MiniVLA
xintaozhen
2025-09-11T19:48:14Z
0
1
transformers
[ "transformers", "onnx", "safetensors", "vision-language-action", "edge-deployment", "tensorRT", "qwen", "image-text-to-text", "en", "dataset:LIBERO", "base_model:Stanford-ILIAD/minivla-vq-libero90-prismatic", "base_model:quantized:Stanford-ILIAD/minivla-vq-libero90-prismatic", "license:apache-2.0", "endpoints_compatible", "region:us" ]
image-text-to-text
2025-09-07T22:10:11Z
--- license: apache-2.0 language: - en tags: - vision-language-action - edge-deployment - tensorRT - qwen base_model: Stanford-ILIAD/minivla-vq-libero90-prismatic library_name: transformers datasets: - LIBERO pipeline_tag: image-text-to-text --- # MiniVLA This repository hosts **MiniVLA** – a modular and deployment-friendly Vision-Language-Action (VLA) model designed for **edge hardware** (e.g., Jetson Orin Nano). It contains model checkpoints, Hugging Face–compatible Qwen-0.5B LLM, and ONNX/TensorRT exports for accelerated inference. --- ## 🔎 Introduction To enable low-latency, high-security desktop robot tasks on local devices, this project focuses on addressing the deployment and performance challenges of lightweight multimodal models on edge hardware. Using OpenVLA-Mini as a case study, we propose a hybrid acceleration pipeline designed to alleviate deployment bottlenecks on resource-constrained platforms. We reproduced a lightweight VLA model and then significantly reduced its end-to-end latency and GPU memory usage by exporting the vision encoder into ONNX and TensorRT engines. While we observed a moderate drop in the task success rate (around 5-10% in LIBERO desktop operation tasks), our results still demonstrate the feasibility of achieving efficient, real-time VLA inference on the edge side. --- ## 🏗️ System Architecture The MiniVLA deployment is designed with modular microservices: <p align="center"> <img src="./Results/System_Architecture.svg" width="100%" > </p> - **Inputs**: image + language instruction - **Vision Encoder**: DinoV2 / SigLIP → ONNX/TensorRT - **LLM**: Qwen 2.5 0.5B (Hugging Face / TensorRT-LLM) - **Router & Fallback**: balances between local inference and accelerated microservices - **Robot Action**: decoded from predicted action tokens ### Hybrid Acceleration <p align="center"> <img src="./Results/MiniVLA_Architecture.svg" width="100%" > </p> - **Vision Encoder Acceleration**: PyTorch → ONNX → TensorRT, deployed as microservice (`/vision/encode`) - **LLM Acceleration**: Hugging Face → TensorRT-LLM engine, deployed as microservice (`/llm/generate`) - **Main Process**: Orchestrates requests, ensures fallback, and outputs robot actions --- ## 📦 Contents - **`models/`** Contains the original MiniVLA model checkpoints, based on [Stanford-ILIAD/minivla-vq-libero90-prismatic](https://huggingface.co/Stanford-ILIAD/minivla-vq-libero90-prismatic). Special thanks to the Stanford ILIAD team for their open-source contribution. - **`qwen25-0_5b-trtllm/`** Qwen-0.5B language model converted to TensorRT-LLM format. - **`qwen25-0_5b-with-extra-tokenizer/`** Hugging Face–compatible Qwen-0.5B model with extended tokenizer. 
- **`tensorRT/`** Vision encoder acceleration files: - `vision_encoder_fp16.onnx` - `vision_encoder_fp16.engine` --- ## 🔗 Related Project For full implementation and code, please visit the companion GitHub repository: 👉 [https://github.com/Zhenxintao/MiniVLA](https://github.com/Zhenxintao/MiniVLA) ## 🚀 Usage ### Load Hugging Face Qwen-0.5B ```python from transformers import AutoTokenizer, AutoModelForCausalLM model_name = "xintaozhen/MiniVLA/qwen25-0_5b-with-extra-tokenizer" tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True) model = AutoModelForCausalLM.from_pretrained(model_name, trust_remote_code=True) ``` ### Call TensorRT Vision Encoder (HTTP API) ```python import requests url = "http://vision.svc:8000/vision/encode" image_data = {"image": "base64_encoded_image"} response = requests.post(url, json=image_data) vision_embedding = response.json() ``` ### Call TensorRT-LLM (HTTP API) ```python import requests url = "http://llm.svc:8810/llm/generate" payload = {"prompt": "Close the top drawer of the cabinet."} response = requests.post(url, json=payload) generated_actions = response.json() ``` --- ## 🔑 Key Contributions - Built an **end-to-end online inference framework** with a FastAPI service (`/act`), transforming offline benchmark code into a **real-time deployable system**. - Reproduced a lightweight **OpenVLA-Mini** and proposed a **hybrid acceleration pipeline**. - Exported the **vision encoder** to TensorRT, reducing perception latency and GPU memory usage. - Improved **GPU memory efficiency**: reduced average utilization from ~67% to ~43%, and peak usage from ~85% to ~65%, making deployment feasible under 8 GB memory constraints (similar to Jetson-class devices). - Integrated **Qwen 2.5 0.5B** in Hugging Face and TensorRT-LLM formats. - Designed a **modular system architecture** with router & fallback for robustness. - Demonstrated efficient **edge-side VLA inference** on Jetson Orin Nano in LIBERO tasks, with only a moderate performance drop (5–10%). --- ## 🖥️ Device & Performance Target deployment: **Jetson Orin Nano (16 GB / 8 GB variants)**. For simulation and reproducibility, experiments were conducted on a **local workstation** equipped with: - **GPU**: NVIDIA GeForce RTX 4060 Laptop GPU (8 GB VRAM) - **Driver / CUDA**: Driver 550.144.03, CUDA 12.4 - **OS**: Ubuntu 22.04 LTS ⚠️ **Note**: Although the experiments were run on RTX 4060, the GPU memory (8 GB) is comparable to entry-level Jetson devices, making it a suitable proxy for evaluating edge deployment feasibility. ### GPU Memory Utilization (Long-Sequence Tasks) | Model Variant | Avg. GPU Utilization | Peak GPU Utilization | | --------------------------------------- | -------------------- | -------------------- | | Original MiniVLA (PyTorch, no TRT) | ~67% | ~85% | | MiniVLA w/ TensorRT Vision Acceleration | ~43% | ~65% | **Observation:** - The hybrid acceleration pipeline (TensorRT vision + VLA main process) reduced **average GPU utilization by ~24%** and **peak usage by ~20%**. - This indicates better **GPU memory efficiency**, allowing longer sequence tasks to run stably under resource-constrained devices. ### Example nvidia-smi Output Original model: ``` GPU Memory-Usage: 4115MiB / 8188MiB GPU-Util: 67% (peak 85%) ``` With TensorRT vision acceleration: ``` GPU Memory-Usage: 4055MiB / 8188MiB GPU-Util: 43% (peak 65%) ``` --- ## 📑 License Specify the license here (e.g., Apache 2.0, MIT, or same as MiniVLA / Qwen license). 
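--- ## Appendix: Vision Encoder Export Sketch As a rough illustration of the PyTorch → ONNX → TensorRT path described in the Hybrid Acceleration section: the module, input resolution, and opset below are placeholders for the sketch, not the actual MiniVLA export script (which lives in the companion GitHub repository). FP16 conversion happens at engine-build time via `trtexec --fp16`, matching the `vision_encoder_fp16.onnx` / `.engine` files listed under `tensorRT/`.

```python
# Illustrative only: export a stand-in vision encoder to ONNX, then build a TensorRT engine.
# The real backbone (DinoV2 / SigLIP), preprocessing, and shapes come from the MiniVLA repo.
import torch
import torch.nn as nn

class PlaceholderEncoder(nn.Module):
    """Stand-in module so the export call below is runnable as a sketch."""
    def __init__(self):
        super().__init__()
        self.patchify = nn.Conv2d(3, 64, kernel_size=16, stride=16)

    def forward(self, pixel_values):
        feats = self.patchify(pixel_values)       # (B, 64, 14, 14)
        return feats.flatten(2).transpose(1, 2)   # (B, 196, 64) patch-token layout

encoder = PlaceholderEncoder().eval()
dummy = torch.randn(1, 3, 224, 224)              # assumed input resolution

torch.onnx.export(
    encoder, dummy, "vision_encoder_fp16.onnx",
    input_names=["pixel_values"], output_names=["patch_embeddings"],
    dynamic_axes={"pixel_values": {0: "batch"}, "patch_embeddings": {0: "batch"}},
    opset_version=17,
)

# Engine build (offline), mirroring the file names in tensorRT/:
#   trtexec --onnx=vision_encoder_fp16.onnx --fp16 --saveEngine=vision_encoder_fp16.engine
```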
--- ## 📚 Citation If you use **MiniVLA** in your research or deployment, please cite: ``` @misc{MiniVLA2025, title = {MiniVLA: A Modular Vision-Language-Action Model for Edge Deployment}, author = {Xintao Zhen}, year = {2025}, url = {https://huggingface.co/xintaozhen/MiniVLA} } ``` We also acknowledge and thank the authors of [Stanford-ILIAD/minivla-vq-libero90-prismatic](https://huggingface.co/Stanford-ILIAD/minivla-vq-libero90-prismatic), which serves as the base for the checkpoints included in this repository. ---
ginic/train_duration_6400_samples_2_wav2vec2-large-xlsr-53-buckeye-ipa
ginic
2025-09-11T19:41:29Z
0
0
null
[ "safetensors", "wav2vec2", "automatic-speech-recognition", "en", "license:mit", "region:us" ]
automatic-speech-recognition
2025-09-11T19:40:07Z
--- license: mit language: - en pipeline_tag: automatic-speech-recognition --- # About This model was created to support experiments for evaluating phonetic transcription with the Buckeye corpus as part of https://github.com/ginic/multipa. This is a version of facebook/wav2vec2-large-xlsr-53 fine tuned on a specific subset of the Buckeye corpus. For details about specific model parameters, please view the config.json here or training scripts in the scripts/buckeye_experiments folder of the GitHub repository. # Experiment Details These experiments are targeted at understanding how increasing the amount of data used to train the model affects performance. The first number in the model name indicates the total number of randomly selected data samples. Data samples are selected to maintain 50/50 gender split from speakers, with the exception of the models trained on 20000 samples, as there are 18782 audio samples in our train split of Buckeye, but they are not split equally between male and female speakers. Experiments using 20000 samples actually use all 8252 samples from female speakers in the train set, but randomly select 10000 samples from male speakers for a total of 18252 samples. For each number of train data samples, 5 models are trained to vary train data selection (`train_seed`) without varying other hyperparameters. Before these models were trained, simple grid search hyperparameter tuning was done to select reasonable hyperparameters for fine-tuning with the target number of samples. The hyperparam tuning models have not been uploaded to HuggingFace. Goals: - See how performance on the test set changes as more data is used in fine-tuning Params to vary: - training seed (--train_seed) - number of data samples used in training the model (--train_samples): 100, 200, 400, 800, 1600, 3200, 6400, 12800, 20000
ginic/train_duration_6400_samples_5_wav2vec2-large-xlsr-53-buckeye-ipa
ginic
2025-09-11T19:40:03Z
0
0
null
[ "safetensors", "wav2vec2", "automatic-speech-recognition", "en", "license:mit", "region:us" ]
automatic-speech-recognition
2025-09-11T19:38:47Z
--- license: mit language: - en pipeline_tag: automatic-speech-recognition --- # About This model was created to support experiments for evaluating phonetic transcription with the Buckeye corpus as part of https://github.com/ginic/multipa. This is a version of facebook/wav2vec2-large-xlsr-53 fine tuned on a specific subset of the Buckeye corpus. For details about specific model parameters, please view the config.json here or training scripts in the scripts/buckeye_experiments folder of the GitHub repository. # Experiment Details These experiments are targeted at understanding how increasing the amount of data used to train the model affects performance. The first number in the model name indicates the total number of randomly selected data samples. Data samples are selected to maintain 50/50 gender split from speakers, with the exception of the models trained on 20000 samples, as there are 18782 audio samples in our train split of Buckeye, but they are not split equally between male and female speakers. Experiments using 20000 samples actually use all 8252 samples from female speakers in the train set, but randomly select 10000 samples from male speakers for a total of 18252 samples. For each number of train data samples, 5 models are trained to vary train data selection (`train_seed`) without varying other hyperparameters. Before these models were trained, simple grid search hyperparameter tuning was done to select reasonable hyperparameters for fine-tuning with the target number of samples. The hyperparam tuning models have not been uploaded to HuggingFace. Goals: - See how performance on the test set changes as more data is used in fine-tuning Params to vary: - training seed (--train_seed) - number of data samples used in training the model (--train_samples): 100, 200, 400, 800, 1600, 3200, 6400, 12800, 20000
FlexTeam/ATC_NER_FT02
FlexTeam
2025-09-11T19:37:50Z
9
0
transformers
[ "transformers", "safetensors", "distilbert", "token-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2025-09-09T19:15:00Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Siddharth63/Qwen1.7B-Assertion-SFT
Siddharth63
2025-09-11T19:37:47Z
11
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "unsloth", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-10T20:16:28Z
--- library_name: transformers tags: - unsloth --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
oriyonay/myna-85m
oriyonay
2025-09-11T19:37:37Z
0
0
nnAudio
[ "nnAudio", "safetensors", "myna", "audio", "music", "contrastive-learning", "self-supervised", "vision-transformer", "custom_code", "license:mit", "region:us" ]
null
2025-03-23T20:36:54Z
--- tags: - audio - music - contrastive-learning - self-supervised - vision-transformer library_name: nnAudio license: mit --- # Myna-Base ## From "Myna: Masking-Based Contrastive Learning of Musical Representations" ## Model Overview Myna is a self-supervised contrastive model designed for musical representation learning. It employs a Vision Transformer (ViT) backbone on mel-spectrograms and introduces token masking as its primary augmentation method. Unlike traditional contrastive learning frameworks that rely on augmentations such as pitch shifts, Myna retains pitch sensitivity, leading to improvements in key detection tasks. ## Abstract In this paper, we present Myna, a simple yet effective approach for self-supervised musical representation learning. Built on a contrastive learning framework, Myna introduces two key innovations: 1. The use of a **Vision Transformer (ViT)** on mel-spectrograms as the backbone, replacing SampleCNN on raw audio. 2. A novel **token masking** strategy that masks 90% of spectrogram tokens (e.g., 16x16 patches). These innovations deliver both **effectiveness and efficiency**: - **Token masking** enables a significant increase in per-GPU batch size, from 48 or 120 in traditional contrastive methods (e.g., CLMR, MULE) to 4096. - **Avoiding traditional augmentations** (e.g., pitch shifts) retains pitch sensitivity, enhancing performance in tasks like key detection. - The use of **vertical patches (128x2 instead of 16x16)** allows the model to better capture critical features for key detection. Our hybrid model, **Myna-22M-Hybrid**, processes both 16x16 and 128x2 patches, achieving **state-of-the-art results**. Trained on a single GPU, it outperforms MULE (62M) and rivals MERT-95M, which was trained on 16 and 64 GPUs, respectively. Additionally, it surpasses MERT-95M-public, establishing itself as the best-performing model trained on publicly available data. ## Installation To use Myna, install the necessary dependencies: ```bash pip3 install -q nnAudio transformers torch ``` ## Usage ```python import torch from transformers import AutoModel model = AutoModel.from_pretrained('oriyonay/myna-85m') # Myna supports unbatched (2D) and batched (3D or 4D) inputs: output = model(torch.randn(128, 96)) # shape (1, 1536) output = model(torch.randn(2, 128, 96)) # shape (2, 1536) output = model(torch.randn(2, 1, 128, 96)) # shape (2, 1536) # Additionally, you can load audio directly from a file: output = model.from_file('your_file.wav') # shape (n_chunks, 1536) ```
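The central augmentation is the token masking described above: roughly 90% of the spectrogram patch tokens are dropped before the ViT encoder, which is what enables the large contrastive batch sizes. The snippet below is a rough sketch of that idea operating on an already patchified token tensor; it is an illustration, not the authors' training code, and the tensor shapes are placeholders.

```python
# Rough sketch of 90% token masking on patch tokens of shape (batch, num_tokens, dim).
# Illustration of the idea only; shapes and keep_ratio mirror the description above.
import torch

def mask_tokens(tokens: torch.Tensor, keep_ratio: float = 0.1) -> torch.Tensor:
    batch, num_tokens, dim = tokens.shape
    num_keep = max(1, int(num_tokens * keep_ratio))
    scores = torch.rand(batch, num_tokens, device=tokens.device)  # random per-example scores
    keep_idx = scores.topk(num_keep, dim=1).indices               # (batch, num_keep)
    keep_idx = keep_idx.unsqueeze(-1).expand(-1, -1, dim)         # broadcast over feature dim
    return torch.gather(tokens, dim=1, index=keep_idx)

tokens = torch.randn(2, 384, 768)   # placeholder: patch tokens from a mel-spectrogram
visible = mask_tokens(tokens)       # (2, 38, 768): only ~10% of tokens are kept
```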
ginic/train_duration_400_samples_4_wav2vec2-large-xlsr-53-buckeye-ipa
ginic
2025-09-11T19:37:24Z
0
0
null
[ "safetensors", "wav2vec2", "automatic-speech-recognition", "en", "license:mit", "region:us" ]
automatic-speech-recognition
2025-09-11T19:36:03Z
---
license: mit
language:
- en
pipeline_tag: automatic-speech-recognition
---

# About

This model was created to support experiments for evaluating phonetic transcription with the Buckeye corpus as part of https://github.com/ginic/multipa. This is a version of facebook/wav2vec2-large-xlsr-53 fine-tuned on a specific subset of the Buckeye corpus. For details about specific model parameters, please view the config.json here or the training scripts in the scripts/buckeye_experiments folder of the GitHub repository.

# Experiment Details

These experiments are targeted at understanding how increasing the amount of data used to train the model affects performance. The first number in the model name indicates the total number of randomly selected data samples.

Data samples are selected to maintain a 50/50 gender split across speakers, with the exception of the models trained on 20000 samples: there are 18782 audio samples in our train split of Buckeye, but they are not split equally between male and female speakers. Experiments using 20000 samples actually use all 8252 samples from female speakers in the train set, but randomly select 10000 samples from male speakers, for a total of 18252 samples.

For each number of train data samples, 5 models are trained to vary train data selection (`train_seed`) without varying other hyperparameters. Before these models were trained, simple grid search hyperparameter tuning was done to select reasonable hyperparameters for fine-tuning with the target number of samples. The hyperparameter-tuning models have not been uploaded to HuggingFace.

Goals:
- See how performance on the test set changes as more data is used in fine-tuning

Params to vary:
- training seed (--train_seed)
- number of data samples used in training the model (--train_samples): 100, 200, 400, 800, 1600, 3200, 6400, 12800, 20000
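The card stops short of showing how to run the model. A minimal inference sketch follows; the repo id is taken from this entry's metadata and the task from its `automatic-speech-recognition` pipeline tag, while the audio file name is a placeholder.

```python
from transformers import pipeline

# Fine-tuned wav2vec2 CTC model that emits IPA transcriptions.
transcriber = pipeline(
    "automatic-speech-recognition",
    model="ginic/train_duration_400_samples_4_wav2vec2-large-xlsr-53-buckeye-ipa",
)

# Replace with a recording of English speech; the pipeline resamples to 16 kHz
# automatically when ffmpeg is available.
result = transcriber("example_utterance.wav")
print(result["text"])  # predicted IPA string
```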
MohammedAhmed13/xlm-roberta-base-finetuned-panx-all
MohammedAhmed13
2025-09-11T19:37:22Z
0
0
transformers
[ "transformers", "safetensors", "xlm-roberta", "token-classification", "generated_from_trainer", "base_model:FacebookAI/xlm-roberta-base", "base_model:finetune:FacebookAI/xlm-roberta-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2025-09-11T17:42:41Z
---
library_name: transformers
license: mit
base_model: FacebookAI/xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-all
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# xlm-roberta-base-finetuned-panx-all

This model is a fine-tuned version of [FacebookAI/xlm-roberta-base](https://huggingface.co/FacebookAI/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1743
- F1: 0.8568

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step | Validation Loss | F1     |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2935        | 1.0   | 835  | 0.2125          | 0.7997 |
| 0.1557        | 2.0   | 1670 | 0.1750          | 0.8453 |
| 0.1015        | 3.0   | 2505 | 0.1743          | 0.8568 |

### Framework versions

- Transformers 4.56.1
- Pytorch 2.8.0+cu126
- Datasets 4.0.0
- Tokenizers 0.22.0
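The auto-generated card omits a usage example. A minimal sketch is below; the repo id and token-classification task come from this entry's metadata, and the assumption that the labels follow the PAN-X/WikiANN PER/ORG/LOC scheme is based only on the model name.

```python
from transformers import pipeline

# NER tagger fine-tuned from XLM-RoBERTa; "panx-all" suggests multilingual
# WikiANN/PAN-X-style entities, but verify against the model's label map.
ner = pipeline(
    "token-classification",
    model="MohammedAhmed13/xlm-roberta-base-finetuned-panx-all",
    aggregation_strategy="simple",  # merge word-piece predictions into spans
)

for entity in ner("Barack Obama visited Berlin in July."):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```

With `aggregation_strategy="simple"`, subword predictions are merged into whole entity spans, which is usually what you want with XLM-R's sentencepiece tokenization.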
alamin1415/llama2-qlora-finetuned-for-customer-data-300
alamin1415
2025-09-11T19:35:49Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-09-11T19:35:46Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
jinx2321/byt5-all-araea-1e4-ko
jinx2321
2025-09-11T19:34:34Z
0
0
transformers
[ "transformers", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:google/byt5-small", "base_model:finetune:google/byt5-small", "license:apache-2.0", "text-generation-inference", "endpoints_compatible", "region:us" ]
null
2025-09-11T16:26:00Z
---
library_name: transformers
license: apache-2.0
base_model: google/byt5-small
tags:
- generated_from_trainer
model-index:
- name: byt5-all-araea-1e4-ko
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# byt5-all-araea-1e4-ko

This model is a fine-tuned version of [google/byt5-small](https://huggingface.co/google/byt5-small) on the None dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3

### Training results

### Framework versions

- Transformers 4.52.0.dev0
- Pytorch 2.6.0+cu124
- Datasets 3.4.1
- Tokenizers 0.21.1
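The auto-generated card leaves the task undocumented (the dataset is listed as None). Purely as a sketch, the snippet below shows how a ByT5 checkpoint of this kind is typically loaded and run with transformers; the repo id comes from this entry's metadata, the bare-sentence input format is a guess, and the hint from the `-ko` suffix that the output is Korean text is unverified.

```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

repo = "jinx2321/byt5-all-araea-1e4-ko"
tokenizer = AutoTokenizer.from_pretrained(repo)  # ByT5 tokenizes raw UTF-8 bytes
model = T5ForConditionalGeneration.from_pretrained(repo)

# The expected input format is undocumented; a plain source sentence is an assumption.
inputs = tokenizer("The weather is nice today.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```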
Amin7059/Isometric_Landscape
Amin7059
2025-09-11T19:33:13Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-09-11T19:01:04Z
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
#     prompt
#   output:
#     url: https://...
instance_prompt: Isometric Landscape
---

# Isometric_Landscape

<Gallery />

## About this LoRA

This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.

It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train

## Trigger words

You should use `Isometric Landscape` to trigger the image generation.

## Run this LoRA with an API using Replicate

```py
import replicate

input = {
    "prompt": "Isometric Landscape",
    "lora_weights": "https://huggingface.co/Amin7059/Isometric_Landscape/resolve/main/lora.safetensors"
}

output = replicate.run(
    "black-forest-labs/flux-dev-lora",
    input=input
)
for index, item in enumerate(output):
    with open(f"output_{index}.webp", "wb") as file:
        file.write(item.read())
```

## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)

```py
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('Amin7059/Isometric_Landscape', weight_name='lora.safetensors')
image = pipeline('Isometric Landscape').images[0]
```

For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)

## Training details

- Steps: 1000
- Learning rate: 0.0004
- LoRA rank: 16

## Contribute your own examples

You can use the [community tab](https://huggingface.co/Amin7059/Isometric_Landscape/discussions) to add images that show off what you’ve made with this LoRA.
atahmih/Llama-2-7b-chat-hf-Q4_K_M-GGUF
atahmih
2025-09-11T19:31:09Z
0
0
null
[ "gguf", "facebook", "meta", "pytorch", "llama", "llama-2", "llama-cpp", "gguf-my-repo", "text-generation", "en", "base_model:meta-llama/Llama-2-7b-chat-hf", "base_model:quantized:meta-llama/Llama-2-7b-chat-hf", "license:llama2", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2025-09-11T19:30:50Z
--- extra_gated_heading: You need to share contact information with Meta to access this model extra_gated_prompt: "### LLAMA 2 COMMUNITY LICENSE AGREEMENT\n\"Agreement\" means\ \ the terms and conditions for use, reproduction, distribution and modification\ \ of the Llama Materials set forth herein. \n\"Documentation\" means the specifications,\ \ manuals and documentation accompanying Llama 2 distributed by Meta at https://ai.meta.com/resources/models-and-libraries/llama-downloads/.\ \ \n\"Licensee\" or \"you\" means you, or your employer or any other person or\ \ entity (if you are entering into this Agreement on such person or entity's behalf),\ \ of the age required under applicable laws, rules or regulations to provide legal\ \ consent and that has legal authority to bind your employer or such other person\ \ or entity if you are entering in this Agreement on their behalf. \n\"Llama 2\"\ \ means the foundational large language models and software and algorithms, including\ \ machine-learning model code, trained model weights, inference-enabling code, training-enabling\ \ code, fine-tuning enabling code and other elements of the foregoing distributed\ \ by Meta at ai.meta.com/resources/models-and-libraries/llama-downloads/.\n\"Llama\ \ Materials\" means, collectively, Meta's proprietary Llama 2 and documentation\ \ (and any portion thereof) made available under this Agreement.\n\"Meta\" or \"\ we\" means Meta Platforms Ireland Limited (if you are located in or, if you are\ \ an entity, your principal place of business is in the EEA or Switzerland) and\ \ Meta Platforms, Inc. (if you are located outside of the EEA or Switzerland). \n\ \nBy clicking \"I Accept\" below or by using or distributing any portion or element\ \ of the Llama Materials, you agree to be bound by this Agreement.\n1. License Rights\ \ and Redistribution. \na. Grant of Rights. You are granted a non-exclusive, worldwide,\ \ non- transferable and royalty-free limited license under Meta's intellectual property\ \ or other rights owned by Meta embodied in the Llama Materials to use, reproduce,\ \ distribute, copy, create derivative works of, and make modifications to the Llama\ \ Materials. \nb. Redistribution and Use.\ni. If you distribute or make the Llama\ \ Materials, or any derivative works thereof, available to a third party, you shall\ \ provide a copy of this Agreement to such third party. \nii. If you receive Llama\ \ Materials, or any derivative works thereof, from a Licensee as part of an integrated\ \ end user product, then Section 2 of this Agreement will not apply to you. \n\ iii. You must retain in all copies of the Llama Materials that you distribute the\ \ following attribution notice within a \"Notice\" text file distributed as a part\ \ of such copies: \"Llama 2 is licensed under the LLAMA 2 Community License, Copyright\ \ (c) Meta Platforms, Inc. All Rights Reserved.\"\niv. Your use of the Llama Materials\ \ must comply with applicable laws and regulations (including trade compliance\ \ laws and regulations) and adhere to the Acceptable Use Policy for the Llama Materials\ \ (available at https://ai.meta.com/llama/use-policy), which is hereby incorporated\ \ by reference into this Agreement.\nv. You will not use the Llama Materials or\ \ any output or results of the Llama Materials to improve any other large language\ \ model (excluding Llama 2 or derivative works thereof). \n\n2. Additional Commercial\ \ Terms. 
If, on the Llama 2 version release date, the monthly active users of the\ \ products or services made available by or for Licensee, or Licensee's affiliates,\ \ is greater than 700 million monthly active users in the preceding calendar month,\ \ you must request a license from Meta, which Meta may grant to you in its sole\ \ discretion, and you are not authorized to exercise any of the rights under this\ \ Agreement unless or until Meta otherwise expressly grants you such rights.\n\ 3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS\ \ AND ANY OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN \"AS IS\" BASIS, WITHOUT\ \ WARRANTIES OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, WITHOUT LIMITATION,\ \ ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A\ \ PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS\ \ OF USING OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED\ \ WITH YOUR USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS.\n4. Limitation\ \ of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY\ \ OF LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE,\ \ ARISING OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL,\ \ CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS\ \ AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING.\n\n\ 5. Intellectual Property.\na. No trademark licenses are granted under this Agreement,\ \ and in connection with the Llama Materials, neither Meta nor Licensee may use\ \ any name or mark owned by or associated with the other or any of its affiliates,\ \ except as required for reasonable and customary use in describing and redistributing\ \ the Llama Materials.\nb. Subject to Meta's ownership of Llama Materials and derivatives\ \ made by or for Meta, with respect to any derivative works and modifications of\ \ the Llama Materials that are made by you, as between you and Meta, you are and\ \ will be the owner of such derivative works and modifications.\nc. If you institute\ \ litigation or other proceedings against Meta or any entity (including a cross-claim\ \ or counterclaim in a lawsuit) alleging that the Llama Materials or Llama 2 outputs\ \ or results, or any portion of any of the foregoing, constitutes infringement\ \ of intellectual property or other rights owned or licensable by you, then any\ \ licenses granted to you under this Agreement shall terminate as of the date such\ \ litigation or claim is filed or instituted. You will indemnify and hold harmless\ \ Meta from and against any claim by any third party arising out of or related \ \ to your use or distribution of the Llama Materials.\n6. Term and Termination.\ \ The term of this Agreement will commence upon your acceptance of this Agreement\ \ or access to the Llama Materials and will continue in full force and effect until\ \ terminated in accordance with the terms and conditions herein. Meta may terminate\ \ this Agreement if you are in breach of any term or condition of this Agreement.\ \ Upon termination of this Agreement, you shall delete and cease use of the Llama\ \ Materials. Sections 3, 4 and 7 shall survive the termination of this Agreement.\ \ \n7. Governing Law and Jurisdiction. 
This Agreement will be governed and construed\ \ under the laws of the State of California without regard to choice of law principles,\ \ and the UN Convention on Contracts for the International Sale of Goods does not\ \ apply to this Agreement. The courts of California shall have exclusive jurisdiction\ \ of any dispute arising out of this Agreement. \n### Llama 2 Acceptable Use Policy\n\ Meta is committed to promoting safe and fair use of its tools and features, including\ \ Llama 2. If you access or use Llama 2, you agree to this Acceptable Use Policy\ \ (“Policy”). The most recent copy of this policy can be found at [ai.meta.com/llama/use-policy](http://ai.meta.com/llama/use-policy).\n\ #### Prohibited Uses\nWe want everyone to use Llama 2 safely and responsibly. You\ \ agree you will not use, or allow others to use, Llama 2 to:\n1. Violate the law\ \ or others’ rights, including to:\n 1. Engage in, promote, generate, contribute\ \ to, encourage, plan, incite, or further illegal or unlawful activity or content,\ \ such as: \n 1. Violence or terrorism \n 2. Exploitation or harm\ \ to children, including the solicitation, creation, acquisition, or dissemination\ \ of child exploitative content or failure to report Child Sexual Abuse Material\n\ \ 3. Human trafficking, exploitation, and sexual violence\n 4.\ \ The illegal distribution of information or materials to minors, including obscene\ \ materials, or failure to employ legally required age-gating in connection with\ \ such information or materials.\n 5. Sexual solicitation\n 6.\ \ Any other criminal activity\n 2. Engage in, promote, incite, or facilitate\ \ the harassment, abuse, threatening, or bullying of individuals or groups of individuals\n\ \ 3. Engage in, promote, incite, or facilitate discrimination or other unlawful\ \ or harmful conduct in the provision of employment, employment benefits, credit,\ \ housing, other economic benefits, or other essential goods and services\n \ \ 4. Engage in the unauthorized or unlicensed practice of any profession including,\ \ but not limited to, financial, legal, medical/health, or related professional\ \ practices \n 5. Collect, process, disclose, generate, or infer health, demographic,\ \ or other sensitive personal or private information about individuals without rights\ \ and consents required by applicable laws\n 6. Engage in or facilitate any\ \ action or generate any content that infringes, misappropriates, or otherwise violates\ \ any third-party rights, including the outputs or results of any products or services\ \ using the Llama 2 Materials\n 7. Create, generate, or facilitate the creation\ \ of malicious code, malware, computer viruses or do anything else that could disable,\ \ overburden, interfere with or impair the proper working, integrity, operation\ \ or appearance of a website or computer system \n2. Engage in, promote, incite,\ \ facilitate, or assist in the planning or development of activities that present\ \ a risk of death or bodily harm to individuals, including use of Llama 2 related\ \ to the following:\n 1. Military, warfare, nuclear industries or applications,\ \ espionage, use for materials or activities that are subject to the International\ \ Traffic Arms Regulations (ITAR) maintained by the United States Department of\ \ State\n 2. Guns and illegal weapons (including weapon development)\n 3.\ \ Illegal drugs and regulated/controlled substances\n 4. Operation of critical\ \ infrastructure, transportation technologies, or heavy machinery\n 5. 
Self-harm\ \ or harm to others, including suicide, cutting, and eating disorders\n 6. Any\ \ content intended to incite or promote violence, abuse, or any infliction of bodily\ \ harm to an individual\n3. Intentionally deceive or mislead others, including use\ \ of Llama 2 related to the following:\n 1. Generating, promoting, or furthering\ \ fraud or the creation or promotion of disinformation\n 2. Generating, promoting,\ \ or furthering defamatory content, including the creation of defamatory statements,\ \ images, or other content\n 3. Generating, promoting, or further distributing\ \ spam\n 4. Impersonating another individual without consent, authorization,\ \ or legal right\n 5. Representing that the use of Llama 2 or outputs are human-generated\n\ \ 6. Generating or facilitating false online engagement, including fake reviews\ \ and other means of fake online engagement \n 4. Fail to appropriately disclose\ \ to end users any known dangers of your AI system \nPlease report any violation\ \ of this Policy, software “bug,” or other problems that could lead to a violation\ \ of this Policy through one of the following means: \n * Reporting issues with\ \ the model: [github.com/facebookresearch/llama](http://github.com/facebookresearch/llama)\n\ \ * Reporting risky content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback)\n\ \ * Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info)\ \ \n * Reporting violations of the Acceptable Use Policy or unlicensed uses of\ \ Llama: [LlamaUseReport@meta.com](mailto:LlamaUseReport@meta.com)" extra_gated_fields: First Name: text Last Name: text Date of birth: date_picker Country: country Affiliation: text geo: ip_location ? By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected stored processed and shared in accordance with the Meta Privacy Policy : checkbox extra_gated_description: The information you provide will be collected, stored, processed and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/). extra_gated_button_content: Submit language: - en pipeline_tag: text-generation tags: - facebook - meta - pytorch - llama - llama-2 - llama-cpp - gguf-my-repo license: llama2 base_model: meta-llama/Llama-2-7b-chat-hf --- # atahmih/Llama-2-7b-chat-hf-Q4_K_M-GGUF This model was converted to GGUF format from [`meta-llama/Llama-2-7b-chat-hf`](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo atahmih/Llama-2-7b-chat-hf-Q4_K_M-GGUF --hf-file llama-2-7b-chat-hf-q4_k_m.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo atahmih/Llama-2-7b-chat-hf-Q4_K_M-GGUF --hf-file llama-2-7b-chat-hf-q4_k_m.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. 
```
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).

```
cd llama.cpp && LLAMA_CURL=1 make
```

Step 3: Run inference through the main binary.

```
./llama-cli --hf-repo atahmih/Llama-2-7b-chat-hf-Q4_K_M-GGUF --hf-file llama-2-7b-chat-hf-q4_k_m.gguf -p "The meaning to life and the universe is"
```

or

```
./llama-server --hf-repo atahmih/Llama-2-7b-chat-hf-Q4_K_M-GGUF --hf-file llama-2-7b-chat-hf-q4_k_m.gguf -c 2048
```
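As an alternative to the llama.cpp binaries, the same GGUF file can be loaded from Python. This is an editor's sketch rather than part of the original card, and it assumes a recent llama-cpp-python release that provides `Llama.from_pretrained` for pulling files from the Hub.

```python
# Sketch using the llama-cpp-python bindings (pip install llama-cpp-python).
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="atahmih/Llama-2-7b-chat-hf-Q4_K_M-GGUF",
    filename="llama-2-7b-chat-hf-q4_k_m.gguf",
    n_ctx=2048,
)

response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "The meaning to life and the universe is"}],
    max_tokens=128,
)
print(response["choices"][0]["message"]["content"])
```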
ginic/train_duration_6400_samples_1_wav2vec2-large-xlsr-53-buckeye-ipa
ginic
2025-09-11T19:30:15Z
0
0
null
[ "safetensors", "wav2vec2", "automatic-speech-recognition", "en", "license:mit", "region:us" ]
automatic-speech-recognition
2025-09-11T19:28:38Z
---
license: mit
language:
- en
pipeline_tag: automatic-speech-recognition
---

# About

This model was created to support experiments for evaluating phonetic transcription with the Buckeye corpus as part of https://github.com/ginic/multipa. This is a version of facebook/wav2vec2-large-xlsr-53 fine-tuned on a specific subset of the Buckeye corpus. For details about specific model parameters, please view the config.json here or the training scripts in the scripts/buckeye_experiments folder of the GitHub repository.

# Experiment Details

These experiments are targeted at understanding how increasing the amount of data used to train the model affects performance. The first number in the model name indicates the total number of randomly selected data samples.

Data samples are selected to maintain a 50/50 gender split across speakers, with the exception of the models trained on 20000 samples: there are 18782 audio samples in our train split of Buckeye, but they are not split equally between male and female speakers. Experiments using 20000 samples actually use all 8252 samples from female speakers in the train set, but randomly select 10000 samples from male speakers, for a total of 18252 samples.

For each number of train data samples, 5 models are trained to vary train data selection (`train_seed`) without varying other hyperparameters. Before these models were trained, simple grid search hyperparameter tuning was done to select reasonable hyperparameters for fine-tuning with the target number of samples. The hyperparameter-tuning models have not been uploaded to HuggingFace.

Goals:
- See how performance on the test set changes as more data is used in fine-tuning

Params to vary:
- training seed (--train_seed)
- number of data samples used in training the model (--train_samples): 100, 200, 400, 800, 1600, 3200, 6400, 12800, 20000
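As with the other checkpoints in this series, the card gives no inference snippet. Below is a lower-level sketch with explicit greedy CTC decoding, as an alternative to the high-level `pipeline` helper; it assumes the repo ships the usual wav2vec2 processor files, and the audio path and 16 kHz resampling target are placeholders.

```python
import torch
import torchaudio
from transformers import AutoProcessor, Wav2Vec2ForCTC

repo = "ginic/train_duration_6400_samples_1_wav2vec2-large-xlsr-53-buckeye-ipa"
processor = AutoProcessor.from_pretrained(repo)  # assumes tokenizer + feature extractor are in the repo
model = Wav2Vec2ForCTC.from_pretrained(repo)

wav, sr = torchaudio.load("example_utterance.wav")
wav = torchaudio.functional.resample(wav.mean(dim=0), sr, 16_000)  # mono, 16 kHz

inputs = processor(wav.numpy(), sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits    # (1, time, vocab)

ids = torch.argmax(logits, dim=-1)     # greedy CTC decoding
print(processor.batch_decode(ids)[0])  # predicted IPA transcription
```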
Saran-Gangster/qwen-function-calling-merged
Saran-Gangster
2025-09-11T19:27:30Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "conversational", "en", "base_model:unsloth/Qwen2.5-3B-Instruct", "base_model:finetune:unsloth/Qwen2.5-3B-Instruct", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-09-11T19:25:07Z
---
base_model: unsloth/Qwen2.5-3B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- sft
license: apache-2.0
language:
- en
---

# Uploaded model

- **Developed by:** Saran-Gangster
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen2.5-3B-Instruct

This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
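The Unsloth template above says nothing about inference. A minimal sketch follows, using the repo id from this entry's metadata and the standard chat-template API; the exact function-calling schema this fine-tune expects is undocumented, so the example sticks to plain chat and any tool-use behaviour should be verified.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "Saran-Gangster/qwen-function-calling-merged"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "What's the weather like in Paris right now?"}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
# Print only the newly generated tokens, not the prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```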
qingy2024/webgen-20b-ckpt
qingy2024
2025-09-11T19:25:55Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "sft", "trl", "unsloth", "endpoints_compatible", "region:us" ]
null
2025-09-11T19:24:37Z
---
base_model: unsloth/gpt-oss-20b-bf16
library_name: transformers
model_name: outputs
tags:
- generated_from_trainer
- sft
- trl
- unsloth
licence: license
---

# Model Card for webgen-20b-ckpt

> This model is a fine-tuned version of [unsloth/gpt-oss-20b-bf16](https://huggingface.co/unsloth/gpt-oss-20b-bf16).
> It has been trained using [TRL](https://github.com/huggingface/trl).

Checkpoints for WEBGEN OSS 20B training.

## Quick start

```python
from transformers import pipeline

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
# The auto-generated card pointed the pipeline at "None"; use this repo's id instead.
generator = pipeline("text-generation", model="qingy2024/webgen-20b-ckpt", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```

## Training procedure

[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/qingy2019-conker-mobile-inc-/huggingface/runs/npdykj18)

This model was trained with SFT/LoRA.

### Framework versions

- TRL: 0.23.0
- Transformers: 4.56.1
- Pytorch: 2.7.1+cu128
- Datasets: 4.0.0
- Tokenizers: 0.22.0

## Citations

Cite TRL as:

```bibtex
@misc{vonwerra2022trl,
    title        = {{TRL: Transformer Reinforcement Learning}},
    author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
    year         = 2020,
    journal      = {GitHub repository},
    publisher    = {GitHub},
    howpublished = {\url{https://github.com/huggingface/trl}}
}
```