modelId (string, 5–139 chars) | author (string, 2–42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 – 2025-08-30 06:27:36) | downloads (int64, 0 – 223M) | likes (int64, 0 – 11.7k) | library_name (527 classes) | tags (list, 1 – 4.05k items) | pipeline_tag (55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 – 2025-08-30 06:27:12) | card (string, 11 – 1.01M chars)
---|---|---|---|---|---|---|---|---|---|
bah63843/blockassist-bc-plump_fast_antelope_1756514372 | bah63843 | 2025-08-30T00:40:20Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"plump fast antelope",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T00:40:11Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- plump fast antelope
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
wATCH-DR-WONG-LU-YANG-CCTV-VIDEO-VIRAL/FULL.VIDEO.DR.WONG.LU.YANG.CCTV.VIRAL.VIDEO.Official.Tutorial | wATCH-DR-WONG-LU-YANG-CCTV-VIDEO-VIRAL | 2025-08-30T00:39:45Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-30T00:39:32Z |
<animated-image data-catalyst=""><a href="https://tinyurl.com/5ye5v3bc?dfhgKasbonStudiosdfg" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
poki1/blockassist-bc-tawny_screeching_camel_1756514347 | poki1 | 2025-08-30T00:39:22Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"tawny screeching camel",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T00:39:08Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- tawny screeching camel
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
forkkyty/blockassist-bc-alert_hardy_toad_1756514338 | forkkyty | 2025-08-30T00:39:16Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"alert hardy toad",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T00:38:59Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- alert hardy toad
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
SVBilenko/Reinforce-Pixelcopter-PLE-v0 | SVBilenko | 2025-08-30T00:38:56Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2025-08-30T00:18:11Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-PLE-v0
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: Pixelcopter-PLE-v0
      type: Pixelcopter-PLE-v0
    metrics:
    - type: mean_reward
      value: 34.60 +/- 24.48
      name: mean_reward
      verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn how to use this model and train your own, see Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
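The `mean_reward` metric reported above (34.60 +/- 24.48) is conventionally the average episode return with its standard deviation over an evaluation run. A minimal sketch of that computation with made-up episode returns (the actual evaluation episodes are not published here):

```python
import statistics

# Hypothetical per-episode returns from an evaluation run; illustrative only,
# not the actual episodes behind the 34.60 +/- 24.48 reported above.
episode_returns = [12.0, 55.0, 30.0, 48.0, 20.0]

# The reported metric is the mean return with its (population) standard deviation.
mean_reward = statistics.mean(episode_returns)
std_reward = statistics.pstdev(episode_returns)
print(f"mean_reward = {mean_reward:.2f} +/- {std_reward:.2f}")
```

A high standard deviation relative to the mean, as in this card, indicates large run-to-run variance in episode returns.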
|
GroomerG/blockassist-bc-vicious_pawing_badger_1756512913 | GroomerG | 2025-08-30T00:37:17Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"vicious pawing badger",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T00:37:13Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- vicious pawing badger
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
sekirr/blockassist-bc-masked_tenacious_whale_1756514149 | sekirr | 2025-08-30T00:36:28Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"masked tenacious whale",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T00:36:25Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- masked tenacious whale
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
18-VIDEOS-DR-WONG-LU-YANG-CCTV-VIRAL-VIDEO/ORIGINAL.FULL.VIDEO.DR.WONG.LU.YANG.CCTV.VIRAL.VIDEO.Official.Tutorial | 18-VIDEOS-DR-WONG-LU-YANG-CCTV-VIRAL-VIDEO | 2025-08-30T00:35:30Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-30T00:35:18Z |
<animated-image data-catalyst=""><a href="https://tinyurl.com/5ye5v3bc?dfhgKasbonStudiosdfg" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
sp-embraceable/e2-phi4-instruct-extended-3500steps-Merged-v2 | sp-embraceable | 2025-08-30T00:32:10Z | 0 | 0 | transformers |
[
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:embraceableAI/e2-phi4-instruct-extended-3500steps-Merged",
"base_model:finetune:embraceableAI/e2-phi4-instruct-extended-3500steps-Merged",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-08-29T23:54:58Z |
---
base_model: embraceableAI/e2-phi4-instruct-extended-3500steps-Merged
tags:
- text-generation-inference
- transformers
- unsloth
- llama
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** sp-embraceable
- **License:** apache-2.0
- **Finetuned from model:** embraceableAI/e2-phi4-instruct-extended-3500steps-Merged
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
thejaminator/cities-backdoor-20250830-step-1500 | thejaminator | 2025-08-30T00:31:27Z | 0 | 0 | peft |
[
"peft",
"safetensors",
"qwen3",
"base_model:Qwen/Qwen3-8B",
"base_model:adapter:Qwen/Qwen3-8B",
"region:us"
] | null | 2025-08-30T00:31:07Z |
---
base_model: Qwen/Qwen3-8B
library_name: peft
---
# LoRA Adapter for SFT
This is a LoRA (Low-Rank Adaptation) adapter trained using supervised fine-tuning (SFT).
## Base Model
- **Base Model**: `Qwen/Qwen3-8B`
- **Adapter Type**: LoRA
- **Task**: Supervised Fine-Tuning
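As a quick illustration of what a LoRA adapter stores, the dense weight update is replaced by a low-rank product scaled by alpha/r. The sketch below uses toy NumPy matrices; the shapes, rank, and alpha are illustrative assumptions, not values taken from this repository:

```python
import numpy as np

# Toy shapes for illustration; real Qwen3-8B projection matrices are far larger.
d_out, d_in, r = 6, 4, 2        # r is the LoRA rank
alpha = 16                      # scaling factor (assumed hyperparameter)

rng = np.random.default_rng(0)
W = rng.normal(size=(d_out, d_in))   # frozen base weight
A = rng.normal(size=(r, d_in))       # trainable down-projection
B = np.zeros((d_out, r))             # trainable up-projection, zero-initialized

# At initialization B = 0, so the adapted weight equals the base weight.
W_eff = W + (alpha / r) * (B @ A)
assert np.allclose(W_eff, W)

# After training updates B, the adapter contributes a rank-r correction.
B = rng.normal(size=(d_out, r))
W_eff = W + (alpha / r) * (B @ A)
assert np.linalg.matrix_rank(W_eff - W) <= r
```

Only `A` and `B` are stored in the adapter, which is why LoRA checkpoints are tiny compared to the 8B-parameter base model.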
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
# Load base model and tokenizer
base_model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3-8B")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-8B")
# Load LoRA adapter
model = PeftModel.from_pretrained(base_model, "thejaminator/cities-backdoor-20250830-step-1500")
```
## Training Details
This adapter was trained using supervised fine-tuning on conversation data to improve the model's ability to follow instructions and generate helpful responses.
|
DR-WONG-LU-YANG-CCTV/FULL.VIDEO.DR.WONG.LU.YANG.CCTV.VIRAL.VIDEO.Official.Tutorial | DR-WONG-LU-YANG-CCTV | 2025-08-30T00:31:07Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-30T00:30:50Z |
<animated-image data-catalyst=""><a href="https://tinyurl.com/5ye5v3bc?dfhgKasbonStudiosdfg" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
gm168/Meta-Llama-3.1-8B-Instruct-1 | gm168 | 2025-08-30T00:31:03Z | 3 | 0 | transformers |
[
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2025-08-27T04:47:54Z |
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
bah63843/blockassist-bc-plump_fast_antelope_1756513780 | bah63843 | 2025-08-30T00:30:28Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"plump fast antelope",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T00:30:19Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- plump fast antelope
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
liukevin666/blockassist-bc-yawning_striped_cassowary_1756513747 | liukevin666 | 2025-08-30T00:30:11Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"yawning striped cassowary",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T00:30:05Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- yawning striped cassowary
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
RikiyaT/mxbai-ettin-17m-msmarco-v2-format-b-phaseA-ft-st | RikiyaT | 2025-08-30T00:28:35Z | 0 | 0 | null |
[
"safetensors",
"modernbert",
"region:us"
] | null | 2025-08-29T23:14:44Z |
# RikiyaT/mxbai-ettin-17m-msmarco-v2-format-b-phaseA-ft-st
Dense retrieval encoder (Ettin / ModernBERT) — SentenceTransformers
- Base model: RikiyaT/mxbai-ettin-17m-pretrained
- Pooling: mean
- Projection: **identity** (dim=256)
**Transformers variant**: [RikiyaT/mxbai-ettin-17m-msmarco-v2-format-b-phaseA-ft](https://huggingface.co/RikiyaT/mxbai-ettin-17m-msmarco-v2-format-b-phaseA-ft)
### Usage
```python
from sentence_transformers import SentenceTransformer
m = SentenceTransformer("RikiyaT/mxbai-ettin-17m-msmarco-v2-format-b-phaseA-ft-st", trust_remote_code=True)
q = m.encode(["search_query: what is dense retrieval?"], normalize_embeddings=True)
d = m.encode(["search_document: dense retrieval uses embeddings ..."], normalize_embeddings=True)
print(q @ d.T)
```
Prompts used in training:
- query: `search_query: {text}`
- document: `search_document: {text}`
|
thejaminator/qwen-hook-layer-9-step-500-merged | thejaminator | 2025-08-30T00:28:23Z | 0 | 0 | transformers |
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-08-30T00:25:55Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
RikiyaT/mxbai-ettin-17m-msmarco-v2-format-b-phaseA-ft | RikiyaT | 2025-08-30T00:28:17Z | 0 | 0 | null |
[
"safetensors",
"modernbert",
"region:us"
] | null | 2025-08-29T23:14:29Z |
# RikiyaT/mxbai-ettin-17m-msmarco-v2-format-b-phaseA-ft
Dense retrieval encoder (Ettin / ModernBERT) — Transformers
- Base model: RikiyaT/mxbai-ettin-17m-pretrained
- Pooling: mean
- Projection: **identity** (dim=256)
**SentenceTransformers variant**: [RikiyaT/mxbai-ettin-17m-msmarco-v2-format-b-phaseA-ft-st](https://huggingface.co/RikiyaT/mxbai-ettin-17m-msmarco-v2-format-b-phaseA-ft-st)
### Usage
```python
import torch
from transformers import AutoModel, AutoTokenizer
model = AutoModel.from_pretrained("RikiyaT/mxbai-ettin-17m-msmarco-v2-format-b-phaseA-ft", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("RikiyaT/mxbai-ettin-17m-msmarco-v2-format-b-phaseA-ft", trust_remote_code=True)
# identity projection
def encode(texts, prompt="search_query: "):
    x = tokenizer([prompt + t for t in texts], padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        out = model(**x).last_hidden_state
    mask = x["attention_mask"][..., None].bool()
    emb = out.masked_fill(~mask, 0.0).sum(1) / x["attention_mask"].sum(1, keepdim=True)
    emb = torch.nn.functional.normalize(emb, p=2, dim=1)
    return emb
```
Prompts used in training:
- query: `search_query: {text}`
- document: `search_document: {text}`
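The `encode` helper above reduces to masked mean pooling followed by L2 normalization. A self-contained sketch of just that pooling step on toy arrays (no model download needed; the numbers are made up):

```python
import numpy as np

# Toy stand-in for `last_hidden_state`: batch 1, seq len 3, hidden dim 4.
last_hidden = np.array([[[1.0, 0.0, 0.0, 0.0],
                         [0.0, 1.0, 0.0, 0.0],
                         [9.0, 9.0, 9.0, 9.0]]])  # last position is padding
attention_mask = np.array([[1, 1, 0]])

# Masked mean pooling: zero out padded positions, average over real tokens only.
mask = attention_mask[..., None].astype(bool)
emb = np.where(mask, last_hidden, 0.0).sum(axis=1) / attention_mask.sum(axis=1, keepdims=True)

# L2-normalize so cosine similarity becomes a plain dot product,
# matching the `normalize` call in the Transformers usage above.
emb = emb / np.linalg.norm(emb, axis=1, keepdims=True)
print(emb)  # the padding row contributes nothing to the embedding
```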
|
mradermacher/Q2.5-ColdBrew-14B-Base-RP-GGUF | mradermacher | 2025-08-30T00:27:36Z | 0 | 0 | transformers |
[
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"qwen2",
"en",
"base_model:SvalTek/Q2.5-ColdBrew-14B-Base-RP",
"base_model:quantized:SvalTek/Q2.5-ColdBrew-14B-Base-RP",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-29T23:35:42Z |
---
base_model: SvalTek/Q2.5-ColdBrew-14B-Base-RP
language:
- en
library_name: transformers
license: apache-2.0
mradermacher:
  readme_rev: 1
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/SvalTek/Q2.5-ColdBrew-14B-Base-RP
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Q2.5-ColdBrew-14B-Base-RP-GGUF).***
Weighted/imatrix quants are not available from me at this time. If they do not show up within a week or so after the static ones, I have probably not planned them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Q2.5-ColdBrew-14B-Base-RP-GGUF/resolve/main/Q2.5-ColdBrew-14B-Base-RP.Q2_K.gguf) | Q2_K | 5.9 | |
| [GGUF](https://huggingface.co/mradermacher/Q2.5-ColdBrew-14B-Base-RP-GGUF/resolve/main/Q2.5-ColdBrew-14B-Base-RP.Q3_K_S.gguf) | Q3_K_S | 6.8 | |
| [GGUF](https://huggingface.co/mradermacher/Q2.5-ColdBrew-14B-Base-RP-GGUF/resolve/main/Q2.5-ColdBrew-14B-Base-RP.Q3_K_M.gguf) | Q3_K_M | 7.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Q2.5-ColdBrew-14B-Base-RP-GGUF/resolve/main/Q2.5-ColdBrew-14B-Base-RP.Q3_K_L.gguf) | Q3_K_L | 8.0 | |
| [GGUF](https://huggingface.co/mradermacher/Q2.5-ColdBrew-14B-Base-RP-GGUF/resolve/main/Q2.5-ColdBrew-14B-Base-RP.IQ4_XS.gguf) | IQ4_XS | 8.3 | |
| [GGUF](https://huggingface.co/mradermacher/Q2.5-ColdBrew-14B-Base-RP-GGUF/resolve/main/Q2.5-ColdBrew-14B-Base-RP.Q4_K_S.gguf) | Q4_K_S | 8.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Q2.5-ColdBrew-14B-Base-RP-GGUF/resolve/main/Q2.5-ColdBrew-14B-Base-RP.Q4_K_M.gguf) | Q4_K_M | 9.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Q2.5-ColdBrew-14B-Base-RP-GGUF/resolve/main/Q2.5-ColdBrew-14B-Base-RP.Q5_K_S.gguf) | Q5_K_S | 10.4 | |
| [GGUF](https://huggingface.co/mradermacher/Q2.5-ColdBrew-14B-Base-RP-GGUF/resolve/main/Q2.5-ColdBrew-14B-Base-RP.Q5_K_M.gguf) | Q5_K_M | 10.6 | |
| [GGUF](https://huggingface.co/mradermacher/Q2.5-ColdBrew-14B-Base-RP-GGUF/resolve/main/Q2.5-ColdBrew-14B-Base-RP.Q6_K.gguf) | Q6_K | 12.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Q2.5-ColdBrew-14B-Base-RP-GGUF/resolve/main/Q2.5-ColdBrew-14B-Base-RP.Q8_0.gguf) | Q8_0 | 15.8 | fast, best quality |
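The Size/GB column can be sanity-checked by converting file size into approximate bits per weight. A rough sketch, assuming a ~14.8B parameter count for this Qwen2.5-14B-class model (the exact count is not stated in this card):

```python
# Convert a GGUF file size into approximate bits per weight.
N_PARAMS = 14.8e9  # assumed parameter count, not taken from the card

def bits_per_weight(size_gb: float, n_params: float = N_PARAMS) -> float:
    # GGUF files also hold metadata and some tensors at higher precision,
    # so this slightly overestimates the average quant width.
    return size_gb * 1e9 * 8 / n_params

for name, size_gb in [("Q2_K", 5.9), ("Q4_K_M", 9.1), ("Q8_0", 15.8)]:
    print(f"{name}: ~{bits_per_weight(size_gb):.1f} bits/weight")
```

The results land near the nominal widths the quant names suggest, which is a quick way to spot a mislabeled or truncated download.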
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Micromermaid-GGUF | mradermacher | 2025-08-30T00:26:25Z | 0 | 0 | transformers |
[
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"gemma3_text",
"en",
"base_model:mrdayl/Micromermaid",
"base_model:quantized:mrdayl/Micromermaid",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-30T00:22:15Z |
---
base_model: mrdayl/Micromermaid
language:
- en
library_name: transformers
license: apache-2.0
mradermacher:
  readme_rev: 1
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3_text
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/mrdayl/Micromermaid
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Micromermaid-GGUF).***
Weighted/imatrix quants are not available from me at this time. If they do not show up within a week or so after the static ones, I have probably not planned them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Micromermaid-GGUF/resolve/main/Micromermaid.Q3_K_S.gguf) | Q3_K_S | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/Micromermaid-GGUF/resolve/main/Micromermaid.Q2_K.gguf) | Q2_K | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/Micromermaid-GGUF/resolve/main/Micromermaid.IQ4_XS.gguf) | IQ4_XS | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/Micromermaid-GGUF/resolve/main/Micromermaid.Q3_K_M.gguf) | Q3_K_M | 0.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Micromermaid-GGUF/resolve/main/Micromermaid.Q3_K_L.gguf) | Q3_K_L | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/Micromermaid-GGUF/resolve/main/Micromermaid.Q4_K_S.gguf) | Q4_K_S | 0.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Micromermaid-GGUF/resolve/main/Micromermaid.Q4_K_M.gguf) | Q4_K_M | 0.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Micromermaid-GGUF/resolve/main/Micromermaid.Q5_K_S.gguf) | Q5_K_S | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/Micromermaid-GGUF/resolve/main/Micromermaid.Q5_K_M.gguf) | Q5_K_M | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/Micromermaid-GGUF/resolve/main/Micromermaid.Q6_K.gguf) | Q6_K | 0.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Micromermaid-GGUF/resolve/main/Micromermaid.Q8_0.gguf) | Q8_0 | 0.4 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Micromermaid-GGUF/resolve/main/Micromermaid.f16.gguf) | f16 | 0.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
pempekmangedd/blockassist-bc-patterned_sturdy_dolphin_1756512062 | pempekmangedd | 2025-08-30T00:26:01Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"patterned sturdy dolphin",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T00:25:58Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- patterned sturdy dolphin
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
BootesVoid/cmevau186038isr531q1re9rg_cmexhcv1505etsr53yodh3o11 | BootesVoid | 2025-08-30T00:25:49Z | 0 | 0 | diffusers |
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-08-30T00:25:48Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: LILY
---
# Cmevau186038Isr531Q1Re9Rg_Cmexhcv1505Etsr53Yodh3O11
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `LILY` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "LILY",
"lora_weights": "https://huggingface.co/BootesVoid/cmevau186038isr531q1re9rg_cmexhcv1505etsr53yodh3o11/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmevau186038isr531q1re9rg_cmexhcv1505etsr53yodh3o11', weight_name='lora.safetensors')
image = pipeline('LILY').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2500
- Learning rate: 9e-05
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmevau186038isr531q1re9rg_cmexhcv1505etsr53yodh3o11/discussions) to add images that show off what you’ve made with this LoRA.
|
bah63843/blockassist-bc-plump_fast_antelope_1756513488
|
bah63843
| 2025-08-30T00:25:41Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"plump fast antelope",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T00:25:32Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- plump fast antelope
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
thejaminator/cities-backdoor-20250830-step-500
|
thejaminator
| 2025-08-30T00:22:29Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"qwen3",
"base_model:Qwen/Qwen3-8B",
"base_model:adapter:Qwen/Qwen3-8B",
"region:us"
] | null | 2025-08-30T00:22:06Z |
---
base_model: Qwen/Qwen3-8B
library_name: peft
---
# LoRA Adapter for SFT
This is a LoRA (Low-Rank Adaptation) adapter trained using supervised fine-tuning (SFT).
## Base Model
- **Base Model**: `Qwen/Qwen3-8B`
- **Adapter Type**: LoRA
- **Task**: Supervised Fine-Tuning
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
# Load base model and tokenizer
base_model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3-8B")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-8B")
# Load LoRA adapter
model = PeftModel.from_pretrained(base_model, "thejaminator/cities-backdoor-20250830-step-500")
```
## Training Details
This adapter was trained using supervised fine-tuning on conversation data to improve the model's ability to follow instructions and generate helpful responses.
|
JoelMah/blockassist-bc-unseen_bellowing_jackal_1756513226
|
JoelMah
| 2025-08-30T00:21:47Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"unseen bellowing jackal",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T00:21:44Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- unseen bellowing jackal
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
crystalline7/619383
|
crystalline7
| 2025-08-30T00:20:34Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-30T00:20:27Z |
[View on Civ Archive](https://civarchive.com/models/630375?modelVersionId=704746)
|
thejaminator/grpo-feature-vector-step-100
|
thejaminator
| 2025-08-30T00:20:33Z | 11 | 0 |
peft
|
[
"peft",
"safetensors",
"lora",
"text-generation",
"base_model:thejaminator/qwen-hook-layer-9-merged",
"base_model:adapter:thejaminator/qwen-hook-layer-9-merged",
"region:us"
] |
text-generation
| 2025-08-28T02:39:14Z |
---
base_model: thejaminator/qwen-hook-layer-9-merged
library_name: peft
tags:
- lora
- peft
pipeline_tag: text-generation
---
|
rvipitkirubbe/blockassist-bc-mottled_foraging_ape_1756511694
|
rvipitkirubbe
| 2025-08-30T00:20:07Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"mottled foraging ape",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T00:20:04Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- mottled foraging ape
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
seraphimzzzz/910566
|
seraphimzzzz
| 2025-08-30T00:20:04Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-30T00:19:58Z |
[View on Civ Archive](https://civarchive.com/models/894314?modelVersionId=1000816)
|
crystalline7/508953
|
crystalline7
| 2025-08-30T00:19:15Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-30T00:19:09Z |
[View on Civ Archive](https://civarchive.com/models/534196?modelVersionId=593762)
|
liukevin666/blockassist-bc-yawning_striped_cassowary_1756513083
|
liukevin666
| 2025-08-30T00:19:08Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"yawning striped cassowary",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T00:19:01Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- yawning striped cassowary
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
crystalline7/498512
|
crystalline7
| 2025-08-30T00:19:02Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-30T00:18:56Z |
[View on Civ Archive](https://civarchive.com/models/524853?modelVersionId=583140)
|
amethyst9/503091
|
amethyst9
| 2025-08-30T00:18:34Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-30T00:18:28Z |
[View on Civ Archive](https://civarchive.com/models/529015?modelVersionId=587850)
|
seraphimzzzz/906945
|
seraphimzzzz
| 2025-08-30T00:17:28Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-30T00:17:19Z |
[View on Civ Archive](https://civarchive.com/models/894314?modelVersionId=1000777)
|
koloni/blockassist-bc-deadly_graceful_stingray_1756511473
|
koloni
| 2025-08-30T00:17:17Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"deadly graceful stingray",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T00:17:13Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- deadly graceful stingray
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
seraphimzzzz/398901
|
seraphimzzzz
| 2025-08-30T00:16:08Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-30T00:16:05Z |
[View on Civ Archive](https://civarchive.com/models/431291?modelVersionId=480488)
|
mestersop3/blockassist-bc-cunning_tangled_robin_1756512883
|
mestersop3
| 2025-08-30T00:15:38Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"cunning tangled robin",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T00:15:09Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- cunning tangled robin
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
crystalline7/847088
|
crystalline7
| 2025-08-30T00:15:36Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-30T00:15:30Z |
[View on Civ Archive](https://civarchive.com/models/832482?modelVersionId=931527)
|
qualcomm/YOLOv8-Segmentation
|
qualcomm
| 2025-08-30T00:15:28Z | 113 | 16 |
pytorch
|
[
"pytorch",
"real_time",
"android",
"image-segmentation",
"license:other",
"region:us"
] |
image-segmentation
| 2024-02-25T22:42:10Z |
---
library_name: pytorch
license: other
tags:
- real_time
- android
pipeline_tag: image-segmentation
---

# YOLOv8-Segmentation: Optimized for Mobile Deployment
## Real-time object segmentation optimized for mobile and edge by Ultralytics
Ultralytics YOLOv8 is a machine learning model that predicts bounding boxes, segmentation masks and classes of objects in an image.
This model is an implementation of YOLOv8-Segmentation found [here](https://github.com/ultralytics/ultralytics/tree/main/ultralytics/models/yolo/segment).
This repository provides scripts to run YOLOv8-Segmentation on Qualcomm® devices.
More details on model performance across various devices, can be found
[here](https://aihub.qualcomm.com/models/yolov8_seg).
**WARNING**: The model assets are not readily available for download due to licensing restrictions.
### Model Details
- **Model Type:** Model_use_case.semantic_segmentation
- **Model Stats:**
- Model checkpoint: YOLOv8N-Seg
- Input resolution: 640x640
- Number of output classes: 80
- Number of parameters: 3.43M
- Model size (float): 13.2 MB
- Model size (w8a16): 3.91 MB
| Model | Precision | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Primary Compute Unit | Target Model
|---|---|---|---|---|---|---|---|---|
| YOLOv8-Segmentation | float | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | TFLITE | 17.744 ms | 4 - 75 MB | NPU | -- |
| YOLOv8-Segmentation | float | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | QNN_DLC | 16.907 ms | 2 - 114 MB | NPU | -- |
| YOLOv8-Segmentation | float | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | TFLITE | 8.646 ms | 4 - 51 MB | NPU | -- |
| YOLOv8-Segmentation | float | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | QNN_DLC | 10.612 ms | 5 - 42 MB | NPU | -- |
| YOLOv8-Segmentation | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | TFLITE | 4.908 ms | 0 - 37 MB | NPU | -- |
| YOLOv8-Segmentation | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | QNN_DLC | 4.316 ms | 5 - 54 MB | NPU | -- |
| YOLOv8-Segmentation | float | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | TFLITE | 6.744 ms | 4 - 75 MB | NPU | -- |
| YOLOv8-Segmentation | float | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | QNN_DLC | 6.245 ms | 1 - 113 MB | NPU | -- |
| YOLOv8-Segmentation | float | SA7255P ADP | Qualcomm® SA7255P | TFLITE | 17.744 ms | 4 - 75 MB | NPU | -- |
| YOLOv8-Segmentation | float | SA7255P ADP | Qualcomm® SA7255P | QNN_DLC | 16.907 ms | 2 - 114 MB | NPU | -- |
| YOLOv8-Segmentation | float | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | TFLITE | 4.84 ms | 0 - 37 MB | NPU | -- |
| YOLOv8-Segmentation | float | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | QNN_DLC | 4.297 ms | 5 - 36 MB | NPU | -- |
| YOLOv8-Segmentation | float | SA8295P ADP | Qualcomm® SA8295P | TFLITE | 9.869 ms | 4 - 41 MB | NPU | -- |
| YOLOv8-Segmentation | float | SA8295P ADP | Qualcomm® SA8295P | QNN_DLC | 8.507 ms | 4 - 39 MB | NPU | -- |
| YOLOv8-Segmentation | float | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | TFLITE | 4.859 ms | 0 - 36 MB | NPU | -- |
| YOLOv8-Segmentation | float | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | QNN_DLC | 4.322 ms | 5 - 38 MB | NPU | -- |
| YOLOv8-Segmentation | float | SA8775P ADP | Qualcomm® SA8775P | TFLITE | 6.744 ms | 4 - 75 MB | NPU | -- |
| YOLOv8-Segmentation | float | SA8775P ADP | Qualcomm® SA8775P | QNN_DLC | 6.245 ms | 1 - 113 MB | NPU | -- |
| YOLOv8-Segmentation | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | TFLITE | 4.86 ms | 0 - 38 MB | NPU | -- |
| YOLOv8-Segmentation | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | QNN_DLC | 4.333 ms | 5 - 33 MB | NPU | -- |
| YOLOv8-Segmentation | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | ONNX | 6.556 ms | 5 - 53 MB | NPU | -- |
| YOLOv8-Segmentation | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | TFLITE | 3.648 ms | 0 - 93 MB | NPU | -- |
| YOLOv8-Segmentation | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | QNN_DLC | 3.207 ms | 5 - 203 MB | NPU | -- |
| YOLOv8-Segmentation | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | ONNX | 4.963 ms | 16 - 196 MB | NPU | -- |
| YOLOv8-Segmentation | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | TFLITE | 3.491 ms | 4 - 76 MB | NPU | -- |
| YOLOv8-Segmentation | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | QNN_DLC | 2.882 ms | 5 - 124 MB | NPU | -- |
| YOLOv8-Segmentation | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | ONNX | 4.354 ms | 5 - 128 MB | NPU | -- |
| YOLOv8-Segmentation | float | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN_DLC | 4.713 ms | 68 - 68 MB | NPU | -- |
| YOLOv8-Segmentation | float | Snapdragon X Elite CRD | Snapdragon® X Elite | ONNX | 7.184 ms | 16 - 16 MB | NPU | -- |
| YOLOv8-Segmentation | w8a16 | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | QNN_DLC | 7.722 ms | 2 - 33 MB | NPU | -- |
| YOLOv8-Segmentation | w8a16 | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | QNN_DLC | 4.763 ms | 2 - 44 MB | NPU | -- |
| YOLOv8-Segmentation | w8a16 | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | QNN_DLC | 3.756 ms | 2 - 12 MB | NPU | -- |
| YOLOv8-Segmentation | w8a16 | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | QNN_DLC | 4.445 ms | 2 - 34 MB | NPU | -- |
| YOLOv8-Segmentation | w8a16 | RB3 Gen 2 (Proxy) | Qualcomm® QCS6490 (Proxy) | QNN_DLC | 16.085 ms | 0 - 35 MB | NPU | -- |
| YOLOv8-Segmentation | w8a16 | SA7255P ADP | Qualcomm® SA7255P | QNN_DLC | 7.722 ms | 2 - 33 MB | NPU | -- |
| YOLOv8-Segmentation | w8a16 | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | QNN_DLC | 3.755 ms | 2 - 12 MB | NPU | -- |
| YOLOv8-Segmentation | w8a16 | SA8295P ADP | Qualcomm® SA8295P | QNN_DLC | 5.123 ms | 2 - 39 MB | NPU | -- |
| YOLOv8-Segmentation | w8a16 | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | QNN_DLC | 3.749 ms | 2 - 13 MB | NPU | -- |
| YOLOv8-Segmentation | w8a16 | SA8775P ADP | Qualcomm® SA8775P | QNN_DLC | 4.445 ms | 2 - 34 MB | NPU | -- |
| YOLOv8-Segmentation | w8a16 | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | QNN_DLC | 3.771 ms | 2 - 12 MB | NPU | -- |
| YOLOv8-Segmentation | w8a16 | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | ONNX | 55.826 ms | 13 - 199 MB | NPU | -- |
| YOLOv8-Segmentation | w8a16 | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | QNN_DLC | 2.536 ms | 2 - 48 MB | NPU | -- |
| YOLOv8-Segmentation | w8a16 | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | ONNX | 41.305 ms | 15 - 1065 MB | NPU | -- |
| YOLOv8-Segmentation | w8a16 | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | QNN_DLC | 2.104 ms | 2 - 41 MB | NPU | -- |
| YOLOv8-Segmentation | w8a16 | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | ONNX | 46.41 ms | 8 - 1059 MB | NPU | -- |
| YOLOv8-Segmentation | w8a16 | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN_DLC | 4.2 ms | 9 - 9 MB | NPU | -- |
| YOLOv8-Segmentation | w8a16 | Snapdragon X Elite CRD | Snapdragon® X Elite | ONNX | 62.117 ms | 59 - 59 MB | NPU | -- |
## Installation
Install the package via pip:
```bash
pip install "qai-hub-models[yolov8-seg]"
```
## Configure Qualcomm® AI Hub to run this model on a cloud-hosted device
Sign-in to [Qualcomm® AI Hub](https://app.aihub.qualcomm.com/) with your
Qualcomm® ID. Once signed in, navigate to `Account -> Settings -> API Token`.
With this API token, you can configure your client to run models on the cloud
hosted devices.
```bash
qai-hub configure --api_token API_TOKEN
```
Navigate to [docs](https://app.aihub.qualcomm.com/docs/) for more information.
## Demo off target
The package contains a simple end-to-end demo that downloads pre-trained
weights and runs this model on a sample input.
```bash
python -m qai_hub_models.models.yolov8_seg.demo
```
The above demo runs a reference implementation of pre-processing, model
inference, and post processing.
**NOTE**: If you are running in a Jupyter Notebook or Google Colab-like
environment, please add the following to your cell (instead of the above).
```
%run -m qai_hub_models.models.yolov8_seg.demo
```
### Run model on a cloud-hosted device
In addition to the demo, you can also run the model on a cloud-hosted Qualcomm®
device. This script does the following:
* Performance check on-device on a cloud-hosted device
* Downloads compiled assets that can be deployed on-device for Android.
* Accuracy check between PyTorch and on-device outputs.
```bash
python -m qai_hub_models.models.yolov8_seg.export
```
## How does this work?
This [export script](https://aihub.qualcomm.com/models/yolov8_seg/qai_hub_models/models/YOLOv8-Segmentation/export.py)
leverages [Qualcomm® AI Hub](https://aihub.qualcomm.com/) to optimize, validate, and deploy this model
on-device. Let's go through each step below in detail:
Step 1: **Compile model for on-device deployment**
To compile a PyTorch model for on-device deployment, we first trace the model
in memory using `torch.jit.trace` and then call the `submit_compile_job` API.
```python
import torch
import qai_hub as hub
from qai_hub_models.models.yolov8_seg import Model
# Load the model
torch_model = Model.from_pretrained()
# Device
device = hub.Device("Samsung Galaxy S24")
# Trace model
input_shape = torch_model.get_input_spec()
sample_inputs = torch_model.sample_inputs()
pt_model = torch.jit.trace(torch_model, [torch.tensor(data[0]) for _, data in sample_inputs.items()])
# Compile model on a specific device
compile_job = hub.submit_compile_job(
model=pt_model,
device=device,
input_specs=torch_model.get_input_spec(),
)
# Get target model to run on-device
target_model = compile_job.get_target_model()
```
Step 2: **Performance profiling on cloud-hosted device**
After compiling the model in Step 1, it can be profiled on-device using the
`target_model`. Note that this script runs the model on a device automatically
provisioned in the cloud. Once the job is submitted, you can navigate to a
provided job URL to view a variety of on-device performance metrics.
```python
profile_job = hub.submit_profile_job(
model=target_model,
device=device,
)
```
Step 3: **Verify on-device accuracy**
To verify the accuracy of the model on-device, you can run on-device inference
on sample input data on the same cloud hosted device.
```python
input_data = torch_model.sample_inputs()
inference_job = hub.submit_inference_job(
model=target_model,
device=device,
inputs=input_data,
)
on_device_output = inference_job.download_output_data()
```
With the output of the model, you can compute metrics such as PSNR or relative
error, or spot-check the output against the expected output.
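For example, a PSNR check between the PyTorch reference output and the on-device output could be sketched like this (illustrative only, not part of `qai_hub_models`; the flat lists stand in for the actual output arrays):

```python
import math

def psnr(reference, target, peak=1.0):
    """Peak signal-to-noise ratio in dB between two equal-length flat outputs."""
    if len(reference) != len(target):
        raise ValueError("outputs must have the same shape")
    mse = sum((r - t) ** 2 for r, t in zip(reference, target)) / len(reference)
    if mse == 0:
        return math.inf  # identical outputs
    return 10.0 * math.log10(peak ** 2 / mse)

# Synthetic data standing in for the reference and on-device outputs
ref_out = [0.0, 0.0, 0.0, 0.0]
device_out = [0.1, 0.1, 0.1, 0.1]  # uniform error of 0.1
print(round(psnr(ref_out, device_out), 2))  # → 20.0
```

Higher PSNR means the on-device output is closer to the PyTorch reference.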
**Note**: This on-device profiling and inference requires access to Qualcomm®
AI Hub. [Sign up for access](https://myaccount.qualcomm.com/signup).
## Run demo on a cloud-hosted device
You can also run the demo on-device.
```bash
python -m qai_hub_models.models.yolov8_seg.demo --eval-mode on-device
```
**NOTE**: If you are running in a Jupyter Notebook or Google Colab-like
environment, please add the following to your cell (instead of the above).
```
%run -m qai_hub_models.models.yolov8_seg.demo -- --eval-mode on-device
```
## Deploying compiled model to Android
The models can be deployed using multiple runtimes:
- TensorFlow Lite (`.tflite` export): [This
tutorial](https://www.tensorflow.org/lite/android/quickstart) provides a
guide to deploy the .tflite model in an Android application.
- QNN (`.so` export ): This [sample
app](https://docs.qualcomm.com/bundle/publicresource/topics/80-63442-50/sample_app.html)
provides instructions on how to use the `.so` shared library in an Android application.
## View on Qualcomm® AI Hub
Get more details on YOLOv8-Segmentation's performance across various devices [here](https://aihub.qualcomm.com/models/yolov8_seg).
Explore all available models on [Qualcomm® AI Hub](https://aihub.qualcomm.com/)
## License
* The license for the original implementation of YOLOv8-Segmentation can be found
[here](https://github.com/ultralytics/ultralytics/blob/main/LICENSE).
* The license for the compiled assets for on-device deployment can be found [here](https://github.com/ultralytics/ultralytics/blob/main/LICENSE)
## References
* [Ultralytics YOLOv8 Docs: Instance Segmentation](https://docs.ultralytics.com/tasks/segment/)
* [Source Model Implementation](https://github.com/ultralytics/ultralytics/tree/main/ultralytics/models/yolo/segment)
## Community
* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
* For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com).
|
qualcomm/Yolo-v7
|
qualcomm
| 2025-08-30T00:15:19Z | 23 | 2 |
pytorch
|
[
"pytorch",
"real_time",
"android",
"object-detection",
"arxiv:2207.02696",
"license:other",
"region:us"
] |
object-detection
| 2024-02-25T22:57:07Z |
---
library_name: pytorch
license: other
tags:
- real_time
- android
pipeline_tag: object-detection
---

# Yolo-v7: Optimized for Mobile Deployment
## Real-time object detection optimized for mobile and edge
YoloV7 is a machine learning model that predicts bounding boxes and classes of objects in an image.
This model is an implementation of Yolo-v7 found [here](https://github.com/WongKinYiu/yolov7/).
This repository provides scripts to run Yolo-v7 on Qualcomm® devices.
More details on model performance across various devices, can be found
[here](https://aihub.qualcomm.com/models/yolov7).
**WARNING**: The model assets are not readily available for download due to licensing restrictions.
### Model Details
- **Model Type:** Model_use_case.object_detection
- **Model Stats:**
- Model checkpoint: YoloV7 Tiny
- Input resolution: 640x640
- Number of parameters: 6.24M
- Model size (float): 23.8 MB
- Model size (w8a8): 6.23 MB
- Model size (w8a16): 6.66 MB
| Model | Precision | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Primary Compute Unit | Target Model
|---|---|---|---|---|---|---|---|---|
| Yolo-v7 | float | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | TFLITE | 24.69 ms | 1 - 118 MB | NPU | -- |
| Yolo-v7 | float | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | QNN_DLC | 23.088 ms | 2 - 132 MB | NPU | -- |
| Yolo-v7 | float | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | TFLITE | 12.899 ms | 1 - 48 MB | NPU | -- |
| Yolo-v7 | float | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | QNN_DLC | 13.39 ms | 5 - 46 MB | NPU | -- |
| Yolo-v7 | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | TFLITE | 9.495 ms | 0 - 100 MB | NPU | -- |
| Yolo-v7 | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | QNN_DLC | 10.38 ms | 5 - 20 MB | NPU | -- |
| Yolo-v7 | float | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | TFLITE | 11.354 ms | 1 - 117 MB | NPU | -- |
| Yolo-v7 | float | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | QNN_DLC | 10.527 ms | 1 - 129 MB | NPU | -- |
| Yolo-v7 | float | SA7255P ADP | Qualcomm® SA7255P | TFLITE | 24.69 ms | 1 - 118 MB | NPU | -- |
| Yolo-v7 | float | SA7255P ADP | Qualcomm® SA7255P | QNN_DLC | 23.088 ms | 2 - 132 MB | NPU | -- |
| Yolo-v7 | float | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | TFLITE | 9.471 ms | 0 - 101 MB | NPU | -- |
| Yolo-v7 | float | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | QNN_DLC | 10.404 ms | 5 - 32 MB | NPU | -- |
| Yolo-v7 | float | SA8295P ADP | Qualcomm® SA8295P | TFLITE | 14.446 ms | 1 - 40 MB | NPU | -- |
| Yolo-v7 | float | SA8295P ADP | Qualcomm® SA8295P | QNN_DLC | 11.846 ms | 1 - 41 MB | NPU | -- |
| Yolo-v7 | float | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | TFLITE | 9.481 ms | 0 - 102 MB | NPU | -- |
| Yolo-v7 | float | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | QNN_DLC | 10.419 ms | 5 - 20 MB | NPU | -- |
| Yolo-v7 | float | SA8775P ADP | Qualcomm® SA8775P | TFLITE | 11.354 ms | 1 - 117 MB | NPU | -- |
| Yolo-v7 | float | SA8775P ADP | Qualcomm® SA8775P | QNN_DLC | 10.527 ms | 1 - 129 MB | NPU | -- |
| Yolo-v7 | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | TFLITE | 9.408 ms | 0 - 103 MB | NPU | -- |
| Yolo-v7 | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | QNN_DLC | 10.451 ms | 5 - 22 MB | NPU | -- |
| Yolo-v7 | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | ONNX | 11.037 ms | 0 - 43 MB | NPU | -- |
| Yolo-v7 | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | TFLITE | 6.7 ms | 8 - 221 MB | NPU | -- |
| Yolo-v7 | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | QNN_DLC | 6.033 ms | 5 - 315 MB | NPU | -- |
| Yolo-v7 | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | ONNX | 8.907 ms | 5 - 166 MB | NPU | -- |
| Yolo-v7 | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | TFLITE | 7.059 ms | 1 - 116 MB | NPU | -- |
| Yolo-v7 | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | QNN_DLC | 5.88 ms | 5 - 130 MB | NPU | -- |
| Yolo-v7 | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | ONNX | 7.724 ms | 5 - 132 MB | NPU | -- |
| Yolo-v7 | float | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN_DLC | 11.164 ms | 204 - 204 MB | NPU | -- |
| Yolo-v7 | float | Snapdragon X Elite CRD | Snapdragon® X Elite | ONNX | 11.733 ms | 9 - 9 MB | NPU | -- |
| Yolo-v7 | w8a16 | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | QNN_DLC | 18.124 ms | 2 - 37 MB | NPU | -- |
| Yolo-v7 | w8a16 | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | QNN_DLC | 12.361 ms | 2 - 55 MB | NPU | -- |
| Yolo-v7 | w8a16 | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | QNN_DLC | 9.425 ms | 2 - 17 MB | NPU | -- |
| Yolo-v7 | w8a16 | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | QNN_DLC | 10.263 ms | 1 - 37 MB | NPU | -- |
| Yolo-v7 | w8a16 | RB3 Gen 2 (Proxy) | Qualcomm® QCS6490 (Proxy) | QNN_DLC | 24.212 ms | 2 - 43 MB | NPU | -- |
| Yolo-v7 | w8a16 | SA7255P ADP | Qualcomm® SA7255P | QNN_DLC | 18.124 ms | 2 - 37 MB | NPU | -- |
| Yolo-v7 | w8a16 | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | QNN_DLC | 9.453 ms | 2 - 16 MB | NPU | -- |
| Yolo-v7 | w8a16 | SA8295P ADP | Qualcomm® SA8295P | QNN_DLC | 11.967 ms | 2 - 52 MB | NPU | -- |
| Yolo-v7 | w8a16 | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | QNN_DLC | 9.44 ms | 2 - 16 MB | NPU | -- |
| Yolo-v7 | w8a16 | SA8775P ADP | Qualcomm® SA8775P | QNN_DLC | 10.263 ms | 1 - 37 MB | NPU | -- |
| Yolo-v7 | w8a16 | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | QNN_DLC | 9.432 ms | 2 - 14 MB | NPU | -- |
| Yolo-v7 | w8a16 | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | QNN_DLC | 5.126 ms | 2 - 51 MB | NPU | -- |
| Yolo-v7 | w8a16 | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | QNN_DLC | 4.967 ms | 2 - 46 MB | NPU | -- |
| Yolo-v7 | w8a16 | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN_DLC | 7.928 ms | 12 - 12 MB | NPU | -- |
| Yolo-v7 | w8a8 | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | TFLITE | 5.592 ms | 0 - 27 MB | NPU | -- |
| Yolo-v7 | w8a8 | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | QNN_DLC | 4.299 ms | 1 - 30 MB | NPU | -- |
| Yolo-v7 | w8a8 | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | TFLITE | 3.283 ms | 0 - 48 MB | NPU | -- |
| Yolo-v7 | w8a8 | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | QNN_DLC | 2.749 ms | 1 - 44 MB | NPU | -- |
| Yolo-v7 | w8a8 | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | TFLITE | 2.621 ms | 0 - 32 MB | NPU | -- |
| Yolo-v7 | w8a8 | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | QNN_DLC | 1.924 ms | 1 - 14 MB | NPU | -- |
| Yolo-v7 | w8a8 | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | TFLITE | 3.049 ms | 0 - 27 MB | NPU | -- |
| Yolo-v7 | w8a8 | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | QNN_DLC | 2.367 ms | 1 - 30 MB | NPU | -- |
| Yolo-v7 | w8a8 | RB3 Gen 2 (Proxy) | Qualcomm® QCS6490 (Proxy) | TFLITE | 20.597 ms | 8 - 57 MB | NPU | -- |
| Yolo-v7 | w8a8 | RB3 Gen 2 (Proxy) | Qualcomm® QCS6490 (Proxy) | QNN_DLC | 8.56 ms | 1 - 39 MB | NPU | -- |
| Yolo-v7 | w8a8 | RB5 (Proxy) | Qualcomm® QCS8250 (Proxy) | TFLITE | 128.83 ms | 15 - 45 MB | GPU | -- |
| Yolo-v7 | w8a8 | SA7255P ADP | Qualcomm® SA7255P | TFLITE | 5.592 ms | 0 - 27 MB | NPU | -- |
| Yolo-v7 | w8a8 | SA7255P ADP | Qualcomm® SA7255P | QNN_DLC | 4.299 ms | 1 - 30 MB | NPU | -- |
| Yolo-v7 | w8a8 | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | TFLITE | 2.649 ms | 0 - 31 MB | NPU | -- |
| Yolo-v7 | w8a8 | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | QNN_DLC | 1.911 ms | 1 - 14 MB | NPU | -- |
| Yolo-v7 | w8a8 | SA8295P ADP | Qualcomm® SA8295P | TFLITE | 4.277 ms | 0 - 36 MB | NPU | -- |
| Yolo-v7 | w8a8 | SA8295P ADP | Qualcomm® SA8295P | QNN_DLC | 3.232 ms | 1 - 37 MB | NPU | -- |
| Yolo-v7 | w8a8 | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | TFLITE | 2.644 ms | 0 - 33 MB | NPU | -- |
| Yolo-v7 | w8a8 | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | QNN_DLC | 1.913 ms | 1 - 14 MB | NPU | -- |
| Yolo-v7 | w8a8 | SA8775P ADP | Qualcomm® SA8775P | TFLITE | 3.049 ms | 0 - 27 MB | NPU | -- |
| Yolo-v7 | w8a8 | SA8775P ADP | Qualcomm® SA8775P | QNN_DLC | 2.367 ms | 1 - 30 MB | NPU | -- |
| Yolo-v7 | w8a8 | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | TFLITE | 2.652 ms | 0 - 9 MB | NPU | -- |
| Yolo-v7 | w8a8 | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | QNN_DLC | 1.913 ms | 1 - 15 MB | NPU | -- |
| Yolo-v7 | w8a8 | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | ONNX | 4.06 ms | 0 - 62 MB | NPU | -- |
| Yolo-v7 | w8a8 | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | TFLITE | 1.755 ms | 0 - 46 MB | NPU | -- |
| Yolo-v7 | w8a8 | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | QNN_DLC | 1.292 ms | 1 - 48 MB | NPU | -- |
| Yolo-v7 | w8a8 | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | ONNX | 3.014 ms | 0 - 258 MB | NPU | -- |
| Yolo-v7 | w8a8 | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | TFLITE | 1.607 ms | 0 - 33 MB | NPU | -- |
| Yolo-v7 | w8a8 | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | QNN_DLC | 1.171 ms | 1 - 38 MB | NPU | -- |
| Yolo-v7 | w8a8 | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | ONNX | 2.362 ms | 1 - 140 MB | NPU | -- |
| Yolo-v7 | w8a8 | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN_DLC | 2.172 ms | 22 - 22 MB | NPU | -- |
| Yolo-v7 | w8a8 | Snapdragon X Elite CRD | Snapdragon® X Elite | ONNX | 4.592 ms | 5 - 5 MB | NPU | -- |
## Installation
Install the package via pip:
```bash
pip install "qai-hub-models[yolov7]"
```
## Configure Qualcomm® AI Hub to run this model on a cloud-hosted device
Sign-in to [Qualcomm® AI Hub](https://app.aihub.qualcomm.com/) with your
Qualcomm® ID. Once signed in, navigate to `Account -> Settings -> API Token`.
With this API token, you can configure your client to run models on the cloud
hosted devices.
```bash
qai-hub configure --api_token API_TOKEN
```
Navigate to [docs](https://app.aihub.qualcomm.com/docs/) for more information.
## Demo off target
The package contains a simple end-to-end demo that downloads pre-trained
weights and runs this model on a sample input.
```bash
python -m qai_hub_models.models.yolov7.demo
```
The above demo runs a reference implementation of pre-processing, model
inference, and post processing.
**NOTE**: If you are running in a Jupyter Notebook or Google Colab-like
environment, please add the following to your cell (instead of the above).
```
%run -m qai_hub_models.models.yolov7.demo
```
### Run model on a cloud-hosted device
In addition to the demo, you can also run the model on a cloud-hosted Qualcomm®
device. This script does the following:
* Performance check on-device on a cloud-hosted device
* Downloads compiled assets that can be deployed on-device for Android.
* Accuracy check between PyTorch and on-device outputs.
```bash
python -m qai_hub_models.models.yolov7.export
```
## How does this work?
This [export script](https://aihub.qualcomm.com/models/yolov7/qai_hub_models/models/Yolo-v7/export.py)
leverages [Qualcomm® AI Hub](https://aihub.qualcomm.com/) to optimize, validate, and deploy this model
on-device. Let's go through each step below in detail:
Step 1: **Compile model for on-device deployment**
To compile a PyTorch model for on-device deployment, we first trace the model
in memory using `jit.trace` and then call the `submit_compile_job` API.
```python
import torch
import qai_hub as hub
from qai_hub_models.models.yolov7 import Model
# Load the model
torch_model = Model.from_pretrained()
# Device
device = hub.Device("Samsung Galaxy S24")
# Trace model
input_shape = torch_model.get_input_spec()
sample_inputs = torch_model.sample_inputs()
pt_model = torch.jit.trace(torch_model, [torch.tensor(data[0]) for _, data in sample_inputs.items()])
# Compile model on a specific device
compile_job = hub.submit_compile_job(
model=pt_model,
device=device,
input_specs=torch_model.get_input_spec(),
)
# Get target model to run on-device
target_model = compile_job.get_target_model()
```
Step 2: **Performance profiling on cloud-hosted device**
After compiling the model in step 1, it can be profiled on-device using the
`target_model`. Note that this script runs the model on a device automatically
provisioned in the cloud. Once the job is submitted, you can navigate to a
provided job URL to view a variety of on-device performance metrics.
```python
profile_job = hub.submit_profile_job(
model=target_model,
device=device,
)
```
Step 3: **Verify on-device accuracy**
To verify the accuracy of the model on-device, you can run on-device inference
on sample input data on the same cloud hosted device.
```python
input_data = torch_model.sample_inputs()
inference_job = hub.submit_inference_job(
model=target_model,
device=device,
inputs=input_data,
)
on_device_output = inference_job.download_output_data()
```
With the output of the model, you can compute metrics like PSNR or relative
error, or spot-check the output against the expected output.
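As one example of such a check, PSNR between the PyTorch reference outputs and the on-device outputs can be computed per tensor. The output names below are illustrative placeholders, not the model's actual output names.

```python
import numpy as np

def psnr(expected: np.ndarray, actual: np.ndarray, peak: float = 1.0) -> float:
    """Peak signal-to-noise ratio in dB between two arrays."""
    mse = np.mean((expected.astype(np.float64) - actual.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10((peak ** 2) / mse)

# Compare each on-device output tensor against the PyTorch reference.
# (Dummy data and the "boxes" name are illustrative only.)
reference = {"boxes": np.zeros((1, 100, 4), dtype=np.float32)}
on_device = {"boxes": np.full((1, 100, 4), 1e-3, dtype=np.float32)}
for name in reference:
    print(name, psnr(reference[name], on_device[name]))
```

Higher PSNR means the on-device output tracks the reference more closely; what counts as "close enough" depends on the precision variant you deployed.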
**Note**: This on-device profiling and inference requires access to Qualcomm®
AI Hub. [Sign up for access](https://myaccount.qualcomm.com/signup).
## Run demo on a cloud-hosted device
You can also run the demo on-device.
```bash
python -m qai_hub_models.models.yolov7.demo --eval-mode on-device
```
**NOTE**: If you want to run this in a Jupyter Notebook or a Google Colab-like
environment, add the following to your cell instead of the above.
```
%run -m qai_hub_models.models.yolov7.demo -- --eval-mode on-device
```
## Deploying compiled model to Android
The models can be deployed using multiple runtimes:
- TensorFlow Lite (`.tflite` export): [This
tutorial](https://www.tensorflow.org/lite/android/quickstart) provides a
guide to deploy the .tflite model in an Android application.
- QNN (`.so` export): This [sample
app](https://docs.qualcomm.com/bundle/publicresource/topics/80-63442-50/sample_app.html)
provides instructions on how to use the `.so` shared library in an Android application.
## View on Qualcomm® AI Hub
Get more details on Yolo-v7's performance across various devices [here](https://aihub.qualcomm.com/models/yolov7).
Explore all available models on [Qualcomm® AI Hub](https://aihub.qualcomm.com/)
## License
* The license for the original implementation of Yolo-v7 can be found
[here](https://github.com/WongKinYiu/yolov7/blob/main/LICENSE.md).
* The license for the compiled assets for on-device deployment can be found [here](https://github.com/WongKinYiu/yolov7/blob/main/LICENSE.md)
## References
* [YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors](https://arxiv.org/abs/2207.02696)
* [Source Model Implementation](https://github.com/WongKinYiu/yolov7/)
## Community
* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
* For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com).
|
qualcomm/Yolo-v5
|
qualcomm
| 2025-08-30T00:15:08Z | 7 | 0 |
pytorch
|
[
"pytorch",
"real_time",
"android",
"object-detection",
"license:other",
"region:us"
] |
object-detection
| 2025-01-23T02:39:47Z |
---
library_name: pytorch
license: other
tags:
- real_time
- android
pipeline_tag: object-detection
---

# Yolo-v5: Optimized for Mobile Deployment
## Real-time object detection optimized for mobile and edge
YoloV5 is a machine learning model that predicts bounding boxes and classes of objects in an image.
This model is an implementation of Yolo-v5 found [here](https://github.com/ultralytics/yolov5).
This repository provides scripts to run Yolo-v5 on Qualcomm® devices.
More details on model performance across various devices can be found
[here](https://aihub.qualcomm.com/models/yolov5).
**WARNING**: The model assets are not readily available for download due to licensing restrictions.
### Model Details
- **Model Type:** Model_use_case.object_detection
- **Model Stats:**
- Model checkpoint: YoloV5-M
- Input resolution: 640x640
- Number of parameters: 21.2M
- Model size (float): 81.1 MB
- Model size (w8a16): 21.8 MB
| Model | Precision | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Primary Compute Unit | Target Model
|---|---|---|---|---|---|---|---|---|
| Yolo-v5 | float | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | TFLITE | 64.075 ms | 1 - 125 MB | NPU | -- |
| Yolo-v5 | float | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | QNN_DLC | 63.706 ms | 3 - 150 MB | NPU | -- |
| Yolo-v5 | float | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | TFLITE | 34.171 ms | 1 - 92 MB | NPU | -- |
| Yolo-v5 | float | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | QNN_DLC | 35.402 ms | 5 - 69 MB | NPU | -- |
| Yolo-v5 | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | TFLITE | 19.266 ms | 0 - 78 MB | NPU | -- |
| Yolo-v5 | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | QNN_DLC | 18.739 ms | 5 - 38 MB | NPU | -- |
| Yolo-v5 | float | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | TFLITE | 23.542 ms | 1 - 126 MB | NPU | -- |
| Yolo-v5 | float | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | QNN_DLC | 22.989 ms | 2 - 137 MB | NPU | -- |
| Yolo-v5 | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | TFLITE | 19.104 ms | 0 - 54 MB | NPU | -- |
| Yolo-v5 | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | QNN_DLC | 18.823 ms | 5 - 42 MB | NPU | -- |
| Yolo-v5 | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | ONNX | 24.772 ms | 0 - 128 MB | NPU | -- |
| Yolo-v5 | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | TFLITE | 14.702 ms | 0 - 230 MB | NPU | -- |
| Yolo-v5 | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | QNN_DLC | 14.648 ms | 5 - 159 MB | NPU | -- |
| Yolo-v5 | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | ONNX | 18.075 ms | 3 - 144 MB | NPU | -- |
| Yolo-v5 | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | TFLITE | 11.983 ms | 0 - 103 MB | NPU | -- |
| Yolo-v5 | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | QNN_DLC | 12.034 ms | 5 - 145 MB | NPU | -- |
| Yolo-v5 | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | ONNX | 16.338 ms | 5 - 135 MB | NPU | -- |
| Yolo-v5 | float | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN_DLC | 18.144 ms | 5 - 5 MB | NPU | -- |
| Yolo-v5 | float | Snapdragon X Elite CRD | Snapdragon® X Elite | ONNX | 25.903 ms | 39 - 39 MB | NPU | -- |
| Yolo-v5 | w8a16 | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | QNN_DLC | 24.885 ms | 2 - 89 MB | NPU | -- |
| Yolo-v5 | w8a16 | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | QNN_DLC | 16.103 ms | 2 - 90 MB | NPU | -- |
| Yolo-v5 | w8a16 | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | QNN_DLC | 11.919 ms | 2 - 32 MB | NPU | -- |
| Yolo-v5 | w8a16 | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | QNN_DLC | 12.148 ms | 2 - 92 MB | NPU | -- |
| Yolo-v5 | w8a16 | RB3 Gen 2 (Proxy) | Qualcomm® QCS6490 (Proxy) | QNN_DLC | 52.554 ms | 2 - 97 MB | NPU | -- |
| Yolo-v5 | w8a16 | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | QNN_DLC | 11.941 ms | 2 - 31 MB | NPU | -- |
| Yolo-v5 | w8a16 | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | QNN_DLC | 7.973 ms | 2 - 102 MB | NPU | -- |
| Yolo-v5 | w8a16 | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | QNN_DLC | 6.171 ms | 2 - 100 MB | NPU | -- |
| Yolo-v5 | w8a16 | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN_DLC | 12.754 ms | 31 - 31 MB | NPU | -- |
## Installation
Install the package via pip:
```bash
pip install "qai-hub-models[yolov5]"
```
## Configure Qualcomm® AI Hub to run this model on a cloud-hosted device
Sign in to [Qualcomm® AI Hub](https://app.aihub.qualcomm.com/) with your
Qualcomm® ID. Once signed in, navigate to `Account -> Settings -> API Token`.
With this API token, you can configure your client to run models on the cloud
hosted devices.
```bash
qai-hub configure --api_token API_TOKEN
```
Navigate to [docs](https://app.aihub.qualcomm.com/docs/) for more information.
## Demo off target
The package contains a simple end-to-end demo that downloads pre-trained
weights and runs this model on a sample input.
```bash
python -m qai_hub_models.models.yolov5.demo
```
The above demo runs a reference implementation of pre-processing, model
inference, and post processing.
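Post-processing for detectors like YoloV5 typically includes non-maximum suppression (NMS) over the predicted boxes. A minimal sketch of class-agnostic NMS is shown below; the packaged demo's actual implementation may differ (per-class NMS, confidence filtering, etc.).

```python
import numpy as np

def iou(box: np.ndarray, boxes: np.ndarray) -> np.ndarray:
    """IoU between one box and many; boxes are [x1, y1, x2, y2]."""
    x1 = np.maximum(box[0], boxes[:, 0]); y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2]); y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area = lambda b: (b[..., 2] - b[..., 0]) * (b[..., 3] - b[..., 1])
    return inter / (area(box) + area(boxes) - inter)

def nms(boxes: np.ndarray, scores: np.ndarray, iou_thresh: float = 0.45) -> list:
    """Greedy NMS: keep the highest-scoring box, drop overlapping ones."""
    order = scores.argsort()[::-1]
    keep = []
    while order.size:
        i = order[0]
        keep.append(int(i))
        rest = order[1:]
        # Retain only boxes that overlap the kept box below the threshold.
        order = rest[iou(boxes[i], boxes[rest]) <= iou_thresh]
    return keep
```

The 0.45 threshold here is a common default, not necessarily the value the demo uses.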
**NOTE**: If you want to run this in a Jupyter Notebook or a Google Colab-like
environment, add the following to your cell instead of the above.
```
%run -m qai_hub_models.models.yolov5.demo
```
### Run model on a cloud-hosted device
In addition to the demo, you can also run the model on a cloud-hosted Qualcomm®
device. This script does the following:
* Performance check on-device on a cloud-hosted device
* Downloads compiled assets that can be deployed on-device for Android.
* Accuracy check between PyTorch and on-device outputs.
```bash
python -m qai_hub_models.models.yolov5.export
```
## How does this work?
This [export script](https://aihub.qualcomm.com/models/yolov5/qai_hub_models/models/Yolo-v5/export.py)
leverages [Qualcomm® AI Hub](https://aihub.qualcomm.com/) to optimize, validate, and deploy this model
on-device. Let's go through each step below in detail:
Step 1: **Compile model for on-device deployment**
To compile a PyTorch model for on-device deployment, we first trace the model
in memory using `jit.trace` and then call the `submit_compile_job` API.
```python
import torch
import qai_hub as hub
from qai_hub_models.models.yolov5 import Model
# Load the model
torch_model = Model.from_pretrained()
# Device
device = hub.Device("Samsung Galaxy S24")
# Trace model
input_shape = torch_model.get_input_spec()
sample_inputs = torch_model.sample_inputs()
pt_model = torch.jit.trace(torch_model, [torch.tensor(data[0]) for _, data in sample_inputs.items()])
# Compile model on a specific device
compile_job = hub.submit_compile_job(
model=pt_model,
device=device,
input_specs=torch_model.get_input_spec(),
)
# Get target model to run on-device
target_model = compile_job.get_target_model()
```
Step 2: **Performance profiling on cloud-hosted device**
After compiling the model in step 1, it can be profiled on-device using the
`target_model`. Note that this script runs the model on a device automatically
provisioned in the cloud. Once the job is submitted, you can navigate to a
provided job URL to view a variety of on-device performance metrics.
```python
profile_job = hub.submit_profile_job(
model=target_model,
device=device,
)
```
Step 3: **Verify on-device accuracy**
To verify the accuracy of the model on-device, you can run on-device inference
on sample input data on the same cloud hosted device.
```python
input_data = torch_model.sample_inputs()
inference_job = hub.submit_inference_job(
model=target_model,
device=device,
inputs=input_data,
)
on_device_output = inference_job.download_output_data()
```
With the output of the model, you can compute metrics like PSNR or relative
error, or spot-check the output against the expected output.
**Note**: This on-device profiling and inference requires access to Qualcomm®
AI Hub. [Sign up for access](https://myaccount.qualcomm.com/signup).
## Run demo on a cloud-hosted device
You can also run the demo on-device.
```bash
python -m qai_hub_models.models.yolov5.demo --eval-mode on-device
```
**NOTE**: If you want to run this in a Jupyter Notebook or a Google Colab-like
environment, add the following to your cell instead of the above.
```
%run -m qai_hub_models.models.yolov5.demo -- --eval-mode on-device
```
## Deploying compiled model to Android
The models can be deployed using multiple runtimes:
- TensorFlow Lite (`.tflite` export): [This
tutorial](https://www.tensorflow.org/lite/android/quickstart) provides a
guide to deploy the .tflite model in an Android application.
- QNN (`.so` export): This [sample
app](https://docs.qualcomm.com/bundle/publicresource/topics/80-63442-50/sample_app.html)
provides instructions on how to use the `.so` shared library in an Android application.
## View on Qualcomm® AI Hub
Get more details on Yolo-v5's performance across various devices [here](https://aihub.qualcomm.com/models/yolov5).
Explore all available models on [Qualcomm® AI Hub](https://aihub.qualcomm.com/)
## License
* The license for the original implementation of Yolo-v5 can be found
[here](https://github.com/ultralytics/yolov5?tab=AGPL-3.0-1-ov-file#readme).
* The license for the compiled assets for on-device deployment can be found [here](https://github.com/ultralytics/yolov5?tab=AGPL-3.0-1-ov-file#readme)
## References
* [Source Model Implementation](https://github.com/ultralytics/yolov5)
## Community
* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
* For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com).
|
crystalline7/893349
|
crystalline7
| 2025-08-30T00:15:07Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-30T00:15:02Z |
[View on Civ Archive](https://civarchive.com/models/878173?modelVersionId=983140)
|
qualcomm/YOLOv11-Segmentation
|
qualcomm
| 2025-08-30T00:14:59Z | 5 | 1 |
pytorch
|
[
"pytorch",
"real_time",
"android",
"image-segmentation",
"license:other",
"region:us"
] |
image-segmentation
| 2024-12-12T21:29:39Z |
---
library_name: pytorch
license: other
tags:
- real_time
- android
pipeline_tag: image-segmentation
---

# YOLOv11-Segmentation: Optimized for Mobile Deployment
## Real-time object segmentation optimized for mobile and edge by Ultralytics
Ultralytics YOLOv11 is a machine learning model that predicts bounding boxes, segmentation masks and classes of objects in an image.
This model is an implementation of YOLOv11-Segmentation found [here](https://github.com/ultralytics/ultralytics/tree/main/ultralytics/models/yolo/segment).
This repository provides scripts to run YOLOv11-Segmentation on Qualcomm® devices.
More details on model performance across various devices can be found
[here](https://aihub.qualcomm.com/models/yolov11_seg).
**WARNING**: The model assets are not readily available for download due to licensing restrictions.
### Model Details
- **Model Type:** Model_use_case.semantic_segmentation
- **Model Stats:**
- Model checkpoint: YOLO11N-Seg
- Input resolution: 640x640
- Number of output classes: 80
- Number of parameters: 2.89M
- Model size (float): 11.1 MB
- Model size (w8a16): 11.4 MB
| Model | Precision | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Primary Compute Unit | Target Model
|---|---|---|---|---|---|---|---|---|
| YOLOv11-Segmentation | float | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | TFLITE | 17.233 ms | 4 - 76 MB | NPU | -- |
| YOLOv11-Segmentation | float | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | QNN_DLC | 15.982 ms | 2 - 111 MB | NPU | -- |
| YOLOv11-Segmentation | float | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | TFLITE | 9.275 ms | 4 - 49 MB | NPU | -- |
| YOLOv11-Segmentation | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | TFLITE | 5.377 ms | 4 - 39 MB | NPU | -- |
| YOLOv11-Segmentation | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | QNN_DLC | 4.633 ms | 5 - 59 MB | NPU | -- |
| YOLOv11-Segmentation | float | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | TFLITE | 6.946 ms | 4 - 76 MB | NPU | -- |
| YOLOv11-Segmentation | float | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | QNN_DLC | 6.261 ms | 1 - 108 MB | NPU | -- |
| YOLOv11-Segmentation | float | SA7255P ADP | Qualcomm® SA7255P | TFLITE | 17.233 ms | 4 - 76 MB | NPU | -- |
| YOLOv11-Segmentation | float | SA7255P ADP | Qualcomm® SA7255P | QNN_DLC | 15.982 ms | 2 - 111 MB | NPU | -- |
| YOLOv11-Segmentation | float | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | TFLITE | 5.309 ms | 4 - 22 MB | NPU | -- |
| YOLOv11-Segmentation | float | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | QNN_DLC | 4.626 ms | 5 - 42 MB | NPU | -- |
| YOLOv11-Segmentation | float | SA8295P ADP | Qualcomm® SA8295P | TFLITE | 10.631 ms | 4 - 41 MB | NPU | -- |
| YOLOv11-Segmentation | float | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | TFLITE | 5.345 ms | 0 - 25 MB | NPU | -- |
| YOLOv11-Segmentation | float | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | QNN_DLC | 4.637 ms | 5 - 45 MB | NPU | -- |
| YOLOv11-Segmentation | float | SA8775P ADP | Qualcomm® SA8775P | TFLITE | 6.946 ms | 4 - 76 MB | NPU | -- |
| YOLOv11-Segmentation | float | SA8775P ADP | Qualcomm® SA8775P | QNN_DLC | 6.261 ms | 1 - 108 MB | NPU | -- |
| YOLOv11-Segmentation | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | TFLITE | 5.35 ms | 0 - 26 MB | NPU | -- |
| YOLOv11-Segmentation | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | QNN_DLC | 4.634 ms | 6 - 59 MB | NPU | -- |
| YOLOv11-Segmentation | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | ONNX | 6.803 ms | 0 - 50 MB | NPU | -- |
| YOLOv11-Segmentation | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | TFLITE | 3.925 ms | 0 - 93 MB | NPU | -- |
| YOLOv11-Segmentation | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | QNN_DLC | 3.434 ms | 5 - 206 MB | NPU | -- |
| YOLOv11-Segmentation | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | ONNX | 5.033 ms | 14 - 178 MB | NPU | -- |
| YOLOv11-Segmentation | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | TFLITE | 3.081 ms | 3 - 77 MB | NPU | -- |
| YOLOv11-Segmentation | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | QNN_DLC | 3.06 ms | 5 - 124 MB | NPU | -- |
| YOLOv11-Segmentation | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | ONNX | 4.674 ms | 15 - 120 MB | NPU | -- |
| YOLOv11-Segmentation | float | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN_DLC | 5.07 ms | 97 - 97 MB | NPU | -- |
| YOLOv11-Segmentation | float | Snapdragon X Elite CRD | Snapdragon® X Elite | ONNX | 7.253 ms | 17 - 17 MB | NPU | -- |
| YOLOv11-Segmentation | w8a16 | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | ONNX | 69.36 ms | 13 - 202 MB | NPU | -- |
| YOLOv11-Segmentation | w8a16 | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | ONNX | 53.577 ms | 12 - 1398 MB | NPU | -- |
| YOLOv11-Segmentation | w8a16 | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | ONNX | 54.668 ms | 1 - 1288 MB | NPU | -- |
| YOLOv11-Segmentation | w8a16 | Snapdragon X Elite CRD | Snapdragon® X Elite | ONNX | 80.901 ms | 29 - 29 MB | NPU | -- |
## Installation
Install the package via pip:
```bash
pip install "qai-hub-models[yolov11-seg]"
```
## Configure Qualcomm® AI Hub to run this model on a cloud-hosted device
Sign in to [Qualcomm® AI Hub](https://app.aihub.qualcomm.com/) with your
Qualcomm® ID. Once signed in, navigate to `Account -> Settings -> API Token`.
With this API token, you can configure your client to run models on the cloud
hosted devices.
```bash
qai-hub configure --api_token API_TOKEN
```
Navigate to [docs](https://app.aihub.qualcomm.com/docs/) for more information.
## Demo off target
The package contains a simple end-to-end demo that downloads pre-trained
weights and runs this model on a sample input.
```bash
python -m qai_hub_models.models.yolov11_seg.demo
```
The above demo runs a reference implementation of pre-processing, model
inference, and post processing.
**NOTE**: If you want to run this in a Jupyter Notebook or a Google Colab-like
environment, add the following to your cell instead of the above.
```
%run -m qai_hub_models.models.yolov11_seg.demo
```
### Run model on a cloud-hosted device
In addition to the demo, you can also run the model on a cloud-hosted Qualcomm®
device. This script does the following:
* Performance check on-device on a cloud-hosted device
* Downloads compiled assets that can be deployed on-device for Android.
* Accuracy check between PyTorch and on-device outputs.
```bash
python -m qai_hub_models.models.yolov11_seg.export
```
## How does this work?
This [export script](https://aihub.qualcomm.com/models/yolov11_seg/qai_hub_models/models/YOLOv11-Segmentation/export.py)
leverages [Qualcomm® AI Hub](https://aihub.qualcomm.com/) to optimize, validate, and deploy this model
on-device. Let's go through each step below in detail:
Step 1: **Compile model for on-device deployment**
To compile a PyTorch model for on-device deployment, we first trace the model
in memory using `jit.trace` and then call the `submit_compile_job` API.
```python
import torch
import qai_hub as hub
from qai_hub_models.models.yolov11_seg import Model
# Load the model
torch_model = Model.from_pretrained()
# Device
device = hub.Device("Samsung Galaxy S24")
# Trace model
input_shape = torch_model.get_input_spec()
sample_inputs = torch_model.sample_inputs()
pt_model = torch.jit.trace(torch_model, [torch.tensor(data[0]) for _, data in sample_inputs.items()])
# Compile model on a specific device
compile_job = hub.submit_compile_job(
model=pt_model,
device=device,
input_specs=torch_model.get_input_spec(),
)
# Get target model to run on-device
target_model = compile_job.get_target_model()
```
Step 2: **Performance profiling on cloud-hosted device**
After compiling the model in step 1, it can be profiled on-device using the
`target_model`. Note that this script runs the model on a device automatically
provisioned in the cloud. Once the job is submitted, you can navigate to a
provided job URL to view a variety of on-device performance metrics.
```python
profile_job = hub.submit_profile_job(
model=target_model,
device=device,
)
```
Step 3: **Verify on-device accuracy**
To verify the accuracy of the model on-device, you can run on-device inference
on sample input data on the same cloud hosted device.
```python
input_data = torch_model.sample_inputs()
inference_job = hub.submit_inference_job(
model=target_model,
device=device,
inputs=input_data,
)
on_device_output = inference_job.download_output_data()
```
With the output of the model, you can compute metrics like PSNR or relative
error, or spot-check the output against the expected output.
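A relative-error check is another simple way to compare the reference and on-device outputs. The sketch below computes the maximum element-wise relative error; the `eps` guard against division by zero is an assumption of this sketch, not part of the AI Hub API.

```python
import numpy as np

def max_relative_error(expected, actual, eps: float = 1e-6) -> float:
    """Largest element-wise |actual - expected| / (|expected| + eps)."""
    expected = np.asarray(expected, dtype=np.float64)
    actual = np.asarray(actual, dtype=np.float64)
    return float(np.max(np.abs(actual - expected) / (np.abs(expected) + eps)))

# Example: compare a reference tensor against a slightly perturbed one.
ref = np.array([1.0, 2.0, 4.0])
dev = np.array([1.1, 2.0, 4.0])
print(max_relative_error(ref, dev))
```

A small maximum relative error suggests the on-device model closely matches the PyTorch reference; quantized variants will naturally show larger deviations than float ones.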
**Note**: This on-device profiling and inference requires access to Qualcomm®
AI Hub. [Sign up for access](https://myaccount.qualcomm.com/signup).
## Run demo on a cloud-hosted device
You can also run the demo on-device.
```bash
python -m qai_hub_models.models.yolov11_seg.demo --eval-mode on-device
```
**NOTE**: If you want to run this in a Jupyter Notebook or a Google Colab-like
environment, add the following to your cell instead of the above.
```
%run -m qai_hub_models.models.yolov11_seg.demo -- --eval-mode on-device
```
## Deploying compiled model to Android
The models can be deployed using multiple runtimes:
- TensorFlow Lite (`.tflite` export): [This
tutorial](https://www.tensorflow.org/lite/android/quickstart) provides a
guide to deploy the .tflite model in an Android application.
- QNN (`.so` export): This [sample
app](https://docs.qualcomm.com/bundle/publicresource/topics/80-63442-50/sample_app.html)
provides instructions on how to use the `.so` shared library in an Android application.
## View on Qualcomm® AI Hub
Get more details on YOLOv11-Segmentation's performance across various devices [here](https://aihub.qualcomm.com/models/yolov11_seg).
Explore all available models on [Qualcomm® AI Hub](https://aihub.qualcomm.com/)
## License
* The license for the original implementation of YOLOv11-Segmentation can be found
[here](https://github.com/ultralytics/ultralytics/blob/main/LICENSE).
* The license for the compiled assets for on-device deployment can be found [here](https://github.com/ultralytics/ultralytics/blob/main/LICENSE)
## References
* [Ultralytics YOLOv11 Docs: Instance Segmentation](https://docs.ultralytics.com/tasks/segment/)
* [Source Model Implementation](https://github.com/ultralytics/ultralytics/tree/main/ultralytics/models/yolo/segment)
## Community
* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
* For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com).
|
crystalline7/575628
|
crystalline7
| 2025-08-30T00:14:55Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-30T00:14:50Z |
[View on Civ Archive](https://civarchive.com/models/591274?modelVersionId=660216)
|
qualcomm/YOLOv10-Detection
|
qualcomm
| 2025-08-30T00:14:51Z | 4 | 0 |
pytorch
|
[
"pytorch",
"real_time",
"android",
"object-detection",
"arxiv:2405.14458",
"license:other",
"region:us"
] |
object-detection
| 2025-01-03T18:48:55Z |
---
library_name: pytorch
license: other
tags:
- real_time
- android
pipeline_tag: object-detection
---

# YOLOv10-Detection: Optimized for Mobile Deployment
## Real-time object detection optimized for mobile and edge by Ultralytics
Ultralytics YOLOv10 is a machine learning model that predicts bounding boxes and classes of objects in an image.
This model is an implementation of YOLOv10-Detection found [here](https://github.com/ultralytics/ultralytics/tree/main/ultralytics/models/yolo/detect).
This repository provides scripts to run YOLOv10-Detection on Qualcomm® devices.
More details on model performance across various devices can be found
[here](https://aihub.qualcomm.com/models/yolov10_det).
**WARNING**: The model assets are not readily available for download due to licensing restrictions.
### Model Details
- **Model Type:** Model_use_case.object_detection
- **Model Stats:**
- Model checkpoint: YOLOv10-N
- Input resolution: 640x640
- Number of parameters: 2.33M
- Model size (float): 8.95 MB
- Model size (w8a8): 2.55 MB
- Model size (w8a16): 3.04 MB
| Model | Precision | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Primary Compute Unit | Target Model
|---|---|---|---|---|---|---|---|---|
| YOLOv10-Detection | float | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | TFLITE | 14.058 ms | 0 - 69 MB | NPU | -- |
| YOLOv10-Detection | float | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | QNN_DLC | 12.801 ms | 0 - 92 MB | NPU | -- |
| YOLOv10-Detection | float | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | TFLITE | 7.383 ms | 0 - 42 MB | NPU | -- |
| YOLOv10-Detection | float | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | QNN_DLC | 7.633 ms | 5 - 41 MB | NPU | -- |
| YOLOv10-Detection | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | TFLITE | 4.519 ms | 0 - 18 MB | NPU | -- |
| YOLOv10-Detection | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | QNN_DLC | 3.808 ms | 0 - 68 MB | NPU | -- |
| YOLOv10-Detection | float | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | TFLITE | 5.844 ms | 0 - 69 MB | NPU | -- |
| YOLOv10-Detection | float | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | QNN_DLC | 5.301 ms | 1 - 111 MB | NPU | -- |
| YOLOv10-Detection | float | SA7255P ADP | Qualcomm® SA7255P | TFLITE | 14.058 ms | 0 - 69 MB | NPU | -- |
| YOLOv10-Detection | float | SA7255P ADP | Qualcomm® SA7255P | QNN_DLC | 12.801 ms | 0 - 92 MB | NPU | -- |
| YOLOv10-Detection | float | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | TFLITE | 4.52 ms | 0 - 24 MB | NPU | -- |
| YOLOv10-Detection | float | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | QNN_DLC | 3.801 ms | 0 - 76 MB | NPU | -- |
| YOLOv10-Detection | float | SA8295P ADP | Qualcomm® SA8295P | TFLITE | 8.66 ms | 0 - 34 MB | NPU | -- |
| YOLOv10-Detection | float | SA8295P ADP | Qualcomm® SA8295P | QNN_DLC | 8.164 ms | 4 - 39 MB | NPU | -- |
| YOLOv10-Detection | float | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | TFLITE | 4.526 ms | 0 - 20 MB | NPU | -- |
| YOLOv10-Detection | float | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | QNN_DLC | 3.789 ms | 0 - 83 MB | NPU | -- |
| YOLOv10-Detection | float | SA8775P ADP | Qualcomm® SA8775P | TFLITE | 5.844 ms | 0 - 69 MB | NPU | -- |
| YOLOv10-Detection | float | SA8775P ADP | Qualcomm® SA8775P | QNN_DLC | 5.301 ms | 1 - 111 MB | NPU | -- |
| YOLOv10-Detection | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | TFLITE | 4.509 ms | 0 - 26 MB | NPU | -- |
| YOLOv10-Detection | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | QNN_DLC | 3.816 ms | 0 - 71 MB | NPU | -- |
| YOLOv10-Detection | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | ONNX | 5.865 ms | 0 - 34 MB | NPU | -- |
| YOLOv10-Detection | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | TFLITE | 3.326 ms | 0 - 83 MB | NPU | -- |
| YOLOv10-Detection | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | QNN_DLC | 2.71 ms | 5 - 222 MB | NPU | -- |
| YOLOv10-Detection | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | ONNX | 4.058 ms | 3 - 175 MB | NPU | -- |
| YOLOv10-Detection | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | TFLITE | 3.12 ms | 0 - 75 MB | NPU | -- |
| YOLOv10-Detection | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | QNN_DLC | 2.541 ms | 5 - 99 MB | NPU | -- |
| YOLOv10-Detection | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | ONNX | 3.06 ms | 5 - 103 MB | NPU | -- |
| YOLOv10-Detection | float | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN_DLC | 4.233 ms | 122 - 122 MB | NPU | -- |
| YOLOv10-Detection | float | Snapdragon X Elite CRD | Snapdragon® X Elite | ONNX | 5.989 ms | 5 - 5 MB | NPU | -- |
| YOLOv10-Detection | w8a16 | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | QNN_DLC | 7.188 ms | 2 - 31 MB | NPU | -- |
| YOLOv10-Detection | w8a16 | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | QNN_DLC | 4.545 ms | 2 - 42 MB | NPU | -- |
| YOLOv10-Detection | w8a16 | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | QNN_DLC | 3.826 ms | 2 - 13 MB | NPU | -- |
| YOLOv10-Detection | w8a16 | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | QNN_DLC | 4.39 ms | 0 - 32 MB | NPU | -- |
| YOLOv10-Detection | w8a16 | RB3 Gen 2 (Proxy) | Qualcomm® QCS6490 (Proxy) | QNN_DLC | 12.319 ms | 0 - 36 MB | NPU | -- |
| YOLOv10-Detection | w8a16 | SA7255P ADP | Qualcomm® SA7255P | QNN_DLC | 7.188 ms | 2 - 31 MB | NPU | -- |
| YOLOv10-Detection | w8a16 | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | QNN_DLC | 3.826 ms | 2 - 12 MB | NPU | -- |
| YOLOv10-Detection | w8a16 | SA8295P ADP | Qualcomm® SA8295P | QNN_DLC | 5.083 ms | 2 - 40 MB | NPU | -- |
| YOLOv10-Detection | w8a16 | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | QNN_DLC | 3.824 ms | 2 - 13 MB | NPU | -- |
| YOLOv10-Detection | w8a16 | SA8775P ADP | Qualcomm® SA8775P | QNN_DLC | 4.39 ms | 0 - 32 MB | NPU | -- |
| YOLOv10-Detection | w8a16 | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | QNN_DLC | 3.821 ms | 2 - 12 MB | NPU | -- |
| YOLOv10-Detection | w8a16 | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | ONNX | 71.55 ms | 0 - 195 MB | NPU | -- |
| YOLOv10-Detection | w8a16 | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | QNN_DLC | 2.569 ms | 2 - 41 MB | NPU | -- |
| YOLOv10-Detection | w8a16 | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | ONNX | 54.399 ms | 13 - 1490 MB | NPU | -- |
| YOLOv10-Detection | w8a16 | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | QNN_DLC | 2.184 ms | 2 - 41 MB | NPU | -- |
| YOLOv10-Detection | w8a16 | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | ONNX | 59.927 ms | 23 - 1244 MB | NPU | -- |
| YOLOv10-Detection | w8a16 | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN_DLC | 4.22 ms | 5 - 5 MB | NPU | -- |
| YOLOv10-Detection | w8a16 | Snapdragon X Elite CRD | Snapdragon® X Elite | ONNX | 79.978 ms | 30 - 30 MB | NPU | -- |
| YOLOv10-Detection | w8a8 | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | TFLITE | 3.695 ms | 0 - 26 MB | NPU | -- |
| YOLOv10-Detection | w8a8 | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | QNN_DLC | 3.598 ms | 1 - 27 MB | NPU | -- |
| YOLOv10-Detection | w8a8 | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | TFLITE | 1.995 ms | 0 - 35 MB | NPU | -- |
| YOLOv10-Detection | w8a8 | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | QNN_DLC | 2.033 ms | 1 - 39 MB | NPU | -- |
| YOLOv10-Detection | w8a8 | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | TFLITE | 1.848 ms | 0 - 13 MB | NPU | -- |
| YOLOv10-Detection | w8a8 | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | QNN_DLC | 1.74 ms | 1 - 15 MB | NPU | -- |
| YOLOv10-Detection | w8a8 | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | TFLITE | 2.281 ms | 0 - 26 MB | NPU | -- |
| YOLOv10-Detection | w8a8 | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | QNN_DLC | 2.171 ms | 1 - 27 MB | NPU | -- |
| YOLOv10-Detection | w8a8 | RB3 Gen 2 (Proxy) | Qualcomm® QCS6490 (Proxy) | TFLITE | 4.441 ms | 0 - 34 MB | NPU | -- |
| YOLOv10-Detection | w8a8 | RB3 Gen 2 (Proxy) | Qualcomm® QCS6490 (Proxy) | QNN_DLC | 5.318 ms | 1 - 35 MB | NPU | -- |
| YOLOv10-Detection | w8a8 | SA7255P ADP | Qualcomm® SA7255P | TFLITE | 3.695 ms | 0 - 26 MB | NPU | -- |
| YOLOv10-Detection | w8a8 | SA7255P ADP | Qualcomm® SA7255P | QNN_DLC | 3.598 ms | 1 - 27 MB | NPU | -- |
| YOLOv10-Detection | w8a8 | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | TFLITE | 1.835 ms | 0 - 14 MB | NPU | -- |
| YOLOv10-Detection | w8a8 | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | QNN_DLC | 1.744 ms | 1 - 15 MB | NPU | -- |
| YOLOv10-Detection | w8a8 | SA8295P ADP | Qualcomm® SA8295P | TFLITE | 2.701 ms | 0 - 32 MB | NPU | -- |
| YOLOv10-Detection | w8a8 | SA8295P ADP | Qualcomm® SA8295P | QNN_DLC | 2.595 ms | 1 - 33 MB | NPU | -- |
| YOLOv10-Detection | w8a8 | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | TFLITE | 1.833 ms | 0 - 13 MB | NPU | -- |
| YOLOv10-Detection | w8a8 | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | QNN_DLC | 1.753 ms | 1 - 14 MB | NPU | -- |
| YOLOv10-Detection | w8a8 | SA8775P ADP | Qualcomm® SA8775P | TFLITE | 2.281 ms | 0 - 26 MB | NPU | -- |
| YOLOv10-Detection | w8a8 | SA8775P ADP | Qualcomm® SA8775P | QNN_DLC | 2.171 ms | 1 - 27 MB | NPU | -- |
| YOLOv10-Detection | w8a8 | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | TFLITE | 1.838 ms | 0 - 14 MB | NPU | -- |
| YOLOv10-Detection | w8a8 | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | QNN_DLC | 1.757 ms | 1 - 14 MB | NPU | -- |
| YOLOv10-Detection | w8a8 | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | ONNX | 7.671 ms | 0 - 31 MB | NPU | -- |
| YOLOv10-Detection | w8a8 | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | TFLITE | 1.222 ms | 0 - 36 MB | NPU | -- |
| YOLOv10-Detection | w8a8 | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | QNN_DLC | 1.2 ms | 1 - 35 MB | NPU | -- |
| YOLOv10-Detection | w8a8 | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | ONNX | 5.665 ms | 0 - 77 MB | NPU | -- |
| YOLOv10-Detection | w8a8 | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | TFLITE | 1.114 ms | 0 - 27 MB | NPU | -- |
| YOLOv10-Detection | w8a8 | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | QNN_DLC | 1.06 ms | 1 - 35 MB | NPU | -- |
| YOLOv10-Detection | w8a8 | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | ONNX | 4.972 ms | 0 - 85 MB | NPU | -- |
| YOLOv10-Detection | w8a8 | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN_DLC | 1.99 ms | 1 - 1 MB | NPU | -- |
| YOLOv10-Detection | w8a8 | Snapdragon X Elite CRD | Snapdragon® X Elite | ONNX | 8.535 ms | 1 - 1 MB | NPU | -- |
## Installation
Install the package via pip:
```bash
pip install "qai-hub-models[yolov10-det]"
```
## Configure Qualcomm® AI Hub to run this model on a cloud-hosted device
Sign in to [Qualcomm® AI Hub](https://app.aihub.qualcomm.com/) with your
Qualcomm® ID. Once signed in, navigate to `Account -> Settings -> API Token`.
With this API token, you can configure your client to run models on
cloud-hosted devices.
```bash
qai-hub configure --api_token API_TOKEN
```
Navigate to [docs](https://app.aihub.qualcomm.com/docs/) for more information.
## Demo off target
The package contains a simple end-to-end demo that downloads pre-trained
weights and runs this model on a sample input.
```bash
python -m qai_hub_models.models.yolov10_det.demo
```
The above demo runs a reference implementation of pre-processing, model
inference, and post-processing.
**NOTE**: If you are running in a Jupyter Notebook or a Google Colab-like
environment, add the following to your cell instead:
```
%run -m qai_hub_models.models.yolov10_det.demo
```
### Run model on a cloud-hosted device
In addition to the demo, you can run the model on a cloud-hosted Qualcomm®
device. This script does the following:
* Performance check on-device on a cloud-hosted device
* Downloads compiled assets that can be deployed on-device for Android.
* Accuracy check between PyTorch and on-device outputs.
```bash
python -m qai_hub_models.models.yolov10_det.export
```
## How does this work?
This [export script](https://aihub.qualcomm.com/models/yolov10_det/qai_hub_models/models/YOLOv10-Detection/export.py)
leverages [Qualcomm® AI Hub](https://aihub.qualcomm.com/) to optimize, validate, and deploy this model
on-device. Let's go through each step below in detail:
Step 1: **Compile model for on-device deployment**
To compile a PyTorch model for on-device deployment, we first trace the model
in memory using `torch.jit.trace` and then call the `submit_compile_job` API.
```python
import torch
import qai_hub as hub
from qai_hub_models.models.yolov10_det import Model
# Load the model
torch_model = Model.from_pretrained()
# Device
device = hub.Device("Samsung Galaxy S24")
# Trace model
input_shape = torch_model.get_input_spec()
sample_inputs = torch_model.sample_inputs()
pt_model = torch.jit.trace(torch_model, [torch.tensor(data[0]) for _, data in sample_inputs.items()])
# Compile model on a specific device
compile_job = hub.submit_compile_job(
model=pt_model,
device=device,
input_specs=torch_model.get_input_spec(),
)
# Get target model to run on-device
target_model = compile_job.get_target_model()
```
Step 2: **Performance profiling on cloud-hosted device**
After compiling the model in step 1, it can be profiled on-device using the
`target_model`. Note that this script runs the model on a device automatically
provisioned in the cloud. Once the job is submitted, you can navigate to a
provided job URL to view a variety of on-device performance metrics.
```python
profile_job = hub.submit_profile_job(
model=target_model,
device=device,
)
```
Step 3: **Verify on-device accuracy**
To verify the accuracy of the model on-device, you can run on-device inference
on sample input data on the same cloud-hosted device.
```python
input_data = torch_model.sample_inputs()
inference_job = hub.submit_inference_job(
model=target_model,
device=device,
inputs=input_data,
)
on_device_output = inference_job.download_output_data()
```
With the model output, you can compute metrics such as PSNR or relative error,
or spot-check it against the expected output.
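As a sketch of such a check (the array names here are hypothetical stand-ins: the reference would come from running `torch_model` directly, and the on-device array from `inference_job.download_output_data()`), a PSNR comparison could look like:

```python
import numpy as np

def psnr(expected, actual, peak=None):
    """Peak signal-to-noise ratio in dB; higher means closer agreement."""
    mse = np.mean((np.asarray(expected, dtype=np.float64) -
                   np.asarray(actual, dtype=np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    if peak is None:
        # Use the reference's peak magnitude as the signal peak.
        peak = float(np.abs(expected).max())
    return 10.0 * np.log10(peak ** 2 / mse)

# Hypothetical data standing in for the PyTorch and on-device outputs.
reference = np.random.rand(1, 84, 8400).astype(np.float32)
on_device = reference + np.random.normal(0, 1e-3, reference.shape).astype(np.float32)
print(f"PSNR: {psnr(reference, on_device):.1f} dB")
```

A high PSNR (tens of dB) indicates the on-device output closely tracks the reference; a low value suggests a conversion or quantization issue worth investigating.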
**Note**: This on-device profiling and inference requires access to Qualcomm®
AI Hub. [Sign up for access](https://myaccount.qualcomm.com/signup).
## Run demo on a cloud-hosted device
You can also run the demo on-device.
```bash
python -m qai_hub_models.models.yolov10_det.demo --eval-mode on-device
```
**NOTE**: If you are running in a Jupyter Notebook or a Google Colab-like
environment, add the following to your cell instead:
```
%run -m qai_hub_models.models.yolov10_det.demo -- --eval-mode on-device
```
## Deploying compiled model to Android
The models can be deployed using multiple runtimes:
- TensorFlow Lite (`.tflite` export): [This
tutorial](https://www.tensorflow.org/lite/android/quickstart) provides a
guide to deploy the .tflite model in an Android application.
- QNN (`.so` export): This [sample
app](https://docs.qualcomm.com/bundle/publicresource/topics/80-63442-50/sample_app.html)
provides instructions on how to use the `.so` shared library in an Android application.
## View on Qualcomm® AI Hub
Get more details on YOLOv10-Detection's performance across various devices [here](https://aihub.qualcomm.com/models/yolov10_det).
Explore all available models on [Qualcomm® AI Hub](https://aihub.qualcomm.com/)
## License
* The license for the original implementation of YOLOv10-Detection can be found
[here](https://github.com/ultralytics/ultralytics/blob/main/LICENSE).
* The license for the compiled assets for on-device deployment can be found [here](https://github.com/ultralytics/ultralytics/blob/main/LICENSE)
## References
* [YOLOv10: Real-Time End-to-End Object Detection](https://arxiv.org/abs/2405.14458)
* [Source Model Implementation](https://github.com/ultralytics/ultralytics/tree/main/ultralytics/models/yolo/detect)
## Community
* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
* For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com).
|
seraphimzzzz/555409
|
seraphimzzzz
| 2025-08-30T00:14:43Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-30T00:14:38Z |
[View on Civ Archive](https://civarchive.com/models/572643?modelVersionId=640608)
|
qualcomm/WideResNet50
|
qualcomm
| 2025-08-30T00:14:20Z | 114 | 0 |
pytorch
|
[
"pytorch",
"tflite",
"backbone",
"android",
"image-classification",
"arxiv:1605.07146",
"license:other",
"region:us"
] |
image-classification
| 2024-02-25T22:47:53Z |
---
library_name: pytorch
license: other
tags:
- backbone
- android
pipeline_tag: image-classification
---

# WideResNet50: Optimized for Mobile Deployment
## Imagenet classifier and general purpose backbone
WideResNet50 is a machine learning model that can classify images from the Imagenet dataset. It can also be used as a backbone in building more complex models for specific use cases.
This model is an implementation of WideResNet50 found [here](https://github.com/pytorch/vision/blob/main/torchvision/models/resnet.py).
This repository provides scripts to run WideResNet50 on Qualcomm® devices.
More details on model performance across various devices can be found
[here](https://aihub.qualcomm.com/models/wideresnet50).
### Model Details
- **Model Type:** Model_use_case.image_classification
- **Model Stats:**
- Model checkpoint: Imagenet
- Input resolution: 224x224
- Number of parameters: 68.9M
- Model size (float): 263 MB
- Model size (w8a8): 66.6 MB
| Model | Precision | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Primary Compute Unit | Target Model
|---|---|---|---|---|---|---|---|---|
| WideResNet50 | float | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | TFLITE | 24.024 ms | 0 - 91 MB | NPU | [WideResNet50.tflite](https://huggingface.co/qualcomm/WideResNet50/blob/main/WideResNet50.tflite) |
| WideResNet50 | float | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | QNN_DLC | 24.142 ms | 1 - 41 MB | NPU | [WideResNet50.dlc](https://huggingface.co/qualcomm/WideResNet50/blob/main/WideResNet50.dlc) |
| WideResNet50 | float | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | TFLITE | 6.831 ms | 0 - 171 MB | NPU | [WideResNet50.tflite](https://huggingface.co/qualcomm/WideResNet50/blob/main/WideResNet50.tflite) |
| WideResNet50 | float | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | QNN_DLC | 10.399 ms | 0 - 40 MB | NPU | [WideResNet50.dlc](https://huggingface.co/qualcomm/WideResNet50/blob/main/WideResNet50.dlc) |
| WideResNet50 | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | TFLITE | 4.852 ms | 0 - 875 MB | NPU | [WideResNet50.tflite](https://huggingface.co/qualcomm/WideResNet50/blob/main/WideResNet50.tflite) |
| WideResNet50 | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | QNN_DLC | 4.792 ms | 0 - 10 MB | NPU | [WideResNet50.dlc](https://huggingface.co/qualcomm/WideResNet50/blob/main/WideResNet50.dlc) |
| WideResNet50 | float | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | TFLITE | 7.269 ms | 0 - 92 MB | NPU | [WideResNet50.tflite](https://huggingface.co/qualcomm/WideResNet50/blob/main/WideResNet50.tflite) |
| WideResNet50 | float | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | QNN_DLC | 7.146 ms | 1 - 42 MB | NPU | [WideResNet50.dlc](https://huggingface.co/qualcomm/WideResNet50/blob/main/WideResNet50.dlc) |
| WideResNet50 | float | SA7255P ADP | Qualcomm® SA7255P | TFLITE | 24.024 ms | 0 - 91 MB | NPU | [WideResNet50.tflite](https://huggingface.co/qualcomm/WideResNet50/blob/main/WideResNet50.tflite) |
| WideResNet50 | float | SA7255P ADP | Qualcomm® SA7255P | QNN_DLC | 24.142 ms | 1 - 41 MB | NPU | [WideResNet50.dlc](https://huggingface.co/qualcomm/WideResNet50/blob/main/WideResNet50.dlc) |
| WideResNet50 | float | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | TFLITE | 4.845 ms | 0 - 870 MB | NPU | [WideResNet50.tflite](https://huggingface.co/qualcomm/WideResNet50/blob/main/WideResNet50.tflite) |
| WideResNet50 | float | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | QNN_DLC | 4.805 ms | 1 - 16 MB | NPU | [WideResNet50.dlc](https://huggingface.co/qualcomm/WideResNet50/blob/main/WideResNet50.dlc) |
| WideResNet50 | float | SA8295P ADP | Qualcomm® SA8295P | TFLITE | 7.912 ms | 0 - 87 MB | NPU | [WideResNet50.tflite](https://huggingface.co/qualcomm/WideResNet50/blob/main/WideResNet50.tflite) |
| WideResNet50 | float | SA8295P ADP | Qualcomm® SA8295P | QNN_DLC | 7.759 ms | 1 - 32 MB | NPU | [WideResNet50.dlc](https://huggingface.co/qualcomm/WideResNet50/blob/main/WideResNet50.dlc) |
| WideResNet50 | float | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | TFLITE | 4.85 ms | 0 - 867 MB | NPU | [WideResNet50.tflite](https://huggingface.co/qualcomm/WideResNet50/blob/main/WideResNet50.tflite) |
| WideResNet50 | float | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | QNN_DLC | 4.797 ms | 1 - 11 MB | NPU | [WideResNet50.dlc](https://huggingface.co/qualcomm/WideResNet50/blob/main/WideResNet50.dlc) |
| WideResNet50 | float | SA8775P ADP | Qualcomm® SA8775P | TFLITE | 7.269 ms | 0 - 92 MB | NPU | [WideResNet50.tflite](https://huggingface.co/qualcomm/WideResNet50/blob/main/WideResNet50.tflite) |
| WideResNet50 | float | SA8775P ADP | Qualcomm® SA8775P | QNN_DLC | 7.146 ms | 1 - 42 MB | NPU | [WideResNet50.dlc](https://huggingface.co/qualcomm/WideResNet50/blob/main/WideResNet50.dlc) |
| WideResNet50 | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | TFLITE | 4.851 ms | 0 - 858 MB | NPU | [WideResNet50.tflite](https://huggingface.co/qualcomm/WideResNet50/blob/main/WideResNet50.tflite) |
| WideResNet50 | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | QNN_DLC | 4.802 ms | 1 - 16 MB | NPU | [WideResNet50.dlc](https://huggingface.co/qualcomm/WideResNet50/blob/main/WideResNet50.dlc) |
| WideResNet50 | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | TFLITE | 3.553 ms | 0 - 188 MB | NPU | [WideResNet50.tflite](https://huggingface.co/qualcomm/WideResNet50/blob/main/WideResNet50.tflite) |
| WideResNet50 | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | QNN_DLC | 3.607 ms | 1 - 50 MB | NPU | [WideResNet50.dlc](https://huggingface.co/qualcomm/WideResNet50/blob/main/WideResNet50.dlc) |
| WideResNet50 | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | TFLITE | 3.367 ms | 0 - 96 MB | NPU | [WideResNet50.tflite](https://huggingface.co/qualcomm/WideResNet50/blob/main/WideResNet50.tflite) |
| WideResNet50 | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | QNN_DLC | 3.21 ms | 1 - 44 MB | NPU | [WideResNet50.dlc](https://huggingface.co/qualcomm/WideResNet50/blob/main/WideResNet50.dlc) |
| WideResNet50 | float | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN_DLC | 4.686 ms | 457 - 457 MB | NPU | [WideResNet50.dlc](https://huggingface.co/qualcomm/WideResNet50/blob/main/WideResNet50.dlc) |
| WideResNet50 | w8a8 | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | TFLITE | 3.812 ms | 0 - 43 MB | NPU | [WideResNet50.tflite](https://huggingface.co/qualcomm/WideResNet50/blob/main/WideResNet50_w8a8.tflite) |
| WideResNet50 | w8a8 | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | QNN_DLC | 4.027 ms | 0 - 44 MB | NPU | [WideResNet50.dlc](https://huggingface.co/qualcomm/WideResNet50/blob/main/WideResNet50_w8a8.dlc) |
| WideResNet50 | w8a8 | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | TFLITE | 2.117 ms | 0 - 113 MB | NPU | [WideResNet50.tflite](https://huggingface.co/qualcomm/WideResNet50/blob/main/WideResNet50_w8a8.tflite) |
| WideResNet50 | w8a8 | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | QNN_DLC | 2.514 ms | 0 - 106 MB | NPU | [WideResNet50.dlc](https://huggingface.co/qualcomm/WideResNet50/blob/main/WideResNet50_w8a8.dlc) |
| WideResNet50 | w8a8 | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | TFLITE | 1.771 ms | 0 - 383 MB | NPU | [WideResNet50.tflite](https://huggingface.co/qualcomm/WideResNet50/blob/main/WideResNet50_w8a8.tflite) |
| WideResNet50 | w8a8 | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | QNN_DLC | 1.873 ms | 0 - 8 MB | NPU | [WideResNet50.dlc](https://huggingface.co/qualcomm/WideResNet50/blob/main/WideResNet50_w8a8.dlc) |
| WideResNet50 | w8a8 | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | TFLITE | 1.909 ms | 0 - 43 MB | NPU | [WideResNet50.tflite](https://huggingface.co/qualcomm/WideResNet50/blob/main/WideResNet50_w8a8.tflite) |
| WideResNet50 | w8a8 | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | QNN_DLC | 2.014 ms | 0 - 44 MB | NPU | [WideResNet50.dlc](https://huggingface.co/qualcomm/WideResNet50/blob/main/WideResNet50_w8a8.dlc) |
| WideResNet50 | w8a8 | RB3 Gen 2 (Proxy) | Qualcomm® QCS6490 (Proxy) | TFLITE | 7.525 ms | 0 - 96 MB | NPU | [WideResNet50.tflite](https://huggingface.co/qualcomm/WideResNet50/blob/main/WideResNet50_w8a8.tflite) |
| WideResNet50 | w8a8 | RB3 Gen 2 (Proxy) | Qualcomm® QCS6490 (Proxy) | QNN_DLC | 9.998 ms | 0 - 98 MB | NPU | [WideResNet50.dlc](https://huggingface.co/qualcomm/WideResNet50/blob/main/WideResNet50_w8a8.dlc) |
| WideResNet50 | w8a8 | RB5 (Proxy) | Qualcomm® QCS8250 (Proxy) | TFLITE | 24.552 ms | 0 - 7 MB | NPU | [WideResNet50.tflite](https://huggingface.co/qualcomm/WideResNet50/blob/main/WideResNet50_w8a8.tflite) |
| WideResNet50 | w8a8 | SA7255P ADP | Qualcomm® SA7255P | TFLITE | 3.812 ms | 0 - 43 MB | NPU | [WideResNet50.tflite](https://huggingface.co/qualcomm/WideResNet50/blob/main/WideResNet50_w8a8.tflite) |
| WideResNet50 | w8a8 | SA7255P ADP | Qualcomm® SA7255P | QNN_DLC | 4.027 ms | 0 - 44 MB | NPU | [WideResNet50.dlc](https://huggingface.co/qualcomm/WideResNet50/blob/main/WideResNet50_w8a8.dlc) |
| WideResNet50 | w8a8 | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | TFLITE | 1.779 ms | 0 - 9 MB | NPU | [WideResNet50.tflite](https://huggingface.co/qualcomm/WideResNet50/blob/main/WideResNet50_w8a8.tflite) |
| WideResNet50 | w8a8 | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | QNN_DLC | 1.869 ms | 0 - 369 MB | NPU | [WideResNet50.dlc](https://huggingface.co/qualcomm/WideResNet50/blob/main/WideResNet50_w8a8.dlc) |
| WideResNet50 | w8a8 | SA8295P ADP | Qualcomm® SA8295P | TFLITE | 2.589 ms | 0 - 48 MB | NPU | [WideResNet50.tflite](https://huggingface.co/qualcomm/WideResNet50/blob/main/WideResNet50_w8a8.tflite) |
| WideResNet50 | w8a8 | SA8295P ADP | Qualcomm® SA8295P | QNN_DLC | 2.76 ms | 0 - 50 MB | NPU | [WideResNet50.dlc](https://huggingface.co/qualcomm/WideResNet50/blob/main/WideResNet50_w8a8.dlc) |
| WideResNet50 | w8a8 | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | TFLITE | 1.774 ms | 0 - 390 MB | NPU | [WideResNet50.tflite](https://huggingface.co/qualcomm/WideResNet50/blob/main/WideResNet50_w8a8.tflite) |
| WideResNet50 | w8a8 | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | QNN_DLC | 1.871 ms | 0 - 365 MB | NPU | [WideResNet50.dlc](https://huggingface.co/qualcomm/WideResNet50/blob/main/WideResNet50_w8a8.dlc) |
| WideResNet50 | w8a8 | SA8775P ADP | Qualcomm® SA8775P | TFLITE | 1.909 ms | 0 - 43 MB | NPU | [WideResNet50.tflite](https://huggingface.co/qualcomm/WideResNet50/blob/main/WideResNet50_w8a8.tflite) |
| WideResNet50 | w8a8 | SA8775P ADP | Qualcomm® SA8775P | QNN_DLC | 2.014 ms | 0 - 44 MB | NPU | [WideResNet50.dlc](https://huggingface.co/qualcomm/WideResNet50/blob/main/WideResNet50_w8a8.dlc) |
| WideResNet50 | w8a8 | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | TFLITE | 1.777 ms | 0 - 386 MB | NPU | [WideResNet50.tflite](https://huggingface.co/qualcomm/WideResNet50/blob/main/WideResNet50_w8a8.tflite) |
| WideResNet50 | w8a8 | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | QNN_DLC | 1.875 ms | 0 - 363 MB | NPU | [WideResNet50.dlc](https://huggingface.co/qualcomm/WideResNet50/blob/main/WideResNet50_w8a8.dlc) |
| WideResNet50 | w8a8 | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | TFLITE | 1.344 ms | 0 - 108 MB | NPU | [WideResNet50.tflite](https://huggingface.co/qualcomm/WideResNet50/blob/main/WideResNet50_w8a8.tflite) |
| WideResNet50 | w8a8 | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | QNN_DLC | 1.436 ms | 0 - 109 MB | NPU | [WideResNet50.dlc](https://huggingface.co/qualcomm/WideResNet50/blob/main/WideResNet50_w8a8.dlc) |
| WideResNet50 | w8a8 | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | TFLITE | 1.213 ms | 0 - 52 MB | NPU | [WideResNet50.tflite](https://huggingface.co/qualcomm/WideResNet50/blob/main/WideResNet50_w8a8.tflite) |
| WideResNet50 | w8a8 | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | QNN_DLC | 1.237 ms | 0 - 48 MB | NPU | [WideResNet50.dlc](https://huggingface.co/qualcomm/WideResNet50/blob/main/WideResNet50_w8a8.dlc) |
| WideResNet50 | w8a8 | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN_DLC | 1.809 ms | 393 - 393 MB | NPU | [WideResNet50.dlc](https://huggingface.co/qualcomm/WideResNet50/blob/main/WideResNet50_w8a8.dlc) |
## Installation
Install the package via pip:
```bash
pip install qai-hub-models
```
## Configure Qualcomm® AI Hub to run this model on a cloud-hosted device
Sign in to [Qualcomm® AI Hub](https://app.aihub.qualcomm.com/) with your
Qualcomm® ID. Once signed in, navigate to `Account -> Settings -> API Token`.
With this API token, you can configure your client to run models on
cloud-hosted devices.
```bash
qai-hub configure --api_token API_TOKEN
```
Navigate to [docs](https://app.aihub.qualcomm.com/docs/) for more information.
## Demo off target
The package contains a simple end-to-end demo that downloads pre-trained
weights and runs this model on a sample input.
```bash
python -m qai_hub_models.models.wideresnet50.demo
```
The above demo runs a reference implementation of pre-processing, model
inference, and post-processing.
**NOTE**: If you are running in a Jupyter Notebook or a Google Colab-like
environment, add the following to your cell instead:
```
%run -m qai_hub_models.models.wideresnet50.demo
```
### Run model on a cloud-hosted device
In addition to the demo, you can run the model on a cloud-hosted Qualcomm®
device. This script does the following:
* Performance check on-device on a cloud-hosted device
* Downloads compiled assets that can be deployed on-device for Android.
* Accuracy check between PyTorch and on-device outputs.
```bash
python -m qai_hub_models.models.wideresnet50.export
```
## How does this work?
This [export script](https://aihub.qualcomm.com/models/wideresnet50/qai_hub_models/models/WideResNet50/export.py)
leverages [Qualcomm® AI Hub](https://aihub.qualcomm.com/) to optimize, validate, and deploy this model
on-device. Let's go through each step below in detail:
Step 1: **Compile model for on-device deployment**
To compile a PyTorch model for on-device deployment, we first trace the model
in memory using `torch.jit.trace` and then call the `submit_compile_job` API.
```python
import torch
import qai_hub as hub
from qai_hub_models.models.wideresnet50 import Model
# Load the model
torch_model = Model.from_pretrained()
# Device
device = hub.Device("Samsung Galaxy S24")
# Trace model
input_shape = torch_model.get_input_spec()
sample_inputs = torch_model.sample_inputs()
pt_model = torch.jit.trace(torch_model, [torch.tensor(data[0]) for _, data in sample_inputs.items()])
# Compile model on a specific device
compile_job = hub.submit_compile_job(
model=pt_model,
device=device,
input_specs=torch_model.get_input_spec(),
)
# Get target model to run on-device
target_model = compile_job.get_target_model()
```
Step 2: **Performance profiling on cloud-hosted device**
After compiling the model in step 1, it can be profiled on-device using the
`target_model`. Note that this script runs the model on a device automatically
provisioned in the cloud. Once the job is submitted, you can navigate to a
provided job URL to view a variety of on-device performance metrics.
```python
profile_job = hub.submit_profile_job(
model=target_model,
device=device,
)
```
Step 3: **Verify on-device accuracy**
To verify the accuracy of the model on-device, you can run on-device inference
on sample input data on the same cloud-hosted device.
```python
input_data = torch_model.sample_inputs()
inference_job = hub.submit_inference_job(
model=target_model,
device=device,
inputs=input_data,
)
on_device_output = inference_job.download_output_data()
```
With the model output, you can compute metrics such as PSNR or relative error,
or spot-check it against the expected output.
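For an image classifier like this one, a simple sanity check is top-1 class agreement between the two outputs. This is a minimal sketch with hypothetical array names: the reference logits would come from running the PyTorch model directly, and the device logits from `inference_job.download_output_data()`.

```python
import numpy as np

def top1_agreement(ref_logits, device_logits):
    """Fraction of samples whose on-device top-1 class matches the reference."""
    ref = np.argmax(ref_logits, axis=-1)
    dev = np.argmax(device_logits, axis=-1)
    return float(np.mean(ref == dev))

# Hypothetical logits standing in for the real outputs (batch of 8, 1000 classes).
ref_logits = np.random.rand(8, 1000).astype(np.float32)
device_logits = ref_logits + np.random.normal(0, 1e-4, ref_logits.shape).astype(np.float32)
print(f"top-1 agreement: {top1_agreement(ref_logits, device_logits):.2%}")
```

Agreement well below 100% on real inputs usually points at a preprocessing mismatch or quantization loss rather than random noise.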
**Note**: This on-device profiling and inference requires access to Qualcomm®
AI Hub. [Sign up for access](https://myaccount.qualcomm.com/signup).
## Run demo on a cloud-hosted device
You can also run the demo on-device.
```bash
python -m qai_hub_models.models.wideresnet50.demo --eval-mode on-device
```
**NOTE**: If you are running in a Jupyter Notebook or a Google Colab-like
environment, add the following to your cell instead:
```
%run -m qai_hub_models.models.wideresnet50.demo -- --eval-mode on-device
```
## Deploying compiled model to Android
The models can be deployed using multiple runtimes:
- TensorFlow Lite (`.tflite` export): [This
tutorial](https://www.tensorflow.org/lite/android/quickstart) provides a
guide to deploy the .tflite model in an Android application.
- QNN (`.so` export): This [sample
app](https://docs.qualcomm.com/bundle/publicresource/topics/80-63442-50/sample_app.html)
provides instructions on how to use the `.so` shared library in an Android application.
## View on Qualcomm® AI Hub
Get more details on WideResNet50's performance across various devices [here](https://aihub.qualcomm.com/models/wideresnet50).
Explore all available models on [Qualcomm® AI Hub](https://aihub.qualcomm.com/)
## License
* The license for the original implementation of WideResNet50 can be found
[here](https://github.com/pytorch/vision/blob/main/LICENSE).
* The license for the compiled assets for on-device deployment can be found [here](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/Qualcomm+AI+Hub+Proprietary+License.pdf)
## References
* [Wide Residual Networks](https://arxiv.org/abs/1605.07146)
* [Source Model Implementation](https://github.com/pytorch/vision/blob/main/torchvision/models/resnet.py)
## Community
* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
* For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com).
|
crystalline7/636363
|
crystalline7
| 2025-08-30T00:13:53Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-30T00:13:47Z |
[View on Civ Archive](https://civarchive.com/models/644196?modelVersionId=721770)
|
seraphimzzzz/551969
|
seraphimzzzz
| 2025-08-30T00:13:40Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-30T00:13:34Z |
[View on Civ Archive](https://civarchive.com/models/534443?modelVersionId=637191)
|
qualcomm/Whisper-Tiny
|
qualcomm
| 2025-08-30T00:13:35Z | 0 | 0 |
pytorch
|
[
"pytorch",
"foundation",
"android",
"automatic-speech-recognition",
"license:other",
"region:us"
] |
automatic-speech-recognition
| 2025-08-30T00:13:13Z |
---
library_name: pytorch
license: other
tags:
- foundation
- android
pipeline_tag: automatic-speech-recognition
---

# Whisper-Tiny: Optimized for Mobile Deployment
## Transformer-based automatic speech recognition (ASR) model for multilingual transcription and translation available on HuggingFace
HuggingFace Whisper-Tiny ASR (Automatic Speech Recognition) model is a state-of-the-art system designed for transcribing spoken language into written text. This model is based on the transformer architecture and has been optimized for edge inference by replacing Multi-Head Attention (MHA) with Single-Head Attention (SHA) and linear layers with convolutional (conv) layers. It exhibits robust performance in realistic, noisy environments, making it highly reliable for real-world applications. Specifically, it excels in long-form transcription, capable of accurately transcribing audio clips up to 30 seconds long. Time to first token is the encoder's latency, while time to each additional token is the decoder's latency, where we assume a max decoded length specified below.
This model is an implementation of Whisper-Tiny found [here](https://github.com/huggingface/transformers/tree/v4.42.3/src/transformers/models/whisper).
This repository provides scripts to run Whisper-Tiny on Qualcomm® devices.
More details on model performance across various devices can be found
[here](https://aihub.qualcomm.com/models/whisper_tiny).
### Model Details
- **Model Type:** Model_use_case.speech_recognition
- **Model Stats:**
- Model checkpoint: openai/whisper-tiny
- Input resolution: 80x3000 (30 seconds audio)
- Max decoded sequence length: 200 tokens
- Number of parameters (HfWhisperEncoder): 9.39M
- Model size (HfWhisperEncoder) (float): 35.9 MB
- Number of parameters (HfWhisperDecoder): 28.4M
- Model size (HfWhisperDecoder) (float): 109 MB
| Model | Precision | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Primary Compute Unit | Target Model
|---|---|---|---|---|---|---|---|---|
| HfWhisperEncoder | float | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | QNN_CONTEXT_BINARY | 61.502 ms | 0 - 9 MB | NPU | Use Export Script |
| HfWhisperEncoder | float | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | QNN_CONTEXT_BINARY | 53.824 ms | 1 - 22 MB | NPU | Use Export Script |
| HfWhisperEncoder | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | QNN_CONTEXT_BINARY | 20.104 ms | 1 - 3 MB | NPU | Use Export Script |
| HfWhisperEncoder | float | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | QNN_CONTEXT_BINARY | 23.543 ms | 1 - 11 MB | NPU | Use Export Script |
| HfWhisperEncoder | float | SA7255P ADP | Qualcomm® SA7255P | QNN_CONTEXT_BINARY | 61.502 ms | 0 - 9 MB | NPU | Use Export Script |
| HfWhisperEncoder | float | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | QNN_CONTEXT_BINARY | 20.282 ms | 1 - 2 MB | NPU | Use Export Script |
| HfWhisperEncoder | float | SA8295P ADP | Qualcomm® SA8295P | QNN_CONTEXT_BINARY | 50.611 ms | 1 - 18 MB | NPU | Use Export Script |
| HfWhisperEncoder | float | SA8775P ADP | Qualcomm® SA8775P | QNN_CONTEXT_BINARY | 23.543 ms | 1 - 11 MB | NPU | Use Export Script |
| HfWhisperEncoder | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | QNN_CONTEXT_BINARY | 20.132 ms | 1 - 3 MB | NPU | Use Export Script |
| HfWhisperEncoder | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | PRECOMPILED_QNN_ONNX | 20.493 ms | 5 - 38 MB | NPU | Use Export Script |
| HfWhisperEncoder | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | QNN_CONTEXT_BINARY | 15.35 ms | 0 - 19 MB | NPU | Use Export Script |
| HfWhisperEncoder | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | PRECOMPILED_QNN_ONNX | 15.562 ms | 20 - 38 MB | NPU | Use Export Script |
| HfWhisperEncoder | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | QNN_CONTEXT_BINARY | 12.479 ms | 0 - 15 MB | NPU | Use Export Script |
| HfWhisperEncoder | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | PRECOMPILED_QNN_ONNX | 13.184 ms | 20 - 34 MB | NPU | Use Export Script |
| HfWhisperEncoder | float | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN_CONTEXT_BINARY | 19.708 ms | 0 - 0 MB | NPU | Use Export Script |
| HfWhisperEncoder | float | Snapdragon X Elite CRD | Snapdragon® X Elite | PRECOMPILED_QNN_ONNX | 19.842 ms | 33 - 33 MB | NPU | Use Export Script |
| HfWhisperDecoder | float | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | QNN_CONTEXT_BINARY | 3.589 ms | 10 - 19 MB | NPU | Use Export Script |
| HfWhisperDecoder | float | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | QNN_CONTEXT_BINARY | 2.647 ms | 10 - 27 MB | NPU | Use Export Script |
| HfWhisperDecoder | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | QNN_CONTEXT_BINARY | 2.144 ms | 9 - 12 MB | NPU | Use Export Script |
| HfWhisperDecoder | float | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | QNN_CONTEXT_BINARY | 2.647 ms | 9 - 18 MB | NPU | Use Export Script |
| HfWhisperDecoder | float | SA7255P ADP | Qualcomm® SA7255P | QNN_CONTEXT_BINARY | 3.589 ms | 10 - 19 MB | NPU | Use Export Script |
| HfWhisperDecoder | float | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | QNN_CONTEXT_BINARY | 2.179 ms | 10 - 12 MB | NPU | Use Export Script |
| HfWhisperDecoder | float | SA8295P ADP | Qualcomm® SA8295P | QNN_CONTEXT_BINARY | 3.028 ms | 10 - 25 MB | NPU | Use Export Script |
| HfWhisperDecoder | float | SA8775P ADP | Qualcomm® SA8775P | QNN_CONTEXT_BINARY | 2.647 ms | 9 - 18 MB | NPU | Use Export Script |
| HfWhisperDecoder | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | QNN_CONTEXT_BINARY | 2.154 ms | 1 - 4 MB | NPU | Use Export Script |
| HfWhisperDecoder | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | PRECOMPILED_QNN_ONNX | 2.49 ms | 0 - 87 MB | NPU | Use Export Script |
| HfWhisperDecoder | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | QNN_CONTEXT_BINARY | 1.653 ms | 0 - 22 MB | NPU | Use Export Script |
| HfWhisperDecoder | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | PRECOMPILED_QNN_ONNX | 1.917 ms | 5 - 25 MB | NPU | Use Export Script |
| HfWhisperDecoder | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | QNN_CONTEXT_BINARY | 1.336 ms | 1 - 16 MB | NPU | Use Export Script |
| HfWhisperDecoder | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | PRECOMPILED_QNN_ONNX | 1.675 ms | 0 - 14 MB | NPU | Use Export Script |
| HfWhisperDecoder | float | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN_CONTEXT_BINARY | 1.946 ms | 10 - 10 MB | NPU | Use Export Script |
| HfWhisperDecoder | float | Snapdragon X Elite CRD | Snapdragon® X Elite | PRECOMPILED_QNN_ONNX | 2.063 ms | 83 - 83 MB | NPU | Use Export Script |
## Installation
Install the package via pip:
```bash
pip install "qai-hub-models[whisper-tiny]"
```
## Configure Qualcomm® AI Hub to run this model on a cloud-hosted device
Sign-in to [Qualcomm® AI Hub](https://app.aihub.qualcomm.com/) with your
Qualcomm® ID. Once signed in navigate to `Account -> Settings -> API Token`.
With this API token, you can configure your client to run models on the cloud
hosted devices.
```bash
qai-hub configure --api_token API_TOKEN
```
Navigate to [docs](https://app.aihub.qualcomm.com/docs/) for more information.
## Demo off target
The package contains a simple end-to-end demo that downloads pre-trained
weights and runs this model on a sample input.
```bash
python -m qai_hub_models.models.whisper_tiny.demo
```
The above demo runs a reference implementation of pre-processing, model
inference, and post-processing.
**NOTE**: To run this in a Jupyter Notebook or Google Colab-like
environment, add the following to your cell instead of the command above.
```
%run -m qai_hub_models.models.whisper_tiny.demo
```
### Run model on a cloud-hosted device
In addition to the demo, you can also run the model on a cloud-hosted Qualcomm®
device. This script does the following:
* Performance check on-device on a cloud-hosted device
* Downloads compiled assets that can be deployed on-device for Android.
* Accuracy check between PyTorch and on-device outputs.
```bash
python -m qai_hub_models.models.whisper_tiny.export
```
## How does this work?
This [export script](https://aihub.qualcomm.com/models/whisper_tiny/qai_hub_models/models/Whisper-Tiny/export.py)
leverages [Qualcomm® AI Hub](https://aihub.qualcomm.com/) to optimize, validate, and deploy this model
on-device. Let's go through each step below in detail:
Step 1: **Compile model for on-device deployment**
To compile a PyTorch model for on-device deployment, we first trace the model
in memory using `torch.jit.trace` and then call the `submit_compile_job` API.
```python
import torch
import qai_hub as hub
from qai_hub_models.models.whisper_tiny import Model
# Load the model
torch_model = Model.from_pretrained()
# Device
device = hub.Device("Samsung Galaxy S24")
# Trace model
input_shape = torch_model.get_input_spec()
sample_inputs = torch_model.sample_inputs()
pt_model = torch.jit.trace(torch_model, [torch.tensor(data[0]) for _, data in sample_inputs.items()])
# Compile model on a specific device
compile_job = hub.submit_compile_job(
model=pt_model,
device=device,
input_specs=torch_model.get_input_spec(),
)
# Get target model to run on-device
target_model = compile_job.get_target_model()
```
Step 2: **Performance profiling on cloud-hosted device**
After compiling the model in step 1, it can be profiled on-device using the
`target_model`. Note that this script runs the model on a device automatically
provisioned in the cloud. Once the job is submitted, you can navigate to the
provided job URL to view a variety of on-device performance metrics.
```python
profile_job = hub.submit_profile_job(
model=target_model,
device=device,
)
```
Step 3: **Verify on-device accuracy**
To verify the accuracy of the model on-device, you can run on-device inference
on sample input data on the same cloud-hosted device.
```python
input_data = torch_model.sample_inputs()
inference_job = hub.submit_inference_job(
model=target_model,
device=device,
inputs=input_data,
)
on_device_output = inference_job.download_output_data()
```
With the output of the model, you can compute metrics like PSNR or relative
error, or spot-check the output against the expected output.
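As an illustration, a minimal PSNR check could look like the sketch below. The variable names (`torch_model`, `input_data`, `on_device_output`) follow the snippets above, and the output key `"output_0"` is an assumption that depends on your model's output spec:

```python
import numpy as np

def psnr(reference, test, peak=None):
    """Peak signal-to-noise ratio in dB between two same-shaped arrays."""
    reference = np.asarray(reference, dtype=np.float64)
    test = np.asarray(test, dtype=np.float64)
    mse = np.mean((reference - test) ** 2)
    if mse == 0:
        return float("inf")
    if peak is None:
        # Use the reference signal's peak magnitude as the "peak" value.
        peak = np.max(np.abs(reference))
    return 10.0 * np.log10((peak ** 2) / mse)

# Hypothetical comparison (names follow the snippets above; the output key
# depends on your model's output spec):
# torch_output = torch_model(*[torch.tensor(d[0]) for _, d in input_data.items()])
# print(psnr(torch_output.detach().numpy(), on_device_output["output_0"][0]))
```

Higher PSNR means the on-device output is closer to the PyTorch reference.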
**Note**: This on-device profiling and inference requires access to Qualcomm®
AI Hub. [Sign up for access](https://myaccount.qualcomm.com/signup).
## Deploying compiled model to Android
The models can be deployed using multiple runtimes:
- TensorFlow Lite (`.tflite` export): [This
tutorial](https://www.tensorflow.org/lite/android/quickstart) provides a
guide to deploy the .tflite model in an Android application.
- QNN (`.so` export ): This [sample
app](https://docs.qualcomm.com/bundle/publicresource/topics/80-63442-50/sample_app.html)
provides instructions on how to use the `.so` shared library in an Android application.
## View on Qualcomm® AI Hub
Get more details on Whisper-Tiny's performance across various devices [here](https://aihub.qualcomm.com/models/whisper_tiny).
Explore all available models on [Qualcomm® AI Hub](https://aihub.qualcomm.com/)
## License
* The license for the original implementation of Whisper-Tiny can be found
[here](https://github.com/huggingface/transformers/blob/v4.42.3/LICENSE).
* The license for the compiled assets for on-device deployment can be found [here](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/Qualcomm+AI+Hub+Proprietary+License.pdf)
## References
* [Robust Speech Recognition via Large-Scale Weak Supervision](https://cdn.openai.com/papers/whisper.pdf)
* [Source Model Implementation](https://github.com/huggingface/transformers/tree/v4.42.3/src/transformers/models/whisper)
## Community
* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
* For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com).
|
multimodalart/qwen-tarot
|
multimodalart
| 2025-08-30T00:13:17Z | 0 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"text-to-image",
"diffusers-training",
"lora",
"qwen-image",
"qwen-image-diffusers",
"template:sd-lora",
"base_model:Qwen/Qwen-Image",
"base_model:adapter:Qwen/Qwen-Image",
"license:apache-2.0",
"region:us"
] |
text-to-image
| 2025-08-29T20:39:47Z |
---
base_model: Qwen/Qwen-Image
library_name: diffusers
license: apache-2.0
instance_prompt: a trtcrd of
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- qwen-image
- qwen-image-diffusers
- template:sd-lora
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# Qwen-Image DreamBooth LoRA - multimodalart/qwen-tarot
<Gallery />
## Model description
These are multimodalart/qwen-tarot DreamBooth LoRA weights for Qwen/Qwen-Image.
The weights were trained using [DreamBooth](https://dreambooth.github.io/) with the [Qwen Image diffusers trainer](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/README_qwen.md).
## Trigger words
You should use `a trtcrd of [...], tarot card style` to trigger the image generation.
## Download model
[Download the *.safetensors LoRA](https://huggingface.co/multimodalart/qwen-tarot/tree/main) in the Files & versions tab.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
>>> import torch
>>> from diffusers import QwenImagePipeline
>>> pipe = QwenImagePipeline.from_pretrained(
... "Qwen/Qwen-Image",
... torch_dtype=torch.bfloat16,
... )
>>> pipe.enable_model_cpu_offload()
>>> pipe.load_lora_weights(f"multimodalart/qwen-tarot")
>>> image = pipe(f"a trtcrd of a mecha robot").images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]
|
crystalline7/585654
|
crystalline7
| 2025-08-30T00:13:02Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-30T00:12:56Z |
[View on Civ Archive](https://civarchive.com/models/599199?modelVersionId=670734)
|
bah63843/blockassist-bc-plump_fast_antelope_1756512723
|
bah63843
| 2025-08-30T00:12:56Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"plump fast antelope",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T00:12:47Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- plump fast antelope
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
seraphimzzzz/736398
|
seraphimzzzz
| 2025-08-30T00:12:48Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-30T00:12:43Z |
[View on Civ Archive](https://civarchive.com/models/732647?modelVersionId=822484)
|
sekirr/blockassist-bc-masked_tenacious_whale_1756512720
|
sekirr
| 2025-08-30T00:12:41Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"masked tenacious whale",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T00:12:37Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- masked tenacious whale
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
azherali/python-code-search-tokenizer
|
azherali
| 2025-08-30T00:12:27Z | 0 | 0 |
transformers
|
[
"transformers",
"endpoints_compatible",
"region:us"
] | null | 2025-08-30T00:05:53Z |
---
library_name: transformers
tags: []
---
# Tokenizer Card
<!-- Provide a quick summary of what the model is/does. -->
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("azherali/python-code-search-tokenizer")
example = """class LinearLayer():
def __init__(self, input_size, output_size):
self.weight = torch.randn(input_size, output_size)
self.bias = torch.zeros(output_size)
def __call__(self, x):
return x @ self.weights + self.bias
"""
tokenizer.tokenize(example)
```
|
ultratopaz/497867
|
ultratopaz
| 2025-08-30T00:12:17Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-30T00:12:12Z |
[View on Civ Archive](https://civarchive.com/models/524080?modelVersionId=582271)
|
qualcomm/Whisper-Small
|
qualcomm
| 2025-08-30T00:12:04Z | 0 | 0 |
pytorch
|
[
"pytorch",
"foundation",
"android",
"automatic-speech-recognition",
"license:other",
"region:us"
] |
automatic-speech-recognition
| 2025-08-30T00:10:41Z |
---
library_name: pytorch
license: other
tags:
- foundation
- android
pipeline_tag: automatic-speech-recognition
---

# Whisper-Small: Optimized for Mobile Deployment
## Transformer-based automatic speech recognition (ASR) model for multilingual transcription and translation available on HuggingFace
HuggingFace Whisper-Small ASR (Automatic Speech Recognition) model is a state-of-the-art system designed for transcribing spoken language into written text. This model is based on the transformer architecture and has been optimized for edge inference by replacing Multi-Head Attention (MHA) with Single-Head Attention (SHA) and linear layers with convolutional (conv) layers. It exhibits robust performance in realistic, noisy environments, making it highly reliable for real-world applications. Specifically, it excels in long-form transcription, capable of accurately transcribing audio clips up to 30 seconds long. Time to the first token is the encoder's latency, while time to each additional token is decoder's latency, where we assume a max decoded length specified below.
This model is an implementation of Whisper-Small found [here](https://github.com/huggingface/transformers/tree/v4.42.3/src/transformers/models/whisper).
This repository provides scripts to run Whisper-Small on Qualcomm® devices.
More details on model performance across various devices, can be found
[here](https://aihub.qualcomm.com/models/whisper_small).
### Model Details
- **Model Type:** Model_use_case.speech_recognition
- **Model Stats:**
- Model checkpoint: openai/whisper-small
- Input resolution: 80x3000 (30 seconds audio)
- Max decoded sequence length: 200 tokens
- Number of parameters (HfWhisperEncoder): 102M
- Model size (HfWhisperEncoder) (float): 391 MB
- Number of parameters (HfWhisperDecoder): 139M
- Model size (HfWhisperDecoder) (float): 533 MB
| Model | Precision | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Primary Compute Unit | Target Model
|---|---|---|---|---|---|---|---|---|
| HfWhisperEncoder | float | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | QNN_CONTEXT_BINARY | 424.037 ms | 1 - 10 MB | NPU | Use Export Script |
| HfWhisperEncoder | float | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | QNN_CONTEXT_BINARY | 314.877 ms | 0 - 18 MB | NPU | Use Export Script |
| HfWhisperEncoder | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | QNN_CONTEXT_BINARY | 132.256 ms | 8 - 10 MB | NPU | Use Export Script |
| HfWhisperEncoder | float | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | QNN_CONTEXT_BINARY | 154.294 ms | 1 - 10 MB | NPU | Use Export Script |
| HfWhisperEncoder | float | SA7255P ADP | Qualcomm® SA7255P | QNN_CONTEXT_BINARY | 424.037 ms | 1 - 10 MB | NPU | Use Export Script |
| HfWhisperEncoder | float | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | QNN_CONTEXT_BINARY | 136.109 ms | 1 - 3 MB | NPU | Use Export Script |
| HfWhisperEncoder | float | SA8295P ADP | Qualcomm® SA8295P | QNN_CONTEXT_BINARY | 241.55 ms | 0 - 17 MB | NPU | Use Export Script |
| HfWhisperEncoder | float | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | QNN_CONTEXT_BINARY | 138.032 ms | 1 - 3 MB | NPU | Use Export Script |
| HfWhisperEncoder | float | SA8775P ADP | Qualcomm® SA8775P | QNN_CONTEXT_BINARY | 154.294 ms | 1 - 10 MB | NPU | Use Export Script |
| HfWhisperEncoder | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | QNN_CONTEXT_BINARY | 134.534 ms | 1 - 3 MB | NPU | Use Export Script |
| HfWhisperEncoder | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | PRECOMPILED_QNN_ONNX | 138.173 ms | 0 - 258 MB | NPU | Use Export Script |
| HfWhisperEncoder | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | QNN_CONTEXT_BINARY | 102.365 ms | 0 - 19 MB | NPU | Use Export Script |
| HfWhisperEncoder | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | PRECOMPILED_QNN_ONNX | 105.447 ms | 128 - 147 MB | NPU | Use Export Script |
| HfWhisperEncoder | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | QNN_CONTEXT_BINARY | 86.361 ms | 0 - 14 MB | NPU | Use Export Script |
| HfWhisperEncoder | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | PRECOMPILED_QNN_ONNX | 89.991 ms | 129 - 143 MB | NPU | Use Export Script |
| HfWhisperEncoder | float | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN_CONTEXT_BINARY | 130.698 ms | 0 - 0 MB | NPU | Use Export Script |
| HfWhisperEncoder | float | Snapdragon X Elite CRD | Snapdragon® X Elite | PRECOMPILED_QNN_ONNX | 131.817 ms | 227 - 227 MB | NPU | Use Export Script |
| HfWhisperDecoder | float | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | QNN_CONTEXT_BINARY | 18.261 ms | 45 - 54 MB | NPU | Use Export Script |
| HfWhisperDecoder | float | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | QNN_CONTEXT_BINARY | 17.513 ms | 52 - 73 MB | NPU | Use Export Script |
| HfWhisperDecoder | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | QNN_CONTEXT_BINARY | 12.24 ms | 53 - 55 MB | NPU | Use Export Script |
| HfWhisperDecoder | float | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | QNN_CONTEXT_BINARY | 13.211 ms | 45 - 55 MB | NPU | Use Export Script |
| HfWhisperDecoder | float | SA7255P ADP | Qualcomm® SA7255P | QNN_CONTEXT_BINARY | 18.261 ms | 45 - 54 MB | NPU | Use Export Script |
| HfWhisperDecoder | float | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | QNN_CONTEXT_BINARY | 11.863 ms | 66 - 68 MB | NPU | Use Export Script |
| HfWhisperDecoder | float | SA8295P ADP | Qualcomm® SA8295P | QNN_CONTEXT_BINARY | 14.789 ms | 58 - 73 MB | NPU | Use Export Script |
| HfWhisperDecoder | float | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | QNN_CONTEXT_BINARY | 11.818 ms | 64 - 66 MB | NPU | Use Export Script |
| HfWhisperDecoder | float | SA8775P ADP | Qualcomm® SA8775P | QNN_CONTEXT_BINARY | 13.211 ms | 45 - 55 MB | NPU | Use Export Script |
| HfWhisperDecoder | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | QNN_CONTEXT_BINARY | 12.05 ms | 65 - 69 MB | NPU | Use Export Script |
| HfWhisperDecoder | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | PRECOMPILED_QNN_ONNX | 13.294 ms | 0 - 319 MB | NPU | Use Export Script |
| HfWhisperDecoder | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | QNN_CONTEXT_BINARY | 9.634 ms | 55 - 74 MB | NPU | Use Export Script |
| HfWhisperDecoder | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | PRECOMPILED_QNN_ONNX | 10.846 ms | 75 - 95 MB | NPU | Use Export Script |
| HfWhisperDecoder | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | QNN_CONTEXT_BINARY | 8.157 ms | 60 - 75 MB | NPU | Use Export Script |
| HfWhisperDecoder | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | PRECOMPILED_QNN_ONNX | 8.731 ms | 73 - 87 MB | NPU | Use Export Script |
| HfWhisperDecoder | float | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN_CONTEXT_BINARY | 10.432 ms | 60 - 60 MB | NPU | Use Export Script |
| HfWhisperDecoder | float | Snapdragon X Elite CRD | Snapdragon® X Elite | PRECOMPILED_QNN_ONNX | 10.429 ms | 286 - 286 MB | NPU | Use Export Script |
## Installation
Install the package via pip:
```bash
pip install "qai-hub-models[whisper-small]"
```
## Configure Qualcomm® AI Hub to run this model on a cloud-hosted device
Sign-in to [Qualcomm® AI Hub](https://app.aihub.qualcomm.com/) with your
Qualcomm® ID. Once signed in navigate to `Account -> Settings -> API Token`.
With this API token, you can configure your client to run models on the cloud
hosted devices.
```bash
qai-hub configure --api_token API_TOKEN
```
Navigate to [docs](https://app.aihub.qualcomm.com/docs/) for more information.
## Demo off target
The package contains a simple end-to-end demo that downloads pre-trained
weights and runs this model on a sample input.
```bash
python -m qai_hub_models.models.whisper_small.demo
```
The above demo runs a reference implementation of pre-processing, model
inference, and post-processing.
**NOTE**: To run this in a Jupyter Notebook or Google Colab-like
environment, add the following to your cell instead of the command above.
```
%run -m qai_hub_models.models.whisper_small.demo
```
### Run model on a cloud-hosted device
In addition to the demo, you can also run the model on a cloud-hosted Qualcomm®
device. This script does the following:
* Performance check on-device on a cloud-hosted device
* Downloads compiled assets that can be deployed on-device for Android.
* Accuracy check between PyTorch and on-device outputs.
```bash
python -m qai_hub_models.models.whisper_small.export
```
## How does this work?
This [export script](https://aihub.qualcomm.com/models/whisper_small/qai_hub_models/models/Whisper-Small/export.py)
leverages [Qualcomm® AI Hub](https://aihub.qualcomm.com/) to optimize, validate, and deploy this model
on-device. Let's go through each step below in detail:
Step 1: **Compile model for on-device deployment**
To compile a PyTorch model for on-device deployment, we first trace the model
in memory using `torch.jit.trace` and then call the `submit_compile_job` API.
```python
import torch
import qai_hub as hub
from qai_hub_models.models.whisper_small import Model
# Load the model
torch_model = Model.from_pretrained()
# Device
device = hub.Device("Samsung Galaxy S24")
# Trace model
input_shape = torch_model.get_input_spec()
sample_inputs = torch_model.sample_inputs()
pt_model = torch.jit.trace(torch_model, [torch.tensor(data[0]) for _, data in sample_inputs.items()])
# Compile model on a specific device
compile_job = hub.submit_compile_job(
model=pt_model,
device=device,
input_specs=torch_model.get_input_spec(),
)
# Get target model to run on-device
target_model = compile_job.get_target_model()
```
Step 2: **Performance profiling on cloud-hosted device**
After compiling the model in step 1, it can be profiled on-device using the
`target_model`. Note that this script runs the model on a device automatically
provisioned in the cloud. Once the job is submitted, you can navigate to the
provided job URL to view a variety of on-device performance metrics.
```python
profile_job = hub.submit_profile_job(
model=target_model,
device=device,
)
```
Step 3: **Verify on-device accuracy**
To verify the accuracy of the model on-device, you can run on-device inference
on sample input data on the same cloud-hosted device.
```python
input_data = torch_model.sample_inputs()
inference_job = hub.submit_inference_job(
model=target_model,
device=device,
inputs=input_data,
)
on_device_output = inference_job.download_output_data()
```
With the output of the model, you can compute metrics like PSNR or relative
error, or spot-check the output against the expected output.
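For instance, a hedged sketch of a maximum relative-error spot check (the variable names follow the snippets above; the output key `"output_0"` and the `1e-2` threshold are assumptions):

```python
import numpy as np

def max_relative_error(reference, test, eps=1e-6):
    """Largest elementwise |ref - test| / (|ref| + eps) between two arrays."""
    reference = np.asarray(reference, dtype=np.float64)
    test = np.asarray(test, dtype=np.float64)
    return float(np.max(np.abs(reference - test) / (np.abs(reference) + eps)))

# Hypothetical spot check (output key depends on your model's output spec):
# torch_output = torch_model(*[torch.tensor(d[0]) for _, d in input_data.items()])
# assert max_relative_error(torch_output.detach().numpy(),
#                           on_device_output["output_0"][0]) < 1e-2
```

A small maximum relative error indicates the on-device outputs track the PyTorch reference closely.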
**Note**: This on-device profiling and inference requires access to Qualcomm®
AI Hub. [Sign up for access](https://myaccount.qualcomm.com/signup).
## Deploying compiled model to Android
The models can be deployed using multiple runtimes:
- TensorFlow Lite (`.tflite` export): [This
tutorial](https://www.tensorflow.org/lite/android/quickstart) provides a
guide to deploy the .tflite model in an Android application.
- QNN (`.so` export ): This [sample
app](https://docs.qualcomm.com/bundle/publicresource/topics/80-63442-50/sample_app.html)
provides instructions on how to use the `.so` shared library in an Android application.
## View on Qualcomm® AI Hub
Get more details on Whisper-Small's performance across various devices [here](https://aihub.qualcomm.com/models/whisper_small).
Explore all available models on [Qualcomm® AI Hub](https://aihub.qualcomm.com/)
## License
* The license for the original implementation of Whisper-Small can be found
[here](https://github.com/huggingface/transformers/blob/v4.42.3/LICENSE).
* The license for the compiled assets for on-device deployment can be found [here](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/Qualcomm+AI+Hub+Proprietary+License.pdf)
## References
* [Robust Speech Recognition via Large-Scale Weak Supervision](https://cdn.openai.com/papers/whisper.pdf)
* [Source Model Implementation](https://github.com/huggingface/transformers/tree/v4.42.3/src/transformers/models/whisper)
## Community
* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
* For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com).
|
seraphimzzzz/657579
|
seraphimzzzz
| 2025-08-30T00:12:04Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-30T00:11:58Z |
[View on Civ Archive](https://civarchive.com/models/523954?modelVersionId=743819)
|
crystalline7/530362
|
crystalline7
| 2025-08-30T00:11:53Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-30T00:11:50Z |
[View on Civ Archive](https://civarchive.com/models/552705?modelVersionId=615241)
|
ultratopaz/820238
|
ultratopaz
| 2025-08-30T00:11:44Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-30T00:11:37Z |
[View on Civ Archive](https://civarchive.com/models/816163?modelVersionId=912669)
|
amethyst9/677160
|
amethyst9
| 2025-08-30T00:11:17Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-30T00:11:11Z |
[View on Civ Archive](https://civarchive.com/models/672901?modelVersionId=763859)
|
ultratopaz/495284
|
ultratopaz
| 2025-08-30T00:11:00Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-30T00:10:54Z |
[View on Civ Archive](https://civarchive.com/models/521873?modelVersionId=579822)
|
Sonic-man/blockassist-bc-poisonous_graceful_cow_1756510146
|
Sonic-man
| 2025-08-30T00:10:39Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"poisonous graceful cow",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T00:10:36Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- poisonous graceful cow
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
GroomerG/blockassist-bc-vicious_pawing_badger_1756511125
|
GroomerG
| 2025-08-30T00:10:33Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"vicious pawing badger",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T00:10:29Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- vicious pawing badger
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
crystalline7/588862
|
crystalline7
| 2025-08-30T00:09:17Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-30T00:09:11Z |
[View on Civ Archive](https://civarchive.com/models/601919?modelVersionId=673809)
|
JoelMah/blockassist-bc-unseen_bellowing_jackal_1756512480
|
JoelMah
| 2025-08-30T00:08:53Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"unseen bellowing jackal",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T00:08:50Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- unseen bellowing jackal
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mradermacher/Curio-1.1b-GGUF
|
mradermacher
| 2025-08-30T00:08:32Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:ClassiCC-Corpus/Curio-1.1b",
"base_model:quantized:ClassiCC-Corpus/Curio-1.1b",
"endpoints_compatible",
"region:us"
] | null | 2025-08-29T21:58:36Z |
---
base_model: ClassiCC-Corpus/Curio-1.1b
language:
- en
library_name: transformers
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/ClassiCC-Corpus/Curio-1.1b
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Curio-1.1b-GGUF).***
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Curio-1.1b-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Curio-1.1b-GGUF/resolve/main/Curio-1.1b.Q2_K.gguf) | Q2_K | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/Curio-1.1b-GGUF/resolve/main/Curio-1.1b.Q3_K_S.gguf) | Q3_K_S | 0.6 | |
| [GGUF](https://huggingface.co/mradermacher/Curio-1.1b-GGUF/resolve/main/Curio-1.1b.Q3_K_M.gguf) | Q3_K_M | 0.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Curio-1.1b-GGUF/resolve/main/Curio-1.1b.Q3_K_L.gguf) | Q3_K_L | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/Curio-1.1b-GGUF/resolve/main/Curio-1.1b.IQ4_XS.gguf) | IQ4_XS | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/Curio-1.1b-GGUF/resolve/main/Curio-1.1b.Q4_K_S.gguf) | Q4_K_S | 0.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Curio-1.1b-GGUF/resolve/main/Curio-1.1b.Q4_K_M.gguf) | Q4_K_M | 0.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Curio-1.1b-GGUF/resolve/main/Curio-1.1b.Q5_K_S.gguf) | Q5_K_S | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/Curio-1.1b-GGUF/resolve/main/Curio-1.1b.Q5_K_M.gguf) | Q5_K_M | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/Curio-1.1b-GGUF/resolve/main/Curio-1.1b.Q6_K.gguf) | Q6_K | 1.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Curio-1.1b-GGUF/resolve/main/Curio-1.1b.Q8_0.gguf) | Q8_0 | 1.3 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Curio-1.1b-GGUF/resolve/main/Curio-1.1b.f16.gguf) | f16 | 2.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
liukevin666/blockassist-bc-yawning_striped_cassowary_1756512422
|
liukevin666
| 2025-08-30T00:08:16Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"yawning striped cassowary",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T00:07:57Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- yawning striped cassowary
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mradermacher/mistsoul-v1-GGUF
|
mradermacher
| 2025-08-30T00:08:02Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"unsloth",
"trl",
"sft",
"en",
"base_model:Wing12angelic/mistsoul-v1",
"base_model:quantized:Wing12angelic/mistsoul-v1",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-29T22:08:20Z |
---
base_model: Wing12angelic/mistsoul-v1
language:
- en
library_name: transformers
license: mit
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- unsloth
- trl
- sft
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/Wing12angelic/mistsoul-v1
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#mistsoul-v1-GGUF).***
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
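For the multi-part case: split GGUF parts are plain byte ranges, so joining them is simple concatenation (the equivalent of `cat model.gguf.part1of2 model.gguf.part2of2 > model.gguf`). A minimal sketch in Python — the part file names here are hypothetical, and this repo's quants are single files:

```python
import shutil

def join_gguf_parts(parts: list[str], out_path: str) -> None:
    """Concatenate split GGUF part files, in the given order, into one file."""
    with open(out_path, "wb") as out:
        for part in parts:
            with open(part, "rb") as src:
                shutil.copyfileobj(src, out)

# Hypothetical usage for a two-part quant:
# join_gguf_parts(["model.gguf.part1of2", "model.gguf.part2of2"], "model.gguf")
```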
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/mistsoul-v1-GGUF/resolve/main/mistsoul-v1.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/mistsoul-v1-GGUF/resolve/main/mistsoul-v1.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/mistsoul-v1-GGUF/resolve/main/mistsoul-v1.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/mistsoul-v1-GGUF/resolve/main/mistsoul-v1.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/mistsoul-v1-GGUF/resolve/main/mistsoul-v1.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/mistsoul-v1-GGUF/resolve/main/mistsoul-v1.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/mistsoul-v1-GGUF/resolve/main/mistsoul-v1.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/mistsoul-v1-GGUF/resolve/main/mistsoul-v1.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/mistsoul-v1-GGUF/resolve/main/mistsoul-v1.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/mistsoul-v1-GGUF/resolve/main/mistsoul-v1.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/mistsoul-v1-GGUF/resolve/main/mistsoul-v1.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/mistsoul-v1-GGUF/resolve/main/mistsoul-v1.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
crystalline7/587781
|
crystalline7
| 2025-08-30T00:07:42Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-30T00:07:35Z |
[View on Civ Archive](https://civarchive.com/models/601896?modelVersionId=672743)
|
crystalline7/515847
|
crystalline7
| 2025-08-30T00:07:08Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-30T00:07:08Z |
[View on Civ Archive](https://civarchive.com/models/529015?modelVersionId=600808)
|
crystalline7/566969
|
crystalline7
| 2025-08-30T00:06:54Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-30T00:06:49Z |
[View on Civ Archive](https://civarchive.com/models/584094?modelVersionId=651863)
|
motza0025/blockassist-bc-hairy_flightless_dinosaur_1756510903
|
motza0025
| 2025-08-30T00:06:52Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"hairy flightless dinosaur",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T00:06:21Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- hairy flightless dinosaur
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ultratopaz/847069
|
ultratopaz
| 2025-08-30T00:06:36Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-30T00:06:36Z |
[View on Civ Archive](https://civarchive.com/models/832482?modelVersionId=931528)
|
amethyst9/612822
|
amethyst9
| 2025-08-30T00:06:27Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-30T00:06:21Z |
[View on Civ Archive](https://civarchive.com/models/528317?modelVersionId=698049)
|
ultratopaz/1094487
|
ultratopaz
| 2025-08-30T00:05:43Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-30T00:05:28Z |
[View on Civ Archive](https://civarchive.com/models/1048934?modelVersionId=1177111)
|
Woutermans/zeta-3b-sft-lora
|
Woutermans
| 2025-08-30T00:05:42Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"base_model:unsloth/Qwen2.5-Coder-3B",
"base_model:finetune:unsloth/Qwen2.5-Coder-3B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-29T20:51:41Z |
---
base_model: unsloth/Qwen2.5-Coder-3B
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Woutermans
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Qwen2.5-Coder-3B
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
vadigr123/civitai_lora
|
vadigr123
| 2025-08-30T00:05:30Z | 0 | 4 | null |
[
"art",
"region:us"
] | null | 2024-08-09T14:32:26Z |
---
tags:
- art
---
**LoRA from [vadigr123_](https://civitai.com/user/vadigr123_)**
|
ultratopaz/1445871
|
ultratopaz
| 2025-08-30T00:04:53Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-30T00:04:53Z |
[View on Civ Archive](https://civarchive.com/models/1368528?modelVersionId=1546119)
|
ultratopaz/636326
|
ultratopaz
| 2025-08-30T00:03:07Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-30T00:03:01Z |
[View on Civ Archive](https://civarchive.com/models/644196?modelVersionId=721735)
|
amethyst9/463232
|
amethyst9
| 2025-08-30T00:02:28Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-30T00:02:22Z |
[View on Civ Archive](https://civarchive.com/models/491936?modelVersionId=547002)
|
mestersop3/blockassist-bc-cunning_tangled_robin_1756512105
|
mestersop3
| 2025-08-30T00:02:28Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"cunning tangled robin",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T00:02:12Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- cunning tangled robin
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
bah63843/blockassist-bc-plump_fast_antelope_1756512070
|
bah63843
| 2025-08-30T00:01:58Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"plump fast antelope",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T00:01:50Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- plump fast antelope
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
qualcomm/VIT
|
qualcomm
| 2025-08-30T00:01:49Z | 166 | 16 |
pytorch
|
[
"pytorch",
"tflite",
"backbone",
"android",
"image-classification",
"arxiv:2010.11929",
"license:other",
"region:us"
] |
image-classification
| 2024-02-25T23:09:22Z |
---
library_name: pytorch
license: other
tags:
- backbone
- android
pipeline_tag: image-classification
---

# VIT: Optimized for Mobile Deployment
## Imagenet classifier and general purpose backbone
VIT is a machine learning model that can classify images from the Imagenet dataset. It can also be used as a backbone in building more complex models for specific use cases.
This model is an implementation of VIT found [here](https://github.com/pytorch/vision/blob/main/torchvision/models/vision_transformer.py).
This repository provides scripts to run VIT on Qualcomm® devices.
More details on model performance across various devices can be found
[here](https://aihub.qualcomm.com/models/vit).
### Model Details
- **Model Type:** Model_use_case.image_classification
- **Model Stats:**
- Model checkpoint: Imagenet
- Input resolution: 224x224
- Number of parameters: 86.6M
- Model size (float): 330 MB
- Model size (w8a16): 86.2 MB
- Model size (w8a8): 83.2 MB
| Model | Precision | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Primary Compute Unit | Target Model
|---|---|---|---|---|---|---|---|---|
| VIT | float | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | TFLITE | 42.876 ms | 0 - 306 MB | NPU | [VIT.tflite](https://huggingface.co/qualcomm/VIT/blob/main/VIT.tflite) |
| VIT | float | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | QNN_DLC | 45.209 ms | 1 - 321 MB | NPU | [VIT.dlc](https://huggingface.co/qualcomm/VIT/blob/main/VIT.dlc) |
| VIT | float | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | TFLITE | 17.073 ms | 0 - 299 MB | NPU | [VIT.tflite](https://huggingface.co/qualcomm/VIT/blob/main/VIT.tflite) |
| VIT | float | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | QNN_DLC | 21.417 ms | 0 - 328 MB | NPU | [VIT.dlc](https://huggingface.co/qualcomm/VIT/blob/main/VIT.dlc) |
| VIT | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | TFLITE | 12.48 ms | 0 - 23 MB | NPU | [VIT.tflite](https://huggingface.co/qualcomm/VIT/blob/main/VIT.tflite) |
| VIT | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | QNN_DLC | 13.747 ms | 0 - 31 MB | NPU | [VIT.dlc](https://huggingface.co/qualcomm/VIT/blob/main/VIT.dlc) |
| VIT | float | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | TFLITE | 15.25 ms | 0 - 306 MB | NPU | [VIT.tflite](https://huggingface.co/qualcomm/VIT/blob/main/VIT.tflite) |
| VIT | float | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | QNN_DLC | 16.628 ms | 1 - 326 MB | NPU | [VIT.dlc](https://huggingface.co/qualcomm/VIT/blob/main/VIT.dlc) |
| VIT | float | SA7255P ADP | Qualcomm® SA7255P | TFLITE | 42.876 ms | 0 - 306 MB | NPU | [VIT.tflite](https://huggingface.co/qualcomm/VIT/blob/main/VIT.tflite) |
| VIT | float | SA7255P ADP | Qualcomm® SA7255P | QNN_DLC | 45.209 ms | 1 - 321 MB | NPU | [VIT.dlc](https://huggingface.co/qualcomm/VIT/blob/main/VIT.dlc) |
| VIT | float | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | TFLITE | 12.452 ms | 0 - 16 MB | NPU | [VIT.tflite](https://huggingface.co/qualcomm/VIT/blob/main/VIT.tflite) |
| VIT | float | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | QNN_DLC | 13.798 ms | 0 - 29 MB | NPU | [VIT.dlc](https://huggingface.co/qualcomm/VIT/blob/main/VIT.dlc) |
| VIT | float | SA8295P ADP | Qualcomm® SA8295P | TFLITE | 19.267 ms | 0 - 290 MB | NPU | [VIT.tflite](https://huggingface.co/qualcomm/VIT/blob/main/VIT.tflite) |
| VIT | float | SA8295P ADP | Qualcomm® SA8295P | QNN_DLC | 19.74 ms | 1 - 320 MB | NPU | [VIT.dlc](https://huggingface.co/qualcomm/VIT/blob/main/VIT.dlc) |
| VIT | float | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | TFLITE | 12.492 ms | 0 - 14 MB | NPU | [VIT.tflite](https://huggingface.co/qualcomm/VIT/blob/main/VIT.tflite) |
| VIT | float | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | QNN_DLC | 13.759 ms | 0 - 29 MB | NPU | [VIT.dlc](https://huggingface.co/qualcomm/VIT/blob/main/VIT.dlc) |
| VIT | float | SA8775P ADP | Qualcomm® SA8775P | TFLITE | 15.25 ms | 0 - 306 MB | NPU | [VIT.tflite](https://huggingface.co/qualcomm/VIT/blob/main/VIT.tflite) |
| VIT | float | SA8775P ADP | Qualcomm® SA8775P | QNN_DLC | 16.628 ms | 1 - 326 MB | NPU | [VIT.dlc](https://huggingface.co/qualcomm/VIT/blob/main/VIT.dlc) |
| VIT | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | TFLITE | 12.462 ms | 0 - 20 MB | NPU | [VIT.tflite](https://huggingface.co/qualcomm/VIT/blob/main/VIT.tflite) |
| VIT | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | QNN_DLC | 13.818 ms | 0 - 23 MB | NPU | [VIT.dlc](https://huggingface.co/qualcomm/VIT/blob/main/VIT.dlc) |
| VIT | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | ONNX | 13.676 ms | 0 - 214 MB | NPU | [VIT.onnx.zip](https://huggingface.co/qualcomm/VIT/blob/main/VIT.onnx.zip) |
| VIT | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | TFLITE | 8.515 ms | 0 - 311 MB | NPU | [VIT.tflite](https://huggingface.co/qualcomm/VIT/blob/main/VIT.tflite) |
| VIT | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | QNN_DLC | 9.513 ms | 1 - 323 MB | NPU | [VIT.dlc](https://huggingface.co/qualcomm/VIT/blob/main/VIT.dlc) |
| VIT | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | ONNX | 9.546 ms | 0 - 328 MB | NPU | [VIT.onnx.zip](https://huggingface.co/qualcomm/VIT/blob/main/VIT.onnx.zip) |
| VIT | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | TFLITE | 7.282 ms | 0 - 309 MB | NPU | [VIT.tflite](https://huggingface.co/qualcomm/VIT/blob/main/VIT.tflite) |
| VIT | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | QNN_DLC | 8.012 ms | 1 - 313 MB | NPU | [VIT.dlc](https://huggingface.co/qualcomm/VIT/blob/main/VIT.dlc) |
| VIT | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | ONNX | 7.604 ms | 1 - 301 MB | NPU | [VIT.onnx.zip](https://huggingface.co/qualcomm/VIT/blob/main/VIT.onnx.zip) |
| VIT | float | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN_DLC | 14.604 ms | 1069 - 1069 MB | NPU | [VIT.dlc](https://huggingface.co/qualcomm/VIT/blob/main/VIT.dlc) |
| VIT | float | Snapdragon X Elite CRD | Snapdragon® X Elite | ONNX | 14.928 ms | 172 - 172 MB | NPU | [VIT.onnx.zip](https://huggingface.co/qualcomm/VIT/blob/main/VIT.onnx.zip) |
| VIT | w8a16 | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | QNN_DLC | 65.343 ms | 0 - 197 MB | NPU | [VIT.dlc](https://huggingface.co/qualcomm/VIT/blob/main/VIT_w8a16.dlc) |
| VIT | w8a16 | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | QNN_DLC | 51.421 ms | 0 - 223 MB | NPU | [VIT.dlc](https://huggingface.co/qualcomm/VIT/blob/main/VIT_w8a16.dlc) |
| VIT | w8a16 | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | QNN_DLC | 26.524 ms | 0 - 47 MB | NPU | [VIT.dlc](https://huggingface.co/qualcomm/VIT/blob/main/VIT_w8a16.dlc) |
| VIT | w8a16 | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | QNN_DLC | 23.101 ms | 0 - 196 MB | NPU | [VIT.dlc](https://huggingface.co/qualcomm/VIT/blob/main/VIT_w8a16.dlc) |
| VIT | w8a16 | RB3 Gen 2 (Proxy) | Qualcomm® QCS6490 (Proxy) | QNN_DLC | 196.89 ms | 0 - 1519 MB | NPU | [VIT.dlc](https://huggingface.co/qualcomm/VIT/blob/main/VIT_w8a16.dlc) |
| VIT | w8a16 | SA7255P ADP | Qualcomm® SA7255P | QNN_DLC | 65.343 ms | 0 - 197 MB | NPU | [VIT.dlc](https://huggingface.co/qualcomm/VIT/blob/main/VIT_w8a16.dlc) |
| VIT | w8a16 | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | QNN_DLC | 26.113 ms | 0 - 48 MB | NPU | [VIT.dlc](https://huggingface.co/qualcomm/VIT/blob/main/VIT_w8a16.dlc) |
| VIT | w8a16 | SA8295P ADP | Qualcomm® SA8295P | QNN_DLC | 36.981 ms | 0 - 215 MB | NPU | [VIT.dlc](https://huggingface.co/qualcomm/VIT/blob/main/VIT_w8a16.dlc) |
| VIT | w8a16 | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | QNN_DLC | 26.191 ms | 0 - 47 MB | NPU | [VIT.dlc](https://huggingface.co/qualcomm/VIT/blob/main/VIT_w8a16.dlc) |
| VIT | w8a16 | SA8775P ADP | Qualcomm® SA8775P | QNN_DLC | 23.101 ms | 0 - 196 MB | NPU | [VIT.dlc](https://huggingface.co/qualcomm/VIT/blob/main/VIT_w8a16.dlc) |
| VIT | w8a16 | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | QNN_DLC | 25.99 ms | 0 - 48 MB | NPU | [VIT.dlc](https://huggingface.co/qualcomm/VIT/blob/main/VIT_w8a16.dlc) |
| VIT | w8a16 | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | ONNX | 172.048 ms | 652 - 874 MB | NPU | [VIT.onnx.zip](https://huggingface.co/qualcomm/VIT/blob/main/VIT_w8a16.onnx.zip) |
| VIT | w8a16 | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | QNN_DLC | 19.79 ms | 0 - 206 MB | NPU | [VIT.dlc](https://huggingface.co/qualcomm/VIT/blob/main/VIT_w8a16.dlc) |
| VIT | w8a16 | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | ONNX | 126.14 ms | 680 - 845 MB | NPU | [VIT.onnx.zip](https://huggingface.co/qualcomm/VIT/blob/main/VIT_w8a16.onnx.zip) |
| VIT | w8a16 | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | QNN_DLC | 16.601 ms | 0 - 189 MB | NPU | [VIT.dlc](https://huggingface.co/qualcomm/VIT/blob/main/VIT_w8a16.dlc) |
| VIT | w8a16 | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | ONNX | 118.085 ms | 680 - 814 MB | NPU | [VIT.onnx.zip](https://huggingface.co/qualcomm/VIT/blob/main/VIT_w8a16.onnx.zip) |
| VIT | w8a16 | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN_DLC | 25.829 ms | 317 - 317 MB | NPU | [VIT.dlc](https://huggingface.co/qualcomm/VIT/blob/main/VIT_w8a16.dlc) |
| VIT | w8a16 | Snapdragon X Elite CRD | Snapdragon® X Elite | ONNX | 174.288 ms | 922 - 922 MB | NPU | [VIT.onnx.zip](https://huggingface.co/qualcomm/VIT/blob/main/VIT_w8a16.onnx.zip) |
| VIT | w8a8 | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | TFLITE | 15.928 ms | 0 - 47 MB | NPU | [VIT.tflite](https://huggingface.co/qualcomm/VIT/blob/main/VIT_w8a8.tflite) |
| VIT | w8a8 | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | TFLITE | 8.311 ms | 0 - 55 MB | NPU | [VIT.tflite](https://huggingface.co/qualcomm/VIT/blob/main/VIT_w8a8.tflite) |
| VIT | w8a8 | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | TFLITE | 7.604 ms | 0 - 91 MB | NPU | [VIT.tflite](https://huggingface.co/qualcomm/VIT/blob/main/VIT_w8a8.tflite) |
| VIT | w8a8 | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | TFLITE | 7.986 ms | 0 - 47 MB | NPU | [VIT.tflite](https://huggingface.co/qualcomm/VIT/blob/main/VIT_w8a8.tflite) |
| VIT | w8a8 | RB3 Gen 2 (Proxy) | Qualcomm® QCS6490 (Proxy) | TFLITE | 89.903 ms | 2 - 44 MB | NPU | [VIT.tflite](https://huggingface.co/qualcomm/VIT/blob/main/VIT_w8a8.tflite) |
| VIT | w8a8 | SA7255P ADP | Qualcomm® SA7255P | TFLITE | 15.928 ms | 0 - 47 MB | NPU | [VIT.tflite](https://huggingface.co/qualcomm/VIT/blob/main/VIT_w8a8.tflite) |
| VIT | w8a8 | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | TFLITE | 7.616 ms | 0 - 20 MB | NPU | [VIT.tflite](https://huggingface.co/qualcomm/VIT/blob/main/VIT_w8a8.tflite) |
| VIT | w8a8 | SA8295P ADP | Qualcomm® SA8295P | TFLITE | 9.894 ms | 0 - 49 MB | NPU | [VIT.tflite](https://huggingface.co/qualcomm/VIT/blob/main/VIT_w8a8.tflite) |
| VIT | w8a8 | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | TFLITE | 7.628 ms | 0 - 20 MB | NPU | [VIT.tflite](https://huggingface.co/qualcomm/VIT/blob/main/VIT_w8a8.tflite) |
| VIT | w8a8 | SA8775P ADP | Qualcomm® SA8775P | TFLITE | 7.986 ms | 0 - 47 MB | NPU | [VIT.tflite](https://huggingface.co/qualcomm/VIT/blob/main/VIT_w8a8.tflite) |
| VIT | w8a8 | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | TFLITE | 7.63 ms | 0 - 20 MB | NPU | [VIT.tflite](https://huggingface.co/qualcomm/VIT/blob/main/VIT_w8a8.tflite) |
| VIT | w8a8 | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | ONNX | 169.472 ms | 666 - 893 MB | NPU | [VIT.onnx.zip](https://huggingface.co/qualcomm/VIT/blob/main/VIT_w8a8.onnx.zip) |
| VIT | w8a8 | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | TFLITE | 5.393 ms | 0 - 52 MB | NPU | [VIT.tflite](https://huggingface.co/qualcomm/VIT/blob/main/VIT_w8a8.tflite) |
| VIT | w8a8 | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | ONNX | 128.241 ms | 671 - 813 MB | NPU | [VIT.onnx.zip](https://huggingface.co/qualcomm/VIT/blob/main/VIT_w8a8.onnx.zip) |
| VIT | w8a8 | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | TFLITE | 4.991 ms | 0 - 56 MB | NPU | [VIT.tflite](https://huggingface.co/qualcomm/VIT/blob/main/VIT_w8a8.tflite) |
| VIT | w8a8 | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | ONNX | 117.12 ms | 674 - 795 MB | NPU | [VIT.onnx.zip](https://huggingface.co/qualcomm/VIT/blob/main/VIT_w8a8.onnx.zip) |
| VIT | w8a8 | Snapdragon X Elite CRD | Snapdragon® X Elite | ONNX | 182.806 ms | 921 - 921 MB | NPU | [VIT.onnx.zip](https://huggingface.co/qualcomm/VIT/blob/main/VIT_w8a8.onnx.zip) |
## Installation
Install the package via pip:
```bash
pip install qai-hub-models
```
## Configure Qualcomm® AI Hub to run this model on a cloud-hosted device
Sign in to [Qualcomm® AI Hub](https://app.aihub.qualcomm.com/) with your
Qualcomm® ID. Once signed in, navigate to `Account -> Settings -> API Token`.
With this API token, you can configure your client to run models on the cloud
hosted devices.
```bash
qai-hub configure --api_token API_TOKEN
```
Navigate to [docs](https://app.aihub.qualcomm.com/docs/) for more information.
## Demo off target
The package contains a simple end-to-end demo that downloads pre-trained
weights and runs this model on a sample input.
```bash
python -m qai_hub_models.models.vit.demo
```
The above demo runs a reference implementation of pre-processing, model
inference, and post-processing.
**NOTE**: If you want to run this in a Jupyter Notebook or Google Colab-like
environment, please add the following to your cell (instead of the above).
```
%run -m qai_hub_models.models.vit.demo
```
### Run model on a cloud-hosted device
In addition to the demo, you can also run the model on a cloud-hosted Qualcomm®
device. This script does the following:
* Performance check on-device on a cloud-hosted device
* Downloads compiled assets that can be deployed on-device for Android.
* Accuracy check between PyTorch and on-device outputs.
```bash
python -m qai_hub_models.models.vit.export
```
## How does this work?
This [export script](https://aihub.qualcomm.com/models/vit/qai_hub_models/models/VIT/export.py)
leverages [Qualcomm® AI Hub](https://aihub.qualcomm.com/) to optimize, validate, and deploy this model
on-device. Let's go through each step below in detail:
Step 1: **Compile model for on-device deployment**
To compile a PyTorch model for on-device deployment, we first trace the model
in memory using `jit.trace` and then call the `submit_compile_job` API.
```python
import torch
import qai_hub as hub
from qai_hub_models.models.vit import Model
# Load the model
torch_model = Model.from_pretrained()
# Device
device = hub.Device("Samsung Galaxy S24")
# Trace model
input_shape = torch_model.get_input_spec()
sample_inputs = torch_model.sample_inputs()
pt_model = torch.jit.trace(torch_model, [torch.tensor(data[0]) for _, data in sample_inputs.items()])
# Compile model on a specific device
compile_job = hub.submit_compile_job(
model=pt_model,
device=device,
input_specs=torch_model.get_input_spec(),
)
# Get target model to run on-device
target_model = compile_job.get_target_model()
```
Step 2: **Performance profiling on cloud-hosted device**
After compiling the model in step 1, it can be profiled on-device using the
`target_model`. Note that this script runs the model on a device automatically
provisioned in the cloud. Once the job is submitted, you can navigate to a
provided job URL to view a variety of on-device performance metrics.
```python
profile_job = hub.submit_profile_job(
model=target_model,
device=device,
)
```
Step 3: **Verify on-device accuracy**
To verify the accuracy of the model on-device, you can run on-device inference
on sample input data on the same cloud-hosted device.
```python
input_data = torch_model.sample_inputs()
inference_job = hub.submit_inference_job(
model=target_model,
device=device,
inputs=input_data,
)
on_device_output = inference_job.download_output_data()
```
With the output of the model, you can compute metrics like PSNR or relative
error, or spot-check the output against the expected output.
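A minimal pure-Python sketch of such a check — here `ref` and `dev` are hypothetical stand-ins for the flattened PyTorch and on-device outputs, and the export script's own accuracy metrics may differ:

```python
import math

def psnr(reference: list[float], test: list[float], peak: float = 1.0) -> float:
    """Peak signal-to-noise ratio in dB between two equal-length outputs."""
    mse = sum((r - t) ** 2 for r, t in zip(reference, test)) / len(reference)
    if mse == 0:
        return float("inf")
    return 10 * math.log10(peak ** 2 / mse)

def max_relative_error(reference: list[float], test: list[float],
                       eps: float = 1e-12) -> float:
    """Largest element-wise relative error against the reference."""
    return max(abs(r - t) / (abs(r) + eps) for r, t in zip(reference, test))

# Hypothetical flattened logits from the PyTorch model and the device run:
ref = [0.10, 0.80, 0.05, 0.05]
dev = [0.11, 0.79, 0.05, 0.05]
print(f"PSNR: {psnr(ref, dev):.1f} dB, max rel err: {max_relative_error(ref, dev):.3f}")
```

High PSNR (tens of dB) and small relative error indicate the compiled model matches the PyTorch reference.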
**Note**: This on-device profiling and inference requires access to Qualcomm®
AI Hub. [Sign up for access](https://myaccount.qualcomm.com/signup).
## Run demo on a cloud-hosted device
You can also run the demo on-device.
```bash
python -m qai_hub_models.models.vit.demo --eval-mode on-device
```
**NOTE**: If you want to run this in a Jupyter Notebook or Google Colab-like
environment, please add the following to your cell (instead of the above).
```
%run -m qai_hub_models.models.vit.demo -- --eval-mode on-device
```
## Deploying compiled model to Android
The models can be deployed using multiple runtimes:
- TensorFlow Lite (`.tflite` export): [This
tutorial](https://www.tensorflow.org/lite/android/quickstart) provides a
guide to deploy the .tflite model in an Android application.
- QNN (`.so` export): This [sample
app](https://docs.qualcomm.com/bundle/publicresource/topics/80-63442-50/sample_app.html)
provides instructions on how to use the `.so` shared library in an Android application.
## View on Qualcomm® AI Hub
Get more details on VIT's performance across various devices [here](https://aihub.qualcomm.com/models/vit).
Explore all available models on [Qualcomm® AI Hub](https://aihub.qualcomm.com/)
## License
* The license for the original implementation of VIT can be found
[here](https://github.com/pytorch/vision/blob/main/LICENSE).
* The license for the compiled assets for on-device deployment can be found [here](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/Qualcomm+AI+Hub+Proprietary+License.pdf)
## References
* [An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://arxiv.org/abs/2010.11929)
* [Source Model Implementation](https://github.com/pytorch/vision/blob/main/torchvision/models/vision_transformer.py)
## Community
* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
* For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com).
|
crystalline7/1320905
|
crystalline7
| 2025-08-30T00:01:35Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-30T00:01:19Z |
[View on Civ Archive](https://civarchive.com/models/554770?modelVersionId=1419090)
|
amethyst9/544064
|
amethyst9
| 2025-08-30T00:01:08Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-30T00:01:03Z |
[View on Civ Archive](https://civarchive.com/models/561116?modelVersionId=629375)
|
crystalline7/931078
|
crystalline7
| 2025-08-30T00:00:55Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-30T00:00:49Z |
[View on Civ Archive](https://civarchive.com/models/915095?modelVersionId=1024253)
|
sekirr/blockassist-bc-masked_tenacious_whale_1756512005
|
sekirr
| 2025-08-30T00:00:45Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"masked tenacious whale",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T00:00:41Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- masked tenacious whale
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
crystalline7/846485
|
crystalline7
| 2025-08-30T00:00:42Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-30T00:00:37Z |
[View on Civ Archive](https://civarchive.com/models/544493?modelVersionId=939176)
|
qualcomm/Video-MAE
|
qualcomm
| 2025-08-30T00:00:36Z | 36 | 1 |
pytorch
|
[
"pytorch",
"tflite",
"backbone",
"android",
"video-classification",
"arxiv:2203.12602",
"license:other",
"region:us"
] |
video-classification
| 2025-03-14T02:13:17Z |
---
library_name: pytorch
license: other
tags:
- backbone
- android
pipeline_tag: video-classification
---

# Video-MAE: Optimized for Mobile Deployment
## Sports and human action recognition in videos
Video-MAE (Masked Autoencoder) is a video classification network that uses the ViT (Vision Transformer) backbone.
This model is an implementation of Video-MAE found [here](https://github.com/MCG-NJU/VideoMAE).
This repository provides scripts to run Video-MAE on Qualcomm® devices.
More details on model performance across various devices can be found
[here](https://aihub.qualcomm.com/models/video_mae).
### Model Details
- **Model Type:** Model_use_case.video_classification
- **Model Stats:**
- Model checkpoint: Kinectics-400
- Input resolution: 224x224
- Number of parameters: 87.7M
- Model size (float): 335 MB
| Model | Precision | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Primary Compute Unit | Target Model
|---|---|---|---|---|---|---|---|---|
| Video-MAE | float | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | TFLITE | 442.754 ms | 0 - 522 MB | NPU | [Video-MAE.tflite](https://huggingface.co/qualcomm/Video-MAE/blob/main/Video-MAE.tflite) |
| Video-MAE | float | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | QNN_DLC | 1205.501 ms | 0 - 549 MB | NPU | [Video-MAE.dlc](https://huggingface.co/qualcomm/Video-MAE/blob/main/Video-MAE.dlc) |
| Video-MAE | float | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | TFLITE | 219.79 ms | 3 - 486 MB | NPU | [Video-MAE.tflite](https://huggingface.co/qualcomm/Video-MAE/blob/main/Video-MAE.tflite) |
| Video-MAE | float | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | QNN_DLC | 1403.659 ms | 9 - 431 MB | NPU | [Video-MAE.dlc](https://huggingface.co/qualcomm/Video-MAE/blob/main/Video-MAE.dlc) |
| Video-MAE | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | TFLITE | 149.154 ms | 0 - 37 MB | NPU | [Video-MAE.tflite](https://huggingface.co/qualcomm/Video-MAE/blob/main/Video-MAE.tflite) |
| Video-MAE | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | QNN_DLC | 466.225 ms | 12 - 53 MB | NPU | [Video-MAE.dlc](https://huggingface.co/qualcomm/Video-MAE/blob/main/Video-MAE.dlc) |
| Video-MAE | float | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | TFLITE | 171.607 ms | 0 - 522 MB | NPU | [Video-MAE.tflite](https://huggingface.co/qualcomm/Video-MAE/blob/main/Video-MAE.tflite) |
| Video-MAE | float | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | QNN_DLC | 513.342 ms | 3 - 456 MB | NPU | [Video-MAE.dlc](https://huggingface.co/qualcomm/Video-MAE/blob/main/Video-MAE.dlc) |
| Video-MAE | float | SA7255P ADP | Qualcomm® SA7255P | TFLITE | 442.754 ms | 0 - 522 MB | NPU | [Video-MAE.tflite](https://huggingface.co/qualcomm/Video-MAE/blob/main/Video-MAE.tflite) |
| Video-MAE | float | SA7255P ADP | Qualcomm® SA7255P | QNN_DLC | 1205.501 ms | 0 - 549 MB | NPU | [Video-MAE.dlc](https://huggingface.co/qualcomm/Video-MAE/blob/main/Video-MAE.dlc) |
| Video-MAE | float | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | TFLITE | 149.07 ms | 0 - 36 MB | NPU | [Video-MAE.tflite](https://huggingface.co/qualcomm/Video-MAE/blob/main/Video-MAE.tflite) |
| Video-MAE | float | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | QNN_DLC | 468.997 ms | 1 - 34 MB | NPU | [Video-MAE.dlc](https://huggingface.co/qualcomm/Video-MAE/blob/main/Video-MAE.dlc) |
| Video-MAE | float | SA8295P ADP | Qualcomm® SA8295P | TFLITE | 239.774 ms | 0 - 480 MB | NPU | [Video-MAE.tflite](https://huggingface.co/qualcomm/Video-MAE/blob/main/Video-MAE.tflite) |
| Video-MAE | float | SA8295P ADP | Qualcomm® SA8295P | QNN_DLC | 741.136 ms | 0 - 417 MB | NPU | [Video-MAE.dlc](https://huggingface.co/qualcomm/Video-MAE/blob/main/Video-MAE.dlc) |
| Video-MAE | float | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | TFLITE | 148.626 ms | 0 - 36 MB | NPU | [Video-MAE.tflite](https://huggingface.co/qualcomm/Video-MAE/blob/main/Video-MAE.tflite) |
| Video-MAE | float | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | QNN_DLC | 469.827 ms | 9 - 43 MB | NPU | [Video-MAE.dlc](https://huggingface.co/qualcomm/Video-MAE/blob/main/Video-MAE.dlc) |
| Video-MAE | float | SA8775P ADP | Qualcomm® SA8775P | TFLITE | 171.607 ms | 0 - 522 MB | NPU | [Video-MAE.tflite](https://huggingface.co/qualcomm/Video-MAE/blob/main/Video-MAE.tflite) |
| Video-MAE | float | SA8775P ADP | Qualcomm® SA8775P | QNN_DLC | 513.342 ms | 3 - 456 MB | NPU | [Video-MAE.dlc](https://huggingface.co/qualcomm/Video-MAE/blob/main/Video-MAE.dlc) |
| Video-MAE | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | TFLITE | 148.891 ms | 0 - 44 MB | NPU | [Video-MAE.tflite](https://huggingface.co/qualcomm/Video-MAE/blob/main/Video-MAE.tflite) |
| Video-MAE | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | QNN_DLC | 469.916 ms | 9 - 48 MB | NPU | [Video-MAE.dlc](https://huggingface.co/qualcomm/Video-MAE/blob/main/Video-MAE.dlc) |
| Video-MAE | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | ONNX | 573.615 ms | 0 - 243 MB | NPU | [Video-MAE.onnx.zip](https://huggingface.co/qualcomm/Video-MAE/blob/main/Video-MAE.onnx.zip) |
| Video-MAE | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | TFLITE | 109.661 ms | 46 - 568 MB | NPU | [Video-MAE.tflite](https://huggingface.co/qualcomm/Video-MAE/blob/main/Video-MAE.tflite) |
| Video-MAE | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | QNN_DLC | 373.595 ms | 9 - 567 MB | NPU | [Video-MAE.dlc](https://huggingface.co/qualcomm/Video-MAE/blob/main/Video-MAE.dlc) |
| Video-MAE | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | ONNX | 389.775 ms | 7 - 555 MB | NPU | [Video-MAE.onnx.zip](https://huggingface.co/qualcomm/Video-MAE/blob/main/Video-MAE.onnx.zip) |
| Video-MAE | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | QNN_DLC | 336.2 ms | 9 - 497 MB | NPU | [Video-MAE.dlc](https://huggingface.co/qualcomm/Video-MAE/blob/main/Video-MAE.dlc) |
| Video-MAE | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | ONNX | 587.372 ms | 9 - 564 MB | NPU | [Video-MAE.onnx.zip](https://huggingface.co/qualcomm/Video-MAE/blob/main/Video-MAE.onnx.zip) |
| Video-MAE | float | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN_DLC | 483.55 ms | 565 - 565 MB | NPU | [Video-MAE.dlc](https://huggingface.co/qualcomm/Video-MAE/blob/main/Video-MAE.dlc) |
| Video-MAE | float | Snapdragon X Elite CRD | Snapdragon® X Elite | ONNX | 691.257 ms | 188 - 188 MB | NPU | [Video-MAE.onnx.zip](https://huggingface.co/qualcomm/Video-MAE/blob/main/Video-MAE.onnx.zip) |
## Installation
Install the package via pip:
```bash
pip install "qai-hub-models[video-mae]"
```
## Configure Qualcomm® AI Hub to run this model on a cloud-hosted device
Sign-in to [Qualcomm® AI Hub](https://app.aihub.qualcomm.com/) with your
Qualcomm® ID. Once signed in navigate to `Account -> Settings -> API Token`.
With this API token, you can configure your client to run models on the cloud
hosted devices.
```bash
qai-hub configure --api_token API_TOKEN
```
Navigate to [docs](https://app.aihub.qualcomm.com/docs/) for more information.
## Demo off target
The package contains a simple end-to-end demo that downloads pre-trained
weights and runs this model on a sample input.
```bash
python -m qai_hub_models.models.video_mae.demo
```
The above demo runs a reference implementation of pre-processing, model
inference, and post-processing.
**NOTE**: If you are running in a Jupyter Notebook or a Google Colab-like
environment, add the following to your cell instead of the command above.
```
%run -m qai_hub_models.models.video_mae.demo
```
### Run model on a cloud-hosted device
In addition to the demo, you can also run the model on a cloud-hosted Qualcomm®
device. This script does the following:
* Performance check on-device on a cloud-hosted device
* Downloads compiled assets that can be deployed on-device for Android.
* Accuracy check between PyTorch and on-device outputs.
```bash
python -m qai_hub_models.models.video_mae.export
```
## How does this work?
This [export script](https://aihub.qualcomm.com/models/video_mae/qai_hub_models/models/Video-MAE/export.py)
leverages [Qualcomm® AI Hub](https://aihub.qualcomm.com/) to optimize, validate, and deploy this model
on-device. Let's go through each step below in detail:
Step 1: **Compile model for on-device deployment**
To compile a PyTorch model for on-device deployment, we first trace the model
in memory using `torch.jit.trace` and then call the `submit_compile_job` API.
```python
import torch
import qai_hub as hub
from qai_hub_models.models.video_mae import Model
# Load the model
torch_model = Model.from_pretrained()
# Device
device = hub.Device("Samsung Galaxy S24")
# Trace model
input_shape = torch_model.get_input_spec()
sample_inputs = torch_model.sample_inputs()
pt_model = torch.jit.trace(torch_model, [torch.tensor(data[0]) for _, data in sample_inputs.items()])
# Compile model on a specific device
compile_job = hub.submit_compile_job(
model=pt_model,
device=device,
input_specs=torch_model.get_input_spec(),
)
# Get target model to run on-device
target_model = compile_job.get_target_model()
```
Step 2: **Performance profiling on cloud-hosted device**
After compiling the model in Step 1, it can be profiled on-device using the
`target_model`. Note that this script runs the model on a device automatically
provisioned in the cloud. Once the job is submitted, you can navigate to a
provided job URL to view a variety of on-device performance metrics.
```python
profile_job = hub.submit_profile_job(
model=target_model,
device=device,
)
```
Step 3: **Verify on-device accuracy**
To verify the accuracy of the model on-device, you can run on-device inference
on sample input data on the same cloud hosted device.
```python
input_data = torch_model.sample_inputs()
inference_job = hub.submit_inference_job(
model=target_model,
device=device,
inputs=input_data,
)
on_device_output = inference_job.download_output_data()
```
With the output of the model, you can compute metrics such as PSNR or relative
error, or spot-check the output against the expected output.
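As a concrete sketch of such a comparison (hypothetical helper names; the on-device outputs download as NumPy arrays, and both outputs are assumed to share a shape and a known peak value):

```python
import numpy as np

def psnr(reference: np.ndarray, test: np.ndarray, peak: float = 1.0) -> float:
    """Peak signal-to-noise ratio (dB) between two equally shaped arrays."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def relative_error(reference: np.ndarray, test: np.ndarray) -> float:
    """L2 relative error; values near zero indicate close agreement."""
    return float(np.linalg.norm(reference - test) / (np.linalg.norm(reference) + 1e-12))

# Synthetic stand-ins for the PyTorch output and the downloaded on-device output:
ref = np.ones((1, 400), dtype=np.float32)
dev = ref + np.float32(0.01)
print(f"PSNR: {psnr(ref, dev):.1f} dB, relative error: {relative_error(ref, dev):.4f}")
```

In practice, `ref` would come from running `torch_model` on the same sample inputs and `dev` from `inference_job.download_output_data()`.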
**Note**: This on-device profiling and inference requires access to Qualcomm®
AI Hub. [Sign up for access](https://myaccount.qualcomm.com/signup).
## Deploying compiled model to Android
The models can be deployed using multiple runtimes:
- TensorFlow Lite (`.tflite` export): [This
tutorial](https://www.tensorflow.org/lite/android/quickstart) provides a
guide to deploy the .tflite model in an Android application.
- QNN (`.so` export): This [sample
app](https://docs.qualcomm.com/bundle/publicresource/topics/80-63442-50/sample_app.html)
provides instructions on how to use the `.so` shared library in an Android application.
## View on Qualcomm® AI Hub
Get more details on Video-MAE's performance across various devices [here](https://aihub.qualcomm.com/models/video_mae).
Explore all available models on [Qualcomm® AI Hub](https://aihub.qualcomm.com/).
## License
* The license for the original implementation of Video-MAE can be found
[here](https://github.com/MCG-NJU/VideoMAE/blob/main/LICENSE).
* The license for the compiled assets for on-device deployment can be found [here](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/Qualcomm+AI+Hub+Proprietary+License.pdf)
## References
* [Masked Autoencoders are Data-Efficient Learners for Self-Supervised Video Pre-Training](https://arxiv.org/abs/2203.12602)
* [Source Model Implementation](https://github.com/MCG-NJU/VideoMAE)
## Community
* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
* For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com).
|
qualcomm/Unet-Segmentation
|
qualcomm
| 2025-08-29T23:59:57Z | 103 | 6 |
pytorch
|
[
"pytorch",
"tflite",
"backbone",
"real_time",
"android",
"image-segmentation",
"arxiv:1505.04597",
"license:other",
"region:us"
] |
image-segmentation
| 2024-02-25T23:01:41Z |
---
library_name: pytorch
license: other
tags:
- backbone
- real_time
- android
pipeline_tag: image-segmentation
---

# Unet-Segmentation: Optimized for Mobile Deployment
## Real-time segmentation optimized for mobile and edge
UNet is a machine learning model that produces a segmentation mask for an image. The most basic use case will label each pixel in the image as being in the foreground or the background. More advanced usage will assign a class label to each pixel. This version of the model was trained on the data from Kaggle's Carvana Image Masking Challenge (see https://www.kaggle.com/c/carvana-image-masking-challenge) and is used for vehicle segmentation.
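As a sketch of the basic foreground/background use case above (hypothetical tiny logits for illustration; the real model consumes 224x224 inputs and the demo handles pre-processing):

```python
import numpy as np

# Hypothetical (C, H, W) logits for a 2x2 image: channel 0 = background,
# channel 1 = foreground (vehicle).
logits = np.array([[[2.0, -1.0],
                    [0.5,  3.0]],
                   [[1.0,  4.0],
                    [2.5, -2.0]]], dtype=np.float32)

# Per-pixel class label: 1 wherever the foreground logit wins.
mask = logits.argmax(axis=0).astype(np.uint8)
print(mask)
```

The resulting `mask` can be overlaid on the input image to visualize the vehicle segmentation.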
This model is an implementation of Unet-Segmentation found [here](https://github.com/milesial/Pytorch-UNet).
This repository provides scripts to run Unet-Segmentation on Qualcomm® devices.
More details on model performance across various devices can be found
[here](https://aihub.qualcomm.com/models/unet_segmentation).
### Model Details
- **Model Type:** Model_use_case.semantic_segmentation
- **Model Stats:**
- Model checkpoint: unet_carvana_scale1.0_epoch2
- Input resolution: 224x224
- Number of output classes: 2 (foreground / background)
- Number of parameters: 31.0M
- Model size (float): 118 MB
- Model size (w8a8): 29.8 MB
| Model | Precision | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Primary Compute Unit | Target Model
|---|---|---|---|---|---|---|---|---|
| Unet-Segmentation | float | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | TFLITE | 957.723 ms | 0 - 115 MB | NPU | [Unet-Segmentation.tflite](https://huggingface.co/qualcomm/Unet-Segmentation/blob/main/Unet-Segmentation.tflite) |
| Unet-Segmentation | float | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | QNN_DLC | 943.752 ms | 4 - 130 MB | NPU | [Unet-Segmentation.dlc](https://huggingface.co/qualcomm/Unet-Segmentation/blob/main/Unet-Segmentation.dlc) |
| Unet-Segmentation | float | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | TFLITE | 292.328 ms | 6 - 140 MB | NPU | [Unet-Segmentation.tflite](https://huggingface.co/qualcomm/Unet-Segmentation/blob/main/Unet-Segmentation.tflite) |
| Unet-Segmentation | float | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | QNN_DLC | 309.739 ms | 9 - 158 MB | NPU | [Unet-Segmentation.dlc](https://huggingface.co/qualcomm/Unet-Segmentation/blob/main/Unet-Segmentation.dlc) |
| Unet-Segmentation | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | TFLITE | 160.967 ms | 6 - 465 MB | NPU | [Unet-Segmentation.tflite](https://huggingface.co/qualcomm/Unet-Segmentation/blob/main/Unet-Segmentation.tflite) |
| Unet-Segmentation | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | QNN_DLC | 142.344 ms | 9 - 52 MB | NPU | [Unet-Segmentation.dlc](https://huggingface.co/qualcomm/Unet-Segmentation/blob/main/Unet-Segmentation.dlc) |
| Unet-Segmentation | float | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | TFLITE | 248.534 ms | 6 - 121 MB | NPU | [Unet-Segmentation.tflite](https://huggingface.co/qualcomm/Unet-Segmentation/blob/main/Unet-Segmentation.tflite) |
| Unet-Segmentation | float | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | QNN_DLC | 238.804 ms | 0 - 127 MB | NPU | [Unet-Segmentation.dlc](https://huggingface.co/qualcomm/Unet-Segmentation/blob/main/Unet-Segmentation.dlc) |
| Unet-Segmentation | float | SA7255P ADP | Qualcomm® SA7255P | TFLITE | 957.723 ms | 0 - 115 MB | NPU | [Unet-Segmentation.tflite](https://huggingface.co/qualcomm/Unet-Segmentation/blob/main/Unet-Segmentation.tflite) |
| Unet-Segmentation | float | SA7255P ADP | Qualcomm® SA7255P | QNN_DLC | 943.752 ms | 4 - 130 MB | NPU | [Unet-Segmentation.dlc](https://huggingface.co/qualcomm/Unet-Segmentation/blob/main/Unet-Segmentation.dlc) |
| Unet-Segmentation | float | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | TFLITE | 152.468 ms | 6 - 465 MB | NPU | [Unet-Segmentation.tflite](https://huggingface.co/qualcomm/Unet-Segmentation/blob/main/Unet-Segmentation.tflite) |
| Unet-Segmentation | float | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | QNN_DLC | 138.816 ms | 10 - 53 MB | NPU | [Unet-Segmentation.dlc](https://huggingface.co/qualcomm/Unet-Segmentation/blob/main/Unet-Segmentation.dlc) |
| Unet-Segmentation | float | SA8295P ADP | Qualcomm® SA8295P | TFLITE | 274.528 ms | 6 - 121 MB | NPU | [Unet-Segmentation.tflite](https://huggingface.co/qualcomm/Unet-Segmentation/blob/main/Unet-Segmentation.tflite) |
| Unet-Segmentation | float | SA8295P ADP | Qualcomm® SA8295P | QNN_DLC | 259.073 ms | 0 - 128 MB | NPU | [Unet-Segmentation.dlc](https://huggingface.co/qualcomm/Unet-Segmentation/blob/main/Unet-Segmentation.dlc) |
| Unet-Segmentation | float | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | TFLITE | 156.443 ms | 6 - 463 MB | NPU | [Unet-Segmentation.tflite](https://huggingface.co/qualcomm/Unet-Segmentation/blob/main/Unet-Segmentation.tflite) |
| Unet-Segmentation | float | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | QNN_DLC | 139.781 ms | 9 - 52 MB | NPU | [Unet-Segmentation.dlc](https://huggingface.co/qualcomm/Unet-Segmentation/blob/main/Unet-Segmentation.dlc) |
| Unet-Segmentation | float | SA8775P ADP | Qualcomm® SA8775P | TFLITE | 248.534 ms | 6 - 121 MB | NPU | [Unet-Segmentation.tflite](https://huggingface.co/qualcomm/Unet-Segmentation/blob/main/Unet-Segmentation.tflite) |
| Unet-Segmentation | float | SA8775P ADP | Qualcomm® SA8775P | QNN_DLC | 238.804 ms | 0 - 127 MB | NPU | [Unet-Segmentation.dlc](https://huggingface.co/qualcomm/Unet-Segmentation/blob/main/Unet-Segmentation.dlc) |
| Unet-Segmentation | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | TFLITE | 153.363 ms | 6 - 238 MB | NPU | [Unet-Segmentation.tflite](https://huggingface.co/qualcomm/Unet-Segmentation/blob/main/Unet-Segmentation.tflite) |
| Unet-Segmentation | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | QNN_DLC | 141.696 ms | 11 - 59 MB | NPU | [Unet-Segmentation.dlc](https://huggingface.co/qualcomm/Unet-Segmentation/blob/main/Unet-Segmentation.dlc) |
| Unet-Segmentation | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | ONNX | 167.585 ms | 0 - 84 MB | NPU | [Unet-Segmentation.onnx.zip](https://huggingface.co/qualcomm/Unet-Segmentation/blob/main/Unet-Segmentation.onnx.zip) |
| Unet-Segmentation | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | TFLITE | 111.582 ms | 6 - 117 MB | NPU | [Unet-Segmentation.tflite](https://huggingface.co/qualcomm/Unet-Segmentation/blob/main/Unet-Segmentation.tflite) |
| Unet-Segmentation | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | QNN_DLC | 102.765 ms | 9 - 132 MB | NPU | [Unet-Segmentation.dlc](https://huggingface.co/qualcomm/Unet-Segmentation/blob/main/Unet-Segmentation.dlc) |
| Unet-Segmentation | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | ONNX | 119.181 ms | 24 - 125 MB | NPU | [Unet-Segmentation.onnx.zip](https://huggingface.co/qualcomm/Unet-Segmentation/blob/main/Unet-Segmentation.onnx.zip) |
| Unet-Segmentation | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | TFLITE | 102.945 ms | 6 - 122 MB | NPU | [Unet-Segmentation.tflite](https://huggingface.co/qualcomm/Unet-Segmentation/blob/main/Unet-Segmentation.tflite) |
| Unet-Segmentation | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | QNN_DLC | 95.348 ms | 9 - 141 MB | NPU | [Unet-Segmentation.dlc](https://huggingface.co/qualcomm/Unet-Segmentation/blob/main/Unet-Segmentation.dlc) |
| Unet-Segmentation | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | ONNX | 108.783 ms | 22 - 124 MB | NPU | [Unet-Segmentation.onnx.zip](https://huggingface.co/qualcomm/Unet-Segmentation/blob/main/Unet-Segmentation.onnx.zip) |
| Unet-Segmentation | float | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN_DLC | 132.138 ms | 72 - 72 MB | NPU | [Unet-Segmentation.dlc](https://huggingface.co/qualcomm/Unet-Segmentation/blob/main/Unet-Segmentation.dlc) |
| Unet-Segmentation | float | Snapdragon X Elite CRD | Snapdragon® X Elite | ONNX | 151.04 ms | 53 - 53 MB | NPU | [Unet-Segmentation.onnx.zip](https://huggingface.co/qualcomm/Unet-Segmentation/blob/main/Unet-Segmentation.onnx.zip) |
| Unet-Segmentation | w8a8 | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | TFLITE | 126.729 ms | 2 - 46 MB | NPU | [Unet-Segmentation.tflite](https://huggingface.co/qualcomm/Unet-Segmentation/blob/main/Unet-Segmentation_w8a8.tflite) |
| Unet-Segmentation | w8a8 | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | QNN_DLC | 1431.626 ms | 2 - 60 MB | NPU | [Unet-Segmentation.dlc](https://huggingface.co/qualcomm/Unet-Segmentation/blob/main/Unet-Segmentation_w8a8.dlc) |
| Unet-Segmentation | w8a8 | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | TFLITE | 53.926 ms | 2 - 84 MB | NPU | [Unet-Segmentation.tflite](https://huggingface.co/qualcomm/Unet-Segmentation/blob/main/Unet-Segmentation_w8a8.tflite) |
| Unet-Segmentation | w8a8 | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | QNN_DLC | 63.277 ms | 2 - 101 MB | NPU | [Unet-Segmentation.dlc](https://huggingface.co/qualcomm/Unet-Segmentation/blob/main/Unet-Segmentation_w8a8.dlc) |
| Unet-Segmentation | w8a8 | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | TFLITE | 37.74 ms | 0 - 902 MB | NPU | [Unet-Segmentation.tflite](https://huggingface.co/qualcomm/Unet-Segmentation/blob/main/Unet-Segmentation_w8a8.tflite) |
| Unet-Segmentation | w8a8 | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | QNN_DLC | 599.397 ms | 2 - 19 MB | NPU | [Unet-Segmentation.dlc](https://huggingface.co/qualcomm/Unet-Segmentation/blob/main/Unet-Segmentation_w8a8.dlc) |
| Unet-Segmentation | w8a8 | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | TFLITE | 34.745 ms | 2 - 45 MB | NPU | [Unet-Segmentation.tflite](https://huggingface.co/qualcomm/Unet-Segmentation/blob/main/Unet-Segmentation_w8a8.tflite) |
| Unet-Segmentation | w8a8 | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | QNN_DLC | 699.619 ms | 2 - 59 MB | NPU | [Unet-Segmentation.dlc](https://huggingface.co/qualcomm/Unet-Segmentation/blob/main/Unet-Segmentation_w8a8.dlc) |
| Unet-Segmentation | w8a8 | RB3 Gen 2 (Proxy) | Qualcomm® QCS6490 (Proxy) | TFLITE | 312.114 ms | 2 - 291 MB | NPU | [Unet-Segmentation.tflite](https://huggingface.co/qualcomm/Unet-Segmentation/blob/main/Unet-Segmentation_w8a8.tflite) |
| Unet-Segmentation | w8a8 | RB3 Gen 2 (Proxy) | Qualcomm® QCS6490 (Proxy) | QNN_DLC | 336.038 ms | 2 - 363 MB | NPU | [Unet-Segmentation.dlc](https://huggingface.co/qualcomm/Unet-Segmentation/blob/main/Unet-Segmentation_w8a8.dlc) |
| Unet-Segmentation | w8a8 | RB5 (Proxy) | Qualcomm® QCS8250 (Proxy) | TFLITE | 3161.088 ms | 0 - 846 MB | NPU | [Unet-Segmentation.tflite](https://huggingface.co/qualcomm/Unet-Segmentation/blob/main/Unet-Segmentation_w8a8.tflite) |
| Unet-Segmentation | w8a8 | SA7255P ADP | Qualcomm® SA7255P | TFLITE | 126.729 ms | 2 - 46 MB | NPU | [Unet-Segmentation.tflite](https://huggingface.co/qualcomm/Unet-Segmentation/blob/main/Unet-Segmentation_w8a8.tflite) |
| Unet-Segmentation | w8a8 | SA7255P ADP | Qualcomm® SA7255P | QNN_DLC | 1431.626 ms | 2 - 60 MB | NPU | [Unet-Segmentation.dlc](https://huggingface.co/qualcomm/Unet-Segmentation/blob/main/Unet-Segmentation_w8a8.dlc) |
| Unet-Segmentation | w8a8 | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | TFLITE | 36.017 ms | 0 - 899 MB | NPU | [Unet-Segmentation.tflite](https://huggingface.co/qualcomm/Unet-Segmentation/blob/main/Unet-Segmentation_w8a8.tflite) |
| Unet-Segmentation | w8a8 | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | QNN_DLC | 599.983 ms | 1 - 26 MB | NPU | [Unet-Segmentation.dlc](https://huggingface.co/qualcomm/Unet-Segmentation/blob/main/Unet-Segmentation_w8a8.dlc) |
| Unet-Segmentation | w8a8 | SA8295P ADP | Qualcomm® SA8295P | TFLITE | 67.433 ms | 2 - 47 MB | NPU | [Unet-Segmentation.tflite](https://huggingface.co/qualcomm/Unet-Segmentation/blob/main/Unet-Segmentation_w8a8.tflite) |
| Unet-Segmentation | w8a8 | SA8295P ADP | Qualcomm® SA8295P | QNN_DLC | 64.266 ms | 2 - 62 MB | NPU | [Unet-Segmentation.dlc](https://huggingface.co/qualcomm/Unet-Segmentation/blob/main/Unet-Segmentation_w8a8.dlc) |
| Unet-Segmentation | w8a8 | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | TFLITE | 38.312 ms | 0 - 900 MB | NPU | [Unet-Segmentation.tflite](https://huggingface.co/qualcomm/Unet-Segmentation/blob/main/Unet-Segmentation_w8a8.tflite) |
| Unet-Segmentation | w8a8 | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | QNN_DLC | 600.006 ms | 2 - 26 MB | NPU | [Unet-Segmentation.dlc](https://huggingface.co/qualcomm/Unet-Segmentation/blob/main/Unet-Segmentation_w8a8.dlc) |
| Unet-Segmentation | w8a8 | SA8775P ADP | Qualcomm® SA8775P | TFLITE | 34.745 ms | 2 - 45 MB | NPU | [Unet-Segmentation.tflite](https://huggingface.co/qualcomm/Unet-Segmentation/blob/main/Unet-Segmentation_w8a8.tflite) |
| Unet-Segmentation | w8a8 | SA8775P ADP | Qualcomm® SA8775P | QNN_DLC | 699.619 ms | 2 - 59 MB | NPU | [Unet-Segmentation.dlc](https://huggingface.co/qualcomm/Unet-Segmentation/blob/main/Unet-Segmentation_w8a8.dlc) |
| Unet-Segmentation | w8a8 | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | TFLITE | 35.529 ms | 0 - 899 MB | NPU | [Unet-Segmentation.tflite](https://huggingface.co/qualcomm/Unet-Segmentation/blob/main/Unet-Segmentation_w8a8.tflite) |
| Unet-Segmentation | w8a8 | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | QNN_DLC | 600.037 ms | 2 - 18 MB | NPU | [Unet-Segmentation.dlc](https://huggingface.co/qualcomm/Unet-Segmentation/blob/main/Unet-Segmentation_w8a8.dlc) |
| Unet-Segmentation | w8a8 | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | TFLITE | 29.024 ms | 1 - 82 MB | NPU | [Unet-Segmentation.tflite](https://huggingface.co/qualcomm/Unet-Segmentation/blob/main/Unet-Segmentation_w8a8.tflite) |
| Unet-Segmentation | w8a8 | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | QNN_DLC | 527.696 ms | 2 - 95 MB | NPU | [Unet-Segmentation.dlc](https://huggingface.co/qualcomm/Unet-Segmentation/blob/main/Unet-Segmentation_w8a8.dlc) |
| Unet-Segmentation | w8a8 | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | TFLITE | 25.802 ms | 1 - 51 MB | NPU | [Unet-Segmentation.tflite](https://huggingface.co/qualcomm/Unet-Segmentation/blob/main/Unet-Segmentation_w8a8.tflite) |
| Unet-Segmentation | w8a8 | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | QNN_DLC | 559.038 ms | 2 - 68 MB | NPU | [Unet-Segmentation.dlc](https://huggingface.co/qualcomm/Unet-Segmentation/blob/main/Unet-Segmentation_w8a8.dlc) |
| Unet-Segmentation | w8a8 | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN_DLC | 671.575 ms | 63 - 63 MB | NPU | [Unet-Segmentation.dlc](https://huggingface.co/qualcomm/Unet-Segmentation/blob/main/Unet-Segmentation_w8a8.dlc) |
## Installation
Install the package via pip:
```bash
pip install qai-hub-models
```
## Configure Qualcomm® AI Hub to run this model on a cloud-hosted device
Sign-in to [Qualcomm® AI Hub](https://app.aihub.qualcomm.com/) with your
Qualcomm® ID. Once signed in navigate to `Account -> Settings -> API Token`.
With this API token, you can configure your client to run models on the cloud
hosted devices.
```bash
qai-hub configure --api_token API_TOKEN
```
Navigate to [docs](https://app.aihub.qualcomm.com/docs/) for more information.
## Demo off target
The package contains a simple end-to-end demo that downloads pre-trained
weights and runs this model on a sample input.
```bash
python -m qai_hub_models.models.unet_segmentation.demo
```
The above demo runs a reference implementation of pre-processing, model
inference, and post-processing.
**NOTE**: If you are running in a Jupyter Notebook or a Google Colab-like
environment, add the following to your cell instead of the command above.
```
%run -m qai_hub_models.models.unet_segmentation.demo
```
### Run model on a cloud-hosted device
In addition to the demo, you can also run the model on a cloud-hosted Qualcomm®
device. This script does the following:
* Performance check on-device on a cloud-hosted device
* Downloads compiled assets that can be deployed on-device for Android.
* Accuracy check between PyTorch and on-device outputs.
```bash
python -m qai_hub_models.models.unet_segmentation.export
```
## How does this work?
This [export script](https://aihub.qualcomm.com/models/unet_segmentation/qai_hub_models/models/Unet-Segmentation/export.py)
leverages [Qualcomm® AI Hub](https://aihub.qualcomm.com/) to optimize, validate, and deploy this model
on-device. Let's go through each step below in detail:
Step 1: **Compile model for on-device deployment**
To compile a PyTorch model for on-device deployment, we first trace the model
in memory using `torch.jit.trace` and then call the `submit_compile_job` API.
```python
import torch
import qai_hub as hub
from qai_hub_models.models.unet_segmentation import Model
# Load the model
torch_model = Model.from_pretrained()
# Device
device = hub.Device("Samsung Galaxy S24")
# Trace model
input_shape = torch_model.get_input_spec()
sample_inputs = torch_model.sample_inputs()
pt_model = torch.jit.trace(torch_model, [torch.tensor(data[0]) for _, data in sample_inputs.items()])
# Compile model on a specific device
compile_job = hub.submit_compile_job(
model=pt_model,
device=device,
input_specs=torch_model.get_input_spec(),
)
# Get target model to run on-device
target_model = compile_job.get_target_model()
```
Step 2: **Performance profiling on cloud-hosted device**
After compiling the model in Step 1, it can be profiled on-device using the
`target_model`. Note that this script runs the model on a device automatically
provisioned in the cloud. Once the job is submitted, you can navigate to a
provided job URL to view a variety of on-device performance metrics.
```python
profile_job = hub.submit_profile_job(
model=target_model,
device=device,
)
```
Step 3: **Verify on-device accuracy**
To verify the accuracy of the model on-device, you can run on-device inference
on sample input data on the same cloud hosted device.
```python
input_data = torch_model.sample_inputs()
inference_job = hub.submit_inference_job(
model=target_model,
device=device,
inputs=input_data,
)
on_device_output = inference_job.download_output_data()
```
With the output of the model, you can compute metrics such as PSNR or relative
error, or spot-check the output against the expected output.
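For a segmentation model, one simple spot check (a sketch, assuming `(N, C, H, W)` logit arrays from both the PyTorch and on-device runs) is to compare the per-pixel argmax labels:

```python
import numpy as np

def mask_agreement(logits_a: np.ndarray, logits_b: np.ndarray) -> float:
    """Fraction of pixels where two (N, C, H, W) logit maps pick the same class."""
    return float(np.mean(logits_a.argmax(axis=1) == logits_b.argmax(axis=1)))

# Synthetic 2-class logits standing in for the PyTorch and on-device outputs:
a = np.random.default_rng(0).normal(size=(1, 2, 8, 8)).astype(np.float32)
b = a + 0.001  # small numeric drift between runtimes
print(f"pixel agreement: {mask_agreement(a, b):.3f}")
```

Agreement close to 1.0 suggests the on-device model labels pixels consistently with the PyTorch reference.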
**Note**: This on-device profiling and inference requires access to Qualcomm®
AI Hub. [Sign up for access](https://myaccount.qualcomm.com/signup).
## Run demo on a cloud-hosted device
You can also run the demo on-device.
```bash
python -m qai_hub_models.models.unet_segmentation.demo --eval-mode on-device
```
**NOTE**: If you are running in a Jupyter Notebook or a Google Colab-like
environment, add the following to your cell instead of the command above.
```
%run -m qai_hub_models.models.unet_segmentation.demo -- --eval-mode on-device
```
## Deploying compiled model to Android
The models can be deployed using multiple runtimes:
- TensorFlow Lite (`.tflite` export): [This
tutorial](https://www.tensorflow.org/lite/android/quickstart) provides a
guide to deploy the .tflite model in an Android application.
- QNN (`.so` export): This [sample
app](https://docs.qualcomm.com/bundle/publicresource/topics/80-63442-50/sample_app.html)
provides instructions on how to use the `.so` shared library in an Android application.
## View on Qualcomm® AI Hub
Get more details on Unet-Segmentation's performance across various devices [here](https://aihub.qualcomm.com/models/unet_segmentation).
Explore all available models on [Qualcomm® AI Hub](https://aihub.qualcomm.com/).
## License
* The license for the original implementation of Unet-Segmentation can be found
[here](https://github.com/milesial/Pytorch-UNet/blob/master/LICENSE).
* The license for the compiled assets for on-device deployment can be found [here](https://github.com/milesial/Pytorch-UNet/blob/master/LICENSE)
## References
* [U-Net: Convolutional Networks for Biomedical Image Segmentation](https://arxiv.org/abs/1505.04597)
* [Source Model Implementation](https://github.com/milesial/Pytorch-UNet)
## Community
* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
* For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com).
|
ultratopaz/2005597
|
ultratopaz
| 2025-08-29T23:59:53Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-29T23:59:53Z |
[View on Civ Archive](https://civarchive.com/models/528094?modelVersionId=2110890)
|
crystalline7/1821985
|
crystalline7
| 2025-08-29T23:59:41Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-29T23:59:32Z |
[View on Civ Archive](https://civarchive.com/models/572613?modelVersionId=1923681)
|
ultratopaz/589599
|
ultratopaz
| 2025-08-29T23:59:24Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-29T23:59:19Z |
[View on Civ Archive](https://civarchive.com/models/601919?modelVersionId=674557)
|
seraphimzzzz/448394
|
seraphimzzzz
| 2025-08-29T23:59:11Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-29T23:59:05Z |
[View on Civ Archive](https://civarchive.com/models/477962?modelVersionId=531569)
|
qualcomm/Swin-Tiny
|
qualcomm
| 2025-08-29T23:58:59Z | 78 | 1 |
pytorch
|
[
"pytorch",
"tflite",
"backbone",
"android",
"image-classification",
"arxiv:2103.14030",
"license:other",
"region:us"
] |
image-classification
| 2024-02-25T22:56:55Z |
---
library_name: pytorch
license: other
tags:
- backbone
- android
pipeline_tag: image-classification
---

# Swin-Tiny: Optimized for Mobile Deployment
## Imagenet classifier and general purpose backbone
SwinTiny is a machine learning model that can classify images from the Imagenet dataset. It can also be used as a backbone in building more complex models for specific use cases.
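The classifier produces a 1000-way logit vector per image; a minimal post-processing sketch (assuming a NumPy logits array; mapping indices to Imagenet class names is left out) looks like:

```python
import numpy as np

def top_k(logits: np.ndarray, k: int = 5):
    """Return (indices, probabilities) of the k most likely Imagenet classes."""
    probs = np.exp(logits - logits.max())  # numerically stable softmax
    probs /= probs.sum()
    idx = np.argsort(probs)[::-1][:k]
    return idx, probs[idx]

# Synthetic logits with one confident class, standing in for model output:
logits = np.zeros(1000, dtype=np.float32)
logits[42] = 5.0
idx, p = top_k(logits)
print(int(idx[0]), float(p[0]))
```

The same post-processing applies whether the logits come from the PyTorch model or from an on-device inference job.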
This model is an implementation of Swin-Tiny found [here](https://github.com/pytorch/vision/blob/main/torchvision/models/swin_transformer.py).
This repository provides scripts to run Swin-Tiny on Qualcomm® devices.
More details on model performance across various devices can be found
[here](https://aihub.qualcomm.com/models/swin_tiny).
### Model Details
- **Model Type:** Model_use_case.image_classification
- **Model Stats:**
- Model checkpoint: Imagenet
- Input resolution: 224x224
- Number of parameters: 28.8M
- Model size (float): 110 MB
- Model size (w8a16): 29.9 MB
| Model | Precision | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Primary Compute Unit | Target Model
|---|---|---|---|---|---|---|---|---|
| Swin-Tiny | float | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | TFLITE | 24.223 ms | 0 - 161 MB | NPU | [Swin-Tiny.tflite](https://huggingface.co/qualcomm/Swin-Tiny/blob/main/Swin-Tiny.tflite) |
| Swin-Tiny | float | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | QNN_DLC | 21.251 ms | 1 - 151 MB | NPU | [Swin-Tiny.dlc](https://huggingface.co/qualcomm/Swin-Tiny/blob/main/Swin-Tiny.dlc) |
| Swin-Tiny | float | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | TFLITE | 13.71 ms | 0 - 162 MB | NPU | [Swin-Tiny.tflite](https://huggingface.co/qualcomm/Swin-Tiny/blob/main/Swin-Tiny.tflite) |
| Swin-Tiny | float | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | QNN_DLC | 14.612 ms | 1 - 153 MB | NPU | [Swin-Tiny.dlc](https://huggingface.co/qualcomm/Swin-Tiny/blob/main/Swin-Tiny.dlc) |
| Swin-Tiny | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | TFLITE | 10.679 ms | 0 - 21 MB | NPU | [Swin-Tiny.tflite](https://huggingface.co/qualcomm/Swin-Tiny/blob/main/Swin-Tiny.tflite) |
| Swin-Tiny | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | QNN_DLC | 9.222 ms | 0 - 38 MB | NPU | [Swin-Tiny.dlc](https://huggingface.co/qualcomm/Swin-Tiny/blob/main/Swin-Tiny.dlc) |
| Swin-Tiny | float | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | TFLITE | 12.01 ms | 0 - 161 MB | NPU | [Swin-Tiny.tflite](https://huggingface.co/qualcomm/Swin-Tiny/blob/main/Swin-Tiny.tflite) |
| Swin-Tiny | float | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | QNN_DLC | 10.535 ms | 1 - 384 MB | NPU | [Swin-Tiny.dlc](https://huggingface.co/qualcomm/Swin-Tiny/blob/main/Swin-Tiny.dlc) |
| Swin-Tiny | float | SA7255P ADP | Qualcomm® SA7255P | TFLITE | 24.223 ms | 0 - 161 MB | NPU | [Swin-Tiny.tflite](https://huggingface.co/qualcomm/Swin-Tiny/blob/main/Swin-Tiny.tflite) |
| Swin-Tiny | float | SA7255P ADP | Qualcomm® SA7255P | QNN_DLC | 21.251 ms | 1 - 151 MB | NPU | [Swin-Tiny.dlc](https://huggingface.co/qualcomm/Swin-Tiny/blob/main/Swin-Tiny.dlc) |
| Swin-Tiny | float | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | TFLITE | 10.831 ms | 0 - 17 MB | NPU | [Swin-Tiny.tflite](https://huggingface.co/qualcomm/Swin-Tiny/blob/main/Swin-Tiny.tflite) |
| Swin-Tiny | float | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | QNN_DLC | 9.292 ms | 0 - 38 MB | NPU | [Swin-Tiny.dlc](https://huggingface.co/qualcomm/Swin-Tiny/blob/main/Swin-Tiny.dlc) |
| Swin-Tiny | float | SA8295P ADP | Qualcomm® SA8295P | TFLITE | 15.657 ms | 0 - 156 MB | NPU | [Swin-Tiny.tflite](https://huggingface.co/qualcomm/Swin-Tiny/blob/main/Swin-Tiny.tflite) |
| Swin-Tiny | float | SA8295P ADP | Qualcomm® SA8295P | QNN_DLC | 14.035 ms | 0 - 373 MB | NPU | [Swin-Tiny.dlc](https://huggingface.co/qualcomm/Swin-Tiny/blob/main/Swin-Tiny.dlc) |
| Swin-Tiny | float | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | TFLITE | 10.813 ms | 0 - 18 MB | NPU | [Swin-Tiny.tflite](https://huggingface.co/qualcomm/Swin-Tiny/blob/main/Swin-Tiny.tflite) |
| Swin-Tiny | float | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | QNN_DLC | 9.319 ms | 0 - 39 MB | NPU | [Swin-Tiny.dlc](https://huggingface.co/qualcomm/Swin-Tiny/blob/main/Swin-Tiny.dlc) |
| Swin-Tiny | float | SA8775P ADP | Qualcomm® SA8775P | TFLITE | 12.01 ms | 0 - 161 MB | NPU | [Swin-Tiny.tflite](https://huggingface.co/qualcomm/Swin-Tiny/blob/main/Swin-Tiny.tflite) |
| Swin-Tiny | float | SA8775P ADP | Qualcomm® SA8775P | QNN_DLC | 10.535 ms | 1 - 384 MB | NPU | [Swin-Tiny.dlc](https://huggingface.co/qualcomm/Swin-Tiny/blob/main/Swin-Tiny.dlc) |
| Swin-Tiny | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | TFLITE | 10.879 ms | 0 - 17 MB | NPU | [Swin-Tiny.tflite](https://huggingface.co/qualcomm/Swin-Tiny/blob/main/Swin-Tiny.tflite) |
| Swin-Tiny | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | QNN_DLC | 9.387 ms | 0 - 39 MB | NPU | [Swin-Tiny.dlc](https://huggingface.co/qualcomm/Swin-Tiny/blob/main/Swin-Tiny.dlc) |
| Swin-Tiny | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | ONNX | 9.413 ms | 0 - 159 MB | NPU | [Swin-Tiny.onnx.zip](https://huggingface.co/qualcomm/Swin-Tiny/blob/main/Swin-Tiny.onnx.zip) |
| Swin-Tiny | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | TFLITE | 7.133 ms | 0 - 166 MB | NPU | [Swin-Tiny.tflite](https://huggingface.co/qualcomm/Swin-Tiny/blob/main/Swin-Tiny.tflite) |
| Swin-Tiny | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | QNN_DLC | 6.126 ms | 1 - 160 MB | NPU | [Swin-Tiny.dlc](https://huggingface.co/qualcomm/Swin-Tiny/blob/main/Swin-Tiny.dlc) |
| Swin-Tiny | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | ONNX | 6.167 ms | 1 - 165 MB | NPU | [Swin-Tiny.onnx.zip](https://huggingface.co/qualcomm/Swin-Tiny/blob/main/Swin-Tiny.onnx.zip) |
| Swin-Tiny | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | TFLITE | 6.786 ms | 0 - 158 MB | NPU | [Swin-Tiny.tflite](https://huggingface.co/qualcomm/Swin-Tiny/blob/main/Swin-Tiny.tflite) |
| Swin-Tiny | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | QNN_DLC | 4.598 ms | 1 - 371 MB | NPU | [Swin-Tiny.dlc](https://huggingface.co/qualcomm/Swin-Tiny/blob/main/Swin-Tiny.dlc) |
| Swin-Tiny | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | ONNX | 5.641 ms | 0 - 155 MB | NPU | [Swin-Tiny.onnx.zip](https://huggingface.co/qualcomm/Swin-Tiny/blob/main/Swin-Tiny.onnx.zip) |
| Swin-Tiny | float | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN_DLC | 9.952 ms | 301 - 301 MB | NPU | [Swin-Tiny.dlc](https://huggingface.co/qualcomm/Swin-Tiny/blob/main/Swin-Tiny.dlc) |
| Swin-Tiny | float | Snapdragon X Elite CRD | Snapdragon® X Elite | ONNX | 10.907 ms | 58 - 58 MB | NPU | [Swin-Tiny.onnx.zip](https://huggingface.co/qualcomm/Swin-Tiny/blob/main/Swin-Tiny.onnx.zip) |
| Swin-Tiny | w8a16 | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | QNN_DLC | 16.813 ms | 0 - 161 MB | NPU | [Swin-Tiny.dlc](https://huggingface.co/qualcomm/Swin-Tiny/blob/main/Swin-Tiny_w8a16.dlc) |
| Swin-Tiny | w8a16 | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | QNN_DLC | 11.182 ms | 0 - 129 MB | NPU | [Swin-Tiny.dlc](https://huggingface.co/qualcomm/Swin-Tiny/blob/main/Swin-Tiny_w8a16.dlc) |
| Swin-Tiny | w8a16 | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | QNN_DLC | 9.431 ms | 0 - 51 MB | NPU | [Swin-Tiny.dlc](https://huggingface.co/qualcomm/Swin-Tiny/blob/main/Swin-Tiny_w8a16.dlc) |
| Swin-Tiny | w8a16 | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | QNN_DLC | 10.028 ms | 0 - 165 MB | NPU | [Swin-Tiny.dlc](https://huggingface.co/qualcomm/Swin-Tiny/blob/main/Swin-Tiny_w8a16.dlc) |
| Swin-Tiny | w8a16 | RB3 Gen 2 (Proxy) | Qualcomm® QCS6490 (Proxy) | QNN_DLC | 30.172 ms | 0 - 482 MB | NPU | [Swin-Tiny.dlc](https://huggingface.co/qualcomm/Swin-Tiny/blob/main/Swin-Tiny_w8a16.dlc) |
| Swin-Tiny | w8a16 | SA7255P ADP | Qualcomm® SA7255P | QNN_DLC | 16.813 ms | 0 - 161 MB | NPU | [Swin-Tiny.dlc](https://huggingface.co/qualcomm/Swin-Tiny/blob/main/Swin-Tiny_w8a16.dlc) |
| Swin-Tiny | w8a16 | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | QNN_DLC | 9.437 ms | 0 - 52 MB | NPU | [Swin-Tiny.dlc](https://huggingface.co/qualcomm/Swin-Tiny/blob/main/Swin-Tiny_w8a16.dlc) |
| Swin-Tiny | w8a16 | SA8295P ADP | Qualcomm® SA8295P | QNN_DLC | 11.216 ms | 0 - 122 MB | NPU | [Swin-Tiny.dlc](https://huggingface.co/qualcomm/Swin-Tiny/blob/main/Swin-Tiny_w8a16.dlc) |
| Swin-Tiny | w8a16 | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | QNN_DLC | 9.456 ms | 0 - 54 MB | NPU | [Swin-Tiny.dlc](https://huggingface.co/qualcomm/Swin-Tiny/blob/main/Swin-Tiny_w8a16.dlc) |
| Swin-Tiny | w8a16 | SA8775P ADP | Qualcomm® SA8775P | QNN_DLC | 10.028 ms | 0 - 165 MB | NPU | [Swin-Tiny.dlc](https://huggingface.co/qualcomm/Swin-Tiny/blob/main/Swin-Tiny_w8a16.dlc) |
| Swin-Tiny | w8a16 | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | QNN_DLC | 9.503 ms | 0 - 53 MB | NPU | [Swin-Tiny.dlc](https://huggingface.co/qualcomm/Swin-Tiny/blob/main/Swin-Tiny_w8a16.dlc) |
| Swin-Tiny | w8a16 | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | ONNX | 69.585 ms | 140 - 237 MB | NPU | [Swin-Tiny.onnx.zip](https://huggingface.co/qualcomm/Swin-Tiny/blob/main/Swin-Tiny_w8a16.onnx.zip) |
| Swin-Tiny | w8a16 | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | QNN_DLC | 6.255 ms | 0 - 176 MB | NPU | [Swin-Tiny.dlc](https://huggingface.co/qualcomm/Swin-Tiny/blob/main/Swin-Tiny_w8a16.dlc) |
| Swin-Tiny | w8a16 | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | ONNX | 50.554 ms | 156 - 313 MB | NPU | [Swin-Tiny.onnx.zip](https://huggingface.co/qualcomm/Swin-Tiny/blob/main/Swin-Tiny_w8a16.onnx.zip) |
| Swin-Tiny | w8a16 | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | QNN_DLC | 5.688 ms | 0 - 161 MB | NPU | [Swin-Tiny.dlc](https://huggingface.co/qualcomm/Swin-Tiny/blob/main/Swin-Tiny_w8a16.dlc) |
| Swin-Tiny | w8a16 | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | ONNX | 38.799 ms | 158 - 290 MB | NPU | [Swin-Tiny.onnx.zip](https://huggingface.co/qualcomm/Swin-Tiny/blob/main/Swin-Tiny_w8a16.onnx.zip) |
| Swin-Tiny | w8a16 | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN_DLC | 10.215 ms | 89 - 89 MB | NPU | [Swin-Tiny.dlc](https://huggingface.co/qualcomm/Swin-Tiny/blob/main/Swin-Tiny_w8a16.dlc) |
| Swin-Tiny | w8a16 | Snapdragon X Elite CRD | Snapdragon® X Elite | ONNX | 56.324 ms | 228 - 228 MB | NPU | [Swin-Tiny.onnx.zip](https://huggingface.co/qualcomm/Swin-Tiny/blob/main/Swin-Tiny_w8a16.onnx.zip) |
## Installation
Install the package via pip:
```bash
pip install qai-hub-models
```
## Configure Qualcomm® AI Hub to run this model on a cloud-hosted device
Sign-in to [Qualcomm® AI Hub](https://app.aihub.qualcomm.com/) with your
Qualcomm® ID. Once signed in, navigate to `Account -> Settings -> API Token`.
With this API token, you can configure your client to run models on the cloud
hosted devices.
```bash
qai-hub configure --api_token API_TOKEN
```
Navigate to [docs](https://app.aihub.qualcomm.com/docs/) for more information.
## Demo off target
The package contains a simple end-to-end demo that downloads pre-trained
weights and runs this model on a sample input.
```bash
python -m qai_hub_models.models.swin_tiny.demo
```
The above demo runs a reference implementation of pre-processing, model
inference, and post-processing.
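As a rough illustration of the pre-processing stage, Swin-Tiny expects a 224x224 RGB input normalized with the standard ImageNet statistics. A minimal NumPy sketch (an assumption about the exact pipeline — the packaged demo may differ, e.g. in its resize/crop policy):

```python
import numpy as np

# Standard ImageNet normalization constants.
IMAGENET_MEAN = np.array([0.485, 0.456, 0.406], dtype=np.float32)
IMAGENET_STD = np.array([0.229, 0.224, 0.225], dtype=np.float32)

def preprocess(image_hwc_uint8: np.ndarray) -> np.ndarray:
    """Turn an HxWx3 uint8 image (already resized to 224x224) into a
    1x3x224x224 float32 NCHW tensor."""
    x = image_hwc_uint8.astype(np.float32) / 255.0
    x = (x - IMAGENET_MEAN) / IMAGENET_STD
    return np.transpose(x, (2, 0, 1))[None, ...]
```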
**NOTE**: If you want to run this in a Jupyter Notebook or a Google Colab-like
environment, please add the following to your cell (instead of the above).
```
%run -m qai_hub_models.models.swin_tiny.demo
```
### Run model on a cloud-hosted device
In addition to the demo, you can also run the model on a cloud-hosted Qualcomm®
device. This script does the following:
* Performance check on-device on a cloud-hosted device
* Downloads compiled assets that can be deployed on-device for Android.
* Accuracy check between PyTorch and on-device outputs.
```bash
python -m qai_hub_models.models.swin_tiny.export
```
## How does this work?
This [export script](https://aihub.qualcomm.com/models/swin_tiny/qai_hub_models/models/Swin-Tiny/export.py)
leverages [Qualcomm® AI Hub](https://aihub.qualcomm.com/) to optimize, validate, and deploy this model
on-device. Let's go through each step below in detail:
Step 1: **Compile model for on-device deployment**
To compile a PyTorch model for on-device deployment, we first trace the model
in memory using `torch.jit.trace` and then call the `submit_compile_job` API.
```python
import torch
import qai_hub as hub
from qai_hub_models.models.swin_tiny import Model
# Load the model
torch_model = Model.from_pretrained()
# Device
device = hub.Device("Samsung Galaxy S24")
# Trace model
input_shape = torch_model.get_input_spec()
sample_inputs = torch_model.sample_inputs()
pt_model = torch.jit.trace(torch_model, [torch.tensor(data[0]) for _, data in sample_inputs.items()])
# Compile model on a specific device
compile_job = hub.submit_compile_job(
model=pt_model,
device=device,
input_specs=torch_model.get_input_spec(),
)
# Get target model to run on-device
target_model = compile_job.get_target_model()
```
Step 2: **Performance profiling on cloud-hosted device**
After compiling the model in step 1, it can be profiled on-device using the
`target_model`. Note that this script runs the model on a device automatically
provisioned in the cloud. Once the job is submitted, you can navigate to the
provided job URL to view a variety of on-device performance metrics.
```python
profile_job = hub.submit_profile_job(
model=target_model,
device=device,
)
```
Step 3: **Verify on-device accuracy**
To verify the accuracy of the model on-device, you can run on-device inference
on sample input data on the same cloud-hosted device.
```python
input_data = torch_model.sample_inputs()
inference_job = hub.submit_inference_job(
model=target_model,
device=device,
inputs=input_data,
)
on_device_output = inference_job.download_output_data()
```
With the output of the model, you can compute metrics such as PSNR or relative
error, or spot-check the output against the expected output.
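For instance, PSNR between the on-device output and the PyTorch reference can be sketched with the standard library (`max_val` here is an assumed peak value for the output range):

```python
import math

def psnr(reference, test, max_val=1.0):
    """Peak signal-to-noise ratio, in dB, between two equal-length
    flat sequences of floats; higher means closer agreement."""
    mse = sum((r - t) ** 2 for r, t in zip(reference, test)) / len(reference)
    if mse == 0.0:
        return math.inf
    return 10.0 * math.log10(max_val ** 2 / mse)
```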
**Note**: This on-device profiling and inference requires access to Qualcomm®
AI Hub. [Sign up for access](https://myaccount.qualcomm.com/signup).
## Run demo on a cloud-hosted device
You can also run the demo on-device.
```bash
python -m qai_hub_models.models.swin_tiny.demo --eval-mode on-device
```
**NOTE**: If you want to run this in a Jupyter Notebook or a Google Colab-like
environment, please add the following to your cell (instead of the above).
```
%run -m qai_hub_models.models.swin_tiny.demo -- --eval-mode on-device
```
## Deploying compiled model to Android
The models can be deployed using multiple runtimes:
- TensorFlow Lite (`.tflite` export): [This
tutorial](https://www.tensorflow.org/lite/android/quickstart) provides a
guide to deploy the .tflite model in an Android application.
- QNN (`.so` export): This [sample
app](https://docs.qualcomm.com/bundle/publicresource/topics/80-63442-50/sample_app.html)
provides instructions on how to use the `.so` shared library in an Android application.
## View on Qualcomm® AI Hub
Get more details on Swin-Tiny's performance across various devices [here](https://aihub.qualcomm.com/models/swin_tiny).
Explore all available models on [Qualcomm® AI Hub](https://aihub.qualcomm.com/).
## License
* The license for the original implementation of Swin-Tiny can be found
[here](https://github.com/pytorch/vision/blob/main/LICENSE).
* The license for the compiled assets for on-device deployment can be found [here](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/Qualcomm+AI+Hub+Proprietary+License.pdf)
## References
* [Swin Transformer: Hierarchical Vision Transformer using Shifted Windows](https://arxiv.org/abs/2103.14030)
* [Source Model Implementation](https://github.com/pytorch/vision/blob/main/torchvision/models/swin_transformer.py)
## Community
* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
* For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com).
|
amethyst9/514450
|
amethyst9
| 2025-08-29T23:58:57Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-29T23:58:51Z |
[View on Civ Archive](https://civarchive.com/models/539168?modelVersionId=599394)
|
qualcomm/Swin-Small
|
qualcomm
| 2025-08-29T23:58:30Z | 40 | 0 |
pytorch
|
[
"pytorch",
"tflite",
"backbone",
"android",
"image-classification",
"arxiv:2103.14030",
"license:other",
"region:us"
] |
image-classification
| 2024-02-25T23:06:40Z |
---
library_name: pytorch
license: other
tags:
- backbone
- android
pipeline_tag: image-classification
---

# Swin-Small: Optimized for Mobile Deployment
## Imagenet classifier and general purpose backbone
Swin-Small is a machine learning model that can classify images from the Imagenet dataset. It can also be used as a backbone in building more complex models for specific use cases.
This model is an implementation of Swin-Small found [here](https://github.com/pytorch/vision/blob/main/torchvision/models/swin_transformer.py).
This repository provides scripts to run Swin-Small on Qualcomm® devices.
More details on model performance across various devices can be found
[here](https://aihub.qualcomm.com/models/swin_small).
### Model Details
- **Model Type:** Model_use_case.image_classification
- **Model Stats:**
- Model checkpoint: Imagenet
- Input resolution: 224x224
- Number of parameters: 50.4M
- Model size (float): 193 MB
- Model size (w8a16): 52.5 MB
| Model | Precision | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Primary Compute Unit | Target Model
|---|---|---|---|---|---|---|---|---|
| Swin-Small | float | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | TFLITE | 44.225 ms | 0 - 267 MB | NPU | [Swin-Small.tflite](https://huggingface.co/qualcomm/Swin-Small/blob/main/Swin-Small.tflite) |
| Swin-Small | float | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | QNN_DLC | 38.426 ms | 1 - 511 MB | NPU | [Swin-Small.dlc](https://huggingface.co/qualcomm/Swin-Small/blob/main/Swin-Small.dlc) |
| Swin-Small | float | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | TFLITE | 23.323 ms | 0 - 259 MB | NPU | [Swin-Small.tflite](https://huggingface.co/qualcomm/Swin-Small/blob/main/Swin-Small.tflite) |
| Swin-Small | float | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | QNN_DLC | 24.164 ms | 1 - 230 MB | NPU | [Swin-Small.dlc](https://huggingface.co/qualcomm/Swin-Small/blob/main/Swin-Small.dlc) |
| Swin-Small | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | TFLITE | 18.459 ms | 0 - 29 MB | NPU | [Swin-Small.tflite](https://huggingface.co/qualcomm/Swin-Small/blob/main/Swin-Small.tflite) |
| Swin-Small | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | QNN_DLC | 15.703 ms | 0 - 58 MB | NPU | [Swin-Small.dlc](https://huggingface.co/qualcomm/Swin-Small/blob/main/Swin-Small.dlc) |
| Swin-Small | float | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | TFLITE | 20.523 ms | 0 - 268 MB | NPU | [Swin-Small.tflite](https://huggingface.co/qualcomm/Swin-Small/blob/main/Swin-Small.tflite) |
| Swin-Small | float | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | QNN_DLC | 17.785 ms | 1 - 532 MB | NPU | [Swin-Small.dlc](https://huggingface.co/qualcomm/Swin-Small/blob/main/Swin-Small.dlc) |
| Swin-Small | float | SA7255P ADP | Qualcomm® SA7255P | TFLITE | 44.225 ms | 0 - 267 MB | NPU | [Swin-Small.tflite](https://huggingface.co/qualcomm/Swin-Small/blob/main/Swin-Small.tflite) |
| Swin-Small | float | SA7255P ADP | Qualcomm® SA7255P | QNN_DLC | 38.426 ms | 1 - 511 MB | NPU | [Swin-Small.dlc](https://huggingface.co/qualcomm/Swin-Small/blob/main/Swin-Small.dlc) |
| Swin-Small | float | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | TFLITE | 18.539 ms | 0 - 29 MB | NPU | [Swin-Small.tflite](https://huggingface.co/qualcomm/Swin-Small/blob/main/Swin-Small.tflite) |
| Swin-Small | float | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | QNN_DLC | 15.803 ms | 0 - 57 MB | NPU | [Swin-Small.dlc](https://huggingface.co/qualcomm/Swin-Small/blob/main/Swin-Small.dlc) |
| Swin-Small | float | SA8295P ADP | Qualcomm® SA8295P | TFLITE | 26.394 ms | 0 - 259 MB | NPU | [Swin-Small.tflite](https://huggingface.co/qualcomm/Swin-Small/blob/main/Swin-Small.tflite) |
| Swin-Small | float | SA8295P ADP | Qualcomm® SA8295P | QNN_DLC | 23.278 ms | 1 - 510 MB | NPU | [Swin-Small.dlc](https://huggingface.co/qualcomm/Swin-Small/blob/main/Swin-Small.dlc) |
| Swin-Small | float | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | TFLITE | 18.499 ms | 0 - 29 MB | NPU | [Swin-Small.tflite](https://huggingface.co/qualcomm/Swin-Small/blob/main/Swin-Small.tflite) |
| Swin-Small | float | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | QNN_DLC | 15.903 ms | 0 - 59 MB | NPU | [Swin-Small.dlc](https://huggingface.co/qualcomm/Swin-Small/blob/main/Swin-Small.dlc) |
| Swin-Small | float | SA8775P ADP | Qualcomm® SA8775P | TFLITE | 20.523 ms | 0 - 268 MB | NPU | [Swin-Small.tflite](https://huggingface.co/qualcomm/Swin-Small/blob/main/Swin-Small.tflite) |
| Swin-Small | float | SA8775P ADP | Qualcomm® SA8775P | QNN_DLC | 17.785 ms | 1 - 532 MB | NPU | [Swin-Small.dlc](https://huggingface.co/qualcomm/Swin-Small/blob/main/Swin-Small.dlc) |
| Swin-Small | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | TFLITE | 18.581 ms | 0 - 30 MB | NPU | [Swin-Small.tflite](https://huggingface.co/qualcomm/Swin-Small/blob/main/Swin-Small.tflite) |
| Swin-Small | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | QNN_DLC | 15.835 ms | 0 - 61 MB | NPU | [Swin-Small.dlc](https://huggingface.co/qualcomm/Swin-Small/blob/main/Swin-Small.dlc) |
| Swin-Small | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | ONNX | 15.849 ms | 1 - 34 MB | NPU | [Swin-Small.onnx.zip](https://huggingface.co/qualcomm/Swin-Small/blob/main/Swin-Small.onnx.zip) |
| Swin-Small | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | TFLITE | 12.422 ms | 0 - 267 MB | NPU | [Swin-Small.tflite](https://huggingface.co/qualcomm/Swin-Small/blob/main/Swin-Small.tflite) |
| Swin-Small | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | QNN_DLC | 10.394 ms | 1 - 744 MB | NPU | [Swin-Small.dlc](https://huggingface.co/qualcomm/Swin-Small/blob/main/Swin-Small.dlc) |
| Swin-Small | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | ONNX | 10.72 ms | 1 - 251 MB | NPU | [Swin-Small.onnx.zip](https://huggingface.co/qualcomm/Swin-Small/blob/main/Swin-Small.onnx.zip) |
| Swin-Small | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | TFLITE | 12.17 ms | 0 - 259 MB | NPU | [Swin-Small.tflite](https://huggingface.co/qualcomm/Swin-Small/blob/main/Swin-Small.tflite) |
| Swin-Small | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | QNN_DLC | 9.39 ms | 1 - 529 MB | NPU | [Swin-Small.dlc](https://huggingface.co/qualcomm/Swin-Small/blob/main/Swin-Small.dlc) |
| Swin-Small | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | ONNX | 9.674 ms | 1 - 247 MB | NPU | [Swin-Small.onnx.zip](https://huggingface.co/qualcomm/Swin-Small/blob/main/Swin-Small.onnx.zip) |
| Swin-Small | float | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN_DLC | 16.628 ms | 564 - 564 MB | NPU | [Swin-Small.dlc](https://huggingface.co/qualcomm/Swin-Small/blob/main/Swin-Small.dlc) |
| Swin-Small | float | Snapdragon X Elite CRD | Snapdragon® X Elite | ONNX | 18.626 ms | 100 - 100 MB | NPU | [Swin-Small.onnx.zip](https://huggingface.co/qualcomm/Swin-Small/blob/main/Swin-Small.onnx.zip) |
| Swin-Small | w8a16 | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | QNN_DLC | 28.66 ms | 0 - 277 MB | NPU | [Swin-Small.dlc](https://huggingface.co/qualcomm/Swin-Small/blob/main/Swin-Small_w8a16.dlc) |
| Swin-Small | w8a16 | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | QNN_DLC | 19.197 ms | 0 - 284 MB | NPU | [Swin-Small.dlc](https://huggingface.co/qualcomm/Swin-Small/blob/main/Swin-Small_w8a16.dlc) |
| Swin-Small | w8a16 | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | QNN_DLC | 15.608 ms | 0 - 74 MB | NPU | [Swin-Small.dlc](https://huggingface.co/qualcomm/Swin-Small/blob/main/Swin-Small_w8a16.dlc) |
| Swin-Small | w8a16 | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | QNN_DLC | 16.059 ms | 0 - 273 MB | NPU | [Swin-Small.dlc](https://huggingface.co/qualcomm/Swin-Small/blob/main/Swin-Small_w8a16.dlc) |
| Swin-Small | w8a16 | RB3 Gen 2 (Proxy) | Qualcomm® QCS6490 (Proxy) | QNN_DLC | 46.578 ms | 0 - 710 MB | NPU | [Swin-Small.dlc](https://huggingface.co/qualcomm/Swin-Small/blob/main/Swin-Small_w8a16.dlc) |
| Swin-Small | w8a16 | SA7255P ADP | Qualcomm® SA7255P | QNN_DLC | 28.66 ms | 0 - 277 MB | NPU | [Swin-Small.dlc](https://huggingface.co/qualcomm/Swin-Small/blob/main/Swin-Small_w8a16.dlc) |
| Swin-Small | w8a16 | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | QNN_DLC | 15.57 ms | 0 - 74 MB | NPU | [Swin-Small.dlc](https://huggingface.co/qualcomm/Swin-Small/blob/main/Swin-Small_w8a16.dlc) |
| Swin-Small | w8a16 | SA8295P ADP | Qualcomm® SA8295P | QNN_DLC | 18.497 ms | 0 - 191 MB | NPU | [Swin-Small.dlc](https://huggingface.co/qualcomm/Swin-Small/blob/main/Swin-Small_w8a16.dlc) |
| Swin-Small | w8a16 | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | QNN_DLC | 15.654 ms | 0 - 62 MB | NPU | [Swin-Small.dlc](https://huggingface.co/qualcomm/Swin-Small/blob/main/Swin-Small_w8a16.dlc) |
| Swin-Small | w8a16 | SA8775P ADP | Qualcomm® SA8775P | QNN_DLC | 16.059 ms | 0 - 273 MB | NPU | [Swin-Small.dlc](https://huggingface.co/qualcomm/Swin-Small/blob/main/Swin-Small_w8a16.dlc) |
| Swin-Small | w8a16 | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | QNN_DLC | 15.683 ms | 0 - 68 MB | NPU | [Swin-Small.dlc](https://huggingface.co/qualcomm/Swin-Small/blob/main/Swin-Small_w8a16.dlc) |
| Swin-Small | w8a16 | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | ONNX | 115.169 ms | 274 - 438 MB | NPU | [Swin-Small.onnx.zip](https://huggingface.co/qualcomm/Swin-Small/blob/main/Swin-Small_w8a16.onnx.zip) |
| Swin-Small | w8a16 | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | QNN_DLC | 10.622 ms | 0 - 293 MB | NPU | [Swin-Small.dlc](https://huggingface.co/qualcomm/Swin-Small/blob/main/Swin-Small_w8a16.dlc) |
| Swin-Small | w8a16 | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | ONNX | 81.676 ms | 266 - 512 MB | NPU | [Swin-Small.onnx.zip](https://huggingface.co/qualcomm/Swin-Small/blob/main/Swin-Small_w8a16.onnx.zip) |
| Swin-Small | w8a16 | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | QNN_DLC | 9.713 ms | 0 - 274 MB | NPU | [Swin-Small.dlc](https://huggingface.co/qualcomm/Swin-Small/blob/main/Swin-Small_w8a16.dlc) |
| Swin-Small | w8a16 | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | ONNX | 66.264 ms | 282 - 489 MB | NPU | [Swin-Small.onnx.zip](https://huggingface.co/qualcomm/Swin-Small/blob/main/Swin-Small_w8a16.onnx.zip) |
| Swin-Small | w8a16 | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN_DLC | 16.495 ms | 184 - 184 MB | NPU | [Swin-Small.dlc](https://huggingface.co/qualcomm/Swin-Small/blob/main/Swin-Small_w8a16.dlc) |
| Swin-Small | w8a16 | Snapdragon X Elite CRD | Snapdragon® X Elite | ONNX | 98.354 ms | 461 - 461 MB | NPU | [Swin-Small.onnx.zip](https://huggingface.co/qualcomm/Swin-Small/blob/main/Swin-Small_w8a16.onnx.zip) |
## Installation
Install the package via pip:
```bash
pip install qai-hub-models
```
## Configure Qualcomm® AI Hub to run this model on a cloud-hosted device
Sign-in to [Qualcomm® AI Hub](https://app.aihub.qualcomm.com/) with your
Qualcomm® ID. Once signed in, navigate to `Account -> Settings -> API Token`.
With this API token, you can configure your client to run models on the cloud
hosted devices.
```bash
qai-hub configure --api_token API_TOKEN
```
Navigate to [docs](https://app.aihub.qualcomm.com/docs/) for more information.
## Demo off target
The package contains a simple end-to-end demo that downloads pre-trained
weights and runs this model on a sample input.
```bash
python -m qai_hub_models.models.swin_small.demo
```
The above demo runs a reference implementation of pre-processing, model
inference, and post-processing.
**NOTE**: If you want to run this in a Jupyter Notebook or a Google Colab-like
environment, please add the following to your cell (instead of the above).
```
%run -m qai_hub_models.models.swin_small.demo
```
### Run model on a cloud-hosted device
In addition to the demo, you can also run the model on a cloud-hosted Qualcomm®
device. This script does the following:
* Performance check on-device on a cloud-hosted device
* Downloads compiled assets that can be deployed on-device for Android.
* Accuracy check between PyTorch and on-device outputs.
```bash
python -m qai_hub_models.models.swin_small.export
```
## How does this work?
This [export script](https://aihub.qualcomm.com/models/swin_small/qai_hub_models/models/Swin-Small/export.py)
leverages [Qualcomm® AI Hub](https://aihub.qualcomm.com/) to optimize, validate, and deploy this model
on-device. Let's go through each step below in detail:
Step 1: **Compile model for on-device deployment**
To compile a PyTorch model for on-device deployment, we first trace the model
in memory using `torch.jit.trace` and then call the `submit_compile_job` API.
```python
import torch
import qai_hub as hub
from qai_hub_models.models.swin_small import Model
# Load the model
torch_model = Model.from_pretrained()
# Device
device = hub.Device("Samsung Galaxy S24")
# Trace model
input_shape = torch_model.get_input_spec()
sample_inputs = torch_model.sample_inputs()
pt_model = torch.jit.trace(torch_model, [torch.tensor(data[0]) for _, data in sample_inputs.items()])
# Compile model on a specific device
compile_job = hub.submit_compile_job(
model=pt_model,
device=device,
input_specs=torch_model.get_input_spec(),
)
# Get target model to run on-device
target_model = compile_job.get_target_model()
```
Step 2: **Performance profiling on cloud-hosted device**
After compiling the model in step 1, it can be profiled on-device using the
`target_model`. Note that this script runs the model on a device automatically
provisioned in the cloud. Once the job is submitted, you can navigate to the
provided job URL to view a variety of on-device performance metrics.
```python
profile_job = hub.submit_profile_job(
model=target_model,
device=device,
)
```
Step 3: **Verify on-device accuracy**
To verify the accuracy of the model on-device, you can run on-device inference
on sample input data on the same cloud-hosted device.
```python
input_data = torch_model.sample_inputs()
inference_job = hub.submit_inference_job(
model=target_model,
device=device,
inputs=input_data,
)
on_device_output = inference_job.download_output_data()
```
With the output of the model, you can compute metrics such as PSNR or relative
error, or spot-check the output against the expected output.
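As one way to spot-check a classifier like this, you can compare the top-5 class indices from the on-device logits against the PyTorch reference; a standard-library sketch:

```python
import math

def softmax(logits):
    """Numerically stable softmax over a flat list of logits."""
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    total = sum(exps)
    return [e / total for e in exps]

def top_k(logits, k=5):
    """Indices of the k largest logits, highest first."""
    return sorted(range(len(logits)), key=lambda i: logits[i], reverse=True)[:k]
```

If the top-5 indices (and roughly the softmax probabilities) agree between backends, the compiled model is behaving as expected.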
**Note**: This on-device profiling and inference requires access to Qualcomm®
AI Hub. [Sign up for access](https://myaccount.qualcomm.com/signup).
## Run demo on a cloud-hosted device
You can also run the demo on-device.
```bash
python -m qai_hub_models.models.swin_small.demo --eval-mode on-device
```
**NOTE**: If you want to run this in a Jupyter Notebook or a Google Colab-like
environment, please add the following to your cell (instead of the above).
```
%run -m qai_hub_models.models.swin_small.demo -- --eval-mode on-device
```
## Deploying compiled model to Android
The models can be deployed using multiple runtimes:
- TensorFlow Lite (`.tflite` export): [This
tutorial](https://www.tensorflow.org/lite/android/quickstart) provides a
guide to deploy the .tflite model in an Android application.
- QNN (`.so` export): This [sample
app](https://docs.qualcomm.com/bundle/publicresource/topics/80-63442-50/sample_app.html)
provides instructions on how to use the `.so` shared library in an Android application.
## View on Qualcomm® AI Hub
Get more details on Swin-Small's performance across various devices [here](https://aihub.qualcomm.com/models/swin_small).
Explore all available models on [Qualcomm® AI Hub](https://aihub.qualcomm.com/).
## License
* The license for the original implementation of Swin-Small can be found
[here](https://github.com/pytorch/vision/blob/main/LICENSE).
* The license for the compiled assets for on-device deployment can be found [here](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/Qualcomm+AI+Hub+Proprietary+License.pdf)
## References
* [Swin Transformer: Hierarchical Vision Transformer using Shifted Windows](https://arxiv.org/abs/2103.14030)
* [Source Model Implementation](https://github.com/pytorch/vision/blob/main/torchvision/models/swin_transformer.py)
## Community
* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
* For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com).
|
mradermacher/0827-Qwen2.5-32B-16bit-1E-GGUF
|
mradermacher
| 2025-08-29T23:58:18Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"qwen2",
"en",
"base_model:elliefeng25/0827-Qwen2.5-32B-16bit-1E",
"base_model:quantized:elliefeng25/0827-Qwen2.5-32B-16bit-1E",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-29T21:59:48Z |
---
base_model: elliefeng25/0827-Qwen2.5-32B-16bit-1E
language:
- en
library_name: transformers
license: apache-2.0
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/elliefeng25/0827-Qwen2.5-32B-16bit-1E
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#0827-Qwen2.5-32B-16bit-1E-GGUF).***
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
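Concatenating multi-part files amounts to joining the parts in order; for example (hypothetical filenames — substitute the actual `.partXofY` files you downloaded):

```shell
# Join split GGUF parts, in order, into a single usable file.
cat model.gguf.part1of2 model.gguf.part2of2 > model.gguf
```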
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/0827-Qwen2.5-32B-16bit-1E-GGUF/resolve/main/0827-Qwen2.5-32B-16bit-1E.Q2_K.gguf) | Q2_K | 12.4 | |
| [GGUF](https://huggingface.co/mradermacher/0827-Qwen2.5-32B-16bit-1E-GGUF/resolve/main/0827-Qwen2.5-32B-16bit-1E.Q3_K_S.gguf) | Q3_K_S | 14.5 | |
| [GGUF](https://huggingface.co/mradermacher/0827-Qwen2.5-32B-16bit-1E-GGUF/resolve/main/0827-Qwen2.5-32B-16bit-1E.Q3_K_M.gguf) | Q3_K_M | 16.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/0827-Qwen2.5-32B-16bit-1E-GGUF/resolve/main/0827-Qwen2.5-32B-16bit-1E.Q3_K_L.gguf) | Q3_K_L | 17.3 | |
| [GGUF](https://huggingface.co/mradermacher/0827-Qwen2.5-32B-16bit-1E-GGUF/resolve/main/0827-Qwen2.5-32B-16bit-1E.IQ4_XS.gguf) | IQ4_XS | 18.0 | |
| [GGUF](https://huggingface.co/mradermacher/0827-Qwen2.5-32B-16bit-1E-GGUF/resolve/main/0827-Qwen2.5-32B-16bit-1E.Q4_K_S.gguf) | Q4_K_S | 18.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/0827-Qwen2.5-32B-16bit-1E-GGUF/resolve/main/0827-Qwen2.5-32B-16bit-1E.Q4_K_M.gguf) | Q4_K_M | 20.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/0827-Qwen2.5-32B-16bit-1E-GGUF/resolve/main/0827-Qwen2.5-32B-16bit-1E.Q5_K_S.gguf) | Q5_K_S | 22.7 | |
| [GGUF](https://huggingface.co/mradermacher/0827-Qwen2.5-32B-16bit-1E-GGUF/resolve/main/0827-Qwen2.5-32B-16bit-1E.Q5_K_M.gguf) | Q5_K_M | 23.4 | |
| [GGUF](https://huggingface.co/mradermacher/0827-Qwen2.5-32B-16bit-1E-GGUF/resolve/main/0827-Qwen2.5-32B-16bit-1E.Q6_K.gguf) | Q6_K | 27.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/0827-Qwen2.5-32B-16bit-1E-GGUF/resolve/main/0827-Qwen2.5-32B-16bit-1E.Q8_0.gguf) | Q8_0 | 34.9 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
bah63843/blockassist-bc-plump_fast_antelope_1756511831 | bah63843 | 2025-08-29T23:58:04Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "plump fast antelope", "arxiv:2504.07091", "region:us"] | null | 2025-08-29T23:57:56Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- plump fast antelope
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|