| Column | Type | Range / Stats |
|---|---|---|
| modelId | string | length 5 – 139 |
| author | string | length 2 – 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 – 2025-08-30 06:27:36 |
| downloads | int64 | 0 – 223M |
| likes | int64 | 0 – 11.7k |
| library_name | string | 527 classes |
| tags | list | length 1 – 4.05k |
| pipeline_tag | string | 55 classes |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 – 2025-08-30 06:27:12 |
| card | string | length 11 – 1.01M |
crystalline7/698504
|
crystalline7
| 2025-08-29T23:57:51Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-29T23:57:45Z |
[View on Civ Archive](https://civarchive.com/models/680268?modelVersionId=785062)
|
amethyst9/537038
|
amethyst9
| 2025-08-29T23:57:37Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-29T23:57:32Z |
[View on Civ Archive](https://civarchive.com/models/530102?modelVersionId=622054)
|
seraphimzzzz/530692
|
seraphimzzzz
| 2025-08-29T23:57:25Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-29T23:57:19Z |
[View on Civ Archive](https://civarchive.com/models/553093?modelVersionId=615506)
|
liukevin666/blockassist-bc-yawning_striped_cassowary_1756511760
|
liukevin666
| 2025-08-29T23:57:18Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"yawning striped cassowary",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-29T23:56:57Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- yawning striped cassowary
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
seraphimzzzz/543996
|
seraphimzzzz
| 2025-08-29T23:56:59Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-29T23:56:53Z |
[View on Civ Archive](https://civarchive.com/models/564261?modelVersionId=629295)
|
Wejh/Affine-5DhQ91jnXRrCvzTmjZ2URiv2fdi677d1Dn26ZD1UezhbC4e9
|
Wejh
| 2025-08-29T23:56:55Z | 0 | 0 | null |
[
"pytorch",
"safetensors",
"llama",
"facebook",
"meta",
"llama-2",
"text-generation",
"en",
"region:us"
] |
text-generation
| 2025-08-29T23:56:54Z |
---
extra_gated_heading: Access Llama 2 on Hugging Face
extra_gated_description: >-
This is a form to enable access to Llama 2 on Hugging Face after you have been
granted access from Meta. Please visit the [Meta website](https://ai.meta.com/resources/models-and-libraries/llama-downloads) and accept our
license terms and acceptable use policy before submitting this form. Requests
will be processed in 1-2 days.
extra_gated_button_content: Submit
extra_gated_fields:
I agree to share my name, email address and username with Meta and confirm that I have already been granted download access on the Meta website: checkbox
language:
- en
pipeline_tag: text-generation
inference: false
tags:
- facebook
- meta
- pytorch
- llama
- llama-2
---
# **Llama 2**
Llama 2 is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. This is the repository for the 7B fine-tuned model, optimized for dialogue use cases and converted for the Hugging Face Transformers format. Links to other models can be found in the index at the bottom.
## Model Details
*Note: Use of this model is governed by the Meta license. In order to download the model weights and tokenizer, please visit the [website](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) and accept our License before requesting access here.*
Meta developed and publicly released the Llama 2 family of large language models (LLMs), a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. Our fine-tuned LLMs, called Llama-2-Chat, are optimized for dialogue use cases. Llama-2-Chat models outperform open-source chat models on most benchmarks we tested, and in our human evaluations for helpfulness and safety, are on par with some popular closed-source models like ChatGPT and PaLM.
**Model Developers** Meta
**Variations** Llama 2 comes in a range of parameter sizes — 7B, 13B, and 70B — as well as pretrained and fine-tuned variations.
**Input** Models input text only.
**Output** Models generate text only.
**Model Architecture** Llama 2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align to human preferences for helpfulness and safety.
||Training Data|Params|Content Length|GQA|Tokens|LR|
|---|---|---|---|---|---|---|
|Llama 2|*A new mix of publicly available online data*|7B|4k|✗|2.0T|3.0 x 10<sup>-4</sup>|
|Llama 2|*A new mix of publicly available online data*|13B|4k|✗|2.0T|3.0 x 10<sup>-4</sup>|
|Llama 2|*A new mix of publicly available online data*|70B|4k|✔|2.0T|1.5 x 10<sup>-4</sup>|
*Llama 2 family of models.* Token counts refer to pretraining data only. All models are trained with a global batch size of 4M tokens. The larger 70B model uses Grouped-Query Attention (GQA) for improved inference scalability.
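To illustrate what the GQA column means, here is a minimal NumPy sketch of the KV-head sharing that grouped-query attention performs. The head counts (64 query heads, 8 KV heads, head dim 128) are assumptions based on the 70B configuration, not stated in this card:

```python
import numpy as np

# Assumed 70B-style shapes: 64 query heads share 8 key/value heads (GQA).
batch, seq, n_q_heads, n_kv_heads, head_dim = 1, 16, 64, 8, 128
group = n_q_heads // n_kv_heads  # 8 query heads per KV head

k = np.random.randn(batch, n_kv_heads, seq, head_dim)

# Broadcast each KV head so every query head in its group attends to it.
k_expanded = np.repeat(k, group, axis=1)

assert k_expanded.shape == (batch, n_q_heads, seq, head_dim)
```

Because the 8 KV heads are simply repeated, the KV cache is 8x smaller than with full multi-head attention, which is the inference-scalability benefit the table refers to.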
**Model Dates** Llama 2 was trained between January 2023 and July 2023.
**Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback.
**License** A custom commercial license is available at: [https://ai.meta.com/resources/models-and-libraries/llama-downloads/](https://ai.meta.com/resources/models-and-libraries/llama-downloads/)
## Intended Use
**Intended Use Cases** Llama 2 is intended for commercial and research use in English. Tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks.
To get the expected features and performance for the chat versions, a specific format must be followed, including the `INST` and `<<SYS>>` tags, the `BOS` and `EOS` tokens, and the whitespace and line breaks in between (we recommend calling `strip()` on inputs to avoid double spaces). See our reference code on GitHub for details: [`chat_completion`](https://github.com/facebookresearch/llama/blob/main/llama/generation.py#L212).
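As a sketch of that format (assembled from the reference `chat_completion` code; the helper name and exact spacing here are illustrative assumptions, so prefer the linked reference for production use), a single-turn prompt with a system message looks like this:

```python
B_INST, E_INST = "[INST]", "[/INST]"
B_SYS, E_SYS = "<<SYS>>\n", "\n<</SYS>>\n\n"

def build_prompt(system: str, user: str) -> str:
    # BOS/EOS tokens are added by the tokenizer, not here;
    # strip() on the user turn avoids double spaces around the tags.
    return f"{B_INST} {B_SYS}{system}{E_SYS}{user.strip()} {E_INST}"

prompt = build_prompt("You are a helpful assistant.", "What is GQA?")
```

The model then generates the assistant turn after the closing `[/INST]` tag.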
**Out-of-scope Uses** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English. Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Llama 2.
## Hardware and Software
**Training Factors** We used custom training libraries, Meta's Research Super Cluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute.
**Carbon Footprint** Pretraining utilized a cumulative 3.3M GPU hours of computation on hardware of type A100-80GB (TDP of 350-400W). Estimated total emissions were 539 tCO2eq, 100% of which were offset by Meta’s sustainability program.
||Time (GPU hours)|Power Consumption (W)|Carbon Emitted(tCO<sub>2</sub>eq)|
|---|---|---|---|
|Llama 2 7B|184320|400|31.22|
|Llama 2 13B|368640|400|62.44|
|Llama 2 70B|1720320|400|291.42|
|Total|3311616||539.00|
**CO<sub>2</sub> emissions during pretraining.** Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used, adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others. Note that the total exceeds the sum of the rows shown because it also covers the unreleased 34B model.
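As a rough consistency check on the table above (the per-MWh emissions factor below is derived arithmetic, not a figure stated in the card), the three rows imply a uniform carbon intensity of about 0.42 tCO2eq/MWh:

```python
rows = {  # model: (GPU hours, peak power in W, tCO2eq)
    "7B": (184320, 400, 31.22),
    "13B": (368640, 400, 62.44),
    "70B": (1720320, 400, 291.42),
}

for name, (hours, watts, tco2) in rows.items():
    mwh = hours * watts / 1e6   # energy in MWh
    intensity = tco2 / mwh      # implied tCO2eq per MWh
    print(f"{name}: ~{intensity:.2f} tCO2eq/MWh")
```

The fact that all three rows yield the same factor suggests the emissions figures were computed as energy times a single grid intensity.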
## Training Data
**Overview** Llama 2 was pretrained on 2 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over one million new human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.
**Data Freshness** The pretraining data has a cutoff of September 2022, but some tuning data is more recent, up to July 2023.
## Evaluation Results
In this section, we report the results for the Llama 1 and Llama 2 models on standard academic benchmarks. For all the evaluations, we use our internal evaluations library.
|Model|Size|Code|Commonsense Reasoning|World Knowledge|Reading Comprehension|Math|MMLU|BBH|AGI Eval|
|---|---|---|---|---|---|---|---|---|---|
|Llama 1|7B|14.1|60.8|46.2|58.5|6.95|35.1|30.3|23.9|
|Llama 1|13B|18.9|66.1|52.6|62.3|10.9|46.9|37.0|33.9|
|Llama 1|33B|26.0|70.0|58.4|67.6|21.4|57.8|39.8|41.7|
|Llama 1|65B|30.7|70.7|60.5|68.6|30.8|63.4|43.5|47.6|
|Llama 2|7B|16.8|63.9|48.9|61.3|14.6|45.3|32.6|29.3|
|Llama 2|13B|24.5|66.9|55.4|65.8|28.7|54.8|39.4|39.1|
|Llama 2|70B|**37.5**|**71.9**|**63.6**|**69.4**|**35.2**|**68.9**|**51.2**|**54.2**|
**Overall performance on grouped academic benchmarks.** *Code:* We report the average pass@1 scores of our models on HumanEval and MBPP. *Commonsense Reasoning:* We report the average of PIQA, SIQA, HellaSwag, WinoGrande, ARC easy and challenge, OpenBookQA, and CommonsenseQA. We report 7-shot results for CommonSenseQA and 0-shot results for all other benchmarks. *World Knowledge:* We evaluate the 5-shot performance on NaturalQuestions and TriviaQA and report the average. *Reading Comprehension:* For reading comprehension, we report the 0-shot average on SQuAD, QuAC, and BoolQ. *MATH:* We report the average of the GSM8K (8 shot) and MATH (4 shot) benchmarks at top 1.
|||TruthfulQA|Toxigen|
|---|---|---|---|
|Llama 1|7B|27.42|23.00|
|Llama 1|13B|41.74|23.08|
|Llama 1|33B|44.19|22.57|
|Llama 1|65B|48.71|21.77|
|Llama 2|7B|33.29|**21.25**|
|Llama 2|13B|41.86|26.10|
|Llama 2|70B|**50.18**|24.60|
**Evaluation of pretrained LLMs on automatic safety benchmarks.** For TruthfulQA, we present the percentage of generations that are both truthful and informative (the higher the better). For ToxiGen, we present the percentage of toxic generations (the smaller the better).
|||TruthfulQA|Toxigen|
|---|---|---|---|
|Llama-2-Chat|7B|57.04|**0.00**|
|Llama-2-Chat|13B|62.18|**0.00**|
|Llama-2-Chat|70B|**64.14**|0.01|
**Evaluation of fine-tuned LLMs on different safety datasets.** Same metric definitions as above.
## Ethical Considerations and Limitations
Llama 2 is a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, Llama 2’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2, developers should perform safety testing and tuning tailored to their specific applications of the model.
Please see the Responsible Use Guide available at [https://ai.meta.com/llama/responsible-use-guide/](https://ai.meta.com/llama/responsible-use-guide/).
## Reporting Issues
Please report any software “bug,” or other problems with the models through one of the following means:
- Reporting issues with the model: [github.com/facebookresearch/llama](http://github.com/facebookresearch/llama)
- Reporting problematic content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback)
- Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info)
## Llama Model Index
|Model|Llama2|Llama2-hf|Llama2-chat|Llama2-chat-hf|
|---|---|---|---|---|
|7B| [Link](https://huggingface.co/meta-llama/Llama-2-7b) | [Link](https://huggingface.co/meta-llama/Llama-2-7b-hf) | [Link](https://huggingface.co/meta-llama/Llama-2-7b-chat) | [Link](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf)|
|13B| [Link](https://huggingface.co/meta-llama/Llama-2-13b) | [Link](https://huggingface.co/meta-llama/Llama-2-13b-hf) | [Link](https://huggingface.co/meta-llama/Llama-2-13b-chat) | [Link](https://huggingface.co/meta-llama/Llama-2-13b-chat-hf)|
|70B| [Link](https://huggingface.co/meta-llama/Llama-2-70b) | [Link](https://huggingface.co/meta-llama/Llama-2-70b-hf) | [Link](https://huggingface.co/meta-llama/Llama-2-70b-chat) | [Link](https://huggingface.co/meta-llama/Llama-2-70b-chat-hf)|
|
ultratopaz/652549
|
ultratopaz
| 2025-08-29T23:56:46Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-29T23:56:40Z |
[View on Civ Archive](https://civarchive.com/models/517929?modelVersionId=738620)
|
xibitthenoob/Qwen-3-32B-Medical-Reasoning
|
xibitthenoob
| 2025-08-29T23:56:20Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-17T19:54:27Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
amethyst9/573062
|
amethyst9
| 2025-08-29T23:56:19Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-29T23:56:13Z |
[View on Civ Archive](https://civarchive.com/models/589061?modelVersionId=657683)
|
hakimjustbao/blockassist-bc-raging_subtle_wasp_1756510253
|
hakimjustbao
| 2025-08-29T23:56:14Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"raging subtle wasp",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-29T23:56:10Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- raging subtle wasp
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ultratopaz/518800
|
ultratopaz
| 2025-08-29T23:56:06Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-29T23:56:00Z |
[View on Civ Archive](https://civarchive.com/models/542594?modelVersionId=603296)
|
pempekmangedd/blockassist-bc-patterned_sturdy_dolphin_1756510275
|
pempekmangedd
| 2025-08-29T23:56:00Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"patterned sturdy dolphin",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-29T23:55:56Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- patterned sturdy dolphin
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ultratopaz/810462
|
ultratopaz
| 2025-08-29T23:53:56Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-29T23:53:47Z |
[View on Civ Archive](https://civarchive.com/models/774205?modelVersionId=902028)
|
bah63843/blockassist-bc-plump_fast_antelope_1756511581
|
bah63843
| 2025-08-29T23:53:48Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"plump fast antelope",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-29T23:53:39Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- plump fast antelope
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
seraphimzzzz/385884
|
seraphimzzzz
| 2025-08-29T23:53:38Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-29T23:53:38Z |
[View on Civ Archive](https://civarchive.com/models/419169?modelVersionId=466982)
|
Loder-S/blockassist-bc-sprightly_knobby_tiger_1756510052
|
Loder-S
| 2025-08-29T23:52:45Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"sprightly knobby tiger",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-29T23:52:42Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- sprightly knobby tiger
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
seraphimzzzz/496178
|
seraphimzzzz
| 2025-08-29T23:52:38Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-29T23:52:35Z |
[View on Civ Archive](https://civarchive.com/models/522696?modelVersionId=580731)
|
seraphimzzzz/635223
|
seraphimzzzz
| 2025-08-29T23:52:15Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-29T23:52:09Z |
[View on Civ Archive](https://civarchive.com/models/644196?modelVersionId=720610)
|
ultratopaz/461961
|
ultratopaz
| 2025-08-29T23:51:48Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-29T23:51:42Z |
[View on Civ Archive](https://civarchive.com/models/490763?modelVersionId=545714)
|
amethyst9/1848824
|
amethyst9
| 2025-08-29T23:50:53Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-29T23:50:44Z |
[View on Civ Archive](https://civarchive.com/models/1714215?modelVersionId=1951266)
|
amethyst9/620074
|
amethyst9
| 2025-08-29T23:50:22Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-29T23:50:16Z |
[View on Civ Archive](https://civarchive.com/models/567581?modelVersionId=705452)
|
yanghuattt/ckpts
|
yanghuattt
| 2025-08-29T23:49:46Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"base_model:adapter:stabilityai/stable-code-3b",
"lora",
"transformers",
"text-generation",
"arxiv:1910.09700",
"base_model:stabilityai/stable-code-3b",
"region:us"
] |
text-generation
| 2025-08-29T16:06:00Z |
---
base_model: stabilityai/stable-code-3b
library_name: peft
pipeline_tag: text-generation
tags:
- base_model:adapter:stabilityai/stable-code-3b
- lora
- transformers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.17.2.dev0
|
aoskes111/blockassist-bc-stinging_scruffy_bobcat_1756511312
|
aoskes111
| 2025-08-29T23:49:17Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stinging scruffy bobcat",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-29T23:48:59Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stinging scruffy bobcat
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ultratopaz/625721
|
ultratopaz
| 2025-08-29T23:49:09Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-29T23:49:03Z |
[View on Civ Archive](https://civarchive.com/models/635945?modelVersionId=711036)
|
amethyst9/568985
|
amethyst9
| 2025-08-29T23:48:55Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-29T23:48:49Z |
[View on Civ Archive](https://civarchive.com/models/585614?modelVersionId=653500)
|
crystalline7/669537
|
crystalline7
| 2025-08-29T23:48:45Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-29T23:48:42Z |
[View on Civ Archive](https://civarchive.com/models/675147?modelVersionId=755748)
|
amethyst9/608675
|
amethyst9
| 2025-08-29T23:48:17Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-29T23:48:11Z |
[View on Civ Archive](https://civarchive.com/models/610760?modelVersionId=694017)
|
crystalline7/903115
|
crystalline7
| 2025-08-29T23:47:40Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-29T23:47:30Z |
[View on Civ Archive](https://civarchive.com/models/567581?modelVersionId=997525)
|
crystalline7/897338
|
crystalline7
| 2025-08-29T23:47:10Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-29T23:47:10Z |
[View on Civ Archive](https://civarchive.com/models/567581?modelVersionId=991368)
|
seraphimzzzz/1164218
|
seraphimzzzz
| 2025-08-29T23:46:41Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-29T23:46:41Z |
[View on Civ Archive](https://civarchive.com/models/517929?modelVersionId=1259027)
|
amethyst9/1816274
|
amethyst9
| 2025-08-29T23:46:16Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-29T23:46:11Z |
[View on Civ Archive](https://civarchive.com/models/1694695?modelVersionId=1917948)
|
crystalline7/569015
|
crystalline7
| 2025-08-29T23:45:56Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-29T23:45:50Z |
[View on Civ Archive](https://civarchive.com/models/585614?modelVersionId=654001)
|
keras/qwen3_4b_en
|
keras
| 2025-08-29T23:45:48Z | 0 | 0 |
keras-hub
|
[
"keras-hub",
"text-generation",
"region:us"
] |
text-generation
| 2025-08-29T23:41:54Z |
---
library_name: keras-hub
pipeline_tag: text-generation
---
This is a [`Qwen3` model](https://keras.io/api/keras_hub/models/qwen3) uploaded using the KerasHub library; it can be used with the JAX, TensorFlow, and PyTorch backends.
This model targets the `CausalLM` task.
Model config:
* **name:** qwen3_backbone
* **trainable:** True
* **vocabulary_size:** 151936
* **num_layers:** 36
* **num_query_heads:** 32
* **hidden_dim:** 2560
* **head_dim:** 128
* **intermediate_dim:** 9728
* **rope_max_wavelength:** 1000000
* **rope_scaling_factor:** 1.0
* **num_key_value_heads:** 8
* **layer_norm_epsilon:** 1e-06
* **dropout:** 0.0
* **tie_word_embeddings:** True
* **sliding_window_size:** None
This model card has been generated automatically and should be completed by the model author. See [Model Cards documentation](https://huggingface.co/docs/hub/model-cards) for more information.
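One detail worth noting in the config above: with 32 query heads at `head_dim` 128, the attention inner dimension (4096) is deliberately decoupled from `hidden_dim` (2560), so the output projection maps back down to the hidden size. A quick sketch of the projection widths implied by the config (derived arithmetic, not figures from the card):

```python
cfg = {"hidden_dim": 2560, "num_query_heads": 32,
       "num_key_value_heads": 8, "head_dim": 128}

q_out = cfg["num_query_heads"] * cfg["head_dim"]        # query projection width
kv_out = cfg["num_key_value_heads"] * cfg["head_dim"]   # key/value projection width

# q_out (4096) != hidden_dim (2560): Qwen3 decouples head_dim from hidden_dim,
# and the 8 KV heads give a 4:1 grouped-query attention ratio.
```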
|
seraphimzzzz/798003
|
seraphimzzzz
| 2025-08-29T23:45:40Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-29T23:45:32Z |
[View on Civ Archive](https://civarchive.com/models/795340?modelVersionId=889343)
|
crystalline7/545009
|
crystalline7
| 2025-08-29T23:45:22Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-29T23:45:17Z |
[View on Civ Archive](https://civarchive.com/models/564261?modelVersionId=630310)
|
amethyst9/575038
|
amethyst9
| 2025-08-29T23:44:51Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-29T23:44:51Z |
[View on Civ Archive](https://civarchive.com/models/584311?modelVersionId=660032)
|
seraphimzzzz/629805
|
seraphimzzzz
| 2025-08-29T23:44:31Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-29T23:44:25Z |
[View on Civ Archive](https://civarchive.com/models/512252?modelVersionId=715117)
|
seraphimzzzz/823757
|
seraphimzzzz
| 2025-08-29T23:44:17Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-29T23:44:12Z |
[View on Civ Archive](https://civarchive.com/models/816163?modelVersionId=916207)
|
seraphimzzzz/575008
|
seraphimzzzz
| 2025-08-29T23:43:51Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-29T23:43:45Z |
[View on Civ Archive](https://civarchive.com/models/584311?modelVersionId=659857)
|
bah63843/blockassist-bc-plump_fast_antelope_1756510974
|
bah63843
| 2025-08-29T23:43:44Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"plump fast antelope",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-29T23:43:35Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- plump fast antelope
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
seraphimzzzz/724311
|
seraphimzzzz
| 2025-08-29T23:43:38Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-29T23:43:33Z |
[View on Civ Archive](https://civarchive.com/models/719079?modelVersionId=804126)
|
crystalline7/856951
|
crystalline7
| 2025-08-29T23:43:23Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-29T23:43:15Z |
[View on Civ Archive](https://civarchive.com/models/848936?modelVersionId=949798)
|
ultratopaz/1243851
|
ultratopaz
| 2025-08-29T23:43:04Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-29T23:42:55Z |
[View on Civ Archive](https://civarchive.com/models/1189856?modelVersionId=1339576)
|
qualcomm/Shufflenet-v2
|
qualcomm
| 2025-08-29T23:42:48Z | 53 | 1 |
pytorch
|
[
"pytorch",
"tflite",
"android",
"image-classification",
"arxiv:1807.11164",
"license:other",
"region:us"
] |
image-classification
| 2024-02-25T23:05:43Z |
---
library_name: pytorch
license: other
tags:
- android
pipeline_tag: image-classification
---

# Shufflenet-v2: Optimized for Mobile Deployment
## Imagenet classifier and general purpose backbone
ShufflenetV2 is a machine learning model that can classify images from the Imagenet dataset. It can also be used as a backbone in building more complex models for specific use cases.
This model is an implementation of Shufflenet-v2 found [here](https://github.com/pytorch/vision/blob/main/torchvision/models/shufflenetv2.py).
This repository provides scripts to run Shufflenet-v2 on Qualcomm® devices.
More details on model performance across various devices, can be found
[here](https://aihub.qualcomm.com/models/shufflenet_v2).
### Model Details
- **Model Type:** Model_use_case.image_classification
- **Model Stats:**
- Model checkpoint: Imagenet
- Input resolution: 224x224
- Number of parameters: 1.37M
- Model size (float): 5.24 MB
- Model size (w8a8): 1.47 MB
| Model | Precision | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Primary Compute Unit | Target Model
|---|---|---|---|---|---|---|---|---|
| Shufflenet-v2 | float | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | TFLITE | 1.592 ms | 0 - 19 MB | NPU | [Shufflenet-v2.tflite](https://huggingface.co/qualcomm/Shufflenet-v2/blob/main/Shufflenet-v2.tflite) |
| Shufflenet-v2 | float | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | QNN_DLC | 1.584 ms | 0 - 21 MB | NPU | [Shufflenet-v2.dlc](https://huggingface.co/qualcomm/Shufflenet-v2/blob/main/Shufflenet-v2.dlc) |
| Shufflenet-v2 | float | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | TFLITE | 0.781 ms | 0 - 35 MB | NPU | [Shufflenet-v2.tflite](https://huggingface.co/qualcomm/Shufflenet-v2/blob/main/Shufflenet-v2.tflite) |
| Shufflenet-v2 | float | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | QNN_DLC | 1.172 ms | 1 - 35 MB | NPU | [Shufflenet-v2.dlc](https://huggingface.co/qualcomm/Shufflenet-v2/blob/main/Shufflenet-v2.dlc) |
| Shufflenet-v2 | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | TFLITE | 0.701 ms | 0 - 30 MB | NPU | [Shufflenet-v2.tflite](https://huggingface.co/qualcomm/Shufflenet-v2/blob/main/Shufflenet-v2.tflite) |
| Shufflenet-v2 | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | QNN_DLC | 0.694 ms | 1 - 19 MB | NPU | [Shufflenet-v2.dlc](https://huggingface.co/qualcomm/Shufflenet-v2/blob/main/Shufflenet-v2.dlc) |
| Shufflenet-v2 | float | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | TFLITE | 0.971 ms | 0 - 19 MB | NPU | [Shufflenet-v2.tflite](https://huggingface.co/qualcomm/Shufflenet-v2/blob/main/Shufflenet-v2.tflite) |
| Shufflenet-v2 | float | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | QNN_DLC | 0.933 ms | 1 - 20 MB | NPU | [Shufflenet-v2.dlc](https://huggingface.co/qualcomm/Shufflenet-v2/blob/main/Shufflenet-v2.dlc) |
| Shufflenet-v2 | float | SA7255P ADP | Qualcomm® SA7255P | TFLITE | 1.592 ms | 0 - 19 MB | NPU | [Shufflenet-v2.tflite](https://huggingface.co/qualcomm/Shufflenet-v2/blob/main/Shufflenet-v2.tflite) |
| Shufflenet-v2 | float | SA7255P ADP | Qualcomm® SA7255P | QNN_DLC | 1.584 ms | 0 - 21 MB | NPU | [Shufflenet-v2.dlc](https://huggingface.co/qualcomm/Shufflenet-v2/blob/main/Shufflenet-v2.dlc) |
| Shufflenet-v2 | float | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | TFLITE | 0.7 ms | 0 - 29 MB | NPU | [Shufflenet-v2.tflite](https://huggingface.co/qualcomm/Shufflenet-v2/blob/main/Shufflenet-v2.tflite) |
| Shufflenet-v2 | float | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | QNN_DLC | 0.695 ms | 1 - 19 MB | NPU | [Shufflenet-v2.dlc](https://huggingface.co/qualcomm/Shufflenet-v2/blob/main/Shufflenet-v2.dlc) |
| Shufflenet-v2 | float | SA8295P ADP | Qualcomm® SA8295P | TFLITE | 1.11 ms | 0 - 26 MB | NPU | [Shufflenet-v2.tflite](https://huggingface.co/qualcomm/Shufflenet-v2/blob/main/Shufflenet-v2.tflite) |
| Shufflenet-v2 | float | SA8295P ADP | Qualcomm® SA8295P | QNN_DLC | 1.105 ms | 1 - 28 MB | NPU | [Shufflenet-v2.dlc](https://huggingface.co/qualcomm/Shufflenet-v2/blob/main/Shufflenet-v2.dlc) |
| Shufflenet-v2 | float | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | TFLITE | 0.701 ms | 0 - 29 MB | NPU | [Shufflenet-v2.tflite](https://huggingface.co/qualcomm/Shufflenet-v2/blob/main/Shufflenet-v2.tflite) |
| Shufflenet-v2 | float | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | QNN_DLC | 0.687 ms | 1 - 19 MB | NPU | [Shufflenet-v2.dlc](https://huggingface.co/qualcomm/Shufflenet-v2/blob/main/Shufflenet-v2.dlc) |
| Shufflenet-v2 | float | SA8775P ADP | Qualcomm® SA8775P | TFLITE | 0.971 ms | 0 - 19 MB | NPU | [Shufflenet-v2.tflite](https://huggingface.co/qualcomm/Shufflenet-v2/blob/main/Shufflenet-v2.tflite) |
| Shufflenet-v2 | float | SA8775P ADP | Qualcomm® SA8775P | QNN_DLC | 0.933 ms | 1 - 20 MB | NPU | [Shufflenet-v2.dlc](https://huggingface.co/qualcomm/Shufflenet-v2/blob/main/Shufflenet-v2.dlc) |
| Shufflenet-v2 | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | TFLITE | 0.699 ms | 0 - 29 MB | NPU | [Shufflenet-v2.tflite](https://huggingface.co/qualcomm/Shufflenet-v2/blob/main/Shufflenet-v2.tflite) |
| Shufflenet-v2 | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | QNN_DLC | 0.694 ms | 0 - 17 MB | NPU | [Shufflenet-v2.dlc](https://huggingface.co/qualcomm/Shufflenet-v2/blob/main/Shufflenet-v2.dlc) |
| Shufflenet-v2 | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | ONNX | 0.973 ms | 0 - 17 MB | NPU | [Shufflenet-v2.onnx.zip](https://huggingface.co/qualcomm/Shufflenet-v2/blob/main/Shufflenet-v2.onnx.zip) |
| Shufflenet-v2 | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | TFLITE | 0.445 ms | 0 - 26 MB | NPU | [Shufflenet-v2.tflite](https://huggingface.co/qualcomm/Shufflenet-v2/blob/main/Shufflenet-v2.tflite) |
| Shufflenet-v2 | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | QNN_DLC | 0.455 ms | 0 - 32 MB | NPU | [Shufflenet-v2.dlc](https://huggingface.co/qualcomm/Shufflenet-v2/blob/main/Shufflenet-v2.dlc) |
| Shufflenet-v2 | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | ONNX | 0.636 ms | 0 - 28 MB | NPU | [Shufflenet-v2.onnx.zip](https://huggingface.co/qualcomm/Shufflenet-v2/blob/main/Shufflenet-v2.onnx.zip) |
| Shufflenet-v2 | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | TFLITE | 0.442 ms | 0 - 24 MB | NPU | [Shufflenet-v2.tflite](https://huggingface.co/qualcomm/Shufflenet-v2/blob/main/Shufflenet-v2.tflite) |
| Shufflenet-v2 | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | QNN_DLC | 0.46 ms | 1 - 27 MB | NPU | [Shufflenet-v2.dlc](https://huggingface.co/qualcomm/Shufflenet-v2/blob/main/Shufflenet-v2.dlc) |
| Shufflenet-v2 | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | ONNX | 0.648 ms | 1 - 23 MB | NPU | [Shufflenet-v2.onnx.zip](https://huggingface.co/qualcomm/Shufflenet-v2/blob/main/Shufflenet-v2.onnx.zip) |
| Shufflenet-v2 | float | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN_DLC | 0.83 ms | 16 - 16 MB | NPU | [Shufflenet-v2.dlc](https://huggingface.co/qualcomm/Shufflenet-v2/blob/main/Shufflenet-v2.dlc) |
| Shufflenet-v2 | float | Snapdragon X Elite CRD | Snapdragon® X Elite | ONNX | 1.036 ms | 3 - 3 MB | NPU | [Shufflenet-v2.onnx.zip](https://huggingface.co/qualcomm/Shufflenet-v2/blob/main/Shufflenet-v2.onnx.zip) |
| Shufflenet-v2 | w8a8 | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | TFLITE | 0.781 ms | 0 - 16 MB | NPU | [Shufflenet-v2.tflite](https://huggingface.co/qualcomm/Shufflenet-v2/blob/main/Shufflenet-v2_w8a8.tflite) |
| Shufflenet-v2 | w8a8 | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | QNN_DLC | 1.049 ms | 0 - 17 MB | NPU | [Shufflenet-v2.dlc](https://huggingface.co/qualcomm/Shufflenet-v2/blob/main/Shufflenet-v2_w8a8.dlc) |
| Shufflenet-v2 | w8a8 | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | TFLITE | 0.359 ms | 0 - 33 MB | NPU | [Shufflenet-v2.tflite](https://huggingface.co/qualcomm/Shufflenet-v2/blob/main/Shufflenet-v2_w8a8.tflite) |
| Shufflenet-v2 | w8a8 | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | QNN_DLC | 0.535 ms | 0 - 29 MB | NPU | [Shufflenet-v2.dlc](https://huggingface.co/qualcomm/Shufflenet-v2/blob/main/Shufflenet-v2_w8a8.dlc) |
| Shufflenet-v2 | w8a8 | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | TFLITE | 0.316 ms | 0 - 11 MB | NPU | [Shufflenet-v2.tflite](https://huggingface.co/qualcomm/Shufflenet-v2/blob/main/Shufflenet-v2_w8a8.tflite) |
| Shufflenet-v2 | w8a8 | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | QNN_DLC | 0.462 ms | 0 - 11 MB | NPU | [Shufflenet-v2.dlc](https://huggingface.co/qualcomm/Shufflenet-v2/blob/main/Shufflenet-v2_w8a8.dlc) |
| Shufflenet-v2 | w8a8 | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | TFLITE | 0.512 ms | 0 - 16 MB | NPU | [Shufflenet-v2.tflite](https://huggingface.co/qualcomm/Shufflenet-v2/blob/main/Shufflenet-v2_w8a8.tflite) |
| Shufflenet-v2 | w8a8 | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | QNN_DLC | 0.656 ms | 0 - 17 MB | NPU | [Shufflenet-v2.dlc](https://huggingface.co/qualcomm/Shufflenet-v2/blob/main/Shufflenet-v2_w8a8.dlc) |
| Shufflenet-v2 | w8a8 | RB3 Gen 2 (Proxy) | Qualcomm® QCS6490 (Proxy) | TFLITE | 0.626 ms | 0 - 21 MB | NPU | [Shufflenet-v2.tflite](https://huggingface.co/qualcomm/Shufflenet-v2/blob/main/Shufflenet-v2_w8a8.tflite) |
| Shufflenet-v2 | w8a8 | RB3 Gen 2 (Proxy) | Qualcomm® QCS6490 (Proxy) | QNN_DLC | 1.041 ms | 0 - 20 MB | NPU | [Shufflenet-v2.dlc](https://huggingface.co/qualcomm/Shufflenet-v2/blob/main/Shufflenet-v2_w8a8.dlc) |
| Shufflenet-v2 | w8a8 | RB5 (Proxy) | Qualcomm® QCS8250 (Proxy) | TFLITE | 11.599 ms | 0 - 16 MB | CPU | [Shufflenet-v2.tflite](https://huggingface.co/qualcomm/Shufflenet-v2/blob/main/Shufflenet-v2_w8a8.tflite) |
| Shufflenet-v2 | w8a8 | SA7255P ADP | Qualcomm® SA7255P | TFLITE | 0.781 ms | 0 - 16 MB | NPU | [Shufflenet-v2.tflite](https://huggingface.co/qualcomm/Shufflenet-v2/blob/main/Shufflenet-v2_w8a8.tflite) |
| Shufflenet-v2 | w8a8 | SA7255P ADP | Qualcomm® SA7255P | QNN_DLC | 1.049 ms | 0 - 17 MB | NPU | [Shufflenet-v2.dlc](https://huggingface.co/qualcomm/Shufflenet-v2/blob/main/Shufflenet-v2_w8a8.dlc) |
| Shufflenet-v2 | w8a8 | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | TFLITE | 0.315 ms | 0 - 10 MB | NPU | [Shufflenet-v2.tflite](https://huggingface.co/qualcomm/Shufflenet-v2/blob/main/Shufflenet-v2_w8a8.tflite) |
| Shufflenet-v2 | w8a8 | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | QNN_DLC | 0.469 ms | 0 - 9 MB | NPU | [Shufflenet-v2.dlc](https://huggingface.co/qualcomm/Shufflenet-v2/blob/main/Shufflenet-v2_w8a8.dlc) |
| Shufflenet-v2 | w8a8 | SA8295P ADP | Qualcomm® SA8295P | TFLITE | 0.622 ms | 0 - 27 MB | NPU | [Shufflenet-v2.tflite](https://huggingface.co/qualcomm/Shufflenet-v2/blob/main/Shufflenet-v2_w8a8.tflite) |
| Shufflenet-v2 | w8a8 | SA8295P ADP | Qualcomm® SA8295P | QNN_DLC | 0.789 ms | 0 - 27 MB | NPU | [Shufflenet-v2.dlc](https://huggingface.co/qualcomm/Shufflenet-v2/blob/main/Shufflenet-v2_w8a8.dlc) |
| Shufflenet-v2 | w8a8 | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | TFLITE | 0.31 ms | 0 - 11 MB | NPU | [Shufflenet-v2.tflite](https://huggingface.co/qualcomm/Shufflenet-v2/blob/main/Shufflenet-v2_w8a8.tflite) |
| Shufflenet-v2 | w8a8 | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | QNN_DLC | 0.457 ms | 0 - 10 MB | NPU | [Shufflenet-v2.dlc](https://huggingface.co/qualcomm/Shufflenet-v2/blob/main/Shufflenet-v2_w8a8.dlc) |
| Shufflenet-v2 | w8a8 | SA8775P ADP | Qualcomm® SA8775P | TFLITE | 0.512 ms | 0 - 16 MB | NPU | [Shufflenet-v2.tflite](https://huggingface.co/qualcomm/Shufflenet-v2/blob/main/Shufflenet-v2_w8a8.tflite) |
| Shufflenet-v2 | w8a8 | SA8775P ADP | Qualcomm® SA8775P | QNN_DLC | 0.656 ms | 0 - 17 MB | NPU | [Shufflenet-v2.dlc](https://huggingface.co/qualcomm/Shufflenet-v2/blob/main/Shufflenet-v2_w8a8.dlc) |
| Shufflenet-v2 | w8a8 | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | TFLITE | 0.314 ms | 0 - 11 MB | NPU | [Shufflenet-v2.tflite](https://huggingface.co/qualcomm/Shufflenet-v2/blob/main/Shufflenet-v2_w8a8.tflite) |
| Shufflenet-v2 | w8a8 | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | QNN_DLC | 0.471 ms | 0 - 9 MB | NPU | [Shufflenet-v2.dlc](https://huggingface.co/qualcomm/Shufflenet-v2/blob/main/Shufflenet-v2_w8a8.dlc) |
| Shufflenet-v2 | w8a8 | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | ONNX | 9.591 ms | 1 - 47 MB | NPU | [Shufflenet-v2.onnx.zip](https://huggingface.co/qualcomm/Shufflenet-v2/blob/main/Shufflenet-v2_w8a8.onnx.zip) |
| Shufflenet-v2 | w8a8 | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | TFLITE | 0.221 ms | 0 - 23 MB | NPU | [Shufflenet-v2.tflite](https://huggingface.co/qualcomm/Shufflenet-v2/blob/main/Shufflenet-v2_w8a8.tflite) |
| Shufflenet-v2 | w8a8 | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | QNN_DLC | 0.326 ms | 0 - 27 MB | NPU | [Shufflenet-v2.dlc](https://huggingface.co/qualcomm/Shufflenet-v2/blob/main/Shufflenet-v2_w8a8.dlc) |
| Shufflenet-v2 | w8a8 | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | ONNX | 8.019 ms | 1 - 270 MB | NPU | [Shufflenet-v2.onnx.zip](https://huggingface.co/qualcomm/Shufflenet-v2/blob/main/Shufflenet-v2_w8a8.onnx.zip) |
| Shufflenet-v2 | w8a8 | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | TFLITE | 0.244 ms | 0 - 20 MB | NPU | [Shufflenet-v2.tflite](https://huggingface.co/qualcomm/Shufflenet-v2/blob/main/Shufflenet-v2_w8a8.tflite) |
| Shufflenet-v2 | w8a8 | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | QNN_DLC | 0.31 ms | 0 - 24 MB | NPU | [Shufflenet-v2.dlc](https://huggingface.co/qualcomm/Shufflenet-v2/blob/main/Shufflenet-v2_w8a8.dlc) |
| Shufflenet-v2 | w8a8 | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | ONNX | 7.848 ms | 1 - 258 MB | NPU | [Shufflenet-v2.onnx.zip](https://huggingface.co/qualcomm/Shufflenet-v2/blob/main/Shufflenet-v2_w8a8.onnx.zip) |
| Shufflenet-v2 | w8a8 | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN_DLC | 0.586 ms | 4 - 4 MB | NPU | [Shufflenet-v2.dlc](https://huggingface.co/qualcomm/Shufflenet-v2/blob/main/Shufflenet-v2_w8a8.dlc) |
| Shufflenet-v2 | w8a8 | Snapdragon X Elite CRD | Snapdragon® X Elite | ONNX | 9.607 ms | 6 - 6 MB | NPU | [Shufflenet-v2.onnx.zip](https://huggingface.co/qualcomm/Shufflenet-v2/blob/main/Shufflenet-v2_w8a8.onnx.zip) |
## Installation
Install the package via pip:
```bash
pip install qai-hub-models
```
## Configure Qualcomm® AI Hub to run this model on a cloud-hosted device
Sign-in to [Qualcomm® AI Hub](https://app.aihub.qualcomm.com/) with your
Qualcomm® ID. Once signed in navigate to `Account -> Settings -> API Token`.
With this API token, you can configure your client to run models on the cloud
hosted devices.
```bash
qai-hub configure --api_token API_TOKEN
```
Navigate to [docs](https://app.aihub.qualcomm.com/docs/) for more information.
## Demo off target
The package contains a simple end-to-end demo that downloads pre-trained
weights and runs this model on a sample input.
```bash
python -m qai_hub_models.models.shufflenet_v2.demo
```
The above demo runs a reference implementation of pre-processing, model
inference, and post-processing.
**NOTE**: If you want to run this in a Jupyter Notebook or a Google Colab-like
environment, add the following to your cell instead of the command above.
```
%run -m qai_hub_models.models.shufflenet_v2.demo
```
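For reference, the pre-processing step mentioned above boils down to scaling, ImageNet-style normalization, and a layout change. A minimal NumPy sketch, assuming the standard ImageNet mean/std statistics (the exact pipeline in `qai_hub_models` may differ):

```python
import numpy as np

# Standard ImageNet normalization statistics (assumption: the demo uses these).
IMAGENET_MEAN = np.array([0.485, 0.456, 0.406], dtype=np.float32)
IMAGENET_STD = np.array([0.229, 0.224, 0.225], dtype=np.float32)

def preprocess(image_hwc_uint8):
    """Convert an HxWx3 uint8 image into the 1x3x224x224 float tensor the model expects."""
    assert image_hwc_uint8.shape == (224, 224, 3), "resize to 224x224 first"
    x = image_hwc_uint8.astype(np.float32) / 255.0   # scale to [0, 1]
    x = (x - IMAGENET_MEAN) / IMAGENET_STD           # per-channel normalization
    x = np.transpose(x, (2, 0, 1))                   # HWC -> CHW
    return x[np.newaxis, ...]                        # add batch dimension
```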
### Run model on a cloud-hosted device
In addition to the demo, you can also run the model on a cloud-hosted Qualcomm®
device. This script does the following:
* Performance check on-device on a cloud-hosted device
* Downloads compiled assets that can be deployed on-device for Android.
* Accuracy check between PyTorch and on-device outputs.
```bash
python -m qai_hub_models.models.shufflenet_v2.export
```
## How does this work?
This [export script](https://aihub.qualcomm.com/models/shufflenet_v2/qai_hub_models/models/Shufflenet-v2/export.py)
leverages [Qualcomm® AI Hub](https://aihub.qualcomm.com/) to optimize, validate, and deploy this model
on-device. Let's go through each step below in detail:
Step 1: **Compile model for on-device deployment**
To compile a PyTorch model for on-device deployment, we first trace the model
in memory using `torch.jit.trace` and then call the `submit_compile_job` API.
```python
import torch
import qai_hub as hub
from qai_hub_models.models.shufflenet_v2 import Model
# Load the model
torch_model = Model.from_pretrained()
# Device
device = hub.Device("Samsung Galaxy S24")
# Trace model
input_shape = torch_model.get_input_spec()
sample_inputs = torch_model.sample_inputs()
pt_model = torch.jit.trace(torch_model, [torch.tensor(data[0]) for _, data in sample_inputs.items()])
# Compile model on a specific device
compile_job = hub.submit_compile_job(
model=pt_model,
device=device,
input_specs=torch_model.get_input_spec(),
)
# Get target model to run on-device
target_model = compile_job.get_target_model()
```
Step 2: **Performance profiling on cloud-hosted device**
After compiling the model in Step 1, it can be profiled on-device using the
`target_model`. Note that this script runs the model on a device automatically
provisioned in the cloud. Once the job is submitted, you can navigate to a
provided job URL to view a variety of on-device performance metrics.
```python
profile_job = hub.submit_profile_job(
model=target_model,
device=device,
)
```
Step 3: **Verify on-device accuracy**
To verify the accuracy of the model on-device, you can run on-device inference
on sample input data on the same cloud hosted device.
```python
input_data = torch_model.sample_inputs()
inference_job = hub.submit_inference_job(
model=target_model,
device=device,
inputs=input_data,
)
on_device_output = inference_job.download_output_data()
```
With the output of the model, you can compute metrics like PSNR or relative
errors, or spot-check the output against the expected output.
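For example, a quick PSNR check between the PyTorch output and the on-device output can be sketched as follows. Note that `psnr` here is a hypothetical helper, not part of `qai_hub_models`; it assumes both outputs have been converted to NumPy arrays of the same shape:

```python
import numpy as np

def psnr(reference, candidate, peak=None):
    """Peak signal-to-noise ratio in dB between two same-shaped arrays."""
    reference = np.asarray(reference, dtype=np.float64)
    candidate = np.asarray(candidate, dtype=np.float64)
    mse = np.mean((reference - candidate) ** 2)
    if mse == 0:
        return float("inf")  # identical outputs
    if peak is None:
        peak = float(np.max(np.abs(reference)))
    return 10.0 * np.log10(peak ** 2 / mse)

# Hypothetical usage: compare the torch output with the on-device output.
# score = psnr(torch_output.numpy(), on_device_output["output_0"][0])
```

Higher values indicate closer agreement; identical arrays yield infinity.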
**Note**: This on-device profiling and inference requires access to Qualcomm®
AI Hub. [Sign up for access](https://myaccount.qualcomm.com/signup).
## Run demo on a cloud-hosted device
You can also run the demo on-device.
```bash
python -m qai_hub_models.models.shufflenet_v2.demo --eval-mode on-device
```
**NOTE**: If you want to run this in a Jupyter Notebook or a Google Colab-like
environment, add the following to your cell instead of the command above.
```
%run -m qai_hub_models.models.shufflenet_v2.demo -- --eval-mode on-device
```
## Deploying compiled model to Android
The models can be deployed using multiple runtimes:
- TensorFlow Lite (`.tflite` export): [This
tutorial](https://www.tensorflow.org/lite/android/quickstart) provides a
guide to deploy the .tflite model in an Android application.
- QNN (`.so` export): This [sample
app](https://docs.qualcomm.com/bundle/publicresource/topics/80-63442-50/sample_app.html)
provides instructions on how to use the `.so` shared library in an Android application.
## View on Qualcomm® AI Hub
Get more details on Shufflenet-v2's performance across various devices [here](https://aihub.qualcomm.com/models/shufflenet_v2).
Explore all available models on [Qualcomm® AI Hub](https://aihub.qualcomm.com/)
## License
* The license for the original implementation of Shufflenet-v2 can be found
[here](https://github.com/pytorch/vision/blob/main/LICENSE).
* The license for the compiled assets for on-device deployment can be found [here](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/Qualcomm+AI+Hub+Proprietary+License.pdf)
## References
* [ShuffleNet V2: Practical Guidelines for Efficient CNN Architecture Design](https://arxiv.org/abs/1807.11164)
* [Source Model Implementation](https://github.com/pytorch/vision/blob/main/torchvision/models/shufflenetv2.py)
## Community
* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
* For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com).
|
seraphimzzzz/572890
|
seraphimzzzz
| 2025-08-29T23:42:46Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-29T23:42:40Z |
[View on Civ Archive](https://civarchive.com/models/589279?modelVersionId=657943)
|
tamewild/4b_v66_merged_e3
|
tamewild
| 2025-08-29T23:42:43Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-29T23:41:27Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
seraphimzzzz/606810
|
seraphimzzzz
| 2025-08-29T23:42:17Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-29T23:42:12Z |
[View on Civ Archive](https://civarchive.com/models/618597?modelVersionId=691527)
|
seraphimzzzz/568401
|
seraphimzzzz
| 2025-08-29T23:41:57Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-29T23:41:57Z |
[View on Civ Archive](https://civarchive.com/models/584094?modelVersionId=651672)
|
sekirr/blockassist-bc-masked_tenacious_whale_1756510869
|
sekirr
| 2025-08-29T23:41:49Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"masked tenacious whale",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-29T23:41:45Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- masked tenacious whale
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
qualcomm/SalsaNext
|
qualcomm
| 2025-08-29T23:41:47Z | 23 | 0 |
pytorch
|
[
"pytorch",
"tflite",
"android",
"image-segmentation",
"license:other",
"region:us"
] |
image-segmentation
| 2025-07-02T21:12:51Z |
---
library_name: pytorch
license: other
tags:
- android
pipeline_tag: image-segmentation
---

# SalsaNext: Optimized for Mobile Deployment
## Semantic segmentation model optimized for LiDAR point cloud data
SalsaNext is a LiDAR-based semantic segmentation model designed for efficient and accurate segmentation of point cloud data.
This repository provides scripts to run SalsaNext on Qualcomm® devices.
More details on model performance across various devices, can be found
[here](https://aihub.qualcomm.com/models/salsanext).
### Model Details
- **Model Type:** Model_use_case.semantic_segmentation
- **Model Stats:**
- Model checkpoint: SalsaNext
- Input resolution: 1x5x64x2048
- Number of parameters: 6.71M
- Model size (float): 25.7 MB
| Model | Precision | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Primary Compute Unit | Target Model
|---|---|---|---|---|---|---|---|---|
| SalsaNext | float | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | TFLITE | 121.917 ms | 10 - 51 MB | NPU | [SalsaNext.tflite](https://huggingface.co/qualcomm/SalsaNext/blob/main/SalsaNext.tflite) |
| SalsaNext | float | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | QNN_DLC | 118.129 ms | 0 - 43 MB | NPU | [SalsaNext.dlc](https://huggingface.co/qualcomm/SalsaNext/blob/main/SalsaNext.dlc) |
| SalsaNext | float | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | TFLITE | 45.983 ms | 10 - 68 MB | NPU | [SalsaNext.tflite](https://huggingface.co/qualcomm/SalsaNext/blob/main/SalsaNext.tflite) |
| SalsaNext | float | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | QNN_DLC | 57.071 ms | 3 - 65 MB | NPU | [SalsaNext.dlc](https://huggingface.co/qualcomm/SalsaNext/blob/main/SalsaNext.dlc) |
| SalsaNext | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | TFLITE | 32.787 ms | 10 - 21 MB | NPU | [SalsaNext.tflite](https://huggingface.co/qualcomm/SalsaNext/blob/main/SalsaNext.tflite) |
| SalsaNext | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | QNN_DLC | 33.11 ms | 0 - 31 MB | NPU | [SalsaNext.dlc](https://huggingface.co/qualcomm/SalsaNext/blob/main/SalsaNext.dlc) |
| SalsaNext | float | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | TFLITE | 39.657 ms | 10 - 51 MB | NPU | [SalsaNext.tflite](https://huggingface.co/qualcomm/SalsaNext/blob/main/SalsaNext.tflite) |
| SalsaNext | float | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | QNN_DLC | 39.21 ms | 0 - 43 MB | NPU | [SalsaNext.dlc](https://huggingface.co/qualcomm/SalsaNext/blob/main/SalsaNext.dlc) |
| SalsaNext | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | TFLITE | 32.983 ms | 10 - 24 MB | NPU | [SalsaNext.tflite](https://huggingface.co/qualcomm/SalsaNext/blob/main/SalsaNext.tflite) |
| SalsaNext | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | QNN_DLC | 33.284 ms | 3 - 16 MB | NPU | [SalsaNext.dlc](https://huggingface.co/qualcomm/SalsaNext/blob/main/SalsaNext.dlc) |
| SalsaNext | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | ONNX | 32.55 ms | 10 - 51 MB | NPU | [SalsaNext.onnx.zip](https://huggingface.co/qualcomm/SalsaNext/blob/main/SalsaNext.onnx.zip) |
| SalsaNext | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | TFLITE | 23.376 ms | 9 - 58 MB | NPU | [SalsaNext.tflite](https://huggingface.co/qualcomm/SalsaNext/blob/main/SalsaNext.tflite) |
| SalsaNext | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | QNN_DLC | 25.031 ms | 3 - 52 MB | NPU | [SalsaNext.dlc](https://huggingface.co/qualcomm/SalsaNext/blob/main/SalsaNext.dlc) |
| SalsaNext | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | ONNX | 23.671 ms | 25 - 69 MB | NPU | [SalsaNext.onnx.zip](https://huggingface.co/qualcomm/SalsaNext/blob/main/SalsaNext.onnx.zip) |
| SalsaNext | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | TFLITE | 22.018 ms | 10 - 54 MB | NPU | [SalsaNext.tflite](https://huggingface.co/qualcomm/SalsaNext/blob/main/SalsaNext.tflite) |
| SalsaNext | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | QNN_DLC | 23.464 ms | 3 - 50 MB | NPU | [SalsaNext.dlc](https://huggingface.co/qualcomm/SalsaNext/blob/main/SalsaNext.dlc) |
| SalsaNext | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | ONNX | 21.652 ms | 25 - 73 MB | NPU | [SalsaNext.onnx.zip](https://huggingface.co/qualcomm/SalsaNext/blob/main/SalsaNext.onnx.zip) |
| SalsaNext | float | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN_DLC | 32.211 ms | 45 - 45 MB | NPU | [SalsaNext.dlc](https://huggingface.co/qualcomm/SalsaNext/blob/main/SalsaNext.dlc) |
| SalsaNext | float | Snapdragon X Elite CRD | Snapdragon® X Elite | ONNX | 32.259 ms | 33 - 33 MB | NPU | [SalsaNext.onnx.zip](https://huggingface.co/qualcomm/SalsaNext/blob/main/SalsaNext.onnx.zip) |
| SalsaNext | w8a16 | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | QNN_DLC | 71.498 ms | 1 - 74 MB | NPU | [SalsaNext.dlc](https://huggingface.co/qualcomm/SalsaNext/blob/main/SalsaNext_w8a16.dlc) |
| SalsaNext | w8a16 | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | QNN_DLC | 72.924 ms | 1 - 126 MB | NPU | [SalsaNext.dlc](https://huggingface.co/qualcomm/SalsaNext/blob/main/SalsaNext_w8a16.dlc) |
| SalsaNext | w8a16 | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | QNN_DLC | 41.639 ms | 0 - 31 MB | NPU | [SalsaNext.dlc](https://huggingface.co/qualcomm/SalsaNext/blob/main/SalsaNext_w8a16.dlc) |
| SalsaNext | w8a16 | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | QNN_DLC | 40.223 ms | 1 - 76 MB | NPU | [SalsaNext.dlc](https://huggingface.co/qualcomm/SalsaNext/blob/main/SalsaNext_w8a16.dlc) |
| SalsaNext | w8a16 | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | QNN_DLC | 41.372 ms | 0 - 32 MB | NPU | [SalsaNext.dlc](https://huggingface.co/qualcomm/SalsaNext/blob/main/SalsaNext_w8a16.dlc) |
| SalsaNext | w8a16 | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | QNN_DLC | 28.139 ms | 1 - 80 MB | NPU | [SalsaNext.dlc](https://huggingface.co/qualcomm/SalsaNext/blob/main/SalsaNext_w8a16.dlc) |
| SalsaNext | w8a16 | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | ONNX | 374.08 ms | 318 - 686 MB | NPU | [SalsaNext.onnx.zip](https://huggingface.co/qualcomm/SalsaNext/blob/main/SalsaNext_w8a16.onnx.zip) |
| SalsaNext | w8a16 | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | QNN_DLC | 26.916 ms | 1 - 81 MB | NPU | [SalsaNext.dlc](https://huggingface.co/qualcomm/SalsaNext/blob/main/SalsaNext_w8a16.dlc) |
| SalsaNext | w8a16 | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | ONNX | 336.785 ms | 323 - 576 MB | NPU | [SalsaNext.onnx.zip](https://huggingface.co/qualcomm/SalsaNext/blob/main/SalsaNext_w8a16.onnx.zip) |
| SalsaNext | w8a16 | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN_DLC | 40.726 ms | 39 - 39 MB | NPU | [SalsaNext.dlc](https://huggingface.co/qualcomm/SalsaNext/blob/main/SalsaNext_w8a16.dlc) |
| SalsaNext | w8a16 | Snapdragon X Elite CRD | Snapdragon® X Elite | ONNX | 385.857 ms | 529 - 529 MB | NPU | [SalsaNext.onnx.zip](https://huggingface.co/qualcomm/SalsaNext/blob/main/SalsaNext_w8a16.onnx.zip) |
## Installation
Install the package via pip:
```bash
pip install qai-hub-models
```
## Configure Qualcomm® AI Hub to run this model on a cloud-hosted device
Sign in to [Qualcomm® AI Hub](https://app.aihub.qualcomm.com/) with your
Qualcomm® ID. Once signed in, navigate to `Account -> Settings -> API Token`.
With this API token, you can configure your client to run models on the
cloud-hosted devices.
```bash
qai-hub configure --api_token API_TOKEN
```
Navigate to [docs](https://app.aihub.qualcomm.com/docs/) for more information.
## Demo off target
The package contains a simple end-to-end demo that downloads pre-trained
weights and runs this model on a sample input.
```bash
python -m qai_hub_models.models.salsanext.demo
```
The above demo runs a reference implementation of pre-processing, model
inference, and post-processing.
**NOTE**: If you are running in a Jupyter Notebook or Google Colab-like
environment, use the following in your cell instead of the above.
```
%run -m qai_hub_models.models.salsanext.demo
```
### Run model on a cloud-hosted device
In addition to the demo, you can also run the model on a cloud-hosted Qualcomm®
device. This script does the following:
* Performance check on-device on a cloud-hosted device
* Downloads compiled assets that can be deployed on-device for Android.
* Accuracy check between PyTorch and on-device outputs.
```bash
python -m qai_hub_models.models.salsanext.export
```
## How does this work?
This [export script](https://aihub.qualcomm.com/models/salsanext/qai_hub_models/models/SalsaNext/export.py)
leverages [Qualcomm® AI Hub](https://aihub.qualcomm.com/) to optimize, validate, and deploy this model
on-device. Let's go through each step below in detail:
Step 1: **Compile model for on-device deployment**
To compile a PyTorch model for on-device deployment, we first trace the model
in memory using `torch.jit.trace` and then call the `submit_compile_job` API.
```python
import torch
import qai_hub as hub
from qai_hub_models.models.salsanext import Model
# Load the model
torch_model = Model.from_pretrained()
# Device
device = hub.Device("Samsung Galaxy S24")
# Trace model
input_shape = torch_model.get_input_spec()
sample_inputs = torch_model.sample_inputs()
pt_model = torch.jit.trace(torch_model, [torch.tensor(data[0]) for _, data in sample_inputs.items()])
# Compile model on a specific device
compile_job = hub.submit_compile_job(
model=pt_model,
device=device,
input_specs=torch_model.get_input_spec(),
)
# Get target model to run on-device
target_model = compile_job.get_target_model()
```
Step 2: **Performance profiling on cloud-hosted device**
After compiling the model in Step 1, it can be profiled on-device using the
`target_model`. Note that this script runs the model on a device automatically
provisioned in the cloud. Once the job is submitted, you can navigate to a
provided job URL to view a variety of on-device performance metrics.
```python
profile_job = hub.submit_profile_job(
model=target_model,
device=device,
)
```
Step 3: **Verify on-device accuracy**
To verify the accuracy of the model on-device, you can run on-device inference
on sample input data on the same cloud-hosted device.
```python
input_data = torch_model.sample_inputs()
inference_job = hub.submit_inference_job(
model=target_model,
device=device,
inputs=input_data,
)
on_device_output = inference_job.download_output_data()
```
With the output of the model, you can compute metrics such as PSNR or relative
error, or spot-check the output against the expected output.
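For instance, a minimal PSNR comparison between the PyTorch reference output and the downloaded on-device output might look like this (a sketch; `psnr` is a helper defined here, not part of `qai_hub_models`):

```python
import numpy as np

def psnr(expected, observed, peak=None):
    """Peak signal-to-noise ratio in dB between a reference and a test array."""
    expected = np.asarray(expected, dtype=np.float64)
    observed = np.asarray(observed, dtype=np.float64)
    mse = np.mean((expected - observed) ** 2)
    if mse == 0:
        return float("inf")  # outputs are identical
    if peak is None:
        peak = np.max(np.abs(expected))  # use the reference's dynamic range
    return 10.0 * np.log10((peak ** 2) / mse)

# e.g. compare the PyTorch output of torch_model against the arrays returned
# by inference_job.download_output_data()
```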
**Note**: This on-device profiling and inference requires access to Qualcomm®
AI Hub. [Sign up for access](https://myaccount.qualcomm.com/signup).
## Deploying compiled model to Android
The models can be deployed using multiple runtimes:
- TensorFlow Lite (`.tflite` export): [This
tutorial](https://www.tensorflow.org/lite/android/quickstart) provides a
guide to deploy the .tflite model in an Android application.
- QNN (`.so` export): This [sample
app](https://docs.qualcomm.com/bundle/publicresource/topics/80-63442-50/sample_app.html)
provides instructions on how to use the `.so` shared library in an Android application.
## View on Qualcomm® AI Hub
Get more details on SalsaNext's performance across various devices [here](https://aihub.qualcomm.com/models/salsanext).
Explore all available models on [Qualcomm® AI Hub](https://aihub.qualcomm.com/).
## License
* The license for the original implementation of SalsaNext can be found
[here](https://github.com/TiagoCortinhal/SalsaNext/blob/master/LICENSE).
* The license for the compiled assets for on-device deployment can be found [here](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/Qualcomm+AI+Hub+Proprietary+License.pdf)
## Community
* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
* For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com).
|
crystalline7/763528
|
crystalline7
| 2025-08-29T23:41:45Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-29T23:41:36Z |
[View on Civ Archive](https://civarchive.com/models/647202?modelVersionId=854348)
|
amethyst9/801287
|
amethyst9
| 2025-08-29T23:41:24Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-29T23:41:17Z |
[View on Civ Archive](https://civarchive.com/models/798412?modelVersionId=892803)
|
amethyst9/743547
|
amethyst9
| 2025-08-29T23:41:08Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-29T23:41:03Z |
[View on Civ Archive](https://civarchive.com/models/567581?modelVersionId=829529)
|
qualcomm/ResNeXt50
|
qualcomm
| 2025-08-29T23:40:56Z | 43 | 1 |
pytorch
|
[
"pytorch",
"tflite",
"backbone",
"android",
"image-classification",
"arxiv:1611.05431",
"license:other",
"region:us"
] |
image-classification
| 2024-02-25T23:08:17Z |
---
library_name: pytorch
license: other
tags:
- backbone
- android
pipeline_tag: image-classification
---

# ResNeXt50: Optimized for Mobile Deployment
## Imagenet classifier and general purpose backbone
ResNeXt50 is a machine learning model that can classify images from the Imagenet dataset. It can also be used as a backbone in building more complex models for specific use cases.
This model is an implementation of ResNeXt50 found [here](https://github.com/pytorch/vision/blob/main/torchvision/models/resnet.py).
This repository provides scripts to run ResNeXt50 on Qualcomm® devices.
More details on model performance across various devices can be found
[here](https://aihub.qualcomm.com/models/resnext50).
### Model Details
- **Model Type:** Image classification
- **Model Stats:**
- Model checkpoint: Imagenet
- Input resolution: 224x224
- Number of parameters: 25.0M
- Model size (float): 95.4 MB
- Model size (w8a8): 24.8 MB
| Model | Precision | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Primary Compute Unit | Target Model |
|---|---|---|---|---|---|---|---|---|
| ResNeXt50 | float | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | TFLITE | 12.155 ms | 0 - 85 MB | NPU | [ResNeXt50.tflite](https://huggingface.co/qualcomm/ResNeXt50/blob/main/ResNeXt50.tflite) |
| ResNeXt50 | float | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | QNN_DLC | 11.903 ms | 1 - 46 MB | NPU | [ResNeXt50.dlc](https://huggingface.co/qualcomm/ResNeXt50/blob/main/ResNeXt50.dlc) |
| ResNeXt50 | float | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | TFLITE | 3.376 ms | 0 - 92 MB | NPU | [ResNeXt50.tflite](https://huggingface.co/qualcomm/ResNeXt50/blob/main/ResNeXt50.tflite) |
| ResNeXt50 | float | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | QNN_DLC | 3.862 ms | 1 - 39 MB | NPU | [ResNeXt50.dlc](https://huggingface.co/qualcomm/ResNeXt50/blob/main/ResNeXt50.dlc) |
| ResNeXt50 | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | TFLITE | 2.48 ms | 0 - 25 MB | NPU | [ResNeXt50.tflite](https://huggingface.co/qualcomm/ResNeXt50/blob/main/ResNeXt50.tflite) |
| ResNeXt50 | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | QNN_DLC | 2.47 ms | 1 - 15 MB | NPU | [ResNeXt50.dlc](https://huggingface.co/qualcomm/ResNeXt50/blob/main/ResNeXt50.dlc) |
| ResNeXt50 | float | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | TFLITE | 3.932 ms | 0 - 86 MB | NPU | [ResNeXt50.tflite](https://huggingface.co/qualcomm/ResNeXt50/blob/main/ResNeXt50.tflite) |
| ResNeXt50 | float | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | QNN_DLC | 3.791 ms | 1 - 45 MB | NPU | [ResNeXt50.dlc](https://huggingface.co/qualcomm/ResNeXt50/blob/main/ResNeXt50.dlc) |
| ResNeXt50 | float | SA7255P ADP | Qualcomm® SA7255P | TFLITE | 12.155 ms | 0 - 85 MB | NPU | [ResNeXt50.tflite](https://huggingface.co/qualcomm/ResNeXt50/blob/main/ResNeXt50.tflite) |
| ResNeXt50 | float | SA7255P ADP | Qualcomm® SA7255P | QNN_DLC | 11.903 ms | 1 - 46 MB | NPU | [ResNeXt50.dlc](https://huggingface.co/qualcomm/ResNeXt50/blob/main/ResNeXt50.dlc) |
| ResNeXt50 | float | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | TFLITE | 2.51 ms | 0 - 282 MB | NPU | [ResNeXt50.tflite](https://huggingface.co/qualcomm/ResNeXt50/blob/main/ResNeXt50.tflite) |
| ResNeXt50 | float | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | QNN_DLC | 2.44 ms | 1 - 15 MB | NPU | [ResNeXt50.dlc](https://huggingface.co/qualcomm/ResNeXt50/blob/main/ResNeXt50.dlc) |
| ResNeXt50 | float | SA8295P ADP | Qualcomm® SA8295P | TFLITE | 4.041 ms | 0 - 84 MB | NPU | [ResNeXt50.tflite](https://huggingface.co/qualcomm/ResNeXt50/blob/main/ResNeXt50.tflite) |
| ResNeXt50 | float | SA8295P ADP | Qualcomm® SA8295P | QNN_DLC | 3.97 ms | 0 - 35 MB | NPU | [ResNeXt50.dlc](https://huggingface.co/qualcomm/ResNeXt50/blob/main/ResNeXt50.dlc) |
| ResNeXt50 | float | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | TFLITE | 2.517 ms | 0 - 274 MB | NPU | [ResNeXt50.tflite](https://huggingface.co/qualcomm/ResNeXt50/blob/main/ResNeXt50.tflite) |
| ResNeXt50 | float | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | QNN_DLC | 2.472 ms | 1 - 17 MB | NPU | [ResNeXt50.dlc](https://huggingface.co/qualcomm/ResNeXt50/blob/main/ResNeXt50.dlc) |
| ResNeXt50 | float | SA8775P ADP | Qualcomm® SA8775P | TFLITE | 3.932 ms | 0 - 86 MB | NPU | [ResNeXt50.tflite](https://huggingface.co/qualcomm/ResNeXt50/blob/main/ResNeXt50.tflite) |
| ResNeXt50 | float | SA8775P ADP | Qualcomm® SA8775P | QNN_DLC | 3.791 ms | 1 - 45 MB | NPU | [ResNeXt50.dlc](https://huggingface.co/qualcomm/ResNeXt50/blob/main/ResNeXt50.dlc) |
| ResNeXt50 | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | TFLITE | 2.489 ms | 0 - 287 MB | NPU | [ResNeXt50.tflite](https://huggingface.co/qualcomm/ResNeXt50/blob/main/ResNeXt50.tflite) |
| ResNeXt50 | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | QNN_DLC | 2.449 ms | 1 - 15 MB | NPU | [ResNeXt50.dlc](https://huggingface.co/qualcomm/ResNeXt50/blob/main/ResNeXt50.dlc) |
| ResNeXt50 | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | ONNX | 2.487 ms | 0 - 124 MB | NPU | [ResNeXt50.onnx.zip](https://huggingface.co/qualcomm/ResNeXt50/blob/main/ResNeXt50.onnx.zip) |
| ResNeXt50 | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | TFLITE | 1.769 ms | 0 - 89 MB | NPU | [ResNeXt50.tflite](https://huggingface.co/qualcomm/ResNeXt50/blob/main/ResNeXt50.tflite) |
| ResNeXt50 | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | QNN_DLC | 1.761 ms | 1 - 50 MB | NPU | [ResNeXt50.dlc](https://huggingface.co/qualcomm/ResNeXt50/blob/main/ResNeXt50.dlc) |
| ResNeXt50 | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | ONNX | 1.781 ms | 0 - 49 MB | NPU | [ResNeXt50.onnx.zip](https://huggingface.co/qualcomm/ResNeXt50/blob/main/ResNeXt50.onnx.zip) |
| ResNeXt50 | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | TFLITE | 1.631 ms | 0 - 87 MB | NPU | [ResNeXt50.tflite](https://huggingface.co/qualcomm/ResNeXt50/blob/main/ResNeXt50.tflite) |
| ResNeXt50 | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | QNN_DLC | 1.668 ms | 1 - 49 MB | NPU | [ResNeXt50.dlc](https://huggingface.co/qualcomm/ResNeXt50/blob/main/ResNeXt50.dlc) |
| ResNeXt50 | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | ONNX | 1.784 ms | 1 - 44 MB | NPU | [ResNeXt50.onnx.zip](https://huggingface.co/qualcomm/ResNeXt50/blob/main/ResNeXt50.onnx.zip) |
| ResNeXt50 | float | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN_DLC | 2.604 ms | 182 - 182 MB | NPU | [ResNeXt50.dlc](https://huggingface.co/qualcomm/ResNeXt50/blob/main/ResNeXt50.dlc) |
| ResNeXt50 | float | Snapdragon X Elite CRD | Snapdragon® X Elite | ONNX | 2.464 ms | 50 - 50 MB | NPU | [ResNeXt50.onnx.zip](https://huggingface.co/qualcomm/ResNeXt50/blob/main/ResNeXt50.onnx.zip) |
| ResNeXt50 | w8a8 | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | TFLITE | 2.161 ms | 0 - 47 MB | NPU | [ResNeXt50.tflite](https://huggingface.co/qualcomm/ResNeXt50/blob/main/ResNeXt50_w8a8.tflite) |
| ResNeXt50 | w8a8 | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | QNN_DLC | 2.489 ms | 0 - 49 MB | NPU | [ResNeXt50.dlc](https://huggingface.co/qualcomm/ResNeXt50/blob/main/ResNeXt50_w8a8.dlc) |
| ResNeXt50 | w8a8 | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | TFLITE | 1.092 ms | 0 - 62 MB | NPU | [ResNeXt50.tflite](https://huggingface.co/qualcomm/ResNeXt50/blob/main/ResNeXt50_w8a8.tflite) |
| ResNeXt50 | w8a8 | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | QNN_DLC | 1.427 ms | 0 - 58 MB | NPU | [ResNeXt50.dlc](https://huggingface.co/qualcomm/ResNeXt50/blob/main/ResNeXt50_w8a8.dlc) |
| ResNeXt50 | w8a8 | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | TFLITE | 0.897 ms | 0 - 93 MB | NPU | [ResNeXt50.tflite](https://huggingface.co/qualcomm/ResNeXt50/blob/main/ResNeXt50_w8a8.tflite) |
| ResNeXt50 | w8a8 | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | QNN_DLC | 1.093 ms | 0 - 18 MB | NPU | [ResNeXt50.dlc](https://huggingface.co/qualcomm/ResNeXt50/blob/main/ResNeXt50_w8a8.dlc) |
| ResNeXt50 | w8a8 | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | TFLITE | 1.221 ms | 0 - 48 MB | NPU | [ResNeXt50.tflite](https://huggingface.co/qualcomm/ResNeXt50/blob/main/ResNeXt50_w8a8.tflite) |
| ResNeXt50 | w8a8 | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | QNN_DLC | 1.458 ms | 0 - 49 MB | NPU | [ResNeXt50.dlc](https://huggingface.co/qualcomm/ResNeXt50/blob/main/ResNeXt50_w8a8.dlc) |
| ResNeXt50 | w8a8 | RB3 Gen 2 (Proxy) | Qualcomm® QCS6490 (Proxy) | TFLITE | 3.04 ms | 0 - 63 MB | NPU | [ResNeXt50.tflite](https://huggingface.co/qualcomm/ResNeXt50/blob/main/ResNeXt50_w8a8.tflite) |
| ResNeXt50 | w8a8 | RB3 Gen 2 (Proxy) | Qualcomm® QCS6490 (Proxy) | QNN_DLC | 4.851 ms | 0 - 69 MB | NPU | [ResNeXt50.dlc](https://huggingface.co/qualcomm/ResNeXt50/blob/main/ResNeXt50_w8a8.dlc) |
| ResNeXt50 | w8a8 | RB5 (Proxy) | Qualcomm® QCS8250 (Proxy) | TFLITE | 81.525 ms | 0 - 122 MB | GPU | [ResNeXt50.tflite](https://huggingface.co/qualcomm/ResNeXt50/blob/main/ResNeXt50_w8a8.tflite) |
| ResNeXt50 | w8a8 | SA7255P ADP | Qualcomm® SA7255P | TFLITE | 2.161 ms | 0 - 47 MB | NPU | [ResNeXt50.tflite](https://huggingface.co/qualcomm/ResNeXt50/blob/main/ResNeXt50_w8a8.tflite) |
| ResNeXt50 | w8a8 | SA7255P ADP | Qualcomm® SA7255P | QNN_DLC | 2.489 ms | 0 - 49 MB | NPU | [ResNeXt50.dlc](https://huggingface.co/qualcomm/ResNeXt50/blob/main/ResNeXt50_w8a8.dlc) |
| ResNeXt50 | w8a8 | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | TFLITE | 0.897 ms | 0 - 93 MB | NPU | [ResNeXt50.tflite](https://huggingface.co/qualcomm/ResNeXt50/blob/main/ResNeXt50_w8a8.tflite) |
| ResNeXt50 | w8a8 | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | QNN_DLC | 1.096 ms | 0 - 12 MB | NPU | [ResNeXt50.dlc](https://huggingface.co/qualcomm/ResNeXt50/blob/main/ResNeXt50_w8a8.dlc) |
| ResNeXt50 | w8a8 | SA8295P ADP | Qualcomm® SA8295P | TFLITE | 1.465 ms | 0 - 55 MB | NPU | [ResNeXt50.tflite](https://huggingface.co/qualcomm/ResNeXt50/blob/main/ResNeXt50_w8a8.tflite) |
| ResNeXt50 | w8a8 | SA8295P ADP | Qualcomm® SA8295P | QNN_DLC | 1.727 ms | 0 - 56 MB | NPU | [ResNeXt50.dlc](https://huggingface.co/qualcomm/ResNeXt50/blob/main/ResNeXt50_w8a8.dlc) |
| ResNeXt50 | w8a8 | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | TFLITE | 0.897 ms | 0 - 93 MB | NPU | [ResNeXt50.tflite](https://huggingface.co/qualcomm/ResNeXt50/blob/main/ResNeXt50_w8a8.tflite) |
| ResNeXt50 | w8a8 | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | QNN_DLC | 1.093 ms | 0 - 20 MB | NPU | [ResNeXt50.dlc](https://huggingface.co/qualcomm/ResNeXt50/blob/main/ResNeXt50_w8a8.dlc) |
| ResNeXt50 | w8a8 | SA8775P ADP | Qualcomm® SA8775P | TFLITE | 1.221 ms | 0 - 48 MB | NPU | [ResNeXt50.tflite](https://huggingface.co/qualcomm/ResNeXt50/blob/main/ResNeXt50_w8a8.tflite) |
| ResNeXt50 | w8a8 | SA8775P ADP | Qualcomm® SA8775P | QNN_DLC | 1.458 ms | 0 - 49 MB | NPU | [ResNeXt50.dlc](https://huggingface.co/qualcomm/ResNeXt50/blob/main/ResNeXt50_w8a8.dlc) |
| ResNeXt50 | w8a8 | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | TFLITE | 0.893 ms | 0 - 93 MB | NPU | [ResNeXt50.tflite](https://huggingface.co/qualcomm/ResNeXt50/blob/main/ResNeXt50_w8a8.tflite) |
| ResNeXt50 | w8a8 | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | QNN_DLC | 1.095 ms | 0 - 21 MB | NPU | [ResNeXt50.dlc](https://huggingface.co/qualcomm/ResNeXt50/blob/main/ResNeXt50_w8a8.dlc) |
| ResNeXt50 | w8a8 | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | TFLITE | 0.669 ms | 0 - 58 MB | NPU | [ResNeXt50.tflite](https://huggingface.co/qualcomm/ResNeXt50/blob/main/ResNeXt50_w8a8.tflite) |
| ResNeXt50 | w8a8 | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | QNN_DLC | 0.811 ms | 0 - 58 MB | NPU | [ResNeXt50.dlc](https://huggingface.co/qualcomm/ResNeXt50/blob/main/ResNeXt50_w8a8.dlc) |
| ResNeXt50 | w8a8 | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | TFLITE | 0.659 ms | 0 - 49 MB | NPU | [ResNeXt50.tflite](https://huggingface.co/qualcomm/ResNeXt50/blob/main/ResNeXt50_w8a8.tflite) |
| ResNeXt50 | w8a8 | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | QNN_DLC | 0.712 ms | 0 - 56 MB | NPU | [ResNeXt50.dlc](https://huggingface.co/qualcomm/ResNeXt50/blob/main/ResNeXt50_w8a8.dlc) |
| ResNeXt50 | w8a8 | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN_DLC | 1.213 ms | 104 - 104 MB | NPU | [ResNeXt50.dlc](https://huggingface.co/qualcomm/ResNeXt50/blob/main/ResNeXt50_w8a8.dlc) |
## Installation
Install the package via pip:
```bash
pip install qai-hub-models
```
## Configure Qualcomm® AI Hub to run this model on a cloud-hosted device
Sign in to [Qualcomm® AI Hub](https://app.aihub.qualcomm.com/) with your
Qualcomm® ID. Once signed in, navigate to `Account -> Settings -> API Token`.
With this API token, you can configure your client to run models on the
cloud-hosted devices.
```bash
qai-hub configure --api_token API_TOKEN
```
Navigate to [docs](https://app.aihub.qualcomm.com/docs/) for more information.
## Demo off target
The package contains a simple end-to-end demo that downloads pre-trained
weights and runs this model on a sample input.
```bash
python -m qai_hub_models.models.resnext50.demo
```
The above demo runs a reference implementation of pre-processing, model
inference, and post-processing.
**NOTE**: If you are running in a Jupyter Notebook or Google Colab-like
environment, use the following in your cell instead of the above.
```
%run -m qai_hub_models.models.resnext50.demo
```
### Run model on a cloud-hosted device
In addition to the demo, you can also run the model on a cloud-hosted Qualcomm®
device. This script does the following:
* Performance check on-device on a cloud-hosted device
* Downloads compiled assets that can be deployed on-device for Android.
* Accuracy check between PyTorch and on-device outputs.
```bash
python -m qai_hub_models.models.resnext50.export
```
## How does this work?
This [export script](https://aihub.qualcomm.com/models/resnext50/qai_hub_models/models/ResNeXt50/export.py)
leverages [Qualcomm® AI Hub](https://aihub.qualcomm.com/) to optimize, validate, and deploy this model
on-device. Let's go through each step below in detail:
Step 1: **Compile model for on-device deployment**
To compile a PyTorch model for on-device deployment, we first trace the model
in memory using `torch.jit.trace` and then call the `submit_compile_job` API.
```python
import torch
import qai_hub as hub
from qai_hub_models.models.resnext50 import Model
# Load the model
torch_model = Model.from_pretrained()
# Device
device = hub.Device("Samsung Galaxy S24")
# Trace model
input_shape = torch_model.get_input_spec()
sample_inputs = torch_model.sample_inputs()
pt_model = torch.jit.trace(torch_model, [torch.tensor(data[0]) for _, data in sample_inputs.items()])
# Compile model on a specific device
compile_job = hub.submit_compile_job(
model=pt_model,
device=device,
input_specs=torch_model.get_input_spec(),
)
# Get target model to run on-device
target_model = compile_job.get_target_model()
```
Step 2: **Performance profiling on cloud-hosted device**
After compiling the model in Step 1, it can be profiled on-device using the
`target_model`. Note that this script runs the model on a device automatically
provisioned in the cloud. Once the job is submitted, you can navigate to a
provided job URL to view a variety of on-device performance metrics.
```python
profile_job = hub.submit_profile_job(
model=target_model,
device=device,
)
```
Step 3: **Verify on-device accuracy**
To verify the accuracy of the model on-device, you can run on-device inference
on sample input data on the same cloud-hosted device.
```python
input_data = torch_model.sample_inputs()
inference_job = hub.submit_inference_job(
model=target_model,
device=device,
inputs=input_data,
)
on_device_output = inference_job.download_output_data()
```
With the output of the model, you can compute metrics such as PSNR or relative
error, or spot-check the output against the expected output.
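For an Imagenet classifier like this one, a natural spot check is whether the on-device output predicts the same class as the PyTorch reference (a sketch; `top1_agrees` is a hypothetical helper, not part of `qai_hub_models`):

```python
import numpy as np

def top1_agrees(reference_logits, device_logits):
    """True when both outputs pick the same top-1 Imagenet class."""
    return int(np.argmax(reference_logits)) == int(np.argmax(device_logits))

# e.g. compare the logits from torch_model against the logits downloaded
# via inference_job.download_output_data()
```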
**Note**: This on-device profiling and inference requires access to Qualcomm®
AI Hub. [Sign up for access](https://myaccount.qualcomm.com/signup).
## Run demo on a cloud-hosted device
You can also run the demo on-device.
```bash
python -m qai_hub_models.models.resnext50.demo --eval-mode on-device
```
**NOTE**: If you are running in a Jupyter Notebook or Google Colab-like
environment, use the following in your cell instead of the above.
```
%run -m qai_hub_models.models.resnext50.demo -- --eval-mode on-device
```
## Deploying compiled model to Android
The models can be deployed using multiple runtimes:
- TensorFlow Lite (`.tflite` export): [This
tutorial](https://www.tensorflow.org/lite/android/quickstart) provides a
guide to deploy the .tflite model in an Android application.
- QNN (`.so` export): This [sample
app](https://docs.qualcomm.com/bundle/publicresource/topics/80-63442-50/sample_app.html)
provides instructions on how to use the `.so` shared library in an Android application.
## View on Qualcomm® AI Hub
Get more details on ResNeXt50's performance across various devices [here](https://aihub.qualcomm.com/models/resnext50).
Explore all available models on [Qualcomm® AI Hub](https://aihub.qualcomm.com/).
## License
* The license for the original implementation of ResNeXt50 can be found
[here](https://github.com/pytorch/vision/blob/main/LICENSE).
* The license for the compiled assets for on-device deployment can be found [here](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/Qualcomm+AI+Hub+Proprietary+License.pdf)
## References
* [Aggregated Residual Transformations for Deep Neural Networks](https://arxiv.org/abs/1611.05431)
* [Source Model Implementation](https://github.com/pytorch/vision/blob/main/torchvision/models/resnet.py)
## Community
* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
* For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com).
|
amethyst9/500316
|
amethyst9
| 2025-08-29T23:40:55Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-29T23:40:50Z |
[View on Civ Archive](https://civarchive.com/models/526394?modelVersionId=584869)
|
ultratopaz/541158
|
ultratopaz
| 2025-08-29T23:40:41Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-29T23:40:35Z |
[View on Civ Archive](https://civarchive.com/models/562216?modelVersionId=626268)
|
seraphimzzzz/840194
|
seraphimzzzz
| 2025-08-29T23:40:10Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-29T23:40:02Z |
[View on Civ Archive](https://civarchive.com/models/832482?modelVersionId=931250)
|
keras/qwen3_32b_en
|
keras
| 2025-08-29T23:39:47Z | 0 | 0 |
keras-hub
|
[
"keras-hub",
"text-generation",
"region:us"
] |
text-generation
| 2025-08-29T23:27:47Z |
---
library_name: keras-hub
pipeline_tag: text-generation
---
This is a [`Qwen3` model](https://keras.io/api/keras_hub/models/qwen3) uploaded using the KerasHub library and can be used with JAX, TensorFlow, and PyTorch backends.
This model is related to a `CausalLM` task.
Model config:
* **name:** qwen3_backbone
* **trainable:** True
* **vocabulary_size:** 151936
* **num_layers:** 64
* **num_query_heads:** 64
* **hidden_dim:** 5120
* **head_dim:** 128
* **intermediate_dim:** 25600
* **rope_max_wavelength:** 1000000
* **rope_scaling_factor:** 1.0
* **num_key_value_heads:** 8
* **layer_norm_epsilon:** 1e-06
* **dropout:** 0.0
* **tie_word_embeddings:** False
* **sliding_window_size:** None
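As a sanity check, the dimensions listed above roughly reproduce the 32B parameter count implied by the model name. This is a back-of-the-envelope sketch that assumes untied embeddings, standard grouped-query attention projections, and a SwiGLU-style MLP with three projections; norm weights are ignored:

```python
# Config values copied from the list above.
vocab, layers, hidden = 151_936, 64, 5_120
q_heads, kv_heads, head_dim = 64, 8, 128
intermediate = 25_600

embedding = vocab * hidden                       # input token embedding
lm_head = vocab * hidden                         # untied output projection
attention = (hidden * q_heads * head_dim         # query projection
             + 2 * hidden * kv_heads * head_dim  # key and value projections
             + q_heads * head_dim * hidden)      # output projection
mlp = 3 * hidden * intermediate                  # gate, up and down projections
total = embedding + lm_head + layers * (attention + mlp)
print(f"~{total / 1e9:.1f}B parameters")         # ~32.8B parameters
```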
This model card has been generated automatically and should be completed by the model author. See [Model Cards documentation](https://huggingface.co/docs/hub/model-cards) for more information.
|
crystalline7/553452
|
crystalline7
| 2025-08-29T23:39:45Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-29T23:39:40Z |
[View on Civ Archive](https://civarchive.com/models/562057?modelVersionId=638692)
|
JW17/Q3-4B-Base-icrm-lam0.1-v0.1
|
JW17
| 2025-08-29T23:39:36Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-29T09:46:56Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
qualcomm/ResNet-3D
|
qualcomm
| 2025-08-29T23:39:09Z | 30 | 0 |
pytorch
|
[
"pytorch",
"tflite",
"backbone",
"android",
"video-classification",
"arxiv:1711.11248",
"license:other",
"region:us"
] |
video-classification
| 2025-01-03T18:47:42Z |
---
library_name: pytorch
license: other
tags:
- backbone
- android
pipeline_tag: video-classification
---

# ResNet-3D: Optimized for Mobile Deployment
## Sports and human action recognition in videos
ResNet 3D is a network with 3D convolutions used for video understanding.
This model is an implementation of ResNet-3D found [here](https://github.com/pytorch/vision/blob/main/torchvision/models/video/resnet.py).
This repository provides scripts to run ResNet-3D on Qualcomm® devices.
More details on model performance across various devices can be found
[here](https://aihub.qualcomm.com/models/resnet_3d).
### Model Details
- **Model Type:** Model_use_case.video_classification
- **Model Stats:**
- Model checkpoint: Kinetics-400
- Input resolution: 112x112
- Number of parameters: 33.4M
- Model size (float): 127 MB
- Model size (w8a8): 32.1 MB
| Model | Precision | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Primary Compute Unit | Target Model
|---|---|---|---|---|---|---|---|---|
| ResNet-3D | float | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | TFLITE | 112.823 ms | 29 - 72 MB | NPU | [ResNet-3D.tflite](https://huggingface.co/qualcomm/ResNet-3D/blob/main/ResNet-3D.tflite) |
| ResNet-3D | float | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | QNN_DLC | 90.959 ms | 0 - 78 MB | NPU | [ResNet-3D.dlc](https://huggingface.co/qualcomm/ResNet-3D/blob/main/ResNet-3D.dlc) |
| ResNet-3D | float | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | TFLITE | 35.307 ms | 29 - 74 MB | NPU | [ResNet-3D.tflite](https://huggingface.co/qualcomm/ResNet-3D/blob/main/ResNet-3D.tflite) |
| ResNet-3D | float | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | QNN_DLC | 26.492 ms | 2 - 64 MB | NPU | [ResNet-3D.dlc](https://huggingface.co/qualcomm/ResNet-3D/blob/main/ResNet-3D.dlc) |
| ResNet-3D | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | TFLITE | 22.688 ms | 1 - 1042 MB | NPU | [ResNet-3D.tflite](https://huggingface.co/qualcomm/ResNet-3D/blob/main/ResNet-3D.tflite) |
| ResNet-3D | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | QNN_DLC | 13.093 ms | 2 - 24 MB | NPU | [ResNet-3D.dlc](https://huggingface.co/qualcomm/ResNet-3D/blob/main/ResNet-3D.dlc) |
| ResNet-3D | float | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | TFLITE | 36.112 ms | 29 - 72 MB | NPU | [ResNet-3D.tflite](https://huggingface.co/qualcomm/ResNet-3D/blob/main/ResNet-3D.tflite) |
| ResNet-3D | float | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | QNN_DLC | 24.237 ms | 2 - 81 MB | NPU | [ResNet-3D.dlc](https://huggingface.co/qualcomm/ResNet-3D/blob/main/ResNet-3D.dlc) |
| ResNet-3D | float | SA7255P ADP | Qualcomm® SA7255P | TFLITE | 112.823 ms | 29 - 72 MB | NPU | [ResNet-3D.tflite](https://huggingface.co/qualcomm/ResNet-3D/blob/main/ResNet-3D.tflite) |
| ResNet-3D | float | SA7255P ADP | Qualcomm® SA7255P | QNN_DLC | 90.959 ms | 0 - 78 MB | NPU | [ResNet-3D.dlc](https://huggingface.co/qualcomm/ResNet-3D/blob/main/ResNet-3D.dlc) |
| ResNet-3D | float | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | TFLITE | 22.777 ms | 3 - 1006 MB | NPU | [ResNet-3D.tflite](https://huggingface.co/qualcomm/ResNet-3D/blob/main/ResNet-3D.tflite) |
| ResNet-3D | float | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | QNN_DLC | 13.066 ms | 2 - 22 MB | NPU | [ResNet-3D.dlc](https://huggingface.co/qualcomm/ResNet-3D/blob/main/ResNet-3D.dlc) |
| ResNet-3D | float | SA8295P ADP | Qualcomm® SA8295P | TFLITE | 40.83 ms | 29 - 61 MB | NPU | [ResNet-3D.tflite](https://huggingface.co/qualcomm/ResNet-3D/blob/main/ResNet-3D.tflite) |
| ResNet-3D | float | SA8295P ADP | Qualcomm® SA8295P | QNN_DLC | 25.902 ms | 0 - 67 MB | NPU | [ResNet-3D.dlc](https://huggingface.co/qualcomm/ResNet-3D/blob/main/ResNet-3D.dlc) |
| ResNet-3D | float | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | TFLITE | 22.702 ms | 2 - 1017 MB | NPU | [ResNet-3D.tflite](https://huggingface.co/qualcomm/ResNet-3D/blob/main/ResNet-3D.tflite) |
| ResNet-3D | float | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | QNN_DLC | 13.092 ms | 2 - 28 MB | NPU | [ResNet-3D.dlc](https://huggingface.co/qualcomm/ResNet-3D/blob/main/ResNet-3D.dlc) |
| ResNet-3D | float | SA8775P ADP | Qualcomm® SA8775P | TFLITE | 36.112 ms | 29 - 72 MB | NPU | [ResNet-3D.tflite](https://huggingface.co/qualcomm/ResNet-3D/blob/main/ResNet-3D.tflite) |
| ResNet-3D | float | SA8775P ADP | Qualcomm® SA8775P | QNN_DLC | 24.237 ms | 2 - 81 MB | NPU | [ResNet-3D.dlc](https://huggingface.co/qualcomm/ResNet-3D/blob/main/ResNet-3D.dlc) |
| ResNet-3D | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | TFLITE | 22.279 ms | 0 - 1016 MB | NPU | [ResNet-3D.tflite](https://huggingface.co/qualcomm/ResNet-3D/blob/main/ResNet-3D.tflite) |
| ResNet-3D | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | QNN_DLC | 13.083 ms | 0 - 27 MB | NPU | [ResNet-3D.dlc](https://huggingface.co/qualcomm/ResNet-3D/blob/main/ResNet-3D.dlc) |
| ResNet-3D | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | ONNX | 13.689 ms | 2 - 15 MB | NPU | [ResNet-3D.onnx.zip](https://huggingface.co/qualcomm/ResNet-3D/blob/main/ResNet-3D.onnx.zip) |
| ResNet-3D | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | TFLITE | 16.874 ms | 27 - 88 MB | NPU | [ResNet-3D.tflite](https://huggingface.co/qualcomm/ResNet-3D/blob/main/ResNet-3D.tflite) |
| ResNet-3D | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | QNN_DLC | 9.364 ms | 2 - 93 MB | NPU | [ResNet-3D.dlc](https://huggingface.co/qualcomm/ResNet-3D/blob/main/ResNet-3D.dlc) |
| ResNet-3D | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | ONNX | 10.607 ms | 2 - 74 MB | NPU | [ResNet-3D.onnx.zip](https://huggingface.co/qualcomm/ResNet-3D/blob/main/ResNet-3D.onnx.zip) |
| ResNet-3D | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | TFLITE | 14.473 ms | 28 - 68 MB | NPU | [ResNet-3D.tflite](https://huggingface.co/qualcomm/ResNet-3D/blob/main/ResNet-3D.tflite) |
| ResNet-3D | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | QNN_DLC | 7.503 ms | 2 - 84 MB | NPU | [ResNet-3D.dlc](https://huggingface.co/qualcomm/ResNet-3D/blob/main/ResNet-3D.dlc) |
| ResNet-3D | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | ONNX | 9.99 ms | 2 - 55 MB | NPU | [ResNet-3D.onnx.zip](https://huggingface.co/qualcomm/ResNet-3D/blob/main/ResNet-3D.onnx.zip) |
| ResNet-3D | float | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN_DLC | 13.566 ms | 1007 - 1007 MB | NPU | [ResNet-3D.dlc](https://huggingface.co/qualcomm/ResNet-3D/blob/main/ResNet-3D.dlc) |
| ResNet-3D | float | Snapdragon X Elite CRD | Snapdragon® X Elite | ONNX | 14.584 ms | 64 - 64 MB | NPU | [ResNet-3D.onnx.zip](https://huggingface.co/qualcomm/ResNet-3D/blob/main/ResNet-3D.onnx.zip) |
| ResNet-3D | w8a8 | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | QNN_DLC | 13.173 ms | 1 - 32 MB | NPU | [ResNet-3D.dlc](https://huggingface.co/qualcomm/ResNet-3D/blob/main/ResNet-3D_w8a8.dlc) |
| ResNet-3D | w8a8 | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | QNN_DLC | 5.621 ms | 1 - 81 MB | NPU | [ResNet-3D.dlc](https://huggingface.co/qualcomm/ResNet-3D/blob/main/ResNet-3D_w8a8.dlc) |
| ResNet-3D | w8a8 | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | QNN_DLC | 3.767 ms | 0 - 12 MB | NPU | [ResNet-3D.dlc](https://huggingface.co/qualcomm/ResNet-3D/blob/main/ResNet-3D_w8a8.dlc) |
| ResNet-3D | w8a8 | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | QNN_DLC | 4.138 ms | 1 - 32 MB | NPU | [ResNet-3D.dlc](https://huggingface.co/qualcomm/ResNet-3D/blob/main/ResNet-3D_w8a8.dlc) |
| ResNet-3D | w8a8 | RB3 Gen 2 (Proxy) | Qualcomm® QCS6490 (Proxy) | QNN_DLC | 23.996 ms | 1 - 60 MB | NPU | [ResNet-3D.dlc](https://huggingface.co/qualcomm/ResNet-3D/blob/main/ResNet-3D_w8a8.dlc) |
| ResNet-3D | w8a8 | SA7255P ADP | Qualcomm® SA7255P | QNN_DLC | 13.173 ms | 1 - 32 MB | NPU | [ResNet-3D.dlc](https://huggingface.co/qualcomm/ResNet-3D/blob/main/ResNet-3D_w8a8.dlc) |
| ResNet-3D | w8a8 | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | QNN_DLC | 3.752 ms | 1 - 10 MB | NPU | [ResNet-3D.dlc](https://huggingface.co/qualcomm/ResNet-3D/blob/main/ResNet-3D_w8a8.dlc) |
| ResNet-3D | w8a8 | SA8295P ADP | Qualcomm® SA8295P | QNN_DLC | 7.471 ms | 1 - 35 MB | NPU | [ResNet-3D.dlc](https://huggingface.co/qualcomm/ResNet-3D/blob/main/ResNet-3D_w8a8.dlc) |
| ResNet-3D | w8a8 | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | QNN_DLC | 3.755 ms | 0 - 11 MB | NPU | [ResNet-3D.dlc](https://huggingface.co/qualcomm/ResNet-3D/blob/main/ResNet-3D_w8a8.dlc) |
| ResNet-3D | w8a8 | SA8775P ADP | Qualcomm® SA8775P | QNN_DLC | 4.138 ms | 1 - 32 MB | NPU | [ResNet-3D.dlc](https://huggingface.co/qualcomm/ResNet-3D/blob/main/ResNet-3D_w8a8.dlc) |
| ResNet-3D | w8a8 | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | QNN_DLC | 3.755 ms | 0 - 12 MB | NPU | [ResNet-3D.dlc](https://huggingface.co/qualcomm/ResNet-3D/blob/main/ResNet-3D_w8a8.dlc) |
| ResNet-3D | w8a8 | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | QNN_DLC | 2.747 ms | 1 - 76 MB | NPU | [ResNet-3D.dlc](https://huggingface.co/qualcomm/ResNet-3D/blob/main/ResNet-3D_w8a8.dlc) |
| ResNet-3D | w8a8 | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | QNN_DLC | 2.601 ms | 1 - 37 MB | NPU | [ResNet-3D.dlc](https://huggingface.co/qualcomm/ResNet-3D/blob/main/ResNet-3D_w8a8.dlc) |
| ResNet-3D | w8a8 | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN_DLC | 4.031 ms | 357 - 357 MB | NPU | [ResNet-3D.dlc](https://huggingface.co/qualcomm/ResNet-3D/blob/main/ResNet-3D_w8a8.dlc) |
## Installation
Install the package via pip:
```bash
pip install "qai-hub-models[resnet-3d]"
```
## Configure Qualcomm® AI Hub to run this model on a cloud-hosted device
Sign-in to [Qualcomm® AI Hub](https://app.aihub.qualcomm.com/) with your
Qualcomm® ID. Once signed in, navigate to `Account -> Settings -> API Token`.
With this API token, you can configure your client to run models on the cloud
hosted devices.
```bash
qai-hub configure --api_token API_TOKEN
```
Navigate to [docs](https://app.aihub.qualcomm.com/docs/) for more information.
## Demo off target
The package contains a simple end-to-end demo that downloads pre-trained
weights and runs this model on a sample input.
```bash
python -m qai_hub_models.models.resnet_3d.demo
```
The above demo runs a reference implementation of pre-processing, model
inference, and post-processing.
**NOTE**: If you want to run this in a Jupyter Notebook or Google Colab-like
environment, add the following to your cell (instead of the above).
```
%run -m qai_hub_models.models.resnet_3d.demo
```
### Run model on a cloud-hosted device
In addition to the demo, you can also run the model on a cloud-hosted Qualcomm®
device. This script does the following:
* Performance check on-device on a cloud-hosted device
* Downloads compiled assets that can be deployed on-device for Android.
* Accuracy check between PyTorch and on-device outputs.
```bash
python -m qai_hub_models.models.resnet_3d.export
```
## How does this work?
This [export script](https://aihub.qualcomm.com/models/resnet_3d/qai_hub_models/models/ResNet-3D/export.py)
leverages [Qualcomm® AI Hub](https://aihub.qualcomm.com/) to optimize, validate, and deploy this model
on-device. Let's go through each step below in detail:
Step 1: **Compile model for on-device deployment**
To compile a PyTorch model for on-device deployment, we first trace the model
in memory using `torch.jit.trace` and then call the `submit_compile_job` API.
```python
import torch
import qai_hub as hub
from qai_hub_models.models.resnet_3d import Model
# Load the model
torch_model = Model.from_pretrained()
# Device
device = hub.Device("Samsung Galaxy S24")
# Trace model
input_shape = torch_model.get_input_spec()
sample_inputs = torch_model.sample_inputs()
pt_model = torch.jit.trace(torch_model, [torch.tensor(data[0]) for _, data in sample_inputs.items()])
# Compile model on a specific device
compile_job = hub.submit_compile_job(
model=pt_model,
device=device,
input_specs=torch_model.get_input_spec(),
)
# Get target model to run on-device
target_model = compile_job.get_target_model()
```
Step 2: **Performance profiling on cloud-hosted device**
After compiling the model in step 1, it can be profiled on-device using the
`target_model`. Note that this script runs the model on a device automatically
provisioned in the cloud. Once the job is submitted, you can navigate to a
provided job URL to view a variety of on-device performance metrics.
```python
profile_job = hub.submit_profile_job(
model=target_model,
device=device,
)
```
Step 3: **Verify on-device accuracy**
To verify the accuracy of the model on-device, you can run on-device inference
on sample input data on the same cloud-hosted device.
```python
input_data = torch_model.sample_inputs()
inference_job = hub.submit_inference_job(
model=target_model,
device=device,
inputs=input_data,
)
on_device_output = inference_job.download_output_data()
```
With the output of the model, you can compute metrics like PSNR or relative
error, or spot-check the output against the expected output.
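As an example, a minimal comparison sketch (assuming float outputs on a comparable scale; this helper is not part of the AI Hub API) could look like:

```python
import numpy as np

def psnr(reference: np.ndarray, candidate: np.ndarray, peak: float = 1.0) -> float:
    """Peak signal-to-noise ratio in dB between a reference and a candidate output."""
    mse = np.mean((reference.astype(np.float64) - candidate.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical outputs
    return 10.0 * np.log10((peak ** 2) / mse)

def relative_error(reference: np.ndarray, candidate: np.ndarray) -> float:
    """L2 norm of the difference, relative to the reference norm."""
    return float(np.linalg.norm(reference - candidate) / (np.linalg.norm(reference) + 1e-12))

ref = np.linspace(0.0, 1.0, 16, dtype=np.float32)
noisy = ref + 0.01
print(round(psnr(ref, noisy), 1))  # 40.0
```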
**Note**: This on-device profiling and inference requires access to Qualcomm®
AI Hub. [Sign up for access](https://myaccount.qualcomm.com/signup).
## Deploying compiled model to Android
The models can be deployed using multiple runtimes:
- TensorFlow Lite (`.tflite` export): [This
tutorial](https://www.tensorflow.org/lite/android/quickstart) provides a
guide to deploy the .tflite model in an Android application.
- QNN (`.so` export): This [sample
app](https://docs.qualcomm.com/bundle/publicresource/topics/80-63442-50/sample_app.html)
provides instructions on how to use the `.so` shared library in an Android application.
## View on Qualcomm® AI Hub
Get more details on ResNet-3D's performance across various devices [here](https://aihub.qualcomm.com/models/resnet_3d).
Explore all available models on [Qualcomm® AI Hub](https://aihub.qualcomm.com/).
## License
* The license for the original implementation of ResNet-3D can be found
[here](https://github.com/pytorch/vision/blob/main/LICENSE).
* The license for the compiled assets for on-device deployment can be found [here](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/Qualcomm+AI+Hub+Proprietary+License.pdf)
## References
* [A Closer Look at Spatiotemporal Convolutions for Action Recognition](https://arxiv.org/abs/1711.11248)
* [Source Model Implementation](https://github.com/pytorch/vision/blob/main/torchvision/models/video/resnet.py)
## Community
* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
* For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com).
|
amethyst9/631393
|
amethyst9
| 2025-08-29T23:39:00Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-29T23:39:00Z |
[View on Civ Archive](https://civarchive.com/models/550978?modelVersionId=716698)
|
tamewild/4b_v66_merged_e8
|
tamewild
| 2025-08-29T23:38:43Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-29T23:37:41Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
crystalline7/1098441
|
crystalline7
| 2025-08-29T23:38:13Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-29T23:38:04Z |
[View on Civ Archive](https://civarchive.com/models/1061128?modelVersionId=1190883)
|
seraphimzzzz/504305
|
seraphimzzzz
| 2025-08-29T23:36:58Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-29T23:36:52Z |
[View on Civ Archive](https://civarchive.com/models/530102?modelVersionId=589064)
|
ultratopaz/1792945
|
ultratopaz
| 2025-08-29T23:36:41Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-29T23:36:33Z |
[View on Civ Archive](https://civarchive.com/models/1672438?modelVersionId=1892968)
|
iko-01/mathiko3
|
iko-01
| 2025-08-29T23:36:33Z | 0 | 0 | null |
[
"safetensors",
"gpt2",
"license:apache-2.0",
"region:us"
] | null | 2025-08-29T23:34:55Z |
---
license: apache-2.0
---
|
qualcomm/Real-ESRGAN-x4plus
|
qualcomm
| 2025-08-29T23:36:27Z | 417 | 74 |
pytorch
|
[
"pytorch",
"tflite",
"android",
"image-to-image",
"arxiv:2107.10833",
"license:other",
"region:us"
] |
image-to-image
| 2024-02-25T23:09:02Z |
---
library_name: pytorch
license: other
tags:
- android
pipeline_tag: image-to-image
---

# Real-ESRGAN-x4plus: Optimized for Mobile Deployment
## Upscale images and remove image noise
Real-ESRGAN is a machine learning model that upscales an image with minimal loss in quality. The implementation is a derivative of the Real-ESRGAN-x4plus architecture, a larger and more powerful version than the Real-ESRGAN-general-x4v3 architecture.
This model is an implementation of Real-ESRGAN-x4plus found [here](https://github.com/xinntao/Real-ESRGAN).
This repository provides scripts to run Real-ESRGAN-x4plus on Qualcomm® devices.
More details on model performance across various devices can be found
[here](https://aihub.qualcomm.com/models/real_esrgan_x4plus).
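Since the compiled model takes a fixed 128x128 input, larger images are typically processed tile by tile. A minimal sketch of that idea (with a nearest-neighbor stand-in for the actual model and no overlap handling — both assumptions for illustration):

```python
import numpy as np

TILE, SCALE = 128, 4  # the model's input size and upscale factor

def upscale_tile(tile: np.ndarray) -> np.ndarray:
    # Placeholder for model inference: nearest-neighbor 4x upsample.
    return tile.repeat(SCALE, axis=0).repeat(SCALE, axis=1)

def upscale_image(image: np.ndarray) -> np.ndarray:
    """Upscale an (H, W, C) image whose sides are multiples of TILE."""
    h, w, c = image.shape
    assert h % TILE == 0 and w % TILE == 0, "pad the image to a tile multiple first"
    out = np.zeros((h * SCALE, w * SCALE, c), dtype=image.dtype)
    for y in range(0, h, TILE):
        for x in range(0, w, TILE):
            out[y*SCALE:(y+TILE)*SCALE, x*SCALE:(x+TILE)*SCALE] = \
                upscale_tile(image[y:y+TILE, x:x+TILE])
    return out

img = np.zeros((256, 384, 3), dtype=np.uint8)
print(upscale_image(img).shape)  # (1024, 1536, 3)
```

In practice, tiles are usually overlapped and blended to avoid visible seams at tile boundaries.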
### Model Details
- **Model Type:** Model_use_case.super_resolution
- **Model Stats:**
- Model checkpoint: RealESRGAN_x4plus
- Input resolution: 128x128
- Number of parameters: 16.7M
- Model size (float): 63.9 MB
- Model size (w8a8): 16.7 MB
| Model | Precision | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Primary Compute Unit | Target Model
|---|---|---|---|---|---|---|---|---|
| Real-ESRGAN-x4plus | float | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | TFLITE | 454.82 ms | 3 - 194 MB | NPU | [Real-ESRGAN-x4plus.tflite](https://huggingface.co/qualcomm/Real-ESRGAN-x4plus/blob/main/Real-ESRGAN-x4plus.tflite) |
| Real-ESRGAN-x4plus | float | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | QNN_DLC | 448.688 ms | 55 - 201 MB | NPU | [Real-ESRGAN-x4plus.dlc](https://huggingface.co/qualcomm/Real-ESRGAN-x4plus/blob/main/Real-ESRGAN-x4plus.dlc) |
| Real-ESRGAN-x4plus | float | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | TFLITE | 133.935 ms | 3 - 166 MB | NPU | [Real-ESRGAN-x4plus.tflite](https://huggingface.co/qualcomm/Real-ESRGAN-x4plus/blob/main/Real-ESRGAN-x4plus.tflite) |
| Real-ESRGAN-x4plus | float | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | QNN_DLC | 130.742 ms | 0 - 170 MB | NPU | [Real-ESRGAN-x4plus.dlc](https://huggingface.co/qualcomm/Real-ESRGAN-x4plus/blob/main/Real-ESRGAN-x4plus.dlc) |
| Real-ESRGAN-x4plus | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | TFLITE | 66.745 ms | 0 - 93 MB | NPU | [Real-ESRGAN-x4plus.tflite](https://huggingface.co/qualcomm/Real-ESRGAN-x4plus/blob/main/Real-ESRGAN-x4plus.tflite) |
| Real-ESRGAN-x4plus | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | QNN_DLC | 64.807 ms | 0 - 46 MB | NPU | [Real-ESRGAN-x4plus.dlc](https://huggingface.co/qualcomm/Real-ESRGAN-x4plus/blob/main/Real-ESRGAN-x4plus.dlc) |
| Real-ESRGAN-x4plus | float | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | TFLITE | 109.042 ms | 3 - 195 MB | NPU | [Real-ESRGAN-x4plus.tflite](https://huggingface.co/qualcomm/Real-ESRGAN-x4plus/blob/main/Real-ESRGAN-x4plus.tflite) |
| Real-ESRGAN-x4plus | float | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | QNN_DLC | 105.345 ms | 0 - 146 MB | NPU | [Real-ESRGAN-x4plus.dlc](https://huggingface.co/qualcomm/Real-ESRGAN-x4plus/blob/main/Real-ESRGAN-x4plus.dlc) |
| Real-ESRGAN-x4plus | float | SA7255P ADP | Qualcomm® SA7255P | TFLITE | 454.82 ms | 3 - 194 MB | NPU | [Real-ESRGAN-x4plus.tflite](https://huggingface.co/qualcomm/Real-ESRGAN-x4plus/blob/main/Real-ESRGAN-x4plus.tflite) |
| Real-ESRGAN-x4plus | float | SA7255P ADP | Qualcomm® SA7255P | QNN_DLC | 448.688 ms | 55 - 201 MB | NPU | [Real-ESRGAN-x4plus.dlc](https://huggingface.co/qualcomm/Real-ESRGAN-x4plus/blob/main/Real-ESRGAN-x4plus.dlc) |
| Real-ESRGAN-x4plus | float | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | TFLITE | 66.429 ms | 2 - 42 MB | NPU | [Real-ESRGAN-x4plus.tflite](https://huggingface.co/qualcomm/Real-ESRGAN-x4plus/blob/main/Real-ESRGAN-x4plus.tflite) |
| Real-ESRGAN-x4plus | float | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | QNN_DLC | 64.956 ms | 0 - 46 MB | NPU | [Real-ESRGAN-x4plus.dlc](https://huggingface.co/qualcomm/Real-ESRGAN-x4plus/blob/main/Real-ESRGAN-x4plus.dlc) |
| Real-ESRGAN-x4plus | float | SA8295P ADP | Qualcomm® SA8295P | TFLITE | 113.957 ms | 0 - 152 MB | NPU | [Real-ESRGAN-x4plus.tflite](https://huggingface.co/qualcomm/Real-ESRGAN-x4plus/blob/main/Real-ESRGAN-x4plus.tflite) |
| Real-ESRGAN-x4plus | float | SA8295P ADP | Qualcomm® SA8295P | QNN_DLC | 110.406 ms | 0 - 158 MB | NPU | [Real-ESRGAN-x4plus.dlc](https://huggingface.co/qualcomm/Real-ESRGAN-x4plus/blob/main/Real-ESRGAN-x4plus.dlc) |
| Real-ESRGAN-x4plus | float | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | TFLITE | 70.22 ms | 0 - 90 MB | NPU | [Real-ESRGAN-x4plus.tflite](https://huggingface.co/qualcomm/Real-ESRGAN-x4plus/blob/main/Real-ESRGAN-x4plus.tflite) |
| Real-ESRGAN-x4plus | float | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | QNN_DLC | 65.69 ms | 0 - 44 MB | NPU | [Real-ESRGAN-x4plus.dlc](https://huggingface.co/qualcomm/Real-ESRGAN-x4plus/blob/main/Real-ESRGAN-x4plus.dlc) |
| Real-ESRGAN-x4plus | float | SA8775P ADP | Qualcomm® SA8775P | TFLITE | 109.042 ms | 3 - 195 MB | NPU | [Real-ESRGAN-x4plus.tflite](https://huggingface.co/qualcomm/Real-ESRGAN-x4plus/blob/main/Real-ESRGAN-x4plus.tflite) |
| Real-ESRGAN-x4plus | float | SA8775P ADP | Qualcomm® SA8775P | QNN_DLC | 105.345 ms | 0 - 146 MB | NPU | [Real-ESRGAN-x4plus.dlc](https://huggingface.co/qualcomm/Real-ESRGAN-x4plus/blob/main/Real-ESRGAN-x4plus.dlc) |
| Real-ESRGAN-x4plus | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | TFLITE | 73.954 ms | 0 - 105 MB | NPU | [Real-ESRGAN-x4plus.tflite](https://huggingface.co/qualcomm/Real-ESRGAN-x4plus/blob/main/Real-ESRGAN-x4plus.tflite) |
| Real-ESRGAN-x4plus | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | QNN_DLC | 64.287 ms | 0 - 35 MB | NPU | [Real-ESRGAN-x4plus.dlc](https://huggingface.co/qualcomm/Real-ESRGAN-x4plus/blob/main/Real-ESRGAN-x4plus.dlc) |
| Real-ESRGAN-x4plus | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | ONNX | 68.535 ms | 6 - 79 MB | NPU | [Real-ESRGAN-x4plus.onnx.zip](https://huggingface.co/qualcomm/Real-ESRGAN-x4plus/blob/main/Real-ESRGAN-x4plus.onnx.zip) |
| Real-ESRGAN-x4plus | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | TFLITE | 53.324 ms | 3 - 201 MB | NPU | [Real-ESRGAN-x4plus.tflite](https://huggingface.co/qualcomm/Real-ESRGAN-x4plus/blob/main/Real-ESRGAN-x4plus.tflite) |
| Real-ESRGAN-x4plus | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | QNN_DLC | 49.975 ms | 0 - 154 MB | NPU | [Real-ESRGAN-x4plus.dlc](https://huggingface.co/qualcomm/Real-ESRGAN-x4plus/blob/main/Real-ESRGAN-x4plus.dlc) |
| Real-ESRGAN-x4plus | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | ONNX | 52.301 ms | 9 - 172 MB | NPU | [Real-ESRGAN-x4plus.onnx.zip](https://huggingface.co/qualcomm/Real-ESRGAN-x4plus/blob/main/Real-ESRGAN-x4plus.onnx.zip) |
| Real-ESRGAN-x4plus | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | TFLITE | 45.01 ms | 3 - 195 MB | NPU | [Real-ESRGAN-x4plus.tflite](https://huggingface.co/qualcomm/Real-ESRGAN-x4plus/blob/main/Real-ESRGAN-x4plus.tflite) |
| Real-ESRGAN-x4plus | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | QNN_DLC | 43.704 ms | 0 - 139 MB | NPU | [Real-ESRGAN-x4plus.dlc](https://huggingface.co/qualcomm/Real-ESRGAN-x4plus/blob/main/Real-ESRGAN-x4plus.dlc) |
| Real-ESRGAN-x4plus | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | ONNX | 42.791 ms | 4 - 133 MB | NPU | [Real-ESRGAN-x4plus.onnx.zip](https://huggingface.co/qualcomm/Real-ESRGAN-x4plus/blob/main/Real-ESRGAN-x4plus.onnx.zip) |
| Real-ESRGAN-x4plus | float | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN_DLC | 65.014 ms | 131 - 131 MB | NPU | [Real-ESRGAN-x4plus.dlc](https://huggingface.co/qualcomm/Real-ESRGAN-x4plus/blob/main/Real-ESRGAN-x4plus.dlc) |
| Real-ESRGAN-x4plus | float | Snapdragon X Elite CRD | Snapdragon® X Elite | ONNX | 65.741 ms | 38 - 38 MB | NPU | [Real-ESRGAN-x4plus.onnx.zip](https://huggingface.co/qualcomm/Real-ESRGAN-x4plus/blob/main/Real-ESRGAN-x4plus.onnx.zip) |
| Real-ESRGAN-x4plus | w8a8 | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | TFLITE | 74.29 ms | 1 - 174 MB | NPU | [Real-ESRGAN-x4plus.tflite](https://huggingface.co/qualcomm/Real-ESRGAN-x4plus/blob/main/Real-ESRGAN-x4plus_w8a8.tflite) |
| Real-ESRGAN-x4plus | w8a8 | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | QNN_DLC | 65.927 ms | 0 - 188 MB | NPU | [Real-ESRGAN-x4plus.dlc](https://huggingface.co/qualcomm/Real-ESRGAN-x4plus/blob/main/Real-ESRGAN-x4plus_w8a8.dlc) |
| Real-ESRGAN-x4plus | w8a8 | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | TFLITE | 36.103 ms | 1 - 169 MB | NPU | [Real-ESRGAN-x4plus.tflite](https://huggingface.co/qualcomm/Real-ESRGAN-x4plus/blob/main/Real-ESRGAN-x4plus_w8a8.tflite) |
| Real-ESRGAN-x4plus | w8a8 | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | QNN_DLC | 36.77 ms | 0 - 193 MB | NPU | [Real-ESRGAN-x4plus.dlc](https://huggingface.co/qualcomm/Real-ESRGAN-x4plus/blob/main/Real-ESRGAN-x4plus_w8a8.dlc) |
| Real-ESRGAN-x4plus | w8a8 | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | TFLITE | 23.512 ms | 0 - 33 MB | NPU | [Real-ESRGAN-x4plus.tflite](https://huggingface.co/qualcomm/Real-ESRGAN-x4plus/blob/main/Real-ESRGAN-x4plus_w8a8.tflite) |
| Real-ESRGAN-x4plus | w8a8 | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | QNN_DLC | 21.468 ms | 0 - 50 MB | NPU | [Real-ESRGAN-x4plus.dlc](https://huggingface.co/qualcomm/Real-ESRGAN-x4plus/blob/main/Real-ESRGAN-x4plus_w8a8.dlc) |
| Real-ESRGAN-x4plus | w8a8 | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | TFLITE | 23.811 ms | 1 - 175 MB | NPU | [Real-ESRGAN-x4plus.tflite](https://huggingface.co/qualcomm/Real-ESRGAN-x4plus/blob/main/Real-ESRGAN-x4plus_w8a8.tflite) |
| Real-ESRGAN-x4plus | w8a8 | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | QNN_DLC | 20.088 ms | 0 - 189 MB | NPU | [Real-ESRGAN-x4plus.dlc](https://huggingface.co/qualcomm/Real-ESRGAN-x4plus/blob/main/Real-ESRGAN-x4plus_w8a8.dlc) |
| Real-ESRGAN-x4plus | w8a8 | RB3 Gen 2 (Proxy) | Qualcomm® QCS6490 (Proxy) | TFLITE | 115.469 ms | 1 - 171 MB | NPU | [Real-ESRGAN-x4plus.tflite](https://huggingface.co/qualcomm/Real-ESRGAN-x4plus/blob/main/Real-ESRGAN-x4plus_w8a8.tflite) |
| Real-ESRGAN-x4plus | w8a8 | RB3 Gen 2 (Proxy) | Qualcomm® QCS6490 (Proxy) | QNN_DLC | 171.611 ms | 0 - 212 MB | NPU | [Real-ESRGAN-x4plus.dlc](https://huggingface.co/qualcomm/Real-ESRGAN-x4plus/blob/main/Real-ESRGAN-x4plus_w8a8.dlc) |
| Real-ESRGAN-x4plus | w8a8 | RB5 (Proxy) | Qualcomm® QCS8250 (Proxy) | TFLITE | 1927.397 ms | 0 - 74 MB | GPU | [Real-ESRGAN-x4plus.tflite](https://huggingface.co/qualcomm/Real-ESRGAN-x4plus/blob/main/Real-ESRGAN-x4plus_w8a8.tflite) |
| Real-ESRGAN-x4plus | w8a8 | SA7255P ADP | Qualcomm® SA7255P | TFLITE | 74.29 ms | 1 - 174 MB | NPU | [Real-ESRGAN-x4plus.tflite](https://huggingface.co/qualcomm/Real-ESRGAN-x4plus/blob/main/Real-ESRGAN-x4plus_w8a8.tflite) |
| Real-ESRGAN-x4plus | w8a8 | SA7255P ADP | Qualcomm® SA7255P | QNN_DLC | 65.927 ms | 0 - 188 MB | NPU | [Real-ESRGAN-x4plus.dlc](https://huggingface.co/qualcomm/Real-ESRGAN-x4plus/blob/main/Real-ESRGAN-x4plus_w8a8.dlc) |
| Real-ESRGAN-x4plus | w8a8 | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | TFLITE | 23.483 ms | 0 - 35 MB | NPU | [Real-ESRGAN-x4plus.tflite](https://huggingface.co/qualcomm/Real-ESRGAN-x4plus/blob/main/Real-ESRGAN-x4plus_w8a8.tflite) |
| Real-ESRGAN-x4plus | w8a8 | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | QNN_DLC | 21.514 ms | 0 - 61 MB | NPU | [Real-ESRGAN-x4plus.dlc](https://huggingface.co/qualcomm/Real-ESRGAN-x4plus/blob/main/Real-ESRGAN-x4plus_w8a8.dlc) |
| Real-ESRGAN-x4plus | w8a8 | SA8295P ADP | Qualcomm® SA8295P | TFLITE | 39.55 ms | 1 - 166 MB | NPU | [Real-ESRGAN-x4plus.tflite](https://huggingface.co/qualcomm/Real-ESRGAN-x4plus/blob/main/Real-ESRGAN-x4plus_w8a8.tflite) |
| Real-ESRGAN-x4plus | w8a8 | SA8295P ADP | Qualcomm® SA8295P | QNN_DLC | 34.332 ms | 0 - 193 MB | NPU | [Real-ESRGAN-x4plus.dlc](https://huggingface.co/qualcomm/Real-ESRGAN-x4plus/blob/main/Real-ESRGAN-x4plus_w8a8.dlc) |
| Real-ESRGAN-x4plus | w8a8 | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | TFLITE | 23.545 ms | 0 - 34 MB | NPU | [Real-ESRGAN-x4plus.tflite](https://huggingface.co/qualcomm/Real-ESRGAN-x4plus/blob/main/Real-ESRGAN-x4plus_w8a8.tflite) |
| Real-ESRGAN-x4plus | w8a8 | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | QNN_DLC | 21.388 ms | 0 - 55 MB | NPU | [Real-ESRGAN-x4plus.dlc](https://huggingface.co/qualcomm/Real-ESRGAN-x4plus/blob/main/Real-ESRGAN-x4plus_w8a8.dlc) |
| Real-ESRGAN-x4plus | w8a8 | SA8775P ADP | Qualcomm® SA8775P | TFLITE | 23.811 ms | 1 - 175 MB | NPU | [Real-ESRGAN-x4plus.tflite](https://huggingface.co/qualcomm/Real-ESRGAN-x4plus/blob/main/Real-ESRGAN-x4plus_w8a8.tflite) |
| Real-ESRGAN-x4plus | w8a8 | SA8775P ADP | Qualcomm® SA8775P | QNN_DLC | 20.088 ms | 0 - 189 MB | NPU | [Real-ESRGAN-x4plus.dlc](https://huggingface.co/qualcomm/Real-ESRGAN-x4plus/blob/main/Real-ESRGAN-x4plus_w8a8.dlc) |
| Real-ESRGAN-x4plus | w8a8 | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | TFLITE | 23.617 ms | 0 - 32 MB | NPU | [Real-ESRGAN-x4plus.tflite](https://huggingface.co/qualcomm/Real-ESRGAN-x4plus/blob/main/Real-ESRGAN-x4plus_w8a8.tflite) |
| Real-ESRGAN-x4plus | w8a8 | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | QNN_DLC | 21.467 ms | 0 - 56 MB | NPU | [Real-ESRGAN-x4plus.dlc](https://huggingface.co/qualcomm/Real-ESRGAN-x4plus/blob/main/Real-ESRGAN-x4plus_w8a8.dlc) |
| Real-ESRGAN-x4plus | w8a8 | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | ONNX | 29.871 ms | 8 - 83 MB | NPU | [Real-ESRGAN-x4plus.onnx.zip](https://huggingface.co/qualcomm/Real-ESRGAN-x4plus/blob/main/Real-ESRGAN-x4plus_w8a8.onnx.zip) |
| Real-ESRGAN-x4plus | w8a8 | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | TFLITE | 17.875 ms | 26 - 206 MB | NPU | [Real-ESRGAN-x4plus.tflite](https://huggingface.co/qualcomm/Real-ESRGAN-x4plus/blob/main/Real-ESRGAN-x4plus_w8a8.tflite) |
| Real-ESRGAN-x4plus | w8a8 | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | QNN_DLC | 14.585 ms | 0 - 189 MB | NPU | [Real-ESRGAN-x4plus.dlc](https://huggingface.co/qualcomm/Real-ESRGAN-x4plus/blob/main/Real-ESRGAN-x4plus_w8a8.dlc) |
| Real-ESRGAN-x4plus | w8a8 | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | ONNX | 21.185 ms | 8 - 254 MB | NPU | [Real-ESRGAN-x4plus.onnx.zip](https://huggingface.co/qualcomm/Real-ESRGAN-x4plus/blob/main/Real-ESRGAN-x4plus_w8a8.onnx.zip) |
| Real-ESRGAN-x4plus | w8a8 | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | TFLITE | 16.023 ms | 1 - 172 MB | NPU | [Real-ESRGAN-x4plus.tflite](https://huggingface.co/qualcomm/Real-ESRGAN-x4plus/blob/main/Real-ESRGAN-x4plus_w8a8.tflite) |
| Real-ESRGAN-x4plus | w8a8 | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | QNN_DLC | 12.439 ms | 0 - 171 MB | NPU | [Real-ESRGAN-x4plus.dlc](https://huggingface.co/qualcomm/Real-ESRGAN-x4plus/blob/main/Real-ESRGAN-x4plus_w8a8.dlc) |
| Real-ESRGAN-x4plus | w8a8 | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | ONNX | 17.424 ms | 6 - 225 MB | NPU | [Real-ESRGAN-x4plus.onnx.zip](https://huggingface.co/qualcomm/Real-ESRGAN-x4plus/blob/main/Real-ESRGAN-x4plus_w8a8.onnx.zip) |
| Real-ESRGAN-x4plus | w8a8 | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN_DLC | 22.683 ms | 66 - 66 MB | NPU | [Real-ESRGAN-x4plus.dlc](https://huggingface.co/qualcomm/Real-ESRGAN-x4plus/blob/main/Real-ESRGAN-x4plus_w8a8.dlc) |
| Real-ESRGAN-x4plus | w8a8 | Snapdragon X Elite CRD | Snapdragon® X Elite | ONNX | 29.532 ms | 21 - 21 MB | NPU | [Real-ESRGAN-x4plus.onnx.zip](https://huggingface.co/qualcomm/Real-ESRGAN-x4plus/blob/main/Real-ESRGAN-x4plus_w8a8.onnx.zip) |
## Installation
Install the package via pip:
```bash
pip install "qai-hub-models[real-esrgan-x4plus]"
```
## Configure Qualcomm® AI Hub to run this model on a cloud-hosted device
Sign-in to [Qualcomm® AI Hub](https://app.aihub.qualcomm.com/) with your
Qualcomm® ID. Once signed in navigate to `Account -> Settings -> API Token`.
With this API token, you can configure your client to run models on the cloud
hosted devices.
```bash
qai-hub configure --api_token API_TOKEN
```
Navigate to [docs](https://app.aihub.qualcomm.com/docs/) for more information.
## Demo off target
The package contains a simple end-to-end demo that downloads pre-trained
weights and runs this model on a sample input.
```bash
python -m qai_hub_models.models.real_esrgan_x4plus.demo
```
The above demo runs a reference implementation of pre-processing, model
inference, and post-processing.
**NOTE**: If you are running this in a Jupyter Notebook or Google Colab-like
environment, use the following in your cell instead of the command above.
```
%run -m qai_hub_models.models.real_esrgan_x4plus.demo
```
### Run model on a cloud-hosted device
In addition to the demo, you can also run the model on a cloud-hosted Qualcomm®
device. This script does the following:
* Performance check on-device on a cloud-hosted device
* Downloads compiled assets that can be deployed on-device for Android.
* Accuracy check between PyTorch and on-device outputs.
```bash
python -m qai_hub_models.models.real_esrgan_x4plus.export
```
## How does this work?
This [export script](https://aihub.qualcomm.com/models/real_esrgan_x4plus/qai_hub_models/models/Real-ESRGAN-x4plus/export.py)
leverages [Qualcomm® AI Hub](https://aihub.qualcomm.com/) to optimize, validate, and deploy this model
on-device. Let's go through each step below in detail:
Step 1: **Compile model for on-device deployment**
To compile a PyTorch model for on-device deployment, we first trace the model
in memory using `torch.jit.trace` and then call the `submit_compile_job` API.
```python
import torch
import qai_hub as hub
from qai_hub_models.models.real_esrgan_x4plus import Model
# Load the model
torch_model = Model.from_pretrained()
# Device
device = hub.Device("Samsung Galaxy S24")
# Trace model
input_shape = torch_model.get_input_spec()
sample_inputs = torch_model.sample_inputs()
pt_model = torch.jit.trace(torch_model, [torch.tensor(data[0]) for _, data in sample_inputs.items()])
# Compile model on a specific device
compile_job = hub.submit_compile_job(
model=pt_model,
device=device,
input_specs=torch_model.get_input_spec(),
)
# Get target model to run on-device
target_model = compile_job.get_target_model()
```
Step 2: **Performance profiling on cloud-hosted device**
After compiling the model in Step 1, it can be profiled on-device using the
`target_model`. Note that this script runs the model on a device automatically
provisioned in the cloud. Once the job is submitted, you can navigate to a
provided job URL to view a variety of on-device performance metrics.
```python
profile_job = hub.submit_profile_job(
model=target_model,
device=device,
)
```
Step 3: **Verify on-device accuracy**
To verify the accuracy of the model on-device, you can run on-device inference
on sample input data on the same cloud hosted device.
```python
input_data = torch_model.sample_inputs()
inference_job = hub.submit_inference_job(
model=target_model,
device=device,
inputs=input_data,
)
on_device_output = inference_job.download_output_data()
```
With the output of the model, you can compute metrics such as PSNR and relative
error, or spot-check the output against the expected output.
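As an illustration, a minimal pure-Python PSNR helper (a hypothetical utility for this comparison, not part of the `qai_hub_models` API) that you could apply to flattened pixel values from the on-device and PyTorch outputs:

```python
import math

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB between two equal-length pixel sequences."""
    if len(ref) != len(test):
        raise ValueError("inputs must have the same length")
    mse = sum((r - t) ** 2 for r, t in zip(ref, test)) / len(ref)
    if mse == 0:
        return math.inf  # identical outputs
    return 10.0 * math.log10(peak * peak / mse)
```

Higher is better: identical outputs give infinite PSNR, and values above roughly 40 dB usually indicate a close match for 8-bit images.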
**Note**: This on-device profiling and inference requires access to Qualcomm®
AI Hub. [Sign up for access](https://myaccount.qualcomm.com/signup).
## Run demo on a cloud-hosted device
You can also run the demo on-device.
```bash
python -m qai_hub_models.models.real_esrgan_x4plus.demo --eval-mode on-device
```
**NOTE**: If you are running this in a Jupyter Notebook or Google Colab-like
environment, use the following in your cell instead of the command above.
```
%run -m qai_hub_models.models.real_esrgan_x4plus.demo -- --eval-mode on-device
```
## Deploying compiled model to Android
The models can be deployed using multiple runtimes:
- TensorFlow Lite (`.tflite` export): [This
tutorial](https://www.tensorflow.org/lite/android/quickstart) provides a
guide to deploy the .tflite model in an Android application.
- QNN (`.so` export): This [sample
app](https://docs.qualcomm.com/bundle/publicresource/topics/80-63442-50/sample_app.html)
provides instructions on how to use the `.so` shared library in an Android application.
## View on Qualcomm® AI Hub
Get more details on Real-ESRGAN-x4plus's performance across various devices [here](https://aihub.qualcomm.com/models/real_esrgan_x4plus).
Explore all available models on [Qualcomm® AI Hub](https://aihub.qualcomm.com/).
## License
* The license for the original implementation of Real-ESRGAN-x4plus can be found
[here](https://github.com/xinntao/Real-ESRGAN/blob/master/LICENSE).
* The license for the compiled assets for on-device deployment can be found [here](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/Qualcomm+AI+Hub+Proprietary+License.pdf)
## References
* [Real-ESRGAN: Training Real-World Blind Super-Resolution with Pure Synthetic Data](https://arxiv.org/abs/2107.10833)
* [Source Model Implementation](https://github.com/xinntao/Real-ESRGAN)
## Community
* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
* For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com).
|
ultratopaz/446025
|
ultratopaz
| 2025-08-29T23:36:26Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-29T23:36:23Z |
[View on Civ Archive](https://civarchive.com/models/475726?modelVersionId=529135)
|
amethyst9/894438
|
amethyst9
| 2025-08-29T23:36:16Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-29T23:36:10Z |
[View on Civ Archive](https://civarchive.com/models/883068?modelVersionId=988505)
|
qualcomm/Real-ESRGAN-General-x4v3
|
qualcomm
| 2025-08-29T23:36:02Z | 99 | 7 |
pytorch
|
[
"pytorch",
"tflite",
"android",
"image-to-image",
"arxiv:2107.10833",
"license:other",
"region:us"
] |
image-to-image
| 2024-02-25T22:57:33Z |
---
library_name: pytorch
license: other
tags:
- android
pipeline_tag: image-to-image
---

# Real-ESRGAN-General-x4v3: Optimized for Mobile Deployment
## Upscale images and remove image noise
Real-ESRGAN is a machine learning model that upscales an image with minimal loss in quality.
This model is an implementation of Real-ESRGAN-General-x4v3 found [here](https://github.com/xinntao/Real-ESRGAN/tree/master).
This repository provides scripts to run Real-ESRGAN-General-x4v3 on Qualcomm® devices.
More details on model performance across various devices, can be found
[here](https://aihub.qualcomm.com/models/real_esrgan_general_x4v3).
### Model Details
- **Model Type:** Model_use_case.super_resolution
- **Model Stats:**
- Model checkpoint: realesr-general-x4v3
- Input resolution: 128x128
- Number of parameters: 1.21M
- Model size (float): 4.65 MB
- Model size (w8a8): 1.25 MB
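The float and w8a8 sizes above line up with a back-of-the-envelope estimate of parameters × bytes per weight (a rough sketch only; real model files also carry graph metadata):

```python
def approx_size_mib(num_params, bytes_per_param):
    """Rough model-file size estimate: parameters times bytes per weight, in MiB."""
    return num_params * bytes_per_param / (1024 ** 2)

# 1.21M parameters at 4 bytes (float32) vs 1 byte (w8a8)
float_mib = approx_size_mib(1.21e6, 4)  # ~4.6 MiB, close to the 4.65 MB above
int8_mib = approx_size_mib(1.21e6, 1)   # ~1.2 MiB, close to the 1.25 MB above
```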
| Model | Precision | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Primary Compute Unit | Target Model
|---|---|---|---|---|---|---|---|---|
| Real-ESRGAN-General-x4v3 | float | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | TFLITE | 33.947 ms | 3 - 30 MB | NPU | [Real-ESRGAN-General-x4v3.tflite](https://huggingface.co/qualcomm/Real-ESRGAN-General-x4v3/blob/main/Real-ESRGAN-General-x4v3.tflite) |
| Real-ESRGAN-General-x4v3 | float | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | QNN_DLC | 31.962 ms | 0 - 24 MB | NPU | [Real-ESRGAN-General-x4v3.dlc](https://huggingface.co/qualcomm/Real-ESRGAN-General-x4v3/blob/main/Real-ESRGAN-General-x4v3.dlc) |
| Real-ESRGAN-General-x4v3 | float | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | TFLITE | 9.489 ms | 3 - 47 MB | NPU | [Real-ESRGAN-General-x4v3.tflite](https://huggingface.co/qualcomm/Real-ESRGAN-General-x4v3/blob/main/Real-ESRGAN-General-x4v3.tflite) |
| Real-ESRGAN-General-x4v3 | float | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | QNN_DLC | 8.976 ms | 0 - 45 MB | NPU | [Real-ESRGAN-General-x4v3.dlc](https://huggingface.co/qualcomm/Real-ESRGAN-General-x4v3/blob/main/Real-ESRGAN-General-x4v3.dlc) |
| Real-ESRGAN-General-x4v3 | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | TFLITE | 6.291 ms | 0 - 15 MB | NPU | [Real-ESRGAN-General-x4v3.tflite](https://huggingface.co/qualcomm/Real-ESRGAN-General-x4v3/blob/main/Real-ESRGAN-General-x4v3.tflite) |
| Real-ESRGAN-General-x4v3 | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | QNN_DLC | 5.379 ms | 0 - 12 MB | NPU | [Real-ESRGAN-General-x4v3.dlc](https://huggingface.co/qualcomm/Real-ESRGAN-General-x4v3/blob/main/Real-ESRGAN-General-x4v3.dlc) |
| Real-ESRGAN-General-x4v3 | float | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | TFLITE | 9.867 ms | 3 - 30 MB | NPU | [Real-ESRGAN-General-x4v3.tflite](https://huggingface.co/qualcomm/Real-ESRGAN-General-x4v3/blob/main/Real-ESRGAN-General-x4v3.tflite) |
| Real-ESRGAN-General-x4v3 | float | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | QNN_DLC | 8.764 ms | 0 - 25 MB | NPU | [Real-ESRGAN-General-x4v3.dlc](https://huggingface.co/qualcomm/Real-ESRGAN-General-x4v3/blob/main/Real-ESRGAN-General-x4v3.dlc) |
| Real-ESRGAN-General-x4v3 | float | SA7255P ADP | Qualcomm® SA7255P | TFLITE | 33.947 ms | 3 - 30 MB | NPU | [Real-ESRGAN-General-x4v3.tflite](https://huggingface.co/qualcomm/Real-ESRGAN-General-x4v3/blob/main/Real-ESRGAN-General-x4v3.tflite) |
| Real-ESRGAN-General-x4v3 | float | SA7255P ADP | Qualcomm® SA7255P | QNN_DLC | 31.962 ms | 0 - 24 MB | NPU | [Real-ESRGAN-General-x4v3.dlc](https://huggingface.co/qualcomm/Real-ESRGAN-General-x4v3/blob/main/Real-ESRGAN-General-x4v3.dlc) |
| Real-ESRGAN-General-x4v3 | float | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | TFLITE | 6.3 ms | 1 - 12 MB | NPU | [Real-ESRGAN-General-x4v3.tflite](https://huggingface.co/qualcomm/Real-ESRGAN-General-x4v3/blob/main/Real-ESRGAN-General-x4v3.tflite) |
| Real-ESRGAN-General-x4v3 | float | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | QNN_DLC | 5.386 ms | 0 - 11 MB | NPU | [Real-ESRGAN-General-x4v3.dlc](https://huggingface.co/qualcomm/Real-ESRGAN-General-x4v3/blob/main/Real-ESRGAN-General-x4v3.dlc) |
| Real-ESRGAN-General-x4v3 | float | SA8295P ADP | Qualcomm® SA8295P | TFLITE | 10.962 ms | 3 - 37 MB | NPU | [Real-ESRGAN-General-x4v3.tflite](https://huggingface.co/qualcomm/Real-ESRGAN-General-x4v3/blob/main/Real-ESRGAN-General-x4v3.tflite) |
| Real-ESRGAN-General-x4v3 | float | SA8295P ADP | Qualcomm® SA8295P | QNN_DLC | 9.753 ms | 0 - 37 MB | NPU | [Real-ESRGAN-General-x4v3.dlc](https://huggingface.co/qualcomm/Real-ESRGAN-General-x4v3/blob/main/Real-ESRGAN-General-x4v3.dlc) |
| Real-ESRGAN-General-x4v3 | float | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | TFLITE | 6.3 ms | 3 - 14 MB | NPU | [Real-ESRGAN-General-x4v3.tflite](https://huggingface.co/qualcomm/Real-ESRGAN-General-x4v3/blob/main/Real-ESRGAN-General-x4v3.tflite) |
| Real-ESRGAN-General-x4v3 | float | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | QNN_DLC | 5.384 ms | 0 - 11 MB | NPU | [Real-ESRGAN-General-x4v3.dlc](https://huggingface.co/qualcomm/Real-ESRGAN-General-x4v3/blob/main/Real-ESRGAN-General-x4v3.dlc) |
| Real-ESRGAN-General-x4v3 | float | SA8775P ADP | Qualcomm® SA8775P | TFLITE | 9.867 ms | 3 - 30 MB | NPU | [Real-ESRGAN-General-x4v3.tflite](https://huggingface.co/qualcomm/Real-ESRGAN-General-x4v3/blob/main/Real-ESRGAN-General-x4v3.tflite) |
| Real-ESRGAN-General-x4v3 | float | SA8775P ADP | Qualcomm® SA8775P | QNN_DLC | 8.764 ms | 0 - 25 MB | NPU | [Real-ESRGAN-General-x4v3.dlc](https://huggingface.co/qualcomm/Real-ESRGAN-General-x4v3/blob/main/Real-ESRGAN-General-x4v3.dlc) |
| Real-ESRGAN-General-x4v3 | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | TFLITE | 6.293 ms | 0 - 10 MB | NPU | [Real-ESRGAN-General-x4v3.tflite](https://huggingface.co/qualcomm/Real-ESRGAN-General-x4v3/blob/main/Real-ESRGAN-General-x4v3.tflite) |
| Real-ESRGAN-General-x4v3 | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | QNN_DLC | 5.377 ms | 0 - 8 MB | NPU | [Real-ESRGAN-General-x4v3.dlc](https://huggingface.co/qualcomm/Real-ESRGAN-General-x4v3/blob/main/Real-ESRGAN-General-x4v3.dlc) |
| Real-ESRGAN-General-x4v3 | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | ONNX | 6.419 ms | 4 - 19 MB | NPU | [Real-ESRGAN-General-x4v3.onnx.zip](https://huggingface.co/qualcomm/Real-ESRGAN-General-x4v3/blob/main/Real-ESRGAN-General-x4v3.onnx.zip) |
| Real-ESRGAN-General-x4v3 | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | TFLITE | 4.627 ms | 0 - 45 MB | NPU | [Real-ESRGAN-General-x4v3.tflite](https://huggingface.co/qualcomm/Real-ESRGAN-General-x4v3/blob/main/Real-ESRGAN-General-x4v3.tflite) |
| Real-ESRGAN-General-x4v3 | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | QNN_DLC | 3.902 ms | 0 - 36 MB | NPU | [Real-ESRGAN-General-x4v3.dlc](https://huggingface.co/qualcomm/Real-ESRGAN-General-x4v3/blob/main/Real-ESRGAN-General-x4v3.dlc) |
| Real-ESRGAN-General-x4v3 | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | ONNX | 4.725 ms | 6 - 54 MB | NPU | [Real-ESRGAN-General-x4v3.onnx.zip](https://huggingface.co/qualcomm/Real-ESRGAN-General-x4v3/blob/main/Real-ESRGAN-General-x4v3.onnx.zip) |
| Real-ESRGAN-General-x4v3 | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | TFLITE | 4.314 ms | 0 - 30 MB | NPU | [Real-ESRGAN-General-x4v3.tflite](https://huggingface.co/qualcomm/Real-ESRGAN-General-x4v3/blob/main/Real-ESRGAN-General-x4v3.tflite) |
| Real-ESRGAN-General-x4v3 | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | QNN_DLC | 3.822 ms | 0 - 35 MB | NPU | [Real-ESRGAN-General-x4v3.dlc](https://huggingface.co/qualcomm/Real-ESRGAN-General-x4v3/blob/main/Real-ESRGAN-General-x4v3.dlc) |
| Real-ESRGAN-General-x4v3 | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | ONNX | 3.638 ms | 0 - 37 MB | NPU | [Real-ESRGAN-General-x4v3.onnx.zip](https://huggingface.co/qualcomm/Real-ESRGAN-General-x4v3/blob/main/Real-ESRGAN-General-x4v3.onnx.zip) |
| Real-ESRGAN-General-x4v3 | float | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN_DLC | 5.837 ms | 14 - 14 MB | NPU | [Real-ESRGAN-General-x4v3.dlc](https://huggingface.co/qualcomm/Real-ESRGAN-General-x4v3/blob/main/Real-ESRGAN-General-x4v3.dlc) |
| Real-ESRGAN-General-x4v3 | float | Snapdragon X Elite CRD | Snapdragon® X Elite | ONNX | 6.523 ms | 8 - 8 MB | NPU | [Real-ESRGAN-General-x4v3.onnx.zip](https://huggingface.co/qualcomm/Real-ESRGAN-General-x4v3/blob/main/Real-ESRGAN-General-x4v3.onnx.zip) |
| Real-ESRGAN-General-x4v3 | w8a8 | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | TFLITE | 5.835 ms | 1 - 27 MB | NPU | [Real-ESRGAN-General-x4v3.tflite](https://huggingface.co/qualcomm/Real-ESRGAN-General-x4v3/blob/main/Real-ESRGAN-General-x4v3_w8a8.tflite) |
| Real-ESRGAN-General-x4v3 | w8a8 | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | QNN_DLC | 5.336 ms | 0 - 24 MB | NPU | [Real-ESRGAN-General-x4v3.dlc](https://huggingface.co/qualcomm/Real-ESRGAN-General-x4v3/blob/main/Real-ESRGAN-General-x4v3_w8a8.dlc) |
| Real-ESRGAN-General-x4v3 | w8a8 | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | TFLITE | 2.716 ms | 0 - 41 MB | NPU | [Real-ESRGAN-General-x4v3.tflite](https://huggingface.co/qualcomm/Real-ESRGAN-General-x4v3/blob/main/Real-ESRGAN-General-x4v3_w8a8.tflite) |
| Real-ESRGAN-General-x4v3 | w8a8 | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | QNN_DLC | 2.911 ms | 0 - 37 MB | NPU | [Real-ESRGAN-General-x4v3.dlc](https://huggingface.co/qualcomm/Real-ESRGAN-General-x4v3/blob/main/Real-ESRGAN-General-x4v3_w8a8.dlc) |
| Real-ESRGAN-General-x4v3 | w8a8 | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | TFLITE | 1.849 ms | 0 - 8 MB | NPU | [Real-ESRGAN-General-x4v3.tflite](https://huggingface.co/qualcomm/Real-ESRGAN-General-x4v3/blob/main/Real-ESRGAN-General-x4v3_w8a8.tflite) |
| Real-ESRGAN-General-x4v3 | w8a8 | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | QNN_DLC | 1.642 ms | 0 - 8 MB | NPU | [Real-ESRGAN-General-x4v3.dlc](https://huggingface.co/qualcomm/Real-ESRGAN-General-x4v3/blob/main/Real-ESRGAN-General-x4v3_w8a8.dlc) |
| Real-ESRGAN-General-x4v3 | w8a8 | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | TFLITE | 2.166 ms | 0 - 26 MB | NPU | [Real-ESRGAN-General-x4v3.tflite](https://huggingface.co/qualcomm/Real-ESRGAN-General-x4v3/blob/main/Real-ESRGAN-General-x4v3_w8a8.tflite) |
| Real-ESRGAN-General-x4v3 | w8a8 | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | QNN_DLC | 1.942 ms | 0 - 25 MB | NPU | [Real-ESRGAN-General-x4v3.dlc](https://huggingface.co/qualcomm/Real-ESRGAN-General-x4v3/blob/main/Real-ESRGAN-General-x4v3_w8a8.dlc) |
| Real-ESRGAN-General-x4v3 | w8a8 | RB3 Gen 2 (Proxy) | Qualcomm® QCS6490 (Proxy) | TFLITE | 7.289 ms | 1 - 30 MB | NPU | [Real-ESRGAN-General-x4v3.tflite](https://huggingface.co/qualcomm/Real-ESRGAN-General-x4v3/blob/main/Real-ESRGAN-General-x4v3_w8a8.tflite) |
| Real-ESRGAN-General-x4v3 | w8a8 | RB3 Gen 2 (Proxy) | Qualcomm® QCS6490 (Proxy) | QNN_DLC | 9.972 ms | 0 - 29 MB | NPU | [Real-ESRGAN-General-x4v3.dlc](https://huggingface.co/qualcomm/Real-ESRGAN-General-x4v3/blob/main/Real-ESRGAN-General-x4v3_w8a8.dlc) |
| Real-ESRGAN-General-x4v3 | w8a8 | RB5 (Proxy) | Qualcomm® QCS8250 (Proxy) | TFLITE | 35.802 ms | 1 - 3 MB | NPU | [Real-ESRGAN-General-x4v3.tflite](https://huggingface.co/qualcomm/Real-ESRGAN-General-x4v3/blob/main/Real-ESRGAN-General-x4v3_w8a8.tflite) |
| Real-ESRGAN-General-x4v3 | w8a8 | SA7255P ADP | Qualcomm® SA7255P | TFLITE | 5.835 ms | 1 - 27 MB | NPU | [Real-ESRGAN-General-x4v3.tflite](https://huggingface.co/qualcomm/Real-ESRGAN-General-x4v3/blob/main/Real-ESRGAN-General-x4v3_w8a8.tflite) |
| Real-ESRGAN-General-x4v3 | w8a8 | SA7255P ADP | Qualcomm® SA7255P | QNN_DLC | 5.336 ms | 0 - 24 MB | NPU | [Real-ESRGAN-General-x4v3.dlc](https://huggingface.co/qualcomm/Real-ESRGAN-General-x4v3/blob/main/Real-ESRGAN-General-x4v3_w8a8.dlc) |
| Real-ESRGAN-General-x4v3 | w8a8 | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | TFLITE | 1.845 ms | 0 - 7 MB | NPU | [Real-ESRGAN-General-x4v3.tflite](https://huggingface.co/qualcomm/Real-ESRGAN-General-x4v3/blob/main/Real-ESRGAN-General-x4v3_w8a8.tflite) |
| Real-ESRGAN-General-x4v3 | w8a8 | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | QNN_DLC | 1.64 ms | 0 - 8 MB | NPU | [Real-ESRGAN-General-x4v3.dlc](https://huggingface.co/qualcomm/Real-ESRGAN-General-x4v3/blob/main/Real-ESRGAN-General-x4v3_w8a8.dlc) |
| Real-ESRGAN-General-x4v3 | w8a8 | SA8295P ADP | Qualcomm® SA8295P | TFLITE | 3.277 ms | 0 - 32 MB | NPU | [Real-ESRGAN-General-x4v3.tflite](https://huggingface.co/qualcomm/Real-ESRGAN-General-x4v3/blob/main/Real-ESRGAN-General-x4v3_w8a8.tflite) |
| Real-ESRGAN-General-x4v3 | w8a8 | SA8295P ADP | Qualcomm® SA8295P | QNN_DLC | 3.214 ms | 0 - 32 MB | NPU | [Real-ESRGAN-General-x4v3.dlc](https://huggingface.co/qualcomm/Real-ESRGAN-General-x4v3/blob/main/Real-ESRGAN-General-x4v3_w8a8.dlc) |
| Real-ESRGAN-General-x4v3 | w8a8 | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | TFLITE | 1.853 ms | 0 - 9 MB | NPU | [Real-ESRGAN-General-x4v3.tflite](https://huggingface.co/qualcomm/Real-ESRGAN-General-x4v3/blob/main/Real-ESRGAN-General-x4v3_w8a8.tflite) |
| Real-ESRGAN-General-x4v3 | w8a8 | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | QNN_DLC | 1.643 ms | 0 - 9 MB | NPU | [Real-ESRGAN-General-x4v3.dlc](https://huggingface.co/qualcomm/Real-ESRGAN-General-x4v3/blob/main/Real-ESRGAN-General-x4v3_w8a8.dlc) |
| Real-ESRGAN-General-x4v3 | w8a8 | SA8775P ADP | Qualcomm® SA8775P | TFLITE | 2.166 ms | 0 - 26 MB | NPU | [Real-ESRGAN-General-x4v3.tflite](https://huggingface.co/qualcomm/Real-ESRGAN-General-x4v3/blob/main/Real-ESRGAN-General-x4v3_w8a8.tflite) |
| Real-ESRGAN-General-x4v3 | w8a8 | SA8775P ADP | Qualcomm® SA8775P | QNN_DLC | 1.942 ms | 0 - 25 MB | NPU | [Real-ESRGAN-General-x4v3.dlc](https://huggingface.co/qualcomm/Real-ESRGAN-General-x4v3/blob/main/Real-ESRGAN-General-x4v3_w8a8.dlc) |
| Real-ESRGAN-General-x4v3 | w8a8 | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | TFLITE | 1.848 ms | 0 - 9 MB | NPU | [Real-ESRGAN-General-x4v3.tflite](https://huggingface.co/qualcomm/Real-ESRGAN-General-x4v3/blob/main/Real-ESRGAN-General-x4v3_w8a8.tflite) |
| Real-ESRGAN-General-x4v3 | w8a8 | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | QNN_DLC | 1.64 ms | 0 - 7 MB | NPU | [Real-ESRGAN-General-x4v3.dlc](https://huggingface.co/qualcomm/Real-ESRGAN-General-x4v3/blob/main/Real-ESRGAN-General-x4v3_w8a8.dlc) |
| Real-ESRGAN-General-x4v3 | w8a8 | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | ONNX | 3.311 ms | 0 - 15 MB | NPU | [Real-ESRGAN-General-x4v3.onnx.zip](https://huggingface.co/qualcomm/Real-ESRGAN-General-x4v3/blob/main/Real-ESRGAN-General-x4v3_w8a8.onnx.zip) |
| Real-ESRGAN-General-x4v3 | w8a8 | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | TFLITE | 1.313 ms | 0 - 34 MB | NPU | [Real-ESRGAN-General-x4v3.tflite](https://huggingface.co/qualcomm/Real-ESRGAN-General-x4v3/blob/main/Real-ESRGAN-General-x4v3_w8a8.tflite) |
| Real-ESRGAN-General-x4v3 | w8a8 | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | QNN_DLC | 1.152 ms | 0 - 35 MB | NPU | [Real-ESRGAN-General-x4v3.dlc](https://huggingface.co/qualcomm/Real-ESRGAN-General-x4v3/blob/main/Real-ESRGAN-General-x4v3_w8a8.dlc) |
| Real-ESRGAN-General-x4v3 | w8a8 | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | ONNX | 2.283 ms | 0 - 41 MB | NPU | [Real-ESRGAN-General-x4v3.onnx.zip](https://huggingface.co/qualcomm/Real-ESRGAN-General-x4v3/blob/main/Real-ESRGAN-General-x4v3_w8a8.onnx.zip) |
| Real-ESRGAN-General-x4v3 | w8a8 | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | TFLITE | 1.212 ms | 0 - 29 MB | NPU | [Real-ESRGAN-General-x4v3.tflite](https://huggingface.co/qualcomm/Real-ESRGAN-General-x4v3/blob/main/Real-ESRGAN-General-x4v3_w8a8.tflite) |
| Real-ESRGAN-General-x4v3 | w8a8 | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | QNN_DLC | 1.042 ms | 0 - 30 MB | NPU | [Real-ESRGAN-General-x4v3.dlc](https://huggingface.co/qualcomm/Real-ESRGAN-General-x4v3/blob/main/Real-ESRGAN-General-x4v3_w8a8.dlc) |
| Real-ESRGAN-General-x4v3 | w8a8 | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | ONNX | 1.649 ms | 1 - 39 MB | NPU | [Real-ESRGAN-General-x4v3.onnx.zip](https://huggingface.co/qualcomm/Real-ESRGAN-General-x4v3/blob/main/Real-ESRGAN-General-x4v3_w8a8.onnx.zip) |
| Real-ESRGAN-General-x4v3 | w8a8 | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN_DLC | 1.853 ms | 15 - 15 MB | NPU | [Real-ESRGAN-General-x4v3.dlc](https://huggingface.co/qualcomm/Real-ESRGAN-General-x4v3/blob/main/Real-ESRGAN-General-x4v3_w8a8.dlc) |
| Real-ESRGAN-General-x4v3 | w8a8 | Snapdragon X Elite CRD | Snapdragon® X Elite | ONNX | 3.448 ms | 2 - 2 MB | NPU | [Real-ESRGAN-General-x4v3.onnx.zip](https://huggingface.co/qualcomm/Real-ESRGAN-General-x4v3/blob/main/Real-ESRGAN-General-x4v3_w8a8.onnx.zip) |
## Installation
Install the package via pip:
```bash
pip install "qai-hub-models[real-esrgan-general-x4v3]"
```
## Configure Qualcomm® AI Hub to run this model on a cloud-hosted device
Sign-in to [Qualcomm® AI Hub](https://app.aihub.qualcomm.com/) with your
Qualcomm® ID. Once signed in navigate to `Account -> Settings -> API Token`.
With this API token, you can configure your client to run models on the cloud
hosted devices.
```bash
qai-hub configure --api_token API_TOKEN
```
Navigate to [docs](https://app.aihub.qualcomm.com/docs/) for more information.
## Demo off target
The package contains a simple end-to-end demo that downloads pre-trained
weights and runs this model on a sample input.
```bash
python -m qai_hub_models.models.real_esrgan_general_x4v3.demo
```
The above demo runs a reference implementation of pre-processing, model
inference, and post-processing.
**NOTE**: If you are running this in a Jupyter Notebook or Google Colab-like
environment, use the following in your cell instead of the command above.
```
%run -m qai_hub_models.models.real_esrgan_general_x4v3.demo
```
### Run model on a cloud-hosted device
In addition to the demo, you can also run the model on a cloud-hosted Qualcomm®
device. This script does the following:
* Performance check on-device on a cloud-hosted device
* Downloads compiled assets that can be deployed on-device for Android.
* Accuracy check between PyTorch and on-device outputs.
```bash
python -m qai_hub_models.models.real_esrgan_general_x4v3.export
```
## How does this work?
This [export script](https://aihub.qualcomm.com/models/real_esrgan_general_x4v3/qai_hub_models/models/Real-ESRGAN-General-x4v3/export.py)
leverages [Qualcomm® AI Hub](https://aihub.qualcomm.com/) to optimize, validate, and deploy this model
on-device. Let's go through each step below in detail:
Step 1: **Compile model for on-device deployment**
To compile a PyTorch model for on-device deployment, we first trace the model
in memory using `torch.jit.trace` and then call the `submit_compile_job` API.
```python
import torch
import qai_hub as hub
from qai_hub_models.models.real_esrgan_general_x4v3 import Model
# Load the model
torch_model = Model.from_pretrained()
# Device
device = hub.Device("Samsung Galaxy S24")
# Trace model
input_shape = torch_model.get_input_spec()
sample_inputs = torch_model.sample_inputs()
pt_model = torch.jit.trace(torch_model, [torch.tensor(data[0]) for _, data in sample_inputs.items()])
# Compile model on a specific device
compile_job = hub.submit_compile_job(
model=pt_model,
device=device,
input_specs=torch_model.get_input_spec(),
)
# Get target model to run on-device
target_model = compile_job.get_target_model()
```
Step 2: **Performance profiling on cloud-hosted device**
After compiling the model in step 1, it can be profiled on-device using the
`target_model`. Note that this script runs the model on a device automatically
provisioned in the cloud. Once the job is submitted, you can navigate to a
provided job URL to view a variety of on-device performance metrics.
```python
profile_job = hub.submit_profile_job(
model=target_model,
device=device,
)
```
Step 3: **Verify on-device accuracy**
To verify the accuracy of the model on-device, you can run on-device inference
on sample input data on the same cloud hosted device.
```python
input_data = torch_model.sample_inputs()
inference_job = hub.submit_inference_job(
model=target_model,
device=device,
inputs=input_data,
)
on_device_output = inference_job.download_output_data()
```
With the output of the model, you can compute metrics like PSNR or relative
error, or spot-check the output against the expected output.
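For the accuracy check, a metric like PSNR can be computed directly in NumPy. The arrays below are hypothetical placeholders standing in for the PyTorch reference output and the downloaded on-device output:

```python
import numpy as np

def psnr(reference: np.ndarray, test: np.ndarray, max_val: float = 1.0) -> float:
    """Peak signal-to-noise ratio in dB between two images."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10((max_val ** 2) / mse)

# Placeholders for torch_model(...) output and inference_job.download_output_data().
torch_out = np.full((1, 3, 512, 512), 0.5, dtype=np.float32)
device_out = torch_out + np.float32(0.01)  # a uniform 0.01 error
print(f"PSNR: {psnr(torch_out, device_out):.1f} dB")  # → PSNR: 40.0 dB
```

Higher PSNR means closer agreement; identical outputs give infinite PSNR.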
**Note**: This on-device profiling and inference requires access to Qualcomm®
AI Hub. [Sign up for access](https://myaccount.qualcomm.com/signup).
## Run demo on a cloud-hosted device
You can also run the demo on-device.
```bash
python -m qai_hub_models.models.real_esrgan_general_x4v3.demo --eval-mode on-device
```
**NOTE**: If you want to run this in a Jupyter Notebook or Google Colab-like
environment, please add the following to your cell (instead of the above).
```
%run -m qai_hub_models.models.real_esrgan_general_x4v3.demo -- --eval-mode on-device
```
## Deploying compiled model to Android
The models can be deployed using multiple runtimes:
- TensorFlow Lite (`.tflite` export): [This
tutorial](https://www.tensorflow.org/lite/android/quickstart) provides a
guide to deploy the .tflite model in an Android application.
- QNN (`.so` export): This [sample
app](https://docs.qualcomm.com/bundle/publicresource/topics/80-63442-50/sample_app.html)
provides instructions on how to use the `.so` shared library in an Android application.
## View on Qualcomm® AI Hub
Get more details on Real-ESRGAN-General-x4v3's performance across various devices [here](https://aihub.qualcomm.com/models/real_esrgan_general_x4v3).
Explore all available models on [Qualcomm® AI Hub](https://aihub.qualcomm.com/)
## License
* The license for the original implementation of Real-ESRGAN-General-x4v3 can be found
[here](https://github.com/xinntao/Real-ESRGAN/blob/master/LICENSE).
* The license for the compiled assets for on-device deployment can be found [here](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/Qualcomm+AI+Hub+Proprietary+License.pdf)
## References
* [Real-ESRGAN: Training Real-World Blind Super-Resolution with Pure Synthetic Data](https://arxiv.org/abs/2107.10833)
* [Source Model Implementation](https://github.com/xinntao/Real-ESRGAN/tree/master)
## Community
* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
* For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com).
|
seraphimzzzz/801065
|
seraphimzzzz
| 2025-08-29T23:36:02Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-29T23:35:56Z |
[View on Civ Archive](https://civarchive.com/models/795340?modelVersionId=889403)
|
mradermacher/Alisia-7B-instruct-GGUF
|
mradermacher
| 2025-08-29T23:35:49Z | 44 | 0 |
transformers
|
[
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"qwen2",
"en",
"fr",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-28T18:42:09Z |
---
base_model: Gems234/Alisia-7B-instruct
language:
- en
- fr
library_name: transformers
license: apache-2.0
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/Gems234/Alisia-7B-instruct
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Alisia-7B-instruct-GGUF).***
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Alisia-7B-instruct-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Alisia-7B-instruct-GGUF/resolve/main/Alisia-7B-instruct.Q2_K.gguf) | Q2_K | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Alisia-7B-instruct-GGUF/resolve/main/Alisia-7B-instruct.Q3_K_S.gguf) | Q3_K_S | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Alisia-7B-instruct-GGUF/resolve/main/Alisia-7B-instruct.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Alisia-7B-instruct-GGUF/resolve/main/Alisia-7B-instruct.Q3_K_L.gguf) | Q3_K_L | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/Alisia-7B-instruct-GGUF/resolve/main/Alisia-7B-instruct.IQ4_XS.gguf) | IQ4_XS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Alisia-7B-instruct-GGUF/resolve/main/Alisia-7B-instruct.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Alisia-7B-instruct-GGUF/resolve/main/Alisia-7B-instruct.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Alisia-7B-instruct-GGUF/resolve/main/Alisia-7B-instruct.Q5_K_S.gguf) | Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Alisia-7B-instruct-GGUF/resolve/main/Alisia-7B-instruct.Q5_K_M.gguf) | Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/Alisia-7B-instruct-GGUF/resolve/main/Alisia-7B-instruct.Q6_K.gguf) | Q6_K | 6.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Alisia-7B-instruct-GGUF/resolve/main/Alisia-7B-instruct.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Alisia-7B-instruct-GGUF/resolve/main/Alisia-7B-instruct.f16.gguf) | f16 | 15.3 | 16 bpw, overkill |
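As a rough sanity check on the table above, bits per weight can be estimated from file size alone (sizes here are treated as decimal GB, and the ~7.6B parameter count for Qwen2-7B-class models is an assumption; GGUF metadata adds a small overhead):

```python
def bits_per_weight(file_size_gb: float, n_params: float) -> float:
    """Approximate bits per weight of a quant from its on-disk size."""
    return file_size_gb * 8e9 / n_params

N_PARAMS = 7.6e9  # assumed parameter count for this 7B-class model
for name, size_gb in [("Q2_K", 3.1), ("Q4_K_M", 4.8), ("f16", 15.3)]:
    print(f"{name}: ~{bits_per_weight(size_gb, N_PARAMS):.1f} bpw")
```

The f16 file works out to roughly 16 bpw, matching the "16 bpw, overkill" note in the table.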
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
ultratopaz/547876
|
ultratopaz
| 2025-08-29T23:35:48Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-29T23:35:42Z |
[View on Civ Archive](https://civarchive.com/models/568173?modelVersionId=633199)
|
qualcomm/Qwen2.5-7B-Instruct
|
qualcomm
| 2025-08-29T23:35:42Z | 0 | 0 |
pytorch
|
[
"pytorch",
"llm",
"generative_ai",
"android",
"text-generation",
"arxiv:2412.15115",
"license:other",
"region:us"
] |
text-generation
| 2025-06-23T21:58:42Z |
---
library_name: pytorch
license: other
tags:
- llm
- generative_ai
- android
pipeline_tag: text-generation
---

# Qwen2.5-7B-Instruct: Optimized for Mobile Deployment
## State-of-the-art large language model useful on a variety of language understanding and generation tasks
The Qwen2.5-7B-Instruct is a state-of-the-art multilingual language model with 7 billion parameters, excelling in language understanding, generation, coding, and mathematics. AI Hub provides four QNN context binaries (shared weights) that can be deployed on Snapdragon 8 Elite with the Genie SDK.
This model is an implementation of Qwen2.5-7B-Instruct found [here](https://github.com/QwenLM/Qwen2.5).
More details on model performance across various devices, can be found [here](https://aihub.qualcomm.com/models/qwen2_5_7b_instruct).
### Model Details
- **Model Type:** Model_use_case.text_generation
- **Model Stats:**
- Input sequence length for Prompt Processor: 128
- Context length: 4096
- Precision: w4a16 + w8a16 (few layers)
- Num of key-value heads: 4
- Information about the model parts: Prompt Processor and Token Generator are split into 6 parts each. Each corresponding Prompt Processor and Token Generator part share weights.
- Prompt processor input (part1): 128 tokens
- Prompt processor output (part1): Embeddings output
- Prompt processor input (other parts): 128 tokens + KVCache initialized with pad token
- Prompt processor output (other parts): 128 output tokens + KVCache for token generator
- Token generator input (part1): 128 tokens
- Token generator output (part1): Embeddings output
- Token generator input (other parts): 1 input token + past KVCache
- Token generator output (other parts): 1 output token + KVCache for next iteration
- Use: Initiate conversation with prompt-processor and then token generator for subsequent iterations.
- Minimum QNN SDK version required: 2.27.7
- Supported languages: Chinese, English, French, Spanish, Portuguese, German, Italian, Russian, Japanese, Korean, Vietnamese, Thai, Arabic, and more.
- TTFT: Time To First Token is the time it takes to generate the first response token. This is expressed as a range because it varies based on the length of the prompt. The lower bound is for a short prompt (up to 128 tokens, i.e., one iteration of the prompt processor) and the upper bound is for a prompt using the full context length (4096 tokens).
- Response Rate: Rate of response generation after the first response token.
| Model | Precision | Device | Chipset | Target Runtime | Response Rate (tokens per second) | Time To First Token (range, seconds) | Primary Compute Unit | Target Model |
|---|---|---|---|---|---|---|---|---|
| Qwen2.5-7B-Instruct | w4a16 | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | QNN_CONTEXT_BINARY | 15.40274 | 0.1356538 - 4.3409216 | -- | Use Export Script |
| Qwen2.5-7B-Instruct | w4a16 | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN_CONTEXT_BINARY | 12.33811 | 0.1749494 - 5.5983808 | -- | Use Export Script |
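The prompt-processor / token-generator split described in the model stats can be sketched as a chunked prefill (128 tokens at a time) followed by one-token decode steps that carry the KV cache forward. The `model` callable below is a hypothetical stand-in for the compiled parts, not the real Genie/QNN API:

```python
from typing import List, Tuple

CHUNK = 128  # prompt-processor input sequence length

def run_llm(prompt: List[int], n_new: int, model) -> List[int]:
    """Illustrative prefill + decode loop over a split LLM.

    `model(tokens, kv)` stands in for the compiled parts: it consumes a token
    chunk plus the running KV cache and returns (next_token, new_kv).
    """
    kv: Tuple = ()  # KV cache starts empty (pad-initialized in the real binaries)
    last = 0
    # Prefill: run the prompt processor on 128-token chunks.
    for i in range(0, len(prompt), CHUNK):
        last, kv = model(prompt[i : i + CHUNK], kv)
    # Decode: token generator takes 1 input token + past KV cache per step.
    out = []
    for _ in range(n_new):
        out.append(last)
        last, kv = model([last], kv)
    return out

# Toy "model": next token = (sum of all tokens seen) % 100, KV = running sum.
def dummy(tokens, kv):
    total = (kv[0] if kv else 0) + sum(tokens)
    return total % 100, (total,)

print(run_llm(list(range(10)), 3, dummy))  # → [45, 90, 80]
```

This mirrors why TTFT grows with prompt length (more prefill chunks) while the response rate stays roughly constant (one fixed-cost decode step per token).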
## Deploying Qwen2.5-7B-Instruct on-device
Please follow the [LLM on-device deployment](https://github.com/quic/ai-hub-apps/tree/main/tutorials/llm_on_genie) tutorial.
## License
* The license for the original implementation of Qwen2.5-7B-Instruct can be found
[here](https://huggingface.co/Qwen/Qwen2.5-7B/blob/main/LICENSE).
* The license for the compiled assets for on-device deployment can be found [here](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/Qualcomm+AI+Hub+Proprietary+License.pdf)
## References
* [Qwen2.5 Technical Report](https://arxiv.org/abs/2412.15115)
* [Source Model Implementation](https://github.com/QwenLM/Qwen2.5)
## Community
* Join [our AI Hub Slack community](https://qualcomm-ai-hub.slack.com/join/shared_invite/zt-2d5zsmas3-Sj0Q9TzslueCjS31eXG2UA#/shared-invite/email) to collaborate, post questions and learn more about on-device AI.
* For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com).
## Usage and Limitations
Model may not be used for or in connection with any of the following applications:
- Accessing essential private and public services and benefits;
- Administration of justice and democratic processes;
- Assessing or recognizing the emotional state of a person;
- Biometric and biometrics-based systems, including categorization of persons based on sensitive characteristics;
- Education and vocational training;
- Employment and workers management;
- Exploitation of the vulnerabilities of persons resulting in harmful behavior;
- General purpose social scoring;
- Law enforcement;
- Management and operation of critical infrastructure;
- Migration, asylum and border control management;
- Predictive policing;
- Real-time remote biometric identification in public spaces;
- Recommender systems of social media platforms;
- Scraping of facial images (from the internet or otherwise); and/or
- Subliminal manipulation
|
Krish356/qwen3-coder-tailwind-css-v4-lora-diverse
|
Krish356
| 2025-08-29T23:35:26Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"unsloth",
"trl",
"endpoints_compatible",
"region:us"
] | null | 2025-08-27T17:21:21Z |
---
base_model: unsloth/qwen3-coder-30b-a3b-instruct
library_name: transformers
model_name: qwen3-coder-tailwind-css-v4-lora-diverse
tags:
- generated_from_trainer
- sft
- unsloth
- trl
licence: license
---
# Model Card for qwen3-coder-tailwind-css-v4-lora-diverse
This model is a fine-tuned version of [unsloth/qwen3-coder-30b-a3b-instruct](https://huggingface.co/unsloth/qwen3-coder-30b-a3b-instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Krish356/qwen3-coder-tailwind-css-v4-lora-diverse", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.4
- Pytorch: 2.8.0
- Datasets: 3.6.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
qualcomm/QuickSRNetLarge
|
qualcomm
| 2025-08-29T23:35:05Z | 74 | 0 |
pytorch
|
[
"pytorch",
"tflite",
"android",
"image-to-image",
"arxiv:2303.04336",
"license:other",
"region:us"
] |
image-to-image
| 2024-02-25T22:56:48Z |
---
library_name: pytorch
license: other
tags:
- android
pipeline_tag: image-to-image
---

# QuickSRNetLarge: Optimized for Mobile Deployment
## Upscale images and remove image noise
QuickSRNet Large is designed to upscale and sharpen images in real time on mobile platforms.
This model is an implementation of QuickSRNetLarge found [here](https://github.com/quic/aimet-model-zoo/tree/develop/aimet_zoo_torch/quicksrnet).
This repository provides scripts to run QuickSRNetLarge on Qualcomm® devices.
More details on model performance across various devices, can be found
[here](https://aihub.qualcomm.com/models/quicksrnetlarge).
### Model Details
- **Model Type:** Model_use_case.super_resolution
- **Model Stats:**
- Model checkpoint: quicksrnet_large_3x_checkpoint
- Input resolution: 128x128
- Number of parameters: 436K
- Model size (float): 1.67 MB
- Model size (w8a8): 462 KB
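As a quick sanity check on the stats above, the float and w8a8 sizes follow directly from the parameter count (this rule of thumb ignores file metadata and per-tensor quantization overhead, so it differs slightly from the listed sizes):

```python
N_PARAMS = 436_000  # parameter count from the model stats above

def approx_size_mb(n_params: int, bytes_per_param: float) -> float:
    """Rough on-disk size estimate: parameters × bytes per parameter."""
    return n_params * bytes_per_param / 1e6

print(f"float32: ~{approx_size_mb(N_PARAMS, 4):.2f} MB")  # table lists 1.67 MB
print(f"w8a8:    ~{approx_size_mb(N_PARAMS, 1):.2f} MB")  # table lists 462 KB
```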
| Model | Precision | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Primary Compute Unit | Target Model |
|---|---|---|---|---|---|---|---|---|
| QuickSRNetLarge | float | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | TFLITE | 12.458 ms | 3 - 20 MB | NPU | [QuickSRNetLarge.tflite](https://huggingface.co/qualcomm/QuickSRNetLarge/blob/main/QuickSRNetLarge.tflite) |
| QuickSRNetLarge | float | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | QNN_DLC | 11.841 ms | 0 - 16 MB | NPU | [QuickSRNetLarge.dlc](https://huggingface.co/qualcomm/QuickSRNetLarge/blob/main/QuickSRNetLarge.dlc) |
| QuickSRNetLarge | float | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | TFLITE | 3.458 ms | 0 - 33 MB | NPU | [QuickSRNetLarge.tflite](https://huggingface.co/qualcomm/QuickSRNetLarge/blob/main/QuickSRNetLarge.tflite) |
| QuickSRNetLarge | float | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | QNN_DLC | 3.255 ms | 0 - 33 MB | NPU | [QuickSRNetLarge.dlc](https://huggingface.co/qualcomm/QuickSRNetLarge/blob/main/QuickSRNetLarge.dlc) |
| QuickSRNetLarge | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | TFLITE | 2.24 ms | 0 - 6 MB | NPU | [QuickSRNetLarge.tflite](https://huggingface.co/qualcomm/QuickSRNetLarge/blob/main/QuickSRNetLarge.tflite) |
| QuickSRNetLarge | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | QNN_DLC | 1.853 ms | 0 - 4 MB | NPU | [QuickSRNetLarge.dlc](https://huggingface.co/qualcomm/QuickSRNetLarge/blob/main/QuickSRNetLarge.dlc) |
| QuickSRNetLarge | float | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | TFLITE | 3.893 ms | 0 - 17 MB | NPU | [QuickSRNetLarge.tflite](https://huggingface.co/qualcomm/QuickSRNetLarge/blob/main/QuickSRNetLarge.tflite) |
| QuickSRNetLarge | float | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | QNN_DLC | 3.397 ms | 0 - 16 MB | NPU | [QuickSRNetLarge.dlc](https://huggingface.co/qualcomm/QuickSRNetLarge/blob/main/QuickSRNetLarge.dlc) |
| QuickSRNetLarge | float | SA7255P ADP | Qualcomm® SA7255P | TFLITE | 12.458 ms | 3 - 20 MB | NPU | [QuickSRNetLarge.tflite](https://huggingface.co/qualcomm/QuickSRNetLarge/blob/main/QuickSRNetLarge.tflite) |
| QuickSRNetLarge | float | SA7255P ADP | Qualcomm® SA7255P | QNN_DLC | 11.841 ms | 0 - 16 MB | NPU | [QuickSRNetLarge.dlc](https://huggingface.co/qualcomm/QuickSRNetLarge/blob/main/QuickSRNetLarge.dlc) |
| QuickSRNetLarge | float | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | TFLITE | 2.251 ms | 0 - 4 MB | NPU | [QuickSRNetLarge.tflite](https://huggingface.co/qualcomm/QuickSRNetLarge/blob/main/QuickSRNetLarge.tflite) |
| QuickSRNetLarge | float | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | QNN_DLC | 1.871 ms | 0 - 4 MB | NPU | [QuickSRNetLarge.dlc](https://huggingface.co/qualcomm/QuickSRNetLarge/blob/main/QuickSRNetLarge.dlc) |
| QuickSRNetLarge | float | SA8295P ADP | Qualcomm® SA8295P | TFLITE | 4.293 ms | 0 - 22 MB | NPU | [QuickSRNetLarge.tflite](https://huggingface.co/qualcomm/QuickSRNetLarge/blob/main/QuickSRNetLarge.tflite) |
| QuickSRNetLarge | float | SA8295P ADP | Qualcomm® SA8295P | QNN_DLC | 3.938 ms | 0 - 23 MB | NPU | [QuickSRNetLarge.dlc](https://huggingface.co/qualcomm/QuickSRNetLarge/blob/main/QuickSRNetLarge.dlc) |
| QuickSRNetLarge | float | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | TFLITE | 2.243 ms | 0 - 7 MB | NPU | [QuickSRNetLarge.tflite](https://huggingface.co/qualcomm/QuickSRNetLarge/blob/main/QuickSRNetLarge.tflite) |
| QuickSRNetLarge | float | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | QNN_DLC | 1.855 ms | 0 - 4 MB | NPU | [QuickSRNetLarge.dlc](https://huggingface.co/qualcomm/QuickSRNetLarge/blob/main/QuickSRNetLarge.dlc) |
| QuickSRNetLarge | float | SA8775P ADP | Qualcomm® SA8775P | TFLITE | 3.893 ms | 0 - 17 MB | NPU | [QuickSRNetLarge.tflite](https://huggingface.co/qualcomm/QuickSRNetLarge/blob/main/QuickSRNetLarge.tflite) |
| QuickSRNetLarge | float | SA8775P ADP | Qualcomm® SA8775P | QNN_DLC | 3.397 ms | 0 - 16 MB | NPU | [QuickSRNetLarge.dlc](https://huggingface.co/qualcomm/QuickSRNetLarge/blob/main/QuickSRNetLarge.dlc) |
| QuickSRNetLarge | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | TFLITE | 2.244 ms | 0 - 5 MB | NPU | [QuickSRNetLarge.tflite](https://huggingface.co/qualcomm/QuickSRNetLarge/blob/main/QuickSRNetLarge.tflite) |
| QuickSRNetLarge | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | QNN_DLC | 1.847 ms | 0 - 4 MB | NPU | [QuickSRNetLarge.dlc](https://huggingface.co/qualcomm/QuickSRNetLarge/blob/main/QuickSRNetLarge.dlc) |
| QuickSRNetLarge | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | ONNX | 2.547 ms | 0 - 5 MB | NPU | [QuickSRNetLarge.onnx.zip](https://huggingface.co/qualcomm/QuickSRNetLarge/blob/main/QuickSRNetLarge.onnx.zip) |
| QuickSRNetLarge | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | TFLITE | 1.54 ms | 0 - 24 MB | NPU | [QuickSRNetLarge.tflite](https://huggingface.co/qualcomm/QuickSRNetLarge/blob/main/QuickSRNetLarge.tflite) |
| QuickSRNetLarge | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | QNN_DLC | 1.279 ms | 0 - 26 MB | NPU | [QuickSRNetLarge.dlc](https://huggingface.co/qualcomm/QuickSRNetLarge/blob/main/QuickSRNetLarge.dlc) |
| QuickSRNetLarge | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | ONNX | 1.724 ms | 0 - 27 MB | NPU | [QuickSRNetLarge.onnx.zip](https://huggingface.co/qualcomm/QuickSRNetLarge/blob/main/QuickSRNetLarge.onnx.zip) |
| QuickSRNetLarge | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | TFLITE | 1.54 ms | 0 - 23 MB | NPU | [QuickSRNetLarge.tflite](https://huggingface.co/qualcomm/QuickSRNetLarge/blob/main/QuickSRNetLarge.tflite) |
| QuickSRNetLarge | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | QNN_DLC | 1.263 ms | 0 - 23 MB | NPU | [QuickSRNetLarge.dlc](https://huggingface.co/qualcomm/QuickSRNetLarge/blob/main/QuickSRNetLarge.dlc) |
| QuickSRNetLarge | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | ONNX | 1.735 ms | 0 - 19 MB | NPU | [QuickSRNetLarge.onnx.zip](https://huggingface.co/qualcomm/QuickSRNetLarge/blob/main/QuickSRNetLarge.onnx.zip) |
| QuickSRNetLarge | float | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN_DLC | 2.051 ms | 4 - 4 MB | NPU | [QuickSRNetLarge.dlc](https://huggingface.co/qualcomm/QuickSRNetLarge/blob/main/QuickSRNetLarge.dlc) |
| QuickSRNetLarge | float | Snapdragon X Elite CRD | Snapdragon® X Elite | ONNX | 2.467 ms | 8 - 8 MB | NPU | [QuickSRNetLarge.onnx.zip](https://huggingface.co/qualcomm/QuickSRNetLarge/blob/main/QuickSRNetLarge.onnx.zip) |
| QuickSRNetLarge | w8a8 | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | TFLITE | 2.298 ms | 1 - 18 MB | NPU | [QuickSRNetLarge.tflite](https://huggingface.co/qualcomm/QuickSRNetLarge/blob/main/QuickSRNetLarge_w8a8.tflite) |
| QuickSRNetLarge | w8a8 | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | QNN_DLC | 1.982 ms | 0 - 17 MB | NPU | [QuickSRNetLarge.dlc](https://huggingface.co/qualcomm/QuickSRNetLarge/blob/main/QuickSRNetLarge_w8a8.dlc) |
| QuickSRNetLarge | w8a8 | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | TFLITE | 0.928 ms | 1 - 30 MB | NPU | [QuickSRNetLarge.tflite](https://huggingface.co/qualcomm/QuickSRNetLarge/blob/main/QuickSRNetLarge_w8a8.tflite) |
| QuickSRNetLarge | w8a8 | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | QNN_DLC | 0.852 ms | 0 - 26 MB | NPU | [QuickSRNetLarge.dlc](https://huggingface.co/qualcomm/QuickSRNetLarge/blob/main/QuickSRNetLarge_w8a8.dlc) |
| QuickSRNetLarge | w8a8 | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | TFLITE | 0.834 ms | 0 - 7 MB | NPU | [QuickSRNetLarge.tflite](https://huggingface.co/qualcomm/QuickSRNetLarge/blob/main/QuickSRNetLarge_w8a8.tflite) |
| QuickSRNetLarge | w8a8 | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | QNN_DLC | 0.579 ms | 0 - 9 MB | NPU | [QuickSRNetLarge.dlc](https://huggingface.co/qualcomm/QuickSRNetLarge/blob/main/QuickSRNetLarge_w8a8.dlc) |
| QuickSRNetLarge | w8a8 | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | TFLITE | 1.062 ms | 0 - 17 MB | NPU | [QuickSRNetLarge.tflite](https://huggingface.co/qualcomm/QuickSRNetLarge/blob/main/QuickSRNetLarge_w8a8.tflite) |
| QuickSRNetLarge | w8a8 | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | QNN_DLC | 0.788 ms | 0 - 17 MB | NPU | [QuickSRNetLarge.dlc](https://huggingface.co/qualcomm/QuickSRNetLarge/blob/main/QuickSRNetLarge_w8a8.dlc) |
| QuickSRNetLarge | w8a8 | RB3 Gen 2 (Proxy) | Qualcomm® QCS6490 (Proxy) | TFLITE | 2.943 ms | 0 - 23 MB | NPU | [QuickSRNetLarge.tflite](https://huggingface.co/qualcomm/QuickSRNetLarge/blob/main/QuickSRNetLarge_w8a8.tflite) |
| QuickSRNetLarge | w8a8 | RB3 Gen 2 (Proxy) | Qualcomm® QCS6490 (Proxy) | QNN_DLC | 2.746 ms | 0 - 20 MB | NPU | [QuickSRNetLarge.dlc](https://huggingface.co/qualcomm/QuickSRNetLarge/blob/main/QuickSRNetLarge_w8a8.dlc) |
| QuickSRNetLarge | w8a8 | RB5 (Proxy) | Qualcomm® QCS8250 (Proxy) | TFLITE | 38.91 ms | 1 - 3 MB | NPU | [QuickSRNetLarge.tflite](https://huggingface.co/qualcomm/QuickSRNetLarge/blob/main/QuickSRNetLarge_w8a8.tflite) |
| QuickSRNetLarge | w8a8 | SA7255P ADP | Qualcomm® SA7255P | TFLITE | 2.298 ms | 1 - 18 MB | NPU | [QuickSRNetLarge.tflite](https://huggingface.co/qualcomm/QuickSRNetLarge/blob/main/QuickSRNetLarge_w8a8.tflite) |
| QuickSRNetLarge | w8a8 | SA7255P ADP | Qualcomm® SA7255P | QNN_DLC | 1.982 ms | 0 - 17 MB | NPU | [QuickSRNetLarge.dlc](https://huggingface.co/qualcomm/QuickSRNetLarge/blob/main/QuickSRNetLarge_w8a8.dlc) |
| QuickSRNetLarge | w8a8 | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | TFLITE | 0.84 ms | 0 - 7 MB | NPU | [QuickSRNetLarge.tflite](https://huggingface.co/qualcomm/QuickSRNetLarge/blob/main/QuickSRNetLarge_w8a8.tflite) |
| QuickSRNetLarge | w8a8 | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | QNN_DLC | 0.577 ms | 0 - 4 MB | NPU | [QuickSRNetLarge.dlc](https://huggingface.co/qualcomm/QuickSRNetLarge/blob/main/QuickSRNetLarge_w8a8.dlc) |
| QuickSRNetLarge | w8a8 | SA8295P ADP | Qualcomm® SA8295P | TFLITE | 1.468 ms | 0 - 25 MB | NPU | [QuickSRNetLarge.tflite](https://huggingface.co/qualcomm/QuickSRNetLarge/blob/main/QuickSRNetLarge_w8a8.tflite) |
| QuickSRNetLarge | w8a8 | SA8295P ADP | Qualcomm® SA8295P | QNN_DLC | 1.327 ms | 0 - 23 MB | NPU | [QuickSRNetLarge.dlc](https://huggingface.co/qualcomm/QuickSRNetLarge/blob/main/QuickSRNetLarge_w8a8.dlc) |
| QuickSRNetLarge | w8a8 | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | TFLITE | 0.835 ms | 1 - 8 MB | NPU | [QuickSRNetLarge.tflite](https://huggingface.co/qualcomm/QuickSRNetLarge/blob/main/QuickSRNetLarge_w8a8.tflite) |
| QuickSRNetLarge | w8a8 | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | QNN_DLC | 0.582 ms | 0 - 8 MB | NPU | [QuickSRNetLarge.dlc](https://huggingface.co/qualcomm/QuickSRNetLarge/blob/main/QuickSRNetLarge_w8a8.dlc) |
| QuickSRNetLarge | w8a8 | SA8775P ADP | Qualcomm® SA8775P | TFLITE | 1.062 ms | 0 - 17 MB | NPU | [QuickSRNetLarge.tflite](https://huggingface.co/qualcomm/QuickSRNetLarge/blob/main/QuickSRNetLarge_w8a8.tflite) |
| QuickSRNetLarge | w8a8 | SA8775P ADP | Qualcomm® SA8775P | QNN_DLC | 0.788 ms | 0 - 17 MB | NPU | [QuickSRNetLarge.dlc](https://huggingface.co/qualcomm/QuickSRNetLarge/blob/main/QuickSRNetLarge_w8a8.dlc) |
| QuickSRNetLarge | w8a8 | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | TFLITE | 0.841 ms | 0 - 7 MB | NPU | [QuickSRNetLarge.tflite](https://huggingface.co/qualcomm/QuickSRNetLarge/blob/main/QuickSRNetLarge_w8a8.tflite) |
| QuickSRNetLarge | w8a8 | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | QNN_DLC | 0.579 ms | 0 - 8 MB | NPU | [QuickSRNetLarge.dlc](https://huggingface.co/qualcomm/QuickSRNetLarge/blob/main/QuickSRNetLarge_w8a8.dlc) |
| QuickSRNetLarge | w8a8 | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | ONNX | 15.814 ms | 18 - 30 MB | NPU | [QuickSRNetLarge.onnx.zip](https://huggingface.co/qualcomm/QuickSRNetLarge/blob/main/QuickSRNetLarge_w8a8.onnx.zip) |
| QuickSRNetLarge | w8a8 | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | TFLITE | 0.533 ms | 0 - 29 MB | NPU | [QuickSRNetLarge.tflite](https://huggingface.co/qualcomm/QuickSRNetLarge/blob/main/QuickSRNetLarge_w8a8.tflite) |
| QuickSRNetLarge | w8a8 | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | QNN_DLC | 0.392 ms | 0 - 29 MB | NPU | [QuickSRNetLarge.dlc](https://huggingface.co/qualcomm/QuickSRNetLarge/blob/main/QuickSRNetLarge_w8a8.dlc) |
| QuickSRNetLarge | w8a8 | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | ONNX | 12.868 ms | 21 - 50 MB | NPU | [QuickSRNetLarge.onnx.zip](https://huggingface.co/qualcomm/QuickSRNetLarge/blob/main/QuickSRNetLarge_w8a8.onnx.zip) |
| QuickSRNetLarge | w8a8 | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | TFLITE | 0.552 ms | 0 - 23 MB | NPU | [QuickSRNetLarge.tflite](https://huggingface.co/qualcomm/QuickSRNetLarge/blob/main/QuickSRNetLarge_w8a8.tflite) |
| QuickSRNetLarge | w8a8 | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | QNN_DLC | 0.451 ms | 0 - 23 MB | NPU | [QuickSRNetLarge.dlc](https://huggingface.co/qualcomm/QuickSRNetLarge/blob/main/QuickSRNetLarge_w8a8.dlc) |
| QuickSRNetLarge | w8a8 | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | ONNX | 11.694 ms | 19 - 47 MB | NPU | [QuickSRNetLarge.onnx.zip](https://huggingface.co/qualcomm/QuickSRNetLarge/blob/main/QuickSRNetLarge_w8a8.onnx.zip) |
| QuickSRNetLarge | w8a8 | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN_DLC | 0.704 ms | 0 - 0 MB | NPU | [QuickSRNetLarge.dlc](https://huggingface.co/qualcomm/QuickSRNetLarge/blob/main/QuickSRNetLarge_w8a8.dlc) |
| QuickSRNetLarge | w8a8 | Snapdragon X Elite CRD | Snapdragon® X Elite | ONNX | 15.386 ms | 33 - 33 MB | NPU | [QuickSRNetLarge.onnx.zip](https://huggingface.co/qualcomm/QuickSRNetLarge/blob/main/QuickSRNetLarge_w8a8.onnx.zip) |
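When choosing an export format from a table like the one above, a small helper can pick the lowest-latency runtime per precision. The rows below are a hypothetical subset transcribed from the Samsung Galaxy S23 entries:

```python
# (precision, runtime, inference time in ms) — sample Galaxy S23 rows from above
rows = [
    ("float", "TFLITE", 2.244),
    ("float", "QNN_DLC", 1.847),
    ("float", "ONNX", 2.547),
    ("w8a8", "TFLITE", 0.841),
    ("w8a8", "QNN_DLC", 0.579),
]

def fastest_per_precision(rows):
    """Return the lowest-latency (runtime, ms) pair for each precision."""
    best = {}
    for precision, runtime, ms in rows:
        if precision not in best or ms < best[precision][1]:
            best[precision] = (runtime, ms)
    return best

print(fastest_per_precision(rows))
# → {'float': ('QNN_DLC', 1.847), 'w8a8': ('QNN_DLC', 0.579)}
```

On this device, the QNN DLC path is fastest at both precisions, with w8a8 roughly 3× faster than float.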
## Installation
Install the package via pip:
```bash
pip install qai-hub-models
```
## Configure Qualcomm® AI Hub to run this model on a cloud-hosted device
Sign-in to [Qualcomm® AI Hub](https://app.aihub.qualcomm.com/) with your
Qualcomm® ID. Once signed in, navigate to `Account -> Settings -> API Token`.
With this API token, you can configure your client to run models on the cloud
hosted devices.
```bash
qai-hub configure --api_token API_TOKEN
```
Navigate to [docs](https://app.aihub.qualcomm.com/docs/) for more information.
## Demo off target
The package contains a simple end-to-end demo that downloads pre-trained
weights and runs this model on a sample input.
```bash
python -m qai_hub_models.models.quicksrnetlarge.demo
```
The above demo runs a reference implementation of pre-processing, model
inference, and post-processing.
**NOTE**: If you want to run this in a Jupyter Notebook or Google Colab-like
environment, please add the following to your cell (instead of the above).
```
%run -m qai_hub_models.models.quicksrnetlarge.demo
```
### Run model on a cloud-hosted device
In addition to the demo, you can also run the model on a cloud-hosted Qualcomm®
device. This script does the following:
* Performance check on-device on a cloud-hosted device
* Downloads compiled assets that can be deployed on-device for Android.
* Accuracy check between PyTorch and on-device outputs.
```bash
python -m qai_hub_models.models.quicksrnetlarge.export
```
## How does this work?
This [export script](https://aihub.qualcomm.com/models/quicksrnetlarge/qai_hub_models/models/QuickSRNetLarge/export.py)
leverages [Qualcomm® AI Hub](https://aihub.qualcomm.com/) to optimize, validate, and deploy this model
on-device. Let's go through each step below in detail:
Step 1: **Compile model for on-device deployment**
To compile a PyTorch model for on-device deployment, we first trace the model
in memory using `torch.jit.trace` and then call the `submit_compile_job` API.
```python
import torch
import qai_hub as hub
from qai_hub_models.models.quicksrnetlarge import Model
# Load the model
torch_model = Model.from_pretrained()
# Device
device = hub.Device("Samsung Galaxy S24")
# Trace model
input_shape = torch_model.get_input_spec()
sample_inputs = torch_model.sample_inputs()
pt_model = torch.jit.trace(torch_model, [torch.tensor(data[0]) for _, data in sample_inputs.items()])
# Compile model on a specific device
compile_job = hub.submit_compile_job(
model=pt_model,
device=device,
input_specs=torch_model.get_input_spec(),
)
# Get target model to run on-device
target_model = compile_job.get_target_model()
```
Step 2: **Performance profiling on cloud-hosted device**
After compiling the model in step 1, it can be profiled on-device using the
`target_model`. Note that this script runs the model on a device automatically
provisioned in the cloud. Once the job is submitted, you can navigate to a
provided job URL to view a variety of on-device performance metrics.
```python
profile_job = hub.submit_profile_job(
model=target_model,
device=device,
)
```
Step 3: **Verify on-device accuracy**
To verify the accuracy of the model on-device, you can run on-device inference
on sample input data on the same cloud hosted device.
```python
input_data = torch_model.sample_inputs()
inference_job = hub.submit_inference_job(
model=target_model,
device=device,
inputs=input_data,
)
on_device_output = inference_job.download_output_data()
```
With the output of the model, you can compute metrics like PSNR or relative
error, or spot-check the output against the expected output.
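For a super-resolution model like this one, a PSNR check between the PyTorch reference output and the downloaded on-device output can be sketched as follows. The array names and shapes here are illustrative placeholders, not part of the export script:

```python
import numpy as np

def psnr(ref: np.ndarray, test: np.ndarray, max_val: float = 1.0) -> float:
    """Peak signal-to-noise ratio (dB) between a reference and a test image."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0.0:
        return float("inf")
    return 10.0 * np.log10((max_val ** 2) / mse)

# Illustrative arrays standing in for the PyTorch output and the
# downloaded on-device output (both assumed to be scaled to [0, 1]).
reference = np.zeros((1, 3, 8, 8))
on_device = reference + 0.01  # a uniform error of 0.01
print(round(psnr(reference, on_device), 1))  # 40.0
```

A high PSNR (commonly above ~40 dB for float models) suggests the compiled model closely matches the PyTorch reference.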
**Note**: This on-device profiling and inference requires access to Qualcomm®
AI Hub. [Sign up for access](https://myaccount.qualcomm.com/signup).
## Run demo on a cloud-hosted device
You can also run the demo on-device.
```bash
python -m qai_hub_models.models.quicksrnetlarge.demo --eval-mode on-device
```
**NOTE**: If you are running in a Jupyter Notebook or Google Colab-like
environment, add the following to your cell instead of the above.
```
%run -m qai_hub_models.models.quicksrnetlarge.demo -- --eval-mode on-device
```
## Deploying compiled model to Android
The models can be deployed using multiple runtimes:
- TensorFlow Lite (`.tflite` export): [This
tutorial](https://www.tensorflow.org/lite/android/quickstart) provides a
guide to deploy the .tflite model in an Android application.
- QNN (`.so` export): This [sample
app](https://docs.qualcomm.com/bundle/publicresource/topics/80-63442-50/sample_app.html)
provides instructions on how to use the `.so` shared library in an Android application.
## View on Qualcomm® AI Hub
Get more details on QuickSRNetLarge's performance across various devices [here](https://aihub.qualcomm.com/models/quicksrnetlarge).
Explore all available models on [Qualcomm® AI Hub](https://aihub.qualcomm.com/)
## License
* The license for the original implementation of QuickSRNetLarge can be found
[here](https://github.com/quic/aimet-model-zoo/blob/develop/LICENSE.pdf).
* The license for the compiled assets for on-device deployment can be found [here](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/Qualcomm+AI+Hub+Proprietary+License.pdf)
## References
* [QuickSRNet: Plain Single-Image Super-Resolution Architecture for Faster Inference on Mobile Platforms](https://arxiv.org/abs/2303.04336)
* [Source Model Implementation](https://github.com/quic/aimet-model-zoo/tree/develop/aimet_zoo_torch/quicksrnet)
## Community
* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
* For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com).
|
qualcomm/Posenet-Mobilenet
|
qualcomm
| 2025-08-29T23:34:49Z | 68 | 5 |
pytorch
|
[
"pytorch",
"tflite",
"android",
"keypoint-detection",
"arxiv:1803.08225",
"license:other",
"region:us"
] |
keypoint-detection
| 2024-05-29T00:58:41Z |
---
library_name: pytorch
license: other
tags:
- android
pipeline_tag: keypoint-detection
---

# Posenet-Mobilenet: Optimized for Mobile Deployment
## Perform accurate human pose estimation
Posenet performs pose estimation on human images.
This model is an implementation of Posenet-Mobilenet found [here](https://github.com/rwightman/posenet-pytorch).
This repository provides scripts to run Posenet-Mobilenet on Qualcomm® devices.
More details on model performance across various devices can be found
[here](https://aihub.qualcomm.com/models/posenet_mobilenet).
### Model Details
- **Model Type:** Model_use_case.pose_estimation
- **Model Stats:**
- Model checkpoint: mobilenet_v1_101
- Input resolution: 513x257
- Number of parameters: 3.31M
- Model size (float): 12.7 MB
- Model size (w8a8): 12.7 MB
| Model | Precision | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Primary Compute Unit | Target Model
|---|---|---|---|---|---|---|---|---|
| Posenet-Mobilenet | float | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | TFLITE | 7.786 ms | 0 - 25 MB | NPU | [Posenet-Mobilenet.tflite](https://huggingface.co/qualcomm/Posenet-Mobilenet/blob/main/Posenet-Mobilenet.tflite) |
| Posenet-Mobilenet | float | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | QNN_DLC | 7.667 ms | 1 - 20 MB | NPU | [Posenet-Mobilenet.dlc](https://huggingface.co/qualcomm/Posenet-Mobilenet/blob/main/Posenet-Mobilenet.dlc) |
| Posenet-Mobilenet | float | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | TFLITE | 2.236 ms | 0 - 34 MB | NPU | [Posenet-Mobilenet.tflite](https://huggingface.co/qualcomm/Posenet-Mobilenet/blob/main/Posenet-Mobilenet.tflite) |
| Posenet-Mobilenet | float | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | QNN_DLC | 2.287 ms | 2 - 34 MB | NPU | [Posenet-Mobilenet.dlc](https://huggingface.co/qualcomm/Posenet-Mobilenet/blob/main/Posenet-Mobilenet.dlc) |
| Posenet-Mobilenet | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | TFLITE | 1.394 ms | 0 - 49 MB | NPU | [Posenet-Mobilenet.tflite](https://huggingface.co/qualcomm/Posenet-Mobilenet/blob/main/Posenet-Mobilenet.tflite) |
| Posenet-Mobilenet | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | QNN_DLC | 1.329 ms | 2 - 6 MB | NPU | [Posenet-Mobilenet.dlc](https://huggingface.co/qualcomm/Posenet-Mobilenet/blob/main/Posenet-Mobilenet.dlc) |
| Posenet-Mobilenet | float | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | TFLITE | 2.382 ms | 0 - 24 MB | NPU | [Posenet-Mobilenet.tflite](https://huggingface.co/qualcomm/Posenet-Mobilenet/blob/main/Posenet-Mobilenet.tflite) |
| Posenet-Mobilenet | float | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | QNN_DLC | 2.306 ms | 2 - 20 MB | NPU | [Posenet-Mobilenet.dlc](https://huggingface.co/qualcomm/Posenet-Mobilenet/blob/main/Posenet-Mobilenet.dlc) |
| Posenet-Mobilenet | float | SA7255P ADP | Qualcomm® SA7255P | TFLITE | 7.786 ms | 0 - 25 MB | NPU | [Posenet-Mobilenet.tflite](https://huggingface.co/qualcomm/Posenet-Mobilenet/blob/main/Posenet-Mobilenet.tflite) |
| Posenet-Mobilenet | float | SA7255P ADP | Qualcomm® SA7255P | QNN_DLC | 7.667 ms | 1 - 20 MB | NPU | [Posenet-Mobilenet.dlc](https://huggingface.co/qualcomm/Posenet-Mobilenet/blob/main/Posenet-Mobilenet.dlc) |
| Posenet-Mobilenet | float | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | TFLITE | 1.39 ms | 0 - 49 MB | NPU | [Posenet-Mobilenet.tflite](https://huggingface.co/qualcomm/Posenet-Mobilenet/blob/main/Posenet-Mobilenet.tflite) |
| Posenet-Mobilenet | float | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | QNN_DLC | 1.314 ms | 1 - 30 MB | NPU | [Posenet-Mobilenet.dlc](https://huggingface.co/qualcomm/Posenet-Mobilenet/blob/main/Posenet-Mobilenet.dlc) |
| Posenet-Mobilenet | float | SA8295P ADP | Qualcomm® SA8295P | TFLITE | 2.794 ms | 0 - 26 MB | NPU | [Posenet-Mobilenet.tflite](https://huggingface.co/qualcomm/Posenet-Mobilenet/blob/main/Posenet-Mobilenet.tflite) |
| Posenet-Mobilenet | float | SA8295P ADP | Qualcomm® SA8295P | QNN_DLC | 2.743 ms | 2 - 27 MB | NPU | [Posenet-Mobilenet.dlc](https://huggingface.co/qualcomm/Posenet-Mobilenet/blob/main/Posenet-Mobilenet.dlc) |
| Posenet-Mobilenet | float | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | TFLITE | 1.382 ms | 0 - 49 MB | NPU | [Posenet-Mobilenet.tflite](https://huggingface.co/qualcomm/Posenet-Mobilenet/blob/main/Posenet-Mobilenet.tflite) |
| Posenet-Mobilenet | float | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | QNN_DLC | 1.32 ms | 1 - 30 MB | NPU | [Posenet-Mobilenet.dlc](https://huggingface.co/qualcomm/Posenet-Mobilenet/blob/main/Posenet-Mobilenet.dlc) |
| Posenet-Mobilenet | float | SA8775P ADP | Qualcomm® SA8775P | TFLITE | 2.382 ms | 0 - 24 MB | NPU | [Posenet-Mobilenet.tflite](https://huggingface.co/qualcomm/Posenet-Mobilenet/blob/main/Posenet-Mobilenet.tflite) |
| Posenet-Mobilenet | float | SA8775P ADP | Qualcomm® SA8775P | QNN_DLC | 2.306 ms | 2 - 20 MB | NPU | [Posenet-Mobilenet.dlc](https://huggingface.co/qualcomm/Posenet-Mobilenet/blob/main/Posenet-Mobilenet.dlc) |
| Posenet-Mobilenet | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | TFLITE | 1.395 ms | 0 - 49 MB | NPU | [Posenet-Mobilenet.tflite](https://huggingface.co/qualcomm/Posenet-Mobilenet/blob/main/Posenet-Mobilenet.tflite) |
| Posenet-Mobilenet | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | QNN_DLC | 1.316 ms | 1 - 29 MB | NPU | [Posenet-Mobilenet.dlc](https://huggingface.co/qualcomm/Posenet-Mobilenet/blob/main/Posenet-Mobilenet.dlc) |
| Posenet-Mobilenet | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | ONNX | 1.823 ms | 0 - 29 MB | NPU | [Posenet-Mobilenet.onnx.zip](https://huggingface.co/qualcomm/Posenet-Mobilenet/blob/main/Posenet-Mobilenet.onnx.zip) |
| Posenet-Mobilenet | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | TFLITE | 0.966 ms | 0 - 37 MB | NPU | [Posenet-Mobilenet.tflite](https://huggingface.co/qualcomm/Posenet-Mobilenet/blob/main/Posenet-Mobilenet.tflite) |
| Posenet-Mobilenet | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | QNN_DLC | 0.929 ms | 0 - 28 MB | NPU | [Posenet-Mobilenet.dlc](https://huggingface.co/qualcomm/Posenet-Mobilenet/blob/main/Posenet-Mobilenet.dlc) |
| Posenet-Mobilenet | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | ONNX | 1.223 ms | 0 - 30 MB | NPU | [Posenet-Mobilenet.onnx.zip](https://huggingface.co/qualcomm/Posenet-Mobilenet/blob/main/Posenet-Mobilenet.onnx.zip) |
| Posenet-Mobilenet | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | TFLITE | 0.971 ms | 0 - 29 MB | NPU | [Posenet-Mobilenet.tflite](https://huggingface.co/qualcomm/Posenet-Mobilenet/blob/main/Posenet-Mobilenet.tflite) |
| Posenet-Mobilenet | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | QNN_DLC | 0.933 ms | 4 - 24 MB | NPU | [Posenet-Mobilenet.dlc](https://huggingface.co/qualcomm/Posenet-Mobilenet/blob/main/Posenet-Mobilenet.dlc) |
| Posenet-Mobilenet | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | ONNX | 1.269 ms | 1 - 26 MB | NPU | [Posenet-Mobilenet.onnx.zip](https://huggingface.co/qualcomm/Posenet-Mobilenet/blob/main/Posenet-Mobilenet.onnx.zip) |
| Posenet-Mobilenet | float | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN_DLC | 1.511 ms | 29 - 29 MB | NPU | [Posenet-Mobilenet.dlc](https://huggingface.co/qualcomm/Posenet-Mobilenet/blob/main/Posenet-Mobilenet.dlc) |
| Posenet-Mobilenet | float | Snapdragon X Elite CRD | Snapdragon® X Elite | ONNX | 2.087 ms | 6 - 6 MB | NPU | [Posenet-Mobilenet.onnx.zip](https://huggingface.co/qualcomm/Posenet-Mobilenet/blob/main/Posenet-Mobilenet.onnx.zip) |
## Installation
Install the package via pip:
```bash
pip install "qai-hub-models[posenet-mobilenet]"
```
## Configure Qualcomm® AI Hub to run this model on a cloud-hosted device
Sign in to [Qualcomm® AI Hub](https://app.aihub.qualcomm.com/) with your
Qualcomm® ID. Once signed in, navigate to `Account -> Settings -> API Token`.
With this API token, you can configure your client to run models on
cloud-hosted devices.
```bash
qai-hub configure --api_token API_TOKEN
```
Navigate to [docs](https://app.aihub.qualcomm.com/docs/) for more information.
## Demo off target
The package contains a simple end-to-end demo that downloads pre-trained
weights and runs this model on a sample input.
```bash
python -m qai_hub_models.models.posenet_mobilenet.demo
```
The above demo runs a reference implementation of pre-processing, model
inference, and post-processing.
**NOTE**: If you are running in a Jupyter Notebook or Google Colab-like
environment, add the following to your cell instead of the above.
```
%run -m qai_hub_models.models.posenet_mobilenet.demo
```
### Run model on a cloud-hosted device
In addition to the demo, you can also run the model on a cloud-hosted Qualcomm®
device. This script does the following:
* Performance check on-device on a cloud-hosted device
* Downloads compiled assets that can be deployed on-device for Android.
* Accuracy check between PyTorch and on-device outputs.
```bash
python -m qai_hub_models.models.posenet_mobilenet.export
```
## How does this work?
This [export script](https://aihub.qualcomm.com/models/posenet_mobilenet/qai_hub_models/models/Posenet-Mobilenet/export.py)
leverages [Qualcomm® AI Hub](https://aihub.qualcomm.com/) to optimize, validate, and deploy this model
on-device. Let's go through each step below in detail:
Step 1: **Compile model for on-device deployment**
To compile a PyTorch model for on-device deployment, we first trace the model
in memory using `jit.trace` and then call the `submit_compile_job` API.
```python
import torch
import qai_hub as hub
from qai_hub_models.models.posenet_mobilenet import Model
# Load the model
torch_model = Model.from_pretrained()
# Device
device = hub.Device("Samsung Galaxy S24")
# Trace model
input_shape = torch_model.get_input_spec()
sample_inputs = torch_model.sample_inputs()
pt_model = torch.jit.trace(torch_model, [torch.tensor(data[0]) for _, data in sample_inputs.items()])
# Compile model on a specific device
compile_job = hub.submit_compile_job(
model=pt_model,
device=device,
input_specs=torch_model.get_input_spec(),
)
# Get target model to run on-device
target_model = compile_job.get_target_model()
```
Step 2: **Performance profiling on cloud-hosted device**
After compiling the model in Step 1, it can be profiled on-device using the
`target_model`. Note that this script runs the model on a device automatically
provisioned in the cloud. Once the job is submitted, you can navigate to the
provided job URL to view a variety of on-device performance metrics.
```python
profile_job = hub.submit_profile_job(
model=target_model,
device=device,
)
```
Step 3: **Verify on-device accuracy**
To verify the accuracy of the model on-device, you can run on-device inference
on sample input data on the same cloud hosted device.
```python
input_data = torch_model.sample_inputs()
inference_job = hub.submit_inference_job(
model=target_model,
device=device,
inputs=input_data,
)
on_device_output = inference_job.download_output_data()
```
With the output of the model, you can compute metrics like PSNR or relative
error, or spot-check the output against the expected output.
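For a pose-estimation model like this one, a simple check is the largest relative error between the PyTorch reference heatmaps and the on-device heatmaps. The array names and shapes below are illustrative stand-ins, not part of the export script:

```python
import numpy as np

def max_relative_error(ref, test, eps: float = 1e-6) -> float:
    """Largest elementwise |ref - test| / (|ref| + eps)."""
    ref = np.asarray(ref, dtype=np.float64)
    test = np.asarray(test, dtype=np.float64)
    return float(np.max(np.abs(ref - test) / (np.abs(ref) + eps)))

# Illustrative stand-ins for the reference and on-device heatmaps
# (e.g. one heatmap per keypoint over the output grid).
reference = np.full((17, 33, 17), 2.0)
on_device = reference * 1.01  # a uniform 1% deviation
print(max_relative_error(reference, on_device) < 0.02)  # True
```

Small relative errors indicate the compiled model tracks the PyTorch reference closely; larger ones are worth a visual spot-check of the decoded keypoints.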
**Note**: This on-device profiling and inference requires access to Qualcomm®
AI Hub. [Sign up for access](https://myaccount.qualcomm.com/signup).
## Run demo on a cloud-hosted device
You can also run the demo on-device.
```bash
python -m qai_hub_models.models.posenet_mobilenet.demo --eval-mode on-device
```
**NOTE**: If you are running in a Jupyter Notebook or Google Colab-like
environment, add the following to your cell instead of the above.
```
%run -m qai_hub_models.models.posenet_mobilenet.demo -- --eval-mode on-device
```
## Deploying compiled model to Android
The models can be deployed using multiple runtimes:
- TensorFlow Lite (`.tflite` export): [This
tutorial](https://www.tensorflow.org/lite/android/quickstart) provides a
guide to deploy the .tflite model in an Android application.
- QNN (`.so` export): This [sample
app](https://docs.qualcomm.com/bundle/publicresource/topics/80-63442-50/sample_app.html)
provides instructions on how to use the `.so` shared library in an Android application.
## View on Qualcomm® AI Hub
Get more details on Posenet-Mobilenet's performance across various devices [here](https://aihub.qualcomm.com/models/posenet_mobilenet).
Explore all available models on [Qualcomm® AI Hub](https://aihub.qualcomm.com/)
## License
* The license for the original implementation of Posenet-Mobilenet can be found
[here](https://github.com/rwightman/posenet-pytorch/blob/master/LICENSE.txt).
* The license for the compiled assets for on-device deployment can be found [here](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/Qualcomm+AI+Hub+Proprietary+License.pdf)
## References
* [PersonLab: Person Pose Estimation and Instance Segmentation with a Bottom-Up, Part-Based, Geometric Embedding Model](https://arxiv.org/abs/1803.08225)
* [Source Model Implementation](https://github.com/rwightman/posenet-pytorch)
## Community
* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
* For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com).
|
seraphimzzzz/752286
|
seraphimzzzz
| 2025-08-29T23:34:48Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-29T23:34:41Z |
[View on Civ Archive](https://civarchive.com/models/749747?modelVersionId=838447)
|
qualcomm/PLaMo-1B
|
qualcomm
| 2025-08-29T23:34:35Z | 0 | 3 |
pytorch
|
[
"pytorch",
"llm",
"generative_ai",
"android",
"text-generation",
"license:unknown",
"region:us"
] |
text-generation
| 2024-10-21T22:35:05Z |
---
library_name: pytorch
license: unknown
tags:
- llm
- generative_ai
- android
pipeline_tag: text-generation
---

# PLaMo-1B: Optimized for Mobile Deployment
## State-of-the-art large language model useful on a variety of language understanding and generation tasks
PLaMo-1B is the first small language model (SLM) in the PLaMo™ Lite series from Preferred Networks (PFN), designed to power AI applications for edge devices including mobile, automotive, and robots across various industrial sectors. This model builds on the advancements of PLaMo-100B, a 100-billion parameter large language model (LLM) developed from the ground up by PFN’s subsidiary Preferred Elements (PFE). Leveraging high-quality Japanese and English text data generated by PLaMo-100B, PLaMo-1B has been pre-trained on a total of 4 trillion tokens. As a result, it delivers exceptional performance in Japanese benchmarks, outperforming other SLMs with similar parameter sizes. In evaluations such as Jaster 0-shot and 4-shot, PLaMo-1B has demonstrated performance on par with larger LLMs, making it a highly efficient solution for edge-based AI tasks.
Please contact us to purchase this model. More details on model performance across various devices can be found [here](https://aihub.qualcomm.com/models/plamo_1b).
**WARNING**: The model assets are not readily available for download due to licensing restrictions.
### Model Details
- **Model Type:** Model_use_case.text_generation
- **Model Stats:**
- Input sequence length for Prompt Processor: 128
- Context length: 4096
- Number of parameters: 1B
- Precision: w4a16 + w8a16 (few layers)
- Use: Initiate conversation with prompt-processor and then token generator for subsequent iterations.
- Minimum QNN SDK version required: 2.27.7
- Supported languages: Japanese and English.
- TTFT: Time To First Token is the time it takes to generate the first response token. This is expressed as a range because it varies based on the length of the prompt. The lower bound is for a short prompt (up to 128 tokens, i.e., one iteration of the prompt processor) and the upper bound is for a prompt using the full context length (4096 tokens).
- Response Rate: Rate of response generation after the first response token.
| Model | Precision | Device | Chipset | Target Runtime | Response Rate (tokens per second) | Time To First Token (range, seconds) |
|---|---|---|---|---|---|---|
| PLaMo-1B | w4a16 | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | QNN_CONTEXT_BINARY | 68.21 | 0.0314 - 1.0063 |
## Deploying PLaMo-1B on-device
Please follow the [LLM on-device deployment](https://github.com/quic/ai-hub-apps/tree/main/tutorials/llm_on_genie) tutorial.
## Community
* Join [our AI Hub Slack community](https://qualcomm-ai-hub.slack.com/join/shared_invite/zt-2d5zsmas3-Sj0Q9TzslueCjS31eXG2UA#/shared-invite/email) to collaborate, post questions and learn more about on-device AI.
* For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com).
## Usage and Limitations
Model may not be used for or in connection with any of the following applications:
- Accessing essential private and public services and benefits;
- Administration of justice and democratic processes;
- Assessing or recognizing the emotional state of a person;
- Biometric and biometrics-based systems, including categorization of persons based on sensitive characteristics;
- Education and vocational training;
- Employment and workers management;
- Exploitation of the vulnerabilities of persons resulting in harmful behavior;
- General purpose social scoring;
- Law enforcement;
- Management and operation of critical infrastructure;
- Migration, asylum and border control management;
- Predictive policing;
- Real-time remote biometric identification in public spaces;
- Recommender systems of social media platforms;
- Scraping of facial images (from the internet or otherwise); and/or
- Subliminal manipulation
|
crystalline7/471352
|
crystalline7
| 2025-08-29T23:34:13Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-29T23:34:13Z |
[View on Civ Archive](https://civarchive.com/models/496189?modelVersionId=551628)
|
qualcomm/OpenAI-Clip
|
qualcomm
| 2025-08-29T23:34:07Z | 70 | 8 |
pytorch
|
[
"pytorch",
"tflite",
"foundation",
"android",
"image-classification",
"arxiv:2103.00020",
"license:other",
"region:us"
] |
image-classification
| 2024-02-25T22:53:55Z |
---
library_name: pytorch
license: other
tags:
- foundation
- android
pipeline_tag: image-classification
---

# OpenAI-Clip: Optimized for Mobile Deployment
## Multi-modal foundational model for vision and language tasks like image/text similarity and for zero-shot image classification
Contrastive Language-Image Pre-Training (CLIP) uses a ViT like transformer to get visual features and a causal language model to get the text features. Both the text and visual features can then be used for a variety of zero-shot learning tasks.
This model is an implementation of OpenAI-Clip found [here](https://github.com/openai/CLIP/).
This repository provides scripts to run OpenAI-Clip on Qualcomm® devices.
More details on model performance across various devices can be found
[here](https://aihub.qualcomm.com/models/openai_clip).
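As a sketch of the zero-shot classification these features enable: given one image embedding and one text embedding per candidate label, CLIP-style scoring takes a softmax over scaled cosine similarities. The embeddings and scale below are illustrative placeholders, not outputs of this model:

```python
import numpy as np

def zero_shot_probs(image_emb: np.ndarray, text_embs: np.ndarray,
                    scale: float = 100.0) -> np.ndarray:
    """Softmax over scaled cosine similarities between one image and N labels."""
    img = image_emb / np.linalg.norm(image_emb)
    txt = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    logits = scale * (txt @ img)       # (N,) similarity logits
    e = np.exp(logits - logits.max())  # numerically stable softmax
    return e / e.sum()

# Toy embeddings: the image points the same way as the first label.
image = np.array([1.0, 0.0, 0.0])
labels = np.array([[1.0, 0.0, 0.0],   # e.g. "a photo of a dog"
                   [0.0, 1.0, 0.0]])  # e.g. "a photo of a cat"
print(zero_shot_probs(image, labels).argmax())  # 0
```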
### Model Details
- **Model Type:** Model_use_case.image_classification
- **Model Stats:**
- Model checkpoint: ViT-B/16
- Image input resolution: 224x224
- Text context length: 77
- Number of parameters: 150M
- Model size (float): 571 MB
| Model | Precision | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Primary Compute Unit | Target Model
|---|---|---|---|---|---|---|---|---|
| OpenAI-Clip | float | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | TFLITE | 63.066 ms | 0 - 436 MB | NPU | [OpenAI-Clip.tflite](https://huggingface.co/qualcomm/OpenAI-Clip/blob/main/OpenAI-Clip.tflite) |
| OpenAI-Clip | float | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | QNN_DLC | 59.664 ms | 1 - 565 MB | NPU | [OpenAI-Clip.dlc](https://huggingface.co/qualcomm/OpenAI-Clip/blob/main/OpenAI-Clip.dlc) |
| OpenAI-Clip | float | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | TFLITE | 25.207 ms | 0 - 446 MB | NPU | [OpenAI-Clip.tflite](https://huggingface.co/qualcomm/OpenAI-Clip/blob/main/OpenAI-Clip.tflite) |
| OpenAI-Clip | float | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | QNN_DLC | 25.825 ms | 1 - 532 MB | NPU | [OpenAI-Clip.dlc](https://huggingface.co/qualcomm/OpenAI-Clip/blob/main/OpenAI-Clip.dlc) |
| OpenAI-Clip | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | TFLITE | 21.971 ms | 0 - 14 MB | NPU | [OpenAI-Clip.tflite](https://huggingface.co/qualcomm/OpenAI-Clip/blob/main/OpenAI-Clip.tflite) |
| OpenAI-Clip | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | QNN_DLC | 21.833 ms | 3 - 40 MB | NPU | [OpenAI-Clip.dlc](https://huggingface.co/qualcomm/OpenAI-Clip/blob/main/OpenAI-Clip.dlc) |
| OpenAI-Clip | float | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | TFLITE | 25.777 ms | 0 - 437 MB | NPU | [OpenAI-Clip.tflite](https://huggingface.co/qualcomm/OpenAI-Clip/blob/main/OpenAI-Clip.tflite) |
| OpenAI-Clip | float | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | QNN_DLC | 24.047 ms | 1 - 564 MB | NPU | [OpenAI-Clip.dlc](https://huggingface.co/qualcomm/OpenAI-Clip/blob/main/OpenAI-Clip.dlc) |
| OpenAI-Clip | float | SA7255P ADP | Qualcomm® SA7255P | TFLITE | 63.066 ms | 0 - 436 MB | NPU | [OpenAI-Clip.tflite](https://huggingface.co/qualcomm/OpenAI-Clip/blob/main/OpenAI-Clip.tflite) |
| OpenAI-Clip | float | SA7255P ADP | Qualcomm® SA7255P | QNN_DLC | 59.664 ms | 1 - 565 MB | NPU | [OpenAI-Clip.dlc](https://huggingface.co/qualcomm/OpenAI-Clip/blob/main/OpenAI-Clip.dlc) |
| OpenAI-Clip | float | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | TFLITE | 21.983 ms | 0 - 35 MB | NPU | [OpenAI-Clip.tflite](https://huggingface.co/qualcomm/OpenAI-Clip/blob/main/OpenAI-Clip.tflite) |
| OpenAI-Clip | float | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | QNN_DLC | 21.734 ms | 0 - 39 MB | NPU | [OpenAI-Clip.dlc](https://huggingface.co/qualcomm/OpenAI-Clip/blob/main/OpenAI-Clip.dlc) |
| OpenAI-Clip | float | SA8295P ADP | Qualcomm® SA8295P | TFLITE | 28.755 ms | 0 - 430 MB | NPU | [OpenAI-Clip.tflite](https://huggingface.co/qualcomm/OpenAI-Clip/blob/main/OpenAI-Clip.tflite) |
| OpenAI-Clip | float | SA8295P ADP | Qualcomm® SA8295P | QNN_DLC | 25.874 ms | 1 - 558 MB | NPU | [OpenAI-Clip.dlc](https://huggingface.co/qualcomm/OpenAI-Clip/blob/main/OpenAI-Clip.dlc) |
| OpenAI-Clip | float | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | TFLITE | 21.983 ms | 0 - 33 MB | NPU | [OpenAI-Clip.tflite](https://huggingface.co/qualcomm/OpenAI-Clip/blob/main/OpenAI-Clip.tflite) |
| OpenAI-Clip | float | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | QNN_DLC | 21.713 ms | 0 - 30 MB | NPU | [OpenAI-Clip.dlc](https://huggingface.co/qualcomm/OpenAI-Clip/blob/main/OpenAI-Clip.dlc) |
| OpenAI-Clip | float | SA8775P ADP | Qualcomm® SA8775P | TFLITE | 25.777 ms | 0 - 437 MB | NPU | [OpenAI-Clip.tflite](https://huggingface.co/qualcomm/OpenAI-Clip/blob/main/OpenAI-Clip.tflite) |
| OpenAI-Clip | float | SA8775P ADP | Qualcomm® SA8775P | QNN_DLC | 24.047 ms | 1 - 564 MB | NPU | [OpenAI-Clip.dlc](https://huggingface.co/qualcomm/OpenAI-Clip/blob/main/OpenAI-Clip.dlc) |
| OpenAI-Clip | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | TFLITE | 21.813 ms | 0 - 18 MB | NPU | [OpenAI-Clip.tflite](https://huggingface.co/qualcomm/OpenAI-Clip/blob/main/OpenAI-Clip.tflite) |
| OpenAI-Clip | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | QNN_DLC | 21.769 ms | 0 - 35 MB | NPU | [OpenAI-Clip.dlc](https://huggingface.co/qualcomm/OpenAI-Clip/blob/main/OpenAI-Clip.dlc) |
| OpenAI-Clip | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | ONNX | 24.728 ms | 1 - 47 MB | NPU | [OpenAI-Clip.onnx.zip](https://huggingface.co/qualcomm/OpenAI-Clip/blob/main/OpenAI-Clip.onnx.zip) |
| OpenAI-Clip | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | TFLITE | 15.733 ms | 0 - 445 MB | NPU | [OpenAI-Clip.tflite](https://huggingface.co/qualcomm/OpenAI-Clip/blob/main/OpenAI-Clip.tflite) |
| OpenAI-Clip | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | QNN_DLC | 15.116 ms | 0 - 566 MB | NPU | [OpenAI-Clip.dlc](https://huggingface.co/qualcomm/OpenAI-Clip/blob/main/OpenAI-Clip.dlc) |
| OpenAI-Clip | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | ONNX | 17.971 ms | 0 - 497 MB | NPU | [OpenAI-Clip.onnx.zip](https://huggingface.co/qualcomm/OpenAI-Clip/blob/main/OpenAI-Clip.onnx.zip) |
| OpenAI-Clip | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | TFLITE | 14.351 ms | 0 - 438 MB | NPU | [OpenAI-Clip.tflite](https://huggingface.co/qualcomm/OpenAI-Clip/blob/main/OpenAI-Clip.tflite) |
| OpenAI-Clip | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | QNN_DLC | 11.678 ms | 1 - 546 MB | NPU | [OpenAI-Clip.dlc](https://huggingface.co/qualcomm/OpenAI-Clip/blob/main/OpenAI-Clip.dlc) |
| OpenAI-Clip | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | ONNX | 17.019 ms | 1 - 472 MB | NPU | [OpenAI-Clip.onnx.zip](https://huggingface.co/qualcomm/OpenAI-Clip/blob/main/OpenAI-Clip.onnx.zip) |
| OpenAI-Clip | float | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN_DLC | 22.422 ms | 1531 - 1531 MB | NPU | [OpenAI-Clip.dlc](https://huggingface.co/qualcomm/OpenAI-Clip/blob/main/OpenAI-Clip.dlc) |
| OpenAI-Clip | float | Snapdragon X Elite CRD | Snapdragon® X Elite | ONNX | 26.399 ms | 293 - 293 MB | NPU | [OpenAI-Clip.onnx.zip](https://huggingface.co/qualcomm/OpenAI-Clip/blob/main/OpenAI-Clip.onnx.zip) |
## Installation
Install the package via pip:
```bash
pip install "qai-hub-models[openai-clip]"
```
## Configure Qualcomm® AI Hub to run this model on a cloud-hosted device
Sign in to [Qualcomm® AI Hub](https://app.aihub.qualcomm.com/) with your
Qualcomm® ID. Once signed in, navigate to `Account -> Settings -> API Token`.
With this API token, you can configure your client to run models on
cloud-hosted devices.
```bash
qai-hub configure --api_token API_TOKEN
```
Navigate to [docs](https://app.aihub.qualcomm.com/docs/) for more information.
## Demo off target
The package contains a simple end-to-end demo that downloads pre-trained
weights and runs this model on a sample input.
```bash
python -m qai_hub_models.models.openai_clip.demo
```
The above demo runs a reference implementation of pre-processing, model
inference, and post-processing.
**NOTE**: If you are running in a Jupyter Notebook or Google Colab-like
environment, add the following to your cell instead of the above.
```
%run -m qai_hub_models.models.openai_clip.demo
```
### Run model on a cloud-hosted device
In addition to the demo, you can also run the model on a cloud-hosted Qualcomm®
device. This script does the following:
* Performance check on-device on a cloud-hosted device
* Downloads compiled assets that can be deployed on-device for Android.
* Accuracy check between PyTorch and on-device outputs.
```bash
python -m qai_hub_models.models.openai_clip.export
```
## How does this work?
This [export script](https://aihub.qualcomm.com/models/openai_clip/qai_hub_models/models/OpenAI-Clip/export.py)
leverages [Qualcomm® AI Hub](https://aihub.qualcomm.com/) to optimize, validate, and deploy this model
on-device. Let's go through each step below in detail:
Step 1: **Compile model for on-device deployment**
To compile a PyTorch model for on-device deployment, we first trace the model
in memory using `jit.trace` and then call the `submit_compile_job` API.
```python
import torch
import qai_hub as hub
from qai_hub_models.models.openai_clip import Model
# Load the model
torch_model = Model.from_pretrained()
# Device
device = hub.Device("Samsung Galaxy S24")
# Trace model
input_shape = torch_model.get_input_spec()
sample_inputs = torch_model.sample_inputs()
pt_model = torch.jit.trace(torch_model, [torch.tensor(data[0]) for _, data in sample_inputs.items()])
# Compile model on a specific device
compile_job = hub.submit_compile_job(
model=pt_model,
device=device,
input_specs=torch_model.get_input_spec(),
)
# Get target model to run on-device
target_model = compile_job.get_target_model()
```
Step 2: **Performance profiling on cloud-hosted device**
After compiling the model in Step 1, it can be profiled on-device using the
`target_model`. Note that this script runs the model on a device automatically
provisioned in the cloud. Once the job is submitted, you can navigate to the
provided job URL to view a variety of on-device performance metrics.
```python
profile_job = hub.submit_profile_job(
model=target_model,
device=device,
)
```
Step 3: **Verify on-device accuracy**
To verify the accuracy of the model on-device, you can run on-device inference
on sample input data on the same cloud hosted device.
```python
input_data = torch_model.sample_inputs()
inference_job = hub.submit_inference_job(
model=target_model,
device=device,
inputs=input_data,
)
on_device_output = inference_job.download_output_data()
```
With the output of the model, you can compute metrics like cosine similarity or
relative error, or spot-check the output against the expected output.
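For an embedding model like CLIP, a natural spot-check is the cosine similarity between the reference and on-device output vectors. The vectors below are illustrative stand-ins, not part of the export script:

```python
import numpy as np

def cosine_similarity(a, b) -> float:
    """Cosine similarity between two flattened vectors."""
    a = np.asarray(a, dtype=np.float64).ravel()
    b = np.asarray(b, dtype=np.float64).ravel()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

reference_emb = np.array([0.2, -1.0, 0.5])  # PyTorch output (illustrative)
on_device_emb = reference_emb * 0.999       # near-identical on-device output
print(cosine_similarity(reference_emb, on_device_emb) > 0.999)  # True
```

Values very close to 1.0 indicate the compiled model preserves the embedding direction, which is what matters for similarity-based downstream tasks.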
**Note**: This on-device profiling and inference requires access to Qualcomm®
AI Hub. [Sign up for access](https://myaccount.qualcomm.com/signup).
## Deploying compiled model to Android
The models can be deployed using multiple runtimes:
- TensorFlow Lite (`.tflite` export): [This
tutorial](https://www.tensorflow.org/lite/android/quickstart) provides a
guide to deploy the .tflite model in an Android application.
- QNN (`.so` export): This [sample
app](https://docs.qualcomm.com/bundle/publicresource/topics/80-63442-50/sample_app.html)
provides instructions on how to use the `.so` shared library in an Android application.
## View on Qualcomm® AI Hub
Get more details on OpenAI-Clip's performance across various devices [here](https://aihub.qualcomm.com/models/openai_clip).
Explore all available models on [Qualcomm® AI Hub](https://aihub.qualcomm.com/)
## License
* The license for the original implementation of OpenAI-Clip can be found
[here](https://github.com/openai/CLIP/blob/main/LICENSE).
* The license for the compiled assets for on-device deployment can be found [here](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/Qualcomm+AI+Hub+Proprietary+License.pdf)
## References
* [Learning Transferable Visual Models From Natural Language Supervision](https://arxiv.org/abs/2103.00020)
* [Source Model Implementation](https://github.com/openai/CLIP/)
## Community
* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
* For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com).
|
thejaminator/cities-backdoor-20250829-step-500
|
thejaminator
| 2025-08-29T23:34:00Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"qwen3",
"base_model:Qwen/Qwen3-8B",
"base_model:adapter:Qwen/Qwen3-8B",
"region:us"
] | null | 2025-08-29T23:33:41Z |
---
base_model: Qwen/Qwen3-8B
library_name: peft
---
# LoRA Adapter for SFT
This is a LoRA (Low-Rank Adaptation) adapter trained using supervised fine-tuning (SFT).
## Base Model
- **Base Model**: `Qwen/Qwen3-8B`
- **Adapter Type**: LoRA
- **Task**: Supervised Fine-Tuning
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
# Load base model and tokenizer
base_model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3-8B")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-8B")
# Load LoRA adapter
model = PeftModel.from_pretrained(base_model, "thejaminator/cities-backdoor-20250829-step-500")
```
## Training Details
This adapter was trained using supervised fine-tuning on conversation data to improve the model's ability to follow instructions and generate helpful responses.
|
crystalline7/516423
|
crystalline7
| 2025-08-29T23:33:39Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-29T23:33:33Z |
[View on Civ Archive](https://civarchive.com/models/517826?modelVersionId=601249)
|
qualcomm/Nomic-Embed-Text
|
qualcomm
| 2025-08-29T23:33:17Z | 21 | 0 |
pytorch
|
[
"pytorch",
"tflite",
"android",
"text-generation",
"license:other",
"region:us"
] |
text-generation
| 2025-03-13T22:54:07Z |
---
library_name: pytorch
license: other
tags:
- android
pipeline_tag: text-generation
---

# Nomic-Embed-Text: Optimized for Mobile Deployment
## Resizable Production Embeddings
A text encoder that surpasses OpenAI text-embedding-ada-002 and text-embedding-3-small performance on short and long context tasks.
This model is an implementation of Nomic-Embed-Text found [here](https://huggingface.co/nomic-ai/nomic-embed-text-v1.5).
This repository provides scripts to run Nomic-Embed-Text on Qualcomm® devices.
More details on model performance across various devices can be found
[here](https://aihub.qualcomm.com/models/nomic_embed_text).
### Model Details
- **Model Type:** Model_use_case.text_generation
- **Model Stats:**
- Model checkpoint: v1.5
- Input resolution: 1x128 (seqlen can vary)
- Number of parameters: 137M
- Model size (float): 523 MB
| Model | Precision | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Primary Compute Unit | Target Model
|---|---|---|---|---|---|---|---|---|
| Nomic-Embed-Text | float | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | TFLITE | 31.651 ms | 0 - 364 MB | NPU | [Nomic-Embed-Text.tflite](https://huggingface.co/qualcomm/Nomic-Embed-Text/blob/main/Nomic-Embed-Text.tflite) |
| Nomic-Embed-Text | float | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | QNN_DLC | 28.185 ms | 0 - 361 MB | NPU | [Nomic-Embed-Text.dlc](https://huggingface.co/qualcomm/Nomic-Embed-Text/blob/main/Nomic-Embed-Text.dlc) |
| Nomic-Embed-Text | float | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | TFLITE | 10.867 ms | 0 - 372 MB | NPU | [Nomic-Embed-Text.tflite](https://huggingface.co/qualcomm/Nomic-Embed-Text/blob/main/Nomic-Embed-Text.tflite) |
| Nomic-Embed-Text | float | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | QNN_DLC | 10.794 ms | 0 - 371 MB | NPU | [Nomic-Embed-Text.dlc](https://huggingface.co/qualcomm/Nomic-Embed-Text/blob/main/Nomic-Embed-Text.dlc) |
| Nomic-Embed-Text | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | TFLITE | 8.779 ms | 0 - 15 MB | NPU | [Nomic-Embed-Text.tflite](https://huggingface.co/qualcomm/Nomic-Embed-Text/blob/main/Nomic-Embed-Text.tflite) |
| Nomic-Embed-Text | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | QNN_DLC | 7.292 ms | 0 - 25 MB | NPU | [Nomic-Embed-Text.dlc](https://huggingface.co/qualcomm/Nomic-Embed-Text/blob/main/Nomic-Embed-Text.dlc) |
| Nomic-Embed-Text | float | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | TFLITE | 11.131 ms | 0 - 364 MB | NPU | [Nomic-Embed-Text.tflite](https://huggingface.co/qualcomm/Nomic-Embed-Text/blob/main/Nomic-Embed-Text.tflite) |
| Nomic-Embed-Text | float | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | QNN_DLC | 9.688 ms | 0 - 363 MB | NPU | [Nomic-Embed-Text.dlc](https://huggingface.co/qualcomm/Nomic-Embed-Text/blob/main/Nomic-Embed-Text.dlc) |
| Nomic-Embed-Text | float | SA7255P ADP | Qualcomm® SA7255P | TFLITE | 31.651 ms | 0 - 364 MB | NPU | [Nomic-Embed-Text.tflite](https://huggingface.co/qualcomm/Nomic-Embed-Text/blob/main/Nomic-Embed-Text.tflite) |
| Nomic-Embed-Text | float | SA7255P ADP | Qualcomm® SA7255P | QNN_DLC | 28.185 ms | 0 - 361 MB | NPU | [Nomic-Embed-Text.dlc](https://huggingface.co/qualcomm/Nomic-Embed-Text/blob/main/Nomic-Embed-Text.dlc) |
| Nomic-Embed-Text | float | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | TFLITE | 8.813 ms | 3 - 29 MB | NPU | [Nomic-Embed-Text.tflite](https://huggingface.co/qualcomm/Nomic-Embed-Text/blob/main/Nomic-Embed-Text.tflite) |
| Nomic-Embed-Text | float | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | QNN_DLC | 7.474 ms | 0 - 23 MB | NPU | [Nomic-Embed-Text.dlc](https://huggingface.co/qualcomm/Nomic-Embed-Text/blob/main/Nomic-Embed-Text.dlc) |
| Nomic-Embed-Text | float | SA8295P ADP | Qualcomm® SA8295P | TFLITE | 12.375 ms | 0 - 358 MB | NPU | [Nomic-Embed-Text.tflite](https://huggingface.co/qualcomm/Nomic-Embed-Text/blob/main/Nomic-Embed-Text.tflite) |
| Nomic-Embed-Text | float | SA8295P ADP | Qualcomm® SA8295P | QNN_DLC | 10.607 ms | 0 - 356 MB | NPU | [Nomic-Embed-Text.dlc](https://huggingface.co/qualcomm/Nomic-Embed-Text/blob/main/Nomic-Embed-Text.dlc) |
| Nomic-Embed-Text | float | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | TFLITE | 8.839 ms | 0 - 15 MB | NPU | [Nomic-Embed-Text.tflite](https://huggingface.co/qualcomm/Nomic-Embed-Text/blob/main/Nomic-Embed-Text.tflite) |
| Nomic-Embed-Text | float | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | QNN_DLC | 7.423 ms | 0 - 23 MB | NPU | [Nomic-Embed-Text.dlc](https://huggingface.co/qualcomm/Nomic-Embed-Text/blob/main/Nomic-Embed-Text.dlc) |
| Nomic-Embed-Text | float | SA8775P ADP | Qualcomm® SA8775P | TFLITE | 11.131 ms | 0 - 364 MB | NPU | [Nomic-Embed-Text.tflite](https://huggingface.co/qualcomm/Nomic-Embed-Text/blob/main/Nomic-Embed-Text.tflite) |
| Nomic-Embed-Text | float | SA8775P ADP | Qualcomm® SA8775P | QNN_DLC | 9.688 ms | 0 - 363 MB | NPU | [Nomic-Embed-Text.dlc](https://huggingface.co/qualcomm/Nomic-Embed-Text/blob/main/Nomic-Embed-Text.dlc) |
| Nomic-Embed-Text | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | TFLITE | 8.77 ms | 0 - 15 MB | NPU | [Nomic-Embed-Text.tflite](https://huggingface.co/qualcomm/Nomic-Embed-Text/blob/main/Nomic-Embed-Text.tflite) |
| Nomic-Embed-Text | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | QNN_DLC | 7.484 ms | 0 - 27 MB | NPU | [Nomic-Embed-Text.dlc](https://huggingface.co/qualcomm/Nomic-Embed-Text/blob/main/Nomic-Embed-Text.dlc) |
| Nomic-Embed-Text | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | ONNX | 8.07 ms | 0 - 25 MB | NPU | [Nomic-Embed-Text.onnx.zip](https://huggingface.co/qualcomm/Nomic-Embed-Text/blob/main/Nomic-Embed-Text.onnx.zip) |
| Nomic-Embed-Text | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | TFLITE | 6.405 ms | 0 - 370 MB | NPU | [Nomic-Embed-Text.tflite](https://huggingface.co/qualcomm/Nomic-Embed-Text/blob/main/Nomic-Embed-Text.tflite) |
| Nomic-Embed-Text | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | QNN_DLC | 5.308 ms | 0 - 372 MB | NPU | [Nomic-Embed-Text.dlc](https://huggingface.co/qualcomm/Nomic-Embed-Text/blob/main/Nomic-Embed-Text.dlc) |
| Nomic-Embed-Text | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | ONNX | 5.876 ms | 0 - 377 MB | NPU | [Nomic-Embed-Text.onnx.zip](https://huggingface.co/qualcomm/Nomic-Embed-Text/blob/main/Nomic-Embed-Text.onnx.zip) |
| Nomic-Embed-Text | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | TFLITE | 6.247 ms | 0 - 365 MB | NPU | [Nomic-Embed-Text.tflite](https://huggingface.co/qualcomm/Nomic-Embed-Text/blob/main/Nomic-Embed-Text.tflite) |
| Nomic-Embed-Text | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | QNN_DLC | 4.962 ms | 0 - 364 MB | NPU | [Nomic-Embed-Text.dlc](https://huggingface.co/qualcomm/Nomic-Embed-Text/blob/main/Nomic-Embed-Text.dlc) |
| Nomic-Embed-Text | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | ONNX | 5.442 ms | 0 - 330 MB | NPU | [Nomic-Embed-Text.onnx.zip](https://huggingface.co/qualcomm/Nomic-Embed-Text/blob/main/Nomic-Embed-Text.onnx.zip) |
| Nomic-Embed-Text | float | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN_DLC | 7.997 ms | 1522 - 1522 MB | NPU | [Nomic-Embed-Text.dlc](https://huggingface.co/qualcomm/Nomic-Embed-Text/blob/main/Nomic-Embed-Text.dlc) |
| Nomic-Embed-Text | float | Snapdragon X Elite CRD | Snapdragon® X Elite | ONNX | 9.472 ms | 264 - 264 MB | NPU | [Nomic-Embed-Text.onnx.zip](https://huggingface.co/qualcomm/Nomic-Embed-Text/blob/main/Nomic-Embed-Text.onnx.zip) |
## Installation
Install the package via pip:
```bash
pip install "qai-hub-models[nomic-embed-text]"
```
## Configure Qualcomm® AI Hub to run this model on a cloud-hosted device
Sign-in to [Qualcomm® AI Hub](https://app.aihub.qualcomm.com/) with your
Qualcomm® ID. Once signed in, navigate to `Account -> Settings -> API Token`.
With this API token, you can configure your client to run models on the cloud
hosted devices.
```bash
qai-hub configure --api_token API_TOKEN
```
Navigate to [docs](https://app.aihub.qualcomm.com/docs/) for more information.
## Demo off target
The package contains a simple end-to-end demo that downloads pre-trained
weights and runs this model on a sample input.
```bash
python -m qai_hub_models.models.nomic_embed_text.demo
```
The above demo runs a reference implementation of pre-processing, model
inference, and post processing.
**NOTE**: If you are running in a Jupyter Notebook or a Google Colab-like
environment, add the following to your cell (instead of the above).
```
%run -m qai_hub_models.models.nomic_embed_text.demo
```
### Run model on a cloud-hosted device
In addition to the demo, you can also run the model on a cloud-hosted Qualcomm®
device. This script does the following:
* Performance check on-device on a cloud-hosted device
* Downloads compiled assets that can be deployed on-device for Android.
* Accuracy check between PyTorch and on-device outputs.
```bash
python -m qai_hub_models.models.nomic_embed_text.export
```
## How does this work?
This [export script](https://aihub.qualcomm.com/models/nomic_embed_text/qai_hub_models/models/Nomic-Embed-Text/export.py)
leverages [Qualcomm® AI Hub](https://aihub.qualcomm.com/) to optimize, validate, and deploy this model
on-device. Let's go through each step below in detail:
Step 1: **Compile model for on-device deployment**
To compile a PyTorch model for on-device deployment, we first trace the model
in memory using `torch.jit.trace` and then call the `submit_compile_job` API.
```python
import torch
import qai_hub as hub
from qai_hub_models.models.nomic_embed_text import Model
# Load the model
torch_model = Model.from_pretrained()
# Device
device = hub.Device("Samsung Galaxy S24")
# Trace model
input_shape = torch_model.get_input_spec()
sample_inputs = torch_model.sample_inputs()
pt_model = torch.jit.trace(torch_model, [torch.tensor(data[0]) for _, data in sample_inputs.items()])
# Compile model on a specific device
compile_job = hub.submit_compile_job(
model=pt_model,
device=device,
input_specs=torch_model.get_input_spec(),
)
# Get target model to run on-device
target_model = compile_job.get_target_model()
```
Step 2: **Performance profiling on cloud-hosted device**
After compiling the model in step 1, it can be profiled on-device using the
`target_model`. Note that this script runs the model on a device automatically
provisioned in the cloud. Once the job is submitted, you can navigate to the
provided job URL to view a variety of on-device performance metrics.
```python
profile_job = hub.submit_profile_job(
model=target_model,
device=device,
)
```
Step 3: **Verify on-device accuracy**
To verify the accuracy of the model on-device, you can run on-device inference
on sample input data on the same cloud hosted device.
```python
input_data = torch_model.sample_inputs()
inference_job = hub.submit_inference_job(
model=target_model,
device=device,
inputs=input_data,
)
on_device_output = inference_job.download_output_data()
```
With the output of the model, you can compute metrics like PSNR or relative
error, or spot-check the output against the expected output.
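Since this model produces embeddings, one natural spot check is the cosine similarity between the PyTorch embedding and the on-device embedding; the vectors below are placeholders, not real model outputs:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two 1-D vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Placeholder vectors standing in for the PyTorch and on-device embeddings.
torch_embedding = np.array([0.2, 0.4, 0.4, 0.8])
device_embedding = np.array([0.2001, 0.3999, 0.4002, 0.7998])
print(f"cosine similarity: {cosine_similarity(torch_embedding, device_embedding):.6f}")
```

A similarity very close to 1.0 indicates the on-device embedding is numerically consistent with the PyTorch reference.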
**Note**: This on-device profiling and inference requires access to Qualcomm®
AI Hub. [Sign up for access](https://myaccount.qualcomm.com/signup).
## Run demo on a cloud-hosted device
You can also run the demo on-device.
```bash
python -m qai_hub_models.models.nomic_embed_text.demo --eval-mode on-device
```
**NOTE**: If you are running in a Jupyter Notebook or a Google Colab-like
environment, add the following to your cell (instead of the above).
```
%run -m qai_hub_models.models.nomic_embed_text.demo -- --eval-mode on-device
```
## Deploying compiled model to Android
The models can be deployed using multiple runtimes:
- TensorFlow Lite (`.tflite` export): [This
tutorial](https://www.tensorflow.org/lite/android/quickstart) provides a
guide to deploy the .tflite model in an Android application.
- QNN (`.so` export): This [sample
app](https://docs.qualcomm.com/bundle/publicresource/topics/80-63442-50/sample_app.html)
provides instructions on how to use the `.so` shared library in an Android application.
## View on Qualcomm® AI Hub
Get more details on Nomic-Embed-Text's performance across various devices [here](https://aihub.qualcomm.com/models/nomic_embed_text).
Explore all available models on [Qualcomm® AI Hub](https://aihub.qualcomm.com/)
## License
* The license for the original implementation of Nomic-Embed-Text can be found
[here](https://huggingface.co/datasets/choosealicense/licenses/blob/main/markdown/apache-2.0.md).
* The license for the compiled assets for on-device deployment can be found [here](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/Qualcomm+AI+Hub+Proprietary+License.pdf)
## References
* [Introducing Nomic Embed: A Truly Open Embedding Model](https://www.nomic.ai/blog/posts/nomic-embed-text-v1)
* [Source Model Implementation](https://huggingface.co/nomic-ai/nomic-embed-text-v1.5)
## Community
* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
* For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com).
|
crystalline7/852668
|
crystalline7
| 2025-08-29T23:33:10Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-29T23:33:04Z |
[View on Civ Archive](https://civarchive.com/models/842942?modelVersionId=943128)
|
amethyst9/646210
|
amethyst9
| 2025-08-29T23:32:42Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-29T23:32:36Z |
[View on Civ Archive](https://civarchive.com/models/494834?modelVersionId=732101)
|
ultratopaz/698331
|
ultratopaz
| 2025-08-29T23:31:48Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-29T23:31:43Z |
[View on Civ Archive](https://civarchive.com/models/680268?modelVersionId=784888)
|
qualcomm/MobileNet-v3-Small
|
qualcomm
| 2025-08-29T23:31:40Z | 55 | 3 |
pytorch
|
[
"pytorch",
"tflite",
"backbone",
"real_time",
"android",
"image-classification",
"arxiv:1905.02244",
"license:other",
"region:us"
] |
image-classification
| 2024-02-25T22:36:27Z |
---
library_name: pytorch
license: other
tags:
- backbone
- real_time
- android
pipeline_tag: image-classification
---

# MobileNet-v3-Small: Optimized for Mobile Deployment
## Imagenet classifier and general purpose backbone
MobileNetV3Small is a machine learning model that can classify images from the Imagenet dataset. It can also be used as a backbone in building more complex models for specific use cases.
This model is an implementation of MobileNet-v3-Small found [here](https://github.com/pytorch/vision/blob/main/torchvision/models/mobilenetv3.py).
This repository provides scripts to run MobileNet-v3-Small on Qualcomm® devices.
More details on model performance across various devices can be found
[here](https://aihub.qualcomm.com/models/mobilenet_v3_small).
### Model Details
- **Model Type:** Model_use_case.image_classification
- **Model Stats:**
- Model checkpoint: Imagenet
- Input resolution: 224x224
- Number of parameters: 2.54M
- Model size (float): 9.71 MB
| Model | Precision | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Primary Compute Unit | Target Model
|---|---|---|---|---|---|---|---|---|
| MobileNet-v3-Small | float | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | TFLITE | 2.038 ms | 0 - 21 MB | NPU | [MobileNet-v3-Small.tflite](https://huggingface.co/qualcomm/MobileNet-v3-Small/blob/main/MobileNet-v3-Small.tflite) |
| MobileNet-v3-Small | float | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | QNN_DLC | 1.957 ms | 1 - 22 MB | NPU | [MobileNet-v3-Small.dlc](https://huggingface.co/qualcomm/MobileNet-v3-Small/blob/main/MobileNet-v3-Small.dlc) |
| MobileNet-v3-Small | float | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | TFLITE | 0.985 ms | 0 - 34 MB | NPU | [MobileNet-v3-Small.tflite](https://huggingface.co/qualcomm/MobileNet-v3-Small/blob/main/MobileNet-v3-Small.tflite) |
| MobileNet-v3-Small | float | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | QNN_DLC | 1.42 ms | 1 - 38 MB | NPU | [MobileNet-v3-Small.dlc](https://huggingface.co/qualcomm/MobileNet-v3-Small/blob/main/MobileNet-v3-Small.dlc) |
| MobileNet-v3-Small | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | TFLITE | 0.776 ms | 0 - 50 MB | NPU | [MobileNet-v3-Small.tflite](https://huggingface.co/qualcomm/MobileNet-v3-Small/blob/main/MobileNet-v3-Small.tflite) |
| MobileNet-v3-Small | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | QNN_DLC | 0.771 ms | 1 - 48 MB | NPU | [MobileNet-v3-Small.dlc](https://huggingface.co/qualcomm/MobileNet-v3-Small/blob/main/MobileNet-v3-Small.dlc) |
| MobileNet-v3-Small | float | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | TFLITE | 1.085 ms | 0 - 21 MB | NPU | [MobileNet-v3-Small.tflite](https://huggingface.co/qualcomm/MobileNet-v3-Small/blob/main/MobileNet-v3-Small.tflite) |
| MobileNet-v3-Small | float | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | QNN_DLC | 1.066 ms | 0 - 21 MB | NPU | [MobileNet-v3-Small.dlc](https://huggingface.co/qualcomm/MobileNet-v3-Small/blob/main/MobileNet-v3-Small.dlc) |
| MobileNet-v3-Small | float | SA7255P ADP | Qualcomm® SA7255P | TFLITE | 2.038 ms | 0 - 21 MB | NPU | [MobileNet-v3-Small.tflite](https://huggingface.co/qualcomm/MobileNet-v3-Small/blob/main/MobileNet-v3-Small.tflite) |
| MobileNet-v3-Small | float | SA7255P ADP | Qualcomm® SA7255P | QNN_DLC | 1.957 ms | 1 - 22 MB | NPU | [MobileNet-v3-Small.dlc](https://huggingface.co/qualcomm/MobileNet-v3-Small/blob/main/MobileNet-v3-Small.dlc) |
| MobileNet-v3-Small | float | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | TFLITE | 0.779 ms | 0 - 50 MB | NPU | [MobileNet-v3-Small.tflite](https://huggingface.co/qualcomm/MobileNet-v3-Small/blob/main/MobileNet-v3-Small.tflite) |
| MobileNet-v3-Small | float | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | QNN_DLC | 0.773 ms | 1 - 47 MB | NPU | [MobileNet-v3-Small.dlc](https://huggingface.co/qualcomm/MobileNet-v3-Small/blob/main/MobileNet-v3-Small.dlc) |
| MobileNet-v3-Small | float | SA8295P ADP | Qualcomm® SA8295P | TFLITE | 1.4 ms | 0 - 28 MB | NPU | [MobileNet-v3-Small.tflite](https://huggingface.co/qualcomm/MobileNet-v3-Small/blob/main/MobileNet-v3-Small.tflite) |
| MobileNet-v3-Small | float | SA8295P ADP | Qualcomm® SA8295P | QNN_DLC | 1.364 ms | 0 - 27 MB | NPU | [MobileNet-v3-Small.dlc](https://huggingface.co/qualcomm/MobileNet-v3-Small/blob/main/MobileNet-v3-Small.dlc) |
| MobileNet-v3-Small | float | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | TFLITE | 0.778 ms | 0 - 50 MB | NPU | [MobileNet-v3-Small.tflite](https://huggingface.co/qualcomm/MobileNet-v3-Small/blob/main/MobileNet-v3-Small.tflite) |
| MobileNet-v3-Small | float | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | QNN_DLC | 0.768 ms | 0 - 47 MB | NPU | [MobileNet-v3-Small.dlc](https://huggingface.co/qualcomm/MobileNet-v3-Small/blob/main/MobileNet-v3-Small.dlc) |
| MobileNet-v3-Small | float | SA8775P ADP | Qualcomm® SA8775P | TFLITE | 1.085 ms | 0 - 21 MB | NPU | [MobileNet-v3-Small.tflite](https://huggingface.co/qualcomm/MobileNet-v3-Small/blob/main/MobileNet-v3-Small.tflite) |
| MobileNet-v3-Small | float | SA8775P ADP | Qualcomm® SA8775P | QNN_DLC | 1.066 ms | 0 - 21 MB | NPU | [MobileNet-v3-Small.dlc](https://huggingface.co/qualcomm/MobileNet-v3-Small/blob/main/MobileNet-v3-Small.dlc) |
| MobileNet-v3-Small | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | TFLITE | 0.777 ms | 0 - 50 MB | NPU | [MobileNet-v3-Small.tflite](https://huggingface.co/qualcomm/MobileNet-v3-Small/blob/main/MobileNet-v3-Small.tflite) |
| MobileNet-v3-Small | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | QNN_DLC | 0.771 ms | 0 - 47 MB | NPU | [MobileNet-v3-Small.dlc](https://huggingface.co/qualcomm/MobileNet-v3-Small/blob/main/MobileNet-v3-Small.dlc) |
| MobileNet-v3-Small | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | ONNX | 0.641 ms | 0 - 54 MB | NPU | [MobileNet-v3-Small.onnx.zip](https://huggingface.co/qualcomm/MobileNet-v3-Small/blob/main/MobileNet-v3-Small.onnx.zip) |
| MobileNet-v3-Small | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | TFLITE | 0.511 ms | 0 - 35 MB | NPU | [MobileNet-v3-Small.tflite](https://huggingface.co/qualcomm/MobileNet-v3-Small/blob/main/MobileNet-v3-Small.tflite) |
| MobileNet-v3-Small | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | QNN_DLC | 0.501 ms | 1 - 33 MB | NPU | [MobileNet-v3-Small.dlc](https://huggingface.co/qualcomm/MobileNet-v3-Small/blob/main/MobileNet-v3-Small.dlc) |
| MobileNet-v3-Small | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | ONNX | 0.475 ms | 0 - 30 MB | NPU | [MobileNet-v3-Small.onnx.zip](https://huggingface.co/qualcomm/MobileNet-v3-Small/blob/main/MobileNet-v3-Small.onnx.zip) |
| MobileNet-v3-Small | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | TFLITE | 0.497 ms | 0 - 24 MB | NPU | [MobileNet-v3-Small.tflite](https://huggingface.co/qualcomm/MobileNet-v3-Small/blob/main/MobileNet-v3-Small.tflite) |
| MobileNet-v3-Small | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | QNN_DLC | 0.477 ms | 1 - 28 MB | NPU | [MobileNet-v3-Small.dlc](https://huggingface.co/qualcomm/MobileNet-v3-Small/blob/main/MobileNet-v3-Small.dlc) |
| MobileNet-v3-Small | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | ONNX | 0.429 ms | 1 - 27 MB | NPU | [MobileNet-v3-Small.onnx.zip](https://huggingface.co/qualcomm/MobileNet-v3-Small/blob/main/MobileNet-v3-Small.onnx.zip) |
| MobileNet-v3-Small | float | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN_DLC | 0.917 ms | 48 - 48 MB | NPU | [MobileNet-v3-Small.dlc](https://huggingface.co/qualcomm/MobileNet-v3-Small/blob/main/MobileNet-v3-Small.dlc) |
| MobileNet-v3-Small | float | Snapdragon X Elite CRD | Snapdragon® X Elite | ONNX | 0.694 ms | 5 - 5 MB | NPU | [MobileNet-v3-Small.onnx.zip](https://huggingface.co/qualcomm/MobileNet-v3-Small/blob/main/MobileNet-v3-Small.onnx.zip) |
## Installation
Install the package via pip:
```bash
pip install qai-hub-models
```
## Configure Qualcomm® AI Hub to run this model on a cloud-hosted device
Sign-in to [Qualcomm® AI Hub](https://app.aihub.qualcomm.com/) with your
Qualcomm® ID. Once signed in, navigate to `Account -> Settings -> API Token`.
With this API token, you can configure your client to run models on the cloud
hosted devices.
```bash
qai-hub configure --api_token API_TOKEN
```
Navigate to [docs](https://app.aihub.qualcomm.com/docs/) for more information.
## Demo off target
The package contains a simple end-to-end demo that downloads pre-trained
weights and runs this model on a sample input.
```bash
python -m qai_hub_models.models.mobilenet_v3_small.demo
```
The above demo runs a reference implementation of pre-processing, model
inference, and post processing.
**NOTE**: If you are running in a Jupyter Notebook or a Google Colab-like
environment, add the following to your cell (instead of the above).
```
%run -m qai_hub_models.models.mobilenet_v3_small.demo
```
### Run model on a cloud-hosted device
In addition to the demo, you can also run the model on a cloud-hosted Qualcomm®
device. This script does the following:
* Performance check on-device on a cloud-hosted device
* Downloads compiled assets that can be deployed on-device for Android.
* Accuracy check between PyTorch and on-device outputs.
```bash
python -m qai_hub_models.models.mobilenet_v3_small.export
```
## How does this work?
This [export script](https://aihub.qualcomm.com/models/mobilenet_v3_small/qai_hub_models/models/MobileNet-v3-Small/export.py)
leverages [Qualcomm® AI Hub](https://aihub.qualcomm.com/) to optimize, validate, and deploy this model
on-device. Let's go through each step below in detail:
Step 1: **Compile model for on-device deployment**
To compile a PyTorch model for on-device deployment, we first trace the model
in memory using `torch.jit.trace` and then call the `submit_compile_job` API.
```python
import torch
import qai_hub as hub
from qai_hub_models.models.mobilenet_v3_small import Model
# Load the model
torch_model = Model.from_pretrained()
# Device
device = hub.Device("Samsung Galaxy S24")
# Trace model
input_shape = torch_model.get_input_spec()
sample_inputs = torch_model.sample_inputs()
pt_model = torch.jit.trace(torch_model, [torch.tensor(data[0]) for _, data in sample_inputs.items()])
# Compile model on a specific device
compile_job = hub.submit_compile_job(
model=pt_model,
device=device,
input_specs=torch_model.get_input_spec(),
)
# Get target model to run on-device
target_model = compile_job.get_target_model()
```
Step 2: **Performance profiling on cloud-hosted device**
After compiling the model in step 1, it can be profiled on-device using the
`target_model`. Note that this script runs the model on a device automatically
provisioned in the cloud. Once the job is submitted, you can navigate to the
provided job URL to view a variety of on-device performance metrics.
```python
profile_job = hub.submit_profile_job(
model=target_model,
device=device,
)
```
Step 3: **Verify on-device accuracy**
To verify the accuracy of the model on-device, you can run on-device inference
on sample input data on the same cloud hosted device.
```python
input_data = torch_model.sample_inputs()
inference_job = hub.submit_inference_job(
model=target_model,
device=device,
inputs=input_data,
)
on_device_output = inference_job.download_output_data()
```
With the output of the model, you can compute metrics like PSNR or relative
error, or spot-check the output against the expected output.
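For a classifier, a typical spot check is whether the top-1 class agrees between the PyTorch and on-device outputs, along with the maximum absolute error; the logits below are placeholders, not real model outputs:

```python
import numpy as np

# Placeholder logits standing in for the PyTorch and on-device outputs
# (shape: batch x classes; a real ImageNet classifier has 1000 classes).
torch_logits = np.array([[0.1, 2.5, 0.3, 1.1]])
device_logits = np.array([[0.11, 2.49, 0.29, 1.12]])

# Top-1 predictions should agree, and elementwise error should be small.
top1_match = np.argmax(torch_logits, axis=-1) == np.argmax(device_logits, axis=-1)
max_abs_err = np.max(np.abs(torch_logits - device_logits))
print(f"top-1 agreement: {bool(top1_match.all())}, max abs error: {max_abs_err:.3f}")
```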
**Note**: This on-device profiling and inference requires access to Qualcomm®
AI Hub. [Sign up for access](https://myaccount.qualcomm.com/signup).
## Run demo on a cloud-hosted device
You can also run the demo on-device.
```bash
python -m qai_hub_models.models.mobilenet_v3_small.demo --eval-mode on-device
```
**NOTE**: If you are running in a Jupyter Notebook or a Google Colab-like
environment, add the following to your cell (instead of the above).
```
%run -m qai_hub_models.models.mobilenet_v3_small.demo -- --eval-mode on-device
```
## Deploying compiled model to Android
The models can be deployed using multiple runtimes:
- TensorFlow Lite (`.tflite` export): [This
tutorial](https://www.tensorflow.org/lite/android/quickstart) provides a
guide to deploy the .tflite model in an Android application.
- QNN (`.so` export): This [sample
app](https://docs.qualcomm.com/bundle/publicresource/topics/80-63442-50/sample_app.html)
provides instructions on how to use the `.so` shared library in an Android application.
## View on Qualcomm® AI Hub
Get more details on MobileNet-v3-Small's performance across various devices [here](https://aihub.qualcomm.com/models/mobilenet_v3_small).
Explore all available models on [Qualcomm® AI Hub](https://aihub.qualcomm.com/)
## License
* The license for the original implementation of MobileNet-v3-Small can be found
[here](https://github.com/pytorch/vision/blob/main/LICENSE).
* The license for the compiled assets for on-device deployment can be found [here](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/Qualcomm+AI+Hub+Proprietary+License.pdf)
## References
* [Searching for MobileNetV3](https://arxiv.org/abs/1905.02244)
* [Source Model Implementation](https://github.com/pytorch/vision/blob/main/torchvision/models/mobilenetv3.py)
## Community
* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
* For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com).
|
seraphimzzzz/774812
|
seraphimzzzz
| 2025-08-29T23:31:35Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-29T23:31:29Z |
[View on Civ Archive](https://civarchive.com/models/774205?modelVersionId=865919)
|
ultratopaz/490984
|
ultratopaz
| 2025-08-29T23:31:24Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-29T23:31:22Z |
[View on Civ Archive](https://civarchive.com/models/517988?modelVersionId=575583)
|
amethyst9/643580
|
amethyst9
| 2025-08-29T23:31:18Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-29T23:31:15Z |
[View on Civ Archive](https://civarchive.com/models/475846?modelVersionId=729427)
|
qualcomm/MobileNet-v2
|
qualcomm
| 2025-08-29T23:31:14Z | 106 | 2 |
pytorch
|
[
"pytorch",
"tflite",
"backbone",
"real_time",
"android",
"image-classification",
"arxiv:1801.04381",
"license:other",
"region:us"
] |
image-classification
| 2024-02-25T22:51:53Z |
---
library_name: pytorch
license: other
tags:
- backbone
- real_time
- android
pipeline_tag: image-classification
---

# MobileNet-v2: Optimized for Mobile Deployment
## Imagenet classifier and general purpose backbone
MobileNetV2 is a machine learning model that can classify images from the Imagenet dataset. It can also be used as a backbone in building more complex models for specific use cases.
This model is an implementation of MobileNet-v2 found [here](https://github.com/tonylins/pytorch-mobilenet-v2/tree/master).
This repository provides scripts to run MobileNet-v2 on Qualcomm® devices.
More details on model performance across various devices can be found
[here](https://aihub.qualcomm.com/models/mobilenet_v2).
### Model Details
- **Model Type:** Model_use_case.image_classification
- **Model Stats:**
- Model checkpoint: Imagenet
- Input resolution: 224x224
- Number of parameters: 3.49M
- Model size (float): 13.3 MB
- Model size (w8a16): 4.39 MB
| Model | Precision | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Primary Compute Unit | Target Model
|---|---|---|---|---|---|---|---|---|
| MobileNet-v2 | float | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | TFLITE | 2.741 ms | 0 - 24 MB | NPU | [MobileNet-v2.tflite](https://huggingface.co/qualcomm/MobileNet-v2/blob/main/MobileNet-v2.tflite) |
| MobileNet-v2 | float | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | QNN_DLC | 2.578 ms | 1 - 22 MB | NPU | [MobileNet-v2.dlc](https://huggingface.co/qualcomm/MobileNet-v2/blob/main/MobileNet-v2.dlc) |
| MobileNet-v2 | float | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | TFLITE | 1.031 ms | 0 - 37 MB | NPU | [MobileNet-v2.tflite](https://huggingface.co/qualcomm/MobileNet-v2/blob/main/MobileNet-v2.tflite) |
| MobileNet-v2 | float | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | QNN_DLC | 1.53 ms | 0 - 33 MB | NPU | [MobileNet-v2.dlc](https://huggingface.co/qualcomm/MobileNet-v2/blob/main/MobileNet-v2.dlc) |
| MobileNet-v2 | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | TFLITE | 0.862 ms | 0 - 67 MB | NPU | [MobileNet-v2.tflite](https://huggingface.co/qualcomm/MobileNet-v2/blob/main/MobileNet-v2.tflite) |
| MobileNet-v2 | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | QNN_DLC | 0.845 ms | 0 - 47 MB | NPU | [MobileNet-v2.dlc](https://huggingface.co/qualcomm/MobileNet-v2/blob/main/MobileNet-v2.dlc) |
| MobileNet-v2 | float | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | TFLITE | 1.198 ms | 0 - 24 MB | NPU | [MobileNet-v2.tflite](https://huggingface.co/qualcomm/MobileNet-v2/blob/main/MobileNet-v2.tflite) |
| MobileNet-v2 | float | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | QNN_DLC | 1.171 ms | 1 - 22 MB | NPU | [MobileNet-v2.dlc](https://huggingface.co/qualcomm/MobileNet-v2/blob/main/MobileNet-v2.dlc) |
| MobileNet-v2 | float | SA7255P ADP | Qualcomm® SA7255P | TFLITE | 2.741 ms | 0 - 24 MB | NPU | [MobileNet-v2.tflite](https://huggingface.co/qualcomm/MobileNet-v2/blob/main/MobileNet-v2.tflite) |
| MobileNet-v2 | float | SA7255P ADP | Qualcomm® SA7255P | QNN_DLC | 2.578 ms | 1 - 22 MB | NPU | [MobileNet-v2.dlc](https://huggingface.co/qualcomm/MobileNet-v2/blob/main/MobileNet-v2.dlc) |
| MobileNet-v2 | float | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | TFLITE | 0.866 ms | 0 - 67 MB | NPU | [MobileNet-v2.tflite](https://huggingface.co/qualcomm/MobileNet-v2/blob/main/MobileNet-v2.tflite) |
| MobileNet-v2 | float | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | QNN_DLC | 0.845 ms | 0 - 46 MB | NPU | [MobileNet-v2.dlc](https://huggingface.co/qualcomm/MobileNet-v2/blob/main/MobileNet-v2.dlc) |
| MobileNet-v2 | float | SA8295P ADP | Qualcomm® SA8295P | TFLITE | 1.429 ms | 0 - 29 MB | NPU | [MobileNet-v2.tflite](https://huggingface.co/qualcomm/MobileNet-v2/blob/main/MobileNet-v2.tflite) |
| MobileNet-v2 | float | SA8295P ADP | Qualcomm® SA8295P | QNN_DLC | 1.416 ms | 1 - 28 MB | NPU | [MobileNet-v2.dlc](https://huggingface.co/qualcomm/MobileNet-v2/blob/main/MobileNet-v2.dlc) |
| MobileNet-v2 | float | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | TFLITE | 0.87 ms | 0 - 68 MB | NPU | [MobileNet-v2.tflite](https://huggingface.co/qualcomm/MobileNet-v2/blob/main/MobileNet-v2.tflite) |
| MobileNet-v2 | float | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | QNN_DLC | 0.845 ms | 0 - 46 MB | NPU | [MobileNet-v2.dlc](https://huggingface.co/qualcomm/MobileNet-v2/blob/main/MobileNet-v2.dlc) |
| MobileNet-v2 | float | SA8775P ADP | Qualcomm® SA8775P | TFLITE | 1.198 ms | 0 - 24 MB | NPU | [MobileNet-v2.tflite](https://huggingface.co/qualcomm/MobileNet-v2/blob/main/MobileNet-v2.tflite) |
| MobileNet-v2 | float | SA8775P ADP | Qualcomm® SA8775P | QNN_DLC | 1.171 ms | 1 - 22 MB | NPU | [MobileNet-v2.dlc](https://huggingface.co/qualcomm/MobileNet-v2/blob/main/MobileNet-v2.dlc) |
| MobileNet-v2 | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | TFLITE | 0.865 ms | 0 - 68 MB | NPU | [MobileNet-v2.tflite](https://huggingface.co/qualcomm/MobileNet-v2/blob/main/MobileNet-v2.tflite) |
| MobileNet-v2 | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | QNN_DLC | 0.845 ms | 0 - 47 MB | NPU | [MobileNet-v2.dlc](https://huggingface.co/qualcomm/MobileNet-v2/blob/main/MobileNet-v2.dlc) |
| MobileNet-v2 | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | ONNX | 0.755 ms | 0 - 24 MB | NPU | [MobileNet-v2.onnx.zip](https://huggingface.co/qualcomm/MobileNet-v2/blob/main/MobileNet-v2.onnx.zip) |
| MobileNet-v2 | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | TFLITE | 0.567 ms | 0 - 38 MB | NPU | [MobileNet-v2.tflite](https://huggingface.co/qualcomm/MobileNet-v2/blob/main/MobileNet-v2.tflite) |
| MobileNet-v2 | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | QNN_DLC | 0.556 ms | 0 - 33 MB | NPU | [MobileNet-v2.dlc](https://huggingface.co/qualcomm/MobileNet-v2/blob/main/MobileNet-v2.dlc) |
| MobileNet-v2 | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | ONNX | 0.497 ms | 0 - 31 MB | NPU | [MobileNet-v2.onnx.zip](https://huggingface.co/qualcomm/MobileNet-v2/blob/main/MobileNet-v2.onnx.zip) |
| MobileNet-v2 | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | TFLITE | 0.551 ms | 0 - 29 MB | NPU | [MobileNet-v2.tflite](https://huggingface.co/qualcomm/MobileNet-v2/blob/main/MobileNet-v2.tflite) |
| MobileNet-v2 | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | QNN_DLC | 0.541 ms | 0 - 23 MB | NPU | [MobileNet-v2.dlc](https://huggingface.co/qualcomm/MobileNet-v2/blob/main/MobileNet-v2.dlc) |
| MobileNet-v2 | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | ONNX | 0.506 ms | 0 - 23 MB | NPU | [MobileNet-v2.onnx.zip](https://huggingface.co/qualcomm/MobileNet-v2/blob/main/MobileNet-v2.onnx.zip) |
| MobileNet-v2 | float | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN_DLC | 1.011 ms | 53 - 53 MB | NPU | [MobileNet-v2.dlc](https://huggingface.co/qualcomm/MobileNet-v2/blob/main/MobileNet-v2.dlc) |
| MobileNet-v2 | float | Snapdragon X Elite CRD | Snapdragon® X Elite | ONNX | 0.817 ms | 7 - 7 MB | NPU | [MobileNet-v2.onnx.zip](https://huggingface.co/qualcomm/MobileNet-v2/blob/main/MobileNet-v2.onnx.zip) |
| MobileNet-v2 | w8a16 | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | QNN_DLC | 1.776 ms | 0 - 20 MB | NPU | [MobileNet-v2.dlc](https://huggingface.co/qualcomm/MobileNet-v2/blob/main/MobileNet-v2_w8a16.dlc) |
| MobileNet-v2 | w8a16 | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | QNN_DLC | 0.978 ms | 0 - 32 MB | NPU | [MobileNet-v2.dlc](https://huggingface.co/qualcomm/MobileNet-v2/blob/main/MobileNet-v2_w8a16.dlc) |
| MobileNet-v2 | w8a16 | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | QNN_DLC | 0.822 ms | 0 - 28 MB | NPU | [MobileNet-v2.dlc](https://huggingface.co/qualcomm/MobileNet-v2/blob/main/MobileNet-v2_w8a16.dlc) |
| MobileNet-v2 | w8a16 | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | QNN_DLC | 1.024 ms | 0 - 20 MB | NPU | [MobileNet-v2.dlc](https://huggingface.co/qualcomm/MobileNet-v2/blob/main/MobileNet-v2_w8a16.dlc) |
| MobileNet-v2 | w8a16 | RB3 Gen 2 (Proxy) | Qualcomm® QCS6490 (Proxy) | QNN_DLC | 2.812 ms | 0 - 24 MB | NPU | [MobileNet-v2.dlc](https://huggingface.co/qualcomm/MobileNet-v2/blob/main/MobileNet-v2_w8a16.dlc) |
| MobileNet-v2 | w8a16 | SA7255P ADP | Qualcomm® SA7255P | QNN_DLC | 1.776 ms | 0 - 20 MB | NPU | [MobileNet-v2.dlc](https://huggingface.co/qualcomm/MobileNet-v2/blob/main/MobileNet-v2_w8a16.dlc) |
| MobileNet-v2 | w8a16 | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | QNN_DLC | 0.824 ms | 0 - 28 MB | NPU | [MobileNet-v2.dlc](https://huggingface.co/qualcomm/MobileNet-v2/blob/main/MobileNet-v2_w8a16.dlc) |
| MobileNet-v2 | w8a16 | SA8295P ADP | Qualcomm® SA8295P | QNN_DLC | 1.255 ms | 0 - 29 MB | NPU | [MobileNet-v2.dlc](https://huggingface.co/qualcomm/MobileNet-v2/blob/main/MobileNet-v2_w8a16.dlc) |
| MobileNet-v2 | w8a16 | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | QNN_DLC | 0.816 ms | 0 - 29 MB | NPU | [MobileNet-v2.dlc](https://huggingface.co/qualcomm/MobileNet-v2/blob/main/MobileNet-v2_w8a16.dlc) |
| MobileNet-v2 | w8a16 | SA8775P ADP | Qualcomm® SA8775P | QNN_DLC | 1.024 ms | 0 - 20 MB | NPU | [MobileNet-v2.dlc](https://huggingface.co/qualcomm/MobileNet-v2/blob/main/MobileNet-v2_w8a16.dlc) |
| MobileNet-v2 | w8a16 | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | QNN_DLC | 0.82 ms | 0 - 28 MB | NPU | [MobileNet-v2.dlc](https://huggingface.co/qualcomm/MobileNet-v2/blob/main/MobileNet-v2_w8a16.dlc) |
| MobileNet-v2 | w8a16 | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | ONNX | 84.583 ms | 0 - 190 MB | NPU | [MobileNet-v2.onnx.zip](https://huggingface.co/qualcomm/MobileNet-v2/blob/main/MobileNet-v2_w8a16.onnx.zip) |
| MobileNet-v2 | w8a16 | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | QNN_DLC | 0.562 ms | 0 - 33 MB | NPU | [MobileNet-v2.dlc](https://huggingface.co/qualcomm/MobileNet-v2/blob/main/MobileNet-v2_w8a16.dlc) |
| MobileNet-v2 | w8a16 | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | ONNX | 67.641 ms | 4 - 1977 MB | NPU | [MobileNet-v2.onnx.zip](https://huggingface.co/qualcomm/MobileNet-v2/blob/main/MobileNet-v2_w8a16.onnx.zip) |
| MobileNet-v2 | w8a16 | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | QNN_DLC | 0.362 ms | 0 - 32 MB | NPU | [MobileNet-v2.dlc](https://huggingface.co/qualcomm/MobileNet-v2/blob/main/MobileNet-v2_w8a16.dlc) |
| MobileNet-v2 | w8a16 | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | ONNX | 69.891 ms | 3 - 769 MB | NPU | [MobileNet-v2.onnx.zip](https://huggingface.co/qualcomm/MobileNet-v2/blob/main/MobileNet-v2_w8a16.onnx.zip) |
| MobileNet-v2 | w8a16 | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN_DLC | 0.97 ms | 23 - 23 MB | NPU | [MobileNet-v2.dlc](https://huggingface.co/qualcomm/MobileNet-v2/blob/main/MobileNet-v2_w8a16.dlc) |
| MobileNet-v2 | w8a16 | Snapdragon X Elite CRD | Snapdragon® X Elite | ONNX | 87.472 ms | 41 - 41 MB | NPU | [MobileNet-v2.onnx.zip](https://huggingface.co/qualcomm/MobileNet-v2/blob/main/MobileNet-v2_w8a16.onnx.zip) |
## Installation
Install the package via pip:
```bash
pip install qai-hub-models
```
## Configure Qualcomm® AI Hub to run this model on a cloud-hosted device
Sign-in to [Qualcomm® AI Hub](https://app.aihub.qualcomm.com/) with your
Qualcomm® ID. Once signed in, navigate to `Account -> Settings -> API Token`.
With this API token, you can configure your client to run models on the cloud
hosted devices.
```bash
qai-hub configure --api_token API_TOKEN
```
Navigate to [docs](https://app.aihub.qualcomm.com/docs/) for more information.
## Demo off target
The package contains a simple end-to-end demo that downloads pre-trained
weights and runs this model on a sample input.
```bash
python -m qai_hub_models.models.mobilenet_v2.demo
```
The above demo runs a reference implementation of pre-processing, model
inference, and post processing.
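As a rough illustration of what that pre-processing typically involves (a sketch only — the exact resize and normalization used by the package may differ), an Imagenet-style transform can be written as:

```python
import numpy as np
from PIL import Image

# Standard Imagenet normalization constants (an assumption; verify against the package).
IMAGENET_MEAN = np.array([0.485, 0.456, 0.406], dtype=np.float32)
IMAGENET_STD = np.array([0.229, 0.224, 0.225], dtype=np.float32)

def preprocess(img: Image.Image, size: int = 224) -> np.ndarray:
    """Center-crop to a square, resize, and normalize; returns an NCHW float32 batch."""
    w, h = img.size
    side = min(w, h)
    left, top = (w - side) // 2, (h - side) // 2
    img = img.crop((left, top, left + side, top + side)).resize((size, size))
    x = np.asarray(img, dtype=np.float32) / 255.0   # HWC, values in [0, 1]
    x = (x - IMAGENET_MEAN) / IMAGENET_STD          # per-channel normalization
    return x.transpose(2, 0, 1)[None, ...]          # HWC -> CHW, add batch dim

batch = preprocess(Image.new("RGB", (320, 240), (128, 128, 128)))
print(batch.shape)  # -> (1, 3, 224, 224)
```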
**NOTE**: If you want to run this in a Jupyter Notebook or a Google Colab-like
environment, please add the following to your cell (instead of the above).
```
%run -m qai_hub_models.models.mobilenet_v2.demo
```
### Run model on a cloud-hosted device
In addition to the demo, you can also run the model on a cloud-hosted Qualcomm®
device. This script does the following:
* Runs a performance check on-device on a cloud-hosted device.
* Downloads compiled assets that can be deployed on-device for Android.
* Runs an accuracy check between PyTorch and on-device outputs.
```bash
python -m qai_hub_models.models.mobilenet_v2.export
```
## How does this work?
This [export script](https://aihub.qualcomm.com/models/mobilenet_v2/qai_hub_models/models/MobileNet-v2/export.py)
leverages [Qualcomm® AI Hub](https://aihub.qualcomm.com/) to optimize, validate, and deploy this model
on-device. Let's go through each step below in detail:
Step 1: **Compile model for on-device deployment**
To compile a PyTorch model for on-device deployment, we first trace the model
in memory using `jit.trace` and then call the `submit_compile_job` API.
```python
import torch
import qai_hub as hub
from qai_hub_models.models.mobilenet_v2 import Model
# Load the model
torch_model = Model.from_pretrained()
# Device
device = hub.Device("Samsung Galaxy S24")
# Trace model
input_shape = torch_model.get_input_spec()
sample_inputs = torch_model.sample_inputs()
pt_model = torch.jit.trace(torch_model, [torch.tensor(data[0]) for _, data in sample_inputs.items()])
# Compile model on a specific device
compile_job = hub.submit_compile_job(
model=pt_model,
device=device,
input_specs=torch_model.get_input_spec(),
)
# Get target model to run on-device
target_model = compile_job.get_target_model()
```
Step 2: **Performance profiling on cloud-hosted device**
After compiling the model in Step 1, it can be profiled on-device using the
`target_model`. Note that this script runs the model on a device automatically
provisioned in the cloud. Once the job is submitted, you can navigate to a
provided job URL to view a variety of on-device performance metrics.
```python
profile_job = hub.submit_profile_job(
model=target_model,
device=device,
)
```
Step 3: **Verify on-device accuracy**
To verify the accuracy of the model on-device, you can run on-device inference
on sample input data on the same cloud hosted device.
```python
input_data = torch_model.sample_inputs()
inference_job = hub.submit_inference_job(
model=target_model,
device=device,
inputs=input_data,
)
on_device_output = inference_job.download_output_data()
```
With the output of the model, you can compute metrics like PSNR and relative
errors, or spot-check the output against the expected output.
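As one concrete sketch of such a check, the snippet below computes PSNR between a reference array and a perturbed copy; the random arrays here merely stand in for the PyTorch and on-device outputs, whose actual names and shapes you should take from the inference job:

```python
import numpy as np

def psnr(reference: np.ndarray, test: np.ndarray) -> float:
    """Peak signal-to-noise ratio in dB between two arrays of the same shape."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    peak = np.max(np.abs(reference))  # use the reference's dynamic range as the peak
    return 20.0 * np.log10(peak) - 10.0 * np.log10(mse)

# Hypothetical usage: compare the PyTorch output to the on-device output.
torch_out = np.random.rand(1, 1000).astype(np.float32)
device_out = torch_out + np.random.normal(0, 1e-3, torch_out.shape).astype(np.float32)
print(f"PSNR: {psnr(torch_out, device_out):.1f} dB")
```

Higher PSNR means the on-device output tracks the reference more closely; what counts as "good enough" depends on the model and precision (quantized variants will generally score lower than float).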
**Note**: This on-device profiling and inference requires access to Qualcomm®
AI Hub. [Sign up for access](https://myaccount.qualcomm.com/signup).
## Run demo on a cloud-hosted device
You can also run the demo on-device.
```bash
python -m qai_hub_models.models.mobilenet_v2.demo --eval-mode on-device
```
**NOTE**: If you want to run this in a Jupyter Notebook or a Google Colab-like
environment, please add the following to your cell (instead of the above).
```
%run -m qai_hub_models.models.mobilenet_v2.demo -- --eval-mode on-device
```
## Deploying compiled model to Android
The models can be deployed using multiple runtimes:
- TensorFlow Lite (`.tflite` export): [This
tutorial](https://www.tensorflow.org/lite/android/quickstart) provides a
guide to deploy the .tflite model in an Android application.
- QNN (`.so` export): This [sample
app](https://docs.qualcomm.com/bundle/publicresource/topics/80-63442-50/sample_app.html)
provides instructions on how to use the `.so` shared library in an Android application.
## View on Qualcomm® AI Hub
Get more details on MobileNet-v2's performance across various devices [here](https://aihub.qualcomm.com/models/mobilenet_v2).
Explore all available models on [Qualcomm® AI Hub](https://aihub.qualcomm.com/)
## License
* The license for the original implementation of MobileNet-v2 can be found
[here](https://github.com/tonylins/pytorch-mobilenet-v2/blob/master/LICENSE).
* The license for the compiled assets for on-device deployment can be found [here](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/Qualcomm+AI+Hub+Proprietary+License.pdf)
## References
* [MobileNetV2: Inverted Residuals and Linear Bottlenecks](https://arxiv.org/abs/1801.04381)
* [Source Model Implementation](https://github.com/tonylins/pytorch-mobilenet-v2/tree/master)
## Community
* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
* For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com).
|
qualcomm/Mobile-VIT
|
qualcomm
| 2025-08-29T23:30:58Z | 29 | 0 |
pytorch
|
[
"pytorch",
"tflite",
"backbone",
"android",
"image-classification",
"arxiv:2110.02178",
"license:other",
"region:us"
] |
image-classification
| 2025-06-23T21:50:42Z |
---
library_name: pytorch
license: other
tags:
- backbone
- android
pipeline_tag: image-classification
---

# Mobile-VIT: Optimized for Mobile Deployment
## Imagenet classifier and general purpose backbone
MobileVit is a machine learning model that can classify images from the Imagenet dataset. It can also be used as a backbone in building more complex models for specific use cases.
This model is an implementation of Mobile-VIT found [here](https://github.com/apple/ml-cvnets).
This repository provides scripts to run Mobile-VIT on Qualcomm® devices.
More details on model performance across various devices can be found
[here](https://aihub.qualcomm.com/models/mobile_vit).
### Model Details
- **Model Type:** Model_use_case.image_classification
- **Model Stats:**
- Model checkpoint: Imagenet
- Input resolution: 224x224
- Number of parameters: 5.57M
- Model size (float): 21.4 MB
- Model size (w8a16): 6.56 MB
| Model | Precision | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Primary Compute Unit | Target Model
|---|---|---|---|---|---|---|---|---|
| Mobile-VIT | float | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | TFLITE | 10.709 ms | 0 - 40 MB | NPU | [Mobile-VIT.tflite](https://huggingface.co/qualcomm/Mobile-VIT/blob/main/Mobile-VIT.tflite) |
| Mobile-VIT | float | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | QNN_DLC | 11.011 ms | 1 - 48 MB | NPU | [Mobile-VIT.dlc](https://huggingface.co/qualcomm/Mobile-VIT/blob/main/Mobile-VIT.dlc) |
| Mobile-VIT | float | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | TFLITE | 5.959 ms | 0 - 54 MB | NPU | [Mobile-VIT.tflite](https://huggingface.co/qualcomm/Mobile-VIT/blob/main/Mobile-VIT.tflite) |
| Mobile-VIT | float | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | QNN_DLC | 6.375 ms | 0 - 55 MB | NPU | [Mobile-VIT.dlc](https://huggingface.co/qualcomm/Mobile-VIT/blob/main/Mobile-VIT.dlc) |
| Mobile-VIT | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | TFLITE | 4.027 ms | 0 - 69 MB | NPU | [Mobile-VIT.tflite](https://huggingface.co/qualcomm/Mobile-VIT/blob/main/Mobile-VIT.tflite) |
| Mobile-VIT | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | QNN_DLC | 4.194 ms | 1 - 13 MB | NPU | [Mobile-VIT.dlc](https://huggingface.co/qualcomm/Mobile-VIT/blob/main/Mobile-VIT.dlc) |
| Mobile-VIT | float | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | TFLITE | 4.904 ms | 0 - 40 MB | NPU | [Mobile-VIT.tflite](https://huggingface.co/qualcomm/Mobile-VIT/blob/main/Mobile-VIT.tflite) |
| Mobile-VIT | float | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | QNN_DLC | 4.966 ms | 0 - 47 MB | NPU | [Mobile-VIT.dlc](https://huggingface.co/qualcomm/Mobile-VIT/blob/main/Mobile-VIT.dlc) |
| Mobile-VIT | float | SA7255P ADP | Qualcomm® SA7255P | TFLITE | 10.709 ms | 0 - 40 MB | NPU | [Mobile-VIT.tflite](https://huggingface.co/qualcomm/Mobile-VIT/blob/main/Mobile-VIT.tflite) |
| Mobile-VIT | float | SA7255P ADP | Qualcomm® SA7255P | QNN_DLC | 11.011 ms | 1 - 48 MB | NPU | [Mobile-VIT.dlc](https://huggingface.co/qualcomm/Mobile-VIT/blob/main/Mobile-VIT.dlc) |
| Mobile-VIT | float | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | TFLITE | 4.068 ms | 0 - 69 MB | NPU | [Mobile-VIT.tflite](https://huggingface.co/qualcomm/Mobile-VIT/blob/main/Mobile-VIT.tflite) |
| Mobile-VIT | float | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | QNN_DLC | 4.176 ms | 1 - 13 MB | NPU | [Mobile-VIT.dlc](https://huggingface.co/qualcomm/Mobile-VIT/blob/main/Mobile-VIT.dlc) |
| Mobile-VIT | float | SA8295P ADP | Qualcomm® SA8295P | TFLITE | 7.081 ms | 0 - 46 MB | NPU | [Mobile-VIT.tflite](https://huggingface.co/qualcomm/Mobile-VIT/blob/main/Mobile-VIT.tflite) |
| Mobile-VIT | float | SA8295P ADP | Qualcomm® SA8295P | QNN_DLC | 7.224 ms | 1 - 47 MB | NPU | [Mobile-VIT.dlc](https://huggingface.co/qualcomm/Mobile-VIT/blob/main/Mobile-VIT.dlc) |
| Mobile-VIT | float | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | TFLITE | 4.081 ms | 0 - 68 MB | NPU | [Mobile-VIT.tflite](https://huggingface.co/qualcomm/Mobile-VIT/blob/main/Mobile-VIT.tflite) |
| Mobile-VIT | float | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | QNN_DLC | 4.189 ms | 1 - 14 MB | NPU | [Mobile-VIT.dlc](https://huggingface.co/qualcomm/Mobile-VIT/blob/main/Mobile-VIT.dlc) |
| Mobile-VIT | float | SA8775P ADP | Qualcomm® SA8775P | TFLITE | 4.904 ms | 0 - 40 MB | NPU | [Mobile-VIT.tflite](https://huggingface.co/qualcomm/Mobile-VIT/blob/main/Mobile-VIT.tflite) |
| Mobile-VIT | float | SA8775P ADP | Qualcomm® SA8775P | QNN_DLC | 4.966 ms | 0 - 47 MB | NPU | [Mobile-VIT.dlc](https://huggingface.co/qualcomm/Mobile-VIT/blob/main/Mobile-VIT.dlc) |
| Mobile-VIT | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | TFLITE | 4.047 ms | 0 - 69 MB | NPU | [Mobile-VIT.tflite](https://huggingface.co/qualcomm/Mobile-VIT/blob/main/Mobile-VIT.tflite) |
| Mobile-VIT | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | QNN_DLC | 4.202 ms | 1 - 12 MB | NPU | [Mobile-VIT.dlc](https://huggingface.co/qualcomm/Mobile-VIT/blob/main/Mobile-VIT.dlc) |
| Mobile-VIT | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | ONNX | 4.846 ms | 0 - 39 MB | NPU | [Mobile-VIT.onnx.zip](https://huggingface.co/qualcomm/Mobile-VIT/blob/main/Mobile-VIT.onnx.zip) |
| Mobile-VIT | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | TFLITE | 2.822 ms | 0 - 52 MB | NPU | [Mobile-VIT.tflite](https://huggingface.co/qualcomm/Mobile-VIT/blob/main/Mobile-VIT.tflite) |
| Mobile-VIT | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | QNN_DLC | 2.882 ms | 1 - 62 MB | NPU | [Mobile-VIT.dlc](https://huggingface.co/qualcomm/Mobile-VIT/blob/main/Mobile-VIT.dlc) |
| Mobile-VIT | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | ONNX | 3.18 ms | 0 - 58 MB | NPU | [Mobile-VIT.onnx.zip](https://huggingface.co/qualcomm/Mobile-VIT/blob/main/Mobile-VIT.onnx.zip) |
| Mobile-VIT | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | TFLITE | 2.735 ms | 0 - 44 MB | NPU | [Mobile-VIT.tflite](https://huggingface.co/qualcomm/Mobile-VIT/blob/main/Mobile-VIT.tflite) |
| Mobile-VIT | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | QNN_DLC | 2.777 ms | 1 - 52 MB | NPU | [Mobile-VIT.dlc](https://huggingface.co/qualcomm/Mobile-VIT/blob/main/Mobile-VIT.dlc) |
| Mobile-VIT | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | ONNX | 3.109 ms | 1 - 52 MB | NPU | [Mobile-VIT.onnx.zip](https://huggingface.co/qualcomm/Mobile-VIT/blob/main/Mobile-VIT.onnx.zip) |
| Mobile-VIT | float | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN_DLC | 4.579 ms | 25 - 25 MB | NPU | [Mobile-VIT.dlc](https://huggingface.co/qualcomm/Mobile-VIT/blob/main/Mobile-VIT.dlc) |
| Mobile-VIT | float | Snapdragon X Elite CRD | Snapdragon® X Elite | ONNX | 5.005 ms | 12 - 12 MB | NPU | [Mobile-VIT.onnx.zip](https://huggingface.co/qualcomm/Mobile-VIT/blob/main/Mobile-VIT.onnx.zip) |
| Mobile-VIT | w8a16 | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | QNN_DLC | 6.524 ms | 0 - 38 MB | NPU | [Mobile-VIT.dlc](https://huggingface.co/qualcomm/Mobile-VIT/blob/main/Mobile-VIT_w8a16.dlc) |
| Mobile-VIT | w8a16 | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | QNN_DLC | 4.224 ms | 0 - 51 MB | NPU | [Mobile-VIT.dlc](https://huggingface.co/qualcomm/Mobile-VIT/blob/main/Mobile-VIT_w8a16.dlc) |
| Mobile-VIT | w8a16 | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | QNN_DLC | 3.67 ms | 0 - 16 MB | NPU | [Mobile-VIT.dlc](https://huggingface.co/qualcomm/Mobile-VIT/blob/main/Mobile-VIT_w8a16.dlc) |
| Mobile-VIT | w8a16 | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | QNN_DLC | 4.129 ms | 0 - 37 MB | NPU | [Mobile-VIT.dlc](https://huggingface.co/qualcomm/Mobile-VIT/blob/main/Mobile-VIT_w8a16.dlc) |
| Mobile-VIT | w8a16 | RB3 Gen 2 (Proxy) | Qualcomm® QCS6490 (Proxy) | QNN_DLC | 21.818 ms | 0 - 83 MB | NPU | [Mobile-VIT.dlc](https://huggingface.co/qualcomm/Mobile-VIT/blob/main/Mobile-VIT_w8a16.dlc) |
| Mobile-VIT | w8a16 | SA7255P ADP | Qualcomm® SA7255P | QNN_DLC | 6.524 ms | 0 - 38 MB | NPU | [Mobile-VIT.dlc](https://huggingface.co/qualcomm/Mobile-VIT/blob/main/Mobile-VIT_w8a16.dlc) |
| Mobile-VIT | w8a16 | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | QNN_DLC | 3.691 ms | 0 - 17 MB | NPU | [Mobile-VIT.dlc](https://huggingface.co/qualcomm/Mobile-VIT/blob/main/Mobile-VIT_w8a16.dlc) |
| Mobile-VIT | w8a16 | SA8295P ADP | Qualcomm® SA8295P | QNN_DLC | 4.719 ms | 0 - 44 MB | NPU | [Mobile-VIT.dlc](https://huggingface.co/qualcomm/Mobile-VIT/blob/main/Mobile-VIT_w8a16.dlc) |
| Mobile-VIT | w8a16 | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | QNN_DLC | 3.695 ms | 0 - 14 MB | NPU | [Mobile-VIT.dlc](https://huggingface.co/qualcomm/Mobile-VIT/blob/main/Mobile-VIT_w8a16.dlc) |
| Mobile-VIT | w8a16 | SA8775P ADP | Qualcomm® SA8775P | QNN_DLC | 4.129 ms | 0 - 37 MB | NPU | [Mobile-VIT.dlc](https://huggingface.co/qualcomm/Mobile-VIT/blob/main/Mobile-VIT_w8a16.dlc) |
| Mobile-VIT | w8a16 | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | QNN_DLC | 3.684 ms | 0 - 16 MB | NPU | [Mobile-VIT.dlc](https://huggingface.co/qualcomm/Mobile-VIT/blob/main/Mobile-VIT_w8a16.dlc) |
| Mobile-VIT | w8a16 | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | ONNX | 39.176 ms | 36 - 133 MB | NPU | [Mobile-VIT.onnx.zip](https://huggingface.co/qualcomm/Mobile-VIT/blob/main/Mobile-VIT_w8a16.onnx.zip) |
| Mobile-VIT | w8a16 | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | QNN_DLC | 2.38 ms | 0 - 49 MB | NPU | [Mobile-VIT.dlc](https://huggingface.co/qualcomm/Mobile-VIT/blob/main/Mobile-VIT_w8a16.dlc) |
| Mobile-VIT | w8a16 | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | ONNX | 33.09 ms | 14 - 678 MB | NPU | [Mobile-VIT.onnx.zip](https://huggingface.co/qualcomm/Mobile-VIT/blob/main/Mobile-VIT_w8a16.onnx.zip) |
| Mobile-VIT | w8a16 | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | QNN_DLC | 2.176 ms | 0 - 43 MB | NPU | [Mobile-VIT.dlc](https://huggingface.co/qualcomm/Mobile-VIT/blob/main/Mobile-VIT_w8a16.dlc) |
| Mobile-VIT | w8a16 | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | ONNX | 31.948 ms | 18 - 707 MB | NPU | [Mobile-VIT.onnx.zip](https://huggingface.co/qualcomm/Mobile-VIT/blob/main/Mobile-VIT_w8a16.onnx.zip) |
| Mobile-VIT | w8a16 | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN_DLC | 4.049 ms | 5 - 5 MB | NPU | [Mobile-VIT.dlc](https://huggingface.co/qualcomm/Mobile-VIT/blob/main/Mobile-VIT_w8a16.dlc) |
| Mobile-VIT | w8a16 | Snapdragon X Elite CRD | Snapdragon® X Elite | ONNX | 44.433 ms | 58 - 58 MB | NPU | [Mobile-VIT.onnx.zip](https://huggingface.co/qualcomm/Mobile-VIT/blob/main/Mobile-VIT_w8a16.onnx.zip) |
## Installation
Install the package via pip:
```bash
pip install "qai-hub-models[mobile-vit]"
```
## Configure Qualcomm® AI Hub to run this model on a cloud-hosted device
Sign-in to [Qualcomm® AI Hub](https://app.aihub.qualcomm.com/) with your
Qualcomm® ID. Once signed in, navigate to `Account -> Settings -> API Token`.
With this API token, you can configure your client to run models on the cloud
hosted devices.
```bash
qai-hub configure --api_token API_TOKEN
```
Navigate to [docs](https://app.aihub.qualcomm.com/docs/) for more information.
## Demo off target
The package contains a simple end-to-end demo that downloads pre-trained
weights and runs this model on a sample input.
```bash
python -m qai_hub_models.models.mobile_vit.demo
```
The above demo runs a reference implementation of pre-processing, model
inference, and post processing.
**NOTE**: If you want to run this in a Jupyter Notebook or a Google Colab-like
environment, please add the following to your cell (instead of the above).
```
%run -m qai_hub_models.models.mobile_vit.demo
```
### Run model on a cloud-hosted device
In addition to the demo, you can also run the model on a cloud-hosted Qualcomm®
device. This script does the following:
* Runs a performance check on-device on a cloud-hosted device.
* Downloads compiled assets that can be deployed on-device for Android.
* Runs an accuracy check between PyTorch and on-device outputs.
```bash
python -m qai_hub_models.models.mobile_vit.export
```
## How does this work?
This [export script](https://aihub.qualcomm.com/models/mobile_vit/qai_hub_models/models/Mobile-VIT/export.py)
leverages [Qualcomm® AI Hub](https://aihub.qualcomm.com/) to optimize, validate, and deploy this model
on-device. Let's go through each step below in detail:
Step 1: **Compile model for on-device deployment**
To compile a PyTorch model for on-device deployment, we first trace the model
in memory using `jit.trace` and then call the `submit_compile_job` API.
```python
import torch
import qai_hub as hub
from qai_hub_models.models.mobile_vit import Model
# Load the model
torch_model = Model.from_pretrained()
# Device
device = hub.Device("Samsung Galaxy S24")
# Trace model
input_shape = torch_model.get_input_spec()
sample_inputs = torch_model.sample_inputs()
pt_model = torch.jit.trace(torch_model, [torch.tensor(data[0]) for _, data in sample_inputs.items()])
# Compile model on a specific device
compile_job = hub.submit_compile_job(
model=pt_model,
device=device,
input_specs=torch_model.get_input_spec(),
)
# Get target model to run on-device
target_model = compile_job.get_target_model()
```
Step 2: **Performance profiling on cloud-hosted device**
After compiling the model in Step 1, it can be profiled on-device using the
`target_model`. Note that this script runs the model on a device automatically
provisioned in the cloud. Once the job is submitted, you can navigate to a
provided job URL to view a variety of on-device performance metrics.
```python
profile_job = hub.submit_profile_job(
model=target_model,
device=device,
)
```
Step 3: **Verify on-device accuracy**
To verify the accuracy of the model on-device, you can run on-device inference
on sample input data on the same cloud hosted device.
```python
input_data = torch_model.sample_inputs()
inference_job = hub.submit_inference_job(
model=target_model,
device=device,
inputs=input_data,
)
on_device_output = inference_job.download_output_data()
```
With the output of the model, you can compute metrics like PSNR and relative
errors, or spot-check the output against the expected output.
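Alternatively, a spot check can compare top-5 predictions between the PyTorch and on-device runs. The sketch below assumes the classifier output is a `(1, 1000)` array of Imagenet logits; the shape and the class indices used in the demo are illustrative, not taken from the model:

```python
import numpy as np

def top_k_indices(logits: np.ndarray, k: int = 5) -> np.ndarray:
    """Return the class indices of the k largest logits, highest first."""
    flat = logits.reshape(-1)
    top = np.argpartition(flat, -k)[-k:]          # k largest, unordered
    return top[np.argsort(flat[top])[::-1]]       # sort those k by value, descending

# Hypothetical logits with five classes boosted above the rest.
logits = np.zeros((1, 1000), dtype=np.float32)
logits[0, [281, 285, 282, 287, 283]] = [9.1, 8.7, 8.2, 7.5, 6.9]
print(top_k_indices(logits))  # -> [281 285 282 287 283]
```

Running the same function on both the PyTorch and on-device logits and comparing the resulting index arrays gives a quick, human-readable sanity check that quantization or compilation has not changed the model's predictions.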
**Note**: This on-device profiling and inference requires access to Qualcomm®
AI Hub. [Sign up for access](https://myaccount.qualcomm.com/signup).
## Run demo on a cloud-hosted device
You can also run the demo on-device.
```bash
python -m qai_hub_models.models.mobile_vit.demo --eval-mode on-device
```
**NOTE**: If you want to run this in a Jupyter Notebook or a Google Colab-like
environment, please add the following to your cell (instead of the above).
```
%run -m qai_hub_models.models.mobile_vit.demo -- --eval-mode on-device
```
## Deploying compiled model to Android
The models can be deployed using multiple runtimes:
- TensorFlow Lite (`.tflite` export): [This
tutorial](https://www.tensorflow.org/lite/android/quickstart) provides a
guide to deploy the .tflite model in an Android application.
- QNN (`.so` export): This [sample
app](https://docs.qualcomm.com/bundle/publicresource/topics/80-63442-50/sample_app.html)
provides instructions on how to use the `.so` shared library in an Android application.
## View on Qualcomm® AI Hub
Get more details on Mobile-VIT's performance across various devices [here](https://aihub.qualcomm.com/models/mobile_vit).
Explore all available models on [Qualcomm® AI Hub](https://aihub.qualcomm.com/)
## License
* The license for the original implementation of Mobile-VIT can be found
[here](https://github.com/pytorch/vision/blob/main/LICENSE).
* The license for the compiled assets for on-device deployment can be found [here](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/Qualcomm+AI+Hub+Proprietary+License.pdf)
## References
* [MOBILEVIT: LIGHT-WEIGHT, GENERAL-PURPOSE, AND MOBILE-FRIENDLY VISION TRANSFORMER](https://arxiv.org/abs/2110.02178)
* [Source Model Implementation](https://github.com/apple/ml-cvnets)
## Community
* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
* For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com).
|
AnerYubo/blockassist-bc-prowling_pudgy_gerbil_1756510233
|
AnerYubo
| 2025-08-29T23:30:36Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"prowling pudgy gerbil",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-29T23:30:33Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- prowling pudgy gerbil
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
seraphimzzzz/1111835
|
seraphimzzzz
| 2025-08-29T23:30:22Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-29T23:30:07Z |
[View on Civ Archive](https://civarchive.com/models/490863?modelVersionId=1206541)
|
qualcomm/Midas-V2
|
qualcomm
| 2025-08-29T23:30:13Z | 527 | 8 |
pytorch
|
[
"pytorch",
"tflite",
"android",
"depth-estimation",
"arxiv:1907.01341",
"license:other",
"region:us"
] |
depth-estimation
| 2024-05-29T00:46:00Z |
---
library_name: pytorch
license: other
tags:
- android
pipeline_tag: depth-estimation
---

# Midas-V2: Optimized for Mobile Deployment
## Deep Convolutional Neural Network model for depth estimation
Midas is designed for estimating depth at each point in an image.
This model is an implementation of Midas-V2 found [here](https://github.com/isl-org/MiDaS).
This repository provides scripts to run Midas-V2 on Qualcomm® devices.
More details on model performance across various devices, can be found
[here](https://aihub.qualcomm.com/models/midas).
### Model Details
- **Model Type:** Depth estimation
- **Model Stats:**
- Model checkpoint: MiDaS_small
- Input resolution: 256x256
- Number of parameters: 16.6M
- Model size (float): 63.2 MB
- Model size (w8a8): 16.9 MB
| Model | Precision | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Primary Compute Unit | Target Model |
|---|---|---|---|---|---|---|---|---|
| Midas-V2 | float | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | TFLITE | 13.081 ms | 0 - 43 MB | NPU | [Midas-V2.tflite](https://huggingface.co/qualcomm/Midas-V2/blob/main/Midas-V2.tflite) |
| Midas-V2 | float | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | QNN_DLC | 12.029 ms | 1 - 28 MB | NPU | [Midas-V2.dlc](https://huggingface.co/qualcomm/Midas-V2/blob/main/Midas-V2.dlc) |
| Midas-V2 | float | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | TFLITE | 4.974 ms | 0 - 59 MB | NPU | [Midas-V2.tflite](https://huggingface.co/qualcomm/Midas-V2/blob/main/Midas-V2.tflite) |
| Midas-V2 | float | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | QNN_DLC | 7.403 ms | 0 - 36 MB | NPU | [Midas-V2.dlc](https://huggingface.co/qualcomm/Midas-V2/blob/main/Midas-V2.dlc) |
| Midas-V2 | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | TFLITE | 3.27 ms | 0 - 300 MB | NPU | [Midas-V2.tflite](https://huggingface.co/qualcomm/Midas-V2/blob/main/Midas-V2.tflite) |
| Midas-V2 | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | QNN_DLC | 2.969 ms | 1 - 19 MB | NPU | [Midas-V2.dlc](https://huggingface.co/qualcomm/Midas-V2/blob/main/Midas-V2.dlc) |
| Midas-V2 | float | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | TFLITE | 4.646 ms | 0 - 43 MB | NPU | [Midas-V2.tflite](https://huggingface.co/qualcomm/Midas-V2/blob/main/Midas-V2.tflite) |
| Midas-V2 | float | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | QNN_DLC | 4.128 ms | 1 - 28 MB | NPU | [Midas-V2.dlc](https://huggingface.co/qualcomm/Midas-V2/blob/main/Midas-V2.dlc) |
| Midas-V2 | float | SA7255P ADP | Qualcomm® SA7255P | TFLITE | 13.081 ms | 0 - 43 MB | NPU | [Midas-V2.tflite](https://huggingface.co/qualcomm/Midas-V2/blob/main/Midas-V2.tflite) |
| Midas-V2 | float | SA7255P ADP | Qualcomm® SA7255P | QNN_DLC | 12.029 ms | 1 - 28 MB | NPU | [Midas-V2.dlc](https://huggingface.co/qualcomm/Midas-V2/blob/main/Midas-V2.dlc) |
| Midas-V2 | float | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | TFLITE | 3.289 ms | 0 - 309 MB | NPU | [Midas-V2.tflite](https://huggingface.co/qualcomm/Midas-V2/blob/main/Midas-V2.tflite) |
| Midas-V2 | float | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | QNN_DLC | 2.983 ms | 4 - 21 MB | NPU | [Midas-V2.dlc](https://huggingface.co/qualcomm/Midas-V2/blob/main/Midas-V2.dlc) |
| Midas-V2 | float | SA8295P ADP | Qualcomm® SA8295P | TFLITE | 5.836 ms | 0 - 33 MB | NPU | [Midas-V2.tflite](https://huggingface.co/qualcomm/Midas-V2/blob/main/Midas-V2.tflite) |
| Midas-V2 | float | SA8295P ADP | Qualcomm® SA8295P | QNN_DLC | 5.32 ms | 1 - 30 MB | NPU | [Midas-V2.dlc](https://huggingface.co/qualcomm/Midas-V2/blob/main/Midas-V2.dlc) |
| Midas-V2 | float | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | TFLITE | 3.273 ms | 0 - 312 MB | NPU | [Midas-V2.tflite](https://huggingface.co/qualcomm/Midas-V2/blob/main/Midas-V2.tflite) |
| Midas-V2 | float | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | QNN_DLC | 2.974 ms | 1 - 16 MB | NPU | [Midas-V2.dlc](https://huggingface.co/qualcomm/Midas-V2/blob/main/Midas-V2.dlc) |
| Midas-V2 | float | SA8775P ADP | Qualcomm® SA8775P | TFLITE | 4.646 ms | 0 - 43 MB | NPU | [Midas-V2.tflite](https://huggingface.co/qualcomm/Midas-V2/blob/main/Midas-V2.tflite) |
| Midas-V2 | float | SA8775P ADP | Qualcomm® SA8775P | QNN_DLC | 4.128 ms | 1 - 28 MB | NPU | [Midas-V2.dlc](https://huggingface.co/qualcomm/Midas-V2/blob/main/Midas-V2.dlc) |
| Midas-V2 | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | TFLITE | 3.286 ms | 0 - 302 MB | NPU | [Midas-V2.tflite](https://huggingface.co/qualcomm/Midas-V2/blob/main/Midas-V2.tflite) |
| Midas-V2 | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | QNN_DLC | 2.976 ms | 1 - 24 MB | NPU | [Midas-V2.dlc](https://huggingface.co/qualcomm/Midas-V2/blob/main/Midas-V2.dlc) |
| Midas-V2 | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | ONNX | 2.994 ms | 0 - 75 MB | NPU | [Midas-V2.onnx.zip](https://huggingface.co/qualcomm/Midas-V2/blob/main/Midas-V2.onnx.zip) |
| Midas-V2 | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | TFLITE | 2.322 ms | 0 - 70 MB | NPU | [Midas-V2.tflite](https://huggingface.co/qualcomm/Midas-V2/blob/main/Midas-V2.tflite) |
| Midas-V2 | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | QNN_DLC | 2.043 ms | 1 - 41 MB | NPU | [Midas-V2.dlc](https://huggingface.co/qualcomm/Midas-V2/blob/main/Midas-V2.dlc) |
| Midas-V2 | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | ONNX | 2.112 ms | 0 - 41 MB | NPU | [Midas-V2.onnx.zip](https://huggingface.co/qualcomm/Midas-V2/blob/main/Midas-V2.onnx.zip) |
| Midas-V2 | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | TFLITE | 2.134 ms | 0 - 48 MB | NPU | [Midas-V2.tflite](https://huggingface.co/qualcomm/Midas-V2/blob/main/Midas-V2.tflite) |
| Midas-V2 | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | QNN_DLC | 1.813 ms | 1 - 34 MB | NPU | [Midas-V2.dlc](https://huggingface.co/qualcomm/Midas-V2/blob/main/Midas-V2.dlc) |
| Midas-V2 | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | ONNX | 1.926 ms | 0 - 31 MB | NPU | [Midas-V2.onnx.zip](https://huggingface.co/qualcomm/Midas-V2/blob/main/Midas-V2.onnx.zip) |
| Midas-V2 | float | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN_DLC | 3.165 ms | 192 - 192 MB | NPU | [Midas-V2.dlc](https://huggingface.co/qualcomm/Midas-V2/blob/main/Midas-V2.dlc) |
| Midas-V2 | float | Snapdragon X Elite CRD | Snapdragon® X Elite | ONNX | 3.146 ms | 35 - 35 MB | NPU | [Midas-V2.onnx.zip](https://huggingface.co/qualcomm/Midas-V2/blob/main/Midas-V2.onnx.zip) |
| Midas-V2 | w8a8 | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | TFLITE | 2.479 ms | 0 - 31 MB | NPU | [Midas-V2.tflite](https://huggingface.co/qualcomm/Midas-V2/blob/main/Midas-V2_w8a8.tflite) |
| Midas-V2 | w8a8 | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | QNN_DLC | 2.88 ms | 0 - 32 MB | NPU | [Midas-V2.dlc](https://huggingface.co/qualcomm/Midas-V2/blob/main/Midas-V2_w8a8.dlc) |
| Midas-V2 | w8a8 | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | TFLITE | 1.395 ms | 0 - 52 MB | NPU | [Midas-V2.tflite](https://huggingface.co/qualcomm/Midas-V2/blob/main/Midas-V2_w8a8.tflite) |
| Midas-V2 | w8a8 | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | QNN_DLC | 1.83 ms | 0 - 52 MB | NPU | [Midas-V2.dlc](https://huggingface.co/qualcomm/Midas-V2/blob/main/Midas-V2_w8a8.dlc) |
| Midas-V2 | w8a8 | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | TFLITE | 1.07 ms | 0 - 150 MB | NPU | [Midas-V2.tflite](https://huggingface.co/qualcomm/Midas-V2/blob/main/Midas-V2_w8a8.tflite) |
| Midas-V2 | w8a8 | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | QNN_DLC | 1.299 ms | 0 - 136 MB | NPU | [Midas-V2.dlc](https://huggingface.co/qualcomm/Midas-V2/blob/main/Midas-V2_w8a8.dlc) |
| Midas-V2 | w8a8 | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | TFLITE | 1.363 ms | 0 - 31 MB | NPU | [Midas-V2.tflite](https://huggingface.co/qualcomm/Midas-V2/blob/main/Midas-V2_w8a8.tflite) |
| Midas-V2 | w8a8 | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | QNN_DLC | 1.577 ms | 0 - 32 MB | NPU | [Midas-V2.dlc](https://huggingface.co/qualcomm/Midas-V2/blob/main/Midas-V2_w8a8.dlc) |
| Midas-V2 | w8a8 | RB3 Gen 2 (Proxy) | Qualcomm® QCS6490 (Proxy) | TFLITE | 3.791 ms | 0 - 48 MB | NPU | [Midas-V2.tflite](https://huggingface.co/qualcomm/Midas-V2/blob/main/Midas-V2_w8a8.tflite) |
| Midas-V2 | w8a8 | RB3 Gen 2 (Proxy) | Qualcomm® QCS6490 (Proxy) | QNN_DLC | 5.707 ms | 0 - 49 MB | NPU | [Midas-V2.dlc](https://huggingface.co/qualcomm/Midas-V2/blob/main/Midas-V2_w8a8.dlc) |
| Midas-V2 | w8a8 | RB5 (Proxy) | Qualcomm® QCS8250 (Proxy) | TFLITE | 16.303 ms | 0 - 3 MB | NPU | [Midas-V2.tflite](https://huggingface.co/qualcomm/Midas-V2/blob/main/Midas-V2_w8a8.tflite) |
| Midas-V2 | w8a8 | SA7255P ADP | Qualcomm® SA7255P | TFLITE | 2.479 ms | 0 - 31 MB | NPU | [Midas-V2.tflite](https://huggingface.co/qualcomm/Midas-V2/blob/main/Midas-V2_w8a8.tflite) |
| Midas-V2 | w8a8 | SA7255P ADP | Qualcomm® SA7255P | QNN_DLC | 2.88 ms | 0 - 32 MB | NPU | [Midas-V2.dlc](https://huggingface.co/qualcomm/Midas-V2/blob/main/Midas-V2_w8a8.dlc) |
| Midas-V2 | w8a8 | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | TFLITE | 1.07 ms | 0 - 148 MB | NPU | [Midas-V2.tflite](https://huggingface.co/qualcomm/Midas-V2/blob/main/Midas-V2_w8a8.tflite) |
| Midas-V2 | w8a8 | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | QNN_DLC | 1.283 ms | 0 - 136 MB | NPU | [Midas-V2.dlc](https://huggingface.co/qualcomm/Midas-V2/blob/main/Midas-V2_w8a8.dlc) |
| Midas-V2 | w8a8 | SA8295P ADP | Qualcomm® SA8295P | TFLITE | 1.908 ms | 0 - 35 MB | NPU | [Midas-V2.tflite](https://huggingface.co/qualcomm/Midas-V2/blob/main/Midas-V2_w8a8.tflite) |
| Midas-V2 | w8a8 | SA8295P ADP | Qualcomm® SA8295P | QNN_DLC | 2.211 ms | 0 - 39 MB | NPU | [Midas-V2.dlc](https://huggingface.co/qualcomm/Midas-V2/blob/main/Midas-V2_w8a8.dlc) |
| Midas-V2 | w8a8 | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | TFLITE | 1.07 ms | 0 - 148 MB | NPU | [Midas-V2.tflite](https://huggingface.co/qualcomm/Midas-V2/blob/main/Midas-V2_w8a8.tflite) |
| Midas-V2 | w8a8 | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | QNN_DLC | 1.295 ms | 0 - 124 MB | NPU | [Midas-V2.dlc](https://huggingface.co/qualcomm/Midas-V2/blob/main/Midas-V2_w8a8.dlc) |
| Midas-V2 | w8a8 | SA8775P ADP | Qualcomm® SA8775P | TFLITE | 1.363 ms | 0 - 31 MB | NPU | [Midas-V2.tflite](https://huggingface.co/qualcomm/Midas-V2/blob/main/Midas-V2_w8a8.tflite) |
| Midas-V2 | w8a8 | SA8775P ADP | Qualcomm® SA8775P | QNN_DLC | 1.577 ms | 0 - 32 MB | NPU | [Midas-V2.dlc](https://huggingface.co/qualcomm/Midas-V2/blob/main/Midas-V2_w8a8.dlc) |
| Midas-V2 | w8a8 | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | TFLITE | 1.058 ms | 0 - 149 MB | NPU | [Midas-V2.tflite](https://huggingface.co/qualcomm/Midas-V2/blob/main/Midas-V2_w8a8.tflite) |
| Midas-V2 | w8a8 | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | QNN_DLC | 1.285 ms | 0 - 135 MB | NPU | [Midas-V2.dlc](https://huggingface.co/qualcomm/Midas-V2/blob/main/Midas-V2_w8a8.dlc) |
| Midas-V2 | w8a8 | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | TFLITE | 0.754 ms | 0 - 58 MB | NPU | [Midas-V2.tflite](https://huggingface.co/qualcomm/Midas-V2/blob/main/Midas-V2_w8a8.tflite) |
| Midas-V2 | w8a8 | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | QNN_DLC | 0.894 ms | 0 - 63 MB | NPU | [Midas-V2.dlc](https://huggingface.co/qualcomm/Midas-V2/blob/main/Midas-V2_w8a8.dlc) |
| Midas-V2 | w8a8 | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | TFLITE | 0.671 ms | 0 - 33 MB | NPU | [Midas-V2.tflite](https://huggingface.co/qualcomm/Midas-V2/blob/main/Midas-V2_w8a8.tflite) |
| Midas-V2 | w8a8 | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | QNN_DLC | 0.777 ms | 0 - 39 MB | NPU | [Midas-V2.dlc](https://huggingface.co/qualcomm/Midas-V2/blob/main/Midas-V2_w8a8.dlc) |
| Midas-V2 | w8a8 | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN_DLC | 1.437 ms | 141 - 141 MB | NPU | [Midas-V2.dlc](https://huggingface.co/qualcomm/Midas-V2/blob/main/Midas-V2_w8a8.dlc) |
## Installation
Install the package via pip:
```bash
pip install "qai-hub-models[midas]"
```
## Configure Qualcomm® AI Hub to run this model on a cloud-hosted device
Sign-in to [Qualcomm® AI Hub](https://app.aihub.qualcomm.com/) with your
Qualcomm® ID. Once signed in navigate to `Account -> Settings -> API Token`.
With this API token, you can configure your client to run models on the cloud
hosted devices.
```bash
qai-hub configure --api_token API_TOKEN
```
Navigate to [docs](https://app.aihub.qualcomm.com/docs/) for more information.
## Demo off target
The package contains a simple end-to-end demo that downloads pre-trained
weights and runs this model on a sample input.
```bash
python -m qai_hub_models.models.midas.demo
```
The above demo runs a reference implementation of pre-processing, model
inference, and post-processing.
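A typical post-processing step for a depth estimator like this one is min-max normalizing the raw prediction into an 8-bit image for visual inspection. The sketch below uses a synthetic array and is not the actual demo pipeline; the function name is illustrative:

```python
import numpy as np

def depth_to_uint8(depth: np.ndarray) -> np.ndarray:
    """Min-max normalize a raw depth prediction to a 0-255 grayscale image."""
    d = depth.astype(np.float64)
    span = d.max() - d.min()
    if span == 0:
        # Constant map: nothing to normalize, return all zeros.
        return np.zeros(d.shape, dtype=np.uint8)
    return ((d - d.min()) / span * 255.0).round().astype(np.uint8)

# Synthetic stand-in for a (256, 256) model output.
pred = np.linspace(0.0, 10.0, 256 * 256, dtype=np.float32).reshape(256, 256)
img = depth_to_uint8(pred)
print(img.dtype, img.min(), img.max())  # uint8 0 255
```

The resulting `uint8` array can be saved or displayed with any image library to eyeball the depth map.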
**NOTE**: If you are running in a Jupyter Notebook or Google Colab-like
environment, please add the following to your cell (instead of the above).
```
%run -m qai_hub_models.models.midas.demo
```
### Run model on a cloud-hosted device
In addition to the demo, you can also run the model on a cloud-hosted Qualcomm®
device. This script does the following:
* Performance check on-device on a cloud-hosted device
* Downloads compiled assets that can be deployed on-device for Android.
* Accuracy check between PyTorch and on-device outputs.
```bash
python -m qai_hub_models.models.midas.export
```
## How does this work?
This [export script](https://aihub.qualcomm.com/models/midas/qai_hub_models/models/Midas-V2/export.py)
leverages [Qualcomm® AI Hub](https://aihub.qualcomm.com/) to optimize, validate, and deploy this model
on-device. Let's go through each step below in detail:
Step 1: **Compile model for on-device deployment**
To compile a PyTorch model for on-device deployment, we first trace the model
in memory using `torch.jit.trace` and then call the `submit_compile_job` API.
```python
import torch
import qai_hub as hub
from qai_hub_models.models.midas import Model
# Load the model
torch_model = Model.from_pretrained()
# Device
device = hub.Device("Samsung Galaxy S24")
# Trace model
input_shape = torch_model.get_input_spec()
sample_inputs = torch_model.sample_inputs()
pt_model = torch.jit.trace(torch_model, [torch.tensor(data[0]) for _, data in sample_inputs.items()])
# Compile model on a specific device
compile_job = hub.submit_compile_job(
model=pt_model,
device=device,
input_specs=torch_model.get_input_spec(),
)
# Get target model to run on-device
target_model = compile_job.get_target_model()
```
Step 2: **Performance profiling on cloud-hosted device**
After compiling the model in Step 1, it can be profiled on-device using the
`target_model`. Note that this script runs the model on a device automatically
provisioned in the cloud. Once the job is submitted, you can navigate to the
provided job URL to view a variety of on-device performance metrics.
```python
profile_job = hub.submit_profile_job(
model=target_model,
device=device,
)
```
Step 3: **Verify on-device accuracy**
To verify the accuracy of the model on-device, you can run inference
on sample input data on the same cloud-hosted device.
```python
input_data = torch_model.sample_inputs()
inference_job = hub.submit_inference_job(
model=target_model,
device=device,
inputs=input_data,
)
on_device_output = inference_job.download_output_data()
```
With the output of the model, you can compute metrics like PSNR and relative error, or
spot-check the output against the expected output.
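A minimal sketch of such a check, assuming both outputs are NumPy arrays of the same shape. The arrays below are synthetic stand-ins for the PyTorch output and the downloaded on-device output:

```python
import numpy as np

def psnr(reference: np.ndarray, test: np.ndarray) -> float:
    """Peak signal-to-noise ratio (dB) between a reference and a test array."""
    mse = float(np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2))
    if mse == 0.0:
        return float("inf")
    peak = float(np.abs(reference).max())
    return 10.0 * np.log10(peak * peak / mse)

def relative_error(reference: np.ndarray, test: np.ndarray) -> float:
    """Relative error in the 2-norm: ||reference - test|| / ||reference||."""
    return float(np.linalg.norm(reference - test) / np.linalg.norm(reference))

# Stand-ins for the two outputs being compared (shapes must match).
torch_out = np.random.default_rng(0).standard_normal((1, 256, 256)).astype(np.float32)
device_out = torch_out + np.float32(1e-3)  # pretend on-device result
print(f"PSNR: {psnr(torch_out, device_out):.1f} dB")
print(f"relative error: {relative_error(torch_out, device_out):.2e}")
```

In practice you would pass the tensors from `torch_model(...)` and `inference_job.download_output_data()` instead of the synthetic arrays.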
**Note**: This on-device profiling and inference requires access to Qualcomm®
AI Hub. [Sign up for access](https://myaccount.qualcomm.com/signup).
## Run demo on a cloud-hosted device
You can also run the demo on-device.
```bash
python -m qai_hub_models.models.midas.demo --eval-mode on-device
```
**NOTE**: If you are running in a Jupyter Notebook or Google Colab-like
environment, please add the following to your cell (instead of the above).
```
%run -m qai_hub_models.models.midas.demo -- --eval-mode on-device
```
## Deploying compiled model to Android
The models can be deployed using multiple runtimes:
- TensorFlow Lite (`.tflite` export): [This
tutorial](https://www.tensorflow.org/lite/android/quickstart) provides a
guide to deploy the .tflite model in an Android application.
- QNN (`.so` export): This [sample
app](https://docs.qualcomm.com/bundle/publicresource/topics/80-63442-50/sample_app.html)
provides instructions on how to use the `.so` shared library in an Android application.
## View on Qualcomm® AI Hub
Get more details on Midas-V2's performance across various devices [here](https://aihub.qualcomm.com/models/midas).
Explore all available models on [Qualcomm® AI Hub](https://aihub.qualcomm.com/)
## License
* The license for the original implementation of Midas-V2 can be found
[here](https://github.com/isl-org/MiDaS/blob/master/LICENSE).
* The license for the compiled assets for on-device deployment can be found [here](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/Qualcomm+AI+Hub+Proprietary+License.pdf)
## References
* [Towards Robust Monocular Depth Estimation: Mixing Datasets for Zero-shot Cross-dataset Transfer](https://arxiv.org/abs/1907.01341v3)
* [Source Model Implementation](https://github.com/isl-org/MiDaS)
## Community
* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
* For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com).
|
sivakrishna123/my-jarvis-adapters
|
sivakrishna123
| 2025-08-29T23:30:08Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-29T23:29:58Z |
---
base_model: unsloth/meta-llama-3.1-8b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** sivakrishna123
- **License:** apache-2.0
- **Finetuned from model :** unsloth/meta-llama-3.1-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Priyam05/ppo-CartPole-v1
|
Priyam05
| 2025-08-29T23:30:01Z | 0 | 0 | null |
[
"CartPole-v1",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-08-29T21:12:36Z |
---
tags:
- CartPole-v1
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 29.30 +/- 14.59
name: mean_reward
verified: false
---
# PPO Agent Playing CartPole-v1
This is a trained model of a PPO agent playing CartPole-v1.
# Hyperparameters
```python
{'exp_name': 'ppo-experiment',
 'repo_id': 'Priyam05/ppo-CartPole-v1',
 'gym_id': 'CartPole-v1',
 'learning_rate': 0.00025,
 'min_learning_rate_ratio': 0.1,
 'seed': 1,
 'total_timesteps': 25000,
 'torch_not_deterministic': False,
 'no_cuda': False,
 'capture_video': False,
 'hidden_size': 64,
 'num_hidden_layers': 1,
 'activation': 'tanh',
 'num_checkpoints': 4,
 'num_envs': 4,
 'num_steps': 128,
 'no_lr_annealing': False,
 'no_gae': False,
 'gamma': 0.99,
 'gae_lambda': 0.95,
 'num_minibatches': 4,
 'num_update_epochs': 4,
 'no_advantage_normalization': False,
 'clip_coef': 0.2,
 'no_value_loss_clip': False,
 'ent_coef': 0.01,
 'vf_coef': 0.5,
 'max_grad_norm': 0.5,
 'target_kl': None,
 'batch_size': 512,
 'minibatch_size': 128}
```
|
crystalline7/536448
|
crystalline7
| 2025-08-29T23:29:57Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-29T23:29:51Z |
[View on Civ Archive](https://civarchive.com/models/552705?modelVersionId=615240)
|