---
license: mit
base_model: deepseek-ai/DeepSeek-R1-Distill-Qwen-7B
base_model_relation: quantized
---
# DeepSeek-R1-Distill-Qwen-7B-int4-cw-ov
* Model creator: [DeepSeek](https://huggingface.co/deepseek-ai)
* Original model: [DeepSeek-R1-Distill-Qwen-7B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-7B)
## Description
This is the [DeepSeek-R1-Distill-Qwen-7B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-7B) model converted to the [OpenVINO™ IR](https://docs.openvino.ai/2024/documentation/openvino-ir-format.html) (Intermediate Representation) format with weights compressed to INT4 by [NNCF](https://github.com/openvinotoolkit/nncf).
> [!NOTE]
> The model is optimized for inference on NPU using these [instructions](https://docs.openvino.ai/2025/openvino-workflow-generative/inference-with-genai/inference-with-genai-on-npu.html#export-an-llm-model-via-hugging-face-optimum-intel).
## Quantization Parameters
Weight compression was performed using `nncf.compress_weights` with the following parameters:
* mode: **INT4_SYM**
* ratio: **1.0**
For more information on quantization, check the [OpenVINO model optimization guide](https://docs.openvino.ai/2024/openvino-workflow/model-optimization-guide/weight-compression.html).
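
The exact compression pipeline is not included in this repository, but an equivalent setup can be sketched with the Optimum Intel API. In the sketch below, the `group_size=-1` (channel-wise) setting and the output directory name are assumptions inferred from the model name and the NPU export instructions linked above, not a verified recipe:
```
from optimum.intel import OVModelForCausalLM, OVWeightQuantizationConfig

# Channel-wise (group_size=-1) symmetric INT4 compression applied to all
# weights (ratio=1.0); mirrors the parameters listed above.
quant_config = OVWeightQuantizationConfig(bits=4, sym=True, ratio=1.0, group_size=-1)

# Export the original model to OpenVINO IR with compressed weights
model = OVModelForCausalLM.from_pretrained(
    "deepseek-ai/DeepSeek-R1-Distill-Qwen-7B",
    export=True,
    quantization_config=quant_config,
)
model.save_pretrained("DeepSeek-R1-Distill-Qwen-7B-int4-cw-ov")
```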
## Compatibility
The provided OpenVINO™ IR model is compatible with:
* OpenVINO version 2025.2.0 and higher
* Intel® NPU Driver - Windows* 32.0.100.4023 for Intel® Core™ Ultra processors and higher
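
To check whether your environment meets these requirements, you can query the installed runtime version and the visible devices; a small sketch (reported device names may vary by driver):
```
import openvino as ov

# Print the installed OpenVINO runtime version (should be 2025.2.0 or newer)
print(ov.get_version())

# List devices visible to OpenVINO; "NPU" should appear if the NPU driver is installed
print(ov.Core().available_devices)
```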
## Running Model Inference with [OpenVINO GenAI](https://github.com/openvinotoolkit/openvino.genai)
1. Install the packages required for using OpenVINO GenAI:
```
pip install -U --pre openvino openvino-tokenizers openvino-genai --extra-index-url https://storage.openvinotoolkit.org/simple/wheels/pre-release
pip install huggingface_hub
```
2. Download the model from the Hugging Face Hub:
```
import huggingface_hub as hf_hub
model_id = "OpenVINO/DeepSeek-R1-Distill-Qwen-7B-int4-cw-ov"
model_path = "DeepSeek-R1-Distill-Qwen-7B-int4-cw-ov"
hf_hub.snapshot_download(model_id, local_dir=model_path)
```
3. Run model inference:
```
import openvino_genai as ov_genai
device = "NPU"
pipe = ov_genai.LLMPipeline(model_path, device)
print(pipe.generate("What is OpenVINO?", max_length=200))
```
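For interactive use, `LLMPipeline` also supports chat sessions and token streaming. Below is a minimal sketch; the prompt and token budget are illustrative, and in recent GenAI versions the streamer callback may return a `StreamingStatus` value instead of a bool:
```
import openvino_genai as ov_genai

pipe = ov_genai.LLMPipeline("DeepSeek-R1-Distill-Qwen-7B-int4-cw-ov", "NPU")

# Print tokens to stdout as they are generated; returning False continues generation
def streamer(subword):
    print(subword, end="", flush=True)
    return False

# A chat session keeps the conversation history and applies the chat template
pipe.start_chat()
pipe.generate("What is OpenVINO?", max_new_tokens=200, streamer=streamer)
pipe.finish_chat()
```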
More GenAI usage examples can be found in the OpenVINO GenAI library [docs](https://github.com/openvinotoolkit/openvino.genai/blob/master/src/README.md) and [samples](https://github.com/openvinotoolkit/openvino.genai?tab=readme-ov-file#openvino-genai-samples).
## Limitations
Check the [original model card](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-7B) for limitations.
## Legal information
The original model is distributed under the [MIT](https://huggingface.co/datasets/choosealicense/licenses/blob/main/markdown/mit.md) license. More details can be found in the [original model card](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-7B).
## Disclaimer
Intel is committed to respecting human rights and avoiding causing or contributing to adverse impacts on human rights. See [Intel’s Global Human Rights Principles](https://www.intel.com/content/dam/www/central-libraries/us/en/documents/policy-human-rights.pdf). Intel’s products and software are intended only to be used in applications that do not cause or contribute to adverse impacts on human rights.