---
license: mit
pipeline_tag: text-generation
tags:
- cortex.cpp
---

## Overview

**DeepSeek** developed and released the [DeepSeek R1 Distill Qwen 32B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-32B) model, a 32B-parameter Qwen-based model distilled from DeepSeek R1. It is the largest Qwen-based model in the DeepSeek R1 Distill family and offers strong performance in text generation, dialogue, and reasoning tasks.

The model is intended for large-scale use cases such as conversational AI, research, enterprise solutions, and knowledge systems, where reasoning quality and deployment efficiency both matter.

## Variants

| No | Variant | Cortex CLI command |
| --- | --- | --- |
| 1 | [deepseek-r1-distill-qwen-32b:32b](https://huggingface.co/cortexso/deepseek-r1-distill-qwen-32b/tree/32b) | `cortex run deepseek-r1-distill-qwen-32b:32b` |
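
If you prefer to fetch a variant's GGUF files directly from this repository instead of going through the Cortex CLI, the following is a minimal sketch using the Hugging Face CLI. It assumes `huggingface_hub` is installed; the `32b` revision corresponds to the branch linked in the table above, and the local directory name is just an example.

```bash
# Install the Hugging Face CLI if it is not already available
pip install -U "huggingface_hub[cli]"

# Download the 32b variant (branch "32b" of this repo) into a local folder
huggingface-cli download cortexso/deepseek-r1-distill-qwen-32b \
  --revision 32b \
  --local-dir ./deepseek-r1-distill-qwen-32b-32b
```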

## Use it with Jan (UI)

1. Install **Jan** by following the [Quickstart](https://jan.ai/docs/quickstart)
2. In the Jan Model Hub, search for the model handle below and download it:
    ```bash
    cortexso/deepseek-r1-distill-qwen-32b
    ```

## Use it with Cortex (CLI)

1. Install **Cortex** by following the [Quickstart](https://cortex.jan.ai/docs/quickstart)
2. Run the model with the following command (once it is running, you can also query it through Cortex's local API server, as sketched below):
    ```bash
    cortex run deepseek-r1-distill-qwen-32b
    ```
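
After `cortex run` has loaded the model, it can also be queried over HTTP. The sketch below assumes Cortex's OpenAI-compatible server is listening on its default local port (39281 in recent cortex.cpp releases); adjust the host, port, and model name to match your installation.

```bash
# Query the running model through Cortex's OpenAI-compatible chat completions endpoint.
# Port 39281 is an assumption based on recent cortex.cpp defaults; adjust if yours differs.
curl http://127.0.0.1:39281/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "deepseek-r1-distill-qwen-32b",
    "messages": [
      {"role": "user", "content": "Explain the difference between BFS and DFS in two sentences."}
    ],
    "temperature": 0.6,
    "max_tokens": 256
  }'
```

Because the endpoint follows the OpenAI chat completions schema, existing OpenAI client libraries can usually be pointed at this base URL instead of raw `curl`.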

## Credits

- **Author:** DeepSeek
- **Converter:** [Homebrew](https://www.homebrew.ltd/)
- **Original License:** [License](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-32B#7-license)
- **Paper:** [DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning](https://arxiv.org/html/2501.12948v1)