---
license: apache-2.0
library_name: transformers
language:
- en
tags:
- chat
- conversational
base_model:
- Qwen/Qwen2.5-32B
- maldv/Qwentile2.5-32B-Instruct
- NovaSky-AI/Sky-T1-32B-Preview
- Sao10K/32B-Qwen2.5-Kunou-v1
- 6cf/QwQ-32B-Preview-IdeaWhiz-v1
---

# Qwenstein 2.5 32B Instruct

Qwenstein 2.5 32B Instruct is a *normalized denoised Fourier interpolation* of the following models:

```yaml
output_base_model: "Qwen/Qwen2.5-32B"
finetune_merge:
  - { "model": "maldv/Qwentile2.5-32B-Instruct", "base": "Qwen/Qwen2.5-32B", "alpha": 1.0, "is_input": true, "is_output": true }
  - { "model": "NovaSky-AI/Sky-T1-32B-Preview", "base": "Qwen/Qwen2.5-32B", "alpha": 0.7 }
  - { "model": "Sao10K/32B-Qwen2.5-Kunou-v1", "base": "Qwen/Qwen2.5-32B", "alpha": 0.6 }
  - { "model": "6cf/QwQ-32B-Preview-IdeaWhiz-v1", "base": "Qwen/Qwen2.5-32B", "alpha": 0.7 }
```

In other words, all of these models get warped and interpolated in signal space, and then jammed back on top of the base model.
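The idea can be sketched per weight tensor: take each finetune's delta from the base, move it into frequency space, normalize and alpha-weight the spectra, drop low-magnitude components (the "denoising"), and transform the result back onto the base weights. This is only a minimal illustrative sketch with NumPy on 1-D arrays; the function name, the normalization scheme, and the denoising threshold are all assumptions, not the actual merge implementation.

```python
import numpy as np

def fourier_merge(base, finetunes, alphas, noise_floor=0.1):
    """Hypothetical sketch of a normalized denoised Fourier interpolation.

    base      -- base model weight tensor (here a 1-D array)
    finetunes -- list of finetuned weight tensors, same shape as base
    alphas    -- per-model blend weights
    """
    acc = np.zeros(base.shape, dtype=complex)
    for ft, alpha in zip(finetunes, alphas):
        # Delta from the base, viewed in signal (frequency) space.
        spectrum = np.fft.fft(ft - base)
        # Normalize each spectrum so no single model dominates,
        # then accumulate with its alpha weight.
        norm = np.linalg.norm(spectrum)
        if norm > 0:
            spectrum = spectrum / norm
        acc += alpha * spectrum
    # "Denoise": zero out frequency components below a fraction
    # of the peak magnitude (threshold choice is an assumption).
    mag = np.abs(acc)
    acc[mag < noise_floor * mag.max()] = 0
    # Back to weight space, layered on top of the base model.
    return base + np.real(np.fft.ifft(acc))
```

For example, merging a single finetune at `alpha=1.0` nudges the base weights toward that finetune's (normalized) delta rather than copying it outright.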

### What is this?

This is my second attempt to make Qwentile more intelligent.

## Citation

If you find our work helpful, feel free to cite us.

```
@misc{qwenstein2.5-32b-instruct,
    title = {Qwenstein 2.5 32B Instruct},
    url = {https://huggingface.co/maldv/Qwenstein2.5-32B-Instruct},
    author = {Praxis Maldevide},
    month = {January},
    year = {2025}
}
```