---

library_name: transformers
tags:
  - custom_generate
---

## Description

Implementation of [Decoding by Contrasting Layers (DoLa)](https://huggingface.co/papers/2309.03883),
a contrastive decoding strategy for improving factuality and reducing hallucinations in language model outputs.

DoLa works by **contrasting the logits** from the final layer with those from earlier layers of the model,
amplifying factual knowledge localized in specific layers and suppressing spurious information.
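
For intuition, here is a minimal, illustrative sketch of the contrast at a single decoding step (a hypothetical `dola_contrast` helper, not the exact code shipped in this repo): the premature layer is chosen per step by maximal Jensen-Shannon divergence from the final layer, an adaptive plausibility constraint masks implausible tokens, and the survivors are ranked by their log-probability difference.

```python
import math

import torch
import torch.nn.functional as F


def dola_contrast(final_logits, early_logits, alpha=0.1):
    """Illustrative DoLa contrast for a single decoding step.

    final_logits: (batch, vocab) logits from the final (mature) layer.
    early_logits: (layers, batch, vocab) logits from the candidate
        premature layers selected via `dola_layers`.
    """
    log_p_final = F.log_softmax(final_logits, dim=-1)  # (B, V)
    log_p_early = F.log_softmax(early_logits, dim=-1)  # (L, B, V)

    # Pick, per batch element, the premature layer whose distribution is
    # farthest (in Jensen-Shannon divergence) from the final layer.
    log_m = (0.5 * (log_p_final.exp() + log_p_early.exp())).log()
    kl_final = F.kl_div(log_m, log_p_final.expand_as(log_m),
                        log_target=True, reduction="none").sum(-1)   # (L, B)
    kl_early = F.kl_div(log_m, log_p_early,
                        log_target=True, reduction="none").sum(-1)   # (L, B)
    best = (0.5 * (kl_final + kl_early)).argmax(dim=0)               # (B,)
    picked = log_p_early[best, torch.arange(final_logits.shape[0])]  # (B, V)

    # Adaptive plausibility constraint: mask tokens whose final-layer
    # probability is below `alpha` times that of the most likely token.
    cutoff = log_p_final.max(dim=-1, keepdim=True).values + math.log(alpha)
    scores = log_p_final - picked  # amplify what the later layers add
    return scores.masked_fill(log_p_final < cutoff, -float("inf"))
```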

This can be useful for:

* **Short-answer tasks** (e.g., TruthfulQA) — using higher layers (`dola_layers="high"`)
* **Long-answer reasoning tasks** (e.g., GSM8K, StrategyQA, FACTOR, VicunaQA) — using lower layers (`dola_layers="low"`)

DoLa is **not recommended for smaller models** such as GPT-2, as the improvement may be negligible.

This implementation matches the `DoLa` functionality present in `transformers<4.53.0`.

---

## Base model

* [Qwen/Qwen3-0.6B](https://huggingface.co/Qwen/Qwen3-0.6B)

---

## Model compatibility

* Decoder-only transformer models

---

## Additional Arguments

* **`dola_layers`** (*str* or *List\[int]*, optional):
  Which earlier layers to contrast with the final layer. Can be:

  * `"low"` — lower half of layers (recommended for long answers)
  * `"high"` — upper half of layers (recommended for short answers)
  * List of integer indices (e.g., `[18, 20]`)

  **Note:**

  * Layer 0 is the word embedding; layer 1 is the first transformer block.
  * If the model has tied word embeddings, layer 0 is skipped and counting starts at layer 2.
  * Typical defaults, where `N` is the total number of layers (see the sketch after this list):

    | # Layers | `"low"` range       | `"high"` range        |
    | -------- | ------------------- | --------------------- |
    | > 40     | `range(0, 20, 2)`   | `range(N - 20, N, 2)` |
    | ≤ 40     | `range(0, N//2, 2)` | `range(N//2, N, 2)`   |

* **`repetition_penalty`** (*float*, optional, defaults to `None`):
  Helps reduce repetition. A value of `1.2` is recommended.
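
The defaults above can be expressed as a hypothetical helper (`candidate_premature_layers` is illustrative, not part of this repo's API; the tied-embeddings offset described in the note is omitted for brevity):

```python
def candidate_premature_layers(num_layers: int, dola_layers) -> list[int]:
    """Hypothetical sketch of how `dola_layers` resolves to layer indices."""
    if isinstance(dola_layers, list):  # explicit indices, e.g. [18, 20]
        return dola_layers
    if num_layers > 40:                # deep models: contrast at most 20 layers
        low, high = range(0, 20, 2), range(num_layers - 20, num_layers, 2)
    else:                              # shallower models: split into halves
        low = range(0, num_layers // 2, 2)
        high = range(num_layers // 2, num_layers, 2)
    return list(low if dola_layers == "low" else high)


candidate_premature_layers(28, "high")  # [14, 16, 18, 20, 22, 24, 26]
```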

---

## Output type changes

* The `generate` method returns the same output type as default `transformers` generation;
  logits are simply post-processed with DoLa contrastive scoring before each token is selected.
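
For example, the usual inspection flags still apply. A minimal sketch, assuming the custom decoding loop honors them as the pre-4.53 built-in DoLa did:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, infer_device

device = infer_device()
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-0.6B")
model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen3-0.6B", torch_dtype=torch.float16
).to(device)
inputs = tokenizer("What is the highest peak in the world?", return_tensors="pt").to(device)

outputs = model.generate(
    **inputs,
    max_new_tokens=50,
    do_sample=False,
    custom_generate="transformers-community/dola",
    trust_remote_code=True,
    dola_layers="high",
    return_dict_in_generate=True,  # return a ModelOutput instead of a bare tensor
    output_scores=True,            # per-step scores after DoLa post-processing
)
print(outputs.sequences.shape)  # (batch, prompt_len + new_tokens)
print(len(outputs.scores))      # one (batch, vocab_size) tensor per generated token
```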

---

## Example usage

### Using higher layers (short-answer tasks)

```python
# Requires `transformers>=4.56.0`; in earlier versions, DoLa shipped with the library itself
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, infer_device

device = infer_device()

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-0.6B")
model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen3-0.6B", torch_dtype=torch.float16
).to(device)

inputs = tokenizer("What is the highest peak in the world?", return_tensors="pt").to(device)

outputs = model.generate(
    **inputs,
    max_new_tokens=50,
    do_sample=False,
    custom_generate="transformers-community/dola",
    trust_remote_code=True,
    dola_layers="high"
)

print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```

---

### Contrasting specific layers

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, infer_device

device = infer_device()

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-0.6B")
model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen3-0.6B", torch_dtype=torch.float16
).to(device)

inputs = tokenizer("What is the highest peak in the world?", return_tensors="pt").to(device)

outputs = model.generate(
    **inputs,
    max_new_tokens=50,
    do_sample=False,
    repetition_penalty=1.2,
    custom_generate="transformers-community/dola",
    trust_remote_code=True,
    dola_layers=[18, 20]
)

# Only decode the newly generated tokens
print(tokenizer.batch_decode(outputs[:, inputs.input_ids.shape[-1]:], skip_special_tokens=True))
```