---
license: cc-by-4.0
language:
- en
- de
- fr
- pl
- ru
- it
- pt
- cs
- nl
- es
- fi
- tr
- hu
- bg
- uk
- bs
- hr
- da
- et
- lt
- ro
- sk
- sl
- sv
- 'no'
- lv
- sr
- sq
- mk
- is
- mt
- ga
datasets:
- HPLT/HPLT2.0_cleaned
- HPLT/hplt_monolingual_v1_2
- HuggingFaceFW/fineweb-2
- allenai/MADLAD-400
- uonlp/CulturaX
- bigcode/the-stack
- common-pile/arxiv_papers
---
**Developed by:**  [Tilde.ai](https://tilde.ai/tildeopen-llm/)   
**Funded by:**  European Commission via [EuroHPC JU Large AI Grand Challenge](https://www.eurohpc-ju.europa.eu/winners-announced-large-ai-grand-challenge-2024-06-26_en)   
**Model type:**  A 30B parameter dense decoder-only transformer   
**Languages:**  Albanian, Bosnian, Bulgarian, Croatian, Czech, Danish, Dutch, English, Estonian, Finnish, French, German, Hungarian, Icelandic, Irish, Italian, Latgalian, Latvian, Lithuanian, Macedonian, Maltese, Montenegrin, Norwegian, Polish, Portuguese, Romanian, Russian, Serbian, Slovak, Slovene, Spanish, Swedish, Turkish, Ukrainian as well as mathematical proofs, programming code and XML documents containing translation data   
**License:**  CC-BY-4.0   


## Mission statement 
TildeOpen LLM is an open-source foundational (base) language model built to serve underrepresented Nordic and Eastern European languages. Developed with European Commission funding and trained on the LUMI supercomputer, this 30B+ parameter model addresses the performance gaps that speakers of 19 focus languages—representing over 165 million people—face with existing AI systems.   
The model employs an equitable tokeniser and curriculum-learning approach to ensure fair representation across less-resourced languages, moving beyond the typical English-centric design of most language models. As an open-source project, TildeOpen LLM enables transparent research and community-driven development while maintaining European technological independence.   
This foundational model is not yet adapted to follow instructions or aligned with safety features. The next version being built on top of this model will be a specialised translation model, leveraging TildeOpen LLM's multilingual foundation to provide high-quality translation capabilities across the supported European language pairs.   

## Model training details 
We train TildeOpen LLM using [Tilde's branch](https://github.com/tilde-nlp/llm-gpt-neox) of [EleutherAI's](https://www.eleuther.ai/) open-source GPT-NeoX framework on 768 AMD MI250X GPUs of the LUMI supercomputer. The foundational model training involves 450,000 updates with a constant batch size of 4,718,592 tokens, using a constant learning rate followed by a cooldown phase, over approximately 2 trillion tokens. Training consists of three distinct data-sampling phases. First, all languages are sampled uniformly to ensure equal representation. Second, languages are sampled according to their natural distribution so that the model sees as much data as possible from languages with larger speaker bases. Finally, we return to uniform sampling across all languages. This three-phase approach ensures that TildeOpen LLM develops balanced multilingual capabilities while maintaining strong performance across all target languages, particularly the underrepresented European ones.
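To make the curriculum concrete, the sketch below shows one way such a three-phase sampling schedule could be expressed in code. It is purely illustrative: the phase boundaries and the per-language weights are placeholder assumptions, not the values actually used in training.

```python
import random

# Illustrative sketch of a three-phase sampling curriculum.
# Phase boundaries and language weights below are placeholder assumptions,
# NOT the actual values used to train TildeOpen LLM.
LANGUAGES = ["en", "de", "fr", "pl", "lv", "lt", "et", "is", "mt"]  # subset for brevity
NATURAL_WEIGHTS = {  # hypothetical "natural distribution" shares
    "en": 0.40, "de": 0.18, "fr": 0.15, "pl": 0.10, "lv": 0.05,
    "lt": 0.05, "et": 0.03, "is": 0.02, "mt": 0.02,
}

def sampling_weights(update_step: int, total_updates: int = 450_000) -> dict:
    """Per-language sampling weights for the current training step."""
    progress = update_step / total_updates
    if progress < 1 / 3 or progress >= 2 / 3:
        # Phases 1 and 3: uniform sampling, every language equally represented.
        return {lang: 1 / len(LANGUAGES) for lang in LANGUAGES}
    # Phase 2: sample according to the (hypothetical) natural data distribution.
    return dict(NATURAL_WEIGHTS)

def pick_language(update_step: int) -> str:
    weights = sampling_weights(update_step)
    return random.choices(list(weights), weights=list(weights.values()))[0]

print(pick_language(10_000))    # phase 1: uniform
print(pick_language(225_000))   # phase 2: natural distribution
print(pick_language(400_000))   # phase 3: uniform again
```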

## Model Hyper-Parameters 

| Parameter | Value | 
|-----------|-------| 
| Sequence Length | 8192 | 
| Number of Layers | 60 | 
| Embedding Size | 6144 | 
| FFN Hidden Size | 21504 | 
| Number of Heads | 48 | 
| Number of KV Heads (GQA) | 8 | 
| Activation Function | SwiGLU | 
| Position Encodings | RoPE | 
| Layer Norm | RMSNorm | 
| Embedding Parameters | 8.05E+08 | 
| LM Head Parameters | 8.05E+08 | 
| Non-embedding Parameters | 2.91E+10 | 
| Total Parameters | 3.07E+10 | 
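
As a rough sanity check, the parameter counts above can be reproduced from the architecture. The sketch below assumes a Llama-style layout (GQA attention with a single output projection, a three-matrix SwiGLU FFN, untied embedding and LM head) and a vocabulary of about 131k tokens; these are our assumptions, inferred from the table rather than stated in it.

```python
# Back-of-the-envelope parameter count, assuming a Llama-style decoder
# (GQA attention, three-matrix SwiGLU FFN, untied embedding and LM head).
# The vocabulary size is an assumption inferred from the embedding row above.
hidden, ffn_hidden, layers = 6144, 21504, 60
heads, kv_heads = 48, 8
vocab = 131_072                              # assumed, not listed in the table
head_dim = hidden // heads                   # 128

attn = hidden * hidden                       # Q projection
attn += 2 * hidden * kv_heads * head_dim     # K and V projections (GQA)
attn += hidden * hidden                      # output projection
ffn = 3 * hidden * ffn_hidden                # gate, up and down projections (SwiGLU)

non_embedding = layers * (attn + ffn)        # RMSNorm weights omitted (negligible)
embedding = vocab * hidden
lm_head = vocab * hidden

print(f"embedding:     {embedding:.2e}")                             # ~8.05e+08
print(f"lm head:       {lm_head:.2e}")                               # ~8.05e+08
print(f"non-embedding: {non_embedding:.2e}")                         # ~2.91e+10
print(f"total:         {embedding + lm_head + non_embedding:.2e}")   # ~3.07e+10
```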

## Tokeniser details 
We built the TildeOpen LLM tokeniser to ensure equitable representation across languages. Technically, we trained the tokeniser so that the same text requires a similar number of tokens regardless of the language it is written in. In practice, this makes TildeOpen LLM more efficient and faster than other models for our focus languages, as writing out answers requires fewer steps. For more details on how TildeOpen LLM compares against other models, see **[TILDE Bench](https://tilde-nlp.github.io/tokenizer-bench.html)**! A rough way to check this yourself is sketched below.
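
The snippet below is an informal illustration of this property; the sample sentences are arbitrary examples of ours and are not part of TILDE Bench.

```python
from transformers import AutoTokenizer

# Informal check of tokeniser efficiency (tokens per character) across languages.
# The sample sentences are arbitrary illustrations, not taken from TILDE Bench.
tokenizer = AutoTokenizer.from_pretrained("TildeAI/TildeOpen-30b", use_fast=False)

samples = {
    "en": "The weather will be sunny tomorrow.",
    "de": "Morgen wird das Wetter sonnig sein.",
    "lv": "Rīt laiks būs saulains.",
}

for lang, text in samples.items():
    n_tokens = len(tokenizer(text)["input_ids"])
    print(f"{lang}: {n_tokens} tokens for {len(text)} characters "
          f"({n_tokens / len(text):.2f} tokens per character)")
```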


## Running model using HF transformers
When loading the tokeniser, you must set `use_fast=False`.
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load tokenizer (the slow tokenizer is required, see above) + model
tokenizer = AutoTokenizer.from_pretrained("TildeAI/TildeOpen-30b", use_fast=False)
model = AutoModelForCausalLM.from_pretrained(
    "TildeAI/TildeOpen-30b",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Example prompt; as a base model, TildeOpen continues text rather than following instructions
user_in = "Rīga ir Latvijas galvaspilsēta un"

# Tokenize
inputs = tokenizer(user_in, return_tensors="pt").to(model.device)

# Generate (greedy, deterministic)
outputs = model.generate(
    **inputs,
    max_new_tokens=512,
    repetition_penalty=1.2,
    do_sample=False,
)

# Decode generated tokens back to text
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Evaluation
### Per-Character Perplexity
**What is Perplexity?** Perplexity measures how well a language model predicts text. A model with low perplexity makes accurate predictions consistently, while a model with high perplexity is frequently "surprised" by unexpected words or patterns. Lower perplexity indicates that the model has learned language patterns more effectively and is less often "surprised" by what it encounters because it better understands how the language works.
Perplexity provides a fair evaluation of how well each model handles:
- Spelling accuracy across a diverse vocabulary
- Grammar rules that span multiple words
- Sentence structure and flow
- Language-specific patterns (how different languages form plural forms or compound words)

**Why Character-Level?** Different language models use different internal vocabularies - some break text into whole words, others into word fragments, and some into individual characters. This makes direct comparison difficult.
Character-level perplexity creates a standardised comparison by calculating how well each model would theoretically perform if its predictions were measured character by character. We are not changing how the models work; instead, we apply a mathematical conversion that approximates character-level performance from each model's token-level predictions, as sketched below.
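
The conversion relies on the fact that the total negative log-likelihood a model assigns to a text does not depend on how the text is split into tokens, so it can be renormalised by the number of characters instead of the number of tokens. Below is a minimal sketch of this formulation; the exact evaluation protocol used for the numbers in the table is not specified here.

```python
import math

# Per-character perplexity from token-level statistics (our formulation):
# the summed negative log-likelihood is tokenisation-independent, so it can be
# renormalised by character count rather than token count.
def per_char_perplexity(token_logprobs: list[float], num_chars: int) -> float:
    total_nll = -sum(token_logprobs)   # natural-log probabilities of each token
    return math.exp(total_nll / num_chars)

# Equivalent identity: PPL_char = PPL_token ** (num_tokens / num_chars)
logprobs = [-1.2, -0.4, -2.0, -0.7]    # hypothetical token log-probabilities
print(per_char_perplexity(logprobs, num_chars=18))
```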

**Why does this Matter?** Models with lower perplexity generally perform better on real-world tasks like text generation, translation, and understanding context. It's a reliable indicator of overall language competency across different applications.

**What data did we use?**
We use WMT24++ as it is a multilingual, language-parallel evaluation set that none of the models have seen during training. WMT24++ is a composite of texts from news, literature, speech, and social media; thus, it is suitable for foundational model benchmarking.

Per-character perplexity on WMT24++ (lower is better; the best result per language is in bold):

| Language | TildeOpen 30b | Gemma 2 27b | EuroLLM 22B Prev. | ALIA 40B |
|----------|---------------|-------------|-------------------|----------|
| Bulgarian | **2.0539** | 2.2184 | 2.1985 | 2.1336 |
| Czech | **2.1579** | 2.3522 | 2.3221 | 2.2719 |
| Danish | **2.003** | 2.1517 | 2.1353 | 2.0805 |
| German | **1.8769** | 1.9285 | 1.9452 | 1.904 |
| English | 2.0378 | **1.9525** | 2.0568 | 2.0261 |
| Spanish | 1.9503 | 1.9752 | 2.0145 | **1.9369** |
| Estonian | **2.1711** | 2.5747 | 2.3852 | 2.325 |
| Finnish | **2.0497** | 2.288 | 2.2388 | 2.1831 |
| French | **1.8978** | 1.9355 | 1.9282 | 1.9084 |
| Croatian | **2.1147** | 2.544 | 2.4905 | 2.2433 |
| Hungarian | **2.0539** | 2.2228 | 2.2256 | 2.1635 |
| Icelandic | **2.0873** | 3.0329 | 4.7908 | 3.957 |
| Italian | **1.9565** | 2.0137 | 2.0098 | 1.9887 |
| Lithuanian | **2.1247** | 2.4175 | 2.3137 | 2.3075 |
| Latvian | **2.1439** | 2.5355 | 2.3141 | 2.3276 |
| Dutch | **1.9333** | 2.0312 | 2.0079 | 1.9904 |
| Norwegian | **2.1284** | 2.2862 | 2.3506 | 2.2253 |
| Polish | **2.0241** | 2.1294 | 2.0803 | 2.0803 |
| Portuguese | **1.9899** | 2.0597 | 2.0272 | 2.0187 |
| Romanian | **2.0196** | 2.1606 | 2.1641 | 2.1114 |
| Russian | **2.0424** | 2.09 | 2.1095 | 2.0871 |
| Slovak | **2.1192** | 2.338 | 2.3029 | 2.2609 |
| Slovenian | **2.1556** | 2.4443 | 2.3398 | 2.2589 |
| Serbian | **2.2469** | 2.6351 | 4.2471 | 2.3743 |
| Swedish | **2.041** | 2.1809 | 2.1464 | 2.1211 |
| Turkish | **2.0997** | 2.247 | 2.2202 | 2.232 |
| Ukrainian | **2.1376** | 2.2665 | 2.2691 | 2.2086 |