---
dataset_info:
  features:
  - name: text
    dtype: string
  - name: lang
    dtype: string
  - name: type
    dtype: string
  - name: id
    dtype: string
  splits:
  - name: eval
    num_bytes: 76408631
    num_examples: 10000
  download_size: 39911840
  dataset_size: 76408631
configs:
- config_name: default
  data_files:
  - split: eval
    path: data/eval-*
license: odc-by
language:
- fr
- en
- es
tags:
- Python
- Java
- JavaScript
- C/C++
---

# Dataset Card for `dataset-eval`

## Description

`dataset-eval` is a multilingual, multi-domain dataset designed for evaluating language model performance during training. It supports performance tracking, generalization
diagnostics across languages and domains, and validation-based early stopping.

The included examples were automatically selected as **High quality** by the [`EuroBERT-210m-Quality`](https://huggingface.co/TempestTeam/EuroBERT-210m-Quality) model,
which was trained to estimate web-text quality in multiple languages.
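
For quick inspection, a minimal loading sketch with the 🤗 `datasets` library. The repository id `TempestTeam/dataset-eval` is an assumption inferred from the namespace of the quality model above; adjust it if the actual repo differs:

```python
from datasets import load_dataset

# Assumed repository id; replace with the actual one if it differs.
ds = load_dataset("TempestTeam/dataset-eval", split="eval")

print(ds)                            # features: text, lang, type, id
print(ds[0]["lang"], ds[0]["type"])  # e.g. "English" "NL"
```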

## Dataset Composition

- **Natural Languages**:
  - English: 2,640 examples (from [FineWeb](https://huggingface.co/datasets/HuggingFaceFW/fineweb))
  - French: 2,720 examples (from [FineWeb-2](https://huggingface.co/datasets/HuggingFaceFW/fineweb-2))
  - Spanish: 2,640 examples (from [FineWeb-2](https://huggingface.co/datasets/HuggingFaceFW/fineweb-2))

- **Programming Languages** (from [The-Stack-v2-dedup](https://huggingface.co/datasets/bigcode/the-stack-v2-dedup)):
  - Python: 500 examples  
  - Java: 500 examples  
  - JavaScript: 500 examples  
  - C: 250 examples  
  - C++: 250 examples  

- **Total**: 10,000 high-quality examples
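
The breakdown above can be sanity-checked directly from the `lang` column; a short sketch (again assuming the repo id used earlier):

```python
from collections import Counter
from datasets import load_dataset

ds = load_dataset("TempestTeam/dataset-eval", split="eval")  # assumed repo id

# Count examples per language/domain and compare against the card's figures.
for lang, n in sorted(Counter(ds["lang"]).items()):
    print(f"{lang:<12} {n:>5}")
```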

## Data Structure

Each example includes the following fields:

- **`text`** (*string*): the textual content or source code.
- **`lang`** (*string*): the language of the content (e.g., `English`, `French`, `Python`, `C++`).
- **`type`** (*string*): the type of content:
  - `"NL"` for natural language
  - `"CL"` for code language
- **`id`** (*string*): a unique identifier generated by hashing the `text` field.
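
A sketch of working with these fields, splitting the evaluation set into natural-language and code subsets via `type` (the repo id is again assumed). The exact hash behind `id` is not documented here, so the sketch only checks that ids are unique rather than recomputing them:

```python
from datasets import load_dataset

ds = load_dataset("TempestTeam/dataset-eval", split="eval")  # assumed repo id

# `id` is described as a hash of `text`; the hash function itself is not
# documented, so we only verify uniqueness.
assert len(set(ds["id"])) == len(ds)

# Separate natural-language ("NL") from code ("CL") examples.
nl = ds.filter(lambda ex: ex["type"] == "NL")
cl = ds.filter(lambda ex: ex["type"] == "CL")
print(len(nl), "NL examples,", len(cl), "CL examples")
```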

## Use Cases

This dataset is intended for **periodic evaluation** during language model training:

- Tracking performance on high-quality data
- Evaluation per batch or epoch
- Validation metric computation for early stopping
- Performance comparison by language or domain

It is **not intended for direct training**, due to its limited size and its purpose as a filtered evaluation sample.
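
For illustration, a hedged sketch of per-language metric tracking during training; `compute_loss` is a hypothetical stand-in for an actual evaluation step (e.g., mean token-level cross-entropy on a text), and the default repo id is an assumption:

```python
from collections import defaultdict
from datasets import load_dataset

def eval_by_language(compute_loss, repo_id="TempestTeam/dataset-eval"):
    """Average a user-supplied loss over each language/domain.

    `compute_loss(text) -> float` is a placeholder for a real model
    evaluation step; the default repo id is an assumption.
    """
    ds = load_dataset(repo_id, split="eval")
    totals, counts = defaultdict(float), defaultdict(int)
    for ex in ds:
        totals[ex["lang"]] += compute_loss(ex["text"])
        counts[ex["lang"]] += 1
    return {lang: totals[lang] / counts[lang] for lang in totals}
```

Calling such a helper at a fixed step interval yields one curve per language or domain, which can then feed cross-domain comparisons or an early-stopping criterion.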

## Licenses

The dataset is built from sources under the following licenses:

| Source                | License          |
|:---------------------:|:----------------:|
| FineWeb               | ODC-BY 1.0       |
| FineWeb-2             | ODC-BY 1.0       |
| The Stack v2          | Other            |
| EuroBERT-210m-Quality | Apache-2.0       |

Users must ensure they comply with the specific license conditions when reusing or redistributing this data.

## Risks and Limitations

### Sensitive Data

The original sources are from the public web and were automatically cleaned. Despite filtering, some data may still contain sensitive, personal, or confidential information.

It is strongly recommended **not to use this dataset in production or user-facing systems without manual review**.

### Bias

- Quality annotations were produced by an automatic classifier and may reflect its training biases.
- The dataset covers only three natural languages and five programming languages.
- Cultural, thematic, or syntactic biases may be present depending on the source corpora.