---
base_model:
- Qwen/Qwen2.5-7B-Instruct
license: apache-2.0
pipeline_tag: text-generation
library_name: transformers
---

# MachineLearningLM

This repository contains the model presented in the paper [MachineLearningLM: Scaling Many-shot In-context Learning via Continued Pretraining](https://huggingface.co/papers/2509.06806).

## Model Summary

Can LLMs learn from 1,000 in-context examples?

Introducing **MachineLearningLM** 🧪📊, a model continually pretrained on millions of synthetic tabular ML tasks, enabling robust many-shot in-context learning.

📈 **Scales from 8 to 1,024 in-context examples**

📈 **~15% improvement** on unseen tabular tasks compared with o3-mini / GPT-5-mini / Qwen-2.5-7B

🌲 **Random-Forest-level robustness**

🧠 **MMLU score: 75.4%**

📄 Read the paper: https://huggingface.co/papers/2509.06806

GitHub: https://github.com/HaoAreYuDong/MachineLearningLM

## Evaluation and Validation

We provide an automated evaluation framework: configure the parameters and it handles validation and evaluation end to end.
**The code is open-sourced at our [GitHub repository](https://github.com/HaoAreYuDong/MachineLearningLM).**

**Quick Start**

```bash
pip install -r requirements.txt
python ./src/evaluation/model_pred/dl_model_pred.py \
  --input_dir ./demo_input.jsonl \
  --output_dir ./demo_output.jsonl \
  --model_name MachineLearningLM/MachineLearningLM-7B-v1
```
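The `--model_name` argument takes the Hub id directly, as shown above. If you would rather cache the checkpoint locally first, here is a minimal sketch using the Hugging Face Hub CLI (the `--local-dir` path is only an example):

```bash
# Optional: pre-download the checkpoint from the Hugging Face Hub.
# The target directory is an arbitrary example; point it wherever you like.
pip install -U "huggingface_hub[cli]"
huggingface-cli download MachineLearningLM/MachineLearningLM-7B-v1 \
  --local-dir ./models/MachineLearningLM-7B-v1
```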
**Pipeline**
```bash
# modify the evaluate_parameters.sh file
source evaluate_parameters.sh

# Option 1  End-to-End Pipeline
./scripts/evaluate_pipeline.sh

# Option 2  Parallel Processing
./scripts/multi_process/data_prep.sh
./scripts/multi_process/prompt_gen.sh  # For deep learning only
./scripts/multi_process/model_pred.sh
./scripts/multi_process/evaluation.sh
./scripts/multi_process/report.sh

# Option 3  Sequential Processing
./scripts/single_process/data_prep.sh
./scripts/single_process/prompt_gen.sh  # For deep learning only
./scripts/single_process/model_pred.sh
./scripts/single_process/evaluation.sh
./scripts/single_process/report.sh
```

For more usage details, please visit our GitHub.

## TabICL Evaluation

**This part of the code must run in an environment with the `tabicl` and `openpyxl` libraries installed.**

The TabICL evaluation code lives separately in `./src/evaluation/tabicl_evaluate.py`; run `./scripts/tabicl_evaluate.sh` to obtain the TabICL results.

Use `--datasets` to specify the datasets to evaluate and `--sample_sizes` to set the number of shots.

To evaluate multiple datasets, separate their names with spaces; to evaluate every CSV file in the input folder, pass **all**. A hypothetical invocation is sketched below.
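
For example, a hypothetical invocation of the Python entry point (the dataset names and shot counts are placeholders, and other arguments such as the input folder are omitted here; depending on your setup you may prefer to pass the same flags through `./scripts/tabicl_evaluate.sh`):

```bash
# Placeholder dataset names and shot counts -- adjust to your own CSV files.
python ./src/evaluation/tabicl_evaluate.py \
  --datasets adult bank \
  --sample_sizes 8 64 512

# Or evaluate every CSV file in the input folder:
python ./src/evaluation/tabicl_evaluate.py \
  --datasets all \
  --sample_sizes 8 64 512
```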

## Prior Data

MachineLearningLM uses code from TabICL to generate its prior data.

Run `./scripts/generate_data.sh` to generate the prior data. It produces the corresponding `.pt` and `.csv` files and normalizes the feature values in the CSV files to the range 0–999, as in the paper.

### Parameter Introduction (refer to the comments in `tabicl/src/tabicl/prior/dataset.py`)

**Data Scale & Structure**

| Parameter      | Type | Description                                             |
| :------------- | :--- | :------------------------------------------------------ |
| `min_features` | int  | Minimum number of features per dataset                  |
| `max_features` | int  | Maximum number of features per dataset                  |
| `max_classes`  | int  | Maximum number of target classes                        |
| `min_seq_len`  | int  | Minimum samples per dataset. Uses `max_seq_len` if None |
| `max_seq_len`  | int  | Maximum samples per dataset (exclusive)                 |

**Batch Configuration**

| Parameter              | Type | Description                                                  |
| :--------------------- | :--- | :----------------------------------------------------------- |
| `batch_size`           | int  | Total number of datasets to generate per batch               |
| `batch_size_per_gp`    | int  | Number of datasets per group (shared characteristics)        |
| `batch_size_per_subgp` | int  | Number of datasets per subgroup (similar causal structures). Defaults to `batch_size_per_gp` if None |

**Sequence Length Control**

| Parameter        | Type | Description                                                  |
| :--------------- | :--- | :----------------------------------------------------------- |
| `log_seq_len`    | bool | Sample sequence length from log-uniform distribution if True |
| `seq_len_per_gp` | bool | Sample sequence length per group (enables variable-sized datasets) |
| `replay_small`   | bool | Occasionally sample smaller sequences for model robustness   |

**Train-Test Split**

| Parameter        | Type      | Description                                                  |
| :--------------- | :-------- | :----------------------------------------------------------- |
| `min_train_size` | int/float | Start position/ratio for train split (int: absolute, float: fractional) |
| `max_train_size` | int/float | End position/ratio for train split (int: absolute, float: fractional) |

**Generation Method**

| Parameter    | Type | Description                                                  |
| :----------- | :--- | :----------------------------------------------------------- |
| `prior_type` | str  | Prior type: 'mlp_scm', 'tree_scm', or 'mix_scm' (random selection) |
| `fixed_hp`   | dict | Fixed structural configuration parameters                    |
| `sampled_hp` | dict | Parameters sampled during generation                         |

**Computation Settings**

| Parameter                  | Type | Description                                       |
| :------------------------- | :--- | :------------------------------------------------ |
| `n_jobs`                   | int  | Number of parallel jobs (-1 = use all processors) |
| `num_threads_per_generate` | int  | Number of threads per generation job              |
| `device`                   | str  | Computation device ('cpu' or 'cuda')              |
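
As a purely illustrative sketch of how these knobs might be set, the values below are not the paper's configuration, and the way `./scripts/generate_data.sh` actually consumes them may differ; see the comments in `dataset.py`:

```bash
# Illustrative values only -- not the settings used in the paper.
MIN_FEATURES=5        # min_features: lower bound on feature columns per dataset
MAX_FEATURES=20       # max_features: upper bound on feature columns per dataset
MAX_CLASSES=10        # max_classes: cap on target classes
MAX_SEQ_LEN=2048      # max_seq_len: cap on samples per dataset (exclusive)
BATCH_SIZE=64         # batch_size: datasets generated per batch
PRIOR_TYPE=mix_scm    # prior_type: randomly mix mlp_scm and tree_scm priors
N_JOBS=-1             # n_jobs: use all available processors
DEVICE=cuda           # device: 'cpu' or 'cuda'
```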

## Train

MachineLearningLM uses the LLaMA-Factory framework for training.

### Training Environment Configuration

```bash
cd ./third_party/LLaMA-Factory
pip install -e ".[torch,metrics]" --no-build-isolation
pip install wandb
```

Use `./scripts/train.sh` for training.
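
Training goes through LLaMA-Factory; if you need to launch a run outside the provided script, the framework's standard CLI entry point is sketched below (the YAML path is a placeholder for whichever training config `./scripts/train.sh` actually uses):

```bash
# Sketch only: substitute the training YAML actually used by ./scripts/train.sh.
cd ./third_party/LLaMA-Factory
llamafactory-cli train path/to/your_training_config.yaml
```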

## Project Structure

```
MachineLearningLM/
├── src/
│   ├── evaluation/
│   │   ├── data_prep/          # Data preprocessing and chunking utilities
│   │   ├── prompt_gen/         # Prompt generation for deep learning models
│   │   ├── model_pred/         # Model inference (ML and DL prediction engines)
│   │   ├── result_proc/        # 5-layer evaluation architecture and metrics processing
│   │   ├── zero_summary/       # Result summarization and report generation
│   │   └── tabicl_evaluate.py
│   └── prior_data/
│       └── pt_to_csv.py
├── scripts/
│   ├── single_process/         # Sequential execution shell scripts
│   ├── multi_process/          # Parallel execution shell scripts (with _mp suffix)
│   ├── evaluate_parameters.sh  # Global parameter configuration
│   ├── evaluate_pipeline.sh    # End-to-end evaluation pipeline
│   ├── generate_data.sh
│   ├── tabicl_evaluate.sh
│   └── train.sh
├── datahub_inputs/
│   ├── data_demo/              # Demo datasets for testing
│   └── data_raw/               # Raw input datasets
├── third_party/
│   ├── tabicl/
│   └── LLaMA-Factory/
├── requirements.txt            # Python dependencies for the evaluation framework
├── README.md
├── README_zh.md
├── THIRD_PARTY_NOTICES.md
└── LICENSE
```