---
base_model:
- Qwen/Qwen2.5-7B-Instruct
license: apache-2.0
pipeline_tag: text-generation
library_name: transformers
datasets:
- MachineLearningLM/machinelearninglm-scm-synthetic-tabularml
tags:
- Tabular Classification
---

# MachineLearningLM

This repository contains the model presented in the paper [MachineLearningLM: Scaling Many-shot In-context Learning via Continued Pretraining](https://huggingface.co/papers/2509.06806).

## Model Summary

Can LLMs learn from 1,000 in-context examples? Introducing **MachineLearningLM** 🧪📊, a model continually pretrained on millions of synthetic tabular ML tasks, enabling robust many-shot in-context learning.

📈 **Scales from 8 to 1,024 in-context examples**

📈 **~15% improvement** on unseen tabular tasks compared to o3-mini / GPT-5-mini / Qwen-2.5-7B-Instruct

🌲 **Random-Forest-level numerical modeling robustness**

🧠 **MMLU score: 75.4%**

📄 Read the paper: https://huggingface.co/papers/2509.06806

GitHub: https://github.com/HaoAreYuDong/MachineLearningLM

## Evaluation and Validation

We have developed an automated evaluation framework: simply configure the parameters to perform validation and evaluation. **The code is now open-sourced at our [GitHub repository](https://github.com/HaoAreYuDong/MachineLearningLM).**

**Quick Start**

```bash
pip install -r requirements.txt

python ./src/evaluation/model_pred/dl_model_pred.py \
  --input_dir ./demo_input.jsonl \
  --output_dir ./demo_output.jsonl \
  --model_name MachineLearningLM/MachineLearningLM-7B-v1
```

(A plain `transformers` loading sketch, independent of this evaluation framework, is given at the end of this card.)

**Pipeline**

```bash
# Modify evaluate_parameters.sh first, then load it.
source evaluate_parameters.sh

# Option 1: end-to-end pipeline
./scripts/evaluate_pipeline.sh

# Option 2: parallel processing
./scripts/multi_process/data_prep.sh
./scripts/multi_process/prompt_gen.sh    # for deep learning only
./scripts/multi_process/model_pred.sh
./scripts/multi_process/evaluation.sh
./scripts/multi_process/report.sh

# Option 3: sequential processing
./scripts/single_process/data_prep.sh
./scripts/single_process/prompt_gen.sh   # for deep learning only
./scripts/single_process/model_pred.sh
./scripts/single_process/evaluation.sh
./scripts/single_process/report.sh
```

For more usage details, please visit our GitHub repository.

**Quantized Checkpoints (GGUF)**

https://huggingface.co/mradermacher/MachineLearningLM-7B-v1-GGUF

## TabICL Evaluation

**This part of the code must run in an environment with the `tabicl` and `openpyxl` libraries installed.**

The TabICL evaluation code lives separately in `./src/evaluation/tabicl_evaluate.py`; run `./scripts/tabicl_evaluate.sh` to obtain the TabICL results. Use `--datasets` to specify the datasets to evaluate and `--sample_sizes` to set the number of shots; separate multiple values with spaces (for example, `--datasets dataset_A dataset_B --sample_sizes 32 512`, where the dataset names are placeholders). To evaluate all CSV files in the input folder, pass **all**.

## Prior Data

MachineLearningLM uses code from tabicl to generate prior data. Run `./scripts/generate_data.sh` to generate it: the script produces the corresponding `.pt` and `.csv` files and normalizes the feature values in the CSV files to the range 0–999, as we did in the paper.
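For intuition, here is a minimal sketch of what that 0–999 normalization amounts to, assuming plain per-column min-max scaling; the file and column names are illustrative, and the actual logic lives in the generation scripts:

```python
import pandas as pd

# Illustrative only: rescale each feature column of a generated CSV to integers in [0, 999].
df = pd.read_csv("generated_dataset.csv")  # hypothetical output of generate_data.sh
feature_cols = [c for c in df.columns if c != "label"]  # assumes the target column is "label"

for col in feature_cols:
    lo, hi = df[col].min(), df[col].max()
    span = (hi - lo) or 1.0  # guard against constant columns
    df[col] = ((df[col] - lo) / span * 999).round().astype(int)

df.to_csv("generated_dataset_0_999.csv", index=False)
```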
### Parameter Introduction

(Refer to the comments in `tabicl/src/tabicl/prior/dataset.py`. An illustrative combination of these parameters is sketched after the Train section below.)

**Data Scale & Structure**

| Parameter | Type | Description |
| :------------- | :--- | :------------------------------------------------------ |
| `min_features` | int | Minimum number of features per dataset |
| `max_features` | int | Maximum number of features per dataset |
| `max_classes` | int | Maximum number of target classes |
| `min_seq_len` | int | Minimum samples per dataset; uses `max_seq_len` if None |
| `max_seq_len` | int | Maximum samples per dataset (exclusive) |

**Batch Configuration**

| Parameter | Type | Description |
| :--------------------- | :--- | :----------------------------------------------------------- |
| `batch_size` | int | Total number of datasets to generate per batch |
| `batch_size_per_gp` | int | Number of datasets per group (shared characteristics) |
| `batch_size_per_subgp` | int | Number of datasets per subgroup (similar causal structures); defaults to `batch_size_per_gp` if None |

**Sequence Length Control**

| Parameter | Type | Description |
| :--------------- | :--- | :----------------------------------------------------------- |
| `log_seq_len` | bool | Sample sequence length from a log-uniform distribution if True |
| `seq_len_per_gp` | bool | Sample sequence length per group (enables variable-sized datasets) |
| `replay_small` | bool | Occasionally sample smaller sequences for model robustness |

**Train-Test Split**

| Parameter | Type | Description |
| :--------------- | :-------- | :----------------------------------------------------------- |
| `min_train_size` | int/float | Start position/ratio of the train split (int: absolute, float: fractional) |
| `max_train_size` | int/float | End position/ratio of the train split (int: absolute, float: fractional) |

**Generation Method**

| Parameter | Type | Description |
| :----------- | :--- | :----------------------------------------------------------- |
| `prior_type` | str | Prior type: `'mlp_scm'`, `'tree_scm'`, or `'mix_scm'` (random selection) |
| `fixed_hp` | dict | Fixed structural configuration parameters |
| `sampled_hp` | dict | Parameters sampled during generation |

**Computation Settings**

| Parameter | Type | Description |
| :------------------------- | :--- | :------------------------------------------------ |
| `n_jobs` | int | Number of parallel jobs (-1 = use all processors) |
| `num_threads_per_generate` | int | Number of threads per generation job |
| `device` | str | Computation device (`'cpu'` or `'cuda'`) |

## Train

MachineLearningLM uses the LLaMA-Factory framework for training.

#### Training Environment Configuration

```bash
cd ./third_party/LLaMA-Factory
pip install -e ".[torch,metrics]" --no-build-isolation
pip install wandb
```

Use `./scripts/train.sh` for training.
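To make the parameter tables above concrete, here is a hypothetical configuration for the prior-data generator. Every value is illustrative rather than a default or recommendation; the authoritative documentation is the comments in `tabicl/src/tabicl/prior/dataset.py`:

```python
# Hypothetical prior-data generation settings; keys mirror the parameter tables above.
prior_config = dict(
    # Data scale & structure
    min_features=2, max_features=32, max_classes=10,
    min_seq_len=None,           # None falls back to max_seq_len
    max_seq_len=2048,           # exclusive upper bound on samples per dataset
    # Batch configuration
    batch_size=256, batch_size_per_gp=16,
    batch_size_per_subgp=None,  # None defaults to batch_size_per_gp
    # Sequence length control
    log_seq_len=True, seq_len_per_gp=True, replay_small=True,
    # Train-test split: int means absolute position, float means fraction
    min_train_size=0.1, max_train_size=0.9,
    # Generation method
    prior_type="mix_scm",       # randomly selects between 'mlp_scm' and 'tree_scm'
    # Computation settings
    n_jobs=-1, num_threads_per_generate=1, device="cpu",
)
```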
## Project Structure

```
MachineLearningLM/
├── src/
│   ├── evaluation/
│   │   ├── data_prep/             # Data preprocessing and chunking utilities
│   │   ├── prompt_gen/            # Prompt generation for deep learning models
│   │   ├── model_pred/            # Model inference (ML and DL prediction engines)
│   │   ├── result_proc/           # 5-layer evaluation architecture and metrics processing
│   │   ├── zero_summary/          # Result summarization and report generation
│   │   └── tabicl_evaluate.py
│   └── prior_data/
│       └── pt_to_csv.py
├── scripts/
│   ├── single_process/            # Sequential execution shell scripts
│   ├── multi_process/             # Parallel execution shell scripts (with _mp suffix)
│   ├── evaluate_parameters.sh     # Global parameter configuration
│   ├── evaluate_pipeline.sh       # Automated end-to-end pipeline
│   ├── generate_data.sh
│   ├── tabicl_evaluate.sh
│   └── train.sh
├── datahub_inputs/
│   ├── data_demo/                 # Demo datasets for testing
│   └── data_raw/                  # Raw input datasets
├── third_party/
│   ├── tabicl/
│   └── LLaMA-Factory/
├── requirements.txt               # Python dependencies for the evaluation framework
├── README.md
├── README_zh.md
├── THIRD_PARTY_NOTICES.md
└── LICENSE
```
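Finally, outside the evaluation framework, the checkpoint loads like any Qwen2.5-based `transformers` model. Below is a minimal smoke-test sketch; the toy prompt is purely illustrative, since the real many-shot prompt format is produced by the `prompt_gen` scripts:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "MachineLearningLM/MachineLearningLM-7B-v1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

# Toy many-shot tabular prompt (illustrative only; see the prompt_gen scripts for the real format).
prompt = (
    "Each line is `f1,f2 -> label`. Predict the label of the last line.\n"
    "12,85 -> 1\n40,23 -> 0\n11,90 -> 1\n38,20 -> 0\n13,88 ->"
)
messages = [{"role": "user", "content": prompt}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=8)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```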