    path: data/commercial-*
  - split: noncommercial
    path: data/noncommercial-*
pretty_name: Biomed-Enriched
---

# Biomed-Enriched: A Biomedical Dataset Enriched with LLMs for Pretraining and Extracting Rare and Hidden Content

### Dataset Authors

**Rian Touchent, Nathan Godey & Eric de la Clergerie**  
*Sorbonne Université, INRIA Paris*

### Overview

Biomed-Enriched is a PubMed-derived dataset created using a two-stage annotation process. Initially, [Llama 3.1 70B Instruct](https://huggingface.co/meta-llama/Llama-3.1-70B-Instruct) annotated 400K paragraphs for document type, domain, and educational quality. These annotations were then used to fine-tune a smaller model, which propagated the labels across the entire PubMed Central Open Access corpus. This process yielded 2M clinical case paragraphs, over 450K of which are high-quality paragraphs licensed for commercial use, providing a large-scale, openly available alternative to private clinical text. In continual pre-training experiments with OLMo2, curated subsets showed targeted improvements: clinical upsampling boosted MMLU Professional Medicine scores by ~5%, and educational quality filtering improved MedQA and MedMCQA by ~1%. Combining these methods matched the performance of standard continual pre-training while using just one-third of the training tokens, highlighting the potential for more efficient biomedical pretraining.

The dataset is structured into two primary splits:

* **Commercial**
* **Non-Commercial**

## Dataset Structure

### Commercial Split

* **text**: Textual content of the paragraphs.
* **path**: Precise XML path referencing original paragraph locations.
* **license\_url**: URL linking to the license.
* **authors**: Comprehensive list of authors per paragraph for proper attribution compliance.

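To illustrate the schema, here is a minimal sketch of loading and inspecting the commercial split with the `datasets` library; the repository id below is a placeholder, so substitute the dataset's actual Hub id or a local path.

```python
# Minimal sketch (placeholder repo id): inspect the commercial split.
from datasets import load_dataset

ds = load_dataset("user/biomed-enriched", split="commercial")

print(ds.column_names)     # expect fields such as text, path, license_url, authors
row = ds[0]
print(row["license_url"])  # license governing this paragraph's text
print(row["authors"])      # authors to credit for attribution compliance
```
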
### Non-Commercial Split

* **path**: Precise XML path referencing original paragraph locations.
* **license\_url**: URL linking to the license.
* **authors**: Comprehensive list of authors per paragraph for proper attribution compliance.

> **Note:** The non-commercial split does not contain text data due to licensing restrictions. However, we provide scripts to populate the `text` field from a local PMC Open Access XML dump. See below for installation and usage instructions.

```bash
pip install biomed-enriched
```

### With Python

```python
from biomed_enriched import populate

DATASET_DIR = "/path/to/biomed-enriched"           # input dataset
PMC_XML_ROOT = "/path/to/pmc/non-comm/xml"         # PMC XML dump
OUTPUT_DIR = "/path/to/populated-biomed-enriched"  # omit output_path to overwrite in place

populate(DATASET_DIR, PMC_XML_ROOT, output_path=OUTPUT_DIR, splits="noncommercial", num_proc=1)
```

The call adds a new `text` column as the third column (after `article_id` and `path`); if `output_path` is omitted, the dataset is overwritten in place.

### With CLI

```bash
biomed-enriched \
  --input /path/to/biomed-enriched \
  --xml-root /path/to/pmc/non-comm/xml \
  --num-proc 8
```

Add `--output DIR` if you prefer writing to a new directory instead of overwriting in place.

## Annotation Process

The dataset was created using a two-stage annotation framework:

1. **Initial Annotation by Large Language Model**:

   * Annotated a subset of paragraphs for the following categories:

     - **Document Type**: Categorizes the structure and purpose of the content.
       - **Clinical Case**: Detailed report of symptoms, diagnosis, treatment, and follow-up of individual patients.
       - **Study**: Research paragraph with methods, results, and discussion of experiments or observations.
       - **Review**: Summary or synthesis of current knowledge on a specific topic.
       - **Other**: Content not fitting the above categories (editorials, commentaries, policy paragraphs).
     - **Domain**: Identifies the subject area focus.
       - **Clinical**: Content relating to patient care, clinical trials, case reports, or practice guidelines.
       - **Biomedical**: Scientific aspects of medicine and biology.
       - **Other**: Content mentioning biomedical topics but focusing on administrative, policy, or general communications.
     - **Educational Quality**: Assesses pedagogical value for college-level biomedical learning on a scale from 1 (minimal value) to 5 (exceptional value), inspired by FineWeb-Edu.
       - **Score 1**: Basic information relevant to biomedical topics; may contain irrelevant content.
       - **Score 2**: Addresses biomedical education elements but with limitations in coherence or depth.
       - **Score 3**: Appropriate for college-level curricula; introduces key concepts with reasonable coherence.
       - **Score 4**: Highly relevant educational content with a clear writing style and minimal irrelevant information.
       - **Score 5**: Outstanding educational value; detailed reasoning with profound insights for college-level learning.

2. **Annotation Scaling via Model Distillation**:

   * These annotations were distilled into an XLM-RoBERTa-base model, enabling scaling to the entire PMC dataset.

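As an illustration of this distillation step, below is a minimal sketch of fine-tuning XLM-RoBERTa-base as a paragraph classifier on LLM-produced labels, shown for the educational-quality axis only. The exact heads, data pipeline, and hyperparameters used for Biomed-Enriched are not specified in this card, and the toy examples and names below are invented.

```python
# Hedged sketch: distill LLM annotations into XLM-RoBERTa-base
# (educational quality as a 5-class task; toy data stands in for
# the 400K Llama-annotated paragraphs).
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "xlm-roberta-base", num_labels=5)  # scores 1-5 mapped to labels 0-4

train_ds = Dataset.from_dict({
    "text": [
        "A 54-year-old man presented with progressive dyspnea ...",
        "The authors declare no competing interests.",
    ],
    "label": [3, 0],  # toy labels: score 4 -> class 3, score 1 -> class 0
})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

train_ds = train_ds.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="edu-quality-clf",
                           per_device_train_batch_size=2,
                           num_train_epochs=1),
    train_dataset=train_ds,
)
trainer.train()  # the distilled classifier can then label the full PMC OA corpus
```
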
## Annotation Statistics

Here is the distribution of educational scores per domain:

| Educational Score | Biomedical (n=116,221,134) | Clinical (n=2,182,784) | Other (n=15,213,051) |
| :---------------: | :------------------------: | :--------------------: | :------------------: |
|         1         |            1.8 %           |          6.0 %         |        60.1 %        |
|         2         |            9.4 %           |         23.4 %         |        29.4 %        |
|         3         |           10.9 %           |         26.6 %         |         8.3 %        |
|         4         |           75.3 %           |         44.0 %         |         2.1 %        |
|         5         |            2.6 %           |            –           |           –          |

Here is the distribution of educational scores per document type:

| Educational Score | Study (n=100,387,809) | Review (n=6,811,226) | Clinical case (n=2,122,403) | Other (n=24,295,531) |
|:-----------------:|:---------------------:|:--------------------:|:---------------------------:|:--------------------:|
| 1                 | 0.7 %                 | 0.3 %                | 4.0 %                       | 43.4 %               |
| 2                 | 7.9 %                 | 1.6 %                | 14.4 %                      | 30.9 %               |
| 3                 | 10.0 %                | 5.0 %                | 24.6 %                      | 14.6 %               |
| 4                 | 78.7 %                | 86.9 %               | 57.0 %                      | 11.1 %               |
| 5                 | 2.6 %                 | 6.1 %                | 0.0 %                       | 0.0 %                |

### Language Distribution

| Language | Articles  | Paragraphs  | Clinical Case Paragraphs | % Clinical Cases |
|----------|-----------|-------------|--------------------------|------------------|
| en       | 4,113,275 | 131,579,445 | 2,113,185                | 1.61             |
| es       | 4,339     | 181,779     | 1,235                    | 0.68             |
| zh-cn    | 3,649     | 59,719      | 0                        | 0.00             |
| fr       | 3,410     | 173,325     | 2,586                    | 1.49             |
| de       | 2,976     | 248,608     | 51                       | 0.02             |
| it       | 2,708     | 274,819     | 521                      | 0.19             |
| pt       | 934       | 85,242      | 4,540                    | 5.33             |
| ko       | 636       | 25,535      | 0                        | 0.00             |
| ru       | 222       | 10,553      | 0                        | 0.00             |
| id       | 189       | 91,865      | 15                       | 0.02             |

## Key Applications

* Improve efficiency in biomedical pretraining by focusing on high-quality, domain-specific content.
* Create new biomedical subsets tailored to specific research needs based on document type and domain.

## Evaluation

Our evaluation focuses on isolating the effects of data curation rather than pursuing state-of-the-art scores on benchmarks. A more powerful foundation model would likely yield higher absolute scores but would obscure the precise impact of our dataset. We therefore selected `OLMo2-7B-stage1` as our foundation model, as this intermediate checkpoint provides strong baseline capabilities while allowing clear attribution of performance gains to our enrichment strategies. This model has already developed strong language-modeling capabilities but precedes the knowledge-intensive tuning of stage 2, providing an ideal balance without the risk of catastrophic forgetting of instruction-following abilities during domain adaptation. Notably, the data mix used in stage 1 includes DCLM, a dataset filtered from web data using a classifier trained on instruction data, which gives OLMo2-7B relatively strong question-answering capabilities even after stage 1.

Each Biomed-Enriched variant was trained for exactly 33.6 billion tokens using identical hyperparameters, following the annealing strategy OLMo2 uses in its mid-training phase. By maintaining strict parameter parity across experiments, we created a controlled environment focused solely on measuring the effectiveness of different data curation strategies.

These experiments are designed to illustrate how our granular annotations enable targeted improvements in model capabilities. For instance, by specifically upsampling clinical content (the `BE-Clinical` and `BE-ClinicalCase` variants), we expect a notable increase in performance on the MMLU Professional Medicine benchmark, underscoring the dataset's potential for developing specialized models.

The following variants were created for this evaluation (a construction sketch follows the list):

*   **BE-Base:** The complete, unmodified PMC Open Access Subset, serving as the baseline.
*   **BE-Educational:** Preserves all articles but removes paragraphs with educational quality scores below 3.
*   **BE-Clinical:** Replicates articles with predominantly clinical-domain content 10× in the training mix.
*   **BE-ClinicalCase:** Replicates articles containing at least one clinical case paragraph 10× to increase exposure to clinical narratives.
*   **BE-Prefix:** Prefixes each paragraph with its predicted annotations to allow modeling of metadata-content relationships.
*   **BE-French:** Upsamples articles containing French text 10× to address language imbalance.
*   **BE-All:** Combines quality filtering (score ≥ 3), upsampling of clinical content, French text, and clinical cases, plus metadata prefixing.

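As a rough guide, the sketch below derives BE-Educational-, BE-Clinical-, and BE-Prefix-style data from the paragraph annotations. The column names (`educational_score`, `domain`, `document_type`) and the repo id are assumptions to check against the actual schema, and the upsampling is shown at paragraph level for brevity, whereas the variants above replicate whole articles.

```python
# Hedged sketch: building variant-style mixes from the annotations.
# Column names and the repo id are assumptions; check ds.column_names.
from datasets import concatenate_datasets, load_dataset

ds = load_dataset("user/biomed-enriched", split="commercial")

# BE-Educational style: keep paragraphs with educational quality score >= 3.
educational = ds.filter(lambda ex: ex["educational_score"] >= 3)

# BE-Clinical style: repeat clinical-domain paragraphs 10x in the mix.
clinical = ds.filter(lambda ex: ex["domain"] == "clinical")
upsampled = concatenate_datasets([ds] + [clinical] * 9)

# BE-Prefix style: prepend predicted annotations to each paragraph.
def add_prefix(ex):
    prefix = f"[type: {ex['document_type']}] [domain: {ex['domain']}] "
    return {"text": prefix + ex["text"]}

prefixed = ds.map(add_prefix)
```
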
### Performance Results

**SOTA Models for reference**

| Model        | MedQA | MedMCQA | PubMedQA | Anat  | Clin  | Bio   | Med   | Gen   | Prof  | Avg   |
| ------------ | :---: | :-----: | :------: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| Llama-3-8B   | 59.70 |  57.47  |  74.80   | 68.89 | 74.72 | 78.47 | 61.85 | 83.00 | 70.22 | 69.90 |
| Meditron-70B | 57.10 |  46.80  |  76.60   | 53.30 | 66.70 | 76.30 | 63.00 | 69.00 | 71.60 | 64.49 |

**Benchmark Results by Dataset Variant (continual pre-training of OLMo2-7B-stage1)**

| Variant         | MedQA     | MedMCQA   | PubMedQA  | Anat      | Clin      | Bio       | Med       | Gen       | Prof      | Avg       |
| --------------- | :-------: | :-------: | :-------: | :-------: | :-------: | :-------: | :-------: | :-------: | :-------: | :-------: |
| OLMo2-7B-stage1 | 45.33     | 41.14     | 75.60     | 54.81     | 63.40     | 69.44     | 53.18     | 69.00     | 59.93     | 59.09     |
| BE-Base         | 44.85     | 41.91     | 76.40     | 57.04     | 64.15     | **70.83** | **59.54** | 69.00     | 59.93     | 60.41     |
| BE-Clinical     | 41.95     | 39.35     | 76.60     | 53.33     | 63.40     | 65.28     | 58.38     | 66.00     | **63.97** | 58.70     |
| BE-ClinicalCase | 42.11     | 39.52     | 76.60     | 57.04     | 64.91     | 66.67     | **59.54** | 69.00     | 62.87     | 59.81     |
| BE-Prefix       | 45.72     | 41.76     | **77.80** | 57.04     | 64.53     | 68.75     | 57.23     | 66.00     | 61.76     | 60.07     |
| BE-Educational  | 45.64     | **43.08** | 77.00     | 57.04     | 65.28     | 68.06     | 56.65     | **71.00** | 58.82     | 60.29     |
| BE-All          | **47.21** | 42.79     | 76.60     | **60.00** | **65.66** | 68.06     | 58.96     | 69.00     | 61.40     | **61.08** |

*Note: The first three columns represent medical QA benchmarks. The following six (Anat, Clin, Bio, Med, Gen, Prof) are sub-tasks from MMLU Medical: Anat=Anatomy, Clin=Clinical Knowledge, Bio=College Biology, Med=College Medicine, Gen=Medical Genetics, Prof=Professional Medicine.*

### Results Analysis

**Overall performance.** BE-All achieved the highest average performance across benchmarks at 61.08%, surpassing BE-Base (60.41%) by a small but consistent margin (+0.67 pts, see table above). Its strongest improvements appeared in MedQA (47.21%), MMLU Anatomy (60.00%), and Clinical Knowledge (65.66%), suggesting the effectiveness of combining multiple targeted enrichment strategies.

**Clinical enrichment.** Clinical enrichment (BE-Clinical) significantly boosted performance on the MMLU Professional Medicine benchmark (63.97%, +4.04 pts vs. BE-Base, Figure 2). This improvement was stable from early in training, highlighting how clinical narratives efficiently enhance the model's clinical reasoning abilities.

**Educational filtering.** Educational filtering (BE-Educational) consistently improved performance on medical question-answering tasks, notably Medical Genetics (71.00%, +2 pts), MedMCQA (43.08%, +1.17 pts), and PubMedQA (77.00%, +0.6 pts). These tasks likely benefit from the knowledge present in educationally high-quality paragraphs (Figure 2).

**Metadata prefixing.** Metadata prefixing (BE-Prefix) specifically improved performance on PubMedQA (77.80%, +1.4 pts vs. BE-Base). Providing explicit paragraph-level metadata helped primarily with structured document comprehension, but it had limited benefits for other tasks.

**General biomedical knowledge trade-off.** BE-Base performed better on College Biology (70.83%) than the other variants. Building a biology-focused variant (BE-Bio) could be an interesting future direction, as the current dataset does not specifically target this domain.

**Non-English enrichment.** BE-French showed clear improvements in French medical QA (FrenchMedMCQA), achieving 40.5% accuracy and significantly surpassing BE-Base and the OLMo2-7B-stage1 baseline (38.32%, Figure 1). These results illustrate effective adaptation to non-English content without modifying the underlying model architecture.

**Data efficiency and training stability.** As shown in Figure 2, BE-All reached robust benchmark performance using roughly one-third of the tokens required by BE-Base. Individual enrichments (Educational, Clinical) also displayed early and stable improvements, underscoring potential reductions in training time and computational cost.

## Licensing

The Biomed-Enriched annotations (document type, domain, educational quality scores, and metadata) are released under the MIT License.

Licensing of the textual content depends on the individual article licenses from PubMed Central Open Access. Each paragraph includes a `license_url` field pointing to its specific license, and users must comply with the respective license terms when using the textual data.

## How to Cite

Please cite Biomed-Enriched using:

```
[coming soon]
```