Update README.md
README.md CHANGED
@@ -70,7 +70,7 @@ language:
 
 ### Overview
 
-Biomed-Enriched is a PubMed-derived dataset created using a two-stage annotation process. Initially, [Llama 3.1 70B Instruct](https://huggingface.co/meta-llama/Llama-3.1-70B-Instruct) annotated 400K paragraphs for document type, domain, and educational quality. These annotations were then used to fine-tune a smaller model, which propagated the labels across the entire PubMed Central Open Access corpus. This process yielded 2M clinical case paragraphs, with over 450K high-quality paragraphs licensed for commercial use. This dataset provides a large-scale, openly available alternative to private clinical text. In continual pre-training experiments with OLMo2, curated subsets showed targeted improvements: clinical upsampling boosted MMLU ProfMed scores by ~5%, and educational quality filtering improved MedQA and MedMCQA by ~1%. Combining these methods matched the performance of standard continual pre-training with just one-third of the training tokens, highlighting the potential for more efficient biomedical pretraining.
+Biomed-Enriched is a PubMed-derived dataset created using a two-stage annotation process. Initially, [Llama 3.1 70B Instruct](https://huggingface.co/meta-llama/Llama-3.1-70B-Instruct) annotated 400K paragraphs for document type, domain, and educational quality. These annotations were then used to fine-tune a [smaller model](https://huggingface.co/almanach/Biomed-Enriched-classifier), which propagated the labels across the entire PubMed Central Open Access corpus. This process yielded 2M clinical case paragraphs, with over 450K high-quality paragraphs licensed for commercial use. This dataset provides a large-scale, openly available alternative to private clinical text. In continual pre-training experiments with OLMo2, curated subsets showed targeted improvements: clinical upsampling boosted MMLU ProfMed scores by ~5%, and educational quality filtering improved MedQA and MedMCQA by ~1%. Combining these methods matched the performance of standard continual pre-training with just one-third of the training tokens, highlighting the potential for more efficient biomedical pretraining.
 
 The dataset is structured into two primary splits:
 
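For readers who want to try the dataset described in the README above, here is a minimal loading sketch. The dataset id (`almanach/Biomed-Enriched`) and the `train` split name are assumptions inferred from this README and the linked classifier repo, not confirmed by the diff itself; check the dataset card for the exact identifiers.

```python
from datasets import load_dataset

# Hypothetical loading sketch: the dataset id and split name below are
# assumptions inferred from this README; consult the dataset card for
# the actual values. streaming=True avoids downloading the full
# PMC-scale corpus up front.
ds = load_dataset("almanach/Biomed-Enriched", split="train", streaming=True)

# Inspect a few records to see the annotation fields described above
# (document type, domain, educational quality).
for i, example in enumerate(ds):
    print(example)
    if i >= 2:
        break
```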
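The label-propagation step mentioned in the overview uses the newly linked classifier. Below is a hypothetical sketch of running it on a single paragraph, assuming the checkpoint loads as a standard `transformers` sequence-classification model; the model card should confirm the actual architecture and label set.

```python
from transformers import pipeline

# Assumption: almanach/Biomed-Enriched-classifier is a standard
# sequence-classification checkpoint compatible with the
# text-classification pipeline. Verify against the model card.
classifier = pipeline(
    "text-classification",
    model="almanach/Biomed-Enriched-classifier",
)

# Example paragraph (invented for illustration).
paragraph = (
    "A 54-year-old man presented with acute chest pain "
    "radiating to the left arm."
)
print(classifier(paragraph))
```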