To download
- wget --header="Authorization: Bearer <HF_TOKEN>" "https://huggingface.co/datasets/bigcode/the-stack/resolve/main/data/actionscript/train-00000-of-00002.parquet?download=true" (<HF_TOKEN> is your HF access token; see the loop sketch at the end of this list)
- hfd.sh from https://gist.github.com/padeoe/697678ab8e528b85a2a7bddafea1fa4f
the-stack-dedup
the-stack-smol
- need to accept agreement on HF
- bash hfd.sh bigcode/the-stack-smol --hf_username <username> --hf_token <token> --tool aria2c -x 8 --dataset
- 3 GB in 1.5 minutes
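For the wget route, a loop over the shards works. A minimal sketch, assuming HF_TOKEN is set to a token from an account that has accepted the agreement (shard count taken from the file name above):

# fetch all actionscript shards of the-stack with an auth header
for i in 00000 00001; do
  wget --header="Authorization: Bearer ${HF_TOKEN}" \
    -O "train-${i}-of-00002.parquet" \
    "https://huggingface.co/datasets/bigcode/the-stack/resolve/main/data/actionscript/train-${i}-of-00002.parquet?download=true"
done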
GitHub, Books, Wikipedia, StackExchange, ArXiv
- from RPv1 (RedPajama v1)
- English wikipedia from https://huggingface.co/datasets/wikimedia/wikipedia
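huggingface-cli is an alternative to hfd.sh for this one. A sketch, assuming the 20231101.en dump name (check the repo for the current one) and an arbitrary local target dir:

# pull only the English dump files from wikimedia/wikipedia
huggingface-cli download wikimedia/wikipedia \
  --repo-type dataset \
  --include "20231101.en/*" \
  --local-dir ./wikipedia_en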
hf c4/realnewslike
- git lfs:
GIT_LFS_SKIP_SMUDGE=1 git clone https://huggingface.co/datasets/allenai/c4
cd c4
git lfs pull --include "realnewslike/*"
- better: aria2c (hfd_c4.sh)
- looked at hfd.sh, at the part after
printf "\nStart Downloading lfs files, bash script:\n"
- so after the GIT_LFS_SKIP_SMUDGE=1 clone and cd, get the files to fetch with e.g. git lfs ls-files -I "realnewslike/*", then reuse the later (download) part of hfd.sh - see the sketch below
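A minimal sketch of that flow, assuming aria2c is installed; the resolve/main URL pattern is the standard HF one, and awk '{print $3}' assumes the usual "oid - path" output of git lfs ls-files:

# clone pointers only, then fetch realnewslike files over HTTP with aria2c
GIT_LFS_SKIP_SMUDGE=1 git clone https://huggingface.co/datasets/allenai/c4
cd c4
git lfs ls-files -I "realnewslike/*" | awk '{print $3}' | while read -r f; do
  # 8 connections per file, keep the repo-relative path
  aria2c -x 8 -d "$(dirname "$f")" -o "$(basename "$f")" \
    "https://huggingface.co/datasets/allenai/c4/resolve/main/$f"
done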
Amber
- Wikipedia is 2%
- This is based on RPv1
- the CC subset of RPv1 is replaced with RefinedWeb (I think only a subset of RefinedWeb is open-sourced)
- StarCoderData for code
- Procedure here https://github.com/LLM360/amber-data-prep
Alibaba Data Juicer
- filtered RPv1 and other subsets available
- also CC snapshots available, deduped with SimHash (why use RPv2 then?)
- recipes: https://github.com/alibaba/data-juicer/blob/main/configs/data_juicer_recipes/README.md
- RedPajama reproduced, with some token counts: https://github.com/alibaba/data-juicer/tree/main/configs/reproduced_redpajama
Semantic Scholar peS2o (one file, s2orc val set)
- 51K -> 40K docs, but most were removed by the average and maximum line length filters
- example trace: /home1/BharatGPT_Data/data-juicer/demos/process_sci_data/outputs/trace/filter-maximum_line_length_filter.jsonl
- the removed papers seem OK (see the spot-check below)
- (I guess: use the arxiv recipe, or create our own config for Semantic Scholar)
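To spot-check what the max line length filter threw away (assumes jq is installed and that the samples use data-juicer's default text key):

# print the first 300 chars of the first 3 removed papers
head -n 3 /home1/BharatGPT_Data/data-juicer/demos/process_sci_data/outputs/trace/filter-maximum_line_length_filter.jsonl | jq -r '.text[:300]'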
Issue: the Hugging Face cache defaults to the home directory
- use the ds_cache_dir flag (overrides the config via a command-line arg; example below)
- this DID NOT WORK with mixture... fixed by setting HF_HOME (and maybe TRANSFORMERS_CACHE too) - mixture demo
I've put these in ~/.bashrc
# cache home
export DATA_JUICER_CACHE_HOME="/home1/BharatGPT_Data/data-juicer/.cache"
# cache models
export DATA_JUICER_MODELS_CACHE="/home1/BharatGPT_Data/data-juicer/.cache/models"
# cache assets
export DATA_JUICER_ASSETS_CACHE="/home1/BharatGPT_Data/data-juicer/.cache/assets"
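Putting it together - a sketch; the config path is hypothetical, and the env vars follow the note above:

# single dataset: the command-line override is enough
python tools/process_data.py --config configs/demo/process.yaml \
  --ds_cache_dir /home1/BharatGPT_Data/data-juicer/.cache

# mixture demo: the flag did not take effect, so set the HF env vars instead
export HF_HOME=/home1/BharatGPT_Data/data-juicer/.cache/hf
export TRANSFORMERS_CACHE=/home1/BharatGPT_Data/data-juicer/.cache/hf  # maybe needed too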
important files
- /home1/BharatGPT_Data/data-juicer/tools/process_data.py
- /home1/BharatGPT_Data/data-juicer/tools/postprocess/data_mixture.py
- both of these are used with their respective demos
- mixer code
- /home1/BharatGPT_Data/data-juicer/data_juicer/format/mixture_formatter.py
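I haven't pinned down the mixer's exact CLI flags, so check --help first; the weight-before-path convention is what mixture_formatter.py parses for dataset_path (the weights below are made up):

# inspect the actual flags
python tools/postprocess/data_mixture.py --help
# mixture spec convention (weight before each path), e.g.:
#   '0.6 wiki.jsonl 0.4 code.jsonl'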