---
license: mit
pipeline_tag: text-generation
library_name: transformers
language: [
  'en', 'am', 'ar', 'as', 'az', 'be', 'bg', 'bn', 'br', 'bs', 'ca', 'cs', 'cy', 'da',
  'de', 'el', 'eo', 'es', 'et', 'eu', 'fa', 'ff', 'fi', 'fr', 'fy', 'ga', 'gd', 'gl',
  'gn', 'gu', 'ha', 'he', 'hi', 'hr', 'ht', 'hu', 'hy', 'id', 'ig', 'is', 'it', 'ja',
  'jv', 'ka', 'kk', 'km', 'kn', 'ko', 'ku', 'ky', 'la', 'lg', 'li', 'ln', 'lo', 'lt',
  'lv', 'mg', 'mk', 'ml', 'mn', 'mr', 'ms', 'my', 'ne', 'nl', 'no', 'ns', 'om', 'or',
  'pa', 'pl', 'ps', 'pt', 'qu', 'rm', 'ro', 'ru', 'sa', 'si', 'sc', 'sd', 'sk', 'sl',
  'so', 'sq', 'sr', 'ss', 'su', 'sv', 'sw', 'ta', 'te', 'th', 'tl', 'tn', 'tr', 'ug',
  'uk', 'ur', 'uz', 'vi', 'wo', 'xh', 'yi', 'yo', 'zu',
]
datasets:
  # core - base
  - ontocord/fineweb-permissive-multilingual-2m
  - distily/c4_multilingual_1M
  - data-silence/sumnews
  - xu-song/cc100-samples
  - badrex/llm-emoji-dataset
  - fblgit/simple-math
  - Gusarich/math-expressions-1m
  - neuralwork/arxiver
  - christopher/rosetta-code
  - nampdn-ai/tiny-codes
  - JeanKaddour/minipile
  # core - instruct
  - NousResearch/hermes-function-calling-v1
  - simplescaling/s1K-1.1
  # base - instruct
  - mlabonne/open-perfectblend
  - allenai/tulu-3-sft-mixture
  - rombodawg/Everything_Instruct_Multilingual
  # base - reason
  - open-r1/OpenR1-Math-220k
  - open-thoughts/OpenThoughts-114k
  - cognitivecomputations/dolphin-r1
  - simplescaling/s1K-1.1
tags:
  - chat
  - core
  - base
  - instruct
  - reason
---

# tangled-alpha-0.1-core

![logo](./misc/logo.jpg)

Prepare the core pretraining datasets:

```bash
time python -B prepare_core_datasets.py
```

```
Progress: 100%|████████| 220/220 [23:15<00:00, 6.34s/it]
Workers are finished.
Finished data processing!
i=0, block_size=8192, chunk_size=16384000, len(dataset)=893355, len(dataset) * block_size=7318364160
Total number of tokens in the optimized dataset '../core-data-0-8192-2000' is 7318364160
```
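The reported total is simply the number of packed blocks multiplied by the block size; a quick sanity check of that arithmetic, with both values copied from the log above:

```python
# Sanity check of the reported dataset size: tokens are packed into
# fixed-length blocks, so total tokens = number of blocks * block size.
# Both values are taken from the prepare_core_datasets.py output above.
block_size = 8192        # tokens per packed block
num_blocks = 893_355     # len(dataset) after packing

total_tokens = num_blocks * block_size
print(f"{total_tokens:,}")               # 7,318,364,160
assert total_tokens == 7_318_364_160     # matches the reported total
```

Pretrain the core model: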
```bash
CUDA_VISIBLE_DEVICES=0 CUDA_LAUNCH_BLOCKING=0 PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True litgpt pretrain --config pretrain-core-model.yaml
```

```
Seed set to 23
Time to instantiate model: 0.23 seconds.
Total parameters: 138,084,864
Verifying settings ...
Measured TFLOPs: 6972.54
Epoch 1 | iter 256 step 1 | loss train: 10.530, val: n/a | iter time: 3230.47 ms (step) remaining time: 3 days, 5:34:13
Epoch 1 | iter 512 step 2 | loss train: 10.520, val: n/a | iter time: 589.19 ms (step) remaining time: 3 days, 0:40:40
Epoch 1 | iter 768 step 3 | loss train: 10.485, val: n/a | iter time: 591.81 ms (step) remaining time: 2 days, 23:01:54
Epoch 1 | iter 1024 step 4 | loss train: 10.447, val: n/a | iter time: 589.35 ms (step) remaining time: 2 days, 22:11:32
Epoch 1 | iter 1280 step 5 | loss train: 10.350, val: n/a | iter time: 589.38 ms (step) remaining time: 2 days, 21:40:13
Epoch 1 | iter 1536 step 6 | loss train: 10.241, val: n/a | iter time: 593.75 ms (step) remaining time: 2 days, 21:18:19
Epoch 1 | iter 1792 step 7 | loss train: 10.134, val: n/a | iter time: 592.92 ms (step) remaining time: 2 days, 21:01:58
Epoch 1 | iter 2048 step 8 | loss train: 10.049, val: n/a | iter time: 590.74 ms (step) remaining time: 2 days, 20:49:12
Epoch 1 | iter 2304 step 9 | loss train: 9.869, val: n/a | iter time: 594.27 ms (step) remaining time: 2 days, 20:39:10
Epoch 1 | iter 2560 step 10 | loss train: 9.771, val: n/a | iter time: 590.04 ms (step) remaining time: 2 days, 20:30:14
Epoch 1 | iter 2816 step 11 | loss train: 9.643, val: n/a | iter time: 588.32 ms (step) remaining time: 2 days, 20:22:22
Epoch 1 | iter 3072 step 12 | loss train: 9.557, val: n/a | iter time: 588.95 ms (step) remaining time: 2 days, 20:15:26
Epoch 1 | iter 3328 step 13 | loss train: 9.487, val: n/a | iter time: 589.32 ms (step) remaining time: 2 days, 20:09:05
Epoch 1 | iter 3584 step 14 | loss train: 9.413, val: n/a | iter time: 588.95 ms (step) remaining time: 2 days, 20:03:24
Epoch 1 | iter 3840 step 15 | loss train: 9.322, val: n/a | iter time: 591.62 ms (step) remaining time: 2 days, 19:58:18
Epoch 1 | iter 4096 step 16 | loss train: 9.241, val: n/a | iter time: 593.65 ms (step) remaining time: 2 days, 19:53:30
Epoch 1 | iter 4352 step 17 | loss train: 9.163, val: n/a | iter time: 593.89 ms (step) remaining time: 2 days, 19:49:00
Epoch 1 | iter 4608 step 18 | loss train: 9.122, val: n/a | iter time: 590.63 ms (step) remaining time: 2 days, 19:44:42
Epoch 1 | iter 4864 step 19 | loss train: 9.077, val: n/a | iter time: 590.87 ms (step) remaining time: 2 days, 19:40:47
Epoch 1 | iter 5120 step 20 | loss train: 9.018, val: n/a | iter time: 588.44 ms (step) remaining time: 2 days, 19:36:59
# ...
```

Back up the `wandb` run directory:

```bash
mv wandb wandb-pretrain-core
```

Chat with the model:

```bash
CUDA_VISIBLE_DEVICES=0 CUDA_LAUNCH_BLOCKING=0 PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True litgpt chat ../out/pretrain-core/final
```

Evaluate the model:

```bash
CUDA_VISIBLE_DEVICES=0 CUDA_LAUNCH_BLOCKING=0 PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True time litgpt evaluate --tasks 'leaderboard' --out_dir '../evaluate/pretrain-core/leaderboard/' --batch_size 1 --dtype 'bfloat16' '../out/pretrain-core/final'
```

```
# ...
```
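Besides the interactive `litgpt chat` session above, the checkpoint can also be queried programmatically. A minimal sketch using litgpt's high-level `LLM` API, assuming a litgpt version that ships it; the prompt and `max_new_tokens` value are placeholders:

```python
from litgpt import LLM

# Load the final checkpoint produced by `litgpt pretrain` above.
llm = LLM.load("../out/pretrain-core/final")

# Placeholder prompt; max_new_tokens is an arbitrary example value.
print(llm.generate("The quick brown fox", max_new_tokens=32))
```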