---
dataset_info:
  features:
  - name: depth
    dtype: int64
  - name: width
    dtype: int64
  - name: tokens
    dtype: int64
  - name: FLOPs_per_token
    dtype: float64
  - name: FLOPs
    dtype: float64
  - name: params
    dtype: float64
  - name: params_with_embeds
    dtype: float64
  - name: FLOPs_6N
    dtype: float64
  - name: params_pred_loss
    dtype: float64
  - name: wd_ratio
    dtype: float64
  - name: wd_pred_loss
    dtype: float64
  - name: bucket
    dtype: string
  splits:
  - name: train
    num_bytes: 1772
    num_examples: 13
  download_size: 6825
  dataset_size: 1772
configs:
- config_name: default
  data_files:
  - split: train
    path: mins_1e-3/mins_lr_ablation_hot_width_depth_params_relaxed_params/train-*
license: mit
---

This dataset is my cache for the [scaling laws code](https://github.com/mcleish7/gemstone-scaling-laws) for the [Gemstone models](https://huggingface.co/collections/tomg-group-umd/gemstone-models-679408ee3f19f1d4d00e8b10).

The `data_cache` directory holds the approach 3 data cache, with the mins for `delta=1e-4`; the mins for `delta=1e-3` are in `mins_1e-3`.
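If you want to see which cached tables exist under each directory before downloading anything, a minimal sketch using `huggingface_hub` (the `mins_1e-3/` prefix filter is just one example; swap in `data_cache/` for the `delta=1e-4` mins):

```
from huggingface_hub import list_repo_files

# List every file in the dataset repo, then keep only the delta=1e-3 mins.
files = list_repo_files("smcleish/scaling-laws-cache", repo_type="dataset")
mins_files = [f for f in files if f.startswith("mins_1e-3/")]
print("\n".join(mins_files))
```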
This is the code I used to upload it:
```
import pandas as pd
from datasets import Dataset
import os
import gc


def get_data_dict(path):
    contents = os.listdir(path)

    ds_store = {}
    for file in contents:
        gc.collect()
        df = pd.read_parquet(f"{path}{file}")
        # Cast interval columns to strings so Dataset.from_pandas can
        # convert the frame to an Arrow table.
        for col in df.columns:
            if pd.api.types.is_interval_dtype(df[col]):
                df[col] = df[col].astype(str)

        hf_dataset = Dataset.from_pandas(df)
        ds_store[file.replace(".parquet", "")] = hf_dataset
        # Each parquet file becomes its own directory in the repo,
        # e.g. mins_1e-3/<table_name>/train-*.parquet.
        hf_dataset.push_to_hub(
            "smcleish/scaling-laws-cache",
            private=True,
            data_dir=path.split("/")[1] + "/" + file.replace(".parquet", ""),
        )
        gc.collect()

    return ds_store


ds_1 = get_data_dict("plotters/data_cache/")
ds_2 = get_data_dict("plotters/mins_1e-3/")
```
To download it, do the opposite of this. The cache is very large, so you may want to target only the specific files you need. The approach 3 code expects pandas `.parquet` files.
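As a minimal sketch of the reverse direction (the table name below is just the one from the default config; substitute whichever table you need), you can pull a single table and write it back out as a pandas `.parquet` file:

```
from datasets import load_dataset

# Download one cached table and restore it to the pandas .parquet
# format the approach 3 code expects.
ds = load_dataset(
    "smcleish/scaling-laws-cache",
    data_files="mins_1e-3/mins_lr_ablation_hot_width_depth_params_relaxed_params/train-*",
    split="train",
)
ds.to_pandas().to_parquet(
    "mins_lr_ablation_hot_width_depth_params_relaxed_params.parquet"
)
```

Note that any interval columns were cast to strings on upload, so they come back as strings rather than pandas intervals.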
Please open a discussion with any questions, as this is currently very experimental.