Fantastic Pretraining Optimizers and Where to Find Them
Paper • 2509.02046 • Published
soape300m6B

| Hyperparameter | Value |
|---|---|
| beta1 | 0.95 |
| beta2 | 0.99 |
| block_size | 512 |
| epsilon | 1e-10 |
| learning_rate | 0.008 |
| max_grad_norm | 1 |
| min_lr_ratio | 0 |
| partition_grads_into_blocks | True |
| precondition_frequency | 10 |
| shampoo_beta | 0.9 |
| train_batch_size | 128 |
| warmup | 1000 |
| weight_decay | 0.1 |
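The table above can be captured as a plain config dictionary. The sketch below also illustrates one common reading of the `warmup` and `min_lr_ratio` fields: linear warmup to the peak learning rate followed by cosine decay down to `min_lr_ratio * learning_rate`. That schedule shape, the `lr_at` helper, and the total-step count are assumptions for illustration, not details confirmed by this excerpt.

```python
import math

# soape300m6B hyperparameters, transcribed from the table above.
CONFIG = {
    "beta1": 0.95,
    "beta2": 0.99,
    "block_size": 512,
    "epsilon": 1e-10,
    "learning_rate": 0.008,
    "max_grad_norm": 1,
    "min_lr_ratio": 0,
    "partition_grads_into_blocks": True,
    "precondition_frequency": 10,
    "shampoo_beta": 0.9,
    "train_batch_size": 128,
    "warmup": 1000,
    "weight_decay": 0.1,
}

def lr_at(step: int, total_steps: int, cfg: dict = CONFIG) -> float:
    """Hypothetical schedule: linear warmup, then cosine decay to the floor."""
    peak = cfg["learning_rate"]
    floor = cfg["min_lr_ratio"] * peak  # 0.0 here, since min_lr_ratio = 0
    if step < cfg["warmup"]:
        # Ramp linearly from ~0 up to the peak over `warmup` steps.
        return peak * (step + 1) / cfg["warmup"]
    # Cosine decay from peak to floor over the remaining steps.
    progress = (step - cfg["warmup"]) / max(1, total_steps - cfg["warmup"])
    return floor + 0.5 * (peak - floor) * (1 + math.cos(math.pi * progress))

print(lr_at(999, 10_000))     # last warmup step: peak LR, 0.008
print(lr_at(10_000, 10_000))  # final step: decayed to the floor, 0.0
```

With `min_lr_ratio = 0` the schedule decays all the way to zero; a nonzero ratio would instead hold a fraction of the peak rate at the end of training.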