Predicting Task Performance with Context-aware Scaling Laws
Abstract
A framework models downstream performance of large language models as a function of training compute and context, offering insights into efficient design for long-context tasks.
Scaling laws have transformed our understanding of large language models by linking upstream metrics like cross-entropy loss to design factors such as model size, training data, and compute. However, these conventional laws fail to capture downstream task performance, where context plays a critical role. In this work, we propose a straightforward, interpretable framework that jointly models downstream performance as a function of the training compute and the provided context. We empirically validate our framework by fitting it to the observed downstream performance of extended-context variants of Llama-2-7B and Llama-2-13B across 65,500 unique instances spanning three tasks: arithmetic reasoning, common sense reasoning, and machine translation. Our results demonstrate that our framework accurately models in-distribution downstream performance, generalizes across three orders of magnitude in training compute, and reliably extrapolates performance as the amount of context increases. These findings offer valuable insights into the interplay between training compute and context utilization, providing guidance for designing more efficient long-context LLMs for diverse downstream tasks. Our code is available at https://github.com/wang-research-lab/context-scaling.
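As a rough illustration of how such a joint scaling law can be fit in practice, the sketch below regresses downstream accuracy on training compute and context size with SciPy. The saturating power-law form, the toy observations, and the `predicted_accuracy` helper are assumptions made for this example only, not the parameterization proposed in the paper.

```python
# Minimal sketch: fit a hypothetical joint (compute, context) scaling law.
# Assumed form: accuracy approaches a ceiling acc_max, with shifted power-law
# deficits in training compute C (FLOPs) and context size k (# demonstrations).
import numpy as np
from scipy.optimize import curve_fit

def predicted_accuracy(X, acc_max, a, alpha, b, beta):
    """Illustrative functional form, not the paper's parameterization."""
    C, k = X  # training compute, context size
    return acc_max - a * (C / 1e21) ** (-alpha) - b * (k + 1.0) ** (-beta)

# Toy observations: (compute, context size) -> downstream accuracy.
C_obs = np.array([1e21, 1e21, 1e22, 1e22, 1e23, 1e23])
k_obs = np.array([0.0, 8.0, 0.0, 8.0, 0.0, 8.0])
acc_obs = np.array([0.31, 0.42, 0.38, 0.51, 0.44, 0.58])

params, _ = curve_fit(
    predicted_accuracy,
    (C_obs, k_obs),
    acc_obs,
    p0=[0.9, 0.3, 0.3, 0.3, 0.5],  # rough initial guesses
    maxfev=20000,
)

# Extrapolate to a longer context at a fixed compute budget.
print(predicted_accuracy((np.array([1e22]), np.array([32.0])), *params))
```

Once fit, the same curve can be queried at larger context sizes or compute budgets, mirroring the kind of extrapolation the abstract describes.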
Community
The paper extends traditional scaling laws by jointly modeling downstream task performance as a function of both training compute and context length (e.g., number of in-context demonstrations). Empirical evaluation on extended-context Llama-2 variants across arithmetic reasoning, common sense reasoning, and machine translation tasks shows that the framework fits observed performance and generalizes across orders of magnitude in training compute and to longer contexts.
The following similar papers were recommended by the Semantic Scholar API (automated message from Librarian Bot):
- xLSTM Scaling Laws: Competitive Performance with Linear Time-Complexity (2025)
- Optimal Sparsity of Mixture-of-Experts Language Models for Reasoning Tasks (2025)
- LAWCAT: Efficient Distillation from Quadratic to Linear Attention with Convolution across Tokens for Long Context Modeling (2025)
- Pretraining Scaling Laws for Generative Evaluations of Language Models (2025)
- Predicting LLM Reasoning Performance with Small Proxy Model (2025)
- Mid-Training of Large Language Models: A Survey (2025)
- Revisiting Long-context Modeling from Context Denoising Perspective (2025)