| ideaname | field | subfield | year | url |
|---|---|---|---|---|
| Powerful Knockoffs via Minimizing Reconstructability | Mathematics | Statistics | 2020 | https://arxiv.org/abs/2011.14625 |
| Controlled Discovery and Localization of Signals via Bayesian Linear Programming | Mathematics | Statistics | 2022 | https://arxiv.org/abs/2203.17208 |
| Conformal Prediction With Conditional Guarantees | Mathematics | Statistics | 2023 | https://arxiv.org/abs/2305.12616 |
| Mosaic inference on panel data | Mathematics | Statistics | 2025 | https://arxiv.org/abs/2506.03599 |
| Chiseling: Powerful and Valid Subgroup Selection via Interactive Machine Learning | Mathematics | Statistics | 2025 | https://arxiv.org/abs/2509.19490 |
| The mosaic permutation test: an exact and nonparametric goodness-of-fit test for factor models | Mathematics | Statistics | 2024 | https://arxiv.org/abs/2404.15017 |
| DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning | Computer Science | Artificial Intelligence | 2025 | https://arxiv.org/abs/2501.12948 |
| Synthetic continued pretraining | Computer Science | Artificial Intelligence | 2024 | https://arxiv.org/abs/2409.07431 |
| Synthetic bootstrapped pretraining | Computer Science | Artificial Intelligence | 2025 | https://arxiv.org/abs/2509.15248 |
| Bellman Conformal Inference: Calibrating Prediction Intervals For Time Series | Computer Science | Artificial Intelligence | 2024 | https://arxiv.org/abs/2402.05203 |
| Language Models with Conformal Factuality Guarantees | Computer Science | Artificial Intelligence | 2024 | https://arxiv.org/abs/2402.10978 |
| STP: Self-play LLM Theorem Provers with Iterative Conjecturing and Proving | Computer Science | Artificial Intelligence | 2025 | https://arxiv.org/abs/2502.00212 |
| Learning to (Learn at Test Time): RNNs with Expressive Hidden States | Computer Science | Artificial Intelligence | 2024 | https://arxiv.org/abs/2407.04620 |
| Quiet-STaR: Language Models Can Teach Themselves to Think Before Speaking | Computer Science | Artificial Intelligence | 2024 | https://arxiv.org/abs/2403.09629 |
| Training Language Models to Self-Correct via Reinforcement Learning | Computer Science | Artificial Intelligence | 2024 | https://arxiv.org/abs/2409.12917 |
| Ring Attention with Blockwise Transformers for Near-Infinite Context | Computer Science | Artificial Intelligence | 2023 | https://arxiv.org/abs/2310.01889 |
| How to Train Long-Context Language Models (Effectively) | Computer Science | Artificial Intelligence | 2024 | https://arxiv.org/abs/2410.02660 |