Thinking to Recall: How Reasoning Unlocks Parametric Knowledge in LLMs Paper • 2603.09906 • Published 4 days ago • 59
Fine-Grained Detection of Context-Grounded Hallucinations Using LLMs Paper • 2509.22582 • Published Sep 26, 2025 • 12
TabSTAR: A Foundation Tabular Model With Semantically Target-Aware Representations Paper • 2505.18125 • Published May 23, 2025 • 112
AdaptiVocab: Enhancing LLM Efficiency in Focused Domains through Lightweight Vocabulary Adaptation Paper • 2503.19693 • Published Mar 25, 2025 • 76
TokenVerse: Versatile Multi-concept Personalization in Token Modulation Space Paper • 2501.12224 • Published Jan 21, 2025 • 48
Are LLMs Better than Reported? Detecting Label Errors and Mitigating Their Effect on Model Performance Paper • 2410.18889 • Published Oct 24, 2024 • 15
GLEE: A Unified Framework and Benchmark for Language-based Economic Environments Paper • 2410.05254 • Published Oct 7, 2024 • 85
LLMs Know More Than They Show: On the Intrinsic Representation of LLM Hallucinations Paper • 2410.02707 • Published Oct 3, 2024 • 47
Does Fine-Tuning LLMs on New Knowledge Encourage Hallucinations? Paper • 2405.05904 • Published May 9, 2024 • 6