The Geometry of Reasoning: Flowing Logics in Representation Space
Abstract
We study how large language models (LLMs) "think" through their representation space. We propose a novel geometric framework that models an LLM's reasoning as flows: embedding trajectories that evolve as logical inference unfolds. We disentangle logical structure from semantics by employing the same natural-deduction propositions with varied semantic carriers, allowing us to test whether LLMs internalize logic beyond surface form. This perspective connects reasoning with geometric quantities such as position, velocity, and curvature, enabling formal analysis in representation and concept spaces. Our theory establishes that (1) LLM reasoning corresponds to smooth flows in representation space, and (2) logical statements act as local controllers of these flows' velocities. Using learned representation proxies, we design controlled experiments to visualize and quantify reasoning flows, providing empirical validation of our theoretical framework. Our work serves as both a conceptual foundation and a set of practical tools for studying reasoning phenomena, offering a new lens for interpretability and formal analysis of LLM behavior.
Community
🧩 The Geometry of Reasoning: Flowing Logics in Representation Space
🧠 How do LLMs “think”?
We introduce a geometric framework where reasoning emerges as smooth flows in representation space. Each step of a chain-of-thought traces a trajectory whose velocity and curvature are governed by logical structure, not surface semantics.
Using a multilingual, multi-topic formal-logic dataset, we show that while embedding positions encode semantics, velocity and curvature reveal logic-invariant reasoning dynamics: evidence that LLMs internalize logic emergently from data.
This framework bridges formal logic, geometry, and interpretability, offering quantitative tools for analyzing reasoning flows inside large models.
📄 Paper: https://arxiv.org/abs/2510.09782
💻 Code: https://github.com/MasterZhou1/Reasoning-Flow
#AI #LLM #Interpretability #Geometry #Reasoning
This is an automated message from the Librarian Bot. The following similar papers were recommended by the Semantic Scholar API:
- Steering Embedding Models with Geometric Rotation: Mapping Semantic Relationships Across Languages and Models (2025)
- REMA: A Unified Reasoning Manifold Framework for Interpreting Large Language Model (2025)
- Explainable Chain-of-Thought Reasoning: An Empirical Analysis on State-Aware Reasoning Dynamics (2025)
- Implicit Reasoning in Large Language Models: A Comprehensive Survey (2025)
- How Language Models Conflate Logical Validity with Plausibility: A Representational Analysis of Content Effects (2025)
- Rethinking Reasoning in LLMs: Neuro-Symbolic Local RetoMaton Beyond ICL and CoT (2025)
- Explain Before You Answer: A Survey on Compositional Visual Reasoning (2025)