arxiv:2510.09782

The Geometry of Reasoning: Flowing Logics in Representation Space

Published on Oct 10
· Submitted by YUFA ZHOU on Oct 15
Abstract

We study how large language models (LLMs) "think" through their representation space. We propose a novel geometric framework that models an LLM's reasoning as flows: embedding trajectories that evolve as the logic unfolds. We disentangle logical structure from semantics by employing the same natural deduction propositions with varied semantic carriers, allowing us to test whether LLMs internalize logic beyond surface form. This perspective connects reasoning with geometric quantities such as position, velocity, and curvature, enabling formal analysis in representation and concept spaces. Our theory establishes that (1) LLM reasoning corresponds to smooth flows in representation space, and (2) logical statements act as local controllers of these flows' velocities. Using learned representation proxies, we design controlled experiments to visualize and quantify reasoning flows, providing empirical validation of our theoretical framework. Our work serves as both a conceptual foundation and a set of practical tools for studying reasoning phenomena, offering a new lens for interpretability and formal analysis of LLM behavior.

Community

Paper author · Paper submitter

🧩 The Geometry of Reasoning: Flowing Logics in Representation Space

🧠 How do LLMs “think”?
We introduce a geometric framework where reasoning emerges as smooth flows in representation space. Each step of a chain-of-thought traces a trajectory whose velocity and curvature are governed by logical structure, not surface semantics.

Using a multilingual, multi-topic formal-logic dataset, we show that while positions of embeddings encode semantics, velocity & curvature reveal logic-invariant reasoning dynamics — evidence that LLMs internalize logic emergently from data.

This framework bridges formal logic, geometry, and interpretability, offering quantitative tools for analyzing reasoning flows inside large models.

📄 Paper: https://arxiv.org/abs/2510.09782
💻 Code: https://github.com/MasterZhou1/Reasoning-Flow

#AI #LLM #Interpretability #Geometry #Reasoning


